Artificial Intelligence & Machine Learning, Data Privacy, Data Security
Explainability, Cost, Compliance Drive AI Choices in Enterprises

Artificial intelligence has been democratized and is now widely accessible, but Sujatha S Iyer, head of security at ManageEngine – the IT management software division of Zoho Corp. – cautioned against overusing large language models.
“Not everything is an LLM problem just because it is the hype. AI is absolutely needed, LLM is absolutely useful. But the use cases that we see for LLMs in the enterprise landscape are more on summarization … more on content generation,” Iyer said.
When Black Boxes Won’t Cut It
In critical scenarios such as predicting outages or detecting fraud, explainability is key. “If my enterprise software is going to tell me there is an 80% chance of an outage … there has to be some explanation,” Iyer said. Traditional models can point to clear causes, such as spikes in website load or servers nearing capacity – insights that help leaders act quickly and with confidence.
The regulatory landscape increasingly demands this transparency. Financial institutions using AI for credit scoring or fraud detection face particularly stringent requirements, where explainable AI has become not just beneficial but imperative for compliance.
In the banking sector, for example, explainable AI solutions help compliance teams understand why alerts were triggered, enabling faster triage and more effective investigations.
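The pattern Iyer describes can be sketched in a few lines. The following is a hypothetical rules-based outage predictor that returns a risk score together with the reasons that produced it – the kind of built-in explanation a black-box model cannot easily give. The thresholds and metric names are illustrative assumptions, not ManageEngine's actual logic.

```python
def predict_outage(metrics: dict) -> tuple[float, list[str]]:
    """Return an outage risk score in [0, 1] plus the contributing reasons."""
    score = 0.0
    reasons = []
    if metrics.get("website_load_rps", 0) > 5000:   # traffic spike
        score += 0.4
        reasons.append("spike in website load")
    if metrics.get("cpu_utilization", 0) > 0.9:     # server near capacity
        score += 0.3
        reasons.append("CPU near capacity")
    if metrics.get("error_rate", 0) > 0.05:         # failing requests
        score += 0.3
        reasons.append("elevated error rate")
    return min(score, 1.0), reasons

risk, why = predict_outage(
    {"website_load_rps": 7200, "cpu_utilization": 0.95, "error_rate": 0.01}
)
print(f"{risk:.0%} outage risk: {', '.join(why)}")
# → 70% outage risk: spike in website load, CPU near capacity
```

Every point added to the score is traceable to a named condition, so the prediction and its justification arrive together – which is what makes triage fast.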
The GPU Tax
Cost is another deciding factor driving enterprises toward traditional approaches. “You don’t want to incur a GPU tax for every inference that you’ve done. It’s going to be costly. And someone has to foot the bill,” Iyer said. “Why do you want the customer to foot the GPU tax for something that you can actually solve using a traditional machine-learning technique?”
The numbers support this concern. Compute costs represent an estimated 55% to 60% of OpenAI’s total $9 billion operating expenses in 2024. The “Nvidia tax” – where hyperscalers pay $20,000 to more than $35,000 per GPU unit, which costs Nvidia just $3,000 to $5,000 to manufacture – creates significant operational expenses for LLM deployment.
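A back-of-envelope check of these figures – using only the article's own estimates, not audited numbers – shows the scale of the spend and the implied per-GPU margin:

```python
# Compute's estimated share of OpenAI's 2024 operating expenses.
opex_total = 9e9                      # ~$9B total opex (article's figure)
compute_share = (0.55, 0.60)          # estimated 55%-60% of opex
compute_cost = tuple(s * opex_total for s in compute_share)
print(f"Compute spend: ${compute_cost[0]/1e9:.2f}B-${compute_cost[1]/1e9:.2f}B")

# Implied Nvidia gross margin per GPU at the quoted price/cost ranges.
gpu_price = (20_000, 35_000)          # hyperscaler price per unit
gpu_cost = (3_000, 5_000)             # estimated manufacturing cost
margin_low = (gpu_price[0] - gpu_cost[1]) / gpu_price[0]   # worst case
margin_high = (gpu_price[1] - gpu_cost[0]) / gpu_price[1]  # best case
print(f"Implied margin per GPU: {margin_low:.0%}-{margin_high:.0%}")
# → Compute spend: $4.95B-$5.40B
# → Implied margin per GPU: 75%-91%
```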
Research from various enterprise studies shows that classical machine learning models are resource-efficient, often trainable on simple laptops or minimal cloud infrastructure. This computational efficiency allows organizations to deploy predictive models faster, without the costly overhead of collecting and managing enormous datasets required by deep learning models.
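To make the resource-efficiency claim concrete, here is a minimal sketch (assuming scikit-learn is installed, with synthetic data standing in for enterprise records) of a classical model trained on laptop-scale data in well under a second, with no GPU involved:

```python
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 10,000 synthetic rows with 20 features -- trivially laptop-sized.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

start = time.perf_counter()
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
elapsed = time.perf_counter() - start

print(f"Trained in {elapsed:.3f}s, test accuracy {model.score(X_test, y_test):.2f}")
```

The same training run on an LLM-class model would require GPU hours; here the marginal cost of both training and inference is effectively zero.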
The Digital Maturity Foundation
AI success also depends on digital maturity. Many organizations are still laying data foundations. “Let’s say you want to run analytics on how many tickets were raised, do a dashboard on how many tickets one can expect … all of that was over a call. Nothing was digitized. There is no trace of it. That is the reason why chatbots are getting created because they are now recording and getting traced,” Iyer said.
This observation aligns with the MIT CISR Enterprise AI Maturity Model, which shows that 28% of enterprises remain in “Stage 1 – Experiment and Prepare.” These organizations focus on educating their workforce, formulating AI policies and experimenting with AI technologies before scaling to more sophisticated implementations.
Speaking with Information Security Media Group, Nagaraj Nagabhushanam, vice president of data and analytics and designated AI officer at The Hindu Group, shared how traditional AI underpins many core systems. “It has been the backbone of recommender systems and next-best-action systems that we’ve designed over the years,” Nagabhushanam said. These recommender systems are often a mix of heavily heuristic and rules-based applications as well as established NLP models critical for entity recognition, personalization and subscription management, he said (see: How AI Is Transforming Newsroom Operations).
The Privacy and Compliance Advantage
Strict compliance and privacy requirements push enterprises toward controlled AI development. “We only train [AI models] on commercially licensed open-source datasets … Even in such cases, we ensure the data in the model that we build, it stays exclusively. At any point of time, your data or your model is not going to be used for the betterment of someone else,” Iyer said.
This approach reflects broader enterprise concerns about AI governance. According to KPMG research, frameworks such as Local Interpretable Model-Agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP) help clarify AI decisions, support compliance and build stakeholder confidence. These tools enable organizations to maintain transparency while protecting proprietary data and meeting regulatory requirements.
Right-Sizing AI Solutions
Iyer said enterprise needs are often highly contextual, making massive models unnecessary. “Do you need a 600-700 billion [parameter] model sitting in your enterprise running inferences when the questions are going to be very contextual?” she said.
This practical wisdom is supported by recent industry analysis. Traditional ML models often match the classification accuracy of deep learning alternatives at a fraction of the cost. Banks regularly use logistic regression and random forests for credit scoring, fraud detection and risk management, while healthcare organizations deploy decision trees for diagnostic support and treatment planning.
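The credit-scoring pattern described above can be sketched with scikit-learn. The feature names and data here are synthetic assumptions, but the mechanism is real: a random forest exposes per-feature importances that give compliance teams a starting point for explaining why the model scores applicants the way it does.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Hypothetical credit features; the dataset is synthetic.
feature_names = ["income", "debt_ratio", "late_payments", "account_age"]
X, y = make_classification(
    n_samples=2_000, n_features=4, n_informative=3, n_redundant=1,
    random_state=1,
)

model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X, y)

# Rank features by how much each contributed to the trees' splits.
ranked = sorted(
    zip(feature_names, model.feature_importances_),
    key=lambda pair: pair[1], reverse=True,
)
for name, importance in ranked:
    print(f"{name:>15}: {importance:.2f}")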
That doesn’t mean enterprises are avoiding LLMs entirely. Zoho’s research labs continue to experiment with models ranging from 7 billion to 32 billion parameters, and are exploring “mixture of experts” models that combine efficiency with capability.
Current enterprise adoption statistics show that 78% of organizations use AI in at least one business function, up from 55% a year earlier. But the most successful deployments often involve hybrid approaches that use both traditional ML and LLMs strategically.