Ramprakash Ramamoorthy, Director of AI Research at Zoho & ManageEngine
Ramprakash Ramamoorthy, Director of AI Research at Zoho & ManageEngine, in an insightful conversation with CIOTechOutlook, discussed the critical role of explainable AI in enterprise analytics. He highlighted the importance of building trust and transparency in AI systems, concept drift in dynamic conditions, and the growing demand for integrated platforms in the generative AI era. Ramprakash also described the future implications of Causal AI, the difficulties of model interpretability, and the ethical challenges organizations must consider when deploying machine learning solutions.
Explainable AI is important in an enterprise environment where trust is essential. It is the feature that determines whether an enterprise trusts an AI system and keeps it grounded. It has been observed that when an explanation accompanies AI outputs, usage increases. A model trained on only six months of data is unlikely to attain deep subject-matter expertise. However, when explanations are provided, domain experts can understand the model's rationale, see where it works and where it does not, and are more willing to accept its conclusions.
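The point about attaching explanations to outputs can be illustrated with a small sketch: permutation importance from scikit-learn ranks which inputs a model actually relies on, giving a domain expert something concrete to check the model's rationale against. The dataset and model choice here are illustrative assumptions, not anything specific to Zoho's products.

```python
# Illustrative sketch: rank features by how much shuffling each one
# hurts held-out accuracy, and surface the top features as a simple
# explanation a domain expert can sanity-check.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permute each feature and measure the drop in test-set score.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=5, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:3]:
    print(f"{name}: {score:.3f}")
```

A ranking like this does not make the model inherently interpretable, but it gives experts a basis for judging where the model's reasoning is plausible and where it is not.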
Measuring concept drift is critical. Concept drift occurs when a model is trained on a dataset and initially achieves high accuracy, but the real-world data distribution changes over time; the model then continues to make predictions based on outdated training data. Detecting concept drift helps identify when to retrain the model. Since retraining is expensive, techniques such as active learning can be applied to detect drift and support the retraining decision. These processes are called model operations, or ModelOps. Monitoring a deployed model for concept drift is standard practice.
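As a minimal sketch of this kind of monitoring, one common approach is a two-sample Kolmogorov-Smirnov test (here via SciPy) comparing a feature's live distribution against its training distribution. The threshold and simulated data are illustrative assumptions; production ModelOps pipelines typically track many features, the target, and model accuracy together.

```python
# Illustrative drift check: flag a feature when its live distribution
# differs significantly from the training distribution.
import numpy as np
from scipy.stats import ks_2samp

def feature_drifted(train_values, live_values, alpha=0.05):
    """Two-sample KS test; returns True when the live data looks
    significantly different from the training data."""
    stat, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)
shifted = rng.normal(loc=0.7, scale=1.0, size=5000)  # simulated drift

print(feature_drifted(train, train))    # False: identical distributions
print(feature_drifted(train, shifted))  # True: distribution has moved
```

A signal like this does not retrain anything by itself; it feeds the decision of when retraining is worth the cost.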
In the current LLM-driven world, reasoning and explainability are frequently ignored. One model is created to generate output without reasoning, and then a separate model is created to explain that output after the fact. This highlights a critical gap in current AI development. Causal AI has an important role to play in the future because almost all existing models depend heavily on correlation. Although various causal techniques exist, research in this area has declined significantly since the rise of LLMs, which focus on inferring patterns from large volumes of data. It is important to pursue research directions that incorporate knowledge graphs, so that larger models become inherently explainable rather than relying on post hoc explanations.
Establishing the basics is critical. Integrated processes and integrated data streams are incredibly important. Generative AI has made it necessary for enterprises to be very clear about what they need, to determine their requirements across the enterprise, and to adopt a single, simplified platform.
Generative AI only becomes powerful when the machine learning and analytics components are fully integrated. It is not necessary to use generative AI to query data that is already in view; the power comes from querying data that spans multiple systems. However, privacy considerations play a crucial role. Organizations cannot allow unrestricted access to all information. It is essential to define appropriate privacy boundaries and establish the right data connections so that generative AI operates effectively on top of the existing machine learning and analytics infrastructure.
Never send data to the ML model provider; if data must be shared, it should be usable only by the company itself. It is important to set organizational and privacy boundaries, because enterprise data is sensitive and enterprise-specific. The statistical foundations of ML remain relevant, especially since the cost of inference and training for a traditional ML model is minimal today. This is not the case with LLMs, which is why right-sizing models is important. Large generative AI models should be used only when emergent behavior is needed; for querying or analyzing existing data, traditional ML models are sufficient. Therefore, it is important to continue investing, ensure that privacy boundaries are appropriately maintained, and prevent both data and processes from being confined within silos. In addition, it is important to recognize concept drift, ensure appropriate retraining, and maintain explainability to build user trust.
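The right-sizing idea above can be sketched as a simple routing rule: send a request to a large generative model only when the task genuinely needs open-ended generation, and otherwise keep it on a cheap traditional model. The task labels and routing rule here are hypothetical illustrations, not a description of any vendor's actual design.

```python
# Hypothetical "right-sizing" router: tasks that need emergent,
# open-ended generation go to the LLM; querying and analyzing
# existing data stays on a small traditional ML model.
GENERATIVE_TASKS = {"summarize", "draft_email", "open_ended_qa"}

def pick_model(task: str) -> str:
    """Return which model tier should handle the task."""
    return "llm" if task in GENERATIVE_TASKS else "traditional_ml"

print(pick_model("anomaly_detection"))  # routes to the cheaper model
print(pick_model("draft_email"))        # routes to the large model
```

Even a crude rule like this keeps inference costs proportional to the task, reserving the expensive model for the cases where its emergent behavior actually adds value.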