How Explainable AI, Causal Models, & Gen AI Integration are Reshaping Enterprise Analytics

Ramprakash Ramamoorthy, Director of AI Research at Zoho & ManageEngine

Ramprakash Ramamoorthy, Director of AI Research at Zoho & ManageEngine, in an insightful conversation with CIOTechOutlook, discussed the critical role of explainable AI in enterprise analytics. He spoke about the significance of building trust and transparency in AI systems, concept drift under dynamic conditions, and the growing demand for integrated platforms in the generative AI era. Ramprakash also described the future implications of Causal AI, the difficulties of model interpretability, and the ethical challenges organizations must consider when deploying machine learning solutions.

How do you see the role of explainable AI evolving in critical business analytics? What practical steps can teams take to ensure machine learning models remain unbiased over time?

Explainable AI is important in an enterprise environment, where trust is essential. It is the feature that determines whether an enterprise trusts an AI system, and it keeps that system grounded. It has been observed that when an explanation accompanies AI outputs, usage increases. A model trained on only six months of data is unlikely to attain deep subject-matter expertise. When explanations are provided, however, domain experts can understand the model's rationale, see where it performs well and where it does not, and are more willing to accept its conclusions.
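To make this concrete, here is a minimal sketch of attaching a per-prediction explanation to a model's output. It uses a linear model, where each feature's contribution relative to the average prediction can be decomposed exactly; the feature names and data are illustrative assumptions, not drawn from any specific Zoho or ManageEngine system.

```python
# Minimal sketch: attach a per-prediction explanation to a model's output.
# For a linear model, each feature's contribution relative to the average
# prediction decomposes exactly as coefficient * (value - training mean).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
feature_names = ["ticket_volume", "resolution_time", "agent_load"]  # illustrative
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.1, size=500)

model = Ridge().fit(X, y)
baseline = X.mean(axis=0)

def explain(x):
    """Return the prediction plus each feature's signed contribution."""
    contributions = model.coef_ * (x - baseline)
    prediction = model.predict(x.reshape(1, -1))[0]
    ranked = sorted(zip(feature_names, contributions), key=lambda t: -abs(t[1]))
    return prediction, ranked

prediction, reasons = explain(X[0])
print(f"prediction = {prediction:.2f}")
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```

Surfacing the ranked contributions alongside the prediction is what lets a domain expert check the rationale rather than take the number on faith.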

In fast-changing industries, how can analytics teams keep models relevant without constant retraining? What metrics do you prioritize when measuring the real-world impact of deployed machine learning solutions?

Measuring concept drift is critical. Concept drift happens when a model is trained on a dataset and initially achieves high accuracy, but the real-world data distribution then changes over time, so the model keeps making predictions based on outdated training data. Tracking concept drift helps identify when to retrain the model. Since retraining is expensive, techniques such as active learning can be applied to detect drift and inform the retraining decision. These processes are called model operations, or ModelOps, and monitoring a deployed model for concept drift is standard practice.
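As an illustration of what such monitoring can look like in practice, the sketch below compares a live window of a feature against its training-time reference using a two-sample Kolmogorov–Smirnov test. The window sizes, threshold, and retraining hook are assumptions for the example, not a description of any specific ModelOps product.

```python
# Minimal sketch of drift monitoring: compare a live window of a feature
# against its training-time reference with a two-sample KS test and flag
# the model for retraining when the distributions diverge.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
reference = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time sample
live = rng.normal(loc=0.6, scale=1.2, size=1_000)       # shifted production data

def drift_detected(reference, live, alpha=0.01):
    """Flag drift when the KS test rejects 'same distribution' at level alpha."""
    statistic, p_value = ks_2samp(reference, live)
    return p_value < alpha, statistic, p_value

drifted, statistic, p_value = drift_detected(reference, live)
if drifted:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}); queue retraining.")
else:
    print("No significant drift; keep the deployed model.")
```

A check like this runs cheaply on every batch of production data, so the expensive retraining step is triggered only when the evidence warrants it.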

How do you balance model complexity with the need for interpretability in enterprise settings? What are some overlooked challenges when moving from proof-of-concept ML models to production?

In the current LLM-driven world, reasoning and explainability are frequently ignored. One model is built to generate output without exposing its reasoning, and a second model is then built to explain that output. This highlights a critical gap in current AI development. Causal AI has an important future role to fill because almost all existing models depend heavily on correlation. Although various causal techniques exist, research in this area has declined significantly since the rise of LLMs, which focus on inferring patterns from large volumes of data. It is important to pursue research directions that incorporate knowledge graphs, enabling larger models to be inherently explainable rather than relying on post hoc explanations.
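The reliance on correlation that causal methods aim to address can be shown in a few lines. In the synthetic sketch below, a confounder drives both X and Y, so they correlate strongly even though X has no causal effect on Y; adjusting for the confounder recovers the (null) effect. The data is purely illustrative.

```python
# Minimal sketch of why correlation misleads: a confounder Z drives both
# X and Y, so they correlate strongly even though X has no causal effect
# on Y. Adjusting for Z (here via regression) recovers the null effect.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
z = rng.normal(size=n)            # confounder
x = 2.0 * z + rng.normal(size=n)  # X is driven by Z
y = 3.0 * z + rng.normal(size=n)  # Y is driven by Z; X plays no causal role

print("corr(X, Y) =", round(float(np.corrcoef(x, y)[0, 1]), 2))  # spuriously high

# Regress Y on both X and Z: the coefficient on X collapses toward zero.
A = np.column_stack([x, z, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("effect of X adjusting for Z =", round(float(coef[0]), 3))  # ~0.0
```

A purely correlational model would happily use X to predict Y here; a causal framing, or a knowledge graph encoding that Z drives both, would flag the relationship as spurious.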

How do you see generative AI complementing traditional analytics workflows? What advice would you give to organizations struggling to align ML initiatives with business ROI?

Establishing the basics is critical. Integrated processes and integrated data streams are incredibly important. Generative AI has made it necessary for enterprises to be very clear about what they need, to determine those requirements across the whole organization, and to adopt a single, simplified platform.
Generative AI only becomes powerful when the machine learning and analytics components are fully integrated. There is little value in using generative AI to query data that is already in view; the power comes from querying data that spans multiple systems. However, privacy considerations play a crucial role. Organizations cannot allow unrestricted access to all information. It is essential to define appropriate privacy boundaries and establish the right data connections so that generative AI operates effectively on top of the existing machine learning and analytics infrastructure.
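One way to think about such boundaries is a policy check that filters which sources a cross-system query may touch before anything reaches the generative layer. The sketch below is hypothetical: the source names, roles, and policy table are invented for illustration and do not reflect any particular platform's API.

```python
# Hypothetical sketch: filter which data sources a cross-system query may
# touch before anything reaches the generative layer. Source names, roles,
# and the policy table are invented for illustration.
from dataclasses import dataclass

POLICY = {
    "crm": {"sales", "support"},
    "hr_records": {"hr"},              # sensitive: HR roles only
    "it_tickets": {"support", "it"},
}

@dataclass
class Query:
    user_role: str
    sources: list

def authorized_sources(query: Query) -> list:
    """Return only the sources this user's role may expose to the model."""
    return [s for s in query.sources if query.user_role in POLICY.get(s, set())]

query = Query(user_role="support", sources=["crm", "hr_records", "it_tickets"])
print("Routing query over:", authorized_sources(query))  # hr_records is withheld
```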

What emerging ethical considerations around ML deployment should teams watch closely? How do you foster a culture of experimentation and learning within data science teams?

Never send data to the ML model provider. If data is shared at all, it should be usable only by the enterprise itself; such data is enterprise-specific and sensitive, so organizational and privacy boundaries must be set. The statistical foundations of ML remain relevant, especially since the cost of training and inference for a classical ML model is minimal today. That is not the case with LLMs, which is why right-sizing models is important. Large generative AI models should be used only when emergent behavior is needed; for querying or analyzing existing data, traditional ML models are sufficient. Therefore, it is important to continue investing, to keep privacy boundaries appropriately maintained, and to prevent both data and processes from being confined to silos. In addition, teams should watch for concept drift, retrain when appropriate, and maintain explainability to build user trust.
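A minimal way to encode this right-sizing principle is a router that reserves the large generative model for tasks that genuinely need open-ended generation. The task labels and routing rule below are illustrative assumptions, not a prescribed taxonomy.

```python
# Hypothetical sketch of right-sizing: reserve the large generative model
# for tasks that need open-ended generation; route everything else to a
# cheap traditional model. Task labels are illustrative assumptions.
GENERATIVE_TASKS = {"summarize_report", "draft_reply", "open_ended_qa"}

def route(task: str) -> str:
    """Pick the smallest model class that can handle the task."""
    return "large_generative_model" if task in GENERATIVE_TASKS else "traditional_ml_model"

for task in ["classify_ticket", "forecast_demand", "draft_reply"]:
    print(task, "->", route(task))
```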