Simplifying Processes with GenAI Integration for Faster Results

Palanivel Saravanan, Head-Technology, Cloud Engineering, Oracle India | Friday, 24 May 2024, 12:15 IST


In an interaction with CIOTechOutlook, Palanivel Saravanan, Head-Technology, Cloud Engineering, Oracle India, shares his insights on how businesses can overcome the challenges of integrating AI technologies into existing workflows without disrupting operational efficiency and more.

How can businesses overcome the challenge of integrating AI technologies into existing workflows without disrupting operational efficiency?

Many customers today build a range of new services around Generative AI (GenAI) and then spend considerable time synchronizing them with their other services. That approach is not only time-consuming but often fails to deliver the desired results. Other customers instead consume AI as a service within their operations, which avoids building complex, organization-specific models or AI services from scratch.

Consider a simple scenario: a contact center or call-recording system. When sentiment analysis, summarization, and action points are layered onto call recordings, the scope of AI usage expands and time to market shrinks. By contrast, when AI is treated as the centerpiece around which the service itself is developed, integration becomes a long and complex process.

Instead, AI should be embedded into existing applications to keep the process smooth. This improves the effectiveness of the process and makes the application smarter, which in turn improves the service's response time.
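The call-center example above can be sketched in a few lines: an existing call record keeps its schema, and AI-derived fields are simply attached to it. This is a minimal illustration only; the `detect_sentiment` and `summarize` functions here are naive placeholders standing in for calls to a hosted AI service, not any real provider API.

```python
def detect_sentiment(text: str) -> str:
    # Placeholder for an AI-as-a-service sentiment endpoint:
    # flag a call as negative if it contains trigger words.
    negative = {"angry", "refund", "cancel", "frustrated"}
    return "negative" if negative & set(text.lower().split()) else "positive"

def summarize(text: str) -> str:
    # Placeholder for a summarization endpoint:
    # naively take the first sentence as the "summary".
    return text.split(".")[0].strip() + "."

def enrich_call_record(record: dict) -> dict:
    """Attach AI-derived fields to an existing call record
    without changing the record's original schema."""
    transcript = record["transcript"]
    return {
        **record,
        "sentiment": detect_sentiment(transcript),
        "summary": summarize(transcript),
    }

call = {
    "id": "C-1042",
    "transcript": "Customer is frustrated and wants a refund. "
                  "Agent offered a replacement.",
}
enriched = enrich_call_record(call)
```

The point of the sketch is the shape of the integration: the application's data model is untouched, and AI enrichment is additive, which is what keeps the process smooth.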

Businesses often struggle to access high-quality, relevant data. Data may be siloed across departments or stored in disparate formats, making it challenging to utilize effectively. How can this problem be solved?

Any enterprise, or even an SME, has multiple data sources. Customer data, support data, and sales data are all data islands; add a large language model or a GenAI service and it becomes yet another island, because that model is built from its own general data source.

Now, all these islands must be integrated to obtain the best business outcome. Since we are a data company and have managed customers' data for the last 47 years, our approach is to bring the AI to the data instead of moving the data to the AI, which is a unique approach. Since you have various data models or data islands, place AI at the center and use the data through retrieval-augmented generation (RAG): feed the relevant data into the model, so the resulting model fits the customer and the use case you are building.
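The RAG pattern described above can be sketched minimally: retrieve the most relevant records from the enterprise's data islands, then ground the model's prompt in them. This sketch uses naive word-overlap retrieval purely for illustration; a production system would use vector embeddings and a real LLM call in place of the returned prompt string.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query.
    Real RAG systems use embedding similarity instead."""
    q = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the model's answer in retrieved enterprise data
    rather than in the model's general training data."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Support tickets are stored in the CRM for 90 days.",
    "Sales data is refreshed nightly from the ERP.",
    "Customer PII must never leave the dedicated cluster.",
]
prompt = build_prompt("How long are support tickets stored?", docs)
```

The key idea matches the interview's framing: the data stays where it is, and the AI is brought to it at query time instead of retraining a model on every island.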

How can enterprises manage the complexity of training generative AI models to ensure consistent and reliable results across diverse operational tasks?

This requires a thorough analysis on the enterprise's part. There are two ways to approach it. The first is to take a common model and fine-tune it. This is typically daunting because the common model is vast, leading to overspending and increased costs; further fine-tuning then generates yet another derivative model. Hence, it is crucial to identify the specific problem to be solved. The right solution may not require a 7-trillion-parameter model; a smaller model may suffice, and fine-tuning a smaller model with your data is much simpler. Fine-tuning it, adding the data, and running it in a dedicated cluster also makes it more secure, which is what many enterprises seek.
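A practical first step in the fine-tuning route described above is preparing enterprise data in the prompt/completion layout that most fine-tuning pipelines accept. The examples and helper below are hypothetical, shown only to illustrate the JSON-Lines shape of the training data, not any specific vendor's format.

```python
import json

# Hypothetical fine-tuning examples drawn from enterprise support data.
examples = [
    {"prompt": "How do I reset my password?",
     "completion": "Use the self-service portal under Account > Security."},
    {"prompt": "What is the refund window?",
     "completion": "Refunds are accepted within 30 days of purchase."},
]

def to_jsonl(records: list[dict]) -> str:
    """Serialize examples as JSON Lines: one JSON object per line,
    the layout commonly expected by fine-tuning jobs."""
    return "\n".join(json.dumps(r) for r in records)

payload = to_jsonl(examples)
```

Curating a small, task-specific dataset like this is what makes fine-tuning a smaller model tractable compared with adapting a vast common model.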

Enterprises today want to ensure their model is secure. Fine-tuning should happen within a model of your own, since a standard model accessible to everyone presents a significant security risk. Therefore, choose a model relevant to your use case: one that solves the business pain point, can be fine-tuned on your data, and runs in your dedicated cluster, making it highly efficient, effective, and secure.

How can businesses overcome the scalability limitations of generative AI systems, especially when deploying them across large-scale operational environments with varying demands and resource constraints?

Standard or large language models are not one-size-fits-all solutions. Every enterprise views each use case, and its associated costs, differently when adopting a GenAI model. To scale, a model must therefore be relevant to your specific data: build a model tailored to your data and your cluster so that it becomes your common model, avoiding scaling challenges later.

For instance, for customers adopting Cohere or Meta's Llama open-source models, we offer a simple tool, the OCI GenAI agent. It helps customers choose and fine-tune a model, which then becomes their own customer-specific model, allocated exclusively to them on elastic infrastructure. As usage increases, the infrastructure scales from a few GPUs to several hundred based on demand, and scales down again as needed. This addresses the scalability challenge and makes the model a better fit for the customer than a generic one.
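The elastic behaviour described above, scaling a dedicated cluster from a few GPUs to several hundred and back, can be sketched as a simple capacity policy. This is a generic illustration of the idea, not the OCI GenAI agent's actual scaling logic; the capacity and bound parameters are assumptions.

```python
def target_gpus(requests_per_min: int, capacity_per_gpu: int = 50,
                min_gpus: int = 2, max_gpus: int = 256) -> int:
    """Size a dedicated cluster to current demand: enough GPUs to
    serve the load, bounded by a floor (always-on baseline) and a
    ceiling (the cluster's maximum allocation)."""
    needed = -(-requests_per_min // capacity_per_gpu)  # ceiling division
    return max(min_gpus, min(needed, max_gpus))
```

Demand-driven sizing with a floor and a ceiling is what lets the same dedicated cluster serve both quiet and peak periods without manual re-provisioning.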

How do you perceive the evolution of data security practices with the rise of artificial intelligence? Will it alter how data security is currently being executed?

Security is a critical topic, and it is not a one-off event. Data should be protected 24/7 and secured at every point in time. With AI, data should be even more secure, and AI itself should be leveraged to govern how data is used.

This is where the dedicated AI cluster comes into the picture, particularly for enterprises. Instead of a common AI model, enterprises can use a dedicated AI cluster model. Generative AI always combines enterprise data with the language model, so if one element is compromised, the other is compromised as well. Enterprise data is well protected because security policies and postures have been tightened over time, and everything in the enterprise is built up over time.

Although GenAI platforms themselves are secure, a shared common model carries a much greater risk element. Running the model in the cloud under your own security posture brings that risk under your control: the data and the model are secure, and consequently the use case is secure as well. This is the direction all enterprises will move towards, and it will become the common approach.
