In an era where AI continues to transform industries, the debate around continuous learning in AI models has never been more relevant. Today, we delve into how businesses leverage cross-industry insights, adapt to rapid technological advancements, and address the ever-present challenge of maintaining high-quality data to ensure their AI models remain effective and adaptive.
Joining us for this compelling discussion are Ajit Pethkar, Head of Solutions Delivery, and Sameer Bhangale, Associate Director of Technology, from Xoriant. Together, they bring a wealth of experience in crafting innovative AI solutions across sectors, exploring the intersection of technology, strategy, and domain expertise.
1. AI is making strides across various industries. How do you leverage cross-industry insights to enhance continuous learning in your models? Can you share an example where this approach has been impactful?
Ajit: Continuous learning is absolutely crucial for the successful deployment and long-term performance of AI models. Over time, as we accumulate larger and more diverse datasets, it's essential to retrain the models to ensure that their outcomes remain relevant and aligned with business objectives.
A great advantage of working across various industries is the ability to identify commonalities and transferable insights. Recently, we tackled a problem for one of our healthcare clients. Interestingly, we discovered that a similar challenge existed in sectors like hospitality and airlines. While the specific features and data sets differed, the underlying structure of the problem provided valuable inputs for refining our models. By incorporating these cross-industry learnings, we were able to enhance the accuracy and effectiveness of the solutions, ensuring the models delivered business-relevant results. This approach not only strengthens the models but also underscores the importance of collaborative innovation across industries.
2. Given the rapid shifts in market dynamics, how do you maintain the AI models' relevance and accuracy? What specific continuous learning strategies do you rely on to keep the models adaptive?
Sameer: Xoriant has a dedicated methodology for continuous accuracy improvement and user satisfaction enhancement. This framework ensures that our AI solutions adapt dynamically to users’ evolving usage patterns and business needs.
We achieve this by continuously monitoring user satisfaction metrics and model accuracy, followed by targeted enhancements to improve performance. Our Center of Excellence (CoE) team plays a pivotal role by staying updated on the latest advancements from hyperscalers and open-source communities. This proactive approach helps us identify and adopt new, efficient models to deliver optimal results.

In the case of Large Language Models (LLMs), we fine-tune smaller versions, often called Small Language Models (SLMs), in the 3-billion to 7-billion-parameter range to achieve specific outcomes, such as generating content in a particular format or tone, or understanding certain document types better. Although fine-tuning SLMs helps us produce more reliable and accurate results, better alternatives sometimes emerge from the open-source community or from hyperscalers' new launches. For instance, there was a time when we had to fine-tune these models extensively to handle tabular data from documents. With the advent of multi-modal generative AI models, interpreting such data has become significantly more efficient. In many cases, replacing an existing model with a newer, more accurate pre-trained cloud-hosted model (here, a multi-modal one) not only improves accuracy but also delivers better ROI. We choose whichever approach best meets the business need in terms of performance and cost, whether that means plugging in an evolved alternative model or fine-tuning an existing one.

This blend of continuous monitoring, fine-tuning, strategic enhancements, and leveraging cutting-edge advancements ensures that our AI models remain adaptive, relevant, and effective.
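As an illustrative sketch only (not Xoriant's actual tooling), the trade-off Sameer describes between a fine-tuned SLM and a newer cloud-hosted model can be framed as a simple selection over measured accuracy and estimated cost; the model names, accuracy figures, and cost numbers below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    accuracy: float          # measured on a held-out evaluation set
    cost_per_1k_docs: float  # estimated inference cost in USD per 1,000 documents

def pick_model(candidates, min_accuracy, budget_per_1k_docs):
    """Return the cheapest candidate that clears both the accuracy bar and the budget."""
    eligible = [
        m for m in candidates
        if m.accuracy >= min_accuracy and m.cost_per_1k_docs <= budget_per_1k_docs
    ]
    if not eligible:
        return None  # no alternative meets the bar; keep the current model
    return min(eligible, key=lambda m: m.cost_per_1k_docs)

# Hypothetical candidates: a fine-tuned 7B SLM vs. a hosted multi-modal model
models = [
    CandidateModel("fine-tuned-slm-7b", accuracy=0.91, cost_per_1k_docs=1.20),
    CandidateModel("hosted-multimodal", accuracy=0.95, cost_per_1k_docs=0.80),
]
best = pick_model(models, min_accuracy=0.90, budget_per_1k_docs=1.00)
```

In practice the decision also weighs latency, data privacy, and integration effort, but even this minimal framing makes the "plug in an evolved model vs. keep fine-tuning" choice explicit.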
3. Ensuring data quality is essential for successful model training. What methods do you use to maintain high-quality, relevant data inputs for the AI models? Have you encountered any challenges in these efforts?
Ajit: Data quality is a cornerstone of any successful AI journey. For data to be meaningful, it must be complete, accurate, and up to date. We follow industry best practices to enhance and maintain data quality, ensuring it is optimized for AI model consumption. These practices include rigorous data validation, regular audits, and implementing robust data governance frameworks to eliminate inconsistencies.
Sameer: In most AI projects, the initial challenge is that the data is often unstructured and poorly defined. To address this, we collaborate closely with customer stakeholders, asking the right questions to pinpoint usable and reliable data sources. Once identified, we employ advanced data engineering techniques to clean, organize, and transform this data into high-quality datasets suitable for model training. By combining a focus on data integrity and leveraging data engineering expertise, we ensure that our AI models are built on a foundation of reliable and relevant data. This process not only improves model performance but also aligns with evolving business needs.
4. Feedback loops are essential for real-time model adjustments. How do you establish these loops to support continuous learning, and could you give an example where this approach boosted model accuracy?
Sameer: Feedback loops are crucial for keeping AI models accurate and adaptive. We establish these loops by integrating reinforcement learning algorithms, tracking user actions, and deploying automated MLOps pipelines to ensure models continuously improve over time.
For example, for a leading procurement platform, we implemented reinforcement learning where the model adapted in real-time based on user actions, such as product selections and browsing behavior. This approach significantly improved the accuracy of product recommendations, tailoring them to individual user preferences.
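The source does not describe the exact algorithm used on the procurement platform, but one common way to adapt recommendations in real time from user actions is an epsilon-greedy bandit, sketched here with hypothetical product names:

```python
import random

class EpsilonGreedyRecommender:
    """Adapt recommendation ranking online from user clicks (epsilon-greedy bandit)."""

    def __init__(self, products, epsilon=0.1, seed=None):
        self.products = list(products)
        self.epsilon = epsilon                      # fraction of time spent exploring
        self.clicks = {p: 0 for p in self.products}
        self.shows = {p: 0 for p in self.products}
        self.rng = random.Random(seed)

    def _ctr(self, product):
        # Observed click-through rate; 0.0 until the product has been shown
        return self.clicks[product] / self.shows[product] if self.shows[product] else 0.0

    def recommend(self):
        if self.rng.random() < self.epsilon:        # explore: try a random product
            choice = self.rng.choice(self.products)
        else:                                       # exploit: best click-through rate so far
            choice = max(self.products, key=self._ctr)
        self.shows[choice] += 1
        return choice

    def record_click(self, product):
        self.clicks[product] += 1                   # feedback signal from the user
```

Each user action (click or no click) immediately updates the statistics that drive the next recommendation, which is the essence of the real-time feedback loop described above.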
In another case involving a pharmaceutical logistics provider, we processed shipment-related documents for order management. Using MLOps pipelines, we captured instances of misclassified documents corrected by users to create incremental datasets. These datasets were then used to retrain the classification models, ultimately achieving an outstanding 99.5% accuracy.
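A minimal sketch of the incremental-dataset step in such a pipeline (the field names here are assumptions, not the client's actual schema): whenever a user overrides the model's predicted class, that document and its confirmed label are captured for the next retraining run.

```python
def build_incremental_dataset(predictions, corrections):
    """Collect documents whose predicted class a user corrected.

    predictions: {doc_id: class predicted by the model}
    corrections: {doc_id: class confirmed by the user}
    Returns (doc_id, correct_class) rows for the next retraining batch.
    """
    rows = []
    for doc_id, confirmed in corrections.items():
        if predictions.get(doc_id) != confirmed:
            rows.append((doc_id, confirmed))  # only genuine corrections are kept
    return rows

# Hypothetical shipment documents: the model misclassified d2
preds = {"d1": "invoice", "d2": "packing_list"}
fixes = {"d1": "invoice", "d2": "bill_of_lading"}
incremental = build_incremental_dataset(preds, fixes)
```

Feeding only these corrected examples back into retraining concentrates each new training cycle on exactly the cases the model previously got wrong.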
In a third engagement, we helped a global financial institution build a generative AI solution for extracting entities from financial and legal documents. By leveraging user-corrected data, we retrained the models in nightly batch jobs, which enhanced the accuracy of entity extraction to approximately 85%.
These feedback-driven strategies ensure that our AI solutions not only adapt to changing requirements but also deliver exceptional precision across diverse industries and applications.
5. Collaboration with domain experts can drive effective learning pathways. How have you used domain expertise to improve your models' continuous learning capabilities? Could you share an instance where this collaboration made a difference?
Ajit: While AI experts excel at extracting features from datasets and building models to achieve desired business outcomes, the contextual understanding of the data is just as critical. This is where the involvement of domain experts or subject matter experts (SMEs) becomes invaluable.
Domain experts bring deep knowledge of the specific data or dataset, enabling AI teams to interpret it within the appropriate context. When AI experts collaborate closely with SMEs, it becomes significantly easier to fine-tune or design targeted models that address the nuances of the business problem.
In a recent project, domain experts helped us uncover subtle patterns within the data that weren’t immediately apparent to the AI team. This insight allowed us to refine the model's feature selection process and improve its performance. Such synergistic collaboration ensures the AI models not only meet but often exceed the expectations for delivering business value.
Involving domain experts isn’t just a best practice—it’s a key driver for achieving reliable, context-aware AI solutions.
6. What are the key performance metrics you monitor to evaluate the effectiveness of continuous learning in AI models? How do these metrics inform any model improvement strategy?
Sameer: We monitor a mix of technical and user-centric metrics to evaluate the effectiveness of continuous learning in AI models.
On the technical side, metrics like Recall, Precision, Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), and Word Error Rate (WER) are crucial for fine-tuning models to meet business requirements. These metrics help us assess how well the model performs in terms of accuracy and error minimization.
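For reference, the standard definitions of the first few metrics Sameer lists can be written in a few lines of plain Python (WER is omitted here as it requires an edit-distance computation):

```python
import math

def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); Recall = TP/(TP+FN). Zero-safe."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

def rmse(actual, predicted):
    """Root Mean Square Error over paired observations."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent; actuals must be nonzero."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)
```

Tracking these per release makes the effect of each retraining cycle measurable, which is what turns "continuous learning" from a slogan into a feedback-controlled process.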
Once the models are deployed, we shift our focus to user-centric metrics such as user satisfaction, correction rates, and the time spent by users on the interface. These provide valuable insights into how well the AI solution aligns with the end-user's needs and the overall business value it delivers.
It’s worth noting that achieving high technical accuracy isn't always enough. In scenarios where accuracy has limitations, enhancing the user experience (UX) becomes pivotal in improving the AI's overall effectiveness. By prioritizing both performance metrics and user engagement insights, we ensure our strategies drive holistic improvement in AI solutions.
7. With the fast pace of technological advancement, how do you incorporate new AI technologies and methodologies into your continuous learning models? Is there a case where such integration led to enhanced accuracy?
Ajit: Continuous learning is crucial in any AI project, especially when working with rapidly evolving technologies like generative AI. We've seen a surge in the development and release of large language models (LLMs), which primarily focus on enhancing consistency, accuracy, and efficiency at an optimal cost.
As part of our practice, we continuously evaluate these LLMs and incorporate them into our solutions when applicable. For example, we regularly assess the capabilities of emerging models to determine their potential in solving specific business challenges. In addition, new techniques are being developed to process unstructured data, which has opened new possibilities for tackling complex problems across industries.
By staying on top of these advancements and integrating relevant new methodologies, we ensure that our models remain cutting-edge and capable of delivering increasingly accurate and effective results. One such integration involved leveraging the latest LLMs to improve document processing for a client, which resulted in a notable accuracy improvement and streamlined operational efficiency. This constant adaptation to new AI technologies is key to maintaining our competitive edge and driving continuous improvement in our models.
8. Looking ahead, what trends do you believe will shape continuous learning in AI? How are you preparing to adapt to these changes, and do you have examples of proactive steps you've taken to future-proof your models?
Ajit: AI is inherently a continuous process, not a one-time exercise. As technology experts, it’s crucial for us to evaluate the significance of emerging technologies, especially in relation to the challenges we've already solved. We need to constantly look back at previous solutions and assess how new advancements could enhance or refine our models. It’s an ongoing cycle of evolution, where continuous learning plays a vital role in adapting to changes and improving outcomes.
Sameer: Our established Center of Excellence (CoE) keeps a close eye on new frameworks, design patterns, and models. This team is dedicated to exploring innovative ways of solving both new and existing problems. With the recent rise of real-time APIs from LLM providers, we're exploring how these can enhance business use cases like repetitive conversations in call centers, appointment bookings, and enquiry centers. By proactively adopting such advancements, we’re preparing to adapt to the future and continuously improve the way we engage with end-users.
These proactive steps ensure that our models remain relevant and responsive to ever-evolving business needs, keeping them future-proof and aligned with the latest trends in AI technology.
We hope this conversation was insightful. Learn more about Xoriant's Data & AI offerings to adapt to rapid technological advancements.