Operationalizing Large Language Models (LLMs) in Enterprises

Date: 27 Apr, 2023 | Time: 10:00 AM ET | 4:00 PM CET

Presenters: Michele Goetz, Vice President, Principal Analyst at Forrester | Nitesh Jain, President and Chief Operating Officer at Course5 Intelligence

Top 5 Takeaways from Course5’s Webinar on Using Large Language Models for Business Impact

By Nitesh Jain, President & COO at Course5 Intelligence

 

Thanks to all the attendees of Course5 Intelligence’s recent webinar on ‘Operationalizing Large Language Models (LLMs) in Enterprises’, which featured guest speaker Michele Goetz, VP and Principal Analyst at Forrester. It was a very well-attended webinar, with many highly nuanced questions from the audience.

Much of that discussion centered on the very real, hard-hitting points raised during the webinar, points that got all of us thinking. Here are some of the key takeaways on leveraging generative AI and LLMs at scale in enterprises:

  • Growing focus on use cases with a strong return on investment – The main success factor we are seeing in the way leaders approach generative AI is that AI is no longer viewed as just an automation tool for making employees more efficient, but rather as a way to meaningfully and substantially help humans do a better job, which in turn drives significant business impact. The Industry 5.0 revolution is aligned with the Human-in-the-Loop strategy (explained next).
  • Need for Human-in-the-Loop or Human-partnered AI to see real gains from AI – People at all levels of the organization must see the AI capability as a new colleague (a co-pilot) that can help you if you know how to partner with it. An essential factor to remember is that human expertise has to come in at the beginning, at the foundational or design level of the AI model, to get the context right. You also need to ensure that a subject matter expert adequately reviews the model output so that it maps to actual business and customer-experience parameters. Once this discipline is thoroughly followed, you can move the AI into automated modes with regular check-ins (a minimal sketch of such a review gate follows this list). In general, AI will naturally augment human roles: people will guide the AI through better prompt engineering and do more oversight, validation, and review rather than the crunch work.
  • Connected data drives better outcomes – LLMs have the power to harness data across the enterprise and enable cross-functional insights in a way that becomes easy to consume.
  • A governance framework to keep the AI model on its rails – The overarching requirement when setting up an AI capability is to look at it from a governance perspective: Is my model doing what it’s supposed to do, and is it doing it holistically? Do I need to optimize it? Is there drift? Is there bias? Have I breached regulatory compliance? Is confidential data secure? You need to take a Center of Excellence (CoE) approach so that the model is continuously tried, tested, validated, and calibrated and the safeguards stay in place. It’s an iterative process: you should work with the business, legal, and IT teams and think through the overall implementation framework (a sketch of how such recurring checks might be automated also follows this list).
  • It all goes back to data integrity and trustworthiness – As LLMs evolve and gain the ability to parse more and more data in shorter time frames, two critical challenges must be met: the reliability of responses and visibility into how the program processed the data to arrive at a specific response. Many LLMs are fairly black-box; you could undertake a complex archaeological dig to try to fathom the origins of a specific output, but there is little time-to-value in that exercise.
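To make the Human-in-the-Loop idea above slightly more concrete, here is a minimal, hypothetical Python sketch of a review gate that routes LLM drafts to a subject matter expert before anything is automated. The function names, confidence score, and threshold are illustrative assumptions, not part of any specific Course5 or Forrester framework.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    prompt: str
    output: str
    confidence: float  # hypothetical model- or heuristic-derived score

def sme_review(draft: Draft) -> bool:
    """Placeholder for a subject matter expert's approval workflow."""
    print(f"Review needed for prompt: {draft.prompt!r}")
    return False  # in practice, set by the reviewing expert

def route_draft(draft: Draft, auto_approve_threshold: float = 0.9) -> str:
    # Low-confidence or early-stage outputs always go to a human reviewer.
    if draft.confidence < auto_approve_threshold:
        return "approved" if sme_review(draft) else "rejected"
    # High-confidence outputs can be auto-approved, but remain subject to
    # the regular check-ins described above.
    return "auto-approved"
```

The key design point is that automation is earned: outputs bypass the expert only after the process has been validated, and even then they stay within a cycle of periodic review.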
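In the same spirit, the governance questions above can be treated as recurring automated checks. The sketch below is only an assumption about how such a checklist might be wired together in Python; the concrete drift, bias, and compliance metrics would come from your own monitoring stack.

```python
from typing import Callable

# Each check is a named callable returning True when the model passes.
# The implementations here are placeholders for real monitoring logic.
governance_checks: dict[str, Callable[[], bool]] = {
    "doing_what_it_should": lambda: True,  # e.g. accuracy vs. a holdout set
    "no_drift": lambda: True,              # e.g. population stability index
    "no_bias": lambda: True,               # e.g. fairness metrics by segment
    "compliant": lambda: True,             # e.g. regulatory rule checks
    "data_secure": lambda: True,           # e.g. PII / secret leakage scans
}

def run_governance_review() -> list[str]:
    """Return the names of checks that failed in this review cycle."""
    return [name for name, check in governance_checks.items() if not check()]

# A CoE might run this on every model update or on a fixed cadence and
# escalate failures to business, legal, and IT stakeholders.
failures = run_governance_review()
if failures:
    print("Escalate:", ", ".join(failures))
```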

There is also a lot of obfuscation, and many LLMs are even trained on synthetic data. You must be very careful and work with partners such as your security teams to ensure there is no malicious data or code behind the scenes.

Ultimately, you must be able to trust the input if you are to trust any output from the model. You have to ensure the source data is created, managed, cleansed, and assessed properly for quality, and you have to ensure that the AI models are reliable and explainable.
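As a simple illustration of trusting the input, here is a minimal sketch of the kind of automated quality gate that source data might pass through before it feeds an LLM pipeline; the pandas checks and thresholds are assumptions chosen for the example, not a prescribed standard.

```python
import pandas as pd

def source_data_passes(df: pd.DataFrame) -> bool:
    """Run illustrative data-quality gates on a source dataset."""
    checks = {
        "no_empty_frame": len(df) > 0,
        "few_missing_values": df.isna().mean().max() < 0.05,
        "no_duplicate_rows": not df.duplicated().any(),
    }
    failed = [name for name, ok in checks.items() if not ok]
    if failed:
        print("Source data rejected:", ", ".join(failed))
    return not failed
```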

In conclusion – Simple but not so Simple

LLM-based tools like ChatGPT look deceptively simple. Behind them, however, is a complex framework with multiple components, including subject matter experts, data experts, security, legal, and others, who must work together to build trusted frameworks that can be automated and scaled for real business impact. You must consider applicability, feasibility, data availability, data integrity, and eventual user and business trust in the overall system before adopting it in everyday business.

The other aspect is that deploying LLMs on your existing data and analytics layers requires advanced data integration platforms and capabilities.

At Course5, some of our most significant client use cases are coming from Enterprise BI, where Augmented Analytics and Generative AI help users “talk” to and engage with their existing cross-enterprise data: they can ask questions, receive answers, ask follow-up questions to drill down, create summaries, and generate content, and do all of this very quickly and effectively. Other prominent use cases include content creation, updates, and refreshes with a high level of personalization for each customer segment, SEO-optimized content, Enterprise Knowledge Management, and Customer Service.
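To illustrate the “talk to your data” pattern, here is a minimal, hypothetical sketch of how a user question could be combined with schema context and sent to an LLM to draft a SQL query for review. The schema, prompt wording, and call_llm placeholder are assumptions for the example only and do not describe Course5’s actual implementation.

```python
SCHEMA_CONTEXT = """
Tables (illustrative):
  sales(region TEXT, quarter TEXT, revenue NUMERIC)
  customers(id INT, segment TEXT, region TEXT)
"""

def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM endpoint your stack uses."""
    raise NotImplementedError

def question_to_sql(question: str) -> str:
    # Ground the model in the enterprise schema so answers stay on-domain.
    prompt = (
        "You are a BI assistant. Using only the schema below, "
        "write a single SQL query that answers the question.\n"
        f"{SCHEMA_CONTEXT}\nQuestion: {question}\nSQL:"
    )
    sql = call_llm(prompt)
    # Generated SQL should still pass human or automated review before it
    # runs against production data (see the governance notes above).
    return sql
```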

We promise to continue to keep you all engaged on this topic and look forward to hearing your views.