The next wave of GenAI innovation
Agentic models, LLM commoditisation and vertical AI opportunities
Speak to technologists in Silicon Valley about emerging trends in generative AI and it is the rise of agentic frameworks that creates a lot of excitement.
Agentic models work by handing prompts to an autonomous agent that takes on and performs entire tasks with limited supervision. By chaining together multiple actions to achieve complex goals, these advanced AI models are set to become increasingly important over the next few years, in much the same way that APIs did in the 2000s.
To date, much of the impact within companies has come from applying generative AI tools to specific productivity-focused use cases.
The rise of autonomous agents pushes innovation into the generative AI application layer, enabling software leaders to deliver additional value to customers. This is possible because agentic frameworks have the capacity to understand goals, select the most appropriate LLMs and return the best possible response; or, better still, take action on the user's behalf without the customer having to do anything.
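As an illustration only, and not a description of any specific framework, the agentic loop described above can be sketched in a few lines of Python; the call_llm stub, the tool names and the stopping rule are all hypothetical:

```python
# Minimal, illustrative agentic loop: the agent asks a model for the next
# action, executes it, and stops when the goal is reached.
# `call_llm`, the tool names and the stopping rule are all hypothetical.

def call_llm(prompt: str) -> str:
    """Stand-in for whichever LLM the framework selects for this step."""
    if "order_status" not in prompt:
        return "lookup_order"
    if "customer_notified" not in prompt:
        return "send_update"
    return "done"

TOOLS = {
    "lookup_order": lambda ctx: ctx | {"order_status": "shipped"},
    "send_update": lambda ctx: ctx | {"customer_notified": True},
}

def run_agent(goal: str, max_steps: int = 5) -> dict:
    context = {"goal": goal}
    for _ in range(max_steps):
        # Ask the model which action moves the agent closer to the goal.
        action = call_llm(f"Goal: {goal}. State: {context}. Next action?")
        if action == "done" or action not in TOOLS:
            break
        context = TOOLS[action](context)  # act on the user's behalf
    return context

print(run_agent("Tell the customer where their order is"))
```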
Accuracy is vital
As enterprises start to explore the potential of the generative AI application layer and the move towards autonomous agents, the accuracy of outcomes will be vital.
Industry leaders such as Salesforce's Head of Competitive Intelligence, Brent Hayward, stressed that enterprise data has to be trusted and secured. “There are real liabilities to getting the answers wrong, not just reputationally but legally. I think that creates an interesting human-in-the-middle loop. And I think the best companies will start to evaluate not just the ROI that they're delivering customers, but they'll also look at things like the accuracy.” The broader wave of adoption of semi-autonomous and ultimately autonomous models is unlikely to happen until data accuracy is in the 97%-plus range, at which point GenAI apps will have the power to automate a wide range of tasks, as well as hypothesise, test and uncover new insights with less involvement from humans.
Hg’s Head of Value Creation, Chris Kindt, offered an example of where this agentic model could be applied in future by drawing the leadership audience’s attention to Howden, where a GenAI agent is being deployed alongside its insurance agents. By listening to customer conversations and populating a complex data capture form, Howden is reducing call-backs and enjoying productivity gains of 20%. This is just the tip of the spear as companies introduce a more agentic way of working.
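Howden's implementation has not been published, but the underlying pattern, turning a live conversation into structured form fields, can be sketched roughly as follows; the field names and the regex-based stand-in for the LLM extraction step are purely illustrative:

```python
# Illustrative only: extract structured form fields from a call transcript.
# In practice the extraction step would be an LLM call; here it is a stub.
import json
import re

FORM_FIELDS = ["policy_number", "claim_type", "incident_date"]  # assumed fields

def extract_fields(transcript: str) -> dict:
    """Stand-in for an LLM prompted to fill a data-capture form."""
    patterns = {
        "policy_number": r"policy (?:number )?([A-Z0-9-]+)",
        "claim_type": r"claim for (\w+)",
        "incident_date": r"on (\d{1,2} \w+ \d{4})",
    }
    form = {field: None for field in FORM_FIELDS}
    for field, pattern in patterns.items():
        match = re.search(pattern, transcript, re.IGNORECASE)
        if match:
            form[field] = match.group(1)
    return form

transcript = "Hi, policy number HD-10432, I'd like to make a claim for flooding on 3 March 2024."
print(json.dumps(extract_fields(transcript), indent=2))
```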
Brittle RPA
Agentic models are able to handle more dynamic, complex workflows across different systems than RPA technology, which relies on a far more rigid architecture.
This is going to raise the bar for customer service. The largest opportunity for enterprises lies at the intersection of accessing conversational workflows and activating more dynamic ones. Over time, this will likely lead to the replacement of more traditional, brittle RPA architectures.
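As a rough illustration of why agentic workflows are less brittle, consider the contrast below; the step names and records are invented for the example:

```python
# Illustrative contrast: a fixed RPA script versus an agent that adapts
# its plan to the record it actually sees. All names are invented.

def rpa_bot(record: dict) -> list[str]:
    # Rigid: replays the same steps and fails when the data deviates.
    if "address" not in record:
        raise RuntimeError("screen did not match the scripted click-path")
    return ["open_crm", "copy_address", "paste_into_billing", "close_crm"]

def agentic_worker(record: dict) -> list[str]:
    # Dynamic: chooses the next action based on the state it observes.
    plan = []
    if "address" not in record:
        plan.append("ask_customer_for_address")  # recover rather than fail
    plan += ["update_billing_system", "confirm_with_customer"]
    return plan

incomplete = {"customer": "Acme Ltd"}  # missing address breaks the RPA path
try:
    rpa_bot(incomplete)
except RuntimeError as err:
    print(f"RPA bot failed: {err}")
print(f"Agent plan: {agentic_worker(incomplete)}")
```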
This is notwithstanding the fact that there are significant risks to consider from a data toxicity, trust and privacy perspective, as Brent Hayward was keen to emphasise by drawing on the consumer model; going out and misappropriating proprietary data is not something that any enterprise would ever entertain doing.
“I think the question then becomes, how do we get to this fully autonomous idea? And we're ignoring some huge elements when we try to make that leap. We're ignoring trust. We're ignoring toxicity. We're ignoring data grounding. What's data grounding? It's actually being able to articulate where this answer came from. Where did this information come from? And we're also ignoring data privacy.”
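Data grounding, as described here, amounts to every answer carrying a reference to the source it was drawn from. A minimal sketch of that idea, with a hypothetical knowledge base and field names:

```python
# Illustrative data grounding: every answer carries a citation to the
# source it was drawn from. The knowledge base and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class GroundedAnswer:
    text: str
    source_id: str  # the document the answer was drawn from

KNOWLEDGE_BASE = {
    "policy-doc-7": "Claims must be filed within 30 days of the incident.",
}

def answer_with_grounding(question: str) -> GroundedAnswer:
    # A real system would retrieve and then generate; here we simply look up
    # the passage and return it together with its source identifier.
    doc_id, passage = next(iter(KNOWLEDGE_BASE.items()))
    return GroundedAnswer(text=passage, source_id=doc_id)

print(answer_with_grounding("How long do I have to file a claim?"))
```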
Vertical velocity
Vertical AI is a second area of future innovation and investment opportunity within the next wave of generative AI.
VC investors like Touring Capital see the biggest potential in vertical rather than horizontal AI, based on the view that LLMs will become commoditised. Start-ups like OpenAI have teamed up with one of the largest incumbents, Microsoft, allowing them to innovate at speed, with tremendous resources.
Touring Capital Co-founder, Nagraj Kashyap, observed that the level of integration needed to create fully autonomous agents is “off the charts”. With large incumbents focusing on horizontal AI, there is no clear investment case for start-ups in that layer.
“For us, the easiest place to invest is in vertical AI. Incumbents are going after the horizontal layer. If you talk about LLMs, we absolutely believe they will be commoditized. That is not an investable area from a VC perspective…it should not be backed by venture dollars.”
Proprietary data is regarded as a fundamental advantage within vertical AI, as it will effectively become a digital barrier; the data will not be accessible for a general-purpose LLM to train on.
Companies that use their proprietary data will have the opportunity to build domain-specific models and train them with lower cost and computing power, providing a pathway to future value creation.
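As a rough sketch of that pattern, and not any particular training stack, proprietary records can be converted into instruction-style examples that only the domain model ever sees; the records and field names below are invented:

```python
# Illustrative: turning proprietary records into training examples
# for a smaller, domain-specific model. All data here is invented.
import json

proprietary_records = [
    {"clause": "Flood damage excluded above 1m depth", "product": "HomeCover Plus"},
    {"clause": "Business interruption capped at 12 months", "product": "SME Shield"},
]

def to_training_example(record: dict) -> dict:
    return {
        "prompt": f"What does {record['product']} say about coverage limits?",
        "completion": record["clause"],
    }

dataset = [to_training_example(r) for r in proprietary_records]
# A domain model would be fine-tuned on `dataset`; a general-purpose LLM
# never sees this data, which is what makes it a digital barrier.
print(json.dumps(dataset, indent=2))
```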
The sheer number of LLMs means that the next wave of innovation in the application layer offers huge potential for enterprises to orchestrate conversational workflows across B2B processes. The intersection of high value and high data in one's enterprise, whether external or internal, is going to offer the most ROI, according to Hayward.
This will also depend on accessing metadata to provide context as the industry moves towards semi-autonomous and fully autonomous agents.
Going forward, it will become harder to draw a distinction between what is an LLM and what is a GenAI application. The application stack will fundamentally change, as applications become intrinsically tied to LLMs.