Since its founding, our firm has been investing in early-stage startups that develop horizontal or vertical enterprise software AI applications. We focused our attention on data-driven and machine-learning-based AI applications that overcome many of the limitations of logic-based systems. During the last six months, we spent time with our firm’s corporate partners to assess whether the enterprise is ready for generative AI and updated our investment theses accordingly. Our work convinced us that Large Language Models (LLMs)/Foundation Models and applications that incorporate them will open the door to the development of a new class of intelligent enterprise applications and also enable the enhancement of existing applications with important novel capabilities.
We organize generative AI software startups into four categories:
- Picks-and-shovels developers. This category includes companies developing infrastructure software to build and support generative AI applications. Vector databases, agent-building environments, model integration and management, automatic data labeling, and specialty AI clouds all belong in this category. Example companies include Pinecone and LlamaIndex.
- LLM/Foundation model developers. Companies developing and licensing access to horizontal or vertical LLMs. These LLMs can become new platforms for the consumer and the enterprise. Example companies include OpenAI, Cohere, and Anthropic.
- Content generators. Applications that interface with existing LLMs to automatically create various forms of content, e.g., images, video, documents, and ads, using simple prompt engineering. Example companies include Stability AI and Simplified.
- Intelligent agent developers. Companies developing horizontal or vertical applications that perform complex tasks either on their own or with a human in the loop. In these applications, LLMs are a component but not necessarily the key component. Example companies include Replit and Grammarly.
Synapse Partners focuses its generative AI investment efforts on companies developing intelligent agents that incorporate proprietary LLMs. The LLMs used by these agents may be developed from scratch or as specializations of open-source models. In these agents, LLMs contribute 20-30 percent of the system’s intelligence and overall capabilities, not the 70-80 percent typical of content generators. These agents will create value by targeting industry-specific use cases that involve frequently changing conditions, e.g., automotive configuration management or logistics delivery planning. While we will undoubtedly see attempts to create general-purpose intelligent agents, such as the one being developed by Inflection.ai, our firm’s enterprise software focus makes task-specific and industry-specific agents more suitable to our investment theses.
We have two investment theses for companies developing intelligent agents. The first is around startups developing industry-specific intelligent agents for tasks that have not been automated before and that change frequently, such as helping architects configure office buildings or helping vehicle designers generate new designs that exhibit certain characteristics and abide by certain regulations. Under this thesis, we also consider companies whose intelligent agents provide fresh takes on previously automated tasks, such as retail returns analysis report generation.
The second investment thesis focuses on companies that incorporate intelligent agents into their existing applications to improve their functionality and make them more flexible. For example, a customer support application can incorporate an intelligent agent that uses generative AI to summarize the cases handled by the call center in a way that helps the customer support representatives and provides a better interface to the customer.
Whether it incorporates an LLM or not, an intelligent agent is a software program that can:
- Accept input about a goal to be accomplished.
- Reason and create plans to address the stated goal, as well as situations presented to it in the process of accomplishing the goal (implying that it can sense its environment).
- Act autonomously on the plan it deems most appropriate (implying that it must be able to track its performance).
- Learn and improve its performance over time (implying that it can modify its knowledge based on feedback it receives from its environment).
To accomplish these operations, an intelligent agent has a model of the world where it operates. An LLM can be a component of an intelligent agent, particularly agents that work in domains that are based on grammar, e.g., data analysis, programming, and protein folding.
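The accept-plan-act-learn loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a reference to any real agent framework: the class name, the dictionary-based world model, and the string-tagged outcomes are all hypothetical stand-ins for far richer components in a production agent.

```python
from dataclasses import dataclass, field


@dataclass
class IntelligentAgent:
    """Illustrative sketch of the agent loop: plan, act, learn."""

    # The agent's model of the world: goals it has seen, mapped to
    # plans that worked. A real agent would hold a much richer model.
    world_model: dict = field(default_factory=dict)

    def plan(self, goal: str) -> list:
        # Reason over the world model; fall back to exploration
        # when the goal is unfamiliar.
        known_steps = self.world_model.get(goal)
        return known_steps if known_steps else ["explore:" + goal]

    def act(self, steps: list) -> list:
        # Execute the chosen plan while tracking each step's outcome.
        return ["done:" + step for step in steps]

    def learn(self, goal: str, steps: list, outcomes: list) -> None:
        # Feedback loop: if every step succeeded, remember the plan.
        if all(outcome.startswith("done:") for outcome in outcomes):
            self.world_model[goal] = steps

    def run(self, goal: str) -> list:
        steps = self.plan(goal)
        outcomes = self.act(steps)
        self.learn(goal, steps, outcomes)
        return outcomes
```

On the first run for a given goal the agent explores; on subsequent runs it reuses the plan it learned, which is the flexibility-through-feedback property discussed next.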
An intelligent agent must have the following properties:
- Flexibility: The agent must be able to adapt to changing circumstances and modify its behavior as the world around it changes.
- Reliability: The agent must perform its tasks reliably, handling unexpected events and recovering from errors.
- Explainability: The agent must be able to explain its reasoning and its decisions. In this way, users will understand how the agent works and trust it.
If the agent’s LLM is connected to a knowledge base containing industry knowledge, the LLM can be used to refine the goal stated by the user, which facilitates the formulation of one or more plans; to generate queries that retrieve data for the candidate plans created by the agent’s planning component; and to process the results of the performed actions by summarizing and explaining them to the user.
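The three roles the LLM plays inside the agent can be made concrete with a short sketch. Everything here is illustrative: the `llm` parameter stands in for any completion API, and the prompts and function names are assumptions, not part of any real product.

```python
def refine_goal(llm, goal, knowledge):
    # Role 1: ground the user's stated goal in knowledge retrieved
    # from the industry knowledge base, so the planner gets a
    # precise, domain-aware objective.
    return llm("Using this domain knowledge: " + knowledge +
               "\nRestate this goal precisely: " + goal)

def generate_queries(llm, refined_goal):
    # Role 2: ask the LLM for the data-retrieval queries that the
    # planning component's candidate plans will need.
    return llm("List the data queries needed for: " + refined_goal).splitlines()

def summarize_results(llm, results):
    # Role 3: summarize and explain the outcomes of the performed
    # actions to the user.
    return llm("Summarize these outcomes for the user: " + "; ".join(results))
```

Note that the LLM appears only at the edges of the pipeline, goal intake and result explanation, plus query drafting; the planning and acting remain the agent's own, consistent with the 20-30 percent contribution discussed earlier.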
Generative AI has quickly emerged as yet another important field within AI. It has generated incredible excitement among investors and entrepreneurs, leading to the formation of hundreds of new startups based on the data in our firm’s database alone. Our corporate partners have started to investigate its potential to address existing problems, some of which are already addressed by applications that employ other technologies, as well as to be a key ingredient in solving new problems. However, they see these as preliminary sandbox efforts and are not yet ready to allocate significant budgets to them. As a result of the excitement around generative AI, our portfolio companies are seeing greater demand for their solutions even when those solutions utilize discriminative AI approaches. Though we will pursue our new generative AI investment theses, we will not ignore the AI theses we have been pursuing, around which we have already built a strong portfolio and continue to receive extremely promising business plans.
In my forty years of work with AI, I have seen several “AI springs,” each the result of a particular technology. During every AI spring, many venture-backed startups were created, most of which stalled after an initial period of high expectations characterized by large funding rounds and stratospheric valuations. In every case, the culprit of their demise was overpromising and underdelivering to customers who were also moving more slowly than the startups expected. We may see the same outcome this time as well. For this reason, the next twelve months will be critical for generative AI. Startups will need to use the funds they raise both to educate prospective customers about how generative AI will benefit their enterprises and to deliver the value they claim their solutions will bring.