As with every previous AI era, the benefits the enterprise actually achieves from generative AI will likely fall short of those being promised. To succeed, enterprises must avoid the pitfalls of the previous AI eras. Over my 40-year career in enterprise AI, I have come to understand the types of and reasons for these difficulties, and what enterprises must do to avoid repeating them as they apply generative AI. I share them below.
Over the past forty years, and prior to the introduction of Large Language Models (LLMs), AI went through three eras:
- Expert Systems era in the 80s,
- Machine Learning era in the 90s,
- Deep Learning era that started in 2006.
During the first era, I was developing AI technology for the enterprise. During the second era, I was running organizations and startups that offered machine learning products. And for the past twenty years, I’ve been funding private companies that create cutting-edge enterprise AI solutions. The hype about generative AI is growing dangerously. You are actively experimenting with generative AI because you see it as an enabler to higher employee productivity and cost savings in areas such as customer support, marketing and sales, office operations, design, and engineering. But you must understand the pitfalls from the previous eras to successfully harness and deliver generative AI’s enterprise potential.
Each past AI era produced important successes. Digital Equipment Corporation’s XCON expert system was a success of the Expert Systems era, HNC’s Falcon system was a success of the Machine Learning era, and DeepMind’s (Google) AlphaFold was a success of the Deep Learning era. But each of these eras was also associated with a hype cycle. The hype cycles associated with the first two were followed by multiyear “winters,” caused by the failure to achieve what had been promised. The underachievement took several forms. In some cases, successful prototypes that addressed important problems could not scale to production-grade systems. In others, promised features proved prohibitively expensive to develop, or could not be developed regardless of cost. And in many cases, AI-based solutions were developed successfully by the enterprise itself or by third parties, but for problems that were unimportant.
There are several reasons for the mismatch between hype and reality.
- Enterprises were caught up in the hype and believed they must adopt AI technology, either out of fear of missing an opportunity or simply to assure their shareholders that they were not being left behind. In the meantime, they had not analyzed what the opportunity or opportunities for their particular company might actually be, or which important existing problems they could effectively address with AI-based solutions. They didn’t even try to determine whether they had the capacity to effectively develop, adopt, and deploy such solutions. Enterprises had a tough time connecting AI strategy and investment with business models and processes.
- In many instances, the AI technology needed to address the enterprise’s specific problem was not available, or it was not ready to address the complexity of the enterprise’s business processes. I found that two important questions were rarely asked. First, how many AI innovations need to be achieved before my enterprise problem can be effectively addressed? In other words, how many “codes need to be cracked” before the solution can be developed? Second, even if I’m willing to adopt a third-party AI solution to a problem my enterprise is facing, does the solution address my company’s complexities, and does my company have the ability to deploy such an AI solution?
- Data and models. Particularly during the Machine Learning and Deep Learning eras, the availability of labeled data at scale, and of appropriate models, became critical factors. Though most enterprises had data, it was not in the state the machine learning systems required. Many enterprises focused on creating the right data infrastructures, efforts that continue to this day.
- Expert systems required large teams of engineers with access to domain experts, who often had contradictory knowledge about how to address particular problems. The systems of the second and third eras required data architects, engineers, and scientists (who can, in fact, also be considered other types of experts) to manipulate the data and create the best models. Properly labeling data also required a certain level of expertise. The systems of each era required enterprises to have the right people, in the right numbers. Most corporations found it difficult to recruit data experts, many of whom, particularly after 1995 as the internet became popular, wanted to work for high-tech companies, including internet-first startups.
- Cost and other metrics. Enterprise AI solutions typically take longer to develop than other types of enterprise solutions. Moving from a prototype to scaling an AI solution and deploying it across the enterprise can take a long time and be particularly expensive. In addition to the development and maintenance costs, people need to be trained in new processes and new ways of interacting with such systems. Most of the time, enterprises didn’t understand these parameters and belatedly concluded that such a deployment would require resources they were unable or unwilling to allocate.
With the benefit of understanding the reasons for the mismatch between hype and reality in the previous AI eras, enterprises must take three actions as they embark on their generative AI initiatives.
Action 1: Create a strategy that identifies the business processes and important problems where generative AI is the appropriate ingredient, and that enables you to take advantage of untapped value. As part of the strategy, determine the types of LLMs that should be used, and define an architecture that gives you flexibility, since the capabilities of these models are changing rapidly. If proprietary modifications to these models will be necessary, ascertain that your enterprise can access the necessary data, and address the ethical, privacy, intellectual property, and cybersecurity issues that may arise.
Action 2: Identify and properly evaluate the types of risk these systems carry for your enterprise, including technology, people, cybersecurity, regulatory, and others. Identify ways to mitigate each type of risk, and the cost each mitigation carries.
Action 3: Experiment wisely and iterate. Define the experiments you will perform to test your hypotheses about how to address each problem identified in the strategy. For each experiment, establish evaluation criteria that are meaningful to your enterprise and consistent with your strategy. Iterate rapidly and prune hypotheses that cannot be validated. Establish the milestones each prototype must achieve before it can be considered a candidate for scaling, including how it addresses LLM hallucinations; identify the resources the scaling will require and where they will come from, e.g., reallocating resources from a different effort or allocating a new budget.
Telcos represent an interesting case. They had rich data; in their research centers, they had people with the appropriate technical knowledge; and in their data centers, they had the computing infrastructure both to take full advantage of machine learning technology and to help their enterprise customers do the same. Yet the lack of a strategy and an appropriate business model, and the fear of cannibalizing their existing model (the Innovator’s Dilemma), held them back from taking advantage of the AI opportunity.

AI continues to hold tremendous value for the enterprise. We are in the early stages of what promises to be a long-term trend. As the hype for generative AI increases, enterprises must draw lessons from the previous AI eras as they formulate their strategies and develop and roll out generative AI solutions. With technology, infrastructure, and data becoming broadly available and accessible, the enterprises that understand the lessons of past efforts and apply the three actions presented here will succeed during the emerging generative AI era.