Re-Imagining Corporate Innovation with a Silicon Valley Perspective

Succeeding in the Era of Generative AI

As in every previous AI era, the benefits promised to the enterprise from the application of generative AI will likely be higher than those that are actually achieved. To succeed, enterprises must avoid the pitfalls of those earlier eras. Over my 40-year career in enterprise AI, I have come to understand the types of and reasons for these difficulties, and what enterprises must do to avoid repeating them as they apply generative AI. I share these lessons below.

Over the past forty years, and prior to the introduction of Large Language Models (LLMs), AI went through three eras:

  1. The Expert Systems era of the 1980s,
  2. The Machine Learning era of the 1990s,
  3. The Deep Learning era, which began in 2006.

During the first era, I was developing AI technology for the enterprise. During the second, I ran organizations and startups that offered machine learning products. And for the past twenty years, I have been funding private companies that create cutting-edge enterprise AI solutions. The hype around generative AI is growing dangerously. You are actively experimenting with generative AI because you see it as an enabler of higher employee productivity and cost savings in areas such as customer support, marketing and sales, office operations, design, and engineering. But to successfully harness and deliver generative AI’s enterprise potential, you must understand the pitfalls of the previous eras.

 

Each past AI era produced important successes. Digital Equipment Corporation’s XCON expert system was a success of the Expert Systems era, HNC’s Falcon system was a success of the Machine Learning era, and DeepMind’s (Google) AlphaFold was a success of the Deep Learning era. But each of these eras was also accompanied by a hype cycle. The hype cycles of the first two were followed by multiyear “winters,” caused by underdelivering on what had been promised. The underachievement took several forms. In some cases, successful prototypes that addressed important problems could not scale to production-grade systems. In others, promised features proved prohibitively expensive to develop, or could not be developed at all regardless of cost. And in many cases, AI-based solutions were developed successfully, whether by the enterprise itself or by third parties, but addressed problems that were unimportant.

 

There are several reasons for the mismatch between hype and reality.

 

With the benefit of understanding the reasons for the mismatch between hype and reality in the previous AI eras, enterprises must take three actions as they embark on their generative AI initiatives.

 

Action 1: Create a strategy that identifies the business processes and important problems where generative AI is the appropriate ingredient, enabling you to take advantage of untapped value. As part of the strategy, determine the type of Large Language Models (LLMs) that should be used, and define an architecture that gives you flexibility, since the capabilities of these models are changing rapidly. If proprietary modifications to these models will be necessary, ascertain that your enterprise can access the necessary data, and address the ethical, privacy, intellectual property, and cybersecurity issues that may arise.
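The architectural flexibility described above can be sketched in code. The following is a minimal, hypothetical example (the class names `LLMBackend`, `StubBackend`, and `SupportAssistant` are illustrative, not part of any real product): the application codes against a narrow interface, so the underlying model can be swapped as capabilities and vendors change, without rewriting business logic.

```python
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Narrow interface the application depends on, so the
    underlying model can be replaced as capabilities evolve."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class StubBackend(LLMBackend):
    """Placeholder backend for local testing; a real deployment
    would wrap a hosted or self-hosted model behind the same
    interface."""

    def complete(self, prompt: str) -> str:
        return f"[stub completion for: {prompt}]"


class SupportAssistant:
    """Business logic depends only on LLMBackend, not on any
    specific model or vendor."""

    def __init__(self, backend: LLMBackend):
        self.backend = backend

    def answer(self, question: str) -> str:
        return self.backend.complete(f"Answer the customer question: {question}")


assistant = SupportAssistant(StubBackend())
print(assistant.answer("How do I reset my password?"))
```

Because only the backend class knows about a specific model, switching providers, or moving from a third-party model to a proprietary fine-tuned one, is a one-line change at construction time.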

 

Action 2: Identify and properly evaluate the types of risk these systems carry for your enterprise, including technology, people, cybersecurity, regulatory, and others. Identify ways to mitigate each type of risk and the cost each mitigation carries.

 

Action 3: Experiment wisely and iterate. Define the experiments you will perform to test your hypotheses about how to address each problem identified in the strategy. For each experiment, establish evaluation criteria that are meaningful to your enterprise and consistent with your strategy. Iterate rapidly and prune hypotheses that cannot be validated. Establish the milestones each prototype must achieve before it can be considered a candidate for scaling, including how it will address LLM hallucinations; identify the resources scaling will require and where they will come from, e.g., reallocating resources from a different effort or allocating a new budget.
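The experiment-and-prune loop above can be expressed as a short sketch. Everything here is illustrative (the `Experiment` class, the hypotheses, and the hard-coded scores are invented for the example): each hypothesis carries an enterprise-defined evaluation metric and a pre-agreed threshold, and only hypotheses that meet their criterion survive as scaling candidates.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Experiment:
    hypothesis: str
    run: Callable[[], float]   # returns a score on an enterprise-defined metric
    threshold: float           # minimum score to remain a scaling candidate


def prune(experiments: List[Experiment]) -> List[str]:
    """Keep only the hypotheses whose measured score meets the
    pre-agreed evaluation criterion; the rest are dropped."""
    survivors = []
    for e in experiments:
        if e.run() >= e.threshold:
            survivors.append(e.hypothesis)
    return survivors


# Hypothetical experiments with fixed scores standing in for real runs.
candidates = [
    Experiment("LLM drafts support replies", lambda: 0.82, 0.75),
    Experiment("LLM autogenerates contracts", lambda: 0.40, 0.90),
]
print(prune(candidates))
```

The point of the sketch is the discipline, not the code: criteria and thresholds are fixed before the experiment runs, so pruning is mechanical rather than a negotiation after the fact.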

 

AI continues to hold tremendous value for the enterprise. We are in the early stages of what promises to be a long-term trend. As the hype for generative AI increases, enterprises must draw lessons from the previous AI eras as they formulate their strategies and develop and roll out generative AI solutions. With technology, infrastructure, and data becoming broadly available and accessible, the enterprises that understand the lessons of past efforts and employ the three actions presented here will succeed during the emerging generative AI era.

 

[1] Telcos represented an interesting case. They had rich data; in their research centers, they had people with the appropriate technical knowledge; and in their data centers, they had the computing infrastructure both to take full advantage of learning-systems technology and to help their enterprise customers do the same. Yet the lack of a strategy and an appropriate business model, together with the fear of cannibalizing their existing model (the Innovator’s Dilemma), held them back from taking advantage of the AI opportunity.
