The Infinite Monkey Theorem holds that a monkey typing for an infinite amount of time would eventually produce the complete works of William Shakespeare. OpenAI and ChatGPT have sparked what looks like a form of that.
ChatGPT, or more broadly generative AI, is everything, everywhere, all at once. It’s magic: ask a question about anything and get a clear answer. Imagine an image in your mind and visualize it instantly. Seemingly overnight, people started proclaiming generative AI either as an existential threat to humanity or as the most important technological advancement of all time.
In previous technology waves such as machine learning (ML), a consensus formed among experts about the capabilities and limitations of the technology. But with generative AI, the disagreement, even among AI specialists, is stark. The recent leak of a memo from a Google researcher suggesting that early generative AI pioneers have "no moat" has sparked heated debate about the very nature of AI.
Just a few months ago, the trajectory of AI seemed to follow previous waves like the internet, cloud and mobile. Overhyped by some and dismissed as "old news" by others, AI had made varying degrees of impact on areas such as healthcare, automotive and retail. But the revolutionary experience of interacting with an AI that seems to understand and respond intelligently has led to unprecedented user adoption; OpenAI attracted 100 million users in two months. This, in turn, sparked a frenzy of zealous endorsements and vehement rebuttals.
It is now evident that generative AI is poised to bring significant change to businesses at a pace far exceeding previous technological shifts. As CIOs and other technology leaders strive to align their strategies with this unpredictable but influential trend, a few guidelines can help them navigate the changing currents.
Create opportunities for AI experimentation
Understanding the potential of AI can be overwhelming because its capabilities are so expansive. To simplify this, focus on encouraging experimentation in concrete, manageable areas. Encourage the use of AI in marketing, customer service and other simpler applications. Prototype and pilot in-house before defining complete solutions or dealing with every exception case (e.g., workflows to handle AI hallucinations).
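One such exception-handling workflow can be sketched in a few lines. This is a hypothetical illustration, not a production design: the `AIAnswer` fields, the `KNOWN_SOURCES` corpus and the confidence threshold are all assumptions made for the example. The idea is simply that an answer citing a source the model invented, or scoring below a confidence floor, gets routed to a human instead of the customer.

```python
from dataclasses import dataclass, field

# Hypothetical pilot workflow for handling AI hallucinations:
# answers with invented citations or low confidence go to human review.

KNOWN_SOURCES = {"pricing-faq", "returns-policy", "shipping-guide"}

@dataclass
class AIAnswer:
    text: str
    cited_sources: list = field(default_factory=list)  # doc IDs the model claims it used
    confidence: float = 0.0                            # heuristic score in [0, 1]

def route_answer(answer: AIAnswer, min_confidence: float = 0.7) -> str:
    """Return 'send' to reply automatically, or 'human_review' otherwise."""
    # A citation not found in our corpus is a strong hallucination signal.
    invented = [s for s in answer.cited_sources if s not in KNOWN_SOURCES]
    if invented or not answer.cited_sources or answer.confidence < min_confidence:
        return "human_review"
    return "send"

print(route_answer(AIAnswer("Returns accepted within 30 days.", ["returns-policy"], 0.9)))
print(route_answer(AIAnswer("We ship to Mars.", ["mars-logistics"], 0.95)))
```

A pilot like this keeps the scope concrete: one channel, one corpus, one escalation path, which is exactly the kind of bounded experiment the guideline suggests.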
Avoid lock-in, but buy to learn
The speed of generative AI adoption means that entering into long-term contracts with solution providers is more risky than ever. Traditional category leaders in HR, finance, sales, support, marketing and R&D could face a seismic shift due to the transformative potential of AI. In fact, our very definitions of these categories can undergo a complete metamorphosis. Therefore, vendor relationships must be flexible due to the potentially catastrophic cost of locking in solutions that do not scale.
That said, the most effective solutions often come from those with deep domain expertise. A select group of these vendors will seize the opportunities presented by AI in agile and inventive ways, producing returns far beyond those typically associated with implementing enterprise applications. Engaging with potential game changers can address immediate practical needs within your business and illuminate the big patterns of AI’s potential impact.
Current market-leading apps may not be able to pivot fast enough, so expect to see a wave of startups launched by veterans who have left their motherships behind.
Enable human + AI systems
Large Language Models (LLMs) will disrupt industries like customer support that rely on humans to provide answers to questions. Therefore, integrating human + AI systems will provide key benefits now and create data for further improvements. Reinforcement learning from human feedback (RLHF) has been central to accelerating the progress of these models and will be critical to how well and how quickly these systems adapt and impact business. Systems that produce data that can feed future AI systems will create an asset to accelerate the pace of creating ever more automated models and functions.
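The data-flywheel point above can be made concrete with a minimal sketch, under assumptions of my own: `draft_reply` stands in for a real LLM call, and the logged (draft, final) pairs are the asset that could later feed fine-tuning or RLHF-style preference training. None of these names come from a real system.

```python
import json
from datetime import datetime, timezone

# Hypothetical human + AI support loop: the model drafts a reply, a human
# agent edits it, and the (draft, final) pair is stored as training data.

def draft_reply(question: str) -> str:
    # Stand-in for a real LLM call.
    return f"Auto-draft: thanks for asking about '{question}'."

def log_feedback(question: str, draft: str, final: str, store: list) -> dict:
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "question": question,
        "draft": draft,
        "final": final,
        "edited": draft != final,  # the human edit is the feedback signal
    }
    store.append(record)
    return record

store = []
question = "refund status"
draft = draft_reply(question)
final = "Your refund was issued on May 2 and should arrive within 5 days."
record = log_feedback(question, draft, final, store)
print(json.dumps(record, indent=2))
```

The design choice worth noting is that the human stays in the loop today (quality control) while every interaction appends to a dataset that compounds in value, which is the asset the paragraph describes.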
This time, believe in a hybrid strategy
With cloud computing, I derided hybrid on-premises-and-cloud strategies as mere cloud washing: feeble attempts by traditional vendors to maintain their relevance in a rapidly changing landscape. The remarkable economies of scale and pace of innovation made it clear that any application attempting to straddle the two domains was doomed to obsolescence. The triumphs of Salesforce, Workday, AWS and Google, among others, firmly shattered the idea that a hybrid model would be the dominant industry paradigm.
As we enter the era of generative AI, the diversity of opinion among the deepest experts, coupled with the transformative potential of information, signals that it may be premature, if not perilous, to entrust all of our efforts to public providers, or to any one strategy.
With cloud applications, the change was simple: we moved the environment in which the technology operated. We did not give our cloud providers unrestricted access to the sales figures and financial metrics inside those applications. In contrast, with AI, the information becomes the product itself. Every AI solution craves data and needs it to scale and progress.
The struggle between public and private AI solutions will depend heavily on context and on the technical evolution of model architectures. Competitive and commercial pressures, combined with the importance of real and perceived progress, justify public consumption and partnerships, but in most cases the future of generative AI will be hybrid: a mix of public and private.
Validate AI limits — repeatedly
Generative AI that can write an essay, create a presentation, or create a website about your new product differs significantly from predictive AI technology driving autonomous vehicles or diagnosing cancer through x-rays. How you define and address the problem is a critical first step that requires an understanding of the breadth of capabilities offered by different AI approaches.
Consider this example. If your business is trying to leverage past production data to predict your ability to meet next quarter’s demand, you get structured data as inputs and a clear goal to assess the quality of the forecast. Conversely, you can assign an LLM to analyze company emails and produce a two-page memo on the likelihood of meeting demand this quarter. These approaches appear to serve a similar purpose, but are fundamentally distinct in nature.
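The contrast in that example can be shown side by side. This is a deliberately naive sketch with invented numbers: the predictive path fits a simple trend to structured quarterly data and produces a number whose error can later be scored against actuals; the generative path only assembles a prompt for an LLM whose two-page memo has no single ground-truth answer to score.

```python
# Predictive framing: structured inputs, a numeric output, a measurable error.

def forecast_next_quarter(units: list) -> float:
    """Naive linear-trend forecast: last value plus the average quarterly change."""
    deltas = [b - a for a, b in zip(units, units[1:])]
    return units[-1] + sum(deltas) / len(deltas)

history = [900.0, 950.0, 1010.0, 1060.0]     # units produced, last four quarters
prediction = forecast_next_quarter(history)  # can be scored against actuals later
print(round(prediction, 2))

# Generative framing: unstructured inputs, a narrative output, no single right answer.

def memo_prompt(emails: list) -> str:
    joined = "\n".join(emails)
    return ("Read the internal emails below and write a two-page memo on our "
            f"likelihood of meeting this quarter's demand:\n{joined}")

prompt = memo_prompt(["Line 2 is down for maintenance.", "Q3 orders are up 8%."])
```

The point of the sketch is the evaluation gap: the first function's output can be wrong by a measurable amount; the second produces input for a model whose memo must be judged, not scored.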
The personification of AI makes it more accessible, engaging, or even controversial. This can add value, facilitating tasks that reliable forecasts alone may not be able to tackle. For example, asking the AI to construct an argument for why a prediction may or may not come to pass can spur new perspectives on issues with minimal effort. However, it should not be applied or interpreted in the same way as predictive AI models.
It is also important to anticipate that these limits may change. The generative AI of the future could very well write the first — or final — versions of the predictive models that you will use for your production planning.
Require leadership to iterate and learn together
In crisis or rapidly changing situations, leadership is paramount. Experts will be needed, but hiring a management consulting firm to create a momentary AI impact study for your business is more likely to hamper your ability to navigate this change than to prepare you for it.
Because AI is evolving so rapidly, it is attracting far more attention than most new technologies. Even for companies in industries other than high-tech, C-suite executives regularly see AI demos and read about generative AI in the press. Be sure to regularly update your C-suite on new developments and potential impacts on core functions and business strategies so they connect the right dots. Use demos and prototypes to show practical relevance to your needs.
Meanwhile, CEOs should drive this level of engagement from their technology leaders, not just to extend learning across the organization, but to gauge the effectiveness of their leadership. This collective and iterative learning approach is a compass for navigating the dynamic and potentially disruptive landscape of AI.
For centuries, the quest for human flight remained entrenched as inventors focused on imitating the flapping wing designs of birds. The tide turned with the Wright Brothers, who reframed the issue, focusing on fixed-wing designs and the principles of lift and control rather than replicating bird flight. This paradigm shift propelled the first successful human flight.
In AI, similar reframing is vital for every industry and function. Companies that treat AI as a dynamic field ripe for exploration, discovery and adaptation will see their ambitions soar. Those who approach it with the strategies that worked for past platform shifts (cloud, mobile) will be left watching from the ground as their industries take flight.
Narinder Singh was co-founder of Appirio and is currently CEO of Look Deep Health.