
Investment Implications of Generative AI

June 2023

Key Takeaways
  • We believe AI demand will create growth opportunities for companies in the semiconductor value chain enabling greater computing power, as well as for hyperscale cloud providers and software companies through enhanced services and new product development.
  • The surge in generative AI usage is accelerating demand for GPUs, the building blocks of high-volume parallel processing, as well as for data center chipmakers, foundries and advanced equipment makers.
  • Mega cap companies that have captured the lion’s share of growth in public cloud are equally well-positioned in generative AI, as they own both the foundational language models and the raw compute power needed to apply generative AI at scale.
  • The potential to infuse AI into a broad range of existing applications and across every layer of the software stack should increase the industry’s total addressable market as software automates more manual tasks.
Large Language Models Signal Inflection Point in AI Development

The World Wide Web was released to the public four years after its creation and more than 20 years after the initial development of network communications. Artificial intelligence (AI) is experiencing a similar inflection point with the rollout of generative AI. While AI has been in commercial use for over a decade, continuous advances in natural language processing and computing power over the last four to five years have led to increasingly sophisticated capabilities. Whether in voice assistants like Siri and Alexa or in autonomous driving, AI has unlocked a new cycle of rapid innovation.

Looking past the enthusiasm and calls for caution spawned by ChatGPT and similar large language models (LLMs), we believe AI is entering a period of broad adoption and application that will enhance business efficiency and expand existing end markets. As with any emerging innovation, the AI development ball remains in constant motion, with new opportunities and competitive risks emerging on an ongoing basis.

From an investment standpoint, we believe AI demand will create growth opportunities in the short- to medium-term for the companies in the semiconductor value chain enabling greater computing power as well as hyperscale cloud providers and software companies through enhanced services and new product development. Generative AI could create new competitive risks in some areas of Internet usage and force increased spending among incumbents to catch up with peers. Early mover advantages could make a difference in some areas, while others could become commoditized through competition. How LLMs develop, for example, and whether open source becomes a competitive threat, could have significant long-term business implications for first-to-market hyperscalers.

Generative AI Driving Explosive Demand for GPUs

AI refers to the use of computing systems and related technologies, such as robots, to emulate and even surpass human capabilities. Computers gain these capabilities by training on enormous amounts of data, which requires substantial processing power. Generative AI refers to the ability of natural language processing models to generate textual and graphical responses to queries.

The optimal way for servers to analyze data is through a large number of cores (or processing units) embedded within a graphics processing unit (GPU), a specialized chip that can process a high volume of low-precision calculations efficiently and in parallel. The massive parallel computing requirements for training LLMs are spurring a tidal shift away from serial processors, also known as central processing units (CPUs), to GPUs (Exhibit 1). GPUs are the enabler of AI, and the surge in interest in and usage of generative AI is leading to accelerating demand for these building blocks. ChatGPT has driven an inflection in AI adoption, with various industries leveraging AI algorithms and machine learning to improve productivity and enhance revenue generation.
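For technically minded readers, the parallel advantage described above can be illustrated with a minimal sketch (Python with NumPy; all names and sizes are illustrative assumptions, not drawn from this report). The core operation in LLM training is the matrix multiply, which decomposes into many independent multiply-accumulate steps a GPU's thousands of cores can execute simultaneously, while a serial processor must step through them one at a time:

```python
import numpy as np

rng = np.random.default_rng(0)
# Low-precision (float32) inputs, echoing the low-precision math GPUs favor.
a = rng.standard_normal((32, 32)).astype(np.float32)
b = rng.standard_normal((32, 32)).astype(np.float32)

def serial_matmul(x, y):
    """CPU-style serial path: one multiply-accumulate at a time."""
    n, k = x.shape
    _, m = y.shape
    out = np.zeros((n, m), dtype=np.float32)
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += x[i, p] * y[p, j]
    return out

# The vectorized call expresses the same 32*32*32 = 32,768 independent
# multiply-accumulates as one batched operation that parallel hardware
# can fan out across many cores at once.
parallel_result = a @ b
serial_result = serial_matmul(a, b)

# Same answer either way; only the execution model differs.
assert np.allclose(parallel_result, serial_result, atol=1e-3)
```

The two paths compute identical results; the economic point is that the batched form scales with core count, which is why LLM training gravitates to GPUs.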

Exhibit 1: AI Servers Rely on GPUs

Source: J.P. Morgan estimates. 

Within data centers, which house a variety of server types for different computing needs, the growing penetration of AI is driving an acceleration in AI server shipments. AI adoption within the data center is expected to increase substantially, from mid-single-digit percentages today to about one-third of data center servers carrying AI-related semiconductor content over the medium term.

Exhibit 2: AI Server Shipment Growth Runway

Source: IDC, J.P. Morgan estimates.

Nvidia, the dominant provider of GPUs, with an estimated 95%-100% share of the AI training semiconductor market, raised its quarterly revenue guidance 64% in late May, well above Wall Street forecasts. According to Nvidia CEO Jensen Huang, “Impressive versatility and capability of generative AI has triggered a sense of urgency for enterprises around the world.”

Nvidia is expected to retain its market leadership as generative AI demand expands due to its full stack computing platform, the high performance of its GPUs and its cost of compute advantage over competing chips, as well as its head start in software such as industry-specific libraries and pre-trained models to facilitate enterprise adoption. Advanced Micro Devices is a distant second in the data center server market, while cloud providers are also developing chips in-house, with Google leading the effort. Several privately held companies offering enhanced computing technology could also vie for enterprise customers but currently lack a full ecosystem crucial to deploying effective AI infrastructure and addressing niche use cases.

Exhibit 3: AI Server Penetration Uplift from Generative AI

Source: Bank of America Merrill Lynch, J.P. Morgan, UBS, Visible Alpha.

Heightened demand trends also benefit semiconductor makers serving cloud hyperscalers with other products related to AI infrastructure deployment. These include custom chips and networking solutions, semiconductor foundries and semiconductor equipment makers that are critical to producing the leading-edge chips required for AI. To that point, Marvell Technology is well-positioned to benefit from these trends by co-developing 1) custom chips with cloud service providers that run AI workloads at a better total cost of ownership, as well as 2) optical connectivity chips that connect servers and switches in data centers with higher levels of bandwidth to support AI data processing. Furthermore, Taiwan Semiconductor stands to benefit as the primary chip manufacturer for AMD, Broadcom, Nvidia and Marvell, among others, while ASML maintains a monopoly on the lithography tools essential for producing the advanced chips that can handle more complex computing.

Cloud Adoption to Accelerate With AI Usage

Well before the recent rollout of ChatGPT and advanced LLMs, compute workloads were rapidly migrating to the cloud, making large hyperscalers the most important providers of sophisticated technology infrastructure to enterprise customers. Scale matters in public cloud, which has enabled a small group of companies, namely Microsoft, Google and Amazon, along with potentially Oracle, to capture the lion’s share of growth in the space. These companies are just as well-positioned in the generative AI era, as they own both the foundational language models and the raw compute power needed to apply generative AI at scale. We therefore see the infrastructure layer behind generative AI development shaping into an oligopoly over time.

Microsoft, through its partnership with OpenAI and enterprise relationships, has established a first-mover advantage through its early rollout of commercial LLMs. Microsoft’s AI services have already become a meaningful driver of incremental growth in its Azure cloud business. Oracle’s unique architectural approach to its cloud infrastructure (OCI) should also accrue real benefits. Google, through its Google Cloud Platform (GCP), is not far behind, having invented the “transformer” architecture that powers state-of-the-art LLMs and being recognized as a joint leader alongside Microsoft for AI and analytics workloads within cloud. Amazon, meanwhile, is taking a full-stack approach to help enterprise customers tap into the power of generative AI. The company’s AWS Trainium and Inferentia chips are optimized to serve generative AI workloads, while its Bedrock service is a marketplace for foundational models from leading AI startups that can be fine-tuned on a customer’s proprietary data.

Exhibit 4: Cloud Hyperscalers Poised to Maintain Leadership in AI

Source: Morgan Stanley Research.

As the pace of cloud adoption normalizes from its pandemic-era surge, we see generative AI catalyzing the next leg of its growth. Public cloud provides both the speed and flexibility needed to apply AI to business problems. Early adopters can build AI-driven applications in a matter of weeks using Google’s PaLM API and infrastructure-as-a-service (IaaS) layer, rather than months or years if they build from scratch using on-premise infrastructure. Customizing LLMs involves vast amounts of data that is often housed in the cloud, expanding the pie for hyperscale cloud providers and the ecosystem behind them, including startups and services firms.

Hyperscalers, however, could be challenged by increasing competition from open-source LLMs. Some within the cloud industry believe open source could eventually make LLMs a commodity, with many companies able to provide fairly undifferentiated LLMs at a low cost. But users of open-source models must consider “who owns the data” that drives the models. While it is still early days in LLM development, we believe concerns over security and usage of proprietary data present a significant risk for open-source vendors/technologies, which should favor public clouds with existing safeguards in place. While some customers will likely experiment with open-source LLMs, many larger enterprises are unlikely to incur the risks associated with this model.

Beyond cloud services, AI has the potential to reshape trillion-dollar industries such as online advertising. From a web search perspective, chatbots like ChatGPT can drastically compress the time it takes to answer complex questions versus a traditional search engine (e.g., “What is the best canyon in Colorado to hike with a dog?”). This could have a negative impact on search monetization for incumbents like Google, at least in the near term, much like the company’s desktop-to-mobile transition in the early 2010s. The incremental investment to implement generative AI at scale could also lead to higher capital expenditure, pressuring margins and cash flows.

Once we get past the growing pains, AI tools are expected to provide tailwinds to both platforms and advertisers by allowing better targeting of ads. Generative AI can be used to dynamically generate advertising content tailored to individual users of search and YouTube. Online ad platforms like those operated by Meta Platforms, which have had to rethink ad targeting due to Apple’s Identifier for Advertisers (IDFA) privacy changes, should regain some of those capabilities with generative AI. For instance, Meta’s Instagram could use these tools to generate video ads from a brand’s static images, driving up conversion rates. Chatbots built into WhatsApp can help small businesses connect with more of their customers in real time, enabling Meta’s sub-$10 billion click-to-messaging advertising business to grow into a major revenue contributor. We are closely watching shifts in consumer Internet usage to understand how these headwinds and tailwinds might play out for Internet firms of all sizes as they incorporate generative AI.

Another key area to watch regarding LLMs is the application layer, which will entail development of vertical and company-specific software. While the largest models are good at providing generalized knowledge gleaned from massive data sets, models trained on domain-specific data will have an advantage over larger, less targeted models for most enterprise applications. This will require access to proprietary first-party data as well as real-world usage by millions of end users to refine the quality of an LLM through human feedback. A good example is a conversational search engine powered by generative AI, whose users implicitly help improve the model over time through their clicks, engagement levels and follow-up questions. As LLMs themselves get commoditized over time, we believe companies that leapfrog their peers in leveraging generative AI will also possess superior design and user experience skills. This is one of the key areas to consider when evaluating AI’s impact on software and services providers.

Generative AI to Drive Next Software Innovation Wave

Microsoft, Adobe, Salesforce, HubSpot, Workday and Atlassian, for example, are already marketing AI-enhanced versions of their software, offering a preview of the requirements for successful software integration of AI: having good data, domain expertise and the ability to apply LLMs to solve specific customer problems. The potential to infuse AI into a broad range of existing applications and across every layer of the software stack should increase the industry’s total addressable market as software automates more manual tasks. Code development as well as data management and analytics in particular appear well-suited to see significant improvements from AI integration. Software vendors servicing areas with high barriers to entry should also command pricing power to enable greater customer productivity.

Exhibit 5: AI Share of IT, Software Spend to Become Meaningful

*Total IT spending excludes devices.
Source: ClearBridge Investments. 2026 projections based on October 2022 IT and software spending estimates from Gartner. 

Software-as-a-service (SaaS) vendors have quickly embraced AI to remain competitive, unleashing a rapid innovation cycle in generative AI applications. While seeing fewer users (or “seats”) per enterprise customer remains a risk in some cases, we see that as more than offset by higher pricing on AI-enabled offerings over time. Moreover, SaaS companies with large amounts of customer data and significant regulatory barriers to entry, such as in human resources and financial applications, are best positioned to maintain their competitive advantage as AI automates more functions. We believe the risk of software disintermediation, on the other hand, will be highest in categories that are driven by manual processes, focused on consumers and content, and characterized by low barriers to entry and low customer retention rates.

Services companies will play an important role in guiding customers through initial integration of AI, an exercise that could last three to five years. What is unknown at this point is how much AI automation will take over going forward, potentially lessening the need for ongoing services and IT consulting support.

What’s Next

Taking into account the early adoption of generative AI across enterprise IT and consumer markets, the integration of AI into the global economy is still in the very early innings. From a business model and investment standpoint, we believe some key areas to watch as generative AI gains wider usage include the implementation cost curve, consumer Internet behavior with AI-enabled search, and actions by regulators and publishers to control and likely limit the proprietary data available to train LLMs. In addition to vertical company- and industry-specific impacts, generative AI will have broader impacts as use cases expand across more segments of the economy. We plan to take a closer look at the macroeconomic impacts of generative AI and how it could affect long-term productivity and inflation expectations in a follow-up report.

  • Past performance is no guarantee of future results. Copyright © 2023 ClearBridge Investments. All opinions and data included in this commentary are as of the publication date and are subject to change. The opinions and views expressed herein are of the author and may differ from other portfolio managers or the firm as a whole, and are not intended to be a forecast of future events, a guarantee of future results or investment advice. This information should not be used as the sole basis to make any investment decision. The statistics have been obtained from sources believed to be reliable, but the accuracy and completeness of this information cannot be guaranteed. Neither ClearBridge Investments, LLC  nor its information providers are responsible for any damages or losses arising from any use of this information.
