Schneider Electric plans to spend $700 million through 2027 to expand its US operations and bolster the supply of its power equipment necessary to sustain the proliferation of AI datacenters.
The investment, Schneider's largest single capital splurge in America, comes as bit-barn builders grapple with shortages of key power and thermal management equipment.
As we reported last week, lead times for datacenter physical infrastructure now average 28 weeks for many electrical and thermal systems, with chillers, transformers, switchgear, and generators — all things, we note, Schneider Electric manufactures — taking considerably longer.
To address these needs, over the next two years, Schneider hopes to open new facilities or modernize existing ones across at least eight sites in six states – Tennessee, Massachusetts, Texas, North Carolina, Missouri, and Ohio – and hire 1,000 workers in manufacturing, engineering, development, and technical analysis roles.
These efforts will include ramping up the production of power switching and distribution equipment, circuit breakers, and other "medium-voltage" systems, alongside establishing test and research facilities specific to robotics and AI datacenters.
In addition to capitalizing on AI infrastructure demand, the French multinational no doubt sees bolstering US manufacturing as an opportunity to sidestep the Trump administration's obsession with tariffs.
Schneider's US expansion comes a week after Nvidia CEO Jensen Huang set the tone for the next generation of "AI factory" datacenters purpose-built to train and run machine-learning workloads. Today, Nvidia's rack-scale systems, like its Blackwell Ultra GB300 NVL72 announced at GTC, top out at around 120kW per rack. However, by the end of 2027, Nvidia wants to cram upwards of 600kW of compute into a single densely packed rack containing 576 GPUs.
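For the napkin-math inclined, here's a rough sketch of what those figures imply. Note the assumption: we divide the whole rack budget by the GPU count, which overstates the per-GPU draw since networking, CPUs, and power conversion also eat into it.

```python
# Back-of-the-envelope math on Nvidia's stated rack targets.
rack_power_watts = 600_000  # Nvidia's end-of-2027 target per rack
gpus_per_rack = 576

# Assumes the entire rack budget goes to GPUs -- an upper bound,
# since switches, CPUs, and power conversion draw from it too.
watts_per_gpu = rack_power_watts / gpus_per_rack
print(f"~{watts_per_gpu:.0f} W per GPU")  # ~1042 W

# How much denser than today's ~120 kW racks?
densification = 600 / 120
print(f"{densification:.0f}x today's rack density")  # 5x
```

In other words, each GPU slot would need to dissipate on the order of a kilowatt, five times today's rack density, which is why power and cooling gear is the bottleneck.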
To achieve this goal, Huang emphasized, the rest of the industry will need to catch up.
Schneider Electric, for its part, is already working with Nvidia to develop datacenter reference designs optimized for AI workloads. At GTC last week, the two companies revealed their work on a digital twin that simulates the operations of an AI datacenter. The idea is that these simulations will help operators predict energy needs and adjust their designs accordingly — something that will no doubt come in handy as hyperscalers and cloud providers begin deploying Blackwell accelerators in volume later this year.
Speaking of which, Apple may be cozying up to Nvidia as well. A research note from Loop Capital claims the iGiant is in the process of placing an order for roughly $1 billion of Nvidia's new GB300 NVL72 systems, which are said to be selling for between $3.7 million and $4 million apiece.
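If Loop Capital's figures hold up, the implied order size is easy to ballpark (a rough sketch; both the order value and per-unit prices come from the report above):

```python
# Implied unit count from Loop Capital's reported figures.
order_value = 1_000_000_000                    # ~$1B reported order
price_low, price_high = 3_700_000, 4_000_000   # per GB300 NVL72 system

units_max = order_value / price_low    # cheaper systems -> more of them
units_min = order_value / price_high
print(f"roughly {units_min:.0f} to {units_max:.0f} NVL72 racks")  # ~250-270
```

That works out to somewhere in the region of 250 to 270 rack-scale systems, around 18,000 to 19,500 GPUs at 72 per rack.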
The Register reached out to Apple and Loop Capital for comment; we'll let you know if we hear anything back.
Similar to Schneider, Apple earlier pledged a hefty $500 billion toward US operations – including manufacturing – over the next four years. These investments include a new facility in Texas that will produce AI servers to power its Private Cloud Compute platform and the so-far-fabled Apple Intelligence services. From what we understand, these systems will use the iGiant's custom silicon, but it's possible Apple is interested in Nvidia's kit for something like model training as well. ®