Current data infrastructure can already handle the influx of cloud computing, 5G networks, and video streaming, but it may not be enough to support the latest digital transformation: the full-scale application of artificial intelligence.
Instead, digital infrastructure for AI may require an entirely separate cloud computing framework, one that redefines the existing data center network around where specific data center clusters are located and what capabilities each provides.
The much-discussed AI chatbot ChatGPT has more than 1 million users and has received a US$10 billion investment from Microsoft. Amazon Web Services partnered with Stability AI in November, and Google has built a ChatGPT-like system called LaMDA. Meanwhile, Meta recently announced a pause in its data center construction so it can reconfigure its server farms to meet the data processing requirements of AI.
The data processing needs of AI platforms have grown to such an extent that OpenAI, the creator of ChatGPT, would not be able to keep its platform running without Microsoft's upcoming upgrade of the Azure cloud platform.
The “brain” of an AI platform like ChatGPT operates through two different “hemispheres” or “lobes”. The first, the training lobe, ingests and organizes all the data needed to satisfy users’ content requests; the second, the inference lobe, draws on that trained model so the platform can answer users’ questions in a “human” way as soon as they are asked.
The training lobe requires enormous computing power to process all the data points needed to generate the content ChatGPT creates. Essentially, the training lobe extracts data points and reorganizes them within the model. This process happens iteratively: on each pass, the AI understands the material a little better, teaching itself how to absorb information and communicate what it learns the way a human would.
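The iterative refinement described above can be sketched with a toy example. This is an illustration of the training-loop idea only, not OpenAI's actual code: a single parameter `w` is nudged a little on every pass over the data until the model's answers stop being wrong. Real LLM training applies the same loop to billions of parameters across GPU clusters, which is where the enormous compute demand comes from.

```python
# Toy illustration of the "training lobe": revisit the data points
# repeatedly, reorganizing the model's parameter a little each time.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

def train(epochs: int = 200, lr: float = 0.01) -> float:
    w = 0.0  # the model's single parameter, refined iteratively
    for _ in range(epochs):
        for x, y in data:
            error = w * x - y   # how wrong the model currently is
            w -= lr * error * x # nudge w to reduce that error
    return w

if __name__ == "__main__":
    print(round(train(), 3))  # converges toward 2.0, since y = 2x
```

Each epoch makes the model slightly less wrong, which mirrors the article's point: the cost is not one computation but the same computation repeated until the model "understands".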
Interesting as this process is, the training lobe demands not only raw computing power but also state-of-the-art graphics processing unit (GPU) semiconductors to function at full capacity. In addition, any infrastructure focused on “training” an AI platform consumes large amounts of electricity, so these data centers must be located near renewable energy sources. New liquid cooling systems and redesigned backup power and generator systems must also be installed.
The other half of the AI platform’s brain, the inference lobe, is responsible for answering questions within seconds of users asking them, and it has its own set of needs. The good news is that today’s interconnected data center networks can accommodate this demand, but facilities must be upgraded to deliver the massive processing power required, and they must also be located near substations.
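The split between the two lobes can be made concrete with a minimal sketch (hypothetical names, not a real serving stack): training happens slowly and offline, while inference loads the finished model once and must answer each request in well under a second, which is why inference facilities are sized for sustained processing power rather than training-style burst compute.

```python
# Illustrative "inference lobe": no learning happens here, only fast
# lookups/forward passes over parameters that were trained offline.
import time

# Stand-in for trained model weights produced by the training lobe.
TRAINED_MODEL = {"hello": "Hi there!", "bye": "Goodbye!"}

def infer(prompt: str) -> str:
    # A forward pass over already-trained parameters; it must return
    # in milliseconds, every time, for every user.
    return TRAINED_MODEL.get(prompt, "I don't know that yet.")

if __name__ == "__main__":
    start = time.perf_counter()
    answer = infer("hello")
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(answer, f"({elapsed_ms:.3f} ms)")
```

The design point is the separation itself: the expensive iterative loop runs once in a training facility, and the latency-critical lookup runs continuously in upgraded, substation-adjacent data centers.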
The largest cloud computing providers now offer data processing capacity to AI startups that need it, in part because they see those startups as potential long-term customers.
A proxy war is also under way among the large cloud computing companies: they are effectively the only ones capable of building truly large-scale AI platforms with their countless parameters.