Cerebras introduces one of the largest supercomputers

On November 14, 2022, Cerebras Systems introduced its artificial-intelligence supercomputer, Andromeda. The machine is now available for academic and commercial research. Andrew Feldman founded Cerebras Systems in 2016 together with Gary Lauterbach, Jean-Philippe Fricker, Michael James and Sean Lie.
Cerebras Systems is known for its dinner-plate-sized chip designed for artificial-intelligence workloads. Andromeda was built by connecting 16 Cerebras CS-2 systems – the startup's newest AI computers, each built around a single enormous chip called the Wafer-Scale Engine 2.
Andromeda can perform one quintillion (10^18) operations per second – one exaflop of AI compute – using the 16-bit floating-point format.
Earlier this year, the fastest US supercomputer, «Frontier» at Oak Ridge National Laboratory, which is capable of simulating nuclear weapons, exceeded one exaflop of performance using the 64-bit double-precision format. Asked about Frontier, Cerebras founder and CEO Andrew Feldman said: «It's a big machine. We don't beat them. But it cost 600 million dollars to build. This one is less than 35 million dollars».
Feldman noted that while sophisticated weather and nuclear simulations run on 64-bit double-precision machines, Andromeda's 16-bit format is less precise but far less demanding of computing resources. He added that researchers are also studying whether AI algorithms could eventually match the results of those precise simulations.
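To make the precision trade-off concrete, here is a minimal sketch in plain NumPy (generic illustration, not Cerebras software): accumulating a small value works fine in 64-bit double precision, while in 16-bit half precision the sum stalls once each increment falls below half the gap between representable numbers.

```python
import numpy as np

step = 0.001
total64 = np.float64(0.0)
total16 = np.float16(0.0)

for _ in range(10_000):
    total64 += np.float64(step)
    total16 += np.float16(step)  # each add rounds to the nearest float16

print("float64 sum:", total64)         # ~10.0, as expected
print("float16 sum:", float(total16))  # stalls around 4.0: increments smaller
                                       # than half the float16 spacing vanish
```

Scientific simulations cannot tolerate this kind of drift, which is why they demand 64-bit hardware; neural-network training is far more forgiving of it, which is what makes the cheaper 16-bit format viable for AI machines like Andromeda.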
The Cerebras-owned supercomputer is housed in a high-performance data center called «Colovore» in Santa Clara, California. «Companies and researchers, including those from US national laboratories, can access it remotely», Feldman said.

Seeking to support the largest models, Cerebras last year unveiled what it called the world's first brain-scale cluster architecture, capable of handling neural networks with 120 trillion parameters – roughly the number of synapses in the human brain.
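For a sense of what 120 trillion parameters implies, a back-of-envelope calculation (my own arithmetic, not a Cerebras figure; the 80 GB accelerator is a typical high-end GPU size used purely for comparison):

```python
params = 120e12          # 120 trillion parameters
bytes_per_param = 2      # 16-bit (fp16) weights

weight_bytes = params * bytes_per_param
print(f"{weight_bytes / 1e12:.0f} TB of weights")  # 240 TB

gpu_gb = 80  # a common high-end accelerator, for scale only
print(f"~{weight_bytes / (gpu_gb * 1e9):,.0f} such GPUs just to hold the weights")
```

Storing the weights alone would fill thousands of conventional accelerators, before counting activations or optimizer state – which is why memory and interconnect, not just raw compute, define architectures at this scale.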
Its investors include Sequoia Capital, SV Angel, Foundation Capital, Benchmark, Coatue, Eclipse Ventures, Altimeter Capital, Vy Capital, Empede Capital and the Abu Dhabi Growth Fund. To date, the startup has raised a total of $720 million across six funding rounds.
Large language models such as OpenAI's GPT-3, Microsoft's Turing NLG, and NVIDIA's Megatron have grown exponentially in recent years. Running these models at scale requires megawatts of power, clusters of GPUs, and dedicated teams to operate them. Wafer-scale AI chips play a crucial role here by concentrating massive compute, large amounts of memory, and high-bandwidth connectivity in a single system.
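A rough sense of why exaflop-class machines matter for these models (an idealized estimate using the ~3.14×10^23 FLOPs of training compute reported in the GPT-3 paper, Brown et al. 2020; real-world utilization would be considerably lower):

```python
train_flops = 3.14e23   # GPT-3's reported training compute (Brown et al., 2020)
exaflop_per_s = 1e18    # idealized, perfectly sustained 1 exaflop

seconds = train_flops / exaflop_per_s
print(f"{seconds / 86_400:.1f} days at a sustained exaflop")  # ~3.6 days
```

Even under these best-case assumptions, a GPT-3-scale training run takes days on an exaflop machine, illustrating why each new generation of large models drives demand for purpose-built AI supercomputers.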