Artificial intelligence is advancing rapidly, and the latest MLPerf results show how far the computer systems used to train machine learning neural networks have come, setting new benchmarks in the process. Earlier this year, MLPerf added a training benchmark based on the large language model (LLM) GPT-3; in its most recent update, it introduced Stable Diffusion, a text-to-image generator. Google, Nvidia, and Intel all submitted notable results.
Nvidia Sets the Benchmark With Eos
Nvidia has introduced Eos, an AI supercomputer featuring 10,752 GPUs. It completed the GPT-3 training benchmark in just under four minutes, and collectively its GPUs can perform 42.6 billion billion floating-point operations per second, or 42.6 exaflops.
These GPUs are interconnected with Nvidia’s Quantum-2 InfiniBand, which moves data at 1.1 million billion bytes (1.1 petabytes) per second. With the computing demands of AI training growing roughly tenfold annually, this kind of efficient scaling is exactly what generative AI requires, and companies are racing to keep up.
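As a back-of-envelope check, the aggregate figures above can be divided across the GPU count to get per-accelerator numbers. This is a rough sketch that only restates the numbers quoted in the text; the variable names are our own:

```python
# Back-of-envelope arithmetic for the Eos figures quoted above.
NUM_GPUS = 10_752
TOTAL_FLOPS = 42.6e18     # 42.6 billion billion FLOP/s = 42.6 exaflops
TOTAL_BANDWIDTH = 1.1e15  # 1.1 million billion bytes/s = 1.1 PB/s

flops_per_gpu = TOTAL_FLOPS / NUM_GPUS          # ~3.96e15 FLOP/s per GPU
bandwidth_per_gpu = TOTAL_BANDWIDTH / NUM_GPUS  # ~1.02e11 bytes/s per GPU

print(f"Per-GPU compute:   {flops_per_gpu / 1e15:.2f} PFLOP/s")
print(f"Per-GPU bandwidth: {bandwidth_per_gpu / 1e9:.1f} GB/s")
```

The result, roughly 4 petaflops of compute and about 100 GB/s of interconnect bandwidth per accelerator, gives a sense of how much the cluster relies on each individual chip pulling its weight.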
Intel’s Gaudi 2 Competes With Nvidia
Keeping pace in the race, Intel has introduced the Gaudi 2 accelerator chip, which adds 8-bit floating-point (FP8) capabilities. Intel applied these lower-precision FP8 numbers to parts of GPT-3 and other neural networks. Enabling FP8 delivered a reported 103 percent speedup, roughly halving time-to-train for a 384-accelerator cluster.
This accomplishment puts Gaudi 2 in a competitive position: approximately one-third the speed of the Nvidia system on a per-chip basis, and about three times faster than Google’s TPUv5e. While FP8 was enabled only for the GPT-3 benchmark, Intel is actively working to extend its usage to other benchmarks.
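To make the speedup figure concrete, a performance gain of p percent cuts time-to-train by p/(100 + p). The helper below is our own illustration; the 103 percent figure is the only number taken from the text:

```python
def time_saved_fraction(speedup_pct: float) -> float:
    """Fraction of training time saved by a given percentage speedup.

    A speedup of p% means throughput is (1 + p/100)x the baseline,
    so the new time-to-train is 1 / (1 + p/100) of the old one.
    """
    factor = 1 + speedup_pct / 100
    return 1 - 1 / factor

# Intel's reported 103% gain from enabling FP8:
saved = time_saved_fraction(103)
print(f"Time-to-train reduced by {saved:.1%}")
```

The answer comes out just over 50 percent, which is why a 103 percent speedup is best read as "a bit more than doubling throughput" rather than as eliminating the training time entirely.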
In sum, Nvidia’s Eos and Intel’s Gaudi 2 represent significant advances in AI benchmarking. These results not only show the rapid evolution of AI but also underscore the importance of efficient scaling and numerical precision in AI training. It is now clear that the competition among these technology giants will keep yielding the innovations of the future.