How ‘Google fear and threat’ just made Nvidia spend $20 billion
Nvidia has acquired the assets of AI chip startup Groq in a $20 billion deal. The deal represents a 3x premium over Groq’s valuation just months ago and is being viewed as a defensive play to protect Nvidia’s dominance against a rising trend of tech companies preferring custom silicon, specifically Google TPUs. The move is said to be driven by two key factors: the need for cost efficiency and the demand for higher processing speeds. Nvidia is currently struggling to meet the overwhelming global demand for its GPUs, and tech companies are looking for specialised alternatives as the AI race hots up.

Nvidia lost billions after Meta reportedly struck deal with Google for TPUs

Earlier this year, Nvidia issued a statement defending its market position after a report claimed that one of its biggest customers, Facebook parent company Meta, was in advanced talks to spend billions on Google’s competing AI chips. The report hit Nvidia’s stock, erasing roughly $250 billion in market value.
Difference between Nvidia GPUs and Google TPUs
Before we jump into the Nvidia vs Google debate, it is important to understand the difference between the products offered by the two companies. Nvidia GPUs are ‘general purpose’, which means they can be used for AI, gaming, crypto mining and scientific simulations. Google TPUs, however, are specialised chips designed to speed up the “tensor” math that powers machine learning. Apart from flexibility, other differences include speed and efficiency. While Nvidia’s GPUs offer excellent speed, they carry overhead for general tasks and are costly. Google’s TPUs are ultra-fast for specific AI training/inference workloads (running an AI model and getting an answer with less lag), and can be mixed and matched with Nvidia GPUs to increase efficiency and lower cost.
Why Nvidia struck deal with Groq
Groq offers a technology similar to Google’s TPUs. According to the company’s website:

“Groq builds fast AI inference. Groq LPU AI inference technology delivers exceptional AI compute speed, quality, and affordability at scale. Groq AI inference infrastructure, specifically GroqCloud, is powered by the Language Processing Unit (LPU), a new category of processor. Groq created and built the LPU from the ground up to meet the unique needs of AI. LPUs run Large Language Models (LLMs) and other leading models at substantially faster speeds and, on an architectural level, up to 10x more efficiently from an energy perspective compared to GPUs.”

By acquiring Groq’s fast and efficient LPU technology to complement its powerful GPUs, Nvidia is trying to neutralise the risk of a rival offering a low-cost alternative that could have ‘shrunk’ the company’s dominance to the “training-only” market. The strategic deal will allow Nvidia to capture the entire customer lifecycle through a tiered offering – directing premium clients toward high-end GPUs for heavy-duty model training and steering price-sensitive users toward LPUs for high-speed inference.
