TPU vs GPU in AI: A Comprehensive Guide to Their Roles and Impact on Artificial Intelligence

04 Mar, 2024

An AI-generated depiction of a modern-day chipset

Gaining a comprehensive understanding of the distinct capabilities and applications of TPUs and GPUs is crucial for developers and researchers aiming to navigate the complex and rapidly changing terrain of artificial intelligence.

The advancement of Artificial Intelligence (AI) has significantly increased computational demands, necessitating powerful hardware solutions. Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs) stand out as pivotal technologies in this space. GPUs, initially designed for graphics rendering, have evolved into versatile processors capable of handling AI tasks thanks to their parallel processing strengths. TPUs, on the other hand, were developed by Google specifically for AI computations, offering optimized performance for machine learning workloads such as neural network training and inference.

This article aims to compare these technologies to understand their contributions to AI progress. We will delve into the origins, architecture, and key features of GPUs and TPUs. A technical comparison will cover aspects such as performance, scalability, energy efficiency, and cost. The discussion will extend to application areas, future trends, and the challenges faced by these technologies, aiming to provide a clear understanding of their roles in AI development and guide users in selecting the right hardware for their AI projects.

Introduction

The intersection of hardware and AI has become a focal point in advancing machine learning and deep learning technologies. Two pivotal types of hardware in this domain are GPUs and TPUs. Each plays a critical role in the computational tasks required for AI, but they do so in different ways, with distinct advantages and limitations.

GPUs, originally designed for handling the demands of graphics and video processing, have been repurposed for AI due to their ability to perform parallel operations on large data blocks. This makes them particularly well-suited for the matrix and vector computations common in machine learning algorithms. Their widespread adoption has been driven by the availability of powerful, general-purpose GPUs and the development of software frameworks that can leverage their capabilities for AI tasks.

Further reading: Artificial Intelligence: A Comprehensive Guide to its Engineering Principles and Applications

On the other hand, TPUs are application-specific integrated circuits (ASICs) developed specifically for accelerating machine learning workloads. Introduced by Google, TPUs are tailored for the operations that underpin neural network computations, offering high throughput and efficiency for both the training and inference phases of deep learning models. The design of TPUs reflects a focused approach to AI hardware, with an architecture that streamlines the execution of tensor operations, which are fundamental to neural network algorithms.

The choice between using a GPU or a TPU can significantly impact the efficiency, cost, and scalability of AI projects. As such, understanding their roles, capabilities, and the scenarios in which they excel is crucial for anyone involved in the development and deployment of AI systems.

Understanding GPUs in the Context of AI

GPUs have emerged as the workhorses of artificial intelligence, especially for tasks demanding massive parallel processing capabilities. Originally designed to accelerate the rendering of 3D graphics and images, GPUs have architectures that consist of hundreds or thousands of cores capable of handling multiple operations simultaneously. This parallelism is leveraged in AI for the rapid processing of large datasets and the execution of complex mathematical models, especially those involving matrices and vectors, which are prevalent in machine learning algorithms.

Fig. 1: NVIDIA GeForce RTX GPU installed in a personal computer

Advanced Arithmetic Logic Units (ALUs) are key components in GPUs, providing the fundamental arithmetic and logical operations required for executing the complex algorithms that underpin modern computing tasks. The computational might of GPUs is measured in floating-point operations per second (FLOPS), a critical metric in AI that indicates the number of calculations a GPU can perform every second. High-end GPUs achieve teraflops (trillions of FLOPS) of performance, making them exceptionally well suited for training deep neural networks, which involves processing vast amounts of data and performing a high volume of calculations. For instance, NVIDIA's V100, a 2017 data center GPU based on the Volta architecture, delivers up to 125 teraflops of deep learning performance. More recent high-performance GPUs such as the NVIDIA H200, announced in 2023, can reach up to 1,979 teraflops for 16-bit floating point (FP16) calculations with sparsity.[1] These GPUs are available through cloud services, prebuilt systems, and partner programs.
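
To make the FLOPS figures above concrete, the minimal sketch below times a large half-precision matrix multiplication and derives the achieved throughput. It assumes PyTorch with a CUDA-capable GPU; the matrix size and iteration count are arbitrary choices, and the measured number will fall short of a GPU's peak rated FLOPS.

```python
# Minimal sketch: estimate a GPU's achieved FP16 matmul throughput in TFLOPS.
# Assumes PyTorch and a CUDA-capable GPU; n and iters are arbitrary choices.
import torch

n = 8192
a = torch.randn(n, n, device="cuda", dtype=torch.float16)
b = torch.randn(n, n, device="cuda", dtype=torch.float16)

# Warm up so one-time kernel selection costs are excluded from the timing.
for _ in range(3):
    torch.matmul(a, b)
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
iters = 10
start.record()
for _ in range(iters):
    torch.matmul(a, b)
end.record()
torch.cuda.synchronize()

seconds = start.elapsed_time(end) / 1000 / iters  # elapsed_time returns milliseconds
flops = 2 * n**3                                  # one multiply + one add per output term
print(f"~{flops / seconds / 1e12:.1f} TFLOPS achieved")
```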

The architecture of a typical GPU is composed of several key components, including global memory, compute units, and a high-bandwidth memory interface. The global memory stores the data processed by the compute units, which are organized into groups called Streaming Multiprocessors (SMs). Each SM contains scalar processors (CUDA cores in NVIDIA terminology), a scheduler, and shared memory, enabling efficient data sharing and synchronization among threads. The memory interface ensures rapid data transfer between the GPU and system memory, which is crucial for maintaining high throughput in AI applications.
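
These components can be inspected from software. As a minimal sketch, assuming PyTorch with CUDA support, the snippet below queries the name, SM count, and global memory size of the first installed GPU.

```python
# Minimal sketch: inspect the architectural components described above.
# Assumes PyTorch built with CUDA support and at least one GPU present.
import torch

props = torch.cuda.get_device_properties(0)
print(props.name)                        # GPU model string
print(props.multi_processor_count)       # number of Streaming Multiprocessors (SMs)
print(props.total_memory / 1e9, "GB")    # global memory visible to the compute units
```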

GPUs also benefit from a robust ecosystem of software tools and libraries that facilitate their use in AI. Frameworks such as TensorFlow, PyTorch, and CUDA have abstracted much of the complexity involved in programming GPUs, allowing AI researchers and developers to focus on building and training models rather than managing hardware intricacies. This software support, combined with the raw computational power of GPUs, has solidified their position as a preferred choice for AI acceleration in both research and industry settings.
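
As an illustration of that abstraction, the sketch below (assuming PyTorch) writes device-agnostic model code that runs on a GPU when one is present and falls back to the CPU otherwise; the layer sizes are arbitrary.

```python
# Minimal sketch: frameworks hide hardware details behind a device abstraction.
# The same model code runs on GPU or CPU; assumes PyTorch. Sizes are arbitrary.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Linear(1024, 10).to(device)    # move the layer's parameters to the device
x = torch.randn(32, 1024, device=device)  # allocate inputs on the same device
logits = model(x)                         # executes on the GPU if one is present
print(logits.device)
```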

Further reading: Understanding Nvidia CUDA Cores: A Comprehensive Guide

Understanding TPUs in the Context of AI

TPUs are a type of application-specific integrated circuit (ASIC) developed by Google specifically for accelerating machine learning workloads. Unlike GPUs, which were originally designed for graphics processing and later adapted for AI, TPUs were conceived from the ground up with the demands of machine learning in mind. This purpose-built approach has resulted in a hardware solution that excels in executing tensor operations, the fundamental building blocks of many AI algorithms.

Fig. 2: Google TPU v4

TPUs accelerate both phases of the machine learning workflow: training, in which large datasets are used to teach a model to make predictions, and inference, in which the trained model makes predictions on new data.
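
The distinction is visible in a few lines of framework code. The following sketch, assuming TensorFlow/Keras and using random placeholder data, runs a training step with fit and an inference step with predict.

```python
# Minimal sketch of the training/inference distinction, assuming TensorFlow/Keras.
# The model and data are placeholders; shapes and sizes are arbitrary.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="sgd", loss="mse")

x_train = np.random.rand(100, 4).astype("float32")  # stand-in "large dataset"
y_train = np.random.rand(100, 1).astype("float32")

model.fit(x_train, y_train, epochs=2, verbose=0)    # training: fit weights to labeled data
preds = model.predict(np.random.rand(5, 4).astype("float32"))  # inference: new data
print(preds.shape)
```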

The architecture of a TPU is distinct from that of a GPU. At the heart of a TPU is the Matrix Multiplier Unit (MXU), which performs the bulk of the computations required for machine learning. The MXU can perform a large number of multiply-accumulate operations (a multiplication followed by an addition into a running sum) in parallel, which is key to the high throughput of TPUs. In addition to the MXU, a TPU also includes a Unified Buffer (UB) that serves as high-speed on-chip memory, and an Activation Unit (AU) that applies non-linear functions to the data.
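
The sketch below illustrates, in plain Python, the multiply-accumulate (MAC) operations that the MXU parallelizes in hardware. The matrix sizes are arbitrary, and a real MXU executes large tiles of these MACs every clock cycle rather than one at a time.

```python
# Illustrative sketch: the multiply-accumulate (MAC) operations an MXU parallelizes.
# A naive matmul of an (M, K) by (K, N) matrix performs M * N * K MACs in total.
import numpy as np

M, K, N = 4, 3, 2                 # arbitrary small sizes for illustration
A = np.random.rand(M, K)
B = np.random.rand(K, N)
C = np.zeros((M, N))

for i in range(M):
    for j in range(N):
        for k in range(K):
            C[i, j] += A[i, k] * B[k, j]   # one multiply-accumulate

print("MACs performed:", M * N * K)
assert np.allclose(C, A @ B)               # matches the library matmul
```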

TPUs are integrated into Google's cloud computing platform, Google Cloud, providing AI researchers and developers with easy access to their capabilities. This integration allows for seamless scaling of machine learning workloads, as additional TPUs can be added to handle larger datasets or more complex models. Furthermore, Google provides a software stack that includes TensorFlow, an open-source machine learning framework optimized to take full advantage of TPU hardware.
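
A minimal sketch of connecting to a Cloud TPU from TensorFlow is shown below; it assumes a runtime with an attached TPU, such as a Cloud TPU VM.

```python
# Minimal sketch: connecting TensorFlow to a Cloud TPU. Assumes a runtime with
# an attached TPU (on older hosted runtimes the resolver may need an explicit
# tpu="grpc://..." address).
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver()  # locate the TPU
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

strategy = tf.distribute.TPUStrategy(resolver)
print("TPU cores available:", strategy.num_replicas_in_sync)    # e.g. 8 on a v3-8
```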

The performance per watt of TPUs is another key advantage. Energy efficiency is a critical consideration in data centers, where power consumption can significantly impact operational costs. TPUs are designed to deliver high performance while minimizing power usage, making them a cost-effective choice for large-scale AI applications. At the low-power end of the family, for instance, Google's Edge TPU delivers up to 4 trillion operations per second (TOPS) while consuming only about 2 watts of power.[2]

Architectural Advantages of GPUs for AI

GPUs have several architectural features that make them particularly advantageous for AI applications. One of the most significant is their ability to perform parallel processing. GPUs are composed of hundreds or even thousands of cores that can execute operations simultaneously. This is in contrast to Central Processing Units (CPUs), which typically have a much smaller number of cores. The parallel processing capabilities of GPUs are especially beneficial for AI tasks, which often involve performing the same operation on large amounts of data.

For example, training a neural network involves repeatedly applying the same mathematical operations to the network's weights and the input data. With a GPU, these operations can be performed in parallel, significantly speeding up the training process. NVIDIA's Ada Lovelace architecture, for instance, packs 76.3 billion transistors into its largest die, with flagship parts such as the GeForce RTX 4090 exposing 16,384 CUDA cores, enabling a high degree of parallelism and computational power.

Another architectural advantage of GPUs for AI is their optimization for floating-point computations. These are calculations that involve real numbers, as opposed to integers, and they are prevalent in AI algorithms. GPUs are designed to perform these computations quickly and accurately. For instance, the NVIDIA H200 GPU, based on the Hopper architecture, can deliver up to 67 teraflops of single-precision (FP32) performance, which is crucial for many AI workloads.
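
Frameworks expose these precision levels directly. The sketch below, assuming PyTorch on a CUDA GPU, runs a matrix multiplication in FP16 under autocast while the inputs remain in FP32.

```python
# Minimal sketch: mixed precision on a GPU, assuming PyTorch with CUDA.
# Autocast runs eligible ops in FP16 while FP32 is kept where accuracy matters.
import torch

a = torch.randn(1024, 1024, device="cuda")   # FP32 by default
b = torch.randn(1024, 1024, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    c = torch.matmul(a, b)                   # executed in FP16 under autocast

print(c.dtype)  # torch.float16
```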

GPUs are capable of handling a variety of AI tasks. They can be used for both training and inference, and they are compatible with a wide range of machine-learning algorithms. This versatility is partly due to the programmability of GPUs. With languages like CUDA and OpenCL, developers can write code tailored to the specific requirements of their AI tasks, allowing them to leverage the capabilities of the GPU fully.

Finally, GPUs benefit from a mature and extensive ecosystem of software tools and libraries. Frameworks like TensorFlow, PyTorch, and CUDA have been optimized for GPUs, providing developers with high-level abstractions for GPU programming. These tools make it easier to develop and deploy AI applications, and they allow developers to take full advantage of the architectural features of GPUs.

Architectural Advantages of TPUs for AI

TPUs have several architectural features that make them particularly well-suited for AI applications. One of the most significant is their design for high-throughput AI inference and training. TPUs are built to handle the specific computational demands of machine learning, with a focus on tensor operations, which are fundamental to many AI algorithms. This focus allows TPUs to achieve high performance in AI tasks.

This high throughput is achieved through the MXU, which can perform a large number of multiply-accumulate operations in parallel. The MXU is complemented by the Unified Buffer (UB), a high-speed on-chip memory that allows rapid data access, further enhancing the TPU's performance.

Another advantage of TPUs is their integration into cloud computing platforms. Google, for example, has integrated TPUs into its Google Cloud platform, allowing developers to easily access and use TPUs for their AI workloads. This integration also enables seamless scaling of machine learning tasks, as additional TPUs can be added as needed to handle larger datasets or more complex models.

TPUs also excel in performance per watt, a critical metric in data center operations. High performance per watt means that a device can deliver high computational power while consuming less energy, making it more cost-effective. This energy efficiency makes TPUs a preferred choice for large-scale AI applications.

Finally, TPUs benefit from a robust ecosystem of software tools and libraries. Google provides a software stack that includes TensorFlow, an open-source machine learning framework that is optimized to take full advantage of TPU hardware. This software support allows developers to leverage the capabilities of TPUs in their AI applications easily.

Technical Comparison between TPUs and GPUs

When it comes to AI development and deployment, the choice between GPUs and TPUs is pivotal. Each offers distinct advantages in performance, scalability, energy efficiency, cost, and accessibility. Understanding these differences is crucial for selecting the right hardware for specific AI projects.

Performance

Speed and Efficiency: GPUs are renowned for their ability to efficiently handle a broad range of AI tasks, from deep learning model training to inference operations. Their parallel processing architecture makes them highly effective in reducing training times and improving inference speeds across various AI models. TPUs, on the other hand, are optimized for speed and efficiency in tensor operations, which are fundamental to neural networks. This specialization often allows TPUs to outperform GPUs in specific deep learning tasks, particularly those that Google has optimized them for, such as large-scale neural network training and complex machine learning models.

Performance Benchmarks: Benchmarks comparing TPUs and GPUs on similar tasks often show TPUs excelling in tasks optimized for their architecture, offering faster training times and more efficient processing. However, GPUs maintain competitive performance across a wider range of applications, thanks to their versatility and the extensive optimization efforts by the community.

In discussions about GPU vs TPU performance, especially in the context of building or leveraging a supercomputer for AI, it's essential to consider how each technology supports the execution of transformer models and convolutional neural networks (CNNs), with TPUs often leading in specialized matrix multiplications.
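
One way to ground such comparisons is to run the same compiled kernel on whichever accelerator is attached. The illustrative sketch below, assuming JAX, times a jit-compiled matrix multiplication and reports the achieved throughput on the detected device; sizes and iteration counts are arbitrary.

```python
# Illustrative sketch of a like-for-like microbenchmark: time the same jitted
# matmul on whatever accelerator (GPU or TPU) the runtime exposes. Assumes JAX.
import time
import jax
import jax.numpy as jnp

@jax.jit
def step(a, b):
    return jnp.dot(a, b)

key = jax.random.PRNGKey(0)
a = jax.random.normal(key, (4096, 4096))
b = jax.random.normal(key, (4096, 4096))

step(a, b).block_until_ready()          # compile and warm up once
t0 = time.perf_counter()
for _ in range(10):
    step(a, b).block_until_ready()      # block so device time is actually measured
dt = (time.perf_counter() - t0) / 10

print(jax.devices()[0].platform, f"{2 * 4096**3 / dt / 1e12:.2f} TFLOPS")
```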

Scalability

TPUs are designed with scalability in mind, particularly for use in Google's cloud infrastructure. They excel in large-scale AI projects, capable of handling vast datasets and complex neural networks with ease. Google's integration of TPUs into its cloud services also simplifies scaling AI applications. GPUs, while versatile and powerful, may require more careful architecture design to achieve similar scalability in large projects, though advancements in GPU technology and software frameworks continue to improve their scalability.

Energy Efficiency

TPUs are engineered to deliver high performance per watt, making them a more energy-efficient option for intensive AI tasks. This efficiency reduces the operational costs and environmental impact of running large-scale AI applications. GPUs also focus on energy efficiency but may consume more power than TPUs in equivalent tasks due to their general-purpose design.

Cost

The initial investment and ongoing costs can vary significantly between GPUs and TPUs. GPUs are generally available from multiple manufacturers and come in various configurations, offering a range of price points. TPUs, however, are proprietary to Google and primarily available through Google Cloud services, which may introduce different cost considerations, especially for cloud-based computational resources.

Availability and Accessibility

Development Tools and Libraries: GPUs benefit from widespread support across numerous development tools and libraries, thanks to their longer presence in the market and use in a variety of computing tasks beyond AI. This extensive ecosystem makes GPUs highly accessible to developers. TPUs, while supported by powerful tools like TensorFlow, are more niche, with resources and support primarily tailored towards machine learning applications.

Community and Support: The GPU developer community is vast, with a wealth of forums, tutorials, and resources available to help troubleshoot issues and share advancements. TPUs, being relatively newer and more specialized, have a smaller but growing community, with support concentrated around Google's platforms and tools.

The table below summarizes these differences:

| Feature | GPUs | TPUs |
| --- | --- | --- |
| Design Focus | General-purpose parallel computing, initially for graphics rendering | Specifically designed for machine learning and deep learning tasks |
| Primary Use | Broad range of computing tasks, including AI, gaming, and scientific research | Accelerating neural network training and inference for AI |
| Processing Style | Parallel processing of large data blocks, suitable for matrix and vector computations | Optimized for high-throughput tensor operations, fundamental in neural networks |
| Performance | Highly versatile, supports a wide range of AI tasks with good performance | Exceptionally high performance for specific AI tasks, especially large-scale deep learning models |
| Energy Efficiency | Efficient, but varies based on task and model optimization | Designed for high efficiency in tensor computation, offering better performance per watt in specialized tasks |
| Scalability | Good scalability, supported by a mature ecosystem of software and tools | Excellent scalability for AI tasks, particularly in cloud environments with Google's infrastructure |
| Development Support | Extensive support across various frameworks and libraries, with a large developer community | Focused support, mainly through TensorFlow and Google Cloud services; somewhat limited compared to GPUs |
| Cost | Wide range of options, from affordable to high-end; cost varies by model and performance level | Access mainly through cloud services; costs tied to cloud usage and potentially higher for specialized hardware |
| Accessibility | Widely available from multiple manufacturers in a variety of models | Primarily accessible through Google Cloud, with limited availability outside Google's ecosystem |

Developer Experience: TPU vs GPU in AI

The developer experience when working with TPUs and GPUs in AI applications can vary significantly, depending on several factors, including the hardware's compatibility with machine learning frameworks, the availability of software tools and libraries, and the support provided by the hardware manufacturers.

TPUs are designed to work seamlessly with TensorFlow, an open-source machine learning framework developed by Google. TensorFlow provides high-level APIs that allow developers to easily define and train neural networks, and it is optimized to take full advantage of TPU hardware. This means that developers can leverage the high performance of TPUs without having to write low-level code. Google also provides extensive documentation and tutorials for using TPUs with TensorFlow, which can help reduce the learning curve for developers.
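
A minimal sketch of this high-level workflow is shown below; it assumes an attached Cloud TPU, initializes the TPU system, and builds a Keras model under a TPUStrategy scope so that its variables are placed on TPU cores.

```python
# Minimal sketch: high-level Keras APIs targeting a TPU. Assumes a runtime
# with an attached Cloud TPU; layer sizes and losses are placeholder choices.
import tensorflow as tf

resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

with strategy.scope():                       # variables created here live on TPU cores
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    )
# model.fit(...) then shards each batch across the TPU's cores automatically.
```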

In addition to TensorFlow, TPUs are also supported by JAX, a high-performance machine learning library developed by Google. JAX provides APIs for defining and training neural networks, and it includes support for automatic differentiation and GPU/TPU acceleration. This gives developers another option for leveraging the capabilities of TPUs in their AI applications.
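
The sketch below, assuming JAX is installed with an accelerator backend, shows the core idiom: a jit-compiled function plus automatic differentiation, with JAX dispatching to whatever TPU or GPU devices it finds.

```python
# Minimal sketch: JAX transparently targets attached TPU (or GPU) devices.
# Assumes JAX installed with the appropriate backend; shapes are arbitrary.
import jax
import jax.numpy as jnp

print(jax.devices())            # e.g. [TpuDevice(id=0), ...] on a TPU host

@jax.jit                        # compiled via XLA for the attached accelerator
def predict(w, x):
    return jnp.tanh(x @ w)

grad_fn = jax.grad(lambda w, x: predict(w, x).sum())   # automatic differentiation
w = jnp.ones((8, 8))
x = jnp.ones((4, 8))
print(grad_fn(w, x).shape)      # gradient with respect to w, shape (8, 8)
```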

GPUs, on the other hand, are supported by a wider range of machine learning frameworks, including not only TensorFlow but also PyTorch, Caffe, and others. This gives developers more flexibility in choosing a framework that fits their needs. NVIDIA, the leading manufacturer of GPUs, provides CUDA, a parallel computing platform and programming model that allows developers to write code that runs directly on the GPU. This enables more fine-grained control over the computations, but it also requires a deeper understanding of the hardware.
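
To give a flavor of this lower-level control, the illustrative sketch below writes a GPU kernel from Python using Numba's CUDA support (a stand-in here; the CUDA SDK itself is programmed in C/C++), with each thread handling one array element.

```python
# Illustrative sketch of writing code that runs directly on the GPU, in the
# spirit of CUDA programming but via Python. Assumes Numba with CUDA support.
import numpy as np
from numba import cuda

@cuda.jit
def add(a, b, out):
    i = cuda.grid(1)             # global thread index across the launch grid
    if i < out.size:
        out[i] = a[i] + b[i]     # each thread computes one element

n = 1 << 20
a = np.ones(n, dtype=np.float32)
b = np.ones(n, dtype=np.float32)
out = np.zeros(n, dtype=np.float32)

threads = 256
blocks = (n + threads - 1) // threads
add[blocks, threads](a, b, out)  # Numba copies the arrays to/from the device
print(out[:3])                   # [2. 2. 2.]
```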

NVIDIA also provides extensive documentation and tutorials for using GPUs with various machine learning frameworks, as well as a range of software tools for profiling and debugging GPU-accelerated code. These resources can be valuable for developers who are looking to optimize their AI applications for GPU hardware.

In terms of cloud integration, both TPUs and GPUs are well supported. Google's Cloud TPUs are integrated into Google Cloud, allowing developers from startups and large enterprises alike to easily scale their AI workloads. NVIDIA's GPUs, meanwhile, are available through various cloud providers, including AWS, Azure, and Google Cloud. Among other options, AMD has also emerged as a significant competitor to NVIDIA and Intel in the AI acceleration market, driving innovation and performance improvements that benefit AI and data science.

Further reading: Building Robust Edge AI Computer Vision Applications with High-Performance Microprocessors

Application Areas

The landscape of AI hardware is rich and varied, with GPUs and TPUs each excelling in distinct domains. Understanding the application areas where each processor type shines can help guide the selection process for specific AI tasks, from graphics rendering to the training of large language models.

GPUs: Where They Excel

  • Graphics Rendering and Gaming: The original application of GPUs remains one of their strongest areas. They are unparalleled in rendering graphics, processing visual data, and supporting the high-performance demands of modern video games.

  • General-Purpose Computing: GPUs have been widely adopted for a variety of computational tasks beyond graphics, including scientific research, where their parallel processing capabilities can significantly speed up simulations and data analyses.

  • Deep Learning and AI Training: GPUs are versatile tools for training deep learning models, capable of handling diverse neural network architectures with efficiency. Their widespread adoption is supported by a robust ecosystem of development tools and libraries.

  • Edge Computing: GPUs are also found in edge devices, where their processing power supports AI-driven applications, from autonomous vehicles to smart cameras, by enabling local data processing and real-time decision-making.

TPUs: Where They Excel

  • Deep Learning Model Training: TPUs are specifically designed to accelerate the training of deep learning models. Their architecture is optimized for the high-volume matrix operations that are fundamental to neural networks, making them highly efficient for this purpose.

  • Large Language Models: The training and inference of large language models, such as those used in natural language processing (NLP) applications, benefit significantly from TPU capabilities. TPUs can manage the immense computational demands of these models more efficiently than other processors.

  • Cloud Computing: Given their integration into Google Cloud, TPUs are readily accessible for cloud-based AI projects. They offer a scalable solution for businesses and researchers needing to leverage substantial computational resources without the overhead of managing physical hardware.

Use Cases in Industry, Research, and Development

  • Industry: In the commercial sector, GPUs are widely used in content creation, including video editing and 3D modeling, while TPUs drive efficiency in services that rely on AI, such as personalized recommendations and automated customer support. Image processing, a common application in computer vision, showcases the strengths of GPUs in handling intensive graphical computations.

  • Research: GPUs support a broad range of scientific research, from climate simulations to genomic sequencing, by accelerating computation-intensive tasks. TPUs, with their efficient processing of deep learning models, are pivotal in AI research, enabling breakthroughs in areas like healthcare, where they can, for example, accelerate the analysis of medical images.

  • Development: Both GPUs and TPUs play crucial roles in software development and testing, enabling rapid prototyping and evaluation of AI models. GPUs offer flexibility across various development tasks, while TPUs provide high efficiency for deep learning projects.

Case Studies: Industry Adoption of TPUs and GPUs in AI

The adoption of TPUs and GPUs in the AI industry is widespread, with many companies leveraging these technologies to accelerate their AI workloads. Here are a few case studies that illustrate how these technologies are being used in practice.

Google, the creator of TPUs, uses these devices extensively in its own services. For example, Google uses TPUs to power the AI models behind Google Search, Google Photos, and Google Translate. These services require high-throughput, low-latency AI inference, which TPUs are designed to deliver. By using TPUs, Google is able to process billions of search queries per day, analyze hundreds of millions of photos, and translate hundreds of billions of words.

OpenAI, a leading AI research organization, uses GPUs to train its large-scale AI models. For instance, OpenAI used GPUs to train GPT-3, one of the largest language models ever created. GPT-3 has 175 billion parameters and was trained on hundreds of gigabytes of text data. Training such a large model requires a massive amount of computational power, which GPUs are able to provide.

Another example is Waymo, a subsidiary of Alphabet that is developing self-driving cars. Waymo uses TPUs in its data centers to train its self-driving algorithms. These algorithms need to process vast amounts of sensor data and make complex decisions in real time, which places heavy demands on both training and inference. By using TPUs, Waymo is able to accelerate its training process and improve the performance of its self-driving systems.

NVIDIA, the leading manufacturer of GPUs, uses its own devices to power its AI research and development. For example, NVIDIA uses GPUs to develop new AI algorithms, optimize its software libraries, and benchmark its hardware. NVIDIA also uses GPUs to power products such as its DRIVE autonomous vehicle platform and its Metropolis video analytics platform.

Microsoft has been at the forefront of integrating AI and machine learning capabilities into its suite of products, leveraging its vast Azure infrastructure, including large fleets of GPU-accelerated instances, to enhance productivity tools and cloud services with advanced analytics and AI-driven insights.

These case studies illustrate the diverse ways in which TPUs and GPUs are being used in the AI industry. Whether it's powering large-scale web services, training cutting-edge AI models, or developing new AI technologies, these devices play a crucial role in the advancement of AI.

Future Trends and Developments

The rapid advancement of AI continues to drive innovation in the hardware sector, with both TPUs and GPUs at the forefront of this evolution. As we look to the future, several emerging technologies and research directions promise to redefine the landscape of AI hardware, potentially ushering in new breakthroughs that could significantly enhance AI capabilities and accessibility.

Emerging Technologies and Next-Generation TPUs and GPUs

The next generation of TPUs and GPUs is expected to focus on increasing computational efficiency, reducing power consumption, and enhancing the ability to process AI tasks in real time. Innovations in chip design, such as advanced semiconductor materials and 3D stacking technologies, are likely to play a critical role in achieving these improvements.

Both Google and leading GPU manufacturers are investing in integrating AI-specific optimizations into their processors, such as better support for sparsity (wherein the hardware can skip over zeros in data to save on computations) and adaptable precision levels to enhance performance without sacrificing accuracy.

Research Directions and Potential Breakthroughs

Quantum Computing Integration: One of the most exciting prospects is the integration of quantum computing elements with traditional computing hardware. Quantum computing promises to solve specific problems exponentially faster than classical computers can, and integrating these capabilities with TPUs and GPUs could revolutionize areas such as optimization problems and quantum machine learning.

Neuromorphic Computing: Inspired by the human brain's architecture, neuromorphic computing aims to mimic neural networks' efficiency and learning capabilities. Future TPUs and GPUs may incorporate neuromorphic elements to drastically improve power efficiency and learning capabilities, particularly in unsupervised learning scenarios.

Implications of Hardware Advancements on AI Capabilities and Accessibility

Enhanced AI capabilities: As hardware becomes more specialized and efficient, AI models can become more sophisticated, capable of processing larger datasets and achieving higher accuracy levels. This could lead to significant advancements in fields such as natural language processing, autonomous vehicles, and personalized medicine.

Improved accessibility: Advances in hardware efficiency and cost-effectiveness can make powerful AI tools more accessible to smaller organizations and researchers, democratizing AI development and fostering innovation across a broader spectrum of society.

Speculating on Future Breakthroughs

The future might see the development of hybrid systems that combine the best of TPUs, GPUs, quantum computing, and neuromorphic technologies, offering unparalleled processing power and efficiency.

As AI algorithms become more sophisticated, there could be a push towards developing hardware that is inherently adaptable, capable of dynamically adjusting its architecture to best suit the task at hand, further blurring the lines between hardware and software in AI systems.

Conclusion

The choice between TPUs and GPUs for AI applications is multifaceted, involving considerations of performance, cost, energy efficiency, developer experience, and industry adoption. Both TPUs and GPUs have their strengths and are suited to different types of AI-specific tasks. TPUs, designed specifically for high-throughput tensor operations, offer high performance and energy efficiency, particularly for large-scale AI inference and training tasks. GPUs, on the other hand, offer versatility and a wide range of compatibility with various machine learning frameworks, making them a flexible choice for a variety of AI tasks.

FAQs

  1. What is a TPU? 
    A Tensor Processing Unit (TPU) is a type of application-specific integrated circuit (ASIC) developed by Google specifically for accelerating machine learning workloads. TPUs are particularly effective for tasks involving large-scale tensor operations, such as neural network training and inference.
  2. What is a GPU? 
    A Graphics Processing Unit (GPU) is a type of processor originally designed for rendering graphics in video games. Its ability to perform many calculations simultaneously, however, makes it well suited to the high computational demands of AI tasks.
  3. Which is better for AI, a TPU or a GPU? 
    The answer depends on the specific requirements of the AI tasks. TPUs tend to be more energy-efficient and may offer better performance for large-scale tensor operations, while GPUs offer more versatility and are compatible with a wider range of machine learning frameworks.
  4. What are some examples of how TPUs and GPUs are used in the AI industry? 
    Google uses TPUs to power the AI models behind services like Google Search, Google Photos, and Google Translate. OpenAI used GPUs to train GPT-3, one of the largest language models ever created. Waymo uses TPUs to train its self-driving algorithms, and NVIDIA uses GPUs for its AI research and development, as well as to power its AI-powered products.
  5. What are the cost considerations when choosing between TPUs and GPUs for AI? 
    Cost considerations include the upfront costs of the hardware, the ongoing operational costs (primarily power consumption and cooling), the cost of software and development tools, and the specific requirements of the AI tasks.

References

[1] NVIDIA. H200 Tensor Core GPU [Internet]. NVIDIA Corporation. Available from: https://www.nvidia.com/en-in/data-center/h200/

[2] Google Cloud. Introduction to Cloud TPUs [Internet]. Google. Available from: https://cloud.google.com/tpu/docs/intro-to-tpu
