TensorWave Making Waves at GTC

At this year's highly anticipated GTC event in San Jose, NVIDIA CEO Jensen Huang did exactly what was expected: he hyped and announced a new product line (the Blackwell series) before the company has even shipped its next product (the H200), in an effort to overshadow what AMD has available right now.

And we, at TensorWave, made sure to let everyone know.

We circled the venue of the well-known event with an LED truck offering the red pill to all attendees in visual range. Our goal is to promote optionality in the AI market, reminding interested parties that there are other viable options available today.

The news we wanted to share? That TensorWave is the first to market at scale with cloud AI development infrastructure based on Instinct MI300X GPUs from Advanced Micro Devices (AMD).

How did we choose to make our splash? Simple: Crash the 2024 GTC AI conference in San Jose, California—an event sponsored by NVIDIA, the GPU market Goliath to AMD’s David.

Why would we choose to announce our launch at an event populated with NVIDIA GPU fans? To make a point: AMD has a compelling GPU offering that beats NVIDIA's flagship H100 on several metrics, and is positioned to beat the upcoming H200 as well. Our service makes it easy for AI developers to leverage the superior performance that AMD is offering right now.

AMD Instinct MI300X vs. NVIDIA H100

AMD introduced the Instinct MI300X in late 2023. Why did TensorWave choose a new and unproven hardware platform as the basis for its services?

For one thing, all indications are that the MI300X is a superior product. Consider these specifications:

                      AMD MI300X    NVIDIA H100
Memory capacity       192 GB        80 GB
Memory bandwidth      5.3 TB/s      3.3 TB/s
Stream processors     19,456        14,592
Engine clock          2,100 MHz     1,755 MHz
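To put the memory capacity gap in perspective, here is a rough back-of-envelope sketch. The 70-billion-parameter model size is our illustrative assumption (not from the table), and the arithmetic counts FP16 weights only, ignoring KV cache, activations, and optimizer state:

```python
# Back-of-envelope: can a large model's FP16 weights fit in one GPU's memory?
# Assumption (illustrative): a 70B-parameter model, 2 bytes per FP16 parameter.
params = 70e9
bytes_per_param_fp16 = 2
weights_gb = params * bytes_per_param_fp16 / 1e9  # 140 GB of weights

mi300x_gb, h100_gb = 192, 80  # memory capacities from the table above

print(weights_gb <= mi300x_gb)  # True: the weights fit on a single MI300X
print(weights_gb <= h100_gb)    # False: an H100 needs the model sharded across GPUs
```

Fewer GPUs per model means less inter-GPU communication overhead and simpler deployment.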

Furthermore, published benchmark testing results lean heavily in favor of the MI300X. All of these numbers represent trillions of floating-point operations per second (TFLOPS):

                  AMD MI300X    NVIDIA H100
FP64              81.7          33.5
FP64 Matrix       163.4         66.9 (Tensor)
FP32              163.4         66.9
FP32 Matrix       163.4         N/A
FP16              1,307.4       133.8 / 989.4 (Tensor)
FP16 Sparse       2,614.9       1,978.9
BFLOAT16          1,307.4       133.8 / 989.4 (Tensor)
BFLOAT16 Sparse   2,614.9       1,978.9
FP8               2,614.9       1,978.9
FP8 Sparse        5,229.8       3,957.8
INT8              2,614.9       1,978.9
INT8 Sparse       5,229.8       3,957.8

Although NVIDIA has disputed some of these performance comparisons, this may be a sign that they fear losing market share to AMD. And they have a reason for that fear: Aside from the obvious performance advantages, AMD’s GPUs are more available. Up to this point, NVIDIA has dominated the GPU market, so much so that their order backlog is reported to be a year or more. MI300X GPUs are available today.

The TensorWave Advantage

The performance advantage of AMD’s MI300X GPUs means that TensorWave can offer benefits that other cloud AI infrastructure providers can’t, such as:

• Easy scalability

• Higher bandwidth and much lower latency

• Native support for PyTorch and TensorFlow with no code modifications

• Implementation options to meet your needs

These performance advantages mean your AI development cycles are shorter and your total cost of ownership is lower.
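On the "no code modifications" point: ROCm builds of PyTorch expose the same `torch.cuda` API that CUDA builds do, so existing GPU code typically runs unchanged on AMD hardware. A minimal sketch (device selection and shapes here are illustrative, not a TensorWave-specific API):

```python
import torch  # a ROCm build of PyTorch reports AMD GPUs through the torch.cuda API

# The same device-selection idiom works on MI300X (ROCm) and H100 (CUDA) alike;
# on a ROCm build, torch.cuda.is_available() returns True when an AMD GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(1024, 1024, device=device)
y = x @ x  # identical matmul code path regardless of GPU vendor
print(device, tuple(y.shape))
```

Because the device string and API surface are unchanged, models written against CUDA-era PyTorch generally need no porting work to run on MI300X.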

TensorWave’s Coming-Out Party at GTC

Making our entrance at this year’s GTC—complete with a digital sign truck—was an unconventional and risky way to introduce ourselves to the AI development community, but it worked! People are talking about AMD’s GPUs as being game changers, and TensorWave attracted significant business interest from participants at the event.

For more information on how TensorWave can help you accelerate your AI development plans, with products that are actually available right now, contact us today.
