TensorWave thinks it can break Nvidia’s grip on AI compute with an AMD-powered cloud

Nvidia has been dominating the AI hardware market, driven by the skyrocketing demand for its GPUs. These powerful chips are crucial for training AI models, thanks to their ability to handle complex computations with thousands of parallel cores. 

Nvidia’s recent quarterly revenue of $30 billion highlights the AI industry’s insatiable demand for GPUs. While most AI companies rely on Nvidia’s GPUs, a new player, TensorWave, is aiming to shake things up.

Founded just last year, TensorWave is launching a cloud platform that offers access exclusively to AMD hardware for AI workloads. This bold move goes against the trend, positioning TensorWave as a notable challenger to the Nvidia-dominated status quo.

According to TensorWave’s CEO, Darrick Horton, the company’s goal is to bring back competition in the AI hardware space. Horton claims that Nvidia’s dominance is stifling innovation by limiting options for AI developers. TensorWave aims to provide AI companies with more choices and make high-performance computing more accessible to a wider audience.


Winding paths

The journey of TensorWave’s three co-founders, Darrick Horton, Jeff Tatarchuk, and Piotr Tomasik, began with a game of pickleball. Tatarchuk and Tomasik, long-time friends and doubles partners, invited Horton, a former colleague, to join them for drinks after a match. 

There, they started discussing the GPU supply issues impacting the AI industry, which sparked the idea for TensorWave.

Horton, Tatarchuk, and Tomasik each brought unique experiences to the table.

Tatarchuk had co-founded VMAccel, a cloud vendor, with Horton before selling Lets Rolo, a CRM startup, to LifeKey, a digital identity firm. Horton, with degrees in mechanical engineering and physics, had worked at Lockheed Martin’s Skunk Works division and co-founded VaultMiner Technologies, a crypto mining company that was VMAccel’s parent company.

Tomasik, for his part, had co-launched Lets Rolo with Tatarchuk and also co-founded Influential, an influencer marketing company acquired by Publicis for $500 million. Despite limited experience in the hyperscaler landscape, they believed their determination and diverse skill sets would help them make an impact in the AI hardware space.

They saw the growing monopoly in GPU compute capacity as a barrier to innovation, so they formed TensorWave to offer an alternative. By focusing on AMD hardware, they hope to give AI developers more options and foster competition in an industry largely dominated by Nvidia.

Together, they aim to challenge the status quo, motivated by a shared vision of democratizing AI. Their combined expertise in cloud technology, customer relations, and marketing strengthens their commitment to making TensorWave a viable option for AI workloads.

Vegas, inc.

TensorWave, headquartered in Las Vegas, isn’t the usual choice for a cloud infrastructure startup. But CEO Darrick Horton saw promise in the city. He believed Vegas could foster a thriving tech and startup ecosystem, with lower energy costs and overhead compared to other major U.S. cities.

Vegas has over 600 startups, employing more than 11,000 people, and drew in over $4 billion in investments in 2022. Tomasik and Tatarchuk, TensorWave’s co-founders, have deep ties in the city’s VC network. 

Tomasik previously worked with 1864 Fund and now collaborates with local nonprofits StartUp Vegas and Vegas Tech Ventures. Tatarchuk is an angel investor with Fruition Lab, a Vegas incubator with Christian roots.

Their connections helped TensorWave launch as one of the first clouds to offer AMD Instinct MI300X instances. Designed for AI workloads, these instances are rented by the hour, with a minimum six-month commitment.

These setups come with dedicated storage and high-speed interconnects tailored for AI.

Horton sees TensorWave as a complementary player in the cloud space. By offering AI-focused compute at competitive rates, the company aims to bring more options to the market, enhancing choice for AI developers and companies looking for cost-effective solutions.

AMD-forward

TensorWave is entering a crowded field of GPU cloud providers. CoreWeave, originally a crypto mining startup, recently raised $1.1 billion in new funds and secured $7.5 billion in debt. It signed a major capacity deal with Microsoft, showcasing the growing demand for GPU infrastructure.

Lambda Labs also entered the race, gaining access to up to $500 million in financing and seeking an additional $800 million. Nonprofit Voltage Park, backed by crypto billionaire Jed McCaleb, invested $500 million in GPU-powered data centers last October.

Together AI, a GPU cloud provider focused on AI research, raised $106 million in March, led by Salesforce.

TensorWave, however, aims to stand out by offering AMD’s MI300X GPU, which Horton says is cheaper than Nvidia’s H100.

Though he didn’t disclose exact prices, Horton hinted that TensorWave’s rates range from $1 to $10 per hour, depending on the setup. With H100 capacity available elsewhere for around $2.50 per hour, TensorWave will need to price aggressively, but Horton suggests it can still undercut the H100.

In terms of performance, Horton claims the MI300X outperforms the H100 for running certain AI models, like Meta’s Llama 2. However, he notes that this advantage is mainly in inference rather than training.

Other tests show the MI300X’s effectiveness can vary based on the workload, but it’s generally strong for running AI models. Meta announced it would use the MI300X to power its AI assistant, Meta AI, a signal of confidence in the chip’s capabilities.

OpenAI, creator of ChatGPT, has also shown interest in the MI300X, planning to integrate it into its developer tools. This interest from industry leaders adds credibility to the MI300X’s potential.

Horton believes that by offering cost-effective access to the MI300X, TensorWave can provide a viable alternative for AI developers. With the pricing advantage of the MI300X, they aim to pass savings on to customers who need efficient AI computing resources.

The MI300X’s appeal is growing as the AI sector looks for alternatives to Nvidia’s GPU options. This interest could signal a shift in the GPU market, where AMD might compete more directly with Nvidia.

TensorWave’s focus on AMD hardware highlights a new direction for AI infrastructure, emphasizing affordability and choice for companies in need of AI capabilities.

The competition

Startups like Lamini and Nscale, as well as established cloud providers like Azure and Oracle, are increasingly betting on AMD’s AI chips. However, Google Cloud and AWS remain skeptical about AMD’s competitiveness in the space.

For now, these companies benefit from the ongoing Nvidia GPU shortage and delays in Nvidia’s upcoming Blackwell chip release.

This shortage might ease as manufacturing of crucial components, especially memory, ramps up. If production picks up, Nvidia could boost shipments of the H200, the successor to the H100, which offers vastly improved performance. This could spell tougher competition for companies relying on AMD hardware.

A significant challenge for AMD-based cloud providers is Nvidia’s entrenched position. Nvidia’s software is considered more mature and widely adopted, making it easier for developers to use. AMD’s CEO, Lisa Su, has acknowledged that AMD’s software requires more work to adopt and integrate.

Looking ahead, competing on price alone may become harder as big players increase investments in custom hardware. Google has its TPUs, Microsoft launched Azure Maia and Azure Cobalt, and AWS offers Trainium, Inferentia, and Graviton for AI. 

These custom chips allow major cloud providers to tailor performance to AI workloads, potentially undermining AMD’s pricing edge.

Despite these challenges, Horton is optimistic about AMD’s role in the market.

He argues that AMD’s MI300X, with its increased memory and performance, can effectively handle AI demands, helping democratize compute resources. He believes that AMD’s growing footprint could lead to more accessible and cost-effective AI solutions for developers.

The race to dominate AI infrastructure continues, and Nvidia’s dominance is far from guaranteed. AMD, with support from companies like TensorWave, is attempting to disrupt this monopoly by offering an alternative in the AI compute market.

Horton suggests that if production delays persist for Nvidia, AMD may maintain its advantage for an extended period.

In this dynamic landscape, AMD and its allies may continue to challenge Nvidia by offering a combination of affordability and accessibility, shifting the competitive balance in AI computing.

Early demand

TensorWave started onboarding customers this spring in a preview phase and is already generating $3 million in annual recurring revenue, according to Horton. He expects this figure to rise to $25 million by year-end as TensorWave scales up to 20,000 MI300X GPUs. 

With each GPU costing around $15,000, this expansion would represent a $300 million investment. Horton insists that TensorWave’s burn rate remains sustainable.

The company plans to use its GPUs as collateral for a large debt financing round, following a model used by other data center operators such as CoreWeave.

Horton describes this strategy as a sign of “strong financial health,” positioning TensorWave to handle potential challenges by delivering targeted value to its clients.

When asked about TensorWave’s current customer count, Horton declined to provide specifics, citing confidentiality.

However, he highlighted partnerships with Edgecore Networks, a networking provider, and MK1, an AI startup founded by former Neuralink engineers. He emphasized that TensorWave is rapidly expanding its capacity to meet growing demand.

The company also plans to bring AMD’s upcoming MI325X GPUs online as early as November or December. These next-gen chips are expected to meet the needs of AI customers requiring high performance.

Horton said that TensorWave is consistently increasing its capacity across multiple nodes, reflecting strong pipeline demand.

Investors appear pleased with TensorWave’s growth so far. Nexus VP recently led a $43 million funding round, with participation from Maverick Capital, StartupNV, Translink Capital, and AMD Ventures.

This investment underscores confidence in TensorWave’s ability to compete in the AI hardware market.

With its ambitious expansion goals and strategic partnerships, TensorWave aims to establish itself as a formidable player in AI infrastructure.

The company is positioning AMD GPUs as a viable alternative to Nvidia’s offerings, banking on affordability and innovation to capture market share.

As TensorWave continues to scale, Horton remains confident that the company can capitalize on demand for diverse AI compute options, contributing to a more competitive and accessible AI landscape.

Conclusion

TensorWave is challenging Nvidia’s dominance in AI compute by betting on AMD’s powerful MI300X GPUs. The company aims to provide a cost-effective alternative, appealing to customers who seek competitive pricing and robust performance.

By expanding quickly and establishing strategic partnerships, TensorWave is positioning itself as a key player in AI infrastructure.

However, competing with Nvidia’s well-established ecosystem will be no easy feat.

Nvidia’s software tools and reputation are deeply ingrained in the AI industry, giving it a strong edge. Yet, with rising demand for AI compute and ongoing GPU shortages, TensorWave could attract users looking for flexibility and savings.

While it remains to be seen whether TensorWave can truly disrupt Nvidia’s stronghold, the company is making strides to broaden the market. By offering an AMD-powered option, TensorWave is opening doors for those in need of high-performance, affordable AI solutions, promoting greater competition in the AI compute space.
