Democratizing AI: How IO.NET's CTO is Building the 'Airbnb of GPUs'


by Ishan Pandey, December 18th, 2024

Too Long; Didn't Read

IO.NET is building a platform that could democratize access to AI computing resources while reducing costs by up to 75% compared to traditional providers.

The artificial intelligence boom has created an unprecedented demand for GPU computing power, but access remains concentrated among a few major cloud providers. IO.NET, a startup focusing on decentralized GPU infrastructure, aims to change this dynamic by creating what its leaders call the "Airbnb of GPUs." In this exclusive interview, Gaurav, IO.NET's CTO and former Binance technical leader, discusses how the company is building a platform that could democratize access to AI computing resources while reducing costs by up to 75% compared to traditional providers.


Ishan: Welcome to our 'Behind the Startup' series. Please tell us about yourself, your journey, and what inspired you to join IO.NET?


Gaurav: My journey has been quite straightforward, starting as a software engineer in Pune. I worked at several startups there before moving to Bangalore, where I joined HP R&D and helped build their network file system from scratch. At Amazon, I worked on their publishing pipeline for Android apps, e-books, and Audible books. I then moved to eBay, followed by a major OTA company in Thailand that was a market leader in Vietnam, Singapore, and Malaysia for hotel and flight bookings.


I spent about 5-6 years in their leadership team before joining Binance, where I led the creation of a scalable platform for KYC compliance and fraud detection for over half a billion users. Throughout my career, I've worked with AI in various forms and witnessed firsthand how people struggle with accessing the computing resources they need.


Ishan: Tell us about your role at IO.NET. What future do you see for decentralized computing compared to centralized architectures?


Gaurav: As CTO, my main role is creating a scalable platform that makes it easy for suppliers to plug in and for consumers to use these resources. We started with GPUs, but our vision extends beyond that.


The key advantage of our decentralized approach is scalability. Traditional data centers face significant challenges when expanding to new regions - they need to rent space, hire teams, order equipment, and handle maintenance. This creates high upfront costs that eventually get passed to users. Our decentralized model allows us to scale much more efficiently by leveraging existing infrastructure.


Ishan: How does your business model work compared to centralized vendors like Azure, which charge significant amounts for AI model hosting?


Gaurav: We follow a model similar to Uber - while anyone can create similar software, our advantage lies in our supply-side connections. Our team has built deep relationships with infrastructure providers worldwide, enabling us to source GPUs at competitive prices. Our prices are typically 75% lower than Amazon and Google.
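To put the claimed savings in concrete terms, here is a back-of-the-envelope calculation. Only the 75% figure comes from the interview; the $4.00/hour base rate is an assumed placeholder, not a quoted price from any provider:

```python
# Hypothetical cost comparison. The 75% discount is the figure quoted
# in the interview; the $4.00/hr centralized-cloud rate is an assumed
# example value, not a real published price.
CLOUD_RATE_PER_HOUR = 4.00              # assumed centralized-cloud rate ($/hr)
DISCOUNT = 0.75                         # "75% lower" claim from the interview
decentralized_rate = CLOUD_RATE_PER_HOUR * (1 - DISCOUNT)

hours = 24 * 30                         # one month of continuous use
cloud_cost = CLOUD_RATE_PER_HOUR * hours
decentralized_cost = decentralized_rate * hours

print(f"Cloud:         ${cloud_cost:,.2f}/month")
print(f"Decentralized: ${decentralized_cost:,.2f}/month")
print(f"Savings:       ${cloud_cost - decentralized_cost:,.2f}/month")
```

At these assumed rates, a single GPU running around the clock would cost $2,880/month on the centralized provider versus $720/month on the discounted rate.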


We offer both hourly rates and longer-term commitments of 6-9 months. We also provide managed services for startups that want to focus on their core business rather than managing infrastructure.


Ishan: How's the traction been so far?


Gaurav: The response has been strong. We recently fulfilled an order for 1,500 4090s and are close to signing deals with two Asian Web2 companies that each have over 200 million users. While we initially focused on crypto companies due to our network, we're seeing increasing interest from traditional tech companies looking to save costs.


Ishan: Can you explain how a decentralized training architecture would work? With decentralization, either scalability or security might be affected; how do you reconcile this?


Gaurav: It depends on how you define scalability. Let me illustrate with an example from the data center business. If you're a data center provider in North America and I need 1,000 H100s in Singapore, the traditional process is extremely challenging. You'd need to rent space, hire a team, order GPUs, handle shipping, maintenance, and setup. This creates significant upfront costs and slow time-to-market, which ultimately gets passed on to users.


In our decentralized model, because the inventory is distributed, we don't face these challenges. Adding capacity is as simple as connecting new providers to our platform. It's similar to how hotel availability works - just because major chains are fully booked doesn't mean there are no rooms in a city. There's actually substantial GPU capacity available, but no one has built an "Airbnb for GPUs" to aggregate this inventory efficiently.
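The aggregation idea above can be sketched as a simple matching function. Everything in this snippet (the provider names, the data fields, and the greedy matching rule) is hypothetical and only illustrates the "aggregate distributed inventory" concept; it is not IO.NET's actual protocol:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    region: str
    gpu_model: str
    available: int          # idle GPUs this provider can rent out

def fill_order(providers, gpu_model, region, needed):
    """Greedily fill a GPU order from distributed provider inventory.

    Returns a list of (provider_name, count) allocations, or None
    if the aggregated inventory cannot cover the request.
    """
    allocations = []
    for p in providers:
        if p.gpu_model == gpu_model and p.region == region and needed > 0:
            take = min(p.available, needed)
            allocations.append((p.name, take))
            needed -= take
    return allocations if needed == 0 else None

# No single provider has 1,000 H100s in Singapore, but together they do.
inventory = [
    Provider("dc-alpha", "singapore", "H100", 400),
    Provider("dc-beta", "singapore", "H100", 350),
    Provider("dc-gamma", "singapore", "H100", 300),
]
print(fill_order(inventory, "H100", "singapore", 1000))
# → [('dc-alpha', 400), ('dc-beta', 350), ('dc-gamma', 250)]
```

The hotel analogy maps directly: each `Provider` is an independent property, and the marketplace's job is routing one large request across many small pools of availability.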


Ishan: To understand correctly - if there's a student or gamer in Bangalore and a company in the US with idle GPUs, they could connect through your platform?


Gaurav: Exactly. Someone from Thailand or India who wants to train a specific model - whether it's an LSTM or any other type - can use these GPUs. Because it's a rental-based model, it's more economical than traditional providers.


Ishan: What do you think about the race between frontier models right now - from Llama to OpenAI to Anthropic?


Gaurav: It's largely speculation at this point. We've taken a significant leap forward in AI capabilities over the past couple of years. While it's unclear which company will ultimately lead the space - it could even be a Web3 player - what's certain is that we'll see tremendous innovation over the next three years.


Ishan: How is IO.NET's governance model structured right now?


Gaurav: We're currently semi-decentralized. We actively listen to our community through weekly AMAs and implement their feedback. Our internal team reviews all user tickets and requests weekly to guide our development priorities. Our community engagement primarily happens through X (formerly Twitter), Discord, and our AMAs, with over half a million followers across platforms.


Ishan: What technical challenges did you face while developing this platform, given it's a novel concept without existing decentralized AI architectures?


Gaurav: Our rapid scaling presented both opportunities and challenges. When I joined, the platform was designed for 100,000 GPUs, but we quickly needed to handle millions. This required significant architectural changes to manage security, stability, and scalability. The founder recognized the need for experienced leadership in building scalable platforms, which led to hiring me and allowing me to build a team of experienced professionals from companies like Amazon, VMware, and top AI researchers.


The key was having people who had previously built similar scalable systems. We've assembled a team including PhDs in machine learning and veterans from major tech companies, all focused on solving these complex technical challenges while maintaining the decentralized nature of the platform.


Ishan: Tell us more about the team's background: how the journey started, what the first idea was, and any pivots before arriving at this model. What future do you see for IO.NET in the next 1-2 years?


Gaurav: I joined about seven months ago, roughly three to four months after the company was founded. From day one, the vision was to create a hybrid of DeFi and AI platforms to enable builders to create models. When I joined, the founders and I aligned on a crucial strategy - we needed to offer something that would be extremely difficult for competitors to match. We identified GPU sourcing at competitive prices as that key differentiator.


While other crypto platforms might offer similar pricing, they struggle with scale. If you ask them for 1,500 GPUs, they often can't deliver because their business model isn't truly decentralized. Even if they create smart contracts, if they own their own data centers, scaling becomes extremely challenging. It's the same problem Azure faces - you can't claim to be decentralized just by adding smart contracts on top of centralized infrastructure.


Ishan: Software development is always challenging. When developing this platform, which is truly novel since there are no decentralized AI architectures for GPU hosting right now, what technical problems did you encounter?


Gaurav: We faced an interesting challenge of scaling much faster than anticipated - a good problem from a business perspective but tricky from an engineering standpoint. Imagine building a platform for 100,000 GPUs and suddenly needing to handle half a million or more. During airdrops, we faced massive influxes of users and potential Sybil attacks while scaling rapidly.


Creating a secure, stable platform that could handle 50-100 clusters simultaneously, with no bottlenecks, while allowing for rapid supply additions of 1,000 GPUs per minute - these were significant challenges. The founder recognized that while he could build the company to a certain level, taking it further required people with experience building scalable platforms and businesses.
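One standard technique for absorbing the kind of traffic bursts described above (such as an airdrop influx) is a token-bucket rate limiter. The sketch below is illustrative only; the parameters are invented and this is not IO.NET's actual implementation:

```python
class TokenBucket:
    """Token-bucket rate limiter: a sketch of one standard technique for
    absorbing request bursts, not IO.NET's actual implementation."""

    def __init__(self, rate, capacity, now=0.0):
        self.rate = rate          # tokens refilled per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity    # bucket starts full
        self.last = now

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# A burst of 150 requests at t=0 against a bucket that allows bursts of
# 100 and refills at 10 requests/second: only the first 100 get through.
bucket = TokenBucket(rate=10, capacity=100)
accepted = sum(bucket.allow(now=0.0) for _ in range(150))
print(accepted)                # 100
# Half a second later, 5 tokens have been refilled, so requests flow again.
print(bucket.allow(now=0.5))   # True
```

The same shape of mechanism (per-identity quotas with capped bursts) is also a common first line of defense against Sybil-style floods, since each identity only gets a bounded share of throughput.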


That's what I respect about him - he acknowledged this need and gave me the authority to build the right team. We've brought in talent from Amazon, VMware, and various other top companies. We have PhDs in machine learning, product experts from major tech companies - you can verify these backgrounds on our website.


The founders supported this approach, understanding that turning the product into a real business required people who had done it before. Their support in this transition has been crucial to our success.



Vested Interest Disclosure: This author is an independent contributor publishing via our business blogging program. HackerNoon has reviewed the report for quality, but the claims herein belong to the author. #DYOR