Straightforward answers to general queries.

What is io.net's mission, and what are you working towards?

io.net is a decentralized GPU network designed to give unlimited computing power to ML applications. We make computing more scalable, accessible, and efficient. Our mission is to unlock fair access to computing power by assembling 1 million+ GPUs from independent data centers, crypto miners, and crypto projects such as Filecoin or Render.

How big is the GPU shortage? How is io.net solving it?

The major cloud providers currently have around 10-15 exaFLOPS of GPU compute capacity available. However, given the surging demand for AI/ML model training and inferencing workloads, the potential demand for GPU compute in the cloud could be as high as 20-25 exaFLOPS.

This suggests the current shortage in cloud GPU capacity is likely in the range of 5-10 exaFLOPS. In other words, cloud GPU capacity may need to expand 2X-3X over current levels in order to fully meet user demand.

Given the long lead time required to expand GPU supply, this problem is unlikely to resolve any time soon. io.net is solving it by tapping underutilized GPU sources outside of the cloud, such as:

  • Independent data centers: There are thousands of independent data centers in the US alone, and their average utilization rate is only 12% to 18%
  • Crypto miners: Miners have suffered significant losses with Ethereum’s switch to Proof-of-Stake
  • Consumer GPUs: Consumer GPUs account for 90% of the total supply, yet the majority of these resources lie latent in consumer households and smaller cloud farms

Combined, these sources are estimated to provide an additional 200 exaFLOPS of capacity.

How is io.net different from AWS?

io.net offers a fundamentally different approach to cloud computing. By leveraging a distributed and decentralized model, we give users more control and flexibility, and our services are permissionless and cost-efficient. The combination of these factors puts io.net in its own league of decentralized providers.

How/Why is io.net cheaper or faster than other providers like AWS?

io.net is orders of magnitude cheaper and faster than current solutions.

Leveraging underutilized sources such as independent data centers, crypto miners and consumer GPUs allows us to offer compute for up to 90% cheaper than traditional cloud providers.

We are also much faster, as creating distributed clusters through traditional cloud providers is a time-consuming process. Companies like AWS often ask for detailed KYC information, require long-term contracts, and maintain waitlists for the most sought-after hardware.

As such, obtaining GPU compute from the cloud can often take weeks. io.net, on the other hand, does not impose such restrictions, and users can access supply and deploy clusters in under 90 seconds.

Ultimately, the combination of speed and cost allows io.net to be 10x to 20x more efficient than traditional cloud offerings.

What is a DePIN, and how does io.net fit in?

DePIN, or Decentralized Physical Infrastructure Network, leverages blockchains, IoT, and the greater Web3 ecosystem to create, operate, and maintain real-world physical infrastructure. These networks use token incentives to coordinate, reward, and safeguard members of the network. io.net is the first and only GPU DePIN. We are optimized for machine learning but suitable for all GPU use cases: we connect computing power providers with users, offering accessibility and profit for everyone involved.

What type of GPUs does io.net offer?

  • We offer a wide range of:
    • GPUs, including the NVIDIA RTX series and the AMD Radeon series;
    • CPUs, including Intel, AMD, and the Apple M2 chip with its unparalleled neural engine.
  • Please refer to our pricing page to see the full list of supported GPUs, and contact our support team if your hardware is not listed.
  • Our minimum requirements are:
    • 12+ GB of RAM;
    • 500+ GB of free disk space;
    • Internet speed: 500+ Mbps download and 250+ Mbps upload, with < 30 ms ping.

Test your internet from here:
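As a quick self-check against the minimums above, a supplier could run something like the following sketch. The thresholds are taken from the list of requirements; the helper itself is illustrative, not an official io.net tool:

```python
def meets_minimums(ram_gb, free_disk_gb, down_mbps, up_mbps, ping_ms):
    # Thresholds mirror the stated minimums: 12+ GB RAM, 500+ GB free disk,
    # 500+ Mbps down, 250+ Mbps up, under 30 ms ping.
    return (
        ram_gb >= 12
        and free_disk_gb >= 500
        and down_mbps >= 500
        and up_mbps >= 250
        and ping_ms < 30
    )

print(meets_minimums(16, 600, 600, 300, 20))  # True
print(meets_minimums(8, 600, 600, 300, 20))   # False: not enough RAM
```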

Why is io.net needed for machine learning?

  • io.net is natively built on top of Ray, a machine learning framework for distributed computing and the same framework OpenAI used to train GPT-3 across 300k CPUs and 20k GPUs. You can use io.net to distribute your AI and Python applications, from reinforcement learning and deep learning to tuning and model serving, across an extensive grid of GPUs.
  • We are pipelined to support all the frameworks ML engineers use to distribute their workloads, such as Anyscale, PyTorch FSDP, TensorFlow, Predibase, etc.
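The fan-out/gather pattern these frameworks provide can be sketched with Python's standard library alone. This is a stdlib analogue of Ray-style remote tasks, not io.net code; `score_shard` and the thread pool are illustrative stand-ins for real GPU workloads and workers:

```python
from concurrent.futures import ThreadPoolExecutor

def score_shard(shard_id: int) -> int:
    # Stand-in for a real workload, e.g. scoring one data shard on a worker.
    return shard_id * shard_id

def run_distributed(shard_ids):
    # Fan tasks out across workers and gather ordered results, mirroring
    # Ray's futures = [task.remote(i) for i in ids]; ray.get(futures).
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(score_shard, shard_ids))

print(run_distributed(range(5)))  # [0, 1, 4, 9, 16]
```

On an io.net cluster the pool of local threads would be replaced by remote GPU workers, but the shape of the program stays the same.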

Who are your target customers?

Ultimately, anyone looking to create or operate an ML model or AI app is a potential customer. Given the explosion of “no-code tools” like Predibase and user-friendly model creation platforms like Hugging Face, this will eventually be a massive market.

How do you manage availability and allocation to users across your global network of GPUs?

io.net connects a global network of clients to a global network of suppliers. We deploy our container on each worker machine, allowing the Virtual Network to integrate and monitor every device's availability across the network. Our algorithm intelligently groups resources matching the selections made by the engineer and glues them into a cluster, all within 90 seconds. Our networking solution has been thoroughly tested and found reliable.
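A highly simplified sketch of what such resource matching could look like follows. The `Worker` fields and selection criteria are hypothetical; io.net's actual matching algorithm is not described here:

```python
from dataclasses import dataclass

@dataclass
class Worker:
    gpu_model: str
    region: str
    bandwidth_mbps: int
    available: bool

def match_workers(pool, gpu_model, region, min_bandwidth_mbps, count):
    """Pick `count` available workers matching the engineer's selections."""
    candidates = [
        w for w in pool
        if w.available
        and w.gpu_model == gpu_model
        and w.region == region
        and w.bandwidth_mbps >= min_bandwidth_mbps
    ]
    # Prefer the best-connected workers first.
    candidates.sort(key=lambda w: w.bandwidth_mbps, reverse=True)
    return candidates[:count]
```

A real scheduler would also weigh price, compliance level, and current load, but the core idea is the same: filter the live pool by the client's selections, then glue the best matches into a cluster.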

What is the connectivity requirement for suppliers?

  • We offer clients different tiers of connectivity, from low to ultra-high. While our absolute minimum connectivity requirement is 250 Mbps, we strongly encourage suppliers to support at least 1 Gbps download and upload speeds to remain attractive to our clients.
  • We expect data traffic to average 5 GB/hour.

How flexibly can clients create their GPU clusters?

Clients can create their cluster with unmatched flexibility through a set of selections and options: cluster type by use case; sustainability (e.g., “Green GPUs” powered by 100% clean energy); geographic location; security compliance level (SOC2, HIPAA, end-to-end encryption); connectivity tier; and cluster purpose (we currently support Ray apps, but we are expanding into other use cases). Our out-of-the-box configuration requires no additional setup by our clients to deploy the cluster.
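Put together, a cluster request might look something like the sketch below. The field names and values are purely illustrative and do not reflect io.net's actual API:

```python
# Hypothetical cluster request covering the selections described above.
cluster_request = {
    "use_case": "ml-training",
    "purpose": "ray-app",            # currently supported cluster purpose
    "gpu_model": "NVIDIA RTX 4090",
    "gpu_count": 8,
    "location": "us-east",
    "sustainability": "green",       # 100% clean-energy GPUs
    "compliance": ["SOC2"],          # e.g. SOC2, HIPAA
    "encryption": "end-to-end",
    "connectivity_tier": "ultra-high",
}
```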

What sort of pricing model do you have? Are there different pricing tiers based on GPU model / performance?

Prices are determined automatically based on supply and demand; GPU specs, such as internet speed, GPU make and model, and security/compliance certifications, also affect pricing. For example, top-tier enterprise-grade GPUs with SOC2 compliance and > 2 Gbps connectivity will command higher prices than consumer-grade GPUs without SOC2 compliance and slower connectivity.
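A minimal sketch of this kind of pricing follows: a supply/demand base rate adjusted by spec premiums. The multiplier values are invented for illustration and are not io.net's actual figures:

```python
def hourly_price(base_rate, demand_ratio, soc2=False, bandwidth_gbps=0.5):
    # Supply/demand sets the base price; spec premiums then adjust it.
    price = base_rate * demand_ratio
    if soc2:
        price *= 1.5   # hypothetical compliance premium
    if bandwidth_gbps > 2:
        price *= 1.25  # hypothetical ultra-fast connectivity premium
    return round(price, 2)

print(hourly_price(1.0, 2.0))             # demand doubles the base rate
print(hourly_price(2.0, 1.0, soc2=True))  # SOC2 hardware earns a premium
```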