
Technical business insights

Do you have proof of compute / verification? What kind of proof do you use?

For a protocol, verification is typically done in three ways. First, through validators that randomly replicate compute jobs on the network and check whether the results match those reported by participants. Second, through a reward-and-punishment system that discourages participants from submitting false results, since the compute is done off-chain. Third, through proof of learning, which is essentially an anti-cheat mechanism in which participants provide logs of their compute process, among other steps. This proof is not yet mature and remains theoretical, largely because of the state dependency of deep learning models: each layer in a model takes as input the output of the previous layer, so to validate that work has been completed at a specific point, all work up to and including that point must be re-performed. A model like this faces many more obstacles besides.
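The validator approach described above can be sketched in a few lines. This is an illustrative toy, not any protocol's actual implementation: the sampling rate, job structure, and `run_job` stand-in are all assumptions.

```python
import random

# Toy sketch of validator spot-checking: validators replicate a random
# fraction of compute jobs and compare against the result each
# participant reported. SAMPLE_RATE and the job format are illustrative.

SAMPLE_RATE = 0.25  # fraction of jobs a validator replicates

def run_job(job):
    """Stand-in for the real compute; here just a deterministic function."""
    return sum(x * x for x in job["inputs"])

def spot_check(jobs, reported, rng):
    """Replicate a random sample of jobs; return IDs whose results mismatch."""
    sampled = [j for j in jobs if rng.random() < SAMPLE_RATE]
    return [j["id"] for j in sampled if run_job(j) != reported[j["id"]]]

jobs = [{"id": i, "inputs": list(range(i + 1))} for i in range(8)]
reported = {j["id"]: run_job(j) for j in jobs}
reported[3] += 1  # one participant submits a false result

cheaters = spot_check(jobs, reported, random.Random(42))
```

A flagged ID would then feed the reward-and-punishment layer; note a cheater is only caught in rounds where its job happens to be sampled, which is why the economic penalty has to outweigh the expected gain from cheating.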

For io.net: we see no need to reinvent the wheel, as the existing model is sufficient with our improvements on top of it.
Our compute provisioning follows an hourly mechanism in which users book clusters for a specific period of time. Because this flow is time-based rather than compute-based, all we need to prove is that each GPU's compute power is fully committed during the period it is rented. We achieve that with our own proof: Proof of Time-Lock. It is essentially a proof that, within the window in which a process was supposed to execute on a device, no other processes or threads were being used by other services or apps on that GPU. In other words, we can prove that between T1 and T2 the GPU is fully committed to whatever task the engineer wants to compute. The proof consists of multiple steps: benchmarking consumption, monitoring containers, eliminating any foreign processes, and applying a reward-and-punishment system so that all workers remain compliant. And notably, all of this will be done by our own AI, which is fine-tuned with every cluster booked to ensure fairness and a trustless environment across the entire flow.
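The core of the Time-Lock idea, stripped of benchmarking and enforcement, reduces to one check: every process observed on the GPU during [T1, T2] must belong to the renter. The sketch below is a hypothetical illustration of that check on pre-collected samples; the field names and sampling approach are assumptions, not io.net's actual agent API (which in practice would query the GPU driver and container runtime directly).

```python
# Hypothetical sketch of a "Proof of Time-Lock" check: the GPU's process
# list is sampled during the rental window [t1, t2], and the proof holds
# only if every sampled process belongs to the renter's workload.

def time_lock_holds(samples, t1, t2, allowed_owner):
    """samples: list of (timestamp, process_owner) observations on the GPU."""
    in_window = [owner for ts, owner in samples if t1 <= ts <= t2]
    return all(owner == allowed_owner for owner in in_window)

samples = [
    (100, "renter-job"),    # renter's workload
    (160, "renter-job"),
    (220, "crypto-miner"),  # a foreign process slipped in
    (280, "renter-job"),
]

ok = time_lock_holds(samples, 100, 180, "renter-job")        # clean window
violated = time_lock_holds(samples, 100, 300, "renter-job")  # foreign process seen
```

A violation in the real system would trigger the punishment side of the reward system described above.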

How do you get around the latency problem?

  • With our flexible system, our algorithm intelligently groups resources by connectivity speed, geolocation, and hardware specs to eliminate bottlenecks and reduce latency.
  • Our distribution technology, built on Ray and mesh networking, ensures data can travel along multiple paths, increasing redundancy and fault tolerance and improving load distribution, which minimizes latency.
  • Maximizing security through VPNs usually means sacrificing some latency; we get around that by using a kernel-level VPN that follows one of the most secure mesh VPN protocols without compromising network latency.
  • The majority of our supply is hosted in Tier 3 and Tier 4 data centers and high-end mining facilities, so latency is not an issue for such infrastructure. Our benchmarks showed that more than 40 percent of our supply has higher internet speed than Lambda Labs' cloud.
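The grouping described in the first bullet can be sketched as bucketing GPUs by region, hardware model, and a bandwidth tier, so that a cluster is only assembled from machines unlikely to bottleneck each other. The keying scheme and thresholds below are illustrative assumptions, not the actual matching algorithm.

```python
from collections import defaultdict

# Illustrative sketch: bucket GPUs into candidate low-latency clusters
# by (region, model, bandwidth tier). Thresholds are assumptions.

def cluster_key(gpu, bandwidth_tiers=(100, 500, 1000)):
    # Tier = number of Mbps thresholds this GPU's link meets (0..3).
    tier = sum(gpu["mbps"] >= t for t in bandwidth_tiers)
    return (gpu["region"], gpu["model"], tier)

def group_gpus(gpus):
    groups = defaultdict(list)
    for gpu in gpus:
        groups[cluster_key(gpu)].append(gpu["id"])
    return dict(groups)

fleet = [
    {"id": "a", "region": "us-east", "model": "A100", "mbps": 900},
    {"id": "b", "region": "us-east", "model": "A100", "mbps": 950},
    {"id": "c", "region": "eu-west", "model": "A100", "mbps": 900},
    {"id": "d", "region": "us-east", "model": "4090", "mbps": 120},
]
groups = group_gpus(fleet)
```

Here "a" and "b" land in the same bucket and could be clustered together, while "c" (different region) and "d" (different model and slower link) would not be mixed in.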

How do you actually parallelize? / How are you connecting all the GPUs together?

Distribution and decentralization: leveraging Ray, with its specialized libraries for data streaming, training, fine-tuning, hyperparameter tuning, and serving, together with our own technology and mesh VPN, simplifies developing and deploying large-scale AI models over a massive grid of GPUs.
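The pattern Ray provides here is fanning a function out over shards of data and gathering the results back on the driver; in Ray proper this is `@ray.remote` tasks plus `ray.get()`. As a self-contained stand-in using only the standard library, the sketch below shows the same data-parallel shape with a thread pool; the shard/gradient names are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for Ray's task-parallel pattern: map a function over data
# shards in parallel, then combine the results on the driver, as a
# data-parallel training step would average per-shard gradients.

def local_gradient(shard):
    # Toy per-shard work: pretend the "gradient" is the shard's mean.
    return sum(shard) / len(shard)

def data_parallel_step(shards):
    with ThreadPoolExecutor() as pool:
        grads = list(pool.map(local_gradient, shards))
    return sum(grads) / len(grads)

avg_grad = data_parallel_step([[1.0, 2.0, 3.0], [7.0, 8.0, 9.0]])
```

With real model training, each shard would live on a different GPU in the cluster and the mesh VPN would carry the gradient exchange between nodes.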

How do you preserve data privacy and security?

Our IO agent ensures that unauthorized containers are not running on a hired GPU, eliminating that class of risk. When a node is hired, data passing between one worker node and another is encrypted in the Docker filesystem, and any network traffic travels over a mesh VPN, which ensures maximum security. We also prioritize suppliers with SOC 2 compliance and continue to stress the importance of SOC 2 compliance with our suppliers.
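The agent-side container check described above amounts to diffing the node's running containers against the set the renter authorized. This is a hypothetical sketch of that comparison only; in practice the agent would query the container runtime (e.g. the Docker API) for the running list, and the names here are invented for illustration.

```python
# Hypothetical sketch: flag any container on a hired node that is not
# on the renter's allowlist. The container names are illustrative.

def unauthorized(running, allowed):
    """Return the sorted list of containers that should not be running."""
    return sorted(set(running) - set(allowed))

running = ["io-agent", "renter-workload", "mystery-miner"]
allowed = {"io-agent", "renter-workload"}
flagged = unauthorized(running, allowed)
```

Anything flagged would be terminated and counted against the supplier under the compliance system described earlier.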