RAVEN PROTOCOL

Since 2017
Raven is creating a network of compute nodes that puts idle compute power to work on AI training, where speed is key. AI companies will be able to train models better and faster. We have developed a completely new approach to distribution that brings a training run over 1M images down to a few hours.

We solve latency by chunking the data into very small pieces (on the order of bytes), preserving each chunk's identity, and distributing the chunks across the host of devices with a single call to action: gradient calculation (see the first sketch below). Other solutions require high-end compute power; our approach has no dependency on the system specs of any individual compute node in the network. That lets us use idle compute power on ordinary desktops, laptops, and mobile devices, allowing anyone in the world to contribute to the Raven Protocol network and bringing costs down to a fraction of what traditional cloud services charge. Most importantly, it means Raven will create the first truly distributed and scalable solution for AI training by speeding up the training process itself.

Our consensus mechanism is something we call Proof-of-Calculation. It is the primary guideline for regulating and distributing incentives to the compute nodes in the network. Two factors decide how the incentive is distributed:

Speed: how quickly a node can perform a gradient calculation (in a neural network) and return the result to the Gradient Collector.

Redundancy: only the 3 fastest of the redundant calculations for a given piece of work qualify for the incentive. This ensures that the gradients being returned are genuine and of the highest quality.
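To make the chunk-and-collect flow concrete, here is a minimal, self-contained Python sketch of the distribution idea described above. It is an illustration only: the names (Chunk, GradientCollector, fake_gradient), the chunk size, and the simulated gradient values are assumptions, and in the real network the chunks would fan out to remote devices rather than a local loop.

```python
# Minimal sketch (not Raven's actual implementation): chunk a training
# batch into small, identity-tagged pieces, fan them out for gradient
# calculation, and gather the per-chunk results at a collector.
# All names here are hypothetical.
import random
from dataclasses import dataclass
from typing import Dict, List

@dataclass(frozen=True)
class Chunk:
    chunk_id: int          # identity is preserved so results can be matched up
    data: bytes            # a very small slice of the training batch

def make_chunks(batch: bytes, chunk_size: int = 64) -> List[Chunk]:
    """Split a batch into small chunks, each tagged with its own id."""
    return [Chunk(i, batch[i:i + chunk_size])
            for i in range(0, len(batch), chunk_size)]

def fake_gradient(chunk: Chunk) -> List[float]:
    """Stand-in for the real gradient calculation a node would run."""
    random.seed(chunk.chunk_id)          # deterministic per chunk for the demo
    return [random.uniform(-1, 1) for _ in range(4)]

class GradientCollector:
    """Gathers per-chunk gradients and averages them into one update."""
    def __init__(self) -> None:
        self.results: Dict[int, List[float]] = {}

    def submit(self, chunk_id: int, grad: List[float]) -> None:
        self.results[chunk_id] = grad

    def aggregate(self) -> List[float]:
        grads = list(self.results.values())
        return [sum(col) / len(grads) for col in zip(*grads)]

if __name__ == "__main__":
    batch = bytes(range(256)) * 4             # pretend training batch
    collector = GradientCollector()
    for chunk in make_chunks(batch):          # in Raven, this fan-out spans many devices
        collector.submit(chunk.chunk_id, fake_gradient(chunk))
    print("aggregated gradient:", collector.aggregate())
```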
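The Proof-of-Calculation incentive rule can likewise be sketched in a few lines. Assuming that "redundant" means several nodes compute the same piece of work and that a submission counts as genuine when it matches the majority result, the qualifying nodes are simply the three fastest honest submitters. Everything below, including the majority check, is an illustrative reading of the description above rather than Raven's published specification.

```python
# Illustrative sketch of the Proof-of-Calculation incentive rule: each
# chunk's gradient is computed redundantly by several nodes, and only the
# 3 fastest submissions whose results agree with the majority earn the
# reward. Names and the matching rule are assumptions for this example.
from collections import Counter
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Submission:
    node_id: str
    latency_ms: float              # how fast the node returned the gradient
    gradient: Tuple[float, ...]

def reward_nodes(submissions: List[Submission], winners: int = 3) -> List[str]:
    """Return the node ids that qualify for the incentive on one chunk."""
    # Treat the most common gradient value as the genuine result.
    genuine, _ = Counter(s.gradient for s in submissions).most_common(1)[0]
    honest = [s for s in submissions if s.gradient == genuine]
    # Of the honest submissions, only the fastest `winners` are rewarded.
    honest.sort(key=lambda s: s.latency_ms)
    return [s.node_id for s in honest[:winners]]

if __name__ == "__main__":
    subs = [
        Submission("node-a", 120.0, (0.1, -0.2)),
        Submission("node-b",  95.0, (0.1, -0.2)),
        Submission("node-c", 300.0, (0.1, -0.2)),
        Submission("node-d",  80.0, (9.9,  9.9)),   # bogus result: fast, but rejected
        Submission("node-e", 150.0, (0.1, -0.2)),
    ]
    print(reward_nodes(subs))   # ['node-b', 'node-a', 'node-e']
```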

