
Versatus: Aiming to Onboard the First 1 Million Developers to Web3, Reveals CEO

by Ishan Pandey, October 23rd, 2023

Too Long; Didn't Read

Andrew Smith discusses the inception of Versatus during the 2017 ICO boom, its aim to address the blockchain trilemma, and the need for a robust developer ecosystem to drive competition and innovation in the Web3 space. Versatus is designed to lower the barriers to entry for developers transitioning from Web2 to Web3, with a goal of onboarding the first 1 million developers to Web3. The platform has reached notable milestones, such as supporting 54 programming languages, and initially aims to cover 90% of the developer market.


In a recent conversation with Ishan Pandey, Andrew Smith, Founder and CEO of Versatus, recounted the origin story of his venture, its mission in decentralized compute, and the challenges and strategies involved in attracting and retaining developer talent in the blockchain and Web3 space. He also shared his view of the 'blockchain trilemma' and how Versatus aims to tackle it, while elaborating on the platform's accomplishments and plans to expand its capabilities.

Versatus’s Future Expansion Plans Unveiled: Aiming to Cover 90% of the Developer Market

Ishan Pandey: Hi Andrew, what motivated you to start Versatus, and how did your background in scalable distributed systems influence your vision for the company?


Andrew Smith: It originally started in 2017 during the ICO boom, when Ethereum fees skyrocketed. My background in building resilient distributed systems and the conversation around the so-called "blockchain trilemma" were key influences. Thinking about and experimenting with concepts that could potentially break the trilemma was the original motivation. During that journey, I noticed a shortage of developers and a lack of diverse application-layer innovation. It seemed everyone was working on some version of the same thing.


It became clear that while scaling blockchains is extremely important, building a large, robust developer ecosystem that drives competition and innovation is arguably more important. Killer applications come about in mysterious ways. We all know Facebook started as an Ivy League dating application and Slack started as an internal messaging system for a game development studio. Outside of email and websites, we didn't get multiple killer applications for the internet until there were millions of developers building and trying things. Some of those things will not be possible without scalable systems, so scaling infrastructure cannot be abandoned. But even if we had ultra-scalable chains, there aren't enough developers for us to expect multiple killer apps. Ultimately, killer apps will drive mass adoption. We've basically captured the speculator/gambler market, a market with low retention for obvious reasons. If we want high-retention killer applications, we need hundreds of thousands to millions of developers trying a lot of different things.


Ishan Pandey: Can you provide an overview of Versatus and its mission in the decentralized compute innovation space?


Andrew Smith: Versatus is a decentralized compute stack that enables Web2 developers to transition seamlessly to Web3 and build without barriers. Any language. Any chain. Any purpose. Our mission is to onboard the first 1 million developers to Web3, and we will accomplish this by providing the most versatile developer experience in all of Web3, for both smart contracts and general compute.


We see ourselves as the global development interface for blockchains. An imperfect analogy, but one that I think helps people understand why we're doing what we're doing, is cloud compute. If you think of blockchains and the networks of nodes maintaining them as the next generation of data centers, what makes the data centers owned by cloud providers valuable is not the hardware itself but the interfaces they develop to give developers and users easy access to those data centers. Similarly, what will make blockchains valuable is not the infrastructure itself but the interfaces they provide for developers to build on top of that infrastructure.

The 'Blockchain Trilemma' and Versatus's Strategy

Ishan Pandey: Could you please delve into the concept of the 'Blockchain Trilemma,' outlining its key components, and then elaborate on how Versatus has strategically positioned itself to tackle these challenges?


Andrew Smith: At a very high level, the blockchain trilemma states that blockchain developers can achieve only two of the following three: security, decentralization, and speed. It was originally articulated by Vitalik Buterin, and it was clearly influenced by the CAP theorem in distributed systems, which states that designers of distributed systems can choose only two of consistency, availability, and partition tolerance. The truth is that the CAP theorem has been superseded by the PACELC theorem, which states that in a distributed system, if you have a partition you must choose between consistency and availability; otherwise (Else), you must choose between latency and consistency. I think the blockchain trilemma is wrong: you don't have to choose two of the three among decentralization, speed, and security.


There are more options, and we have seen them implemented in real life: sharding, parallel execution, vertical scaling, and efficient use of bandwidth. I think the better way to approach it is this: given decentralization and security, which should be prerequisites, you either partition the system (in which case availability vs. consistency applies) or you choose between latency and consistency. Obviously, chains that choose sharding choose to partition the chain, and must then choose between high availability and consistent views across the various shards. Chains that choose parallel execution choose between a single consistent view, which introduces latency, and accepting asynchronous or eventual consistency.
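To make the PACELC framing concrete, here is a minimal sketch in Python. It is purely illustrative: the PacelcProfile type and the example classifications are my own assumptions, not anything from Versatus or a real library; the Dynamo-style and Spanner-style labels follow the classifications commonly cited alongside Abadi's PACELC paper.

```python
from dataclasses import dataclass
from enum import Enum

class OnPartition(Enum):
    AVAILABILITY = "A"
    CONSISTENCY = "C"

class Otherwise(Enum):
    LATENCY = "L"
    CONSISTENCY = "C"

@dataclass(frozen=True)
class PacelcProfile:
    """PACELC: if a Partition occurs, trade Availability vs. Consistency;
    Else (normal operation), trade Latency vs. Consistency."""
    on_partition: OnPartition
    otherwise: Otherwise

    def label(self) -> str:
        return f"P{self.on_partition.value}/E{self.otherwise.value}"

# Hypothetical, commonly cited classifications (illustrative only):
dynamo_style = PacelcProfile(OnPartition.AVAILABILITY, Otherwise.LATENCY)
spanner_style = PacelcProfile(OnPartition.CONSISTENCY, Otherwise.CONSISTENCY)

print(dynamo_style.label())   # PA/EL
print(spanner_style.label())  # PC/EC
```

Under this framing, a sharded chain is one that chooses to partition (the P row of the trade-off), while a parallel-execution chain is making the Else-case latency-vs-consistency choice.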

Challenges in Attracting and Retaining Blockchain and Web3 Developers

Ishan Pandey: You mentioned the shortage of developers in the blockchain and Web3 space. Could you elaborate on the challenges you've observed in attracting and retaining talent in this field? How does Versatus plan to attract and onboard developers, including those from Web2, and what strategies are in place for fostering developer engagement?


Andrew Smith: The primary challenge, currently, is convincing developers that the opportunity of building in Web3 is worth the barriers to entry and the burdens placed on them. Those barriers are significant. Learning a new programming language is a significant time cost. Now, to be fair, and I can already hear the Solidity devs saying "it's not that hard": it's true, learning Solidity is not that hard for an experienced developer when it comes to learning the syntax, the types, and how to write functions within the framework of Solidity. It is, however, very difficult to master, and to write contracts that are not left vulnerable to exploitation, and since in many cases your customers are entrusting your smart contract with their hard-earned money, that's no laughing matter. And even if it weren't that difficult to learn, Solidity is ultimately a domain-specific language that you will only ever be able to do one thing with: write EVM-compatible smart contracts.
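To make the exploitation risk concrete, here is a minimal sketch of the classic reentrancy mistake, written in Python rather than Solidity so the logic is visible without EVM details. The VulnerableVault class and the attack callback are hypothetical illustrations, not real contract code.

```python
class VulnerableVault:
    """Toy ledger with a reentrancy-style bug: the external call runs
    BEFORE the balance is zeroed, so a malicious callback can re-enter
    withdraw() and drain more than it deposited."""

    def __init__(self):
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount

    def withdraw(self, user, send_funds):
        amount = self.balances.get(user, 0)
        if amount > 0:
            send_funds(amount)        # external call first (the bug)
            self.balances[user] = 0   # state update happens too late

vault = VulnerableVault()
vault.deposit("attacker", 100)

stolen = []
def malicious_callback(amount):
    stolen.append(amount)
    if len(stolen) < 3:               # re-enter while balance is still nonzero
        vault.withdraw("attacker", malicious_callback)

vault.withdraw("attacker", malicious_callback)
print(sum(stolen))  # 300 "withdrawn" from a 100 deposit
```

The standard fix, known in Solidity as the checks-effects-interactions pattern, is to update the balance before making the external call, so a re-entrant withdraw() finds a zeroed balance.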


Ishan Pandey: In your opinion, what are the key regulatory and governance challenges that may arise as blockchain and AI technologies become more integrated, and how can the industry address these concerns proactively?


Andrew Smith: I am generally in the camp that believes the point of this technology and decentralization is to build unstoppable systems and programs that are not owned or controlled by any one person, and therefore are not really subject to regulatory enforcement. In other words, our goal should be to make regulation technologically obsolete. Doing so brings a massive amount of responsibility, though. It means we need to find ways to insure against the risk of loss from hacks, scams, and system outages; it means we need communities of open-source vigilantes searching for, calling out, and being rewarded for discovering vulnerabilities and malicious actors; and it means we need to implement standards, publicize those standards, and encourage users not to adopt or use anything that doesn't meet them.


Conversations around regulation often start with the assumption that entrepreneurs and builders are at best inherently misguided and at worst malicious, while regulators, on the flip side, are wise, benevolent, and verging on omniscient and omnipresent.


I fundamentally disagree with that assumption. Having good laws in place to punish those who defraud others, neglect their duties, or otherwise commit crimes is necessary, and should be handled by law enforcement. But pre-emptive regulation, in every case I have ever seen or read about, prevents innovation, increases barriers to entry, and as a result typically gives cover to bad actors. With regard to blockchain technology, I think this is an easier conversation to have, as there are potentially ways to create insurance schemes and self-regulatory mechanisms that reduce malicious intent.


Further, using the technology, and having more projects retain their treasuries and manage their business fully on-chain, would increase transparency and lead to significantly fewer bad actors operating in the space; that is a simple standard. The same would hold for traditional corporations: it would reduce audit time and cost, and potentially make many financial regulators obsolete entirely.


With regard to AI, it's a different animal, and it really depends on what kind of AI you are talking about. Right now, most of the discussion around AI concerns large language models and video/image generators, commonly referred to as "generative AI", and I do think there are some dangers there in the intermediate term. I wish I had good answers for how you can protect innovation and freedom while preventing potential crises. Ultimately, in my opinion, it is going to require "good AI" to fight "bad AI" in the future.


A lot of the AI doomers who want to decelerate AI innovation have, in my opinion, either a self-interest in preventing competition or a belief that all AI will become bad AI that wants to dominate humanity. I disagree with that, for now. I think there is a turning point where that conversation needs to be had, and that is when AI provides us with working instructions on how to build self-sustaining, self-replicating nanobots that AI can control. At that point, we should start raising eyebrows, and a simple regulatory standard that says "if you're working on an AI and it directs you to build a self-sustaining, self-replicating nanobot, you must shut it down immediately and file a report with the authorities" would probably handle most of the doomsday scenarios. I'm personally more worried about quantum computers right now. Quantum computers could break most modern encryption, and that will be a major problem. I have yet to see an LLM with the capability to break modern encryption, which is what most doomers today immediately fear.


Don’t forget to like and share the story!


Vested Interest Disclosure: This author is an independent contributor publishing via our brand-as-author program. Be it through direct compensation, media partnerships, or networking, the author has a vested interest in the company/ies mentioned in this story. HackerNoon has reviewed the report for quality, but the claims herein belong to the author. #DYOR