
Superalgos, Part Two: Building a Trading Supermind

by Luis Fernando Molina, January 14th, 2019


Image © Who is Danny, Shutterstock.

A supermind of humans and machines thinking and working together, doing everything required to maximize the group’s collective intelligence so as to minimize the time needed for superalgos to emerge is being built right now.

By Luis Molina & Julian Molina

This article is the second half of a two-piece introduction to Superalgos. We highly recommend reading Part One before diving under the hood!

The Rabbit Hole

It all started when I woke up one day in mid-summer 2017, with the Danube as the backdrop for what would turn into my first serious trading adventure.

Being deep into bitcoin since 2013, I had been fooling around trading crypto as a hobby without any real knowledge for years.

Having finalized my commitments with my latest project, I found myself with time to spare, some bitcoin, and an urge to give trading a serious try.

Earlier that year, I had been spending more time than usual doing manual trading and educating myself on the basics. I had fallen for the illusion that it was quite easy to accumulate profits. I was ready to try automating the primitive strategies I had come up with.

Little did I know I had actually been surfing the start of the biggest bull wave the markets had ever witnessed…

Even Grandma was making money then!

I had spent some time checking out a few of the trading platforms of the moment and several open source projects. It all seemed clunky, overly complex or constrained to the typical approaches to trading.

I had something else in mind.

Swarms of tourists roamed Budapest.

The sun burnt, the gentle spring long gone.

The crisp morning air screamed through my bedroom window… “get out… it’s awesome out here”.

Nonetheless, I knew this restlessness would not subside. At 44, having spent my life imagining and creating the wildest projects, I knew this feeling well.

I was doomed. I had to give it a shot or I wouldn’t stand myself.

I knew I risked getting sucked into a multi-year adventure to which I would give my 100%, 24/7, until I was satisfied. Ideas have that effect on obsessive characters.

I crawled out of bed and sat at the keyboard.

By the end of the day, I was deep down the rabbit hole. Falling.

The Canvas App

Screenshot of the Canvas App

I had envisioned a platform that would provide a visual environment in which I would be able to browse data in a unique way. Besides the typical candlestick charts and indicator functions, I wanted bots to overlay their activities and decision-making clues in a graphic fashion, directly over the charts.

That would allow me to debug strategies visually.

Although exchanges provide both live and historical data, I wanted full control over visualization capabilities, so I knew I needed to extract data from exchanges and store it within the system, so that data could be loaded in very specific ways.

By that time, Poloniex was one of the largest crypto exchanges out there, so I started with them. Still, I knew from day one this platform would be exchange agnostic.

It took me several months and many iterations to craft a process that would extract historical data from Poloniex in a way that guaranteed — no matter what happened to the process — that the resulting data set would be reliable, with no errors, holes or missing parts — ever.

The raw data available at exchanges represents trades. With trades, candles and volumes can be calculated, and with those a constellation of other data structures, usually known as indicators.
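
To give a feel for that step, here is a minimal sketch of how raw trades might be aggregated into candles and volumes. It is written in JavaScript, assumes trades arrive in chronological order as { timestamp, price, amount } objects, and is purely illustrative, not the platform's actual extraction code.

```javascript
// Minimal sketch (not the actual Superalgos code) of aggregating raw trades
// into fixed-interval candles with volumes.

function buildCandles(trades, intervalMs) {
  const candles = new Map();

  for (const trade of trades) {
    const begin = Math.floor(trade.timestamp / intervalMs) * intervalMs;
    let candle = candles.get(begin);

    if (candle === undefined) {
      candle = {
        begin,
        end: begin + intervalMs,
        open: trade.price,
        high: trade.price,
        low: trade.price,
        close: trade.price,
        volume: 0
      };
      candles.set(begin, candle);
    }

    candle.high = Math.max(candle.high, trade.price);
    candle.low = Math.min(candle.low, trade.price);
    candle.close = trade.price;      // trades assumed in chronological order
    candle.volume += trade.amount;
  }

  return [...candles.values()].sort((a, b) => a.begin - b.begin);
}
```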

I realized early on that the amount of data to manage was quite big. The USDT/BTC market on Poloniex, spanning four years of history at that time, consisted of more than 300 million trade records.

Even when grouped into candles, it was a lot of data.

While I was polishing the extraction process, I was also programming the graphical environment to plot the data. I called this part of the platform the Canvas App, as it uses an HTML5 canvas object to render the graphics.

By that time I knew what I was doing was too big for a single developer. I also knew it would be ridiculous to build such a platform for my personal use only.

I was ready to go open source.

I showed what I had to a few trader friends, one of whom — Andreja — would eventually join the project.

I also showed it to Julian, my brother, an entrepreneur himself and always keen on a good adventure, co-author of this series of articles, who would also join later on.

The feedback was varied.

Everyone had their own views.

That led me to believe that I should not constrain the platform to my own personal taste and that I should make it as open as possible. I knew how I wanted to visualize data, but I didn't know how other people might want to do it.

That also meant that the platform needed to enable the visualization and navigation through data sets independently of what they were, what information they contained, or how information would be plotted.

Well, that was a big problem to solve.

How do I create a tool that is able to visualize huge data sets when I don’t know how data sets will be structured or how people will want to graphically represent them on the charts?

Google Maps

Google Maps allows navigation of huge datasets in an amazingly intuitive way.

The inspiration came from Google Maps.

It too has a number of huge data sets, with different layers of different information.

The app allows users to smoothly navigate from one side of the globe to the other just by dragging the map.

Zooming in and out dynamically loads new layers containing more detailed information.

It supports multiple plugins (traffic, street view, etc) and runs them within the same interface.

It has an API that allows embedding the app on any web page.

These were all great features and I wanted them all in the Canvas App.

It took Google several years — and a few acquisitions — to evolve the app into what it is today: one of the most powerful collections of geospatial tools.

In my case, I had to figure out what the common denominators across data sets were in order to give some structure to the user interface. It seemed that most data sets were organized over time.

This is not strictly true in all cases, but it was a good enough approximation to start with.

So the UI needed to manage the concept of time, which meant that instead of navigating locations as in Google Maps, users would navigate through time, back and forth.

Time would be plotted on the X axis — usually the longest axis on the screen.

Zooming in and out would have the effect of contracting and expanding time, not space.

The Y axis was reserved for plotting different markets at different exchanges. I wanted users to be able to easily browse and visually compare markets side by side.
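
To illustrate what navigating time rather than space means in code, here is a hypothetical sketch of mapping timestamps to X coordinates for a visible time range, and of zooming by contracting or expanding that range. The function names are ours, not the platform's.

```javascript
// Hypothetical helpers for a time-based canvas: the visible range defines the
// mapping between timestamps and X pixels, and zooming contracts or expands it.

function timeToPixel(timestamp, visible, canvasWidth) {
  const span = visible.end - visible.begin;
  return ((timestamp - visible.begin) / span) * canvasWidth;
}

function zoom(visible, factor, anchorTimestamp) {
  // factor < 1 shows less time (zoom in), factor > 1 shows more time (zoom out).
  return {
    begin: anchorTimestamp - (anchorTimestamp - visible.begin) * factor,
    end: anchorTimestamp + (visible.end - anchorTimestamp) * factor
  };
}
```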

Since the platform should not constrain the way the user would visualize data, I introduced the concept of a plotter — a piece of code that renders a graphical representation of a certain type of data set, which is separate from the Canvas App.

From the Canvas App point of view, a plotter is like a plug-in.

People would be able to create plotters to render their own data sets. And, why not, let others use those plotters on data sets of the same type.
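
Purely as an illustration of the plug-in idea, and not the platform's actual plotter contract, a candle plotter might expose an interface along these lines, receiving a drawing context, the visible time range and scaling helpers from the Canvas App:

```javascript
// Hypothetical shape of a plotter plug-in. The Canvas App would hand it a 2D
// drawing context, the visible time range and scaling helpers; the plotter
// decides how to render its own type of data set. None of these names are
// part of the real platform API.

function newCandlePlotter(dataSet) {
  return {
    draw(ctx, visible, timeToX, priceToY) {
      for (const candle of dataSet.candlesBetween(visible.begin, visible.end)) {
        const x = timeToX(candle.begin);
        ctx.beginPath();
        ctx.moveTo(x, priceToY(candle.high));   // line from the high...
        ctx.lineTo(x, priceToY(candle.low));    // ...down to the low
        ctx.stroke();
      }
    }
  };
}
```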

It took me several complete code rewrites to bring all of this to reality.

Canvas App v0.1, checked.

Of course, to test all these concepts, I needed proper data sets that I could plot in the Canvas App. Raw trades are, well, insipid?

So, at the same time I was working on the Canvas App, I was developing the first few elaborate data sets: candles, volumes, and others…

It was December, 2017.

Five months free falling.

The Core Team

Conversations with my brother Julian, a partner in many previous ventures, had become more and more frequent.

We had been toying with ideas around the gamification of trading, specialization of bots, bots’ autonomy and a bots’ marketplace.

By December 2017, Julian put to rest everything else he was doing and started working on the project full time.

It took us two months and half a dozen rewrites of a Project Narrative document to distill most of the concepts explained in Part One of this series of articles.

Our friend Andreja, a former hedge fund manager in Switzerland and former pro trader with Goldman Sachs, was getting excited about it all and started offering valuable expert feedback.

In the meantime, before the end of 2017 I made a trip to my hometown, Cordoba, Argentina.

As usual, I got invited to speak at meetups to share my latest adventures in the crypto world. Ten minutes before wrapping up one of the talks, someone asked what I was working on at the moment.

I realised I hadn’t publicly discussed the project before.

I wasn't ready, but I gave it a shot. Those final 10 minutes were the most exciting for the audience.

Matias, a senior developer for an American car manufacturer, approached me offering to help with the project. Six months later he would quit his job to work with us full time.

Back in Budapest, I asked my friend Daniel Jeffries for some help regarding trading strategies. He recommended implementing a simple trending strategy based on LRC, short for Linear Regression Curves.

Matias started developing the LRC indicator and its corresponding plotter along with the trading bot that would use it.

In many trading platforms, indicators are math functions that come in libraries, ready to use. In our case, this didn't fit the way we wanted to visualize data sets, Google Maps style.

Google Maps dynamically loads the data around the location the user is in, considering the zoom level. If the user moves, it will download more data, on demand. If the internet connection is slow or the user moves too fast, the worst case scenario is that the user might see parts of the map pixelated or with information appearing at different zoom intervals. It is clearly a very efficient system.

In our platform the experience is similar, only that the Canvas App is still in its infancy and doesn’t have the iterations required to remove all the rough edges.

Indicators may require data close to the user’s position in the timeline or very far from it, or maybe even all of it.

This made it impossible to use indicators as plain functions, since there would be no way to render them in Google Maps style: it would require loading big chunks of data and calculating the indicator on the fly. Every little move along the timeline would require more calculations after loading the data.

Now, this may not sound like a problem if you are thinking of one bot or one indicator.

However, we knew we wanted to be able to plot large volumes of data in a multi-user environment, with tens of bots on the screen concurrently, and still enjoy the Google Maps navigational characteristics.

The solution to this problem was to precalculate the indicators in the cloud and output a new data set with the results of those calculations. That would allow the platform to load only small chunks of indicator data around the user's position on the timeline.

Matias experienced this first hand, as calculating Linear Regression Curves requires analyzing information beyond the current position.

His first implementation was a function, but he later switched to a process that takes candles as input and outputs the data points of the curve.
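
To make the function-versus-process distinction concrete, here is a rough sketch of an indicator run as a batch process: candles in, curve points out. It is a plain least-squares fit over a sliding window of closing prices, offered only as an approximation of the idea, not the project's actual LRC code.

```javascript
// Sketch of an indicator as a batch process: candles in, curve points out.
// For each candle, fit a least-squares line over the previous `period` closes
// and record the fitted value at the current position.

function linearRegressionCurve(candles, period) {
  const points = [];

  for (let i = period - 1; i < candles.length; i++) {
    const window = candles.slice(i - period + 1, i + 1);

    // x = 0..period-1, y = closing prices; standard least-squares fit.
    let sumX = 0, sumY = 0, sumXY = 0, sumXX = 0;
    window.forEach((candle, x) => {
      sumX += x;
      sumY += candle.close;
      sumXY += x * candle.close;
      sumXX += x * x;
    });

    const n = period;
    const slope = (n * sumXY - sumX * sumY) / (n * sumXX - sumX * sumX);
    const intercept = (sumY - slope * sumX) / n;

    // Record the fitted value at the current candle's position.
    points.push({ begin: candles[i].begin, value: intercept + slope * (period - 1) });
  }

  return points;   // saved as a new data set, ready for a plotter to render
}
```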

By March 2018 we were ready to start working on the front end, so Julian invited Barry, a graphic designer and front-end dev based in California whom we knew from previous projects. After some time toying with branding elements and some coding, he closed shop for his freelance clients and started working with us full time too.

Andreja finally joined the core team in the second half of 2018, bringing his friend Nikola — at 26, the youngest member of the core team, and — most likely — the strongest developer.

Financial Beings

Four bots competing during the last 7 days of January 2019. The chart plots in real-time the ROI of each bot.

Trading bots are — at their core — algorithms that decide when to buy, or when to sell the assets under management based on a certain strategy. Evolution would be the mechanism to increase their intelligence.

At some point we came to understand that there were other kinds of computer programs in our ecosystem that would benefit from the paradigm of financial life.

Take, for example, the processes which extract data from exchanges.

Or the processes which digest data sets to produce other data sets — indicators.

Should they be financial beings too?

And what about the plotters used to render data sets in a graphical environment?

Becoming financial beings would confer on these entities all the virtues of the evolutionary model. They too would improve with time.

Note that this does not mean these entities need to compete in trading competitions. They do not trade. But still, they would compete to offer quality services in a marketplace — a data marketplace.

This internal data marketplace is a fundamental piece of the ecosystem. Why? Because quality data is a crucial aspect of the decision-making process of trading algorithms.

It is important that data organisms evolve together with trading organisms.

As a result, we expanded the evolutionary model to include sensors who retrieve information from external sources like exchanges, indicators who process sensors’ or other indicators’ data to create more elaborate data sets, plotters who render data sets on the canvas, and traders who decide when to buy or sell.

We call all these bots Financial Beings (FB).

By definition, an FB is an algorithm that features financial life, which means it has income and expenses and requires a positive balance of ALGO tokens to be kept alive. Being kept alive means being executed at a specific time interval so that the algorithm may run and do whatever it is programmed to do.
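
In data terms, a Financial Being's financial life could be pictured roughly as follows. This is our own simplified illustration; the field names and numbers are made up.

```javascript
// Simplified, illustrative view of a Financial Being's financial life.
// A being is executed on a schedule only while its ALGO balance stays positive.

const financialBeing = {
  id: 'lrc-indicator',
  kind: 'indicator',            // sensor | indicator | plotter | trader
  algoBalance: 1200,            // ALGO tokens currently held
  executionIntervalMs: 60000,   // how often it is woken up while alive
  timeToLiveFeePerRun: 1        // ALGO spent on each execution
};

function runIfAlive(being, execute) {
  if (being.algoBalance <= 0) return false;      // out of ALGO: the being dies
  being.algoBalance -= being.timeToLiveFeePerRun;
  execute(being);                                // do whatever it is programmed to do
  return true;
}
```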

Previously, we stated that trading algorithms participate in trading competitions, get exposure according to competition rankings, and the crowd may pick them for breeding — by forking them. Later on, asset holders may rent them to trade on their behalf.

But what about the others?

How do sensors, indicators and plotters lead a financial life?

Sensors and indicators will sell subscriptions to their data sets, mainly to trading bots, but also to other indicators. That will be their source of income. There will be an inter-bot economy within the ecosystem — the data marketplace.

The market will decide what data is worth paying for, therefore, only useful indicators will survive.

Plotters render data sets on the screen. The crowd of traders and developers in the community may choose to subscribe to plotter services, in particular those that offer a superior visual experience.

The part of the system involving the transfer of value between bots is not implemented yet. We are very much aware that it will be challenging to build a balanced model that works well in every sense. Our current analysis on that front is still shallow, but we are looking forward to getting to work on those issues.

So, now that all these entities had been recognized, the next question in line was: where would each of these FBs run?

The Browser & the Cloud

I always thought that in order to debug trading strategies properly, the debugging tool should be tightly coupled with the graphical user interface: the one plotting the data.

It is tremendously easier for humans to identify patterns in graphical representations of data than in hard numbers.

The same should be true for debugging the behaviour of trading bots.

If we plotted everything a trading bot did directly over the timeline, including all relevant information as to why it did what it did, we would have a powerful tool for later identifying why the bot didn't act as expected, or how its current behaviour might be improved.

To me, it was clear from the start that the first versions of bots should be able to run in the browser, directly within the platform's Canvas App. Also, when not debugging but running live or competing with other bots, they should run in the cloud.

It took me a while — and a few rewrites — to put the pieces together and create the software that would act as the execution environment for bots in the cloud — what I called the Cloud App — and that, at the same time and without any changes at the bot source code level, could run bots in the browser too.

That goal was achieved by mid-2018.

Since then, the exact same bot source code has been able to trade in debug mode from the browser and be executed in the cloud in live, backtesting or competition modes.

Developers are now able to use the browser's developer tools to debug a bot's code while viewing the real-time plotting of the bot's activities in the Canvas App.

This setup effectively lowers entry barriers for developers.

There is no environment to install.

No setups.

No configurations.

No requirements on developers’ machines.

It takes one minute to sign up.

Right after sign up, the system provides the user with a fork of the latest trading champion.

Developers may have a bot up and running in — literally — two minutes.

Of course, all these advantages come with trade-offs.

The main one is probably that, in order to run in the browser, bots need to be written in JavaScript, which is not the most popular language for developing trading bots.

Still, this trade-off seemed reasonable, mainly because at this early stage, with a minimum viable product, we intend to bootstrap the community with amateur traders and developers willing to experiment, have fun and compete.

Pro traders are — of course — very much welcome too. They too may find the proposition fun and interesting. But we do not really expect them to bring in their top-secret Python bots to be open sourced.

With time, the community will demand other languages to be supported and that demand will drive the evolution of the platform.

In a similar way, we foresee that a segment of the community may tend to professionalize, and different interests will drive the platform to evolve in different directions.

The Assistant

Because trading bots are meant to compete with each other and eventually service clients, several issues arise.

How do we guarantee a fair competition?

How do we ensure that bots do not cheat in terms of what they did and achieved?

Also, how do we assure future clients that the asset management services they will be subscribing to have the track record they claim to have?

The logical thing to do is to forbid trading bots from placing orders on exchanges by themselves. Instead, we offer a common and transparent infrastructure that does it on demand.

The platform keeps track of all operations bots perform at the exchange — usually buy and sell orders — and monitors the exchange to track orders that get filled, either in full or partially.

We call this piece of the platform The Assistant, since it assists trading bots with all interactions with exchanges.

The Assistant solves the problem of trading bots cheating with regard to their performance.

It is the Assistant that, before each trading bot execution, checks the account at the exchange, verifies which orders were filled, keeps a balance for each asset in the pair, keeps a running available balance — that is, the funds not committed in unfilled orders — and exposes the methods trading bots call when they decide to buy or sell.

In addition, the Assistant calculates each trading bot's ROI and saves that information, making it available to the rest of the system.

The features packed into the Assistant guarantee a fair competition, since trading bots have no control over the tracking the Assistant does. This means that two or more competing bots play on a level field, bound by the same rules.
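
From a bot's point of view, the Assistant could be pictured as the only gateway to the exchange, along the lines of this simplified sketch. The method names and the exchange client are illustrative assumptions, not the platform's actual API.

```javascript
// Illustrative view of the Assistant: it owns all exchange interaction,
// tracks balances and fills, and is the only way a bot can place orders.
// `exchangeClient` is an assumed wrapper, not a real library.

class Assistant {
  constructor(exchangeClient, initialBalances) {
    this.exchange = exchangeClient;
    this.balances = { ...initialBalances };   // e.g. { BTC: 1, USDT: 0 }
    this.openOrders = [];
  }

  // Called before each trading bot execution: check fills at the exchange.
  async refresh() {
    for (const order of this.openOrders) {
      order.filled = await this.exchange.getFilledAmount(order.id);
    }
  }

  // The only methods a bot may call to trade; every order is recorded here,
  // so bots cannot misreport what they did or achieved.
  async placeBuyOrder(rate, amount) {
    const id = await this.exchange.placeOrder('buy', rate, amount);
    this.openOrders.push({ id, side: 'buy', rate, amount, filled: 0 });
  }

  async placeSellOrder(rate, amount) {
    const id = await this.exchange.placeOrder('sell', rate, amount);
    this.openOrders.push({ id, side: 'sell', rate, amount, filled: 0 });
  }
}
```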

Scalability

Docker + Kubernetes = High Scalability.

Somewhere along the way we got feedback from a hardcore trading-algo aficionado: he was worried the platform would not scale to run thousands of bots.

He was right and he was wrong at the same time.

He was right because, at that stage, any audit of the code and the infrastructure would have concluded neither scaled.

But he was equally wrong.

He missed the crucial point that the platform itself — like most open source projects — was subject to evolution.

He also failed to understand that what he was reviewing at that point in time was merely the solution to the first handful of problems — from a list of hundreds — to be solved for superalgos to emerge.

There is no technical problem that time and brain power cannot solve.

The key takeaway is that superalgos will emerge from a global supermind.

Whatever needs to be done for superalgos to emerge, will be done.

It took Matias the last quarter of 2018 to containerize execution in the cloud.

By doing so, he not only solved the scalability problem, but also a few other issues found hiding in the attic…

How do we allow bot creators to run bots by themselves?

How do we limit access so that evil bots cannot interfere with other bots or mess with the platform itself?

The solution came in the form of Kubernetes, open source software capable of orchestrating the deployment of Docker containers, designed precisely to solve these problems.

Probably in the spirit of emphasizing that the platform is subject to the dynamics of evolution too, Nikola preaches the theory that the platform itself should be considered a type of bot, or at least some parts of it, like the Assistant.

System Modules

The system is divided into modules. We expect to build at least 50 modules in the next few years. | Image © D1SK, Shutterstock.

The second half of 2018 saw us working on a number of issues further down the list…

If the crowd was to organize itself into teams, they would need to sign up, manage their profiles, discover existing teams, apply to join or create new ones…

The Canvas App was not the right place to manage all this.

We needed a web based system.

We anticipated it would be rather big, and it too would be subject to evolution.

Once again, the mission is to do everything required to have superalgos emerge. If we accept no one has the definite recipe, we need to build flexible solutions capable of evolving.

This reasoning pointed us in the direction of a modular system.

The most urgent was the Users Module, a piece of software responsible for managing everything related to users. It allows users to discover and search for other users, as well as create and manage user profiles. The module will readily accept more functionality related to users as demand evolves.

I developed this module myself.

Second in line was the Teams Module, developed by Barry. Its scope is everything related to teams: it allows users to explore and find existing teams, apply to become members, create teams, define admins, invite users to join and manage membership applications. It too is ready to evolve within that scope.

Trading bots need API keys to access the exchange and perform trades. The keys are obtained from the exchange and need to be imported into our system so that they are available when needed.

To solve this requirement, Matias developed the Key Vault Module, which allows users to manage their exchange API keys, including defining which bot will use each key. Needless to say, all key handling must be done in a secure way.

We also needed a module to define and manage competition events, so Nikola developed the Events Module. It allows browsing upcoming, active and past competition events, as well as creating competitions with their own set of rules, prizes and more.

We already envision over 50 more modules, each one with a specific responsibility and scope.

But before building more modules, we needed a framework capable of running these modules in a coordinated manner, providing common navigation and other shared functionality.

The Master App

System modules' front ends are packaged as NPM JavaScript libraries, which are listed as dependencies of the Master App so that only one web app is loaded in the browser. By the end of 2018 this framework was up and running, with the first set of modules in it.

Implementation-wise, the Master App is a React/Redux app which accesses a GraphQL backend API via Apollo. The Master App API is in fact an API Router, which forwards incoming requests to the backend of each individual module.

A module backend consists of a Node.js/Express app with either a MongoDB or PostgreSQL database, depending on the preferences of each individual developer.
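
As a rough illustration of the routing idea, and not the actual implementation, the API Router could forward each module's GraphQL requests to that module's own backend. The module names, ports and URLs below are invented for the example.

```javascript
// Sketch of the API Router idea: one public endpoint that forwards each
// module's GraphQL requests to that module's own backend service.

const express = require('express');

const moduleBackends = {
  users:  'http://users-backend:4001/graphql',
  teams:  'http://teams-backend:4002/graphql',
  events: 'http://events-backend:4003/graphql'
};

const app = express();
app.use(express.json());

app.post('/api/:module', async (req, res) => {
  const target = moduleBackends[req.params.module];
  if (!target) return res.status(404).json({ error: 'unknown module' });

  // Forward the GraphQL query untouched (global fetch, Node 18+).
  const response = await fetch(target, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req.body)
  });

  res.status(response.status).json(await response.json());
});

app.listen(4000);
```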

Integrating the Canvas App, the Cloud App and the Master App represented a significant milestone. By then we had all the initial pieces in place and were ready to start adding more, while evolving the existing ones.

The system is untested and thus in alpha stage.

Code quality varies a lot.

Our current perception of the odds that the code in each part of the system will survive evolution determines how much work is put into cleaning it up and making it robust.

As the core team grew and ideas started being discussed among a larger group of people, the initial set of concepts got refined and extended.

Algobots

ROI of two Algobots as seen on the Canvas App.

Trading algorithms may be simple or complex. Maybe complex ones can be disassembled into many simpler ones.

What would be the simplest kind of trading algorithm, the equivalent to a unicellular being in biology?

The most basic trading algorithms are the ones that implement a single strategy: the ones that do one thing well. In many cases this could mean the strategy is based on a single indicator.

We don't need to impose any specific limit on its intelligence. We just assume that — like a unicellular organism — it is self-sufficient and stays alive on its own merits, as long as it can obtain food from its surroundings.

We call this — the smallest unit of trading intelligence — an algobot.

Algonets

ROI of multiple Algobot clones controlled by a single Algonet. Notice how some of them decided to sell at a different moment, impacting their ROI.

We knew early on that algobots were not going to be alone in the space of trading bots running on the platform.

Let’s analyze algobots further…

What are an algobot's main components?

The most important component is the source code. After that comes the configuration, which provides many of the details needed at execution time. Third in line: the parameters, values set for each specific execution.

I’ll explain this with an example:

Let’s imagine an algobot called Mike.

Mike’s source code features the logic to decide when to buy and when to sell based on the LRC indicator described earlier. If Mike identifies 3 buying signals then it buys; if it identifies 2 selling signals, then it sells.

The numbers 3 and 2 in this example are arbitrary. They might have been 4 and 3, for instance.

To clarify the difference between configuration and parameters: what we would find in Mike's configuration file are the valid ranges for the number of buy and sell signals. Such ranges could be, for example, [3..7] for buy signals and [2..6] for sell signals. The actual parameters, or values, which Mike will use in any specific run are set right before it is put to run. These values could be 3 and 2 as proposed earlier, or 4 and 3, or any valid combination within the configured ranges.

Because parameters can be manipulated at will and any change may affect the behaviour of trading algorithms, we call parameters genes.

In the example above, we would say that Mike has two genes, each one with a range of possible values. The genes would be number of buy signals and number of sell signals.
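
Putting the example into hypothetical code, Mike's configuration would declare the gene ranges, while a concrete set of parameter values is picked for each run. None of these names come from the actual platform.

```javascript
// Hypothetical configuration: the genes and their valid ranges.
const mikeConfig = {
  genes: {
    buySignals:  { min: 3, max: 7 },
    sellSignals: { min: 2, max: 6 }
  }
};

// Parameters (gene values) chosen for one specific run.
const mikeParameters = { buySignals: 3, sellSignals: 2 };

// The decision logic reads the genes instead of hard-coded numbers.
function decide(signals, parameters) {
  if (signals.buy >= parameters.buySignals) return 'BUY';
  if (signals.sell >= parameters.sellSignals) return 'SELL';
  return 'HOLD';
}
```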

An algonet is a different kind of trading organism; a higher order intelligence.

Algonets do not trade on their own. They use algobot clones to do the job. They dynamically create different clones of a single algobot, each with different values in its genes. Potentially, algonets may deploy as many algobot clones as there are gene value combinations.

Let’s imagine Mike Algonet — yes, algonets are constrained to one type of algobot only.

Mike Algonet is deployed with 1 million dollars in assets under management.

Initially, it deploys as many Mike Algobot clones as there are possible gene combinations, each with a unique set of gene values. Upon deployment, Mike Algonet assigns each clone a tiny amount of capital to trade with.

Our evolutionary model dictates that each clone needs ALGO tokens to stay alive, so, on top of capital to trade with, each clone receives a small amount of ALGO… whatever is required to live long enough to test their genes on the market!

Mike Algonet monitors the performance of each clone and dynamically decides which clones should be kept alive longer. Those are fed more ALGO.

When Mike Algonet figures out that any individual clone is not doing a good enough job, it simply stops feeding it, and the abandoned clone soon dies.

Once Mike Algonet is confident that a clone is performing well, it may place more capital under the selected clone’s management.

These are standard algonet mechanisms.
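
A stripped-down sketch of these mechanisms, with made-up numbers and helper names, might look like this:

```javascript
// Illustrative algonet mechanics: enumerate every gene combination, deploy one
// algobot clone per combination, then feed capital and ALGO by performance.

function allGeneCombinations(genes) {
  let combos = [{}];
  for (const [name, { min, max }] of Object.entries(genes)) {
    const next = [];
    for (let value = min; value <= max; value++) {
      for (const combo of combos) next.push({ ...combo, [name]: value });
    }
    combos = next;
  }
  return combos;
}

function deployClones(config, totalCapital, totalAlgo) {
  const combos = allGeneCombinations(config.genes);
  return combos.map(parameters => ({
    parameters,
    capital: totalCapital / combos.length,   // a tiny slice of capital each
    algo: totalAlgo / combos.length,         // just enough ALGO to test its genes
    roi: 0                                   // updated as the clone trades
  }));
}

function rebalance(clones) {
  for (const clone of clones) {
    if (clone.roi < 0) clone.algo = 0;       // stop feeding underperformers
    else clone.algo += 10;                   // keep promising clones alive longer
  }
}
```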

On top of that, algonets have their own source code so that developers can innovate in any way they see fit.

We have coded a simple version of an algonet, and experienced first hand how clones with different genes operate differently, as shown in the image above.

Advanced Algos

Advanced algos control algonets, which in turn control algobots.

Now, imagine an entity of an even higher order, above algonets.

We call these advanced algos.

Advanced algos are capable of deploying clones of algonets. They may clone the same algonet multiple times with different parameters, or different algonets altogether, or any combination in between.

Like algonets, advanced algos use capital and ALGO to keep algonets working and alive, and dynamically decide how to allocate these two resources to obtain the best possible overall performance.

In turn, algonets working under advanced algos’ tutelage still control and operate their own algobots as explained earlier.

As with algonets and algobots, advanced algos too have source code, so creators can innovate beyond the standard functionality all advanced algos share.

The advanced algo-algonet-algobot structure is analogous to the biological structure of organs, tissues and cells.

In biological life, cells work to keep themselves alive, but also work together with other cells to form tissues and adopt certain higher order functions. Something similar happens with multiple tissues forming organs. Even though organs are made of tissues, they have their own life, purpose, and mission.

After playing around long enough with this 3-layer hierarchy, the next set of questions emerged…

Are there even higher order entities with some well-defined purpose?

Will we have an equivalent of a complex biological organism? An entity that contains organ systems, organs, tissues and cells, and coordinates them all for a higher purpose?

Financial Organisms

A Financial Organism is made out of a hierarchy of nodes in which each node may control advanced algos, algonets or algobots as needed.

Financial Organisms are structures of bots that assemble themselves into hierarchies with no imposed rules or limits.

In the advanced algo-algonet-algobot hierarchy described earlier, the decision to trade or not is up to each individual algobot clone. If any clone decides to buy, it does so using its own trading capital.

A Financial Organism (FO) is made up of nodes.

The first node in the hierarchy is called by the Assistant.

This node may in turn call other nodes, which may themselves call other nodes if needed.

All of these nodes are free to clone algobots, algonets or advanced algos and put them to run.

Nodes that put bots to run effectively impersonate the Assistant — remember, it is normally the Assistant that calls bots. That means that when any of the clones wishes to place a buy or sell order, the command is received by the calling node, which can use that call as a signal for its own decision-making process.

Put another way, nodes may use any set of trading bots as signalers.

Once a node arrives at a buy or sell decision, it passes the order upwards through the node hierarchy. The order is received and factored in by the node upstream through its own logic.

The process continues all the way up to the root node.

The ultimate buy or sell order is placed by the root node through the actual Assistant, which takes the order to the exchange.
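
A highly simplified sketch of this upward flow of signals, with hypothetical names and a naive majority vote standing in for each node's own logic, could look like this:

```javascript
// Illustrative FO node: it runs its children as signalers, impersonating the
// Assistant towards them, and only forwards its own decision upwards.
// Only the root node talks to the real Assistant; rate and amount handling
// is omitted for brevity.

class FONode {
  constructor(children = []) {
    this.children = children;   // child nodes, or bot clones used as signalers
    this.signals = [];
  }

  // Children call these instead of the real Assistant; we record signals only.
  placeBuyOrder() { this.signals.push('buy'); }
  placeSellOrder() { this.signals.push('sell'); }

  decide() {
    // Run the children; each is assumed to expose run(assistantLike).
    for (const child of this.children) child.run(this);

    const buys = this.signals.filter(s => s === 'buy').length;
    const sells = this.signals.filter(s => s === 'sell').length;
    this.signals = [];
    if (buys > sells) return 'buy';
    if (sells > buys) return 'sell';
    return null;
  }

  // When used as a child of another node, forward the decision upwards.
  run(parent) {
    const decision = this.decide();
    if (decision === 'buy') parent.placeBuyOrder();
    if (decision === 'sell') parent.placeSellOrder();
  }
}

// The root node is the only one wired to the actual Assistant.
function runHierarchy(rootNode, assistant) {
  const decision = rootNode.decide();
  if (decision === 'buy') assistant.placeBuyOrder();
  if (decision === 'sell') assistant.placeSellOrder();
}
```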

Mining ALGO

The system required to support the concepts described herein is big and complex.

Having all modules developed by a small group of developers does not seem to be the most efficient way to accomplish the mission.

It is not the most resilient approach either.

In terms of technology requirements, the unique characteristic of this project is that all of the modules are expected to evolve significantly. Moreover, the modules identified so far are certainly not all the modules that will be required for superalgos to emerge.

The project’s mission states we are to do “whatever it takes”, explicitly suggesting that we might need to do more than we can envision today.

Just as we expect a huge community of developers to breed bots, we also expect a sizeable part of the community to join the project's development team and help build system modules.

ALGO tokens need to be issued and distributed among all supermind participants.

Because the key engine of evolution is competition, it was decided early on that 25% of the total supply of ALGO was to be reserved for competition prizes.

15% of ALGO would go to developers building the system.

10% would go to people working on business development.

50% would be reserved for financing and other incentives.

The mechanism to distribute the pool reserved for competitors was clear from the beginning: competition prizes would be distributed through the system among competition participants.

The reserved pool was not going to be distributed yet, so there was no need to establish a distribution mechanism for it.

That left us with the issue of defining the mechanism to distribute the 15% among system developers and the 10% among business developers.

We toyed with and iterated on several ideas, which resulted in the following requirement:

Whoever develops a system module should also maintain it.

The reason is that the system would quickly become too complex for a small core team to maintain. It would be manageable for a large extended team, though. In fact, it would work particularly well if each party developing a module remained responsible for maintaining it in the long run.

This means that both development and long term maintenance needed to be incentivized.

The solution we found mimics, in a way, a kind of crypto mining scheme.

Whoever takes responsibility for a system module, develops it, integrates it with the rest of the system and pushes it to production, then acquires an ALGO Miner.

An ALGO Miner is not a hardware piece, but a smart contract running on the Ethereum blockchain.

The miner, that is, the person mining, receives a daily stream of tokens.

Each ALGO Miner has a pre-defined capacity. There are 5 different categories, each with its own capacity, since not all modules have the same importance, difficulty or criticality.

For instance, a category 1 ALGO Miner distributes 1 million ALGO. This goes all the way up to category 5 miners, which distribute 5 million ALGO.

Where do these tokens come from?

For developers, from the System Development Pool, which distributes 15% of the total supply.

A similar mining scheme has been established for people working in business development. In this case, the responsibility does not lie in developing and maintaining a system module, but in developing and maintaining a business area.

So, for business people, miners obtain the tokens from the Business Development pool.

ALGO Miners feature a distribution schedule that mimics the halving events of bitcoin mining, except that instead of halving every 4 years, the halving occurs every year.

50% of an ALGO Miner's capacity is distributed day by day during the first year.

The second year distributes half of what was distributed the first year, that is 25% of the total.

The third year distributes half of the previous year, and so on.

By the fifth year, over 98% of the capacity will have been distributed.
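
As a quick worked example under this simplified geometric schedule, a category 1 miner with a capacity of 1,000,000 ALGO would distribute 500,000 in year one, 250,000 in year two, and so on; the actual smart contract may handle the remaining tail differently.

```javascript
// Yearly distribution of an ALGO Miner's capacity under an annual halving:
// 50% in year one, 25% in year two, 12.5% in year three, and so on.

function yearlyDistribution(capacity, years) {
  const schedule = [];
  for (let year = 1; year <= years; year++) {
    schedule.push({ year, amount: capacity / 2 ** year });
  }
  return schedule;
}

// Example: a category 1 miner with a capacity of 1,000,000 ALGO.
const schedule = yearlyDistribution(1000000, 5);
// -> 500000, 250000, 125000, 62500, 31250 ALGO per year.
// The daily payout within a year is simply that year's amount divided by 365.
```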

ALGO Miners also distribute ALGO collected in fees from financial beings running on the platform.

These fees are part of the expenses financial beings have to pay. We call it the Time to Live fee, meaning financial beings buy time to be kept alive.

The Time to Live fee is collected and distributed among active miners, on a daily basis.

This is the second source of ALGO tokens for miners. We expect this source of income to be minimal during the first few years, but with time and increased adoption it should grow steadily, allowing miners to earn a substantial income by the time the ALGO coming from the distribution pools is exhausted.

These are the mechanisms we have engineered to incentivize the team in the long run. This set of incentives is also part of the evolutionary model, in this case enabling the system itself to evolve.

The Team

Beyond the Core Team, there is a wider Project Team building parts of the system. Team members are compensated through the mining of ALGO tokens, as described earlier.

To become a team member the applicant needs to take responsibility either over one piece of the software pending development, or over a business development area.

Examples of pieces of the puzzle being assembled by the Project Team are the Ethereum smart contracts needed to issue and distribute the ALGO tokens, the ALGO Shop module, the Logs module and the Financial Beings module.

Smart contracts are still undergoing testing, thus mining hasn’t started yet.

As soon as testing is finalized, the contracts will be deployed and mining will kick off.

It’s been a while since that crisp summer morning of 2017 that saw the Superalgos project start.

Lots of water has run down the Danube.

The Core Team has laid down the foundations for a great project and we are ready to step up the game by growing the team that will develop and maintain the remaining system modules and business areas.

We encourage everyone willing to see our vision materialize to come forward, contribute some work and become a part of the team.

Come plug your brain into the largest-ever trading supermind!

A bit about me: I am an entrepreneur who started his career a long time ago designing and building banking systems. After developing many interesting ideas over the years, I started Superalgos in 2017. Finally, the project of a lifetime.

Follow Superalgos on Twitter or Facebook, or visit us on Telegram or at our website.