Tokenization Use Case: Accounting of Risk Tolerance

Written by alexosh | Published 2018/05/20
Tech Story Tags: investing | asset-management | blockchain

Empowerment of the Work Token Model

Over the past few months, a professional discussion has developed around which frameworks, heuristics, and metrics can reasonably be applied to valuing crypto assets. The token velocity thesis has attracted much attention. Other frequently mentioned models include the network value-to-transactions ratio, the number of daily active addresses, the store-of-value concept, and market cap divided by daily transactions. Despite the variety, no method is as clear as good old discounted cash flow. The fair values we can calculate for most crypto assets are questionable, though still noteworthy. The “normal” NPV approach doesn’t work for the majority of token models: utility tokens and native currencies do not generate cash flows.

One token architecture that does allow for a quick estimate of the traditional net present value of future cash flows is the stake token, often referred to as the work token. An important special case is the discount token: owning it grants rights, priorities, and discounts on transactions performed using the system and the cryptocurrencies within it. Below is an example of how such tokens can be further empowered.

Since the introduction of Augur, one of the first tokens in this family, people have experimented extensively with the range of rights stake tokens can grant. However, other forms of parameterisation and enhancement are yet to be proposed and researched. For example, must the deposit (tokens temporarily locked in the system to provide transactional rights) really remain untouched? Does it need to be kept in a “vault” at all? Can tokens change hands while remaining inside the system and thus still performing their main function of giving the system weight and stability?

Below, we offer one variant of such parameterisation. When the deposit gradually flows from one system actor to another and back, it provides some additional value to the entire structure.

Use Case: Rethinking Risk

Risk is an important part of life. The etymology of the word is connected with navigation at sea and, primarily, with trade. One of the earliest European meanings of the word is “difficulties that need to be avoided at sea.” The Chinese construction of the word is more helpful: the symbol of risk is the sum of “danger” and “favorable opportunity.” In the modern Western rationalist tradition, the first component usually prevails.

Risk is easy to misunderstand. Definitions of it range from the opposite of certainty to the opposite of security, from randomness of outcomes to a lack of individualistic utilitarianism and decision rationality, from a utility function to virtue and courage.

So let’s create our own synthesised definition. Risk is the present uncertainty of damage in the future. Importantly, the damage includes both deterioration of the existing state and lost opportunities.

Everyone has their own risk. The calculation of risk is always a personal procedure. On the one hand, if someone gets the rationality wrong and does stupid things, that doesn’t mean other people would act in a similar manner in the same situation. On the other hand, if someone is considered a reasonable person, this doesn’t make him “like everyone else.”

Therefore, as we separate a group of dependent observers (investors) from a group of decision makers (asset managers), we encounter an extremely complex, self-reflective loop of risk. People who favor risky behavior are not willing to fully tolerate the hazards created by those around them.

In practice, relations between these groups are enforced with agreements. However, a straightforward one (such as “if it doesn’t go the way you say, you pay me a penalty”) is never used. In professional asset management, the parties are limited to rather loose, voluntary economic motivations. As time and expectations about future risk move continuously forward, it becomes necessary to build an effective, symbolically generalized communicative environment in which the motivations of both groups, investors and managers, are well structured.

There are many theoretical constructs for handling the problem; Pareto, Max Weber, Durkheim, Parsons, Maturana and many others have suggested them. They carry different biases and include general criticism of the prerequisites of rationality. However, in our opinion, it is unlikely that this problem has a theoretical solution. We suppose it takes a token-based calculus to enable real-life experimentation.

Within the management company and its mutual funds, risk exists in the following ways:

  1. As a statistical value: how much has the return fluctuated in the past? This can be misleading. For example, the average yield of the sample {+200%, -100%, +200%} is +100%, but for a consecutive series it is zero, because the -100% in the second step wipes out the capital, so the next step cannot occur (see the sketch after this list).
  2. The manager’s personal expectations of risk, which he is not required to report to anyone. He has skin in a broader game (management fees from all investors and all funds) and therefore weighs factors outside the scope of interest of a particular investor in a single fund.
  3. The investor’s current risk tolerance, which may differ from the tolerance at the moment of entry into the mutual fund. The change may come from personal circumstances or from shifting market sentiment.
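
The first item is easy to verify with a few lines of arithmetic. Below is a minimal Python sketch (the numbers come straight from the example in item 1; everything else is purely illustrative) contrasting the arithmetic mean of the returns with the result of compounding them in sequence.

```python
# A quick check of the {+200%, -100%, +200%} example from item 1:
# the arithmetic mean looks great, but applied as a consecutive series
# the capital is wiped out at the second step and never recovers.

returns = [2.00, -1.00, 2.00]   # +200%, -100%, +200%

arithmetic_mean = sum(returns) / len(returns)   # 1.0, i.e. an "average yield" of +100%

capital = 1.0
for r in returns:
    capital *= (1 + r)          # the -100% step sends capital to zero; later gains multiply zero

print(arithmetic_mean)  # 1.0
print(capital)          # 0.0
```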

All of these factors are interconnected, and though the circle of events is not necessarily vicious, the chaos is certainly serious.

Imagine now that, with a token-based calculus, we can track precise values for each of these risk components in real time. What does this yield? Process optimization: all parties know every aspect of the risk they carry, and that calm strengthens the relationship between investors and managers. When risk is measured well, investors “squeeze” the maximum out of the market.

Notably, this is very different from hedging. Structured products give a very precise risk value, but the mechanism they use to achieve that precision is also a limiting factor. Besides, in the crypto realm there are no risk-free assets or reliably negatively correlated assets from which to form a structured product.

Risk Tolerance Tokens

The model works as follows.

Each fund requires a deposit of risk tolerance tokens upon entry. The deposit is the product of two factors: the minimum investment period and the planned rate at which the tokens will be spent over time. If the deviation of returns remained constant at the planned level throughout the entire period, all tokens would be gradually spent during the first half of the period and then gradually returned during the second half. So, at the end, it is a zero-sum game.
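
As a minimal sketch of the deposit sizing (function and parameter names are ours, and the daily granularity is an illustrative assumption):

```python
# Deposit = minimum investment period x planned token spend rate per day.
# Names and units are assumptions for illustration, not a specification.

def required_deposit(min_period_days: int, planned_rate: float) -> float:
    """Risk tolerance tokens locked on entry into the fund."""
    return min_period_days * planned_rate

# Example: a 180-day minimum investment period with a planned spend rate of
# 10 tokens per day locks 1,800 risk tolerance tokens on entry. If the
# deviation of returns stays exactly on plan, the deposit is gradually spent
# over the first half of the period and returned over the second half,
# netting out to zero by the end.
print(required_deposit(180, 10.0))  # 1800.0
```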

However, as the real measurement of risk changes constantly, the rate of token flow will closely follow the deviation of return. Should the manager decide to implement a riskier strategy starting tomorrow, his investors must be ready to lose more risk tolerance tokens every following day. This is a two-way road: once all deposited tokens are spent, it is the asset manager who has to pay them back. Thus, he either “slows down” his risky attitude or pays some extra tokens to investors by the end of the period. This way, investors are compensated for a somewhat rockier ride than they had planned for. Or it may all turn out in favour of the manager, if he performs better than expected.
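
The article describes this two-way flow only qualitatively. The sketch below is one possible concretization, not the author's exact design: we assume the daily token flow is the planned rate scaled by the ratio of realized to planned deviation of returns, with spending in the first half of the period and returning in the second; all function and parameter names are ours.

```python
# A toy concretization of the two-way token flow described above.
# The scaling rule (daily flow proportional to realized / planned deviation)
# and the end-of-period settlement are illustrative assumptions.

def simulate_flow(planned_rate: float, planned_dev: float,
                  realized_devs: list[float]) -> float:
    """Return the investor's net risk tolerance token delta for the period.

    Positive: the ride was rockier than planned and the manager ends up
    paying extra tokens to the investor. Zero: risk stayed on plan.
    Negative: the investor spent part of the deposit without getting it back.
    """
    period = len(realized_devs)
    deposit = period * planned_rate
    half = period // 2

    spent = 0.0      # tokens drawn from the investor's deposit
    returned = 0.0   # tokens flowing back from the manager

    for day, dev in enumerate(realized_devs):
        # Daily flow tracks how risky the fund actually was that day. The
        # factor of 2 makes the full deposit last exactly half the period
        # when realized deviation equals the plan.
        flow = 2 * planned_rate * (dev / planned_dev)
        if day < half:
            spent += min(flow, deposit - spent)   # spending phase, capped by the deposit
        else:
            returned += flow                      # return phase at the same risk-scaled pace
    return returned - spent

# Example: 180-day period, 10 tokens/day planned spend, 2% planned deviation,
# but the manager runs the fund at 3% deviation the whole time. The deposit is
# exhausted early, and the manager ends the period owing extra tokens.
delta = simulate_flow(planned_rate=10.0, planned_dev=0.02,
                      realized_devs=[0.03] * 180)
print(delta)  # 900.0 -> extra tokens paid to the investor
```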

The free market will decide the price of such a token; fundamentally, it depends on asset management fees and other tariffs in the fund family. The manager’s resulting compensation is composed of the traditional fees plus the increment in tokens he manages to receive. The same is true for investors, who get their returns in the traditional sense plus the delta in risk tolerance tokens.
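
For a rough sense of scale, the combined compensation might be tallied as below. Every number here is made up, and the token price is whatever the market happens to set.

```python
# Illustrative numbers only: AUM, fee level, and token price are assumptions.
aum = 10_000_000           # assets under management, USD
management_fee = 0.02      # traditional 2% management fee
token_price = 1.50         # market price of one risk tolerance token, USD
tokens_to_investor = 900   # net tokens the manager paid out (rockier ride than planned)

manager_compensation = aum * management_fee - tokens_to_investor * token_price
investor_token_bonus = tokens_to_investor * token_price

print(manager_compensation)  # 198650.0 -> fees minus the token delta paid to investors
print(investor_token_bonus)  # 1350.0   -> on top of the investor's market return
```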

This hybrid structure is much easier to implement compared to a reorganisation of the highly regulated infrastructure of asset management companies, funds, registries, custodians, etc.

What is the big picture? Besides a more precise accounting, what makes this sort of risk tolerance calculus useful?

Let’s consider a large number of funds competing for investors’ money. Fund units are tokenized, so money can migrate fluidly. Funds are open to everyone, but each has a maximum capacity dictated by its optimal portfolio settings, so it cannot immediately accept every eager investor. Investors line up in queues and switch funds once their previous minimum investment periods end.

This idea of investors queueing may sound strange, as we see roughly the opposite picture in reality: asset management companies spend millions of dollars on ads, funds line up to get access to investors’ money, and there seems to be no lack of vacant spots for almost anyone in almost any mutual fund.

However, we also know perfectly well that most of that money is poorly managed and investors would love to relocate it elsewhere, but the asymmetry of information doesn’t let them. In plain English, in the traditional environment investors are being fooled; if they knew the real quality of their investments as it unfolded, most managers would lose their jobs. It is sales, marketing, and clumsy regulation that keep them afloat.

The circulation of risk tolerance tokens would create a transparent investment environment in which truly successful managers get a fair share of the profits and investors compete with their money for the scarce resource of true professionalism.
