The Fantasy of Self-Service Security Token Issuances and Some Ideas to Fix It: Part I

by Jesus Rodriguez, February 28th, 2019

Issuing security tokens shouldn’t be this hard. That’s a phrase I constantly hear from one of my mentors in the blockchain space, who points out that most security token issuances these days involve writing smart contracts by hand. Almost as an overreaction to the complexity of the security token issuance process, there have been several attempts to promote self-service issuance tools that promise the creation of security tokens with a few clicks. Today, I would like to explain some of the fundamental flaws of that approach and propose what I consider a better alternative.

The security token space has adopted two extreme positions when it comes to crypto-security issuances. At one extreme, every security token requires smart contract developers to write a lot of repetitive code while leveraging a few basic protocols for compliance reasons. At the other extreme, we can fill out a couple of forms, press a button and get a security token out the other end. The first approach is complex and error-prone, but it can be seen as a consequence of the early stage of the security token market. The second approach is simply impractical, and it sets the wrong precedents in a market that is still trying to set up its foundations.

Self-service security token issuance tools are rooted in the idea that you can express the fundamental mechanics of a security token as a deterministic set of rules abstracted behind a friendly user interface. In principle, the benefits seem obvious: anybody should be able to create their own security token without mastering a smart contract language. While conceptually appealing, the idea of self-service security token issuance is fundamentally flawed from both the technical and the financial standpoint. Self-service security token issuance can be seen as an expression of a cognitive psychology phenomenon known as reductionism.

Reductionist Thinking and Security Tokens

The reductionist fallacy is a cognitive dynamic that explains how, when presented with a complex argument, humans gravitate toward a simple explanation that omits critical details. The simple explanation only works in the most abstract representations of the problem and fails when subjected to any analytical rigor. In the context of security tokens, thinking that we can model the structure and behavior of a crypto-security with a few rules is an example of the reductionist fallacy. Self-service security token issuance tries to oversimplify the process of creating crypto-securities to appeal to a segment of the market that might feel overwhelmed by the technological complexities of the space. However, by taking the reductionist route, self-service security token issuance technologies ignore many of the critical elements of crypto-securities, producing tokens that are useless for any practical purpose.

Self-service security token issuance tools are not only technologically flawed, but their market timing is also off by any historical measure. Below, I’ve listed three main arguments that might help us identify reductionist thinking patterns in self-service security token issuance:

  1. Rules vs. Dynamic Behaviors: Issuing a security token via a user interface configuration implicitly assumes that the behavior of a crypto-security can be abstracted with a few if-then-else rules. That couldn’t be further from the truth. Security tokens need to express fairly dynamic financial behaviors such as dividend distributions, defaults, risk adjustments and other constructs that require complex business logic beyond logical rules. Even some of the simplest tokenized representations, such as shares in publicly traded companies, can be subject to all sorts of complex behaviors. The process of issuing traditional securities is complex not only because of the players involved but also because of the complex nature of those financial instruments.

  2. Mature vs. Nascent Technology Markets: Self-service tools are a byproduct of mature technology markets. In the integration space, platforms like IFTTT were only possible after decades of evolution in middleware technologies. Self-service business intelligence tools only became widely adopted after decades of work on data visualization and analytics. The security token space is less than two years old, and there is absolutely no foundation for abstracting the issuance of crypto-securities into a few clicks. Reductionist thinking…

  3. Off-Chain vs. On-Chain Runtimes: This is a subtler argument but a relevant one nonetheless. Assuming that we can implement a security token with a few UI-based rules intrinsically implies that everything we need to model the behavior of a crypto-security takes place off-chain (via rules), which is nothing short of a fallacy. To be effective, security tokens need to leverage plenty of on-chain constructs, such as oracles, gas management and calls into other smart contracts, that simply can’t be abstracted via a UI.
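To make the first argument concrete, consider a pro-rata dividend distribution, one of the simplest behaviors mentioned above. Even this case requires state (a balance snapshot at the record date, claim tracking) and arithmetic that a flat if-then-else rule set cannot express. The following Python sketch is purely illustrative; the class and method names are invented for this example and do not correspond to any real issuance platform:

```python
# Illustrative sketch: pro-rata dividend distribution for a tokenized share.
# Even this "simple" behavior needs a balance snapshot, claim tracking and
# arithmetic -- dynamic logic, not a static rule table.

class DividendDistribution:
    def __init__(self, balances, total_dividend):
        # Snapshot balances at the record date; later transfers must not
        # change what each holder is owed.
        self.snapshot = dict(balances)
        self.total_supply = sum(balances.values())
        self.total_dividend = total_dividend
        self.claimed = set()

    def claimable(self, holder):
        # Pro-rata share of the dividend, based on the snapshot.
        return self.total_dividend * self.snapshot.get(holder, 0) // self.total_supply

    def claim(self, holder):
        # Guard against double claims -- stateful behavior, not a static rule.
        if holder in self.claimed:
            raise ValueError("dividend already claimed")
        self.claimed.add(holder)
        return self.claimable(holder)


dist = DividendDistribution({"alice": 600, "bob": 400}, total_dividend=1000)
print(dist.claim("alice"))  # alice's pro-rata share of the dividend
```

And real dividend logic is messier still: rounding remainders, unclaimed funds, tax withholding, all of which sit beyond a UI rule builder.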
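The third argument can also be sketched in code. In the toy example below, a transfer's outcome depends on a call into a separate compliance registry, a stand-in for an on-chain whitelist contract or oracle, so the result cannot be precomputed by off-chain UI rules. All names here are hypothetical and for illustration only:

```python
# Illustrative sketch: a transfer whose outcome depends on another
# (on-chain) contract, here mocked as a Python object.

class ComplianceRegistry:
    """Stand-in for an on-chain whitelist contract consulted at transfer time."""
    def __init__(self, approved):
        self.approved = set(approved)

    def can_hold(self, investor):
        return investor in self.approved


class SecurityToken:
    def __init__(self, registry, balances):
        self.registry = registry          # reference to another contract
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        # The result depends on live registry state, not a static rule table.
        if not self.registry.can_hold(receiver):
            raise PermissionError("receiver not whitelisted")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
```

Because the registry can change between the moment a rule is authored and the moment a transfer executes, the behavior lives on-chain, exactly the part a UI-based rule builder cannot see.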

To put some of the previous arguments in perspective, let’s look at a few examples of successful and failed self-service technology stacks across several technology trends:

· Bad: UML and Code Generation Tools: In the early 2000s there was an explosion of tools that tried to model object-oriented programs using graphical standards such as the Unified Modeling Language (UML). Even though that trend produced successful acquisitions, such as Rational Software, most of the tools proved too limited to architect any form of sophisticated program. Part of the failure of this generation of tools can be pinpointed to the impedance mismatch between a graphical, rules-driven environment and dynamic, code-based logic.

· Bad: Machine Learning Workflow Tools: One trend developing in the machine learning space is the idea of creating models using visual workflows. While it is still too early to tell whether this trend will be successful, these tools seem applicable only to some very basic scenarios, with data scientists resorting to frameworks like TensorFlow or PyTorch to write more complex models.

· Good: Self-Service Analytics Tools: For decades, the authoring of reports and data visualizations in business intelligence (BI) solutions required specialized domain experts for what was essentially a commodity task. Eventually, the market produced a new generation of self-service data visualization platforms, such as Tableau and QlikView, that allow non-experts to author really sophisticated dashboards without writing any code.

· Good: Machine Learning Domain-Specific Languages: Fragmentation and complexity are among the main challenges of the machine learning space. With so many machine learning frameworks and platforms in the market, writing machine learning programs is not only complex, but there is also no portability between stacks. Recently, companies like Facebook and Microsoft sponsored the creation of the Open Neural Network Exchange (ONNX) format, which provides a higher-level (but not visual-rules-based) language for creating machine learning models in a way that is compatible with different underlying frameworks.

In summary, effective self-service tools share a few characteristics:

a) They allow for the addition of complex business logic.

b) They operate in environments similar to the underlying runtime.

c) They operate in mature technology markets.

Does that sound like the self-service security token issuance tools to you? 😉

A Possible Solution: A Security Token Domain Specific Language

How do we simplify the issuance of security tokens without creating useless abstractions? Writing Ethereum smart contracts by hand is hardly scalable, but relying on UI-based rules is useless. How about something in between? Imagine a declarative language that can model the structure of crypto-securities in a way that is immediately translatable to different smart contract languages.
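The idea of a declarative spec translated into smart contract code can be sketched in a few lines. The spec format and the Solidity-flavored output below are invented purely to illustrate the concept; they are not a proposal for the actual language:

```python
# Toy illustration of the "something in between": a declarative token
# description compiled into a (pretend) smart contract skeleton.
# Both the spec keys and the generated code are invented for this sketch.

TOKEN_SPEC = {
    "name": "AcmeEquity",
    "symbol": "ACME",
    "supply": 1_000_000,
    "restrictions": ["kyc_whitelist", "lockup_12_months"],
}

def generate_contract(spec):
    """Emit a toy Solidity-flavored skeleton from the declarative spec."""
    # Each declared restriction becomes a compliance check in transfer().
    checks = "\n".join(
        f"        require(check_{r}(to));" for r in spec["restrictions"]
    )
    return (
        f"contract {spec['name']} {{\n"
        f"    string public symbol = \"{spec['symbol']}\";\n"
        f"    uint public totalSupply = {spec['supply']};\n"
        f"    function transfer(address to, uint amount) public {{\n"
        f"{checks}\n"
        f"        // ...balance updates elided...\n"
        f"    }}\n"
        f"}}\n"
    )

print(generate_contract(TOKEN_SPEC))
```

The point is the division of labor: the issuer declares *what* the security is, while the compiler, maintained by people who understand the target chain, decides *how* to express it in Solidity or any other smart contract language.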

The idea of a higher-level, domain-specific language for crypto-securities has many benefits and doesn’t neglect any of the advantages of smart contract languages like Solidity. I’ve been doing some work in this area and will cover it in detail in the next post.