Blockchain-Verified Sentiment

by Christian Kameir, January 22nd, 2020

Too Long; Didn't Read

The explosion of content on the world wide web, social media and chat networks has greatly increased interest in sentiment analysis. Marketers, traders and political advisers are looking for more sophisticated solutions in the field. This outline addresses the core problems facing sentiment analysis today with a view towards the needs of different key verticals, grouping existing approaches into three main categories: knowledge-based techniques, statistical methods, and hybrid approaches (Cambria et al., IEEE Intelligent Systems 28 (2): 15–21).


The explosion of content on the world wide web, social media and chat networks greatly increased the interest in sentiment analysis from a growing number and variety of interested parties. Additionally, the expanding reliance of internet users on online reviews, ratings, recommendations and other forms of digital expression has turned crowd-sourced opinions into a form of virtual currency for businesses looking to market their products, identify new opportunities and manage their reputations.

Text is not unadulterated fact. The majority of data sources used in sentiment analysis today lack topic specificity, making for a noisy, often over-leveraged data source pool.

A more discerning solution must empower developers to choose from a wide variety of sources, add new sources, and cluster them for a specific purpose. This long-tail approach should provide significantly greater relevance of the source data than more generic sources such as Twitter.

As marketers, traders and political advisers look to automate the process of filtering out the noise, understanding the conversations, identifying the relevant content and taking appropriate action, many are looking for more sophisticated solutions in the field of sentiment analysis. This outline seeks to address the core problems facing sentiment analysis today with a view towards the needs of different key verticals.

Standardization and Tagging

Sentiment source data often lacks even basic standardization, and small changes in the environment of the data being aggregated can render it useless for analysis purposes. The platform therefore incentivizes website owners and moderators to provide permissioned, standardized feeds, and further enables developers to create tagging requests.
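
As a rough illustration, a standardized, permissioned feed entry and a tagging request might look like the following sketch; the field names and structures are hypothetical, not a prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema for a standardized, permissioned comment-feed entry.
# Field names are illustrative; the platform does not prescribe a format here.
@dataclass
class FeedEntry:
    source_id: str        # verified data source (e.g. a registered website)
    author_hash: str      # pseudonymous author identifier
    timestamp: int        # Unix epoch seconds
    text: str             # the raw comment text
    topic: str            # topic cluster chosen by the source owner
    tags: List[str] = field(default_factory=list)  # labels added via tagging requests

@dataclass
class TaggingRequest:
    entry_id: str
    requested_labels: List[str]   # e.g. ["buy", "sell", "hold"]
    reward_per_tag: float         # tokens offered per validated tag

entry = FeedEntry("site-123", "a1b2c3", 1579651200, "BTC looks oversold here", "crypto")
request = TaggingRequest("entry-001", ["buy", "sell", "hold"], 1.0)
print(entry, request, sep="\n")
```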

Sentiment Dictionary

Most sentiment analysis solutions provide only general labeling of information such as ‘positive’ or ‘negative’. However, many use cases require a more nuanced and/or tailored labeling of data streams - e.g. buy/sell signals for stocks or commodities, or opinions on the outcome of an event. A sophisticated approach will enable developers to create ‘sentiment dictionaries’ - content-specific labeling tools fit for purpose.
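
A minimal sketch of such a content-specific dictionary for trading signals might look as follows; the phrases, labels and matching logic are purely illustrative.

```python
# Minimal sketch of a content-specific "sentiment dictionary" for trading signals.
# The phrase list and labels are illustrative only.
TRADING_DICTIONARY = {
    "going to the moon": "buy",
    "breakout above resistance": "buy",
    "dead cat bounce": "sell",
    "rug pull": "sell",
    "sideways chop": "neutral",
}

def label_text(text: str, dictionary: dict) -> list:
    """Return the domain-specific labels triggered by phrases found in the text."""
    text = text.lower()
    return [label for phrase, label in dictionary.items() if phrase in text]

print(label_text("Chart shows a breakout above resistance, not a dead cat bounce", TRADING_DICTIONARY))
# -> ['buy', 'sell']
```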

Programming Interfaces

The system should provide a set of open application programming interfaces (APIs), enabling developers to create custom plugins and analysis tools. These will be accessible via the platform, which will allow users to purchase user sentiment data and various customized features.
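
A developer-side call against such an API might look like the following sketch, assuming the Python requests library; the endpoint, parameters and authentication scheme are hypothetical, as the platform does not define its API surface here.

```python
import requests

# Hypothetical endpoint and parameters; the article does not specify the API surface.
BASE_URL = "https://api.example-sentiment-platform.io/v1"

def fetch_sentiment(topic: str, dictionary: str, api_key: str) -> dict:
    """Fetch aggregated sentiment for a topic, scored with a named sentiment dictionary."""
    resp = requests.get(
        f"{BASE_URL}/sentiment",
        params={"topic": topic, "dictionary": dictionary},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example (requires a valid key): fetch_sentiment("BTC", "crypto-trading", "MY_KEY")
```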

Sentiment and Natural Language Processing

The accuracy of a sentiment analysis system is, in principle, determined by how well its judgments agree with those of human beings. This is typically established by measures based on precision and recall over two or more target categories such as negative, positive and neutral phrases.
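
The following sketch shows how such agreement might be measured against human judgments, assuming scikit-learn is installed; the labels and toy data are illustrative.

```python
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

# Toy comparison of system output against human judgments over three categories.
human =  ["positive", "negative", "neutral", "negative", "positive", "neutral"]
system = ["positive", "neutral",  "neutral", "negative", "positive", "negative"]

precision, recall, f1, _ = precision_recall_fscore_support(
    human, system, labels=["negative", "positive", "neutral"], zero_division=0
)
print("accuracy:", accuracy_score(human, system))
for lbl, p, r in zip(["negative", "positive", "neutral"], precision, recall):
    print(f"{lbl}: precision={p:.2f} recall={r:.2f}")
```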

However, according to research, human raters on average only agree about 80% of the time. Thus, a program which achieves 70% accuracy in classifying sentiment is doing nearly as well as humans. Even if a program were "correct" 100% of the time, humans would still disagree with its assessment 20% of the time on average, since people disagree that much about any answer.

On the other hand, computer systems will make very different errors than human assessors, and thus the figures are not entirely comparable. For instance, a computer system will have trouble with negations, exaggerations, humor, or sarcasm, which typically are easy to handle for a human reader, and some classifications a computer program makes will seem simplistic to a human.

In general, the utility of sentiment analysis, as it is defined in academic research, for practical commercial tasks has been called into question, mostly because the simple one-dimensional model of sentiment from negative to positive yields little actionable information for a company worrying about the effect of public discourse on, for example, brand or corporate reputation. Traders of stocks and commodities, likewise, will benefit more from signals related to ‘buying’ and ‘selling’ than from general sentiment.

To better fit market needs, evaluation of sentiment analysis today has moved to more task-based measures. For market research professionals, for example, the focus of the evaluation data set is less on the content of the text under consideration and more on the effect of that text on brand reputation.

Method and Features

Existing approaches to sentiment analysis can be grouped into three main categories: knowledge-based techniques, statistical methods, and hybrid approaches (Cambria, E; Schuller, B; Xia, Y; Havasi, C (2013). "New avenues in opinion mining and sentiment analysis". IEEE Intelligent Systems. 28 (2): 15–21). All three categories have significant downsides which can be addressed using the outlined approaches.

Graph Theory

Graph-theoretic methods, in various forms, have proven particularly useful in sentiment analysis, since natural language often lends itself well to discrete structure. Traditionally, syntax and compositional semantics follow tree-based structures, whose expressive power lies in the principle of compositionality, modeled in a hierarchical graph.

More contemporary approaches to modeling the syntax of natural language, such as head-driven phrase structure grammar, use typed feature structures, which are directed acyclic graphs.

Within lexical semantics, especially as applied to computers, modeling word meaning is easier when a given word is understood in terms of related words; semantic networks are therefore important in computational linguistics.

Still other methods in phonology (e.g. optimality theory, which uses lattice graphs) and morphology (e.g. finite-state morphology, using finite-state transducers) are common in the analysis of language as a graph. The usefulness of this area of mathematics to semantic analysis has given rise to organizations such as TextGraphs, as well as various 'Net' projects, such as WordNet, VerbNet, and others.

Basic Cohesion Metric

The Basic Cohesion Metric is based solely on the frequency of sentiment-bearing nodes in, or derived from, the source text, i.e. the sum of polarity values for all nodes in the graph.
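
A minimal sketch of the metric, assuming polarity values have already been assigned to nodes (e.g. from a lexical resource); the words and scores are illustrative.

```python
# Basic Cohesion Metric: sum of polarity values over all sentiment-bearing nodes.
# Polarity values here are illustrative; in practice they come from a lexical resource.
nodes = {
    "rally":       +1.0,
    "profit":      +0.8,
    "crash":       -1.0,
    "uncertainty": -0.5,
    "market":       0.0,   # non-sentiment-bearing node contributes nothing
}

basic_cohesion = sum(nodes.values())
print(f"basic cohesion score: {basic_cohesion:+.2f}")  # +0.30
```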

Relation Type Metric

Relation Type Metric modifies the basic metric with respect to the types of relations in the text-derived graph. For each node in the graph, its sentiment value is the product of its polarity value and a relation weight for each relation this node enters into in the graph structure.

Unlike most lexical chaining algorithms, not all relations are treated as equal. In this sentiment overlay, the relations deemed most relevant are those that potentially denote a relation on the affective dimension, like antonymy, and those which constitute key organizing principles of the database, such as hypernymy.

A hyponym is a word or phrase whose semantic field is included within that of another word, its hyperonym or hypernym. For example, pigeon, crow, eagle and seagull are all hyponyms of bird (their hypernym), which, in turn, is a hyponym of animal. Potentially affect-bearing relations have the strongest weighting, while more amorphous relations, such as “also see”, have the lowest.
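
A sketch of the Relation Type Metric under these assumptions follows; the relation weights, the toy graph and the choice to sum polarity-weight products across relations are illustrative interpretations, not prescribed values.

```python
# Relation Type Metric (sketch): each node contributes its polarity multiplied by a
# weight for every relation it enters into. Weights below are illustrative; the text
# only states that affect-bearing relations (e.g. antonymy) and key organizing
# relations (e.g. hypernymy) get the highest weights, and "also see" the lowest.
RELATION_WEIGHTS = {"antonym": 1.0, "hypernym": 0.8, "also_see": 0.2}

# node -> (polarity, list of relations the node participates in)
graph = {
    "joy":     (+1.0, ["antonym", "hypernym"]),
    "sorrow":  (-1.0, ["antonym"]),
    "feeling": (+0.2, ["hypernym", "also_see"]),
}

score = sum(
    polarity * RELATION_WEIGHTS[rel]
    for polarity, relations in graph.values()
    for rel in relations
)
print(f"relation-type score: {score:+.2f}")  # +1.00
```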

Node Specificity Metric

The Node Specificity Metric (i.e. sites chosen from the pool of available resources) modifies the basic metric with respect to a measure of node specificity calculated on the basis of topological features of the lexical database. The intuition behind this measure is that highly specific nodes or concepts may carry more informational and, by extension, affective content than less specific ones.

Researchers have noted the difficulty of using a knowledge base whose internal structure is not homogeneous and whose idiosyncrasies are not quantified.

The specificity measure aims to factor out population sparseness or density in lexical databases by evaluating the contribution of each node relative to its depth in the hierarchy, its connectivity (links) and other sites in its cluster.
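
A sketch of a specificity-weighted contribution follows; the weighting formula combining depth, links and cluster size is an assumption for illustration only.

```python
# Node Specificity Metric (sketch): weight each node's polarity by a specificity
# factor derived from its depth in the hierarchy, its connectivity (links) and the
# size of its local cluster. The exact formula below is an illustrative assumption.
def specificity(depth: int, links: int, cluster_size: int) -> float:
    return depth / (1 + links + cluster_size)

# node -> (polarity, depth, number of links, nodes in its local cluster)
nodes = {
    "carrion_crow": (+0.4, 9, 2, 3),    # deep, sparsely connected -> more specific
    "bird":         (+0.4, 4, 25, 40),  # shallow, densely connected -> less specific
}

for name, (pol, depth, links, cluster) in nodes.items():
    contribution = pol * specificity(depth, links, cluster)
    print(f"{name}: specificity-weighted contribution = {contribution:+.3f}")
```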

Named Entity Recognition

Named-entity recognition (NER) is used to locate and classify named entities in text into predefined categories such as the names of persons, organizations, locations, expressions of time, quantities, monetary values, etc. In the expression named entity, the word 'named' restricts the task to those entities for which one or many strings, such as words or phrases, stand consistently for some referent. This is closely related to rigid designators, although in practice NER deals with many names and referents that are not philosophically "rigid".

For instance, a term such as ‘Jaguar’ might refer to the animal or to the car maker. ‘Ford’, the automotive company created by Henry Ford in 1903, can be referred to as Ford or Ford Motor Company, although "Ford" can refer to many other entities as well.

Rigid designators include proper names as well as terms for certain biological species and substances, but exclude pronouns, descriptions that pick out a referent by its properties, and names for kinds of things as opposed to individuals (for example "bank").
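
For a concrete sense of NER output, the following minimal example uses spaCy and its small English model (both must be installed separately); the printed labels are typical but depend on the model version.

```python
# Minimal NER example using spaCy.
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Ford Motor Company was founded by Henry Ford in 1903 in Detroit.")

for ent in doc.ents:
    print(ent.text, ent.label_)
# Typical output: Ford Motor Company ORG / Henry Ford PERSON / 1903 DATE / Detroit GPE
```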

Coreference resolution

Coreference resolution aims to find all expressions that refer to the same entity in a text. It is an important step towards higher-level NLP tasks that involve natural language understanding, such as document summarization, question answering, and information extraction.

General Sentiment Dictionary

The system may use Wikipedia as its General Sentiment Dictionary (GSD); blockchain-based approaches will likely use Everipedia in a later iteration. Everipedia's encyclopedia is recognized as the largest English online encyclopedia, with over 6 million articles, including all articles from the English Wikipedia. It has been labeled a 'fork' and 'expansion pack' of Wikipedia, as it provides a significantly larger range of articles than the English Wikipedia, due to Everipedia's lower threshold for notability and emphasis on inclusive criteria.

Crowd-sourced Sentiment Dictionary

Due to the need for context-specific sentiment analysis tools and the rich language used for expressing sentiment in text colored by specific topics (e.g. politics), automatic sentiment analysis suffers heavily from the scarcity of annotated sentiment data.

This is especially true for directional sentiment, i.e. annotations that a holder has sentiment about a specific target.

A human analysis component is required in sentiment analysis, as automated systems are not able to analyze the historical tendencies of the individual commenter or of the platform, and therefore often misclassify the expressed sentiment.

Automated systems misclassify approximately 23% of the comments that human raters classify correctly.

A crowd-sourced sentiment dictionary is supported by an incentivized, consensus-based algorithm which ensures the accuracy of chosen labels through the validation of three to five agreements. Each new label nominally receives a token reward.

However, these tokens are staked until a minimum of three matches are reached. If after three labeling events consensus has not been achieved, the consensus mechanism is extended until a minimum of 80% consensus has been reached.

Research shows that the median number of labeling events needed to reach human agreement is five, resulting in an average of one (1) token reward per labeling activity in the aforementioned use case.
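
A simplified sketch of this consensus-and-staking logic follows; the thresholds (three matching labels, 80% consensus) come from the text above, while the function shape and control flow are illustrative assumptions.

```python
from collections import Counter

# Simplified sketch of the crowd-sourced labeling consensus described above.
# Thresholds follow the text (3 matching labels, 80% consensus); everything else
# is an illustrative assumption.
MIN_MATCHES = 3
CONSENSUS_RATIO = 0.80

def evaluate_labels(labels: list):
    """Return (consensus_label, winning_tagger_indexes) once thresholds are met."""
    counts = Counter(labels)
    label, hits = counts.most_common(1)[0]
    if hits >= MIN_MATCHES and hits / len(labels) >= CONSENSUS_RATIO:
        return label, [i for i, l in enumerate(labels) if l == label]
    return None, []

# Labels arrive one at a time; staked rewards are released only on consensus.
stream = ["buy", "buy", "sell", "buy", "buy"]
for n in range(1, len(stream) + 1):
    winner, winners = evaluate_labels(stream[:n])
    status = (f"consensus on '{winner}', release stakes to taggers {winners}"
              if winner else "stakes remain locked")
    print(f"after {n} labels: {status}")
```

In this toy stream, consensus is reached on the fifth label, which lines up with the observation above that the median number of labeling events is five.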

Blockchain

A sentiment platform could be built on the EOSIO blockchain to perform user sentiment analysis processing. EOSIO allows decentralization of the computational power and storage needed to manage big data. In addition to spreading the costs of running the platform, this decentralized approach leverages the security of blockchain by immutably time-stamping digital contributions.

The aim of a platform built on EOSIO - or a copy thereof - is to provide decentralized application hosting, smart contract capability and decentralized storage enterprise solutions that solve the scalability issues of blockchains like Bitcoin and Ethereum, as well as eliminating fees for users. EOSIO’s multi-threaded structure and its delegated proof-of-stake consensus system make it well suited for a decentralized sentiment analysis platform.

A native token is a utility token that provides both bandwidth and storage on a blockchain, in proportion to total stake. Tokens also allow the owner to cast votes and participate in the on-chain governance of the platform, again in proportion to the owner's stake.

Consensus Mechanisms

Proof of Stake

The proof of stake consensus system was created as an alternative to proof of work (PoW), to address issues such as power consumption in PoW and latency.

When a transaction - such as the stream of a comment to the data pool - is initiated, the transaction data and metadata are fitted into a block with a maximum capacity of 1 megabyte, and then duplicated across multiple computers or nodes on the network.

The nodes are the administrative body of the blockchain and verify the legitimacy of the transactions in each block. To carry out the verification step, the nodes or miners need to solve a computational puzzle, known as the proof of work problem.

The first miner to solve each block's puzzle is rewarded with coin. Once a block of transactions has been verified, it is added to the blockchain, a public, transparent ledger.

Mining requires significant amounts of computing power to run the cryptographic calculations that solve these computational challenges, and that computing power in turn demands a large amount of electricity.

In 2015, it was estimated that one Bitcoin transaction required the amount of electricity needed to power up to 1.5 American households for a day. To foot the electricity bill, miners would usually sell their awarded coins for fiat money, which would lead to a downward movement in the price of the cryptocurrency.

The proof of stake (PoS) addresses this issue by attributing mining power to the proportion of coins held by a miner. This way, instead of utilizing energy to answer PoW puzzles, a PoS miner is limited to mining a percentage of transactions that is reflective of his or her ownership stake. For instance, a miner who owns 3% of the Bitcoin available can theoretically mine only 3% of the blocks.

Bitcoin uses a PoW system and as such is susceptible to a potential Tragedy of the Commons. The Tragedy of the Commons here refers to a future point in time when there will be fewer bitcoin miners available due to little to no block reward from mining.

The only earnings will come from transaction fees, which will also diminish over time as users opt to pay lower fees for their transactions. With fewer miners than required mining for coins, the network becomes more vulnerable to a 51% attack. A 51% attack occurs when a miner or mining pool controls 51% of the computational power of the network and creates fraudulent blocks of transactions for himself, while invalidating the transactions of others in the network. Under PoS, an attacker would instead need to obtain 51% of the cryptocurrency itself to carry out a 51% attack, and proof of stake discourages this by making it disadvantageous for a miner with a 51% stake in a cryptocurrency to attack the network.

Although it would be difficult and expensive to accumulate 51% of a reputable digital coin, a miner with a 51% stake in the coin would not have it in his best interest to attack a network in which he holds a majority share.

If the value of the cryptocurrency falls, this means that the value of his holdings would also fall, and so the majority stake owner would be more incentivized to maintain a secure network.

Delegated Proof-of-stake

To mitigate the potential for centralization and the associated negative impacts, the system makes use of delegated proof of stake (DPoS). In DPoS a total of N witnesses sign the blocks and are voted on by those using the network with every transaction.

By using a decentralized voting process, DPoS is by design more democratic than comparable systems. Rather than eliminating the need for trust altogether, DPoS has safeguards in place to ensure that those trusted with signing blocks on behalf of the network are doing so correctly and without bias.

Additionally, each block signed must have a verification that the block before it was signed by a trusted node. DPOS eliminates the need to wait until a certain number of untrusted nodes have verified a transaction before it can be confirmed.

This reduced need for confirmation produces an increase in speed of transaction times. By intentionally placing trust with the most trustworthy of potential block signers, as decided by the network, no artificial encumbrance need be imposed to slow down the block signing process. DPOS allows for many more transactions to be included in a block than either proof of work or proof of stake systems.

DPOS technology allows cryptocurrency technology to transact at a level where it can compete with the centralized clearing houses like Visa and Mastercard. Such clearinghouses administer the most popular forms of electronic payment systems in the world.

In a delegated proof of stake system centralization still occurs, but it is controlled. Unlike other methods of securing cryptocurrency networks, every client in a DPOS system has the ability to decide who is trusted rather than trust concentrating in the hands of those with the most resources.

DPOS allows the network to reap some of the major advantages of centralization, while still maintaining some calculated measure of decentralization. This system is enforced by a fair election process where anyone could potentially become a delegated representative of the majority of users.

Rationale for DPOS

● Give shareholders a way to delegate their vote to a key (one that doesn’t control coins ‘so they can mine’)

● Maximize the dividends shareholders earn

● Minimize the amount paid to secure the network

● Maximize the performance of the network

● Minimize the cost of running the network (bandwidth, CPU, etc.)

Shareholders are in Control

The fundamental feature of DPoS is that shareholders remain in control. If they remain in control, then the system is decentralized. As flawed as voting can be, when it comes to the shared ownership of a company it is the only viable way. Fortunately, if you do not like who is running the company you can sell, and this market feedback causes shareholders to vote more rationally than citizens.

Every shareholder gets to vote for someone to sign blocks in their stead (a representative if you will). Anyone who can gain 1% or more of the votes can join the board.

The representatives become a “board of directors” which take turns in a round-robin manner, signing blocks. If one of the directors misses their turn, clients will automatically switch their vote away from them.

Eventually these directors will be voted off the board and someone else will join. Board members are paid a small token to make it worth their time ensuring uptime and an incentive to campaign.

They also post a small bond equal to 100x the average pay they receive for producing a single block. To make a profit a director must have greater than 99% uptime.
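
The following toy simulation illustrates the round-robin signing and vote-switching behavior described above; the vote counts, uptime probabilities and vote penalty are arbitrary illustrative values.

```python
import random

# Toy simulation of the DPoS "board of directors": delegates sign blocks in
# round-robin order, and clients switch votes away from a delegate that misses
# its turn. All numbers below are illustrative.
random.seed(7)

votes = {"alice": 120, "bob": 95, "carol": 80, "dave": 60}
UPTIME = {"alice": 0.999, "bob": 0.98, "carol": 0.999, "dave": 0.90}
VOTE_PENALTY = 10  # votes withdrawn after a missed block

for round_no in range(3):
    board = sorted(votes, key=votes.get, reverse=True)  # current board order
    for delegate in board:                              # round-robin signing
        if random.random() < UPTIME[delegate]:
            print(f"round {round_no}: {delegate} signed their block")
        else:
            votes[delegate] = max(0, votes[delegate] - VOTE_PENALTY)
            print(f"round {round_no}: {delegate} MISSED; votes now {votes[delegate]}")

print("final standings:", votes)
```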

Pooled Mining as Delegated Proof of Work

So how is this different from a proof-of-work (PoW) consensus? With PoW, users must pick a mining pool, and each pool generally has 10% or more of the hash power. The operator of these pools is like a representative of the clients pointed at the pool.

PoW systems expect users to switch pools to keep power from becoming too centralized, but collectively five major pools control the network and manual user intervention is expected if one of the pools is compromised. If a pool goes down then the block production rate slows proportionally until it comes back up. Which pool one mines with becomes a matter of politics.

Reasons to not randomly select representatives from all users:

● High probability they are not online

● Attackers would gain control proportional to their stake, without any peer review

● Without any mining at all, the generation of a random number in a decentralized manner is impossible and thus an attacker could control the random number generation

Scalability

Employing a fixed validation cost per transaction and a fixed fee per transaction results in limits to the amount of decentralization that can take place.

Assuming the validation cost exactly equals the fee, a network is completely centralized and can only afford one validator. Assuming the fee is 100x the cost of validation, the network can support 100 validators.
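
The arithmetic behind this claim is simple: the number of validators the fee can sustain is roughly the fee divided by the per-transaction validation cost, as the short sketch below shows (values expressed in integer base units for exactness).

```python
# Worked example of the scalability argument above: with a fixed fee and a fixed
# per-transaction validation cost, the fee can sustain at most fee / cost validators.
def max_validators(fee_units: int, validation_cost_units: int) -> int:
    return fee_units // validation_cost_units

print(max_validators(fee_units=1, validation_cost_units=1))    # 1   -> fully centralized
print(max_validators(fee_units=100, validation_cost_units=1))  # 100 -> up to 100 validators
```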

Developers of DPOS assume that everyone with less than the amount required to validate won’t participate. Also assumed is a “reasonable” distribution of wealth. It’s clear that unless alternate chains have unusually high fees, there will only be a handful of people with enough stake to validate profitably.

In conclusion, the only way for PoS to work efficiently is to delegate. In the case of tokens, holders can pool their stake by some means, and ultimately this will end up like DPoS prior to approval voting with a variable number of delegates.

Delegates would not actually receive any income as with mining pools because the validation expenses will consume the vast majority of the transaction fees.

The end result is that decentralization has a cost proportional to the number of validators and that costs do not disappear. At scale, these costs will centralize any system that does not support delegation.

This kind of centralization should be designed as part of the system from the beginning so that it can be properly managed and controlled by the users, instead of evolving in some ad hoc manner as an unintended consequence.

Role of Delegates

A witness on the blockchain is an authority that is allowed to produce and broadcast blocks. Producing a block consists of collecting transactions of the P2P network and signing it with the witness’ signing private key.

A witness’ spot in the round is assigned randomly at the end of the previous block

Analysis

Future sentiment analysis tools shall utilize more granular and/or topic-appropriate tagging.

● Emotion: brief, organically synchronized evaluation of a major event

● Angry, sad, joyful, fearful, ashamed, proud, elated

● Mood: diffuse, non-caused, low-intensity, long-duration change in subjective feeling

● Cheerful, gloomy, irritable, listless, depressed, buoyant

● Interpersonal stances: affective stance toward another person in a specific interaction

● Friendly, flirtatious, distant, cold, warm, supportive, contemptuous

● Attitudes: enduring, affectively colored beliefs and dispositions towards objects or persons

● Liking, loving, hating, valuing, desiring

● Personality traits: stable personality dispositions and typical behavior tendencies

● Nervous, anxious, reckless, morose, hostile, jealous

Example Use Cases

Virtual Asset and Cryptocurrency Trading

While sentiment analysis has been a standard tool for stock and commodity traders worldwide for years, such tools are basically non-existent for virtual asset trading.

While nascent, these markets have already reached daily trading volumes of $15-25 billion.

The problem of information validity is multiplied by the crowd’s incentives to manipulate market sentiment. FUDers spread fear, uncertainty and doubt to shake weak hands and claim their coins, and mooners hype to get their return on investment (ROI) at the expense of newcomers.

This dynamic opens crypto-markets to scammers of all kinds. Small amounts of valuable information are often the key to incredible profits, so for the ambitious trader or investor the exhaustive search must continue. But the lack of clear and reliable reputations makes that information and data hard to acquire.

Trust is expensive, and crypto markets cannot professionalize until these problems are solved.

Politics & Governance

Analyzing the polarity of texts has a long tradition in politics and governance. A prominent example is media negativity, a concept that captures the over-selection of negative over positive news, the tonality of media stories, and the degree of conflict or confrontation in the news.

Its “measurement in quantitative content analytic research can be defined as the process of linking certain aspects of textual data to numerical values that represent the presence, intensity, and frequency of textual aspects relevant to communication research” (Lengauer et al. 2012, p. 183).

A number of recent studies demonstrate the benefits of sentiment analysis for such analyses (Van Atteveldt et al. 2008; Soroka 2012; Young and Soroka 2012; Burscher et al. 2015; Soroka et al. 2015a, b). Sentiment analysis has also been used to establish the level of support for legislative proposals or polarization from the analysis of parliamentary debates (Monroe et al. 2008), to identify issue positions or public opinion in online debates (Hopkins and King 2010; Ceron et al. 2012; González-Bailón and Paltoglou 2015), or for studying negative campaigning (Kahn and Kenney 2004; Lau and Pomper 2004; Geer 2006; Nai and Walter 2015), to mention just a few prominent uses.

Sports & Betting

Point spreads and over/under lines are set by sports betting agencies to reflect all publicly available information about upcoming games, including team performance and the perceived outlook of fans.

Assuming market efficiency, one should not be able to devise a betting strategy that wins often enough to be profitable.

Several researchers have designed models to predict game outcomes. Hong and Skiena used sentiment analysis from news and social media to design a successful betting strategy.

However, their main evaluation was on in-sample data, rather than forecasting. Also, they only had Twitter data from one season (2009) and therefore did not use it in their primary experiments.

Later work used large quantities of tweets from the 2010–2012 seasons in a genuine forecasting setting for both winner (WTS) and over/under prediction.

Marketing & Branding

To understand customers' feelings about a brand, it's important to analyze social sentiment regularly. It's an exercise that aids in the understanding of people’s feelings about a company, product, or service.

Companies can use an automatic sentiment analysis tool to obtain a simple overview of their brand's health without analyzing each post.

Review Rating

There are a variety of potential applications for automated review rating. Tong’s (2001) system for detecting and tracking opinions in online discussions could benefit from the use of a learning algorithm, instead of (or in addition to) a hand-built lexicon.

With automated review rating (opinion rating), advertisers could track advertising campaigns, politicians could track public opinion, reporters could track public response to current events, and trend analyzers could track entertainment and technology trends.

Oracles for Rating Sentiment Algorithms

Oracles, in the context of blockchains and smart contracts, are agents which find and verify real-world occurrences and submit this information to a blockchain to be used by smart contracts.

Smart contracts contain value and only unlock that value if certain predefined conditions are met. When a particular value is reached, the smart contract changes its state and executes the programmatically predefined algorithms, automatically triggering an event on the blockchain.

The primary task of oracles is to provide these values to the smart contract in a secure and trusted manner. Blockchains cannot access data outside their network.

An oracle is a data feed – provided by a third party or service – designed for smart contracts on a blockchain. Oracles provide external data and trigger smart contract executions when predefined conditions are met. Such conditions could involve any data, such as weather temperature, a successful payment, price fluctuations, etc.

The platform should allow for the deployment of inbound software oracles to rate the performance and accuracy of sentiment algorithms which predict real world occurrences (RWO). Examples for RWO are:

  • Outcomes of sport events reported on news sites (e.g. horse or car races)
  • Political appointments
  • Public listings (stocks; virtual currencies)

Developers can deploy oracles which confirm outcomes predicted by their sentiment algorithms. To deploy an oracle the developer will stake a number of Tokens (“Oracle Stakes” or “OS”).

Developers may set their own confidence level (CL) for the outcome of the sentiment algorithm prediction to which the OS will be applied (i.e. 75%).

As long as the result reported by the oracle meets or exceeds the CL, no OS tokens are transferred. If the CL is not achieved, tokens from the OS are transferred to subscribers of the algorithm.
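
A sketch of this settlement rule follows; the function and field names are hypothetical, and in practice the logic would run in a smart contract rather than in Python.

```python
# Sketch of the oracle-stake (OS) settlement rule described above: if the
# oracle-reported accuracy meets or exceeds the developer's confidence level (CL),
# the stake stays with the developer; otherwise it is split among the algorithm's
# subscribers. Names are illustrative.
def settle_oracle_stake(confidence_level: float, reported_accuracy: float,
                        oracle_stake: float, subscribers: list) -> dict:
    if reported_accuracy >= confidence_level:
        return {"developer": oracle_stake}       # stake returned, nothing transferred
    share = oracle_stake / len(subscribers)      # stake forfeited to subscribers
    return {sub: share for sub in subscribers}

print(settle_oracle_stake(0.75, 0.81, 500.0, ["sub1", "sub2"]))  # developer keeps stake
print(settle_oracle_stake(0.75, 0.60, 500.0, ["sub1", "sub2"]))  # 250 tokens each
```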

Why Sentiment on the Blockchain

Comments and social media postings can be manipulated or entirely removed by the author or moderators. This can lead to false signals in comment streams used for sentiment analysis.

To make reliable sentiment predictions, the platform should record all collected data on a blockchain where it is time-stamped, creating an immutable record for analysis.

This process will further allow reliable back-testing of algorithms developed by creators on the platform (and allow these to be rewarded accordingly).

All sentiment analysis algorithms created by developers on the platform must further be time-stamped and recorded to a blockchain to ensure that only future oracle data is utilized for the verification of sentiment algorithm ratings.

Token

Tokens can be awarded to website owners who provide demographic and comment data through an API to analysts using these feeds for sentiment analysis purposes.

Token Mechanics

Tokens fulfill the following criteria:

  • Bestow rights to usage, voting/staking, or plain access to the platform
  • Value exchange
  • Toll (pay-per-use)

Data Providers

Website operators can verify their websites as sources for the platform via a WordPress plugin or JavaScript snippet to become verified data sources (VDS). After the validation is completed, the website operator receives an allocation of Sentiment Tokens issued by smart contract to a wallet on the platform (the Initial Stake). 100% of the Initial Stake is staked immediately by a smart contract.

As developers add the VDS to their sentiment algorithms and end users start consuming the resulting sentiment data, tokens from the Initial Stake are unstaked and released to the account holder.

The total number of CP sources is limited to 100,000 (SourceMax). Should the total number of sources reach 100,000, new sources will be added to a waiting list until existing sources expire. At SourceMax, sources expire if they are not utilized by any developer over a period of 360 days.
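
The cap, waiting list and expiry rules could be sketched as follows; class and field names are illustrative, and in practice this bookkeeping would live on-chain rather than in Python.

```python
import time

# Sketch of the verified-data-source registry: a hard cap (SourceMax), a waiting
# list once the cap is reached, and expiry of sources unused for 360 days at cap.
SOURCE_MAX = 100_000
EXPIRY_SECONDS = 360 * 24 * 3600

class SourceRegistry:
    def __init__(self):
        self.active = {}        # source_id -> timestamp of last use by an algorithm
        self.waiting = []       # FIFO waiting list once SourceMax is reached

    def register(self, source_id: str):
        if len(self.active) < SOURCE_MAX:
            self.active[source_id] = time.time()
        else:
            self.waiting.append(source_id)

    def expire_unused(self):
        now = time.time()
        for sid, last_used in list(self.active.items()):
            if now - last_used > EXPIRY_SECONDS:
                del self.active[sid]
                if self.waiting:                     # promote from the waiting list
                    self.active[self.waiting.pop(0)] = now
```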

Developers

Developers familiar with sentiment analysis can deploy algorithms on the platform and choose appropriate CPs to source data from. Developers will receive 5,000 Sentiment Tokens for the deployment of a new algorithm, deposited by smart contract into their account.

100% of the initial token allocation is staked and will be released to the developer when consumers subscribe to his sentiment algorithm feed.

Taggers

Any user can earn tokens by tagging data sources. Every tag must be validated by five other independent taggers on the platform; each tagging event is therefore rewarded with an average of one token. Each token is staked by smart contract until first consumed by a developer's algorithm.

Expected Resource Requirements

  • 6x servers, 4 vCPUs, 15 GB memory for the Kubernetes cluster hosting the Analyzer & UI
  • 3x servers, 8 vCPUs, 52 GB memory for the crawler database
  • 13x servers, 4 vCPUs, 26 GB memory for the main database
  • 3x servers, 4 vCPUs, 26 GB memory for the release-environment database where all testing and QA takes place
  • 18x micro-instances, 1 vCPU, 2 GB memory for the crawler's workers (sufficient to crawl about 50,000 websites)

Requirement calculations are based on EOS.IO.

Further reading: An Internet of Identity