Hi, I’m Mark Nadal and I am not Twitter. But I was fortunate enough to be included in the recent paper that Jack's team published. The review of our protocol, GUN, was mostly accurate but already outdated. Since the time of writing, we have seen up to 55K users per second on our network and nearly a quarter-billion downloads in 2020 alone. We are now trying to rapidly ramp up 10X, to a capacity of 100M monthly users by the end of the year.
While this does not compare to Twitter's scale, it was done on a shoestring budget and taught us many important lessons during our collaborations with non-profits and other startups. Simply put: human problems are more important than technological problems.
In this proposal to Twitter, I explain why and how we could create bluer skies for the future.
Our community believes in first-principles thinking, meaning we originated all of our own algorithms through Socratic dialogue. We then test them at scale, in code, to see if we are right. We have found that the least wrong algorithms were designed by imagining how ten billion people might coordinate themselves if they were to perform the same job as the internet itself.
I believe that Bluesky should be an evolution of the internet, one that addresses the human-specific problems that were not obvious at the time of the internet’s technologically focused creation.
Our conclusion is that while centralization is expedient, its abuses create harm that far outweighs its potential gains. Giving every individual their own freedom is a way to reduce harm’s reach, but decentralization on its own does not lead to fairness. Fairness is interpreted differently depending on your cultural background, so multi-cultural fusion becomes important in helping people bridge towards a more collective fairness.
These are our values: Freedom, Fairness, Fusion. Now, what patterns are needed to codify them?
Our goal is not to tell people what to think, but to provide tools that empower free thinking. The theory is that non-violent ideas independently form prosocial consensus over time[1][2][3], and therefore out-survive violent content, which has a tendency to be reactionary and authoritarian. Being told what to think triggers a friend-or-foe response, and is therefore classified as a violent form of communication.
So what algorithms are needed to enable independent thought?
A user ought to be able to own their own identity, not be reliant upon a third party. To understand how we apply public-key cryptography to P2P identities and end-to-end encryption with ECDSA/ECDH, please see our explainer videos. All of our security and cryptographic functions use the native, industry-audited WebCrypto APIs. “3FA” (3-Friend Auth) is a Shamir-like social-recovery scheme for an account. And “data-mining resistance” is a way to troll mass surveillance even in public settings, by using proof-of-work seeds for decryption.
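To make the “Shamir-like” recovery idea concrete, here is a minimal sketch (not GUN’s actual implementation, and the field size and threshold are illustrative assumptions) of 2-of-3 Shamir secret sharing over a prime field: each of three friends holds one share of an account-recovery secret, and any two shares reconstruct it.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret


def split(secret: int, k: int = 2, n: int = 3):
    """Split `secret` into n shares; any k of them reconstruct it.

    The secret is the constant term of a random degree-(k-1)
    polynomial; each share is one point on that polynomial.
    """
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]

    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME

    return [(x, poly(x)) for x in range(1, n + 1)]


def recover(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total
```

Fewer than k shares reveal nothing about the secret, which is what lets friends act as recovery agents without any one of them being able to take over the account.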
Actually sharing and synchronizing data is its own technical challenge. Most teams use logs to achieve data consistency in a decentralized network, but for us, logs were too slow with just a few hundred users. Instead, we use a type of CRDT that allows for deletes and updates that can overwrite data in-place. To understand how it works, please see our explainer. Logs also limit what applications can do, so we chose graphs because they enable a wide range of use cases, including immutability, when desired.
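As an illustration of how in-place overwrites can still converge, here is a toy last-write-wins merge with a deterministic tiebreak. GUN’s real conflict-resolution algorithm (HAM) additionally handles machine-clock drift and deferred future states; this sketch shows only the convergence core, and the tuple layout is an assumption for the example.

```python
def merge(current, incoming):
    """Merge two states for the same field of a graph node.

    Each state is (timestamp, value). The higher timestamp wins;
    on a tie, the lexically larger serialized value wins, so every
    peer converges to the same answer regardless of arrival order.
    """
    (t1, v1), (t2, v2) = current, incoming
    if t2 > t1:
        return incoming
    if t2 < t1:
        return current
    return current if str(v1) >= str(v2) else incoming
```

Because the rule depends only on the two states and not on which arrived first, replicas that exchange updates in any order end up with identical data, without a shared log.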
Finally, search-result rankings and badges are measured using the Iris algorithm. It calculates a confidence score by down-weighting Web of Trust attestations against various uncertainties in the network, such as the gap between the population expected to exist and the population actually stored. For instance, we know ~8 billion people exist, but the Web of Trust may only reach 10 million. By incorporating that uncertainty into the score, we get a more accurate result.
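The exact Iris formula is beyond the scope of this proposal, but the idea of down-weighting support by coverage uncertainty can be sketched as follows. The function name and the logarithmic uncertainty term are illustrative assumptions, not the production algorithm.

```python
import math


def confidence(agree: int, disagree: int, reached: int, expected: int) -> float:
    """Down-weight attestation support by network uncertainty.

    `agree`/`disagree` are Web of Trust attestations for and against
    a claim; `reached` is how many accounts the Web of Trust covers,
    and `expected` is how many should exist (~8 billion people).
    The unreached fraction contributes pseudo-counts of doubt, so a
    handful of attestations under low coverage scores poorly.
    """
    coverage = reached / expected        # e.g. 10M / 8B ≈ 0.00125
    uncertainty = -math.log(coverage)    # more unreached -> more doubt
    return agree / (agree + disagree + uncertainty)
```

Under this sketch, five colluding friends attesting to a claim in a sparsely covered network score far below a hundred independent attestations, which matches the behavior the Elon Musk example below describes.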
This means that even if many users attempt to collude on fake news or information, the algorithm can detect missing or suspicious connections in the data by tracing how many "degrees of separation" exist between the peer-to-peer accounts. We applied our research to the Bitcoin network and found that bots form a large sybil cluster, but are usually 4 degrees out, which matches independent research[4][5] showing that any two random human users are within about 3.5 degrees of each other.
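The degrees-of-separation check itself is a plain breadth-first search over the social graph. The threshold of 4 below is taken from the sybil-cluster observation above; the function names and graph encoding are otherwise illustrative.

```python
from collections import deque


def degrees(graph: dict, start: str) -> dict:
    """Breadth-first search: shortest "degrees of separation"
    from `start` to every reachable account."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for friend in graph.get(node, []):
            if friend not in dist:
                dist[friend] = dist[node] + 1
                queue.append(friend)
    return dist


def flag_suspicious(graph: dict, viewer: str, threshold: int = 4):
    """Accounts at or beyond `threshold` degrees from the viewer
    (or unreachable entirely) are candidates for a sybil warning."""
    dist = degrees(graph, viewer)
    return {n for n in graph if dist.get(n, threshold) >= threshold}
```

Since a random human is typically within ~3.5 degrees, accounts that sit 4 or more hops away from every honest vantage point stand out, no matter how densely they attest to one another internally.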
Let’s explore what all of this might actually look like for a user.
A user may still click on a profile claiming to be Elon Musk, but a red unverified badge with a “2%” score will be listed, based on the automatic calculations. This holds even if all of their 1st-degree friends collude and make lying attestations that the account is indeed Elon Musk: although your friends’ attestations are weighted more heavily in your Web of Trust, a larger body of more consistent external attestations conflicts with your 1st degree. In contrast, if your friends' attestations agree with the external degrees for “Elon Musk”, then there is both low conflict and low uncertainty, meaning it is probably the real Elon Musk, and a green “100%” checkmark would display.
But if the user were to search for an unpopular phrase, the only counterweight that can be applied is the increased total uncertainty from fewer search results than would have been expected, or none at all. The advantage of this approach is that it applies equally well to profiles, search phrases, tweets, photos, and wiki texts. From a UI perspective, this means it would be better if this “protocol” operated at the browser or OS level, applying to any element. This would enhance the existing web, including sites like Twitter, rather than off-boarding users from it, especially when combined with secure rendering techniques that maintain a user’s privacy.
To put this in the language of our values, it first enables diversity through Freedom of thought, identity, and speech. It then measures Fairness through either non-violent consensus, or by using the conflict and uncertainty themselves to warn that the impartiality of a subject is disputed across cultures. Either way, this constantly exposes users to the bias of their echo chamber and lets them “escape” to see how other degrees of separation have assigned different meanings to the same words: why a minority group in America may share the majority view in China, or how some people are underrepresented in all cultures.
We can find bluer skies if we walk a kilometer in another’s football moccasins. This is the Fusion that a global community needs to engage in more meaningful civil discourse.
Optional: provide a marketplace that, if a user desires, automatically cross-posts their content to all other social media sites, in order to help amplify their voice. By default this will be free and will set their content to a Creative Commons license for others to remix; if they do not want that, the overlay network will ask them to pay a monthly subscription to list proprietary or encrypted items for sale. Different curation and recommendation algorithms could also be installed from the store.
Additionally, it may be worth asking developers to pay for (or pass forward) any off-user-device fees incurred by their apps, like the bandwidth costs of rebroadcasting videos. Interestingly, if you use bandwidth as a proxy for attention time, companies like Disney could make more money using a CDN of pirates than by hosting the content themselves and charging subscriptions. More importantly, I believe we should shift society towards a post-capitalist, post-socialist economic system.
Thank you for your consideration.
Also published on GitLab.