An interview with Ray Dillinger (this interview is published in two episodes due to its technical content; here we discuss smart contracts, the Lightning Network, the token economy, and cybersecurity)
I meet and connect with many inspiring entrepreneurs, innovators, thought leaders and academics working relentlessly in the blockchain space. Some of them are in the public eye and some are not. I thought it would be great to dive into the minds of these known and unknown blockchain innovators. Hence, I’m starting this interview blog. This is a technical interview, so if you are new to blockchain, you may want to get up to speed with CoinDesk.
Episode 1 is listed here.
Q5. Alluding to the previous problem, the Lightning Network is supposed to champion scalability, though it’s a work in progress and people are already raising doubts. Bitcoin’s average confirmation time touched almost 3,500 minutes on 1 January 2018 and transaction fees reached $55. Though both numbers have come down since, they are still high enough to cause problems for Bitcoin as a peer-to-peer payment system. During Bitcoin’s development iterations, were these problems discussed or foreseen by Satoshi and everyone else involved?
Ray Dillinger: Actually, I think I was the only guy who was very worried about scalability. Bitcoin is a seriously brute-force approach to preventing double spends, spending all the bandwidth all the time to make sure that everyone’s aware of the first spend in real time. The total bandwidth scales with the number of users multiplied by the number of transactions, and assuming each user actually makes a transaction every so often (i.e., is actually a user and not just some spook monitoring the block chain for snooping purposes), that’s scaling with the square of the userbase. And scaling with the square of the userbase just isn’t kind to your application on a planet with eight billion or so people.
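To make that back-of-the-envelope scaling concrete, here is a minimal sketch (my own illustration, not Ray’s figures; the per-user transaction rate and transaction size are assumed placeholder numbers) of how network-wide relay traffic grows with the square of the userbase when every node must see every transaction:

```python
# Hypothetical parameters for illustration only: ~2 transactions per user
# per day, ~250 bytes per transaction.
def daily_relay_bandwidth(users: int, tx_per_user: float = 2.0,
                          tx_size_bytes: int = 250) -> float:
    """Total bytes relayed per day if every node must see every transaction."""
    transactions = users * tx_per_user        # transaction count grows linearly
    per_node = transactions * tx_size_bytes   # each node downloads all of it
    return per_node * users                   # times n nodes -> O(n^2) overall

for n in (1_000, 1_000_000, 8_000_000_000):
    gb = daily_relay_bandwidth(n) / 1e9
    print(f"{n:>13,} users -> {gb:,.1f} GB relayed per day network-wide")
```

Even with these modest assumed numbers, the quadratic term dominates long before the userbase reaches planetary scale.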
There are scalability solutions of various kinds. What they all have in common is that the faster and higher capacity they get, the more they look like a completely centralized payment processing service. I’ll go over a few options.
First there is what they’re calling the lightning network. I think I’m going to call it the “ripple” protocol after the first altcoin implementation that made it work. Technically it’s sound. But I don’t think it’s going to capture use cases as well as the developers hope it will. What it will do is create a drive toward conventional-style banking. All these interlocking escrow accounts tie up capital, and if you’re going to tie up capital you’d rather tie it up in an account that allows you to exchange payments with thousands of people than in an account that allows you to exchange payments with one. That’s basic business efficiency. So you’re going to tie it up in your account with Trent’s Bank and Trust, and when Alice wants to buy a coat off your rack, you’re going to direct her to pay your account at Trent’s Bank and Trust. Which she can do, because she also has an account at Trent’s Bank and Trust, right? Or one of a very few other such institutions, which all do interbank settlements in some civilized manner where we mortals don’t get to see it.
And, later on, if there’s a dispute, what happens? Alice can show her interactions with Trent’s Bank and Trust, and you can show your interactions with Trent’s Bank and Trust, and Trent’s Bank and Trust can show its interactions with both of you, and aside from the fact that now there is cryptographic evidence to add a weight of proof to the testimony of the accounting records, the situation is absolutely no different than the way we deal with banks right now. And of course if that’s the business Trent is in, Trent is going to comply with KYC/AML laws, and will have to be ready to freeze accounts on command, and all the rest of it, because Trent’s is going to have to be, in law as well as name, a bank. The ideal where people establish their own payment channels with the lightning network and preserve their financial privacy is possible, but in practical terms a fantasy because it would require them to disadvantage themselves — to use their money far less efficiently than their neighbor who just opens a bank account with Trent.
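As a toy model of the capital-lockup point above (my own sketch, not Ray’s; the class and the “TrentsBank” channel object are hypothetical), funds deposited in a two-party payment channel can only move between those two parties off-chain, which is exactly the pressure that pushes users toward a few large, bank-like hubs:

```python
# A deliberately simplified two-party channel: deposits stay locked until the
# channel closes, and payments can only shift balance between the two members.
class Channel:
    def __init__(self, a: str, b: str, deposit_a: float, deposit_b: float):
        self.balances = {a: deposit_a, b: deposit_b}   # capital tied up here

    def pay(self, sender: str, receiver: str, amount: float) -> None:
        if receiver not in self.balances:
            raise ValueError("no direct channel; payment must route via a hub")
        if self.balances[sender] < amount:
            raise ValueError("channel capacity exhausted")
        self.balances[sender] -= amount                # off-chain state update
        self.balances[receiver] += amount

shop_hub = Channel("shop", "TrentsBank", 5.0, 5.0)
shop_hub.pay("shop", "TrentsBank", 1.0)    # fine: the hub is the counterparty
try:
    shop_hub.pay("shop", "Alice", 1.0)     # Alice has no channel with the shop
except ValueError as err:
    print(err)                             # so she pays through Trent's instead
```

The same deposit locked with a well-connected hub can reach thousands of counterparties, which is the business-efficiency argument Ray makes above.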
You can also have an “infinite” number of transactions in each block if you make the block into an opaque Merkle tree root, where the ability to prove that a transaction output exists means having the Merkle branch that leads to the transaction that created it so you can show the transaction and then show that it is part of the block. However, you then need extra machinery to carry information about what outputs are invalidated (spent) by transactions in a block, to prevent them from being spent again, and that scales linearly in transactions — so the whole scales linearly. Much more effective use of block chain space, but still linear — and the bulk isn’t really gone, it has just been shifted from the blocks where everybody has to deal with it to the spendable coins where only a few people ever see it.
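Here is a minimal sketch (my own illustration, not from the interview) of that kind of Merkle-branch proof: a block header carries only the root, and whoever holds the branch of sibling hashes can show that a particular transaction is committed to by that root:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double SHA-256, Bitcoin-style."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_and_branch(leaves, index):
    """Return (root, branch), where branch lists the sibling hashes on the
    path from leaf `index` up to the root."""
    level, branch = [h(x) for x in leaves], []
    while len(level) > 1:
        if len(level) % 2:                 # odd level: duplicate the last node
            level.append(level[-1])
        branch.append(level[index ^ 1])    # sibling of our node at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], branch

def verify(leaf, branch, index, root):
    """Re-hash up the branch and check that we land on the committed root."""
    node = h(leaf)
    for sibling in branch:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root, branch = merkle_root_and_branch(txs, 2)
print(verify(b"tx-c", branch, 2, root))    # True: tx-c is part of the block
```

The proof is logarithmic in the number of transactions, which is what makes the opaque-root trick attractive; the spent-output tracking Ray mentions is the part that stays linear.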
You can have the contest to decide who gets to make the next block end before the next block begins. Then, instead of having everybody send their transactions to everybody all the time, you can have all of them send their transactions directly to the node that’s making the next block. This saves you a bunch of your online bandwidth. With most of these schemes, though, there’s a problem that the contest to make the next block is usually decidable by the builder of the current block, if they simply decide what transactions to include and what to exclude in the block they build. And while they save online bandwidth they do nothing to save on the static space occupied by the block chain or the transactions-per-block limitation.
You can use a cross-linked structure — a block mesh instead of a block chain — to provide a short path linking back to the root or genesis block, so that people don’t have to have the whole block chain downloaded to make a good solid proof of the soundness of a payment. That would save you some fairly significant online bandwidth. But then you have a technical problem, given a transaction output that someone wants to use to make a payment, to chart your short path back to the root through blocks that are guaranteed to contain any evidence that that transaction output is no longer valid. In other words, let’s say Alice wants to make a payment to Bob. She’s able to show Bob the block containing the transaction where she got the coin (from Zebulon) and Bob can verify that by following its hash trail back to the genesis block. But Bob also has to know that Alice hasn’t already used the coin to pay Carol, in some transaction in another block. So the “shorter path” from the current block (the one where Alice’s payment to Bob will appear if it’s accepted) back to the genesis block has to be charted through blocks that will record any such transaction to Carol if it exists, which makes it hard to skip any. And at that point your advantage over a block chain has broken down because Bob is checking every block. I have just come up with a solution to this problem, by the way, but it doesn’t matter much because it’s still only online bandwidth, and doesn’t address the transactions-per-block limitation.
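For a feel of the “short path” idea (my own sketch of the general technique, not Ray’s design; in a real block mesh each back-link would be a hash commitment, and this only shows path length, not the non-spend-evidence problem he describes):

```python
def back_links(height: int) -> list:
    """Heights this block links back to: height-1, height-2, height-4, ..."""
    links, step = [], 1
    while step <= height:
        links.append(height - step)
        step *= 2
    return links

def path_to_genesis(height: int) -> list:
    """Greedily follow the back-link that reaches furthest toward block 0."""
    hops = []
    while height > 0:
        height = min(back_links(height))
        hops.append(height)
    return hops

print(back_links(1000))       # [999, 998, 996, 992, ..., 488]
print(path_to_genesis(1000))  # [488, 232, 104, 40, 8, 0]: six hops, not 1000
```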
And you can use “Super nodes”, but if you have “Super nodes” that store the whole block chain while everybody else is a light client, you’ve gone halfway toward the traditional banking network model, where servers keep all the database records and everybody else just calls the server to make a payment. It saves on bandwidth because all the light clients aren’t bothering to download the block chain or check much of anything, but that’s sort of like saying the guy who’s walking is saving on jet fuel. True but sort of misses the point.
The early protocols for digital cash created a digital record where a double spend would create an inconsistency, and an inconsistency — regardless of when the inconsistency was discovered — would reveal a secret. In effect, if Alice got a $10 coin once, and then spent it twice, then eventually, when Bob (the issuer of the $10 coin) was asked to redeem both coins, he’d find in the first a spend whose ‘secret’ he couldn’t see because it had been obliterated by the process of having been received by Alice, and then additional information about the same secret in the second, canceling the obliteration to reveal that “ALICE SPENT THIS COIN.” The idea being that he could then send Alice a bill for the coin she’d borrowed, or prosecute her for fraud or whatever. The point was that the early protocols saved on bandwidth by catching double spends after the fact. They let people double spend, secure in the knowledge that they’d be found out. This doesn’t even need a block chain, but it is sort of structurally impossible in the permissionless world of block chain cryptocurrencies. If you’re going to have someone enforcing the use of prosecutable legal identities who can be held liable for anything, the only way to do that is through a trusted node where those legal identities are verified. No trusted nodes means no legal identities. This is perfectly viable, however, for permissioned systems where you have the trusted nodes anyway.
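A minimal sketch of that older idea (my own illustration of the classic offline-e-cash construction in general, not any specific protocol): the spender’s identity is split into XOR pairs baked into the coin, each spend reveals one half of every pair chosen by the merchant’s random challenge, so one spend reveals nothing but two spends almost certainly expose a complete pair:

```python
import secrets

IDENTITY = b"ALICE"
PAIRS = 8

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Coin setup: each pair is (r, r XOR identity). In a real protocol only
# commitments to these shares would be visible until spend time.
pairs = []
for _ in range(PAIRS):
    r = secrets.token_bytes(len(IDENTITY))
    pairs.append((r, xor(r, IDENTITY)))

def spend(challenge_bits):
    """Each spend reveals one share per pair, selected by the challenge."""
    return [pairs[i][bit] for i, bit in enumerate(challenge_bits)]

spend1 = spend([secrets.randbelow(2) for _ in range(PAIRS)])
spend2 = spend([secrets.randbelow(2) for _ in range(PAIRS)])

# The issuer compares the two spends; with overwhelming probability some
# challenge bit differed, and that pair XORs back to the cheater's identity.
for share1, share2 in zip(spend1, spend2):
    if share1 != share2:
        print("Double spender:", xor(share1, share2).decode())
        break
```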
Q6. 90% of the coins may not survive a market correction, but that impact is unlikely to dent the overall cryptocurrency market cap. We have privately discussed that quite a few of the currencies out there are technically a mess. What should developers focus on, code- and business-model-wise, while building second- and third-generation cryptocurrencies?
Ray Dillinger: Developers should focus on solving real problems. Anybody can copy block chain code from github or wherever, overwrite a few parameters and identification strings, and come up with a block chain that solves the same problem that the original solved. But you won’t get anywhere with a solution to a problem that’s already been solved. Right now I’m putting most altcoins in this category. There is already a solution to the cryptocurrency problem; they contribute little.
If you’re making a new block chain based application, make something new. Make something that has a reason to exist, something that couldn’t be handled by keeping records at a central server, something that answers a need that’s been causing people real pain, and then your business will have something to contribute.
The biggest technical mistake is that people are taking the expertise and standards they have for normal programming and trying to apply it to security and cryptographic code. It isn’t the same. Other programs are measured by what they do; but for security programs it is just as vitally important to be sure what they don’t do. In every other field of computer science, a minor bug is minor, because you have users who want your program to work. When they find a bug they’ll report it. In security, you have attackers who want your program to break. When they find a bug they’ll steal your users’ customer databases and sell them on clandestine markets. In every other field of computer science, your users have a set of routine tasks they want to accomplish and one more feature is one step saved along the way for some fraction of them. In security, your attackers are looking at every combination of movements in those sets of routine tasks, and trying to find sequences and combinations that can lead to a break, and one more feature is exponentially more attacks.
And cryptography will work just fine even if you’re using it wrong. People who encrypt and then decrypt something in ECB mode will find that the decryption is correct, and their encryption routine does all the test vectors correctly. People who implement RSA without padding or hashing will use it in protocols and wonder why their opponent can sign everything with their own private key. People who implement CBC mode with no initialization vector may as well not have bothered. People can still be misled by an old NIST standard into using a pseudo-random number generator that’s now known to be broken, or be tricked into using stream ciphers. And the list goes on. Downloading the libraries and calling the routines does not mean you have securely implemented anything cryptographic.
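To make just the first of those pitfalls concrete, here is a hedged demo (my example, not from the interview; it assumes the third-party ‘cryptography’ package is installed): ECB mode round-trips perfectly and passes test vectors, yet identical plaintext blocks produce identical ciphertext blocks, leaking structure that CBC with a fresh IV hides:

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(32)
plaintext = b"ATTACK AT DAWN!!" * 2          # two identical 16-byte blocks

def encrypt(mode) -> bytes:
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(plaintext) + enc.finalize()

ecb = encrypt(modes.ECB())
cbc = encrypt(modes.CBC(os.urandom(16)))     # random IV masks the repetition

print(ecb[:16] == ecb[16:32])   # True:  repeated blocks visible under ECB
print(cbc[:16] == cbc[16:32])   # False: CBC with a fresh IV hides them
```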
And finally, people who have focused on cryptography often neglect key management. Key management is likely the hardest job in cryptography and the single most important thing to do well. You don’t just store keys in plain text in a file named “secretkeys.txt” where someone can steal them.
Honestly, it is as important for investors to focus on this as it is for the coders. Incompetent code is, sadly, a fact of life. There isn’t much I can do to prevent investors, or anyone else really, from running into incompetent code. But investors in particular, simply by recognizing it and keeping their money in their pockets, can both prevent losses for themselves and prevent it from becoming a hazard to anyone else. I know, that’s completely unfair, because the guys with the training to recognize bad code — especially bad crypto and security code, which is in a class of subtlety that even a lot of coders won’t recognize — are not the same ones with the training to recognize good investments.
But bad code, and more specifically, the people and organizations that produce it and can’t tell it from good code, are always bad investments. Investors who are considering investments in teams that produce crypto or security code should be consulting with people who know about crypto and security code to evaluate it.
However, I can give investors some solid business advice. Do your due diligence on highest alert for everything, even the things you wouldn’t ordinarily believe. Don’t invest without seeing running code. If the code is cryptographic or security code, get a security or cryptographic programmer to look it over for quality. And don’t invest without getting any equity. Now that the markets have turned down the heat a bit and the SEC is on the scene, I expect some of the scammy stuff to get less severe. But it won’t go away, and it won’t happen overnight.
Q7. Cybersecurity is becoming the biggest threat of the coming decades; it’s a new mechanism to wage war against countries, systems, institutions, and people. We have seen several hacks, from the DAO hack to the recent $64M heist at NiceHash. How can those loopholes in smart contract security code be addressed?
Ray Dillinger: When we were going over Bitcoin, Hal reviewed the scripting language part of it and removed several instructions. That’s why Bitcoin scripts are “linear programs” where the only control structures are structures that skip ahead (like over the unused branch of an if), never structures that skip back (like a loop). This was done very specifically to keep the semantic model extremely simple, to keep it so simple that it could be proven that certain things, certain kinds of attacks, were not possible. This has been good for the security of Bitcoin. In several other block chains, more general scripting languages which allow transaction types that could not occur in Bitcoin have also allowed several kinds of attacks and coin thefts that could not have occurred in Bitcoin.
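A minimal sketch of why such “linear” scripts are easy to reason about (my own toy evaluator, not the actual Bitcoin interpreter; the opcode handling is heavily simplified): the instruction pointer only ever moves forward, so every script terminates within len(script) steps and no loop is expressible:

```python
def run(script, stack):
    ip, skip_depth = 0, 0
    while ip < len(script):                  # ip strictly increases ...
        op = script[ip]
        ip += 1                              # ... on every single step
        if op == "OP_ENDIF":
            skip_depth = max(0, skip_depth - 1)
            continue
        if skip_depth:                       # inside an untaken branch: skip
            if op == "OP_IF":
                skip_depth += 1
            continue
        if op == "OP_IF":
            if not stack.pop():
                skip_depth = 1               # skip forward over the branch
        elif op == "OP_ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "OP_EQUAL":
            stack.append(stack.pop() == stack.pop())
        else:
            stack.append(op)                 # anything else is a pushed literal
    return stack

# (1 + 2) == 3, plus a conditional branch that gets skipped forward over:
print(run([1, 2, "OP_ADD", 3, "OP_EQUAL", 0, "OP_IF", 99, "OP_ENDIF"], []))
```

Because nothing can move the instruction pointer backwards, properties like guaranteed termination can be proven once for the whole language, which is the kind of guarantee the more general scripting languages gave up.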
The cryptocurrency protocols are secure, in that the block chains do exactly what they are intended to do and nobody has yet found a way to trick them into doing anything else. But there is still grave risk to investors; a secure protocol does not stop an Ethereum contract from being code that does something other than what the person presenting it claims it does. You have to actually check the code. A secure protocol does not stop an exchange or an online wallet from employing a dishonest person, nor from being run by a dishonest person. A secure protocol does not make a machine secure, and if someone has spyware on your box, or the exchange’s, to nab the keys, they will spend the tokens, and so on. Cryptocurrencies make “ordinary” levels of computer security seem very flimsy because they introduce high-value targets that thieves will spend a lot more effort trying to break into. But what they break in order to steal has so far always been some other part of the system, not the crypto code itself.
One of the biggest problems is that human beings do not generally read code. When someone presents a smart contract and says that it does XYZ, people tend to believe that the smart contract does XYZ, even if its code states plainly that it actually does ABC.
To a first approximation, no consumer owns a truly secure machine. We have purchased our hardware from manufacturers all over the world who got parts from all over the world, and every point of manufacture or assembly has taken place in the jurisdiction of a nation that could be forcing the manufacturer to build security vulnerabilities into the hardware for its own use. Even if that has not happened, we have purchased our hardware from manufacturers who have been optimizing for speed at the expense of security, as the recent Meltdown and Spectre attacks demonstrate, or for low cost at the expense of security. Once that’s done, most of us have installed operating systems whose source code we’re not even allowed to see and check for back doors, and the rest of us have installed operating systems whose sources we mostly haven’t looked at. And then we hooked them all together, using protocols which were designed in what can charitably be termed a far more trusting era, and hoped for the best.
Some of us can achieve very very good approximations of security, but the process is arcane, inconvenient, and limiting. We do it for special purposes, and we don’t do it for very long at a time. We hope that in the future things will be better. But we hope in the absence of evidence.
Q8. Decentralization and scalability are becoming major issues in the blockchain ecosystem, and the two are in a tug-of-war with each other. There are so many power dynamics and so much game theory at play among the different crypto groups. How can we reach consensus in this decentralized model for the greater good?
Ray Dillinger: I don’t believe that we can, unless people agree that the greater good is in fact what they want. After witnessing what they did in the block size fight, I have seen no evidence of that. A lot of self-interested people who are not being honest about their interests are lined up on both sides here, with the full intent never to budge a single inch, and more in the middle with the full intent never to allow any agreement to come to pass. They will make sockpuppet accounts and hire trolls to drive opposing voices away and do all the rest of that crap, all over again, for years if need be. Now that there’s more money involved and it’s more public, they’ll probably pay for publicity campaigns to smear and slander each other. If not ignored, they’ll actively prevent any agreement from being reached until sufficient people get fed up and ignore the fight to just plain do one thing or another, precipitating an unplanned hard fork. I hope the community proves me wrong, but as far as recent events demonstrate, “That’s the Bitcoin Way.”
Thank you, Ray Dillinger, for spending your valuable time answering these questions.
2010–07–14 21:10:52 UTC — Original Post
“The design outlines a lightweight client that does not need the full block chain. In the design PDF it’s called Simplified Payment Verification. The lightweight client can send and receive transactions, it just can’t generate blocks. It does not need to trust a node to verify payments, it can still verify them itself.
The lightweight client is not implemented yet, but the plan is to implement it when it’s needed. For now, everyone just runs a full network node.
I anticipate there will never be more than 100K nodes, probably less. It will reach an equilibrium where it’s not worth it for more nodes to join in. The rest will be lightweight clients, which could be millions.
At equilibrium size, many nodes will be server farms with one or two network nodes that feed the rest of the farm over a LAN.”
****
I’ll end the interview with Nick Szabo’s quote: “Collectibles augmented our large brains and language as solutions to the Prisoner’s Dilemma that keeps almost all animals from cooperating via delayed reciprocation with nonkin.” You can read the details here.
End of Episode 2
Disclaimer:
The questions asked express Gayatri’s opinion and not that of any company, institution, or group.
This interview expresses Ray’s opinion and not that of any company, institution, or group.
This interview does not endorse any company, institution, or group.