In the previous story, “hackable” DAOs were discussed; now it’s time for “unhackable” ones.
Murphy’s law says:
“Anything that can go wrong will go wrong”.
A perfect demo: an unlucky boy meets a lucky girl:
https://www.youtube.com/watch?v=OuJ4BBQ0nhc
[Un]fortunately, we can’t control everything, but we can do what’s possible. The first thing, very simple and easy to do - play on your own field, by your own rules.
Web3 hacks are reported almost daily. Bugs are everywhere - in code, protocols, smart contract logic, wallets, tools, etc. You can’t handle it all, but you can avoid it: don’t use buggy tech, or use it only if there is no other choice. This is one of the fundamental approaches to cybersecurity - reducing the attack surface.
Follow a “nothing to hack” approach. What is hackable in a DAO at the tech level? The treasury or shared account - so don’t use one. Each proposal should be funded separately (money out), and profit can be distributed immediately (money in). No treasury - no hack.
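A minimal sketch of “money in, divided immediately” (the member addresses and shares below are hypothetical): each incoming payout is split pro-rata among member wallets on arrival, so nothing accumulates in a shared treasury.

```typescript
// Split each incoming payout directly among member addresses - no shared treasury.
interface Member {
  address: string; // member's own wallet; keys never shared
  share: number;   // fraction of profit; shares must sum to 1
}

function splitPayout(amountWei: bigint, members: Member[]): Map<string, bigint> {
  const payouts = new Map<string, bigint>();
  let distributed = 0n;
  for (const m of members) {
    // integer math in wei; share converted to parts-per-million to avoid floats
    const part = (amountWei * BigInt(Math.round(m.share * 1_000_000))) / 1_000_000n;
    payouts.set(m.address, part);
    distributed += part;
  }
  // rounding dust goes to the last member so the full amount is paid out
  const last = members[members.length - 1];
  payouts.set(last.address, payouts.get(last.address)! + (amountWei - distributed));
  return payouts;
}

// Example: 1 ETH (in wei) of profit, three hypothetical members, nothing retained.
const members: Member[] = [
  { address: "0xAlice", share: 0.5 },
  { address: "0xBob", share: 0.3 },
  { address: "0xCarol", share: 0.2 },
];
console.log(splitPayout(10n ** 18n, members));
```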
A DAO is just a protocol, and any protocol can be implemented on a p2p basis without intermediaries (thanks to cryptography). To integrate an L2 DAO with existing blockchain networks, a simple multisig wallet can be used. Moreover, onchain DAOs are very expensive at scale: imagine onchain voting by 1K members, cost = tx_fee * 1K; what if 1M?
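A back-of-the-envelope illustration of that scaling (the fee and price numbers below are assumptions, not measurements):

```typescript
// cost = tx_fee * members; the constants are illustrative assumptions only
const txFeeEth = 0.001; // assumed average fee per vote transaction, in ETH
const ethUsd = 2500;    // assumed ETH price, in USD

for (const voters of [1_000, 1_000_000]) {
  const costEth = txFeeEth * voters;
  console.log(`${voters} voters: ~${costEth} ETH (~$${costEth * ethUsd}) per voting round`);
}
// 1,000 voters:     ~1 ETH    (~$2,500)
// 1,000,000 voters: ~1000 ETH (~$2,500,000)
```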
Being online (using hot wallets) is insecure; manage your private keys and sign your transactions on offline devices (worse UX, but better security).
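A minimal sketch of that offline-signing flow, assuming ethers v6; the RPC URL, recipient, nonce and fee values are placeholders, and the environment variable name is hypothetical:

```typescript
import { Wallet, JsonRpcProvider, parseEther } from "ethers";

// --- on the OFFLINE device: the private key never touches the network ---
const wallet = new Wallet(process.env.OFFLINE_PRIVATE_KEY!); // key stays on this device
const signedTx = await wallet.signTransaction({
  to: "0x000000000000000000000000000000000000dEaD", // hypothetical recipient
  value: parseEther("0.1"),
  nonce: 7,                             // looked up beforehand on the online machine
  chainId: 1,
  type: 2,
  gasLimit: 21_000,
  maxFeePerGas: 30_000_000_000n,        // 30 gwei, assumed
  maxPriorityFeePerGas: 1_000_000_000n, // 1 gwei, assumed
});

// --- on the ONLINE device: only the signed bytes are carried over (QR/USB) ---
const provider = new JsonRpcProvider("https://rpc.example.org"); // placeholder RPC URL
await provider.broadcastTransaction(signedTx);
```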
Governance is the hardest part; unhackable governance is a challenge. The known approaches are hackable: governance tokens (51% attack), quadratic voting (Sybil attack), etc.
Why not delegate some resources to DAO “leaders”? If leaders do what is best for the DAO and for themselves, everyone wins (Nash equilibrium); if a leader takes a provably bad action, anyone can start a dispute (via a web3 dispute resolution protocol), and the leader can lose their deposit and reputation.
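An illustrative sketch of that incentive scheme (not a real dispute resolution protocol; the types, penalty values and field names are assumptions): a leader stakes a deposit, anyone can open a dispute over a provably bad action, and a dispute resolved against the leader slashes both deposit and reputation.

```typescript
type DisputeOutcome = "leader_cleared" | "leader_guilty";

interface Leader {
  address: string;
  depositWei: bigint;  // stake at risk
  reputation: number;  // accumulated score, e.g. from past proposals
}

interface Dispute {
  leader: Leader;
  challenger: string;  // anyone can open a dispute
  claim: string;       // reference to the evidence of the provably bad action
  outcome?: DisputeOutcome;
}

function openDispute(leader: Leader, challenger: string, claim: string): Dispute {
  return { leader, challenger, claim };
}

function resolveDispute(d: Dispute, outcome: DisputeOutcome): Dispute {
  if (outcome === "leader_guilty") {
    // slashing: in a real protocol the deposit could compensate the DAO
    // or reward the challenger; here it is simply zeroed out
    d.leader.depositWei = 0n;
    d.leader.reputation -= 10; // arbitrary illustrative penalty
  }
  return { ...d, outcome };
}
```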
DAO leaders/governors can be AI models.
There is no silver bullet, only active defense: attack surface reduction, diverse/adaptable governance methods, dispute resolution, and experiments with AI and governance automation to reduce the human factor and social engineering.