Decentralisation is the new fad. We need to remind ourselves why centralisation was good in the past: it was required for efficient communication and the organisation of information.
Information has a tendency to spread like bacteria. But there are ‘good’ bacteria, and there are ‘bad’ bacteria.
Centralised ‘control’ is ultimately about avoiding and restraining chaos. Centralised societies and systems, therefore, have vastly reduced levels of confusion, chaos and waste.
A centralised system requires ‘good’ information in order to make its decisions. Because of this, centralised systems have extremely good ‘detection’ systems for ‘bad’ data. The fidelity of the data contained within a centralised system is critical to the system’s viability and existence.
As a result, all centralised systems are also reliable data systems.
To call centralised systems ‘reliable’ data systems is NOT the same thing as to claim that they have ‘more’ data than decentralised ones. In fact, centralised systems are extremely poor at information generation. They tend to over-rely on past data (originally generated from decentralised sources) for information.
It is this over-reliance on past data that makes centralised systems extremely vulnerable to black swans; e.g. a change in central ‘rule’, a new ‘bacteria’ not previously factored in, a new product that changes how business is done, a deadly attack from another centralised system, etc.
The information dynamic of centralised systems then, is that of ‘waiting’ for centralised ‘command and control’.
The information dynamic of decentralised systems has no ‘waiting period’: it is instant, continuous, ubiquitous and multi-dimensional.
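The contrast between these two information dynamics can be made concrete with a toy latency model. This is only a sketch under stated assumptions: the function names, the one-message-per-round hub, and the one-peer-per-round gossip rule are all illustrative choices, not something the essay specifies.

```python
def centralised_rounds(n_nodes: int) -> int:
    """Hub-and-spoke: every update must pass through the central hub,
    which (in this toy model) validates and rebroadcasts one message
    per round, so the total 'waiting period' grows linearly with the
    number of nodes."""
    return n_nodes

def decentralised_rounds(n_nodes: int) -> int:
    """Gossip/flooding: every informed node forwards to one new peer
    each round, so the informed set doubles and the delay grows only
    logarithmically with the number of nodes."""
    informed, rounds = 1, 0
    while informed < n_nodes:
        informed *= 2  # each informed node informs one more peer
        rounds += 1
    return rounds

print(centralised_rounds(1024))    # 1024 rounds of 'waiting'
print(decentralised_rounds(1024))  # 10 rounds: near-instant at scale
```

Even in this crude model, the centralised ‘waiting’ dynamic scales linearly while the decentralised dynamic scales logarithmically, which is one way of reading the essay’s ‘no waiting period’ claim.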
It is incumbent upon every system engineer to know the risks behind both centralised ‘rule’ and decentralised systems.
In the past, centralised rule systems seem to have fared much better than decentralised ones, primarily because ‘information’ was hard to come by. Viewed another way, decentralised systems offered no real benefit while carrying enormous risks of chaos and ‘bad’ information overload!
In the present, there is a reigning predilection for decentralisation. This is partly for good reasons: centralised systems carry the heavy risks of stagnancy, of being disrupted, of lacking adaptability, and of over-relying on history as opposed to the present and the future.
There is a way to judge ANY system, centralised or decentralised, on whether it is GOOD or not.
This new way involves looking at the design process from the perspective of whether the system has 1) an information-generation system, and 2) a ‘reliability’-detection system.
A system that lacks 1) is fragile and lacks the emergent properties of growth, adaptation and intelligence.
A system that lacks 2) is ineffective at completing complex processes.
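The two criteria above can be expressed as a minimal checklist sketch. The attribute and function names here are hypothetical, chosen only to mirror the essay’s two tests; real systems would of course need far richer measures than two booleans.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    # Illustrative attributes mapping to the essay's two criteria.
    generates_information: bool  # criterion 1: information generation
    detects_bad_data: bool       # criterion 2: 'reliability' detection

def judge(profile: SystemProfile) -> str:
    """Apply the essay's two-part test to a system profile."""
    if profile.generates_information and profile.detects_bad_data:
        return "GOOD: generative and reliable"
    if not profile.generates_information:
        return "fragile: lacks emergent growth, adaptation, intelligence"
    return "ineffective: cannot complete complex processes"

# A stylised 'centralised bureaucracy': reliable but not generative.
print(judge(SystemProfile(generates_information=False, detects_bad_data=True)))
```

Note that the test is deliberately orthogonal to centralisation: either kind of system can pass or fail it, which is exactly the essay’s point.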
Since systems are not about ‘control’ or the lack thereof, there is absolutely NO NEED to think of ‘decentralisation’ as a paradigm for solving the problems we have in the present.
But we can ADAPT the concepts of decentralisation in terms of how they apply to information dynamics (see the section above).
Having done that, we could then ask if our new system has built-in checks and balances for the execution of complex processes as and when required. Most ‘decentralisation’ advocates, like Nassim Taleb and Bitcoin/blockchain enthusiasts, seem to miss this (extremely critical) point!
There is an increasingly exhausting excitement about decentralisation that continues to fail to account for the needs of cohesion and architectural fidelity.
Information is NOT about quantities. It is, sadly, not even about data ‘newness’! There is, obviously, a tendency for ‘data (generation) intensive’ systems to have a predilection for ‘new’ data over ‘old’ data. (Perhaps the main cause behind ‘fake news’/click-bait?)
Information is really about ‘actionability’. Centralised systems know this very well. Unfortunately, centralised systems make the mistake of then (inadvertently) ‘limiting’ action by consciously limiting the amount of information.
Software engineers are usually taught that some form of ‘information architecture/design’ is required to design a complex piece of software.
It is my contention in this essay that ANY existing system (physical, social/political or virtual) is in fact an information architecture.
This implies that the ‘design process’ that is followed is MORE DETERMINANT of the final system ‘architecture’ than the ‘system architecture’ that an engineer drew in the beginning stages of the project.
Another implication of this hypothesis is that it is NOT ONLY software engineers who actually DO ‘information architecture’. Every engineer is essentially responsible for achieving the same goal: making a system that is useful and somehow ‘generative’ of new insights.
The present obsession with BOTH AI and blockchain seems to prioritise ‘data’ over usefulness; ‘decentralisation’ over centralisation; more ‘data’ over less (actionable) data.
But, we need BOTH!
The future lies in understanding this new ‘information’ paradigm.
We need truly useful systems; we also need information systems.
To get both, decentralisation is the WRONG paradigm.
To get both, we need ‘more’ INFORMATION systems (not necessarily ‘more data’). Thankfully, the Internet, blockchain, computing devices like cellphones, improved transportation systems (e.g. the hyperloop), social media and the proliferation of sensor devices, networks and technologies are of great help in this regard.
To get both, we would need to build several ‘check and balance’ systems that allow us to make our own decisions timeously and effectively.