

The Power of Assumption in Entrepreneurship


Robert Hacker (@B-wii)



Most of my books and major writings are prompted by a single comment or sentence that typically leads to two or three years of thinking about the related topic. This article is no exception; it was prompted by a writer’s observation that every startup is based on a key assumption about each of three things: the problem, the solution and scaling. These three assumptions can be illustrated with a proposed new medical device. The key assumption about the problem is whether the device correctly diagnoses the disease. The key assumption about the solution is whether doctors will use the device (and stop using their current approach). The key assumption about scaling might be getting FDA approval or Medicare reimbursement for a procedure using the device. It should be noted that once a key assumption is identified, it often leads to identifying other key assumptions about the problem, solution or scaling. Such clear identification of assumptions also frequently focuses the entrepreneur on the viability of the business concept. The usefulness of this three-part technique and its easy adoption by entrepreneurs led me to further study the concept of assumption.

Studying assumptions, I realized that they are one of those concepts that are critically important but have attracted only limited writing. Information theory, networks and incentives strike me as similar subjects: critically important in multiple disciplines but almost ignored in popular and academic writing. While some would argue that these topics are widely written about by academics, I would point out that the themes in the academic articles have no practical application. For example, in information theory, Kirzner used the positive asymmetry of information to explain entrepreneurship, and Spence used the negative asymmetry of information to explain poverty. However, I rarely see any discussion of asymmetry of information in the writing on poverty. The absence of writing on assumptions prompted this article, an effort to facilitate practical applications of assumptions in entrepreneurship and related fields such as innovation and engineering.

While many terms have domain-specific definitions or usage, especially in social, economic and political fields of study, “assumptions” is a term that remains unchanged as it crosses disciplines. Another concept that crosses domains unchanged is symbolic logic. Symbolic logic is a system of inference rules that dates back to Aristotle. What does the system of inference rules manipulate? The propositions, premises or assumptions in the argument. The entire nature of symbolic logic is domain agnostic, and therein lies the “proof” of the first characteristic of assumptions: assumptions are domain agnostic, their behavior and definition unchanged by the domain. A simple example might clarify this point. A “set” in math is a “collection of individual objects which is itself an object”, whereas in tennis a “set” is won by “the first player to win six or more games by two or more games than the opponent, where a majority of sets alone determines the winner”. Obviously “set” is a domain-specific term and “assumptions” is not.

The discussion of symbolic logic above reminds me of an important point: we cannot discuss assumptions without some reference to philosophy, math and physics. George Polya, the famous Stanford math professor known for his work on random walks, was once asked, “Why did you study math?” He responded, “I was too good for philosophy but not good enough for physics.” Any discussion of assumptions needs to include some philosophy, math and even physics, although the math and physics are very elementary and the philosophy may not even be identified as such. What is the significance of thinking about assumptions in terms of philosophy, math and physics? Philosophy, math and physics are the three ways we describe reality, and the role of assumptions, it turns out, is a useful concept for better understanding reality. This concept is fully developed in Section 2.

While philosophy, math and physics are widely disliked by many students, entrepreneurship and the related concept — social entrepreneurship — are increasingly popular with students at the five universities where I have taught entrepreneurship in various capacities over the last thirteen years. I have also designed, developed and executed two startup incubators and one accelerator in Miami, FL at Florida International University. Before that I built a billion-dollar publicly-traded company in Indonesia in seven years and served as the CFO of One Laptop per Child (OLPC). OLPC was a project that started at the Media Lab at MIT and gave me the chance to teach an IAP course in social entrepreneurship for seven years at MIT Sloan.

My practical experience combined with my academic pursuits has made me a serious student of entrepreneurship. One thing I have realized is that entrepreneurship is best thought of as a process, whether one uses Eric Ries’s Lean Startup methodology or another approach. At every step in the process I have learned to identify the key assumption(s), to manage the validation of those assumptions as milestones and deliverables, and to be extremely vigilant not to overlook a key assumption (as discussed in Section 4). Leading venture capitalist Marc Andreessen put it well:

“So you come in and pitch to someone like us. And you say you are raising a B round. And the best way to do that with us is to say I raised a seed round, I achieved these milestones. I eliminated these risks. I raised the A round. I achieved these milestones. I eliminated these risks. Now I am raising a B round. Here are my milestones, here are my risks, and by the time I go to raise a C round here is the state I will be in.”

Note that milestones can also be intangible, like assumptions, and that such milestones and assumptions are linked to risk reduction. We will come back to this important point about the linkage between assumptions and risk in Section 3. I first explored the linkage between risk and assumption in my first book, Billion Dollar Company. Companies go out of business typically because they run out of cash. They run out of cash because they misjudge a known risk or miss an unknown risk. Systematically studying key assumptions in a business concept reduces the likelihood of unknown risks and may give new perspective on known risks, which in part explains why I keep writing about assumptions.

This article is organized in two parts, the first dealing with a definition of assumptions and some of their characteristics and the second part presenting some practical applications of assumptions in entrepreneurship…and many other fields. The Sections are shown below.

Part 1

Section I- Ignorance, Determinism and Complexity

Section II- Assumptions and Determinism

Section III- Pattern Recognition and Risk

Part 2

Section IV- Reframing the Problem

Section V- Additive vs Multiplicative

Section VI- What Physics Teaches Us

Section VII- Financial Modeling


Section I: Ignorance, Determinism and Complexity

“Truth is not manifest; and it is not easy to come by. The search for truth demands at least:

a) imagination

b) trial and error

c) the gradual discovery of our prejudices by way of (a) and (b), and of critical discussion.” Karl Popper Conjectures and Refutations: The Growth of Scientific Knowledge

In the quote above, the seminal philosophy-of-science thinker Karl Popper calls our attention to the notion of prejudice. Gordon Allport defined prejudice as a “feeling, favorable or unfavorable, toward a person or thing, prior to, or not based on, actual experience”[1]. Nobel Laureate Richard Feynman offers a cavalier solution to the problem of prejudice: “it is much more interesting to live not knowing than to have answers that might be wrong.” One could now launch into the philosophical examination of knowledge, a veritable history of epistemology. I will refrain from this tempting endeavor and instead opt for a more practical approach.

George Shackle, a noted economist, describes the future as such: “The future is imagined by each man for himself and this process of the imagination is a vital part of the process of decision.”[2] Shackle begins this discussion by assuming a solipsistic view grounded, naturally, in empiricism. When the individual captures additional information, he updates the relevant hypothesis, which may or may not have been the purpose of the original information collection. Updating the hypothesis is based in induction. In such an approach, we take instances, or a single idea or fact, and generalize to a “macro” principle (the basis of prejudice). It should be noted that in Shackle’s approach there is no history and the future is merely a prediction. The future is derived by induction from the information available in the present (what in the vernacular is called history). This selection of information is obviously a personal initiative, which involves hypothesis validation and pattern recognition (the topic of Section 3). Both hypothesis validation and pattern recognition use inductive processes of cognition. The capturing of information and its selection for inductive processing are both stochastic. Capturing information is a stochastic process of acquiring sensory data, highly dependent on what and how one perceives the data. Selection of data happens in the present and is influenced by many factors including the environment, experience, fatigue, etc. This wide range of factors is why selection is stochastic.

The question the reader might be asking is: how is the author using the term stochastic? Stochastic means that a process output (by the observer) is derived as a discrete random variable and is not by definition a priori or deterministic. We will define a priori in detail in Section 2 and deal now with the ever-present concept of determinism. Determinism is one of the most widely used concepts that is completely wrong! How can such a popular concept be wrong? Daniel Kahneman, another Nobel Laureate, provides the answer. The brain is by far the largest consumer of energy in the body. Cognitive processes which require the least energy are the brain’s default, and determinism is a default process of the brain. Remember, 40,000 years ago we were concerned about being eaten by wild animals, not about predicting stock markets or “black swan” events[3]. Determinism, as a default, worked well in such survival situations, where the metaphysical properties of the lion and its behavior were not going to be questioned and the speed with which one drew a conclusion was critical to a successful outcome.

Before Kahneman we had great thinkers who did not use empirical methods to provide insight into complex ideas. As this excerpt explains, determinism is grounded in the cause-effect fallacy, which is a fallacy for a reason.

“According to Hagreaves-Heap and Hollis (1987) determinism is the philosophical notion according to which every event has a cause and, hence, there is a single course for history. …this definition was rejected by Hume using the argument that there is no such a preposition of necessary implication that links the event and its cause. Kant considered that the notion of causality was part of the mental structure with which the human being perceives the phenomena. In this way they appear to us as coherent. In both cases (Kantian and Humean) it is acknowledged that is impossible to prove logically the existence of causality. Hence, causality is more the result of a physiological characteristic at the individual level, and of consensus at the collective level. This implies that unless that logical preposition of causality can be found, its character in economics is one of consensus, . . “

[I wonder if Kahneman read Kant, perhaps as inspiration.]

Having said why determinism is flawed, now let me define it. Paul Davidson defines determinism as “statistical inference regarding future events based in present and past information”, which requires that “nature behaves according to ergodic stochastic processes”[4]. An ergodic stochastic process is a random process where the characteristics of the distribution do not change regardless of the sample or sample size. In other words, the past is a good predictor of the future, except when it is not. Events such as the financial crisis of 2007–2008 come to mind. The Pan Am crash over Lockerbie, Scotland might be another, and the Fukushima residents who went to work in Tokyo that day and missed dying in the nuclear disaster might be a third example. Determinism gives us no insight into such events and many more. However, as we continue to understand, complexity science is a better model of reality than determinism, albeit still in its infancy compared to determinism (which dates back to the ancient Greeks). Stephen Hawking has stated that he believes “complexity will be the science of the 21st century”. (Physics was the science of the 20th century, if you wanted to know.) Complexity, or complexity science, has many definitions. Here I will quote Weaver and Bale on complexity:[5]
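Davidson’s ergodicity requirement can be made concrete with a small simulation. The sketch below is purely illustrative, using Python’s standard library: for a well-behaved (ergodic-like) process, the statistics of one sample are a good predictor of the next sample; for a fat-tailed process, they often are not.

```python
import random
import statistics

random.seed(42)

# A well-behaved process: bounded, stable distribution (uniform on [0, 1]).
# The mean of any large sample closely predicts the next sample's mean.
sample_a = [random.random() for _ in range(10_000)]
sample_b = [random.random() for _ in range(10_000)]
print(abs(statistics.mean(sample_a) - statistics.mean(sample_b)))  # small

# A fat-tailed process: Pareto with shape parameter near 1, so variance
# is infinite. Sample means swing wildly from sample to sample, and the
# past is a poor predictor of the future -- Davidson's point exactly.
tail_a = [random.paretovariate(1.1) for _ in range(10_000)]
tail_b = [random.paretovariate(1.1) for _ in range(10_000)]
print(abs(statistics.mean(tail_a) - statistics.mean(tail_b)))  # often large
```

The point is not the specific numbers but the contrast: only in the first case does statistical inference from past samples behave the way determinism requires.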

‘‘a sizeable number of factors which are interrelated into an organic whole and the behaviour of the system cannot be predicted by an understanding of the elements within it; statistical methods are no longer appropriate.”

One characteristic of complexity is that it uses computation rather than calculation, which might be described as the math of determinism. Computation is algorithmic, using a rules-based, hierarchical approach to statistically model or simulate in order to understand how something works. Perhaps the most exciting development in computation is that it works equally well to explain animal behavior and the social sciences. The discovery of the applicability of complexity models in evolutionary biology logically led to their application to explain animal and human behavior. So, to paraphrase at least 60 years of scholarship, social behaviors are complex because human evolution can be understood through complexity (although there is much work still to be done). Why is complexity relevant in a discussion of assumptions? By choosing complexity as the model of reality, we reject determinism. Such a decision makes the determination and selection of assumptions more challenging, given our brains do not default to complexity, but the assumptions are more insightful and explanatory. For more on complexity, human behavior and its application in entrepreneurship, my writing, The Foundation of Entrepreneurship: Large Market Opportunities that have Repeated for 40,000 Years, may be helpful.

Earlier in this Section I used the term “induction” but chose not to define it. In the next Section I will deal with induction and pattern recognition.

Section II: Assumptions and Determinism

“The practical difference between the two categories, risk and uncertainty, is that in the former the distribution of the outcome in a group of instances is known (either through calculation a priori or from statistics of past experience), while in the case of uncertainty this is not true, the reason being in general that it is impossible to form a group of instances, because the situation dealt with is in a high degree unique.” Frank Knight[6]

We now turn to the definition of “assumption”. Fortunately, we have the seminal work of Frank Knight on risk and uncertainty to guide us. However, it should be noted that his major writing on the issues surrounding assumptions was done in 1921, before Claude Shannon’s[7] groundbreaking work on information, the resultant advent of computing and computational modeling, and complexity science. Nevertheless, as I hope to show, it was insightful in its day and remains insightful (but largely unknown) today. Knight identified three types of assumptions, which he referred to as probabilities, but which I will show are the same as assumptions. The three classifications are:

· A priori

· Statistical

· Estimates

A priori assumptions are true without the need for an observer; arithmetic and symbolic logic might be examples. Statistical assumptions are empirical observations of the frequency of like objects (sometimes referred to as predicates). Drawing a conclusion on the height of men in the U.S. based on the normal distribution of heights would be an example of a statistical assumption. The expected value of a random variable would be another example. Estimates are where it gets interesting. Estimates occur when there is no logical way of analyzing the variable(s) and assigning probabilities. More specifically, we should think of estimates as having two parts: 1) an unknown value and 2) an unknown probability that the value will occur. This definition looks surprisingly similar to the ideas of the 18th-century statistician Thomas Bayes. Bayes, looking at a normal distribution, for example, would have said that we have a value determined by the range of the normal curve and the ability to determine the probability of the value from where it lies on the curve. Thinking backward, when you satisfy 1) and 2) above, you have a statistical assumption. I think Knight would say that it is not logically possible to have 1) or 2) without the other.
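Knight’s first two categories can be illustrated in a few lines of Python; the data below are purely illustrative, not real measurements. The third category, estimates, is precisely the case no such computation can reach, because neither a value nor a probability can be grounded in a class of like instances.

```python
import statistics

# A priori: the probability is known by symmetry, with no observation
# needed. A fair six-sided die gives each face probability 1/6.
p_a_priori = 1 / 6

# Statistical: the probability is inferred from the observed frequency
# of like instances -- here, heights (in inches) of a hypothetical
# sample of men, echoing the height example above.
heights = [68, 70, 71, 69, 72, 70, 69, 71, 70, 70]
mean_height = statistics.mean(heights)
p_taller_than_71 = sum(h > 71 for h in heights) / len(heights)
print(mean_height, p_taller_than_71)  # 70 and 0.1

# Estimate (Knightian uncertainty): neither the value nor its
# probability can be derived from a group of instances, so there is
# nothing to compute -- the gap itself is the point.
```

The code makes Knight’s boundary visible: as soon as you can write the computation, you have a statistical assumption (risk); when you cannot, you have an estimate (uncertainty).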

Knight eventually defined statistical probabilities as risk, the case where you have enough information to determine probabilities. Estimates were renamed uncertainty, wherein you lack sufficient information to determine any probability. Looking to finance for an explanation of Knight’s linking of assumption and risk, we see financial risk defined as the variance of an occurrence from the mean of the distribution of occurrences. In other words, where you establish an expected value, which requires a random variable, you have a risk. Geoffrey West’s new book, “Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies”, helps us make the connection between risk and assumptions clearer, as described below.

“Given the connection to the occurrence of rare events, it is not surprising that power law distributions and models based on fractal-like behavior have been gaining greater currency in the burgeoning field of risk management. A common metric used to address risk, whether in financial markets, industrial project failures, legal liabilities, credit loans, accidents, earthquakes, fire, terrorism, and so on, is the composite risk index, which is defined as the impact of the risk event multiplied by the probability of its occurrence. The impact is usually expressed in terms of the dollar cost of the estimated damage, and the probability by some version of a power law. As society becomes ever more complex and risk averse, developing a science of risk is becoming of increasing importance, so understanding fat tails and rare events is an area of increasing interest in both the academic and corporate communities.”

The Composite Risk Index is identical to how Knight described “estimates” and of course uses the concepts of Bayes. It is also rooted in complexity science and the rejection of determinism. (Geoffrey West is a leading scholar in complexity, interested in particular in how it explains cities.)
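West’s composite risk index is simple enough to write down directly. The sketch below uses hypothetical dollar figures and probabilities, chosen only to show why the index alone can hide a fat tail.

```python
def composite_risk_index(impact_usd: float, probability: float) -> float:
    """West's composite risk index: impact of the risk event
    multiplied by the probability of its occurrence."""
    return impact_usd * probability

# A frequent, cheap failure and a rare, catastrophic one can carry the
# same index -- which is exactly why fat tails and rare events deserve
# separate attention in a science of risk. (Figures are illustrative.)
frequent = composite_risk_index(impact_usd=10_000, probability=0.10)
rare = composite_risk_index(impact_usd=10_000_000, probability=0.0001)
print(frequent, rare)  # both roughly 1000.0
```

Two risks with identical indices are not the same risk: the second one is the Lockerbie-style event that determinism, and a single expected value, gives us no insight into.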

So far we have a strong connection between probabilities and risk. Now we just need to demonstrate how probabilities are assumptions and to do that we turn next to pattern recognition and induction.

Section III: Pattern Recognition and Risk

“Human nature yearns to see order and hierarchy in the world. It will invent it where it cannot find it.” “The brain highlights what it imagines as patterns; it disregards contradictory information.” “So limited is our knowledge that we resort, not to science, but to shamans.”[8] Benoit Mandelbrot

Benoit Mandelbrot was one of the early thinkers in a then-new field called chaos theory. Using a concept that he developed, fractals, he showed that many complex natural phenomena were in fact deterministic. The distinctive feature of a fractal is self-similarity: at every level of scale, the units look the same. Any piece of a cauliflower looks the same as any other size piece of cauliflower. In many ways Mandelbrot brought a new understanding to pattern recognition, which is the subject of this Section.

Shackle gives us the necessary insights to understand pattern recognition.

“The ultimate indispensable permissive condition of knowledge is the repetition of recognizable configurations. These patterns or stereotypes form a hierarchy in our minds. A pattern of sense-impressions, perhaps from more than one sense, is pinned down as an object or an event. The occurrence, over and over again of similar objects or events establishes a class of objects or events, a concept. Such concepts themselves can then form the building-blocks of more complex and inclusive configurations.”

Knight provided the foundation for much of Shackle’s thinking and summarizes the quote above well.

“…probability seems to depend upon the accuracy of classification of the instances grouped together”

What Knight has just shown is that pattern recognition is a probability distribution. What we need to do now is to understand the two key concepts within pattern recognition. To select a pattern, one must first select the area of interest — say lions. Setting aside a discussion of Carnap and Wittgenstein, we do not need a word “lion” for this selection to work. There are many patterns working in each of our minds that lack labels or words. The other selection required is to identify the characteristics that define a lion — color, size, smell, etc. Now let’s take our lion and explore a bit about how classification works, how characteristics are added and the risk in such an approach.

As a young gazelle, you learn from watching the herd that large, fast, hairy, yellow things are to be avoided. As you mature, you develop a better appreciation of the color of lions and realize that yellow is too narrow and light browns and greys are also applicable to lions. One day at the watering hole, a large animal appears but you are not worried. This animal is white and you know from thousands of encounters with lions over the last six years that lions are not white. The albino lion enjoyed a delicious lunch of our gazelle.

What the story hopefully illustrates is that the selection of the instances in the probability distribution is how the concept of risk is introduced. In fact, which instances are selected is how the concept of variance arises and the concept of variance we showed in the last Section is where the risk is created. When one considers all the defining characteristics, each with their own probability distributions, the variance on something as simple as spotting a lion before it eats you is quite “risky”. Good thing our ancestors were not distracted by their stock portfolios.

What one may also realize is that the selection of the instances, the establishment of the probability distribution, is what is commonly referred to as induction. When one is careful, thorough and methodical in selecting instances, one arrives at assumptions. When one uses insufficient instances or is not careful in the selection process, one can produce prejudices.
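The gazelle’s predicament can be sketched as a toy classifier: the “probability distribution” is just the set of instances observed so far, and the risk lives entirely in what the selection left out. All labels and values here are illustrative.

```python
# The gazelle's inductive "lion" concept: whatever matches the
# characteristics observed in the instances selected so far.
observed_lion_colors = {"yellow", "light brown", "grey"}

def looks_like_lion(color: str, size: str, speed: str) -> bool:
    """Classify by the characteristics induced from past encounters."""
    return (color in observed_lion_colors
            and size == "large"
            and speed == "fast")

# Every instance inside the learned distribution is handled correctly...
print(looks_like_lion("yellow", "large", "fast"))  # True
# ...but an instance outside the selected sample is missed entirely.
print(looks_like_lion("white", "large", "fast"))   # False: the albino lion
```

Careful, thorough selection of instances widens `observed_lion_colors` into a sound assumption; a narrow or careless selection produces exactly the prejudice, and the variance, that ate the gazelle.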

The observant reader might realize that pattern recognition is also the basis for determinism. To quote Knight:

“Although cause and effect can be experienced, they cannot be linked by a logical preposition. They are matched in our minds by our memory of the repetition of its sequence.”

In the place of determinism Knight leaves us with probability statements, which we have shown are the basis for assumptions, and the variance in the probabilities is how we equate assumptions with risk. The natural working of the brain, its deterministic bent, uses probability distributions in selection to disprove determinism, which leaves us with complexity to explain reality. Why is complexity a better model? Because it is based in what Knight calls “uncertainty”. (I will refrain from further writing on complexity theory. For more information on complexity, visit the Santa Fe Institute website.) A good model explains all instances of the outcome. Determinism does not do this, so we need a better model. Perhaps the better model is complexity.

By rejecting determinism, Knight used one of the most powerful applications of our understanding of assumption — reframing the problem — which is the subject of the next Section.

Section IV: Reframing the Problem

“This course describes Bayesian statistics, in which one’s inferences about parameters or hypotheses are updated as evidence accumulates. You will learn to use Bayes’ rule to transform prior probabilities into posterior probabilities, and be introduced to the underlying theory and perspective of the Bayesian paradigm. The course will apply Bayesian methods to several practical problems, to show end-to-end Bayesian analyses that move from framing the question to building models to eliciting prior probabilities to implementing in R (free statistical software) the final posterior distribution.”[9]
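The update the course description refers to can be shown with a minimal numeric sketch. The figures below are illustrative and echo the medical-device example from the introduction; the course itself uses R, but the arithmetic of Bayes’ rule is the same in Python.

```python
# Bayes' rule: posterior = likelihood * prior / evidence.
# Illustrative numbers for a diagnostic device screening for a
# disease with 1% prevalence (all figures hypothetical).
prior = 0.01            # P(disease) before the test
sensitivity = 0.95      # P(positive | disease)
false_positive = 0.05   # P(positive | no disease)

# Total probability of a positive result, with or without the disease.
evidence = sensitivity * prior + false_positive * (1 - prior)

# The posterior: the prior transformed by the evidence of one test.
posterior = sensitivity * prior / evidence
print(round(posterior, 3))  # about 0.161
```

A single positive result moves the probability from 1% to roughly 16%, not to 95%; the key assumption being validated here is the prior, which is exactly the kind of assumption this article argues entrepreneurs must surface explicitly.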

With some understanding of assumptions established, I would like to turn in the next four Sections to some more practical applications of assumptions.

For about the last five years I have taught a course based on design thinking called “Entrepreneurship, Design and Thinking”. It was originally designed to really be an entrepreneurship course, but over time I realized the students had much more interest in learning different approaches to thinking. In the early days of teaching the module on design thinking, it was obvious that design thinking was very useful. This was clear when the students started using it for their course projects with no prompting from me. When I inquired about their interest in design thinking, they frequently highlighted step two in the five-step process, which is shown below.

1. Empathize

2. Reframe the problem

3. Ideate

4. Prototype

5. Test

“Reframing the problem” was the powerful step for the students and what they characterized as the new way of thinking. In design thinking “empathize” is an open-minded exploration of the user’s problem, where ideally the observer “puts themselves in the shoes” of the user. By being open minded one hopefully rejects any existing assumptions about the problem. An HBS case illustrates the point well.

A group of Indian investors established telemedicine diagnostic centers in rural India. After several weeks, not a single customer/patient had used the centers. A subsequent factfinding team discovered that the locals saw no reason to give up their “witch doctors”, who had served their families for hundreds of years.

The investors had obviously assumed that modern medicine was preferable to any current treatment provider. (I would note that this type of fatal assumption is common in my experience with social programs both domestically in the U.S. and internationally.)

With no assumptions tainting the fact finding in Step 1, in Step 2 technically one frames the problem (as Duke described it above). However, I prefer the phrasing “reframe the problem” to consciously remind the analyst that the problem itself should always be questioned. One effective technique to reframe the problem is to change a key assumption. A case from history, and increasingly a current topic, is “living forever”. From at least the time of the Spanish conquistadores and now revisited as a result of advances in modern science, man has explored everlasting life. This perhaps worthy theme, however, may not frame the problem correctly. Reframing the problem, we realize that not dying is a better objective because it is more manageable. More famous examples from history show the power of changing the assumption. The following examples, all from physics (I warned you), illustrate the point.

1. Not until the time of Galileo, after more than 2,000 years of science, did anyone question the assumption that heavier objects fall faster from a height than lighter objects. Galileo, of course, showed that all objects dropped from the Tower of Pisa reached the earth at the same time. (Galileo’s work on the topic was one of the first applications of the new scientific method. Einstein gave his fellow physicist Galileo the title “Father of Modern Science”, not bad for questioning one assumption. I might have given the title to Da Vinci for his application of the scientific method in documenting the workings of the human body through cadavers. Perhaps Einstein did not know about Da Vinci’s work in medicine.)

2. This second example is perhaps even more interesting because it relies on the use of probability. One of the fundamentals of Newtonian physics is the equation F=ma. This equation assumes the exact position and velocity of the object (or particle). Working in quantum mechanics, Schrödinger introduced the concept of a wave function, which led to his transformative insight that position and velocity are really statements of probability. Essentially, information is a statement about probabilities, which should perhaps not be surprising given that all pattern recognition relies on probabilities. This connection between probabilities and information was, of course, one of Claude Shannon’s insights that launched Information Theory and, ultimately, the Third Industrial Revolution (digital computing).

Combining the thinking of Schrödinger, Shannon and many other thinkers from the 20th century, one might conclude that reality is information, or the probabilities that “establish” the information. However, information moves over natural and manmade networks powered by energy, so maybe we should think of reality as information and energy. This metaphysical position, of course, challenges one of the most fundamental assumptions about reality: materialism. To quote Wikipedia, “materialism is the view that matter is the fundamental substance in nature, and that all things, including mental aspects and consciousness, are results of material interactions. Matter is any substance that has mass and takes up space by having volume.” In summary, the theory of quantum mechanics (and the contributions of Schrödinger, Shannon and many others to the field) challenged the assumption that there is a physical world and reframed the whole question of reality. Now that is a significant example of reframing the problem!

While a few examples from physics do not prove that all great insights originate from challenging an assumption or reframing the problem, I do in fact think this is the case. Poincaré, the legendary French mathematician who did foundational work in non-Euclidean geometry, said that invention was the powerful step in innovation because that was when the inventor selected from his creative alternatives (what design thinkers call ideation). For Poincaré, I think the selection was a framing or reframing of the problem wherein the assumptions that shaped the alternative “designs” in the ideation exercise were rejected until a finalist was picked.

To reframe a problem, I have hopefully shown that one merely needs to question a key assumption. Such questioning might better be described as an insight, which is how Kirzner[10] described the way entrepreneurs pick their opportunities.

Note: The concept of reframing the problem is explored in more detail in “The Foundation of Entrepreneurship: Large Market Opportunities that have Repeated for 40,000 Years”, Section 6 — Change the Assumption.

Section V: Additive vs Multiplicative

“Additive systems and multiplicative systems react differently when components are added or taken away…Most businesses, for example, operate in a multiplicative system. But they too often think they’re operating in additive ones.” Farnam Street

If I ask you to design a new product or create a new startup business, hopefully the first question you ask is “what is the problem we are solving?” The second question should be “what is the proposed solution?” At that point there is a key question that is rarely asked (surprise: it is about an assumption). The assumption is whether the approach to the solution or company will be additive or multiplicative. At this point most readers are completely baffled, which proves my earlier point that the risk in a new business may be the assumption one does not identify. Let me first define additive and multiplicative, and then we will return to their application in entrepreneurship.

In Additive Systems each component adds to the next to build the outcome. 6+3+5=14. The components are 6, 3 and 5 and the outcome is 14. Delete a component, say the 3, and we still have a useful outcome: 6+0+5=11. In Multiplicative Systems the components are linked to produce the outcome. Taking our earlier example as a multiplicative system illustrates the point. 6x3x5=90. The multiplicative system clearly produces the better result. However, when we take out one component, we see the problem. 6x0x5=0. [Note: the weakness in multiplicative systems also highlights the risk in a hidden assumption, but that is not my point here.] So obviously any system or process — whether product, company or social solution — that one builds should not be designed as a multiplicative system. At this point you are probably asking why nobody ever taught you this very important lesson, preferably in 2nd grade right after you learned multiplication. However, if you are still trying to understand multiplicative systems, an example is coming.
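The arithmetic above can be captured in a few lines of Python, using the same numbers as the example in the text (a toy sketch, purely to make the contrast concrete):

```python
from math import prod

components = [6, 3, 5]

# Additive system: each component contributes independently to the outcome.
print(sum(components))   # 14

# Multiplicative system: the components are linked into a single chain.
print(prod(components))  # 90

# Zero out one component (the 3) and compare resilience.
degraded = [6, 0, 5]
print(sum(degraded))     # 11 -- still a useful outcome
print(prod(degraded))    # 0  -- total failure
```

The multiplicative outcome (90) beats the additive one (14) when everything works, but a single failed component takes the whole multiplicative system to zero.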

Credit: The Risk Validation Pyramid, by @robfitz

As shown in the diagram above, there are two ways to build a software application to reach an MVP (minimum viable product). We can build the app with long flat blocks, one block on top of another (commonly referred to as the “software stack”). Alternatively, we can build the app by placing the blocks vertically, one block next to the other (as shown on the left). In the first case (a Multiplicative System), if any layer malfunctions, the entire software app fails. Think of the blocks as components of the app. In the second case (an Additive System), each vertical block has the functionality to serve the customer, albeit with a limited range of functionality. Additional vertical blocks add self-contained functionality, which shows the resilience of such an approach. While we have all been taught by HBS to opt for the most efficient solution, which is usually the sharing of resources in Multiplicative Systems, the “risk-adjusted” economic returns may show that the Additive System approach is better. Perhaps even more important, the selection of either system should not be a hidden assumption but rather a conscious decision.
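One way to make the diagram's point quantitative is to compare failure behavior. In a multiplicative (stacked) design the app works only if every layer works, so overall reliability is the product of the layer reliabilities; in an additive design each block serves customers on its own, so some functionality survives unless every block fails at once. The per-block reliability figure below is hypothetical, chosen only for illustration:

```python
from math import prod

# Hypothetical reliability of each of four blocks (not a figure from the article).
blocks = [0.95, 0.95, 0.95, 0.95]

# Multiplicative (stacked) design: the app works only if EVERY layer works.
p_stack_works = prod(blocks)

# Additive design: some functionality survives unless EVERY block fails.
p_some_functionality = 1 - prod(1 - r for r in blocks)

print(p_stack_works)         # roughly 0.81 -- the stacked app is down ~19% of the time
print(p_some_functionality)  # very close to 1.0
```

Four fairly reliable layers, chained multiplicatively, yield a noticeably less reliable whole, which is the hidden cost of the “efficient” shared-resource design.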

Another key assumption overlooked comes from physics, which is discussed in the next Section.

Note: All systems in complexity science are described as emergent. Economies, marketplaces and all human-designed systems are complex adaptive systems. The individual agents and objects have individual properties that do not explain the behavior of the whole [the system]. These systems are both integrated and multiplicative. This multiplicative quality may in part explain the unpredictable nature of human systems. Suffice it to say, we are trying to avoid or reduce the likelihood of unpredictable events. Therefore, when available, we choose the additive approach.

Section VI: What Physics Teaches Us

“…That’s why we have math. Because our brains evolved to not get eaten in the plains of Africa. Really good to check if there’s a lion in the brush…” Neil deGrasse Tyson: Big Think

When you look to solve a problem, of course properly framed, do you work bottom up or top down? Another way to describe this issue is with the terms “microscopic” and “macroscopic”. Microscopic, or bottom up, essentially takes a reductionist approach, where individual components are put together to reach an end. In the macroscopic, or top down, approach one works backward from the solution, making assumptions that explain each earlier assumption or step until one finally arrives at the starting point. If you do not evaluate both approaches and instead default to bottom up, you will probably find difficult problems impossible to solve. My experience is that difficult problems are almost always approached top down, which is probably why physicists use this technique so often. (The inability of the physicist to “see” the problem (such as a black hole) is probably also a key reason.) An example may illustrate the point.

Suppose our objective is to sell butter to the food service provider at a large community hospital. Basically, we would find a supplier, determine an assortment of butters, price appropriately and then have a salesperson call on the food service operator until we got the sale. (We will assume the operator uses butter, that no bidding process is required and that we have networked with every possible person who could influence the purchasing decision.) Now suppose we change the objective ever so slightly to “increase the sales of minority-owned food businesses with the food service operator”. One alternative approach would be to identify minority-owned food suppliers, train them, if necessary, to deal with the food service purchasing system, and again train the salespeople from each company, if necessary, to professionally present to the food service operator. (Again, we will assume the operator uses butter, that no bidding process is required and that we have networked with every possible person who could influence the purchasing decision.) However, there is one key assumption that should be explored: why did we specify minority-owned suppliers? Why do we have a class or set of companies with the same challenge? We might get to this question by asking why minority-owned suppliers have a disproportionately small share of sales to the hospital. So we have identified a problem that a whole set of suppliers share. Do we work with each supplier individually as described above, or do we take a top down approach? If we opt for a top down approach, we would first investigate with the hospital to see what reasons might exist to explain the low percentage of minority-owned sales.
The reasons might include discrimination, lack of awareness of the “problem” on the part of the hospital (you only manage what you measure), inability of suppliers to work with the computerized ordering systems of the hospital, or perhaps the inability of suppliers to maintain proper inventories to keep the hospital properly supplied. Maybe the minority-owned businesses cannot match the prices of large institutional food suppliers like Sysco. All of these possible issues suggest that the set of suppliers might have a systemic problem. Systemic problems tend to be best approached top down. Of course, if you do not take the time to determine what kind of problem you have, you will tend to default to a bottom up approach. It is better to think of the approach, bottom up or top down, as the first assumption to test when dealing with a problem, particularly with tough problems such as social issues.

Section VII: Financial Modeling

“…when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind…” Lord Kelvin

The purpose of any model is to create a simplifying abstraction of reality, which is achieved by the selection of the component parts of the model. In most models, including financial models, the components represent the modeler’s view of reality and their key assumptions. In financial models the assumptions are quantitative and, properly investigated, should be expected values.
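In that spirit, a quantitative assumption in a financial model can be expressed as an expected value over scenarios rather than a single guess. The scenario probabilities and values below are hypothetical, purely to illustrate the computation:

```python
# Hypothetical scenarios for one model assumption: monthly sales per store.
# Each entry is a (probability, value) pair; the numbers are illustrative.
scenarios = [
    (0.25, 40_000),  # pessimistic case
    (0.50, 60_000),  # base case
    (0.25, 90_000),  # optimistic case
]

# The figure entered in the model should be the probability-weighted
# expected value, not a single optimistic guess.
expected_sales = sum(p * value for p, value in scenarios)
print(expected_sales)  # 62500.0
```

Writing the assumption this way also makes it testable: validating the hypothesis means tightening the probabilities, exactly as the Lean Startup language of validation suggests.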

The power of models is in the act of abstracting, the simplification that frames the ambiguity. Vincent Ostrom said “If our understanding is inadequate, our language is likely to be inappropriate”. (V. Ostrom 1997, 34–35). The same is true of models. If we cannot financially model the business, we do not understand it, which is Lord Kelvin’s point in the introductory quote.

In looking at financial models, many entrepreneurs in my experience totally miss the point. Their approach is “I need the model to raise capital for my startup,” after which the model goes in a drawer and is largely forgotten. Such an approach overlooks the real usefulness of a financial model of a new business. The model outlines the assumptions, which we have already shown represent the risks, which are what we should be managing. When the Lean Startup community says that one should validate hypotheses, what they are encouraging the entrepreneur to do is to eliminate a risk by confirming the statistical probability of a variable or assumption.

Another benefit of thinking of entrepreneurship as validating assumptions is that the assumptions that have the most effect on cash flow (and not profitability) identify the key requirements for management expertise and the timing of those requirements. A personal story illustrates the point. After raising $20 million in venture capital in Indonesia, we re-planned the new store opening schedule and discovered we would need about 6,000 new store employees each year. What this assumption drove home was the need for a store manager development program to supervise those employees (and the belief in the internal development of management). It might have taken much longer to identify the issue without a financial model.

To summarize about financial models:

1. The model identifies the important assumptions in the (new) business

2. The assumptions represent the risks in the business

3. We manage the risks to manage the business

4. The assumptions highlight the management expertise required to run the business

Note: There is a tendency for people to change their business models because they lack the expertise in Excel to accurately capture their own logic. In my experience Excel can model any logic required in a startup financial model. Find a better modeler rather than change the business model.


By now you hopefully realize that almost all information, except for a priori statements, is probabilistic. You reach this conclusion by rejecting determinism or materialism, or by accepting Quantum Mechanics as the best model of reality.

Knight’s statistical probabilities are understood both as assumptions and as risks. When we lack sufficient information to determine a probability distribution, we have uncertainty: we lack the information to determine both the value and the probability of the value occurring. Note: Uncertainty differs from ignorance in that under uncertainty we recognize that we lack information.

The power of assumptions is that they identify the risks in a business concept or business model. By identifying the risks, the key management areas are highlighted for validation and continuous management.

At the beginning of any analytical exercise there are two key decisions to be considered: are we taking an additive or a multiplicative approach, and are we using a bottom up or a top down methodology to reach a solution?

Unidentified assumptions are perhaps the greatest risk in entrepreneurship and most social pursuits.


This article draws heavily on the thinking and writings of Frank Knight, Vincent Ostrom, G.L.S. Shackle, Vincent Castillo and Geoffrey West. The views expressed herein are the author’s and do not reflect any views of the author’s employers or clients.



[1] The Nature of Prejudice. Gordon W. Allport — 1954

[2] The Nature of Technology: What it is and How it Evolves. W. Brian Arthur — 2009 — Free Press

[3] The Black Swan: The Impact of the Highly Improbable. Nassim Nicholas Taleb — 2010 — Random House

[4] “Shackle: Time and Uncertainty in Economics”. Andres F. Castillo — 2010 — University of Cambridge



[7] A Mathematical Theory of Communication. Claude Shannon — 1948 — The Bell System Technical Journal

[8] The Misbehavior of Markets: A Fractal View of Financial Turbulence. Benoit B. Mandelbrot — 2004 — Basic Books

[9] Bayesian Statistics. Duke University — Coursera

[10] Competition and Entrepreneurship. Israel Kirzner — 1973 — University of Chicago Press

