

My wife brings up the following story any time she wants to make the point that I'm pedantic: When one of my daughters was in second grade, her math teacher told the class that any number divided by zero was one. I dashed off an impassioned email to the teacher, insisting that the result had to be undefined. Supposedly this is evidence that I'm sometimes difficult to be around.
Turns out the joke might be on me, although it's still hard to support the second-grade teacher's answer. I recently learned a bunch of things I didn't know about floating point math:

- Floating point numbers have both a positive zero (+0) and a negative zero (−0), and the two compare as equal.
- A positive number divided by +0 is +∞; divided by −0, it is −∞.
- A negative number divided by zero yields the opposite signs: −∞ and +∞, respectively.
- Zero divided by zero is NaN ("not a number").
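All of which is easy to confirm. Here is a minimal sketch in C (compiled with gcc on my Mac, though any IEEE 754-compliant platform should print the same):

```c
#include <stdio.h>

int main(void) {
    double pzero = 0.0;
    double nzero = -0.0;  // a distinct bit pattern, yet it compares equal to +0.0

    printf("%f\n", 1.0 / pzero);    // inf
    printf("%f\n", 1.0 / nzero);    // -inf
    printf("%f\n", -1.0 / pzero);   // -inf
    printf("%f\n", 0.0 / 0.0);      // nan (printed as -nan on some platforms)
    printf("%d\n", pzero == nzero); // 1: +0 and -0 compare as equal
    return 0;
}
```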
These rules stem from IEEE 754, the "Standard for Floating-Point Arithmetic," which standardized floating point representations across platforms. The most recent version of the standard was completed in 2008, but the original version was issued in 1985, so this behavior is not new. The rules above are true in both C (gcc) and Swift on my Mac, and also true in Swift on an iPhone. Python on the Mac supports negative zero for floats, but raises a ZeroDivisionError when you attempt to divide by zero of either sign.
There are a couple of surprising corollaries to these rules: two values that compare as equal (+0 and −0) can produce different results when you divide by them, and once an operation produces ∞, arithmetic simply carries on with it (∞ + 1 = ∞, and 1/∞ collapses back to +0).
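Both are visible in a few lines of C (again a sketch, assuming IEEE 754 doubles):

```c
#include <stdio.h>

int main(void) {
    double a = 0.0, b = -0.0;
    printf("%d\n", a == b);             // 1: equal operands...
    printf("%d\n", 1.0 / a == 1.0 / b); // 0: ...unequal reciprocals

    double inf = 1.0 / 0.0;
    printf("%f\n", inf + 1.0);          // inf: arithmetic carries on
    printf("%f\n", 1.0 / inf);          // 0.000000: back to +0
    return 0;
}
```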
I'm not a number theorist, but I find the concepts above surprising.
One immediate problem: Infinity is not a number, like zero or 3.25 or π. Rather, infinity is a concept. It is true that the rational numbers are countably infinite, but infinity is not a member of the set of rational numbers.
Furthermore, from a number theory perspective, division by zero is nonsensical. You can understand why if you get precise about what division means. Technically, "division" is "multiplication by a number's inverse," where the inverse satisfies a × a⁻¹ = 1. Zero is the only real number that has no multiplicative inverse. And since this inverse doesn't exist, we can't go around multiplying by it.
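Written out, with the zero case made explicit:

$$\frac{a}{b} := a \times b^{-1}, \qquad \text{where } b \times b^{-1} = 1$$

$$b = 0: \quad 0 \times x = 0 \neq 1 \ \text{for every real } x, \ \text{so } 0^{-1} \ \text{does not exist}$$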
But surely the people who designed floating point numbers knew all this. So I got to wondering why the behavior described above came to be written into the IEEE standard.
To start, let's consider the problem that floating-point math is trying to address. The real numbers are uncountably infinite, and yet we wish to represent this entire set within the bounds of finite computer memory. With a 64-bit double, there are 2^64 possible symbols, and the designers of the IEEE standard were trying to map these symbols onto the set of real numbers in a way that was both useful to real-world applications and also economically feasible given the constraints of early-'80s silicon. Given the basic requirements, clearly approximations were going to be used.
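For a concrete sense of what those symbols look like, here's a sketch that dumps the raw bit patterns (assuming the usual binary64 layout: 1 sign bit, 11 exponent bits, 52 fraction bits):

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

// Print the raw 64-bit pattern behind a double.
static void show_bits(const char *label, double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits); // type-pun safely via memcpy
    printf("%-4s = 0x%016llx\n", label, (unsigned long long)bits);
}

int main(void) {
    show_bits("+0.0", 0.0);        // 0x0000000000000000
    show_bits("-0.0", -0.0);       // 0x8000000000000000: only the sign bit differs
    show_bits("+inf", 1.0 / 0.0);  // 0x7ff0000000000000: all-ones exponent, zero fraction
    show_bits("-inf", -1.0 / 0.0); // 0xfff0000000000000
    return 0;
}
```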
The reasoning for negative zero appears to date to a 1987 paper [1] by William Kahan, a Berkeley professor who is considered the "father of floating point" and who later won the Turing Award for his work in drafting IEEE 754. It turns out that the existence of negative zero is intimately tied to the ability to divide by zero.
Let's start by discussing the usual reason that division by zero is not allowed. A naïve approach to division by zero is the observation that:

lim (x→0⁺) 1/x = +∞
In other words, as x shrinks toward zero, the result of 1/x grows without bound. But this is only true when x approaches 0 from the positive side (which is why there's that little plus sign above). Running the same thought experiment from the negative side:

lim (x→0⁻) 1/x = −∞
As a result, the two-sided limit of 1/x as x approaches 0 is undefined: the two one-sided limits disagree, because there is a discontinuity (what Kahan calls a slit) in the function 1/x at x = 0.
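The divergence is easy to watch numerically (a sketch):

```c
#include <stdio.h>

int main(void) {
    double x = 0.1;
    // Shrink x toward zero and watch 1/x and 1/(-x) race off in opposite directions.
    for (int i = 0; i < 5; i++, x /= 10.0) {
        printf("x = %8.1e   1/x = %9.1e   1/(-x) = %9.1e\n", x, 1.0 / x, 1.0 / -x);
    }
    return 0;
}
```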
However, by introducing a signed zero, Kahan and the IEEE committee could work around the difficulty. Intuitively, the sign of a zero is taken to indicate the direction the limit is being approached from. As Kahan states in his 1987 paper:
Rather than think of +0 and −0 as distinct numerical values, think of their sign bit as an auxiliary variable that conveys one bit of information (or misinformation) about any numerical variable that takes on 0 as its value. Usually this information is irrelevant; the value of 3+x is no different for x := +0 than for x := −0…. However, a few extraordinary arithmetic operations are affected by zero's sign; for example 1/(+0) = +∞ but 1/(−0) = −∞.
I've made my peace with the concept by adopting a rationalization proposed by my partner Mike Perkins: The 2^64 available symbols are clearly inadequate to represent the entirety of the set of real numbers. So the IEEE designers set aside a few of those symbols for special meanings. In this sense, ∞ doesn't really mean "infinity"; instead, it means "a real number that is larger than we can otherwise represent in our floating-point symbol set." And therefore +0 doesn't really mean "zero," but rather "a real number that is larger than true 0 but smaller than any positive number we can represent."
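That rationalization lines up with how overflow and underflow actually behave (a sketch; DBL_MAX and C11's DBL_TRUE_MIN come from <float.h>):

```c
#include <float.h>
#include <stdio.h>

int main(void) {
    // Too large to represent: the result becomes the symbol "inf".
    printf("%e\n", DBL_MAX * 2.0);       // inf

    // Too small to represent: the result becomes the symbol "0", sign preserved.
    printf("%e\n", DBL_TRUE_MIN / 4.0);  // 0.000000e+00
    printf("%e\n", -DBL_TRUE_MIN / 4.0); // -0.000000e+00
    return 0;
}
```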
Incidentally, while researching this issue, I discovered that even Kahan doesn't love the idea of negative zero:
Signed zero... well, the signed zero was a pain in the ass that we could eliminate if we used the projective mode. If there was just one infinity and one zero you could do just fine; then you didn't care about the sign of zero and you didn't care about the sign of infinity. But if, on the other hand, you insisted on what I would have regarded as the lesser choice of two infinities, then you are going to end up with two signed zeros. There really wasn't a way around that and you were stuck with it. (From an interview of Kahan conducted in 2005.)
I'm not certain if writing a blog post ten years later makes up for railing against a poor second-grade teacher. For her part, my daughter, now in high school, just rolled her eyes when I started talking about division by zero at dinner. So maybe that "difficult to be around" thing is hereditary.
[1] Kahan, W., "Branch Cuts for Complex Elementary Functions, or Much Ado About Nothing's Sign Bit," in The State of the Art in Numerical Analysis (Eds. Iserles and Powell), Clarendon Press, Oxford, 1987.