Recently Nada Amin and I discovered that Java and Scala are unsound. We submitted the finding and related discussion to OOPSLA, an academic conference on object-oriented programming. It has been great to see the paper making the rounds in industrial circles, but there also seems to be a lot of confusion, and I think that's our fault. The paper was written for academics, so it doesn't discuss the issue from industry's perspective. I'm crunching on a deadline right now, so let me write an abridged, industry-targeted version of the paper here, focusing just on Java.
Most type systems aim to provide some sort of guarantee about how a well-typed program will behave. For example, in many languages all memory accesses in well-typed programs are "safe". A type system is sound if it actually succeeds at providing that guarantee. Thus, informally, a type system is sound if it ensures what its designers intended it to ensure. This is much like program correctness: a program is correct if it does what it is supposed to do.
Java’s type system is intended to ensure that if a method asks for an Integer, then it will only ever be given Integers, never Strings. Thus the following pseudo-code is rejected by the type system:
public Integer increment(Integer i) { return i + 1; }
String countdown = increment("98765432");
Sure, in JavaScript this might make sense, but Java is implemented very differently, and allowing this code could really mess up the heap (or even the stack, if you've ever enjoyed some C/C++ memory-mismanagement errors).
Now you might already know that Java’s type system has some unsound qualities. For example, for historical reasons Java has covariant arrays, making this code type check:
String[] strs = { "NaN" };
Object[] objs = strs;
objs[0] = 1;
String one = strs[0];
And when generics were added, Java used “raw types” so that genericized code could be compatible with pre-generics code. Consequently the following type checks:
List<Integer> ints = Arrays.asList(1);
List raw = ints;
List<String> strs = raw;
String one = strs.get(0);
Thus both of these examples seem to allow the variable one to contain an Integer despite the fact that it is declared to have type String.
Now this brings us back to intent. The designers intended the type system to allow this behavior. This is because, although these examples are clearly bad, there are many examples where this permissiveness is useful. After all, untyped languages are all about allowing bad things to happen so that more cool things can happen as well. And because this behavior was intentional, the designers planned for it. Every time you assign a value into an array (of a reference type), the runtime looks up the array's actual element type and throws an ArrayStoreException if the value is incompatible. So in the above sample, the assignment objs[0] = 1; fails before the seemingly unsound assignment String one = strs[0]; can be reached. Similarly, whenever get is called on a List<String>, the runtime checks that the returned value is actually a String in case a raw type was misused, throwing a ClassCastException otherwise.
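If you want to watch those planned failures happen, here's a small self-contained program (my own sketch, not from the paper) that triggers both runtime checks:

import java.util.Arrays;
import java.util.List;

class IntendedBackdoors {
  public static void main(String[] args) {
    // Covariant arrays: the bad store is caught by the runtime.
    String[] strs = { "NaN" };
    Object[] objs = strs;
    try {
      objs[0] = 1; // the runtime checks the array's actual element type
    } catch (ArrayStoreException e) {
      System.out.println("caught: " + e);
    }

    // Raw types: the bad read is caught by a compiler-inserted cast.
    List<Integer> ints = Arrays.asList(1);
    List raw = ints;
    List<String> badStrs = raw; // compiles, with an unchecked warning
    try {
      String one = badStrs.get(0); // hidden cast to String fails here
    } catch (ClassCastException e) {
      System.out.println("caught: " + e);
    }
  }
}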
So Java’s type system seems to be as sound as it was intended to be, though maybe not as sound as you would like it to be. And since generics were introduced, it was believed to be “truly” sound so long as you didn’t use these intentional backdoors.
Here's our "true" unsoundness example that has been making the rounds most recently. It assigns an Integer instance to a String variable. According to the Java specification, this program is well-typed, and according to the Java specification it should execute without exception (unlike the aforementioned backdoors) and actually assign the Integer instance to the String variable, violating the intended behavior of well-typed programs. Thus it illustrates that Java's type system is unsound.
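Here is the example, reproduced from our paper:

class Unsound {
  static class Constrain<A, B extends A> {}
  static class Bind<A> {
    <B extends A> A upcast(Constrain<A,B> constrain, B b) { return b; }
  }
  static <T,U> U coerce(T t) {
    Constrain<U,? super T> constrain = null;
    Bind<U> bind = new Bind<U>();
    return bind.upcast(constrain, t);
  }
  public static void main(String[] args) {
    String zero = coerce(0);
  }
}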
Now you, like many others, might be confused by this example and might have already done some things to make yourself even more confused. So let me first name each common point of confusion:

- Your compiler rejects the example, seemingly contradicting my claim that it is well-typed according to the Java specification.
- Your compiler accepts the example, but running it throws a ClassCastException, also contradicting my claim.
- You also wonder why this matters, especially since you and your colleagues are unlikely to ever write code like this.

I'll address all of these concerns, though out of order.
If your compiler doesn't type-check our example, I have news for you: your compiler isn't catching the bug; your compiler has the bug. The Java specification says this example should type check and compile. Your compiler is supposed to implement that specification, and it's failing here because it didn't anticipate the corner case we created.
How many of the bugs you've dealt with have been corner cases? A good programmer is supposed to anticipate and handle corner cases, not excuse a bug just because it is hard to trigger. Sure, sometimes resources are constrained and you might decide the bug is low priority, but you can only assess its priority after you identify it. So even if this bug turns out to be low priority, which I'll get to in a bit, it's still important to know it exists.
And thank god one is thrown, purely due to some good luck! Remember how a call to get on a List<String> secretly checks that the returned value is actually a String? This secret check is done for backwards compatibility with raw types; it shouldn't be necessary when there are no raw types in the program. But thanks to it, a ClassCastException is always thrown when our example is run (if your compiler is smart enough to compile it).
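To see where that lucky check lives, consider what type erasure does to coerce. Roughly (this is a source-level sketch; the real check is a checkcast instruction in the compiled bytecode):

// After erasure, coerce's type variables are gone and it can only return Object:
//   static Object coerce(Object t) { ... }
// So at the call site the compiler must insert a cast to recover the declared type:
String zero = (String) coerce(0); // the inserted cast throws ClassCastException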
So had history been a little different, whether with Sun deciding to throw out backwards compatibility after adding generics, or with Sun adding generics into the JVM, or with Java first being released with generics, then this exception would not be thrown. Instead I could cast anything into an int[] and get direct access to the raw bytes of many objects, which I could then use to inject arbitrary code (using return-oriented programming to bypass security measures).
You probably would never write this, and your colleagues would never write this. That reasoning is good to apply in many situations; in fact I apply it all the time, and it's a huge part of my research agenda. But you have to be careful where you apply it. Soundness is a security guarantee, not a usability concern. What matters is whether someone can write this, because then that someone can get around the security measures that people have placed their confidence in and do whatever they would like to do. In the case of soundness, it's the malicious programmer you should be worried about, not just you and your friends.
Many people call out the type Constrain<U,? super T> and say it should be invalid, because Constrain requires its second type argument to be a subtype of its first, yet neither T nor any supertype of T is a subtype of U. But this reasoning confuses invalidity with uninstantiability. It's faulty reasoning that's been around in both industry and academia for at least as long as I've been out of high school (which has been a while).
Here’s a simpler version of the fallacy:
class Foo<T extends Number> {
  Number foo(T t) { return t; }
}
Is it safe for Foo<String> to be a valid type? For example, is the following code safe:
Number bar(Foo<String> foos) { return foos.foo("NaN"); }
The conventional wisdom up until now, and probably your gut instinct, would say this is unsafe, since it turns a String into a Number, but the fact is that it's completely safe. Here's an equivalent program that illustrates why:
interface IFoo<T> {
  Number foo(T t);
}

class Foo<T extends Number> implements IFoo<T> {
  public Number foo(T t) { return t; }
}

Number bar(IFoo<String> foos) { return foos.foo("NaN"); }
For any class with a constrained type parameter, I can just make a corresponding interface without that constraint and use that throughout the program. This will behave identically to the original program. (There are some clarifications to this, but I’ll leave them to the paper.)
So why does it seem like Foo<String> should be invalid? And how is bar safe? Well, the answer to both is the same: Foo<String> is uninstantiable. You'll never be able to create an instance of Foo<String> because String is not a subtype of Number. Thus the seemingly unsafe invocation of foo in bar will never happen, because you can't invoke a method of an object that can never exist.
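To make that concrete, here's a little demonstration using the IFoo version above (my own illustration; it relies on IFoo happening to be a functional interface):

// bar can certainly be called, just never with anything that turns a String into a Number:
IFoo<String> harmless = s -> s.length(); // this foo produces its own Number
Number three = bar(harmless);            // returns 3, perfectly safely
// The only way to reach Foo's body through bar would be an instance of Foo<String>,
// and no such instance can ever be constructed.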
So by this same reasoning it's perfectly safe for Constrain<U,? super T> to be valid. You'll only ever be able to have an instance of it if T is a subtype of U at run time, satisfying the constraint.
But Constrain<U,? super T> is inhabitable in Java, because null inhabits every reference type. So keep that in mind as I explain why this program type checks.
A Java wildcard, ?, is not a type. It is simply a piece of syntax representing something unknown. And different wildcards can represent different unknown types. So to reason about wildcards, Java uses a process called wildcard capture.
Suppose we have an expression e of type Constrain<U,? super T>. There is a wildcard in the type of this expression. The type-checker doesn't know what it represents, but it does represent something, so the type-checker gives that unknown something a temporary name, say X. Furthermore, the type-checker knows that the unknown something must at least be a supertype of T. Thus the type-checker views e as having type Constrain<U,X> where X is some supertype of T.
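If that sounds exotic, wildcard capture is the same mechanism behind a familiar idiom (a standard example, not from the paper):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class CaptureDemo {
  // The type-checker cannot name the element type of a List<?>,
  // so it rejects list.set(0, list.get(1)) written directly here.
  static void swapFirstTwo(List<?> list) {
    swapHelper(list); // wildcard capture: the unknown type gets the name X
  }

  // X is the type-checker's temporary name for whatever the wildcard represents.
  private static <X> void swapHelper(List<X> list) {
    X first = list.get(0);
    list.set(0, list.get(1));
    list.set(1, first);
  }

  public static void main(String[] args) {
    List<String> words = new ArrayList<>(Arrays.asList("world", "hello"));
    swapFirstTwo(words);
    System.out.println(words); // prints [hello, world]
  }
}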
But the type-checker knows something more. It knows that X must be a valid type argument. Consequently, the type-checker also notes that X must be a subtype of U. We call this the implicit constraint on the wildcard, whereas super T is the explicit constraint on the wildcard. I found out that, although uncommon, these implicit constraints are in fact used for type-checking a number of Java libraries. The paper gives a specific example showing why implicit constraints are important.
Now look at the example again. In the call to bind.upcast(constrain, t), the type-checker does the aforementioned wildcard-capture process and views constrain as having type Constrain<U,X>, where X is (explicitly) a supertype of T and (implicitly) a subtype of U. Consequently it infers X to be the type argument to bind.upcast and uses its explicit and implicit constraints to type-check the rest of the invocation.
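Spelled out on the key line of coerce, the type-checker's reasoning goes like this (my own annotation of the steps):

return bind.upcast(constrain, t);
// capture: constrain : Constrain<U,X>, where T <: X (explicit) and X <: U (implicit)
// infer:   B := X, so upcast expects a Constrain<U,X> and an X, and returns U
// check:   t : T and T <: X, so t is acceptable where an X is expected
// result:  the call has type U, so returning it from coerce type-checks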
Okay, so if the wonky type isn't the source of unsoundness, what is? Well, amazingly enough, it turns out null pointers don't just cause bugs in programs, they cause bugs in type systems too! The reasoning for wildcard capture completely forgets about null pointers. It assumes there must be some actual Constrain<U,X> for some X, an assumption that manifests itself in the implicit constraint on X. So all of this boils down to a null-pointer bug. But unlike most null-pointer bugs, this one took 12 years to discover. Oh, and it's also not nearly as easy to fix as most null-pointer bugs, since every feature involved is used in practice. But the Java team is working on fixing it. And before you once more insist that the solution is to disallow the Constrain<U,? super T> type, realize that that kind of reasoning is what delayed core Scala's proof of soundness (without null) for a whole decade, and I'm not particularly eager to spend a decade solving this.
Future language designers, please keep this in mind! Especially if you’re not compiling to the JVM. Java was lucky it was held back by backwards compatibility; you may not be so lucky.