The fourth decade of programming: How the Web was won

Written by azw | Published 2018/04/01
Tech Story Tags: programming | software-development | history | business | history-of-programming


Part Two of Chapter Four in a very personal history of programming

C goes to class

Before SmallTalk, there existed another language that used objects. It was called Simula. It was not a general-purpose language; it was designed specifically to run simulations in research settings. Bjarne Stroustrup, a Danish computer scientist, got to know it while working on his Ph.D. thesis. Later, when he was working at AT&T Bell Labs, he was given the problem of updating Unix for distributed computing.

Distributed computing is exactly what it sounds like: a computer program that runs on more than one computer. To do this successfully, the program has to be separated into chunks, and because networking between computers is not 100% reliable, those chunks have to be made independent of each other somehow. Objects are a good way to do this, and this is what made Stroustrup think of Simula.

Before OOP, it was common practice to use something called global variables. A global variable is a way to store a piece of information that needs to be used, and possibly changed, at different times by different bits of code in different places in the program.

I could create a global variable called “accountBalance” and initialize it to zero. Then in my program I would have subroutines for deposit and withdrawal, which would add to or subtract from accountBalance. There are a number of problems with this tactic, and they mostly center on conflicts that arise because no piece of code ever “knows” whether another piece of code has changed what is stored in that shared piece of memory.

For example, if a large company needs an audit trail to keep track of modifications to accountBalance, I have to put the code that records each change into every subroutine, on every computer, that can modify accountBalance. And since those subroutines must stay synchronized, if I change how the auditing works in one subroutine, I have to remember to find all the other subroutines and change them too. If these subroutines were running on different computers, I would also need to design some sort of shared memory among physically separate CPUs or machines.
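
To make this concrete, here is what the global-variable tactic might look like in a small C++ sketch (the function and variable names are mine, chosen only for illustration):

```cpp
#include <iostream>

// One global variable, shared by every bit of code in the program.
double accountBalance = 0.0;

void deposit(double amount) {
    // No validation, no audit trail, and no protection against two
    // callers changing the balance at the same time.
    accountBalance = accountBalance + amount;
}

void withdrawal(double amount) {
    accountBalance = accountBalance - amount;
}

int main() {
    deposit(100.0);
    withdrawal(30.0);
    // Any other code, anywhere in the program, could also have changed
    // accountBalance in the meantime without this code "knowing" it.
    std::cout << accountBalance << "\n";  // prints 70
}
```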

In OOP, I could instead create a special class called a singleton, which can only be used to construct one instance, one object. This would be a self-contained package of code that did everything there was to be done in regard to changing the account balance. It would create its own private variable, it would know how to keep a record of changes to that variable without any potential conflicts, and it would know how to queue requests that arrive at the same time. It would have error checking and validation code. And it would have code to increment and decrement the amount stored. It would then “publish” what are called public methods.
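
A rough sketch of such a singleton in C++ might look like the following. The class name, the lock used to queue simultaneous requests, and the audit trail kept in a list are my own illustrative choices, and the error checking and validation are omitted to keep it short:

```cpp
#include <mutex>
#include <vector>

// A singleton: the class itself guarantees there is only ever one instance.
class AccountBalance {
public:
    // Every caller gets the same single object.
    static AccountBalance& instance() {
        static AccountBalance theOne;   // constructed exactly once
        return theOne;
    }

    // The published public methods: the only way to change the balance.
    long long increment(long long amount) { return apply(+amount); }
    long long decrement(long long amount) { return apply(-amount); }

private:
    AccountBalance() = default;                  // no one else can construct one
    AccountBalance(const AccountBalance&) = delete;

    long long apply(long long delta) {
        std::lock_guard<std::mutex> lock(gate);  // queue simultaneous requests
        balance += delta;
        history.push_back(delta);                // private record of every change
        return balance;                          // success: return the new value
    }

    long long balance = 0;           // the private variable nobody else can touch
    std::vector<long long> history;  // the audit trail
    std::mutex gate;
};
```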

Methods are how other code gets access to the object. So my object, which I will call accountBalanceObject, has an increment method and a decrement method. In most languages, “calling” a method looks like this: accountBalanceObject.increment and accountBalanceObject.decrement. Whenever I call one of these methods from another object (the Sender), I pass an amount along with the call (as a parameter), and the code inside accountBalanceObject uses that amount to perform the appropriate action(s); accountBalanceObject then returns a value. The return value will be either some sort of success code (usually the new value) or some sort of error code. The Sender has code that waits for the return value, and will either proceed on success or take corrective action on an error. This way the two objects can now be on separate CPUs and still work properly.
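
Continuing the hypothetical AccountBalance sketch above, the Sender’s side of that conversation could look like this (again simplified; here I treat a negative balance as the error case):

```cpp
#include <iostream>

// Assumes the AccountBalance class from the sketch above.
int main() {
    // "Call" the public methods by name, passing the amount as a parameter.
    AccountBalance::instance().increment(100);
    long long result = AccountBalance::instance().decrement(30);

    // The Sender waits for the return value and acts on it.
    if (result < 0) {
        std::cout << "error: balance went negative, taking corrective action\n";
    } else {
        std::cout << "success, new balance: " << result << "\n";  // prints 70
    }
}
```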

Since Stroustrup was familiar with objects from using Simula for his Ph.D. thesis, he followed a natural course of action: he took C, the language in which Unix was already written, and added objects to it. Originally called C with Classes, it was shortened to C++, which is Yet Another¹ programmer’s joke: “++” is a programmer’s shortcut for adding one to a number, and C with Classes was adding to C.
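
For example, in C (and therefore in C++) the “++” operator adds one to whatever is stored in a variable:

```cpp
#include <iostream>

int main() {
    int c = 1;
    c++;                     // "++" adds one: c is now 2
    std::cout << c << "\n";  // prints 2
}
```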

Nowadays C is still used extensively. Most “embedded” systems (the systems in your car, a vending machine, a cash register, and so on) are implemented in C. Although C is still a universal language supported on almost every platform, C++ is nearly as universal, and because of the conveniences provided by OOP it is more likely to be used for writing an application. Most Adobe apps are written in C++ because it reduces the effort required to release versions on both Windows and Mac. Important parts of Google, as well as the Chrome browser, the Firefox browser, and MySQL, are written in C++.

If you have a really big application to write and you want a lot of control over how it uses the hardware, C++ will give you the power of C with the convenience of objects to make the job easier. But C++ did not substantially change the way programmers work, which might be why it was so very popular. Your average programmer is not overly fond of big change. They have invested a lot in learning a language, and do not relish starting again at the bottom.

“C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do, it blows away your whole leg.”

~ Bjarne Stroustrup

How the Web was won

The last language we will look at in this chapter is Objective-C, created by Brad Cox and Tom Love in the early eighties. They were particularly concerned with the question of cross-platform development, but they also had another agenda.

Cox had a Big Idea: interchangeable software components, or Software ICs as he called them. We will take a deep look at the general concept of interchangeability in Part Three of this book, but for now let’s just focus on Objective-C without delving into why the word interchangeable is so significant. For the moment, we will simply say that Cox wanted to apply a well-known manufacturing technique to software, and he wanted to prove that it could be done without a major overhaul and wholesale disruption of the software development processes of the time. He did it by extending C without changing it, using something called a pre-processor.

Whereas C++ is a complete language with its own compiler, Objective-C was designed to first be converted by the pre-processor into standard C, which was then compiled to machine code. Whereas C++ shipped with only its standard libraries, Objective-C came with an extensive Class Library intended to promote the concept of Software ICs. And whereas C++ was followed two years later by a simple reference manual describing the language syntax, Objective-C was released concurrently with Object-Oriented Programming: An Evolutionary Approach by Cox and Andy Novobilski, which described not only the language but a systematic approach to object-oriented design and development.

The failure of the Object Pascal-based MacApp had obviously been a great learning experience for Steve Jobs, because when he directed the acquisition of Objective-C and its further development into the NextStep object framework for his NeXT workstation, he and his team hit a home run. NextStep is a robust and very usable framework that is still used extensively today for Macs, iPhones, and iPads. It is a tremendous productivity enhancer because it gives developers simple, coherent access to complex preprogrammed capabilities… like the channel changer on your TV.

As I mentioned earlier, the World Wide Web was created using the NextStep framework in Objective-C on the NeXT computer. Sir Tim Berners-Lee invented the web as a means of sharing research documents while he was working as a software consultant at the CERN particle physics research facility. He says that because of the simplicity and straightforwardness of the NextStep libraries, creating HTTP and the web browser “was remarkably easy”.²

Class libraries, of which we will see a lot later on, were the next level of abstraction that would bring about an order-of-magnitude productivity increase. While one FORTRAN instruction might compile to dozens of lines of assembly language, a programmer might use a one-line NextStep class method call that contains a few hundred lines of Objective-C that pre-compiles to a few thousand lines of C that compiles to a few tens of thousands of lines of assembly language.

In 1996, Apple acquired NeXT as part of a deal to get Jobs to return to Apple, and renamed NeXTstep to Cocoa. Today it powers the apps that run on the iMac, iPhone, iPad, and Apple Watch. In its first year, Apple’s App Store offered 65,000 apps, mostly from small independent programmers, because of the ease with which an iPhone app can be developed. This amazing productivity is the result of three things working together in close concert: the Objective-C language, which is strongly influenced by SmallTalk; Xcode, Apple’s integrated development environment, which is a direct descendant of the SmallTalk IDE; and the Cocoa class library. Really, the three are inseparable. That is the legacy of SmallTalk.

As he had done so many times before and would do again and again, Steve Jobs had made technology picks that were to have lasting repercussions on the entire computer industry.

“The most important thing in the programming language is the name. A language will not succeed without a good name. I have recently invented a very good name and now I am looking for a suitable language.”

~ D. E. Knuth


[1] “Yet Another”. The Jargon File, http://catb.org/jargon/html/Y/Yet-Another.html.

[2] Berners-Lee, Tim. “Steve Jobs And The Actually Usable Computer | W3C Blog”. W3.Org, 2011, https://www.w3.org/blog/2011/10/steve-jobs/.

