Over the last couple of years, I have been reflecting on programming languages and environments. Programming as a skill (or an art, depending on how you view such things) has gone through many changes over the last 10 years. Probably the most profound change is in the complexity of the hardware that needs to be controlled, and in the scale of interaction that takes place.
The two most obvious areas of change are smartphones/tablets and web services at scale. A personal computer from 2006 typically had less than 1 GB of main memory, 1024×768 graphics, and a hard drive under 500 GB. Multi-core processor designs (e.g. the Intel Core Duo) were just being introduced to the market in that time frame. For programming, C# in the Visual Studio IDE was becoming prevalent on Windows PCs (though people were using a wide variety of languages), and on the Macintosh, Objective-C in the Xcode IDE tended to be the weapon of choice. Another popular choice for both platforms was Java, though consumer-facing applications were rarely delivered in that form.
Today's smartphones are much more capable, and have a large number of attached peripherals: cellular and Bluetooth radios, cameras, an IMU, and a touch screen. Smartphones are mostly programmed in Objective-C or Java.
Today, of course, Amazon Web Services (AWS) rules the land. There are several large outposts, such as the Google and Facebook lands, but for the most part AWS is the go-to for everyone from startups to companies such as Netflix. AWS takes care of most of the basic architectural plumbing and transparently scales to millions of users on demand.
Back in 2006, most embedded systems were written in assembler or cross-compiled from C or some low-level domain-specific language. Some of the larger embedded systems were built with Java. Note that the Tegra is very different from what has historically been referred to as an embedded system; it is much more like the processor in a smartphone than something like an embedded controller.
We’ll call this The Crossover: embedded systems now need to be thought of as full computer systems rather than low-level devices used to control simple hardware. Conceptually this is similar to what happened when database programmers realized that there was enough main memory on computers to hold entire databases in memory. That required a major paradigm shift in thinking, since up to that point most of database management was about keeping the database synchronized with disk. Note: companies such as MapD are now keeping databases in GPU memory and working on everything in parallel, which will require another shift.
Another tidbit: the Robot Operating System (ROS) was originally developed in 2007 at the Stanford Artificial Intelligence Laboratory in C++, Python, and Lisp.
Cluster Fucks are Scalable
If you went to an engineering school, one of the first things you learned is that “Cluster Fucks are Scalable”. In polite company we refer to them as Charlie Foxtrots; here we’ll refer to them as CFs. On the first day of Engineering 101, you are presented with this textbook case: the Northeast Blackout of 1965. The basic story is that a technician incorrectly set a protective relay on a power transmission line too low. When a small surge of power caused the relay to trip, power was diverted to other lines. The added power caused properly set relays downstream to trip and reroute the incoming power. The ripple effect left over 30 million people across 80,000 square miles without power for up to 13 hours. The professor suggests: “Don’t let this be you.”
A major lesson here is that a system may “Operate as designed, but not as intended”. Also note that the system was not sabotaged or hacked, which are of greater concern today.
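The cascade described above can be sketched as a toy model: a set of transmission lines, each protected by a relay that trips when its share of the load exceeds its limit, with a tripped line's load redistributed over the survivors. The limits, load values, and even redistribution rule here are illustrative assumptions, not real grid data.

```python
# Toy model of a 1965-style cascading failure. Each line has a relay
# trip limit; total load is split evenly across lines that are still up.
# Numbers are purely illustrative, not taken from the actual blackout.

def simulate_cascade(limits, load):
    """Return the indices of lines that trip, in order of failure."""
    active = list(range(len(limits)))
    tripped = []
    while active:
        per_line = load / len(active)
        # Every relay whose limit is below the current share trips at once.
        failed = [i for i in active if per_line > limits[i]]
        if not failed:
            return tripped          # system stabilizes
        tripped.extend(failed)
        active = [i for i in active if i not in failed]
    return tripped                  # every line tripped: total blackout

# Five lines carrying 125 units total (25 each), each rated for 30 units:
print(simulate_cascade([30, 30, 30, 30, 30], 125))   # -> [] (holds)

# Same system, but one relay mis-set to trip at 20 units. It trips first,
# the redistributed load (31.25 per line) then trips the rest:
print(simulate_cascade([20, 30, 30, 30, 30], 125))   # -> [0, 1, 2, 3, 4]
```

Note that every relay here “operates as designed”: each one correctly trips at its configured limit. The blackout comes entirely from one wrong configuration value plus the coupling between lines.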
Most people aren’t very good with parables or metaphors, and think “I am not a power engineer. This won’t happen to me!” On top of that, computer scientists/programmers tend to be intelligent (“Too clever by half”) and believe that they will always engineer perfect systems and plan for all possibilities/edge cases. I’ll let you guess what happens next.
There are many famous CFs in the computer programming world. There are several root causes, the first of which starts with someone coming up with a really good idea and implementing it. What happens next is that the implementation is not what we’ll call “engineered for success”, usually with mitigating circumstances. A classic example is the “Twitter Fail Whale”, an image of a whale being lifted by birds that was shown to users during Twitter service outages back in the late 00s. That the image became famous through this exposure tells the story.
To be fair, at that time it was very difficult (and expensive) to build an exponentially growing system serving millions of users. Twitter basically ended up nuking the whole thing and bringing in an engineering group that knew how to build that type of system at scale; the pool of engineers who could build at that scale was very small back then. There’s also the realization here of the network effect: systems can grow exponentially in a very short amount of time. After all, it is getting to the point where most computers in the world are connected together! All of a sudden, cascading failures seem very real in the computer world.
There are many other causes of CFs, of course. For the purposes of this discussion, we’ll focus on “death by a thousand paper cuts”. You’ve probably experienced these types of projects yourself, where the underlying implementation has what feels like an unlimited number of issues that need to be fixed. This may be because the system is old and crufty, or, more likely, because the original engineering was lacking or doesn’t accurately reflect the underlying model. You may also find that the project has a lot of people who ask, “To fix it, can’t we just … ?” That’s usually a tell that something is really wrong. And there can be another cause entirely: the technology you’re building on invites disaster.
That brings us to Programming Languages.