Trigger Warning:
To all the adherents of the Statically Typed Functional Programming religion: I know that you believe that Static Typing is an essential aspect of Functional Programming and that no mere dynamically typed language could ever begin to approach the heights and glory of The One True and Holy TYPED Functional Apotheotic Paradigm. But we lowly programmers quivering down here at the base of Orthanc can only hope to meekly subsist on the dregs that fall from on high.
(R.I.P. Kirstie Alley)
OK, so, once again…
A class is an intentionally named abstraction that consists of a set of narrowly cohesive functions that operate over an internally defined data structure.
We do not need the class keyword. Nor do we need polymorphic dispatch. Nor do we need inheritance. A class is just a description, whether in full or in part, of an object.
For example – it’s time we talked about clouds (which I have looked at from both sides now; and do, in fact, understand pretty well).
So… Here come your father’s parentheses!
(ns spacewar.game-logic.clouds
  (:require [clojure.spec.alpha :as s]
            [spacewar.geometry :as geo]
            [spacewar.game-logic.config :as glc]))

(s/def ::x number?)
(s/def ::y number?)
(s/def ::concentration number?)
(s/def ::cloud (s/keys :req-un [::x ::y ::concentration]))
(s/def ::clouds (s/coll-of ::cloud))

(defn valid-cloud? [cloud]
  (let [valid (s/valid? ::cloud cloud)]
    (when (not valid)
      (println (s/explain-str ::cloud cloud)))
    valid))

(defn make-cloud
  ([]
   (make-cloud 0 0 0))
  ([x y concentration]
   {:x x
    :y y
    :concentration concentration}))

(defn harvest-dilithium [ms ship cloud]
  (let [ship-pos [(:x ship) (:y ship)]
        cloud-pos [(:x cloud) (:y cloud)]]
    (if (< (geo/distance ship-pos cloud-pos) glc/dilithium-harvest-range)
      (let [max-harvest (* ms glc/dilithium-harvest-rate)
            need (- glc/ship-dilithium (:dilithium ship))
            cloud-content (:concentration cloud)
            harvest (min max-harvest cloud-content need)
            ship (update ship :dilithium + harvest)
            cloud (update cloud :concentration - harvest)]
        [ship cloud])
      [ship cloud])))

(defn update-dilithium-harvest [ms world]
  (let [{:keys [clouds ship]} world]
    (loop [clouds clouds ship ship harvested-clouds []]
      (if (empty? clouds)
        (assoc world :ship ship :clouds harvested-clouds)
        (let [[ship cloud] (harvest-dilithium ms ship (first clouds))]
          (recur (rest clouds) ship (conj harvested-clouds cloud)))))))

(defn update-clouds-age [ms world]
  (let [clouds (:clouds world)
        decay (Math/pow glc/cloud-decay-rate ms)
        clouds (map #(update % :concentration * decay) clouds)
        clouds (filter #(> (:concentration %) 1) clouds)
        clouds (doall clouds)]
    (assoc world :clouds clouds)))

(defn update-clouds [ms world]
  (->> world
       (update-clouds-age ms)
       (update-dilithium-harvest ms)))
Some years back I wrote a nice little spacewar game in Clojure. You can play it here. While playing, if you manage to blow up a Klingon, a sparkling cloud of Dilithium Crystals will remain behind, quickly dissipating. If you can guide your ship into the midst of that cloud, you will harvest some of that Dilithium and replenish your stores.
The code you see above is the class that represents the Dilithium Cloud.
The first thing to notice is that I defined the TYPE of the cloud class – dynamically. A cloud is an object with an x and y coordinate, and a concentration; all of which must be numbers. I also created a little type checking function named valid-cloud? that is used by my unit tests (not shown) to make sure the TYPE is not violated by any of the methods.
Next comes make-cloud, the constructor of the cloud class. There are two overloads of the constructor. The first takes no arguments and simply creates a cloud at (0,0) with no Dilithium in it. The second takes three arguments and loads the instance variables of the class.
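To make the two arities concrete, here is how calls to the constructor behave (repeating make-cloud so the snippet stands alone):

```clojure
;; make-cloud as defined above, repeated so this snippet is self-contained.
(defn make-cloud
  ([] (make-cloud 0 0 0))
  ([x y concentration]
   {:x x :y y :concentration concentration}))

(make-cloud)          ;; => {:x 0, :y 0, :concentration 0}
(make-cloud 10 20 50) ;; => {:x 10, :y 20, :concentration 50}
```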
There are two primary methods of the cloud class: update-clouds-age and update-dilithium-harvest. The update-clouds-age method finds all the cloud instances in the world object and decreases their concentration by the decay factor – which is a function of the number of milliseconds (ms) since the last time they were updated. The update-dilithium-harvest method finds all the cloud objects that are within the ship object's harvesting range and transfers Dilithium from those cloud objects to the ship object.
Now, you might notice that these methods are not the traditional style of method you would find in a Java program. For one thing, they deal with a list of cloud objects rather than an individual cloud object. Secondly, there's nothing polymorphic about them. Third, they are functional, because they return a new world object with new cloud objects and, in the case of update-dilithium-harvest, a new ship object.
So are these really methods of the cloud class? Sure! Why not? They are a set of narrowly cohesive functions that manipulate an internal data structure within an intentionally named abstraction. For all intents and purposes cloud is a °°°°°° °°°°°°° class.
So there.
Should you subdivide a functional program into classes the way you would an object oriented program?
— Uncle Bob Martin (@unclebobmartin) January 17, 2023
Yes. You should. Because the rules don’t change just because you’ve chosen to use immutable data structures.
This led to a bevy of interesting responses about the difference between classes and modules. In answer to those responses I tweeted this:
A class is a group of cohesive and narrowly defined functions that operate on an encapsulated data structure. The functions may, or may not, be polymorphically deployed.
— Uncle Bob Martin (@unclebobmartin) January 17, 2023
Of course that only led to an increased number of interesting responses. And so I thought that it might be wise to blog about my reasoning rather than to continue trying to cram that reasoning into tweets.
If you are in doubt about what FP is, and about what OO is, and about whether the two are compatible, then I recommend this old blog of mine.
What is a class? According to the dictionary a class is:
A set, collection, group, or configuration containing members regarded as having certain attributes or traits in common; a kind or category.
Now consider that definition when reading the next paragraph.
In OO languages we organize our programs into classes of objects that share similar traits. We describe those objects in terms of the attributes and behaviors that they have in common. We strive to create hierarchies of classification that those objects can fit within. We consider the higher level classifications to be abstractions that allow the expression of general truths that are independent of irrelevant details. (Indeed, I once defined abstraction as: The Amplification of the essential, and the elimination of the irrelevant.[1])
In 1966 the power of abstraction by classification led the authors of Simula to create the keyword class. In 1980, Bjarne Stroustrup continued that convention and used the class keyword in C++. This was actually somewhat strange because C already had the keyword struct, which had a virtually identical meaning. But the power of the word class held sway.

In the mid-90s the power of that word led the authors of Java (and then C#) to declare and enforce that everything in a program must be part of a class. This was a dramatic overreach. It seems to me that some of the things that Java forces into classes ought not to be in classes at all. For example, the class java.lang.Math is really just a namespace for a batch of functions and is not, in any sense, a classification of objects.
This conflation of object classification and namespaces is confusing and unnecessary; and is probably part of the reason my initial tweet generated the responses that it did.
Another overreach in Java (and by extension C#) is that methods are polymorphic by default. Polymorphism is a tool, not a rule. Many, if not most, function calls do not require dynamic dispatch.
These kinds of overreach lead to confusion about what a class really is. I believe that most of the responses to my tweet were the result of that confusion.
So let’s cut to the chase.
One of the oldest rules of software design is that we should partition the elements of the system into loosely coupled and internally cohesive elements. Those elements become well named places where we can put data and behavior. This follows the old proverb: A place for everything, and everything in its place.
What are those elements? It seems obvious that the classification structures of objects ought to be high on the list. Namespaced function libraries like java.lang.Math are another obvious choice. In the one case we have a batch of functions that manipulate an internal data structure. In the other case we have a batch of functions that manipulate an external data structure.
The essential characteristic of these elements, these batches of functions, is that they are internally cohesive. That means that all the functions in the batch are strongly related to each other because they manipulate the same data structures, whether internal or external. It is that cohesion that drives the partitioning of a software design.
### Example
Recently I have been writing an application called more-speech, which is a client that browses messages on the nostr network. This network is composed of relays that use a simple websocket protocol to transmit messages to clients. The more-speech client is written in Clojure, which is a Functional Programming language.

Early on I created a module named protocol to house the code that implemented the nostr protocol. I began this module by managing the websockets over which the messages travelled, and then decoding those messages and manipulating them according to the rules of the protocol.

Clojure is not a traditional OOPL; there is no class keyword that is used to declare and define objects and the methods that manipulate them. Rather, a module in Clojure is just a batch of functions that are not syntactically bound to any particular data. Thus my protocol module had functions that dealt with WebSockets, functions that dealt with messages, and functions that dealt with protocol elements. They were cohesive in the sense that they were all related to the nostr protocol; but there was no central data structure that unified them.
The other day I realized that I was missing an abstraction. The nostr protocol may be transmitted over websockets but the protocol rules have nothing to do with websockets. Those rules deal with the data that comes through the websockets, but not the websockets themselves. Yet my protocol module was littered with websocket code.

So I separated the websocket code from the protocol code by creating an abstraction that I called relay. A relay is a data structure that contains the url of a websocket, the websocket itself, and a function to call when messages are received. The relay data structure is manipulated by functions such as make, open, close, and send.
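The relay module itself isn't shown in the post, so here is a minimal sketch of what such a class-as-module could look like. Everything here is an assumption: the websocket is stubbed out as an atom, and the function names (make-relay, open-relay, and so on) are invented, in part to avoid shadowing clojure.core/send.

```clojure
;; Sketch only -- not the real more-speech code. The :socket here is a
;; stub (an atom collecting sent messages) standing in for a websocket.
(defn make-relay [url callback]
  {:url url :socket nil :callback callback})

;; Each operation returns a *new* relay, keeping the module functional.
(defn open-relay [relay]
  (assoc relay :socket (atom [])))          ; stub "connection"

(defn send-relay [relay message]
  (swap! (:socket relay) conj message)      ; stub "transmission"
  relay)

(defn close-relay [relay]
  (assoc relay :socket nil))

;; Incoming messages are handed to the callback supplied at construction.
(defn receive [relay message]
  ((:callback relay) message)
  relay)
```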
This relay module very clearly defines a class of objects. The protocol constructs a relay object for each of the urls in a list of active relays. It opens those relays and sends messages to them. Messages that are received are sent to protocol through the callback functions that are passed into the function that constructs the relay object. In order to maintain the immutability and referential transparency constraints of Functional Programming, the functions that update the state of a relay return a new instance of that relay.
### Lesson
Java, C#, Ruby, and C++ all either enforce, or strongly encourage, the partitioning of systems into classes. Clojure does not; it is entirely agnostic about classes. The lesson that I learned from protocol and relay is that I had not been paying enough attention to class structure when writing complex Clojure programs. Instead, I had been allowing functions to accumulate in modules in a, more or less, ad hoc fashion – similar to the way one might program in C, Fortran, Basic, or even Assembler. But that was lazy. Objects exist in programs, and they can, and should, be classified. So, from now on, I will be paying much more attention to the classification structure of the objects in my systems.
A place for everything, and everything in its place.
The first time I wrote Space War was in 1978. I wrote it in Alcom, which was a simple derivative of Focal, which was an analog of Basic for the PDP-8. The computer was an M365, which was an augmented version of a PDP-8 and was proprietary to Teradyne, my employer at the time.
The UI was screen based, using character graphics, similar to curses. Screen updates took on the order of a second. All input was through the keyboard.
We used to play it on one machine while waiting for a compile on another.
Forty years later, in September of 2018, I started working on this version of Space War. It's an animated GUI driven system with a frame rate of 30fps. It is written entirely in Clojure and uses the Quil shim for the Processing GUI framework.
My justification for writing it was so that I could use it as the case study for my cleancoders.com videos on Functional Programming. Once that series of videos was complete, I set Space War aside and started working on other things.
Then, a month ago, the program called to me. I don't know why. Perhaps it was because I'd left it in a partially completed state. Perhaps it was because I had just finished Clean Craftsmanship and I needed a way to decompress. Or, perhaps it was just because I felt like it. Whatever the reason, I loaded up the project and started goofing around with it.
Now I'm sure you've had that feeling of trepidation when you pick up a code base that you haven't seen in three years. I certainly felt it. I mean, what was I going to find in there? Would I be able to get my bearings and understand the code? Or would I flail around aimlessly for weeks?
I needn't have worried. The code base was nicely organized. There was a very nice suite of tests that covered the vast majority of the game logic. The GUI code, though not tested, was simple enough to understand at a glance.
But, perhaps most importantly, this code was written to be 100% functional. No variables were mutated, anywhere in the code. This meant that every function did exactly what it said it did; and left no detritus around to confound other functions. No function could be impacted by the state of the system because the system did not have "a state".
Now maybe you are rolling your eyes at that last paragraph. Several years ago I might have rolled my eyes too. But the relief I experienced coming back into this code base after three years of not touching it, and knowing it was functional, was palpable.
Another thing that gave me a significant amount of help was that all the critical data structures in the system were described and tested using clojure/spec. This was profoundly helpful because it gave me the kind of declarative help that is usually reserved for statically typed languages.
For example, this is a Klingon:
(s/def ::x number?)
(s/def ::y number?)
(s/def ::shields number?)
(s/def ::antimatter number?)
(s/def ::kinetics number?)
(s/def ::torpedos number?)
(s/def ::weapon-charge number?)
(s/def ::velocity (s/tuple number? number?))
(s/def ::thrust (s/tuple number? number?))
(s/def ::battle-state-age number?)
(s/def ::battle-state #{:no-battle :flank-right :flank-left :retreating :advancing})
(s/def ::cruise-state #{:patrol :guard :refuel :mission})
(s/def ::mission #{:blockade :seek-and-destroy :escape-corbomite})
(s/def ::klingon (s/keys :req-un [::x ::y ::shields ::antimatter
                                  ::kinetics ::torpedos ::weapon-charge
                                  ::velocity ::thrust
                                  ::battle-state-age ::battle-state
                                  ::cruise-state
                                  ::mission]
                         :opt-un [::hit/hit]))
These kinds of clojure/spec descriptions gave me the documentation I needed to reacquaint myself with the critical data structures of the system. They also gave me the ability to check that any functions I wrote kept those data structures conformant to the spec.
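As an illustration of that conformance check (using a cut-down spec of my own devising, not the full ::klingon above), a test or assertion can lean on s/valid? and s/explain-str:

```clojure
(require '[clojure.spec.alpha :as s])

;; A cut-down spec; ::klingon-lite is an invented name, not from the game.
(s/def ::x number?)
(s/def ::y number?)
(s/def ::shields number?)
(s/def ::klingon-lite (s/keys :req-un [::x ::y ::shields]))

(s/valid? ::klingon-lite {:x 0 :y 0 :shields 50}) ;; => true
(s/valid? ::klingon-lite {:x 0 :y 0})             ;; => false
(s/explain-str ::klingon-lite {:x 0 :y 0})        ;; names the missing key
```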
All of this means that I was able to make progress in this code base quickly, and with a high degree of confidence. I never had that feeling of wading through bogs of legacy code.
Anyway, I'm done now, for the time being. I've given the player a mission to complete, and made it challenging, but possible, to complete that mission. A game requires 2-3 hours of intense play, is tactically and strategically challenging, and is often punctuated by moments of sheer panic.
I hope you enjoy downloading it, firing up Clojure, and playing it. Consider it my Christmas present to you.
One last thing. Three years ago Mike Fikes saw my Space War program and converted it from Clojure to ClojureScript. The change was so minuscule that the two are now a single code base with a tiny smattering of conditional compilation for the very few differences. So if you want to play the game on-line you can just click on http://spacewar.fikesfarm.com/spacewar.html. Mike has kindly kept this version up to date so -- have at it!
One of the changes I made was to populate the initial space with a few random bases scattered here and there. This would allow the player some extra resources with which to battle the Klingons while building up a network of more bases.
While I was playing the modified game, it crashed. Hard.
Now I wrote this with TDD, and I was very disciplined about the cleanliness of the code, and the test coverage. So this was unexpected. I dug up all my old debugging skills from the pit in which I had buried them, and started to work out what was going on.
It wasn't long before I realized that the crash was occurring because a transport was being launched between two bases, but the angle of the velocity vector of the transport was :bad-angle. This can only happen if the two bases exist at the exact same location.
Bases don't move around in this game, so there's no chance that two bases will accidentally slide on top of each other. There is a very (very) minor chance that the random number generator will put two bases on top of each other at the start of the game; but the odds are so minuscule that I didn't worry about it. In any case, this crash happened well into the game I was playing, so initial values could not have been the cause.
Fortunately it's pretty easy to hunt and peck around in the game, so I was quickly able to discover that the two bases in question were duplicates of each other. Something in my code was duplicating bases!
Well now, that shouldn't be too hard to find. So I wrote a little function that would examine the world and halt with a message if the world contained two bases at the same location. I called this function in the main update loop, and sure enough, after 20 minutes of play the program halted with my message.
Unfortunately, being able to detect that the duplication occurred did not tell me where it occurred. So I laced the code with calls to my check-for-duplicate-base function.

It took me a few tries because the problem was not in any of the obvious places. So over a few hours I added more and more calls to check-for-duplicate-base.
Eventually I found the culprit in a low frequency function named klingons-steal-antimatter.
This function is called once per second. It checks to see if any klingons are within docking-distance of a base, and if so it steals antimatter from that base.
This explained why the crash took so long to appear. Most of the time it takes 20 minutes or so for a Klingon to move close enough to a base to start stealing.
Anyway, I looked at the code and didn't see any obvious duplication. So I wrote a unit test to check whether that function duplicated bases. My test positioned a klingon near a base, called the klingons-steal-antimatter function, and then checked the number of bases in the world. The result: No duplication.

Now, before I continue, let me describe the process I used in the klingons-steal-antimatter function.
The function created a list of thefts. A theft is a [thief victim] pair. It used those pairs to create lists of all the thieves and victims, and separate lists of all the innocent klingons and all the unvictimized bases.

Why? Because this is a purely functional program. In a purely functional program you cannot update the status of an object. Instead you transform old objects into new objects. So when stealing antimatter from a base you must create a new base with less antimatter, and you must create a new klingon with more antimatter. When you are done processing all the thefts you are left with a list of all the updated klingons, and a list of all the updated bases.

The world contains a list of all the klingons and a list of all the bases. In order to update the world after processing the thefts you have to concatenate the updated bases with the unvictimized bases, and you have to concatenate the updated klingons with the innocent klingons.
Got it? Understand? Good.
As I pondered the code I realized that a base could be robbed by more than one klingon. Klingons tend to slowly migrate towards bases and then steal from them. Two or three or more could eventually manage to slide over to a base, like a pack of coyotes squabbling over a carcass.
Now I already had a unit test that checked for this condition. It created two klingons near one base and made sure that each klingon was able to steal from that base. What that test did not do, however, was count the number of bases in the world when it was done.
So I added a check: base-count => 1. Whoops, it came back with 2.
Now maybe you've already figured out why this happened. But let me walk you through it. My function identified two thefts: [[k1 b] [k2 b]]. It returned with the results of each theft. Let's say that k1 stole a1 antimatter from b, and k2 stole a2 from b. What the function returned was [[k1+a1 b-a1] [k2+a2 b-a2]]. Note that the second theft in the list was not [k2+a2 b-a1-a2].

You've probably guessed the rest. When I reassembled the world, I added all the bases that had been victims; and -- of course -- I added both b-a1 and b-a2.
Fortunately I had lots of unit tests to fall back on. Changing the algorithm was actually quite challenging, and required me to put all the klingons and bases into hashmaps keyed by their positions. I won't bore you with the details.
So I added unit tests to check for the duplications, saw them fail, and then gradually made them pass. The unit tests allowed me to be sure that I was not breaking something else along the way.
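Here is a hedged sketch of the repaired scheme (invented names, not the actual spacewar code): the bases live in a map keyed by position, and that map is threaded through each theft with reduce, so a second thief robs the already-updated base instead of a stale copy.

```clojure
;; `thefts` is a list of [klingon base-position] pairs. Because the
;; base map is threaded through the reduce, each theft sees the base
;; as updated by all earlier thefts -- no duplicate bases to merge.
(defn process-thefts [base-map thefts amount]
  (reduce (fn [[ks bm] [k pos]]
            (let [base   (get bm pos)
                  stolen (min amount (:antimatter base))]
              [(conj ks (update k :antimatter + stolen))
               (assoc bm pos (update base :antimatter - stolen))]))
          [[] base-map]
          thefts))
```

Two klingons robbing the same base now drain it cumulatively, and the world still contains exactly one copy of that base.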
Now you might think this is just an esoteric little problem that you'll never encounter. However, if you are writing functional programs, you will face this issue, and you'll likely face it a lot. Dealing with immutable lists of objects means that when you update such a list you must recreate it. If you are only updating m out of n elements of the list, you have to partition the original list into the m elements you are changing and the n-m elements you are not changing; and then you have to concatenate the m changed elements with the n-m unchanged elements in order to create the updated list.
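That partition-and-concatenate pattern can be sketched in a few lines of Clojure; update-affected and its arguments are names made up for illustration:

```clojure
;; Update only the elements that satisfy `affected?` and recombine
;; them with the untouched remainder. Note that, like the concat in
;; the story above, this does not preserve the original ordering.
(defn update-affected [affected? update-fn coll]
  (let [{hit true, missed false} (group-by (comp boolean affected?) coll)]
    (concat (map update-fn hit) missed)))
```

For example, incrementing only the odd elements of [1 2 3 4] yields the changed elements followed by the unchanged ones: (2 4 2 4).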
Anyway, I thought you might find that interesting.
Thanks Dad!
Tim and I would spend our day "playing" with the floor model of the PDP-8 they had at the office. The office staff were very accommodating and accepting of our presence, and they helped us out if we needed any fresh rolls of teleprinter paper, or paper tape.
Several years later, at the age of 20, I found myself working at Teradyne Applied Systems in Chicago. The computer we used there was called an M365; but it was really just an upgraded PDP-8. We used it to control lasers in order to trim electronic components to very precise values.
Forty four years later, in May of 2015, I started playing with a cute little Lua environment on my iPad called Codea. I wrote several fun little programs, like lunar lander, etc. But then I thought: "Wouldn't it be fun to write a PDP-8 Emulator?"
A few days/weeks later I had a nice little PDP-8 emulator running on my iPad. I found some archived binary images of ancient paper tapes and managed to load them into my emulator. This allowed me to run the suite of development tools that I had used back in those early days.
Then Apple decided it didn't want people writing code on the iPad that was not distributed through the App Store, so they blocked the means by which Codea users could share source code. Indeed, I couldn't even move Lua source code to my new iPads. So the emulator was lost.
Fortunately I had put the last working version up on GitHub.
At some point, Apple reopened the channel, perhaps due to a court case. I discovered this a few weeks back, and loaded that old source code back into my iPad. It worked like a champ.
I made a few changes to deal with the bigger screen, and the faster processor, and then announced it on twitter. I think many people have played with it since.
You can get the emulator here. You'll find a lot of good tutorial information, and several demonstration videos in that repository.
As you may know I have a youtube series on the cleancoders.com channel, in which I walk through the problems in the Euler project solving them in Clojure and then taking them to the max, Myth-buster style.
Euler 4 is a simple little problem of finding the factors of palindromic numbers. I quickly solved it in Clojure, and then I thought it would be fun to write a PDP-8 program to solve it.
Down the rathole I went.
I used TDD to get the individual subroutines working. Among the subroutines I wrote were single and double precision multiply and divide routines. (We didn't use the word "functions" back then.) The poor PDP-8 could only add. It couldn't even subtract. Subtraction was accomplished by using twos-complement addition (let the reader understand;-)
Was this fun? Yes, at first it was kinda cool to reminisce, and to feel all the old knowledge and instincts come flooding back into my brain. But once the "novelty" wore off, it stopped being fun, and just turned into work -- grinding, tedious, work.
It took me several hours, over a period of a few days, but I got the blasted thing working. It's not an experience I'd like to repeat. Working on a PDP-8 is a PITA, even with all the cheats I supply in my Emulator.
Here, for your edification, is my solution to Euler 4 on a PDP-8. This code solves the problem; but I'm quite sure it has some really nasty bugs anyway. I am in no way proud of this code. I'm just not willing to improve it. If you study it you'll see just how awful it is. I mean, among other sins I used truly naive algorithms for multiplying and dividing numbers.
Anyway, be careful. The lure of the rathole is very compelling.
/EULER 4 SOLUTION
PZERO=20
*200
MAIN, CLA
TLS
TAD SEED
ISZ SEED
CIA
JMS CALL
MKPAL
JMS CALL
PRDOT
CLA
TAD MAXFAC
DCA FAC
FACLUP,
CLA
TAD FAC
TAD K100
SMA CLA
JMP MAIN
JMS CALL
DLOAD
DPAL
TAD FAC
CIA
JMS CALL
ISFAC
SKP
JMP GOTFAC
CLA
TAD I OFP /OTHER FAC > 999 TRY NEXT PAL.
TAD MAXFAC
SMA CLA
JMP MAIN
ISZ FAC
JMP FACLUP
GOTFAC,
CLA
TAD I OFP
TAD MAXFAC
SMA CLA
JMP MAIN
JMS CRLF
CLA
TAD FAC
CIA
JMS CALL
PRAC
JMS CALL
PRDOT
CLA
TAD I OFP
JMS CALL
PRAC
JMS CRLF
JMS CALL
DLOAD
DPAL
JMS CALL
PRDACC
JMS CRLF
HLT
DECIMAL
SEED, -999
MAXFAC, -999
OCTAL
FAC, 0
OFP, OTHFAC+1
*400
/MAKE A PALINDROMIC NUMBER FROM A SEED.
/ABC->ABCCBA IN DECIMAL IN DACC AND STORED IN DPAL
MKPAL, 0
DCA DPAL+1
DCA DPAL
TAD DPAL+1
JMS CALL
DIV
K10
DCA WRK
TAD REM
DCA DIGS
TAD WRK
JMS CALL
DIV
K10
DCA DIGS+2
TAD REM
DCA DIGS+1
JMS CALL
DLOAD
DPAL
TAD K1000
JMS CALL
DMUL
JMS CALL
DSTORE
DPAL
CLA
TAD DIGS
JMS CALL
MUL
K10
TAD DIGS+1
JMS CALL
MUL
K10
TAD DIGS+2
DCA DWRK+1
DCA DWRK
JMS CALL
DLOAD
DPAL
JMS CALL
DADD
DWRK
JMS CALL
DSTORE
DPAL
JMP I MKPAL
/SKIP IF AC IS A FACTOR OF DACC. AC=0
ISFAC, 0
DCA DFAC+1
DCA DFAC
JMS CALL
DDIV
DFAC
JMS CALL
DSTORE
OTHFAC
JMS CALL
DLOAD
DREM
JMS CALL
DSKEQ
D0
SKP
ISZ ISFAC
JMP I ISFAC
DFAC, 0
0
OTHFAC, 0
0
OCTAL
DPAL, 0
0
DIGS, 0
0
0
WRK, 0
DWRK, 0
0
// PZERO FOR EULER
*PZERO
DECIMAL
K100, 100
K1000, 1000
K10, 10
OCTAL
PZERO = .
~
*1000
/DMATHLIB
/DLOAD - LOAD ARG INTO DACC, AC=0
DLOAD, 0
CLA
TAD I DLOAD
ISZ DLOAD
DCA DARGP
TAD I DARGP
DCA DACC
ISZ DARGP
TAD I DARGP
DCA DACC+1
JMP I DLOAD
/DOUBLE PRECISION STORE ACCUMULATOR POINTED TO BY ARG
DSTORE, 0
CLA
TAD I DSTORE
DCA DARGP
ISZ DSTORE
TAD DACC
DCA I DARGP
ISZ DARGP
TAD DACC+1
DCA I DARGP
JMP I DSTORE
/SKIP IF DOUBLE PRECISION ARGUMENT IS EQUAL TO DACC. AC=0
DSKEQ, 0
CLA
TAD I DSKEQ
DCA DARGP
ISZ DSKEQ
TAD DACC
CIA
TAD I DARGP
SZA CLA
JMP I DSKEQ
ISZ DARGP
TAD DACC+1
CIA
TAD I DARGP
SNA CLA
ISZ DSKEQ
JMP I DSKEQ
/DOUBLE PRECISION ADD ARGUMENT TO DACC. AC=0
DADD, 0
CLA CLL
TAD I DADD
ISZ DADD
DCA DARGP
TAD DARGP
IAC
DCA DARGP2
TAD I DARGP2
TAD DACC+1
DCA DACC+1
RAL
TAD I DARGP
TAD DACC
DCA DACC
JMP I DADD
/COMPLEMENT AND INCREMENT DACC
DCIA, 0
CLA CLL
TAD DACC+1
CMA IAC
DCA DACC+1
TAD DACC
CMA
SZL
IAC
DCA DACC
JMP I DCIA
/MULTIPY DACC BY AC
DMUL, 0
CIA
DCA PLIERD
JMS DSTORE
DCAND
JMS DLOAD
D0
TAD PLIERD
SNA CLA
JMP I DMUL
DMUL1, JMS DADD
DCAND
ISZ PLIERD
JMP DMUL1
JMP I DMUL
PLIERD, 0
DCAND, 0
0
/DIV DACC BY DARG (AWFUL) R IN DREM AC=0
DDIV, 0
CLA
TAD I DDIV
ISZ DDIV
DCA .+4
JMS DSTORE
DVDEND
JMS DLOAD
0
JMS DCIA /NEGATE DIVISOR
JMS DSTORE
DVSOR
JMS DLOAD
DVDEND
DCA DQUOT
DCA DQUOT+1
JMP DDIV1
DDIV2, ISZ DQUOT+1 // INCREMENT DQUOT
SKP
ISZ DQUOT
DDIV1, JMS DSTORE
DREM
JMS DADD
DVSOR
TAD DACC
SMA CLA
JMP DDIV2
JMS DLOAD
DQUOT
JMP I DDIV
DARGP, 0
DARGP2, 0
DVSOR, 0
0
DVDEND, 0
0
DQUOT, 0
0
/PAGE ZERO DATA FOR DMATHLIB
*PZERO
DACC, 0
0
D0, 0
0
DREM, 0
0
PZERO=.
~
/SINGLE PRECISION MATH LIBRARY
*2000
/DIVIDE AC BY ARGP (SLOW AND NAIVE)
/Q IN AC, R IN REM
DIV, 0
DCA REM
TAD I DIV
ISZ DIV
DCA ARGP
TAD I ARGP
CIA
DCA MDVSOR
DCA QUOTNT
TAD REM
DIVLUP, TAD MDVSOR
SPA
JMP DIVDUN
ISZ QUOTNT
JMP DIVLUP
DIVDUN, CIA
TAD MDVSOR
CIA
DCA REM
TAD QUOTNT
JMP I DIV
MDVSOR, 0
QUOTNT, 0
ARGP, 0
/MULTIPLY AC BY ARGP (SLOW AND NAIVE)
/GIVING SINGLE PRECISION PRODUCT IN AC
MUL, 0
DCA CAND
TAD I MUL
ISZ MUL
DCA ARGP
TAD I ARGP
SNA
JMP I MUL
CIA
DCA PLIER
TAD CAND
ISZ PLIER
JMP .-2
JMP I MUL
CAND, 0
PLIER, 0
/PZERO FOR SMATHLIB
*PZERO
REM, 0
PZERO=.
~
/TTY UTILS
*3000
/PRINT ONE CHAR IN AC. IF CR THEN PRINT LF.
PRTCHAR,0
TSF
JMP .-1
TLS
DCA CH
TAD CH
TAD MCR
SZA
JMP RETCHR
TAD KLF
TSF
JMP .-1
TLS
RETCHR, CLA
TAD CH
JMP I PRTCHAR
CH, 0
MCR, -215
/PRINT AC AS ONE DECIMAL DIGIT AC=0
PRDIG, 0
TAD K260
TSF
JMP .-1
TLS
CLA
JMP I PRDIG
K260, 260
/PRINT THE DACC IN DECIMAL
PRDACC, 0
JMS CALL
DSTORE
DACSV
JMS CALL
DDIV
D1E6
TAD DACC+1
JMS PRDIG
JMS CALL
DLOAD
DREM
JMS CALL
DDIV
D1E5
TAD DACC+1
JMS PRDIG
JMS CALL
DLOAD
DREM
JMS CALL
DDIV
D1E4
TAD DACC+1
JMS PRDIG
JMS CALL
DLOAD
DREM
JMS CALL
DDIV
D1E3
TAD DACC+1
JMS PRDIG
JMS CALL
DLOAD
DREM
JMS CALL
DDIV
D1E2
TAD DACC+1
JMS PRDIG
JMS CALL
DLOAD
DREM
JMS CALL
DDIV
D1E1
TAD DACC+1
JMS PRDIG
JMS CALL
DLOAD
DREM
TAD DACC+1
JMS PRDIG
JMS CALL
DLOAD
DACSV
JMP I PRDACC
DACSV, 0
0
D1E6, 0364
1100
D1E5, 0030
3240
D1E4, 2
3420
D1E3, 0
1750
D1E2, 0
144
D1E1, 0
12
/PRINT AC, AC=AC
PRAC, 0
DCA SAC
TAD SAC
JMS CALL
DIV
D1E3+1
JMS PRDIG
TAD REM
JMS CALL
DIV
D1E2+1
JMS PRDIG
TAD REM
JMS CALL
DIV
D1E1+1
JMS PRDIG
TAD REM
JMS PRDIG
TAD SAC
JMP I PRAC
SAC, 0
/PRINT DOT AC=AC
PRDOT, 0
DCA SAC
TAD KDOT
JMS TYPE
TAD SAC
JMP I PRDOT
/----------------------
/PZERO TEST LIBRARY
*PZERO
TYPE, 0 / AC=0
TSF
JMP .-1
TLS
CLA
JMP I TYPE
CRLF, 0 / AC=0
CLA
TAD KCR
JMS TYPE
TAD KLF
JMS TYPE
JMP I CRLF
/SOUND BELL AND HALT WITH ADDR OF BAD TEST IN AC
ERROR, 0
CLA
TAD KBELL
JMS TYPE
CLA CMA
TAD ERROR
HLT
/PRINT DOT, COUNT ERROR
PASS, 0
CLA
TAD KDOT
JMS TYPE
ISZ TESTS
JMP I PASS
/TESTS COMPLETE, PRINT ZERO AND HALT WITH # OF TESTS IN AC.
TSTDUN,
JMS CRLF
TAD KZERO
JMS TYPE
JMS CRLF
TAD TESTS
HLT
/CALL SUBROUTINE
CALL, 0
DCA AC
TAD I CALL
DCA CALLEE
TAD CALL
IAC
DCA I CALLEE
ISZ CALLEE
TAD AC
JMP I CALLEE
AC, 0
CALLEE, 0
TESTS, 0
KZERO, 260
KBELL, 207
KCR, 215
KLF, 212
KDOT, 256
PZERO=.
~
$
Papert's goal was to teach children about programming. As the years went by the robot got replaced with screens, and the turtle became an icon that could draw lines. Children from the 70s until now have been enthralled by the simple commands for directing the turtle, and the elegant drawings they can make.
For example, this is how you might draw a square:
forward 100
right 90
forward 100
right 90
forward 100
right 90
forward 100
right 90
Recently I had a need to explore some interesting geometrical designs. Turtle graphics would be perfect for my purposes. So I wrote a turtle graphics processor in Clojure. [code]
I used the quil
framework which is based on the Processing
framework in Java. This framework makes it very easy to create simple GUIs in Clojure.
Now consider the problem of the Turtle. What is the type model for this object? What fields does it have, and what constraints must be placed on those fields?
Here was my solution to that problem, written in clojure/spec. As usual, in Clojure, you start at the bottom and read towards the top.
(s/def ::position (s/tuple number? number?))
(s/def ::heading (s/and number? #(<= 0 % 360)))
(s/def ::velocity number?)
(s/def ::distance number?)
(s/def ::omega number?)
(s/def ::angle number?)
(s/def ::weight (s/and pos? number?))
(s/def ::state #{:idle :busy})
(s/def ::pen #{:up :down})
(s/def ::pen-start (s/or :nil nil?
:pos (s/tuple number? number?)))
(s/def ::line-start (s/tuple number? number?))
(s/def ::line-end (s/tuple number? number?))
(s/def ::line (s/keys :req-un [::line-start ::line-end]))
(s/def ::lines (s/coll-of ::line))
(s/def ::visible boolean?)
(s/def ::speed (s/and int? pos?))
(s/def ::turtle (s/keys :req-un [::position
::heading
::velocity
::distance
::omega
::angle
::pen
::weight
::speed
::lines
::visible
::state]
:opt-un [::pen-start]))
Now don't freak out at all the parentheses and colons. In fact, for the moment, just ignore them.
So, what is a turtle? A turtle is a map whose required keys are as follows:
position
is the cartesian coordinate of the pen of the turtle. If you look up towards the top you will see that a position is defined as a tuple containing two numbers.
heading
is the direction that the turtle is pointing. It will move in that direction if told to move forward. Again, looking up towards the top you can see that a heading must be a number between 0 and 360.
velocity
is a number that represents the speed at which the turtle will move across the screen. This is used for animation, so that the user can actually watch the turtle travel across the screen.
distance
is a number that represents the remaining distance that the turtle must traverse before the current command (either a forward
or backwards
command) is complete.
omega
is a number that represents the angular velocity of the turtle. Again, this is for animation purposes, so that the user can watch the turtle rotate when given a right
or left
command.
angle
is a number that represents the number of degrees remaining to complete the current rotation command.
pen
is the state of the pen. Looking up you can see that the state of the pen can be either up or down.
weight
is a positive number that represents the thickness of the line drawn by the pen.
speed
is a positive integer that acts as a multiplier for both the velocity
and omega
parameters. This allows the user to speed up or slow down the animation.
lines
is a list of all the lines drawn by the turtle so far. Looking up you can see that it is a collection of lines, and that lines are maps whose required keys are line-start
and line-end
, both of which are tuples of two numbers. (Yes, I suppose I should have created a point type.)
visible
is a boolean that determines whether the turtle itself should be visible while it is being animated. If this is false, then all the user sees is the animated result of the turtle's movements.
state
is either busy or idle. This is used by the command processor. When the turtle goes from busy to idle the next command is pulled from the command queue and executed.
It should be clear that this is a type model. Most statically typed languages would not be able to capture all the constraints within this type model; though there are perhaps some that could. However, this is not a static type model. Clojure is not a statically typed language. clojure/spec
is a dynamic type definition language.
What does that mean? Probably the best way to explain that is to show you where that type model gets invoked. Here's a simple example.
(defn make []
{:post [(s/assert ::turtle %)]}
{:position [0.0 0.0]
:heading 0.0
:velocity 0.0
:distance 0.0
:omega 0.0
:angle 0.0
:pen :up
:weight 1
:speed 5
:visible true
:lines []
:state :idle})
This is the default constructor of the turtle. Notice that it just loads up all the required fields into a map. Notice also that there is a post condition that asserts that the result conforms to the turtle type.
This is nice. If I forget to initialize a field, or if I initialize a field to a value that conflicts with the type, I get an error.
Here's another, more complex example. Don't freak out, you don't have to understand this in detail.
(defn update-turtle [turtle]
{:post [(s/assert ::turtle %)]}
(if (= :idle (:state turtle))
turtle
(let [{:keys [distance
state
angle
lines
position
pen
pen-start] :as turtle}
(-> turtle
(update-position)
(update-heading))
done? (and (zero? distance)
(zero? angle))
state (if done? :idle state)
lines (if (and done? (= pen :down))
(conj lines (make-line turtle))
lines)
pen-start (if (and done? (= pen :down))
position
pen-start)]
(assoc turtle :state state :lines lines :pen-start pen-start)))
)
This is the function that updates the turtle for each screen refresh. Again, notice the post condition. If anything is calculated incorrectly by the update-turtle
function, I'll get an exception right away.
Now some of you might be worried that by checking types at runtime I could end up with runtime errors in production. You might therefore assert that static typing is better because the compiler checks the types long before the program ever executes.
However, I do not intend to have runtime errors in production, because I have a suite of tests that exercise all the behaviors of the system. Here's just one of those tests:
(describe "Turtle Update"
(with turtle (-> (t/make) (t/position [1.0 1.0]) (t/heading 1.0)))
(context "position update"
(it "holds position when there's no velocity"
(let [turtle (-> @turtle (t/velocity 0.0) (t/state :idle))
new-turtle (t/update-turtle turtle)]
(should= turtle new-turtle)))
Again, you don't have to understand this in any detail. Just notice that the make
and update-turtle
functions are being invoked. Since those functions have post conditions that will check the types, and since my suite of tests is exhaustive, I am quite certain that there will be no runtime errors in production and that my dynamic type checking is as robust as any static type system.
The dynamic nature of the type checking allows me to assert type constraints that are very difficult, if not impossible, to assert at compile time. I can, for example, assert complex relationships between the values of the fields.
To expand on that example, imagine the type model of an accounting balance sheet. The sum of the assets, liabilities and equities on the balance sheet must be zero. This is easy to assert in clojure/spec
but is difficult, if not impossible, to assert in most statically typed languages.
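For what it's worth, here is a sketch of that balance-sheet invariant as a runtime check in Java (the BalanceSheet name and field layout are mine, invented for illustration). The point stands: the check can only run at construction time, because no mainstream static type system can express "these three fields sum to zero" as a type.

```java
public class BalanceSheet {
    // Signed amounts, in cents, following the sum-to-zero convention above.
    private final long assets, liabilities, equity;

    public BalanceSheet(long assets, long liabilities, long equity) {
        // The accounting invariant: the books must balance.
        if (assets + liabilities + equity != 0)
            throw new IllegalArgumentException("balance sheet does not sum to zero");
        this.assets = assets;
        this.liabilities = liabilities;
        this.equity = equity;
    }

    public long assets() { return assets; }
}
```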
Moreover, I get to control when types are asserted. It is up to me to decide if and when a certain type should be checked. This gives me a lot of power and flexibility. It allows me to violate the type rules in the midst of computations, so long as the end result ends up conforming to the types.
One last point. In the late 90s and the 2000s, there was a lengthy and animated (and sometimes acrimonious) debate over TDD vs DBC (Design by Contract). What clojure/spec
has taught me is that the two play very well together, and both should be in every programmer's toolkit.
The first electronic computer I ever wrote a program for was an ECP-18 in 1966. This was a 15 bit wide machine with 1024 words of drum memory. The programs I wrote were all in binary machine language and were entered through the front-panel switches.
In the years between 1967 and 1969 my father would drive my friend, Tim Conrad, and me 25 miles to the Digital Equipment Corp sales office, where we would spend our Saturdays entering programs into the PDP-8 that they had on the floor. They were very gracious to allow us such access and freedom. The code we wrote was in PAL-D assembler (which was written by Ed Yourdon when he was 21 years old).
My very first job as a programmer was temporary. A matter of two weeks. I was 17, and the year was 1969. My father went to the CEO of a nearby insurance actuarial firm, ASC Tabulating, and in his inimitable fashion, told them that they would be hiring me for a summer job. He had a way of being very convincing.
The program I wrote for ASC was named IDSET. It was written in Honeywell H200 assembler (the language was called Easycoder and was based on IBM 1401 Autocoder). The purpose was to read student records from a magnetic tape and insert ID codes into those records, and then write them out onto a new tape. With some coaching, I was able to get that program to work.
Upon graduating High School, in 1971, I got a job at ASC again; but this time as a third-shift off-line printer operator. We were printing junk mail, which was a brand new thing back then.
A few months later I was hired as a full-time programmer analyst at ASC, and was assigned to work on a huge rewrite of a massive accounting and records system for the Local 705 Truckers' union in Chicago. The existing system ran on a great big GE Datanet 30. ASC wanted to reimplement it on a Varian 620F mini-computer.
The 620F was a lovely little 16 bit computer with 32K of core memory and a 1us cycle time. The primary IO devices were a teletype, a slow card reader, two magnetic tape drives, and two 2314 20MB disks. The machine also had 16 (or was it 32) RS232 ports for talking to teletypes that were remotely connected through 300BPS modems.
Although the 620F came with a stand-alone assembler, there was no operating system. So every bit of that real time union accounting system was built from assembler code, with no frameworks, platforms, or operating systems to help.
In 1973 I took a job at Chicago Laser Systems, programming a PDP-8-like machine, in assembler, to control pulsed lasers, galvonometer driven mirrors, and step-and-repeat tables to trim electronic components to high degrees of tolerance.
In 1975 I took a job at Outboard Marine Corporation, programming a real time aluminum die cast system in IBM System 7 assembler.
In 1977 I took a job at Teradyne Central, programming a PDP-8-like machine, in assembler (again), to control a distributed system for testing and monitoring the quality of all the telephone lines in a telephone company service area. A year later we started using 8085 micro-computers and wrote all that code in assembler too.
Suffice it to say that I was steeped in assembler, and thought that all high-level languages were a joke. My forays into COBOL, Fortran, and PL/1 did not convince me otherwise. Real programmers programmed in assembler.
Between 1977 and 1980 I was introduced to Pascal. I rejected it as a viable language almost immediately. I found the type system far too constraining, and didn't trust all the magic behind the scenes.
In 1980 I read a copy of Kernighan and Ritchie, and for the first time I began to see that a high-level language could possibly be an appropriate engineering language. I spent many years writing in that wonderful language which, by the way, was as untyped as assembler.
Oh, that's not to say that C didn't have declared types. It's just that the compiler didn't bother to check that you were using those types properly. This made the language untyped for all intents and purposes.
In 1986, after several nightmare scenarios having to do with the typelessness of C, I was an enthusiastic early adopter of C++. Unfortunately I could not get my hands on a C++ compiler until 1987. I became quite an expert in the language, and engaged in many (many (many)) arguments on comp.lang.c++ and comp.object (in those heady days of USENET, a very early social networking platform).
C++ is a statically typed language. Many, today, would consider it to be relatively weakly typed; but from my point of view, after a decade and a half of untyped languages, I thought the type enforcement was very strong. I had overcome the feeling of being handcuffed by a strong type system and became quite adept at building type models.
In 1990 I took a contracting job at Rational, working in C++ on the first release of Rational Rose. This is where I met Grady Booch, and came up with the plan for my first book.
By 1991 I was a consultant, selling my services to companies, all over the US and Europe, who wanted to learn about object-oriented programming and C++. It was a lucrative affair for me, and I continued building that business for several years. Eventually I became the editor-in-chief of The C++ Report (does anybody remember print magazines?)
In 1999 I realized that C++ was a waning technology, and that the action was really happening in Java. Java was similar enough to C++ for me to make the transition with relative ease. The type system of Java was a bit weaker than C++'s, and I refused to use the stronger features (like final
though I had been an avid consumer of const
in C++).
By 2003 I had grown tired of Java's static type system and started playing around with Python. I found the language to be primitive and somewhat haphazard; so after a few excursions with the language I switched to Ruby.
In Ruby I found a home for several years. The dynamic type system was robust. The object-oriented facilities were well thought through and very easy to use. It was an elegant language with very few warts.
Then, in 2010 or so, I bumped into Clojure. I had just recently read The Structure and Interpretation of Computer Programs and so was interested in playing around with a LISP derivative.
It has been 11 years now, and I feel no urge to change languages. I reckon that Clojure may be my last programming language. Oh, not that I haven't looked around. I've had some dalliances with Golang, Elixir, and Kotlin, and have looked with trepidation at Haskell. I've even played with Scala and F#. I keep looking as new languages arise; but have found nothing that calls me to switch away from Clojure.
Notice the pathway of my career. I went from untyped languages like assembler and C, to statically typed languages like C++ and Java, to dynamically typed languages like Python and Ruby, and now to Clojure.
The type system in Clojure is as dynamic as Python or Ruby, but there is a library in Clojure called clojure/spec
that provides all the strong typing anyone would ever need. However, instead of that typing being controlled by the compiler, it is controlled by me. I can enforce simple types, or very complex data relationships. You might think of it as a kind of pre-condition/post-condition language. Eiffel programmers would feel very much at home with it. It's an almost perfect way to engage in Design by Contract.
So what do I conclude from this? Not much other than that static typing is not for me. I prefer the flexibility of dynamic typing, and the ability to enforce types if, and when, I need such enforcement.
I tweeted my answer in the following cryptic paragraph.
Place the if/else cases in a factory object that creates a polymorphic object for each variant. Create the factory in ‘main’ and pass it into your app. That will ensure that the if/else chain occurs only once.
Others have since asked me for an example. Twitter is not the best medium for that so…
Firstly, if the sole intent of the programmer is to translate:
0->'male',
1->'female'
otherwise -> 'unknown'
…then his refactoring #2 would be my preference.
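If that really is the whole story, the translation can live in a single small table, defined in exactly one place. A minimal sketch in Java (the GenderLabel name is mine; I don't know what the original refactoring #2 actually looked like):

```java
import java.util.Map;

// The whole 0/1/otherwise decision, stated once, as data.
public class GenderLabel {
    private static final Map<Integer, String> LABELS =
        Map.of(0, "male", 1, "female");

    public static String labelFor(int code) {
        return LABELS.getOrDefault(code, "unknown");
    }
}
```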
However, I have a hard time believing that the business rules of the system are not using that gender code for making policy decisions. My fear is that the if/else/switch
chain that the author was asking about is replicated in many more places within the code. Some of those if/else/switch
statements might switch on the integer, and others might switch on the string. It’s not inconceivable that you’d find an if/else/switch
that used an integer in one case and a string in the next!
The proliferation of if/else/switch
statements is a common problem in software systems. The fact that they are replicated in many places is problematic because when such statements are inevitably changed, it is easy to miss some. This leads to fragile systems.
But there is a worse problem with if/else/switch
statements. It’s the dependency structure.
Such statements tend to have cases that point outwards towards lower level modules. This often means that the module containing the if/else/switch
will have source code dependencies upon those lower level modules.
That’s bad enough. We don’t like dependencies that run from high level modules to low level modules. They thwart our desire to create architectures that are made up of independently deployable components.
However, the above diagram shows that it’s worse than that. Other higher level modules tend to depend on the modules that contain those if/else/switch
statements. Those higher level modules, therefore, have transitive dependencies upon the lower level modules. This turns the if/else/switch
statements into dependency magnets that reach across large swathes of the system source code, binding the system into a tight monolithic architecture without a flexible component structure.
The solution to this problem is to break those outwards dependencies on the lower level modules. This can be done with simple polymorphism.
In the diagram above you can see the high level modules using a base class interface that polymorphically deploys to the low level details. With a little thought you should be able to see that this is behaviorally identical to the if/else/switch
but with a twist. The decision about which case to follow must have been made before those high level policy modules invoked the base class interface.
We’ll come back to when that decision is made in a moment. For now, just look at the direction of the dependencies. There is no longer any transitive source code dependency from the high level modules to the low level modules. We could easily create a component boundary that separates them. We could even deploy the high level modules independently from the low level modules. This makes for a pleasantly flexible architecture.
Another point to consider is that the if/else/switch
and the polymorphic implementations both use table lookups to do their work. In the case of an if/else
the table lookup is procedural. In the case of a switch
most compilers build a little lookup table. In the case of the polymorphic dispatch the vector table is built into the base class interface. So all three have very similar runtime and memory characteristics. One is not much faster than another.
So where does the decision get made? The decision is made when the instance of the base class is created. Hopefully that creation happens in a nice safe place like main
. Usually we manage that with a simple factory class.
In the diagram above you can see the high level module uses the base class to do its work. Every business rule that would once have depended on an if/else/switch
statement now has its own particular method to call in the base class. When a business rule calls that method, it will deploy down to the proper low level module. The low level module is created by the Factory
. The high level module invokes the make(x)
method of the Factory
passing some kind of token x
that represents the decision. The FactoryImpl
contains the sole if/else/switch
statement, which creates the appropriate instance and passes it back to the high level module which then invokes it.
Note, again, the direction of the dependencies. See that red line? That’s a nice convenient component boundary. All dependencies cross it pointing towards the higher level modules.
Be careful with that token x
. Don’t try to make it an enum
or anything that requires a declaration above the red line. An integer, or a string is a better choice. It may not be type safe. Indeed, it cannot be type safe. But it will allow you to preserve the component structure of your architecture.
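Pulling those pieces together, here is a minimal sketch of the whole arrangement in Java (all of the names, Gender, GenderFactory, App, and so on, are invented for illustration):

```java
// Above the boundary: the abstractions the business rules use.
interface Gender {
    String label();                 // one method per business rule that once switched
}

interface GenderFactory {
    Gender make(int code);          // the token is a plain int; no shared enum needed
}

// Below the boundary: the concrete variants.
class Male implements Gender { public String label() { return "male"; } }
class Female implements Gender { public String label() { return "female"; } }
class Unknown implements Gender { public String label() { return "unknown"; } }

// The sole remaining switch, hidden inside the factory implementation.
class GenderFactoryImpl implements GenderFactory {
    public Gender make(int code) {
        switch (code) {
            case 0: return new Male();
            case 1: return new Female();
            default: return new Unknown();
        }
    }
}

public class App {
    private final GenderFactory factory;
    App(GenderFactory factory) { this.factory = factory; }   // injected from main

    String describe(int code) {
        return factory.make(code).label();   // no if/else/switch up here
    }

    public static void main(String[] args) {
        // The one place that knows the concrete factory.
        App app = new App(new GenderFactoryImpl());
        System.out.println(app.describe(0));
    }
}
```

The only module that names the concrete variants is GenderFactoryImpl, which main creates and hands to the app; everything above the boundary sees nothing but the two interfaces and a plain int token.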
You may well be concerned about a different matter. That base class needs a method for every business rule that once depended upon the if/else/switch
decision. As more of those business rules appear, you’ll have to add more methods to the base class. And since many business rules already depend upon the base class they’ll have to be recompiled/redeployed even though nothing they care about changed.
There are many ways to resolve that problem. I could keep this blog going for another 2,000 words or so describing them. To avoid that I suggest you look up The Interface Segregation Principle and the Acyclic Visitor pattern.
Anyway, isn’t it fascinating how interesting a discussion of a simple if/else/switch
can be?
Deep problems, that require much heavy thinking, do not often lend themselves to pairing. The interaction between the programmers tends to disrupt the necessary concentration.
On the other hand, it is not uncommon for programmers to get caught in a problem that they think is deep, but for which there is a much simpler solution that another programmer could quickly see. So it is wise to start deep problems with a pair, or even a mob, but then break it up when it becomes clear that the problem is irreducible.
At the other end of the spectrum, there is no good reason to pair on trivial matters. Fleshing out a list of error messages, or loading fifty fields into a form, are relatively mindless activities that do not require the scrutiny afforded by pairing.
Then there is the vast middle. This is where pairing/mobbing are most valuable. These are problems that are non-trivial, but also not particularly deep. This is 90% of all programming. Pairing on this type of code keeps that code well tested, well structured, and as simple as possible.
Pairing should always be voluntary, never be forced, never be scheduled by a manager, and never tracked. It is an informal process that is entirely under the control of the individual programmers.
Some people can’t, or won’t do it. That’s ok; but it may require that their participation in certain projects be curtailed.
Pairing sessions should be short-ish: 20-40 minutes at a time (tomato sized), with no more than three or four consecutive sessions of that length. This is not a rule, just an informal guideline.
Not all code that would benefit from pairing, should be written by pairs. A mature team might pair 50% of the time, or even less. During the pairing sessions, a large amount of code will be reviewed; far more than the pair is actively writing; and thus the benefits of pairing will be seen in very large swathes of non-paired code.
Bottom line: Don’t be a jerk. Pair sometimes, don’t pair other times. Pair enough so that you have a good grasp of the overall system, and know enough of what your teammates are doing that you could step into their roles if the need arose. Don’t pair so much that you hate your job, and your teammates.
For years knowledge of the SOLID principles has been a standard part of our recruiting procedure. Candidates were expected to have a good working knowledge of these principles. Lately, however, one of our managers, who doesn’t code much anymore, has questioned whether that is wise. His points were that the Open-Closed principle isn’t very important anymore because most of the code we write isn’t contained in large monoliths and making changes to small microservices is safe and easy. The Liskov Substitution Principle is long out of date because we don’t focus on inheritance nearly as much as we did 20 years ago. I think we should consider Dan North’s position on SOLID – “Just write simple code.”
I wrote the following letter in response:
The SOLID principles remain as relevant today as they were in the 90s (and indeed before that). This is because software hasn’t changed all that much in all those years — and that is because software hasn’t changed all that much since 1945 when Turing wrote the first lines of code for an electronic computer. Software is still if
statements, while
loops, and assignment statements — Sequence, Selection, and Iteration.
Every new generation likes to think that their world is vastly different from the generation before. Every new generation is wrong about that; which is something that every new generation learns once the next new generation comes along to tell them how much everything has changed. <grin>
So let’s walk through the principles, one by one.
SRP) The Single Responsibility Principle.
Gather together the things that change for the same reasons. Separate things that change for different reasons.
It is hard to imagine that this principle is not relevant in software. We do not mix business rules with GUI code. We do not mix SQL queries with communications protocols. We keep code that is changed for different reasons separate so that changes to one part do not break other parts. We make sure that modules that change for different reasons do not have dependencies that tangle them.
Microservices do not solve this problem. You can create a tangled microservice, or a tangled set of microservices if you mix code that changes for different reasons.
Dan North’s slides completely miss the point on this, and convince me that he did not understand the principle at all. (Or that he was being ironic, which, knowing Dan, is far more likely.) His answer to the SRP is to “Write Simple Code”. I agree. The SRP is one of the ways we keep the code simple.
OCP) The Open-Closed Principle.
A Module should be open for extension but closed for modification.
Of all the principles, the idea that anyone would question this one fills me full of dread for the future of our industry. Of course we want to create modules that can be extended without modifying them. Can you imagine working in a system that did not have device independence, where writing to a disk file was fundamentally different than writing to a printer, or a screen, or a pipe? Do we want to see if
statement scattered through our code to deal with all the little details?
Or… Do we want to separate abstract concepts from detailed concepts. Do we want to keep business rules isolated from the nasty little details of the GUI, and the micro-service communications protocols, and the arbitrary behaviors of the database? Of course we do!
Again, Dan’s slide gets this completely wrong. When requirements change only part of the existing code is wrong. Much of the existing code is still right. And we want to make sure that we don’t have to change the right code just to make the wrong code work again. Dan’s answer is “write simple code”. Again, I agree. And, ironically, he is right. Simple code is both open and closed.
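To make the device-independence example concrete, here is a hedged little sketch (Device, Report, and the two implementations are names I made up): the Report module is closed against modification, yet open to any new device.

```java
interface Device {
    void write(String s);  // the abstraction the high-level code is written against
}

class ScreenDevice implements Device {
    public void write(String s) { System.out.print(s); }
}

// A new destination is an extension; no existing module is modified.
class BufferDevice implements Device {
    final StringBuilder buffer = new StringBuilder();
    public void write(String s) { buffer.append(s); }
}

public class Report {
    // The high-level policy never mentions disks, printers, or screens.
    static void print(Device out) {
        out.write("TOTAL: 42\n");
    }
}
```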
LSP) The Liskov Substitution Principle.
A program that uses an interface must not be confused by an implementation of that interface.
People (including me) have made the mistake that this is about inheritance. It is not. It is about sub-typing. All implementations of interfaces are subtypes of an interface. All duck-types are subtypes of an implied interface. And, every user of the base interface, whether declared or implied, must agree on the meaning of that interface. If an implementation confuses the user of the base type, then if/switch
statements will proliferate.
This principle is about keeping abstractions crisp and well-defined. It is impossible to believe that this is an outmoded concept.
Dan’s slides are entirely correct on this topic; he simply missed the point of the principle. Simple code is code that maintains crisp subtype relationships.
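The textbook illustration of an implementation that confuses its users is the square/rectangle pair; a sketch (mine, not from Dan's slides):

```java
class Rectangle {
    protected int w, h;
    void setWidth(int w) { this.w = w; }
    void setHeight(int h) { this.h = h; }
    int area() { return w * h; }
}

// A Square IS-A Rectangle mathematically, but not behaviorally:
// setting one side silently changes the other.
class Square extends Rectangle {
    void setWidth(int w) { this.w = w; this.h = w; }
    void setHeight(int h) { this.w = h; this.h = h; }
}

public class LspDemo {
    // The user of the base type assumes width and height vary independently.
    static int areaAfterResize(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.area();   // any honest Rectangle returns 20
    }
}
```

Passing a Square to areaAfterResize yields 16, not 20: the subtype has confused the user of the base type, and if/switch statements soon follow.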
ISP) The Interface Segregation Principle.
Keep interfaces small so that users don’t end up depending on things they don’t need.
We still work with compiled languages. We still depend upon modification dates to determine which modules should be recompiled and redeployed. So long as this is true we will have to face the problem that when module A depends on module B at compile time, but not at run time, then changes to module B will force recompilation and redeployment of module A.
This issue is especially acute in statically typed languages like Java, C#, C++, GO, Swift, etc. Dynamically typed languages are affected much less; but are still not immune. The existence of Maven and Leiningen is proof of that.
Dan’s slide on this topic is provably false. Clients do depend on methods they don’t call, if they have to be recompiled and redeployed when one of those methods is modified. Dan’s final point on this principle is fine, so far as it goes. Yes, if you can split a class with two interfaces into two separate classes, then it is a good idea to do so (SRP). But such separation is often not feasible, nor even desirable.
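A tiny sketch of the segregation itself (all names invented): clients of Printer are untouched, and need not be recompiled, when Scanner changes.

```java
// A fat interface forces every client to depend on both concerns:
interface Machine { void print(String doc); void scan(); }

// Segregated: each client depends only on what it actually calls.
interface Printer { void print(String doc); }
interface Scanner { void scan(); }

class SimplePrinter implements Printer {
    final StringBuilder out = new StringBuilder();
    public void print(String doc) { out.append(doc); }
}

public class IspDemo {
    // This client is recompiled only when Printer changes, never when Scanner does.
    static void report(Printer p) { p.print("hello"); }
}
```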
DIP) The Dependency Inversion Principle.
Depend in the direction of abstraction. High level modules should not depend upon low level details.
It is hard to imagine an architecture that does not make significant use of this principle. We do not want our high level business rules depending upon low level details. I hope that is perfectly obvious. We do not want the computations that make money for us polluted with SQL, or low level validations, or formatting issues. We want isolation of the high level abstractions from the low level details. That separation is achieved by carefully managing the dependencies within the system so that all source code dependencies, especially those that cross architectural boundaries, point towards high level abstractions, not low level details.
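A minimal sketch of that dependency direction (Payroll and PayrollGateway are invented names): the high-level computation owns the abstraction, and the low-level detail points up at it.

```java
// High level: owns the abstraction it depends upon.
interface PayrollGateway {
    int hoursWorked(String employeeId);
}

public class Payroll {
    private final PayrollGateway gateway;
    public Payroll(PayrollGateway gateway) { this.gateway = gateway; }

    // The money-making computation: no SQL in sight.
    public int pay(String employeeId, int hourlyRate) {
        return gateway.hoursWorked(employeeId) * hourlyRate;
    }
}

// Low level: a detail that points upward at the abstraction.
// (An in-memory stand-in for whatever SQL query would live here.)
class InMemoryGateway implements PayrollGateway {
    public int hoursWorked(String employeeId) { return 40; }
}
```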
In every case Dan’s slides end with: Just write simple code. This is good advice. However, if the years have taught us anything it is that simplicity requires disciplines guided by principles. It is those principles that define simplicity. It is those disciplines that constrain the programmers to produce code that leans towards simplicity.
The best way to make a complicated mess is to tell everyone to “just be simple” and give them no further guidance.
The code below is the standard solution to the Prime Factors Kata.
public List<Integer> factorsOf(int n) {
ArrayList<Integer> factors = new ArrayList<>();
for (int d = 2; n > 1; d++)
for (; n % d == 0; n /= d)
factors.add(d);
return factors;
}
However, I was doing this kata in Clojure the other day and I wound up with a different solution. It looked like this:
(defn prime-factors [n]
(loop [n n d 2 factors []]
(if (> n 1)
(if (zero? (mod n d))
(recur (/ n d) d (conj factors d))
(recur n (inc d) factors))
factors)))
The algorithm is pretty much the same. I mean if you tracked the value of n
, d
, and factors
they would go through the same changes. On the other hand the code in Java is a doubly nested loop; but the code in Clojure is a single recursive loop with two recursion points. That’s interesting.
I could write the recursive algorithm in Java like this:
private List<Integer> factorsOf(int n) {
return factorsOf(n, 2, new ArrayList<Integer>());
}
private List<Integer> factorsOf(int n, int d, List<Integer> factors) {
if (n>1) {
if (n%d == 0) {
factors.add(d);
return factorsOf(n/d, d, factors);
} else {
return factorsOf(n, d+1, factors);
}
}
return factors;
}
And then, since this is tail recursive, I could rewrite it as a straight loop.
private List<Integer> factorsOf(int n, int d, List<Integer> factors) {
while (true) {
if (n > 1) {
if (n % d == 0) {
factors.add(d);
n /= d;
} else {
d++;
}
} else
return factors;
}
}
For all intents and purposes this code executes the same algorithm as the standard solution; but it does not have a doubly nested loop. We have transformed the code from a doubly nested loop, to a single loop, without affecting the algorithm.
Is this always possible?
In other words: given a program with a nested loop, is there a way to write the same program with a single loop?
The answer to that is: Yes.
The fact that a bit of code executes within an inner loop could be encoded into a state variable. The outer loop could then dispatch to that bit of code depending upon how that state variable is set.
We see that in the code above. The state condition for the inner loop is n%d==0. Indeed, I can extract that out as an explanatory variable to make my point clearer. I can also extract n>1.
private List<Integer> factorsOf(int n, int d, List<Integer> factors) {
while (true) {
boolean factorsRemain = n > 1;
boolean currentDivisorIsFactor = n % d == 0;
if (factorsRemain) {
if (currentDivisorIsFactor) {
factors.add(d);
n /= d;
} else {
d++;
}
} else
return factors;
}
}
Now all the looping decisions are made at the very top; and the if statements simply dispatch the flow of control to the right bits of code.
That nested if is a bit annoying. Let’s replace all that nesting with appropriate logic.
private List<Integer> factorsOf(int n, int d, List<Integer> factors) {
while (true) {
boolean factorsRemain = n > 1;
boolean currentDivisorIsFactor = n % d == 0;
if (factorsRemain && currentDivisorIsFactor) {
factors.add(d);
n /= d;
}
if (factorsRemain && !currentDivisorIsFactor)
d++;
if (!factorsRemain)
return factors;
}
}
Now we have a nice outer loop that fully determines the execution path up front, and then selects the appropriate paths with a sequence of if statements with no else clauses.
Indeed, we can improve upon this just a little bit more by using more explanatory variables to explicitly name those paths.
private List<Integer> factorsOf(int n, int d, List<Integer> factors) {
while (true) {
boolean factorsRemain = n > 1;
boolean currentDivisorIsFactor = n % d == 0;
boolean factorOutCurrentDivisor = factorsRemain &&
currentDivisorIsFactor;
boolean tryNextDivisor = factorsRemain && !currentDivisorIsFactor;
boolean allDone = !factorsRemain;
if (factorOutCurrentDivisor) {
factors.add(d);
n /= d;
}
if (tryNextDivisor) {
d++;
}
if (allDone)
return factors;
}
}
I think I can make this more interesting by using an enum and a switch.
private enum State {Starting, Factoring, Searching, Done}
private List<Integer> factorsOf(int n, int d, List<Integer> factors) {
State state = State.Starting;
while (true) {
boolean factorsRemain = n > 1;
boolean currentDivisorIsFactor = n % d == 0;
if (factorsRemain && currentDivisorIsFactor)
state = State.Factoring;
if (factorsRemain && !currentDivisorIsFactor)
state = State.Searching;
if (!factorsRemain)
state = State.Done;
switch (state) {
case Factoring:
factors.add(d);
n /= d;
break;
case Searching:
d++;
break;
case Done:
return factors;
}
}
}
Now let’s move the determination of the next state into each case.
private List<Integer> factorsOf(int n, int d, List<Integer> factors) {
State state = State.Starting;
while (true) {
switch (state) {
case Starting:
if (n == 1)
state = State.Done;
else if (n % d == 0)
state = State.Factoring;
else
state = State.Searching;
break;
case Factoring:
factors.add(d);
n /= d;
if (n == 1)
state = State.Done;
else if (n % d != 0)
state = State.Searching;
break;
case Searching:
d++;
if (n == 1)
state = State.Done;
else if (n % d == 0)
state = State.Factoring;
break;
case Done:
return factors;
}
}
}
Ugh. I think we can improve upon this by moving a few things around and getting rid of those explanatory variables.
private List<Integer> factorsOf(int n, int d, List<Integer> factors) {
State state = State.Starting;
while (true) {
switch (state) {
case Starting:
break;
case Factoring:
factors.add(d);
n /= d;
break;
case Searching:
d++;
break;
case Done:
return factors;
}
if (n == 1)
state = State.Done;
else if (n % d == 0)
state = State.Factoring;
else
state = State.Searching;
}
}
OK, so now the whole thing has been changed into a Moore-model finite state machine. The state transition diagram looks like this.
If you look closely you can see the nested loops in that diagram. They are the two transitions on the Searching and Factoring states that stay in the same state. You can also see how the two loops interconnect through the transitions between the Searching and Factoring states. The Starting state simply accepts n from the outside world, initializes d and factors, and then dispatches to one of the other three states as appropriate. The Done state simply returns the factors list.
This is how Alan Turing envisioned computation in his 1936 paper, which you can read about in Charles Petzold’s wonderful book: The Annotated Turing.
So, we’ve gone from a nice doubly nested loop in Java to a Turing style finite state machine simply through a sequence of refactorings, each of which kept all the tests passing. This transformation from a standard procedure to a Turing style finite state machine could be done on any program at all.
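As a sanity check on that claim, here is a small harness of my own (not from the original article) that runs the standard nested-loop version and the final FSM version side by side and confirms they agree:

```java
import java.util.ArrayList;
import java.util.List;

public class FactorsCheck {
    // The standard doubly nested loop solution.
    static List<Integer> nested(int n) {
        List<Integer> factors = new ArrayList<>();
        for (int d = 2; n > 1; d++)
            for (; n % d == 0; n /= d)
                factors.add(d);
        return factors;
    }

    enum State {Starting, Factoring, Searching, Done}

    // The single-loop finite state machine version from the refactoring above.
    static List<Integer> fsm(int n) {
        int d = 2;
        List<Integer> factors = new ArrayList<>();
        State state = State.Starting;
        while (true) {
            switch (state) {
                case Starting: break;
                case Factoring: factors.add(d); n /= d; break;
                case Searching: d++; break;
                case Done: return factors;
            }
            if (n == 1) state = State.Done;
            else if (n % d == 0) state = State.Factoring;
            else state = State.Searching;
        }
    }

    public static void main(String[] args) {
        // Compare the two implementations over a range of inputs.
        for (int n = 2; n <= 1000; n++)
            if (!nested(n).equals(fsm(n)))
                throw new AssertionError("mismatch at " + n);
        System.out.println("all equal");
    }
}
```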
Now let’s go back to the two bits of code that started all this. The Java version:
public List<Integer> factorsOf(int n) {
ArrayList<Integer> factors = new ArrayList<>();
for (int d = 2; n > 1; d++)
for (; n % d == 0; n /= d)
factors.add(d);
return factors;
}
And the Clojure version:
(defn prime-factors [n]
(loop [n n d 2 factors []]
(if (> n 1)
(if (zero? (mod n d))
(recur (/ n d) d (conj factors d))
(recur n (inc d) factors))
factors)))
The finite state machine is entirely hidden in the Java version, isn’t it? It’s very difficult to see it peeking out from those nested for loops. But that state machine is much more obvious in the Clojure program. The state is determined by the two if forms, and the appropriate code is executed for each state.
If you can’t see that FSM in the Clojure code, then consider this simple refactoring which makes it even more evident:
(defn factors [n]
(loop [n n d 2 fs []]
(cond
(and (not= n 1) (zero? (mod n d))) (recur (/ n d) d (conj fs d))
(and (not= n 1) (not (zero? (mod n d)))) (recur n (inc d) fs)
(= n 1) fs)))
Why should this be? Why should the Clojure program look more like the FSM than the Java program? The answer is simple. The Java program can save some state information within the flow of control, because it can mutate variables while the loops are in progress. The Clojure program cannot save any state within the flow of control, because no variables can be mutated at all. Those state changes are only noticed when the recursive loop is re-entered.
Thus, functional programs tend to look much more like Finite State Machines than programs that are free to manipulate variables.
One last thought. The Java program that implemented the Finite State Machine had only one loop; and that loop was while (true). That means the loop knew nothing at all about the algorithm it was looping over. Thus we can abstract it away from the program itself and envision a language that has no loops at all. No while statements, no for loops, no if statements, and (of course) no gotos. Programs in this language would be written in the FSM style. They would be composed of switch statements that switch on boolean expressions identifying each state. The language system would then simply execute that program, over and over, until told to stop.
Such programs would be naturally functional. For although they could mutate the state of variables, the mutated state would be irrelevant to the flow of control within the program, and could only affect the next iteration of the program. In effect the program would look like a tail-call-optimized recursive function.
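To make that idea concrete, here is a hypothetical sketch of such a “language system” in Java. The names (FsmRunner, run) are my inventions for illustration: the only loop lives in the generic runner, and the “program” is just a state-dispatch step executed over and over until it says to stop.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Sketch: the "language system" is a runner that executes one step of an
// FSM-style program repeatedly until the step reports it is done.
public class FsmRunner {
    // A step returns true to keep running, false to halt.
    static void run(Supplier<Boolean> step) {
        boolean running = true;
        while (running)           // the only loop lives here, not in the program
            running = step.get();
    }

    // The prime-factors "program": mutable state, but the mutations only
    // matter on the next iteration of the runner.
    static int n, d;
    static List<Integer> factors;

    public static void main(String[] args) {
        n = 360; d = 2; factors = new ArrayList<>();
        run(() -> {
            boolean done      = n == 1;
            boolean factoring = !done && n % d == 0;
            boolean searching = !done && !factoring;
            if (factoring) { factors.add(d); n /= d; }
            if (searching) d++;
            return !done;
        });
        System.out.println(factors);  // [2, 2, 2, 3, 3, 5]
    }
}
```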
Wait… Did I miss the exit?
Since then I have seen the other side of the coin. Codes of conduct have been used as weapons to exclude people on the basis of their political opinions, or on the basis of their associations, or just because someone didn’t like them. I have written blogs about this as well. (1), (2)
As much as I think that codes of conduct are a good idea, we must not allow them to be weaponized. If we are going to set up rules with consequences, then we also need to set up the due processes by which those rules and consequences are adjudicated. Otherwise the people who police the codes of conduct will be free of the checks and balances that protect conference attendees and speakers from unfair and malicious actions. As we have seen, such malicious and unfair actions have become all too common.
It seems to me that if a conference is going to publish a code of conduct, like the one below, they must also publish the process by which alleged violations will be adjudicated. That process must include provisions for the accused to be able to defend themselves against the allegation, and must also allow the accused to know the identity of the accuser(s). Otherwise all conference attendees and speakers will be exposed to malicious and falsified complaints with no recourse to defend themselves.
The conference I was disinvited from is over. I was ejected because code of conduct complaints were registered against me by three relatively minor speakers in quick succession. I do not know if those speakers acted in concert. Nor am I certain of the identities of those speakers (though I have a good idea). What I do know is that three or four weeks before the conference was to begin those speakers threatened to withdraw from the conference if I were allowed to speak.
From what I have been able to discern, the conference organizers conducted an investigation. I was not a party to this investigation, indeed I was unaware that it was taking place. I was not notified about the complaints, nor was I given the opportunity to speak in my own defense. The conference organizers simply judged me based upon the complaints and whatever they could discover for themselves. I am quite certain that due diligence was not a requirement of the investigation.
Given that they were volunteers, and that losing three speakers one month before the conference is a considerable blow, it’s not hard to imagine that the conference organizers were under a fair bit of pressure to resolve the issue quickly and salvage as many speakers as possible. What’s more, the conference had already extracted as much value as it could from my image being emblazoned on their website and on the mailers they sent out two days before the start of the conference. So the decision to eject me must have been pretty easy.
What was the code of conduct violation? Apparently it related to something on twitter. I have read the code of conduct and the only potential violation I can see falls under the following rule.
Any form of written, social media, or verbal communication that can be offensive or harassing to any attendee, speaker or staff is not allowed at Chicago Cloud Conference.
That’s quite a standard. I don’t think any of us could withstand it. We’ve all said or written things that have offended, or could offend someone. I’ve had people get offended about my definition of monads. I’ve had people get upset with me about the SOLID principles, or my position on TDD, or my criticisms of statically typed languages. Some people may even have been offended by my infrequent comments about current politics.
As written, this rule means that anybody can complain about anything you might have said or written, at any time in the past. The only qualification for violation is that someone finds it offensive.
What’s more, since there is no published process of adjudication, you may well find that if a complaint is made against you, you will not be able to defend yourself, in any way. An individual, or a small group of people, whom you do not know, will vote in secret, without your knowledge, and without your input. If they decide against you, you will be ejected from the conference, without refund, and without recourse.
In short this means that if someone doesn’t like you, they can get you kicked out – and there’s nothing you can do about it. In my case three speakers apparently didn’t like something I said on twitter. So they extorted the conference organizers who bowed under the weight of that extortion and disinvited me without giving me the opportunity to address the complaints.
My solution to this is simple:
From now on I will not agree to attend, nor will I agree to speak at, any conference that publishes a code of conduct but does not have a published process for adjudicating code of conduct complaints. That process must include a means for those accused of a violation to defend themselves from the malicious actions of others, and must allow them to know who their accusers are.
I recommend that you all adopt the same policy.
Code Of Conduct
Chicago Cloud Conference is dedicated to providing a harassment-free conference experience for everyone, regardless of gender, sexual orientation, disability, physical appearance, body size, race, or religion. We have a zero-tolerance policy for any harassment of conference participants in any form. Sexual language and imagery is not appropriate for any conference venue, including talks. Conference participants violating these rules may be sanctioned or expelled from the conference without a refund at the discretion of the conference organizers.
Any form of written, social media, or verbal communication that can be offensive or harassing to any attendee, speaker or staff is not allowed at Chicago Cloud Conference. Please inform a Chicago Cloud Conference staff member if you feel a violation has taken place and the conference leadership team will address the situation.
Harassment includes offensive verbal comments related to gender, sexual orientation, disability, physical appearance, body size, race, religion; sexual images in public spaces; deliberate intimidation; stalking; following; harassing photography or recording; sustained disruption of talks or other events; inappropriate physical contact; and unwelcome sexual attention. Participants asked to stop any harassing behavior are expected to comply immediately. Exhibitors in the expo hall, sponsor or vendor booths, or similar activities are also subject to the anti-harassment policy. In particular, exhibitors should not use sexualized images, activities, or other material. Booth staff (including volunteers) should not use sexualized clothing/uniforms/costumes, or otherwise create a sexualized environment.
If a participant engages in harassing behavior, the conference organizers may take any action they deem appropriate, including warning the offender or expulsion from the conference with no refund. If you are being harassed, notice that someone else is being harassed, or have any other concerns, please contact a member of conference staff immediately. Conference staff can be identified by t-shirts and special badges. Conference staff will be happy to help participants contact hotel/venue security or local law enforcement, provide escorts, or otherwise assist those experiencing harassment to feel safe for the duration of the conference. We value your attendance.
We expect participants to follow these rules at all conference venues and conference-related social events.
Chicago Cloud Conference prioritizes marginalized people’s safety over privileged people’s comfort and therefore we will not act on complaints regarding:
‘Reverse’ -isms, including ‘reverse racism,’ ‘reverse sexism,’ and ‘cisphobia’.
Reasonable communication of boundaries, such as “leave me alone,” “go away,” or “I’m not discussing this with you”.
Communicating in a ‘tone’ you don’t find congenial.
Criticizing racist, sexist, cissexist, or otherwise oppressive behavior or assumptions.
What to do when you witness a Code of Conduct violation?
All reports of incidents are confidential! We will not publish the name of the reporter in any way.
Speak up
Of course we do not want you to get into a more uncomfortable position than you may already be in. You do not need to interact with the person(s) who presumably violated the Code of Conduct.
Please let someone on the organizing team know
In every session, you will find one track host (the person introducing the speakers) and at least one crew member (wearing a colorful shirt with the word “crew” on it). All people who are working on Chicago Cloud Conference are very aware of the Code of Conduct. Approach them and let them know. In most cases they will bring you to one of the main organizers, so we can write an incident report: who was involved, what circumstances led to the incident, and when it happened.
Everyone working on Chicago Cloud Conference is informed on how to deal with an incident and how to further proceed with the situation.
The Purpose of the Code of Conduct:
By signaling inclusivity and diversity as values we expect the conference to uphold, the Code of Conduct helps guarantee that the event will indeed be inclusive and embrace diversity.
Anyway, he wrote to me last October (That’s right, a full year ago!) and asked me to give a presentation at a Chicago conference this September 21st. I agreed, and he thanked me, and that was that. Then, in June, he wrote to tell me that the conference was going to be virtual due to Covid. I acknowledged and, once again, that was that.
Last Wednesday, September 9th, twelve days before the conference, he called me on the phone and said:
“This is going to be the most uncomfortable phone call I have ever made.”
He went on to say that the “Code of Conduct” people at the conference were concerned about some of my political opinions, and that some of the speakers at the conference refused to speak if I was going to speak.
Like I said, this guy is a friend of mine, and I don’t want to get him into any trouble, so I decided not to raise a fuss about it, and I promised him I would not mention his name or the name of the conference online. He responded by telling me:
I’m scared to death of these people.
Over the last few days I’ve been mulling this situation over in my mind, and I’ve come to a few interesting conclusions.
OK, we didn’t have a formal written contract, but we had emails. And we also had the fact that, for the better part of a year, the conference website had my picture on it, and advertised me as a speaker. I conclude that the conference organizers derived substantial benefit from those pictures and from promising my virtual presence to their audience. I, on the other hand, was denied the benefit of actually speaking to that audience. Therefore, I am the damaged party.
Could I sue them? Certainly, though I’d have a difficult time quantifying the damages. Had we agreed on a speaking fee, I could at least claim that fee as damages. Next time I do one of these pro-bono events I’ll have the organizers agree to paying a hefty cancellation fee.
Those speakers would not have been harmed by speaking in a virtual conference that I also spoke in. Their intent was to damage me by forcing the conference organizers to breach their contract with me. That is the definition of tortious interference.
Could I sue them? Certainly. I won’t, for the same reason that I’m not going to sue the conference organizers. And, frankly, suing people for such small potatoes just isn’t worth the trouble. But, like I said, next time I do a pro-bono talk I’ll have the conference organizers agree to the value that I’m deriving in return for using my name and likeness on their website. Then I can sue them, and any tortious interferers, for that sum and punitive damages too.
Do I know who those tortiously interfering speakers are? I’ve got a pretty good idea. My fear, of course, is that I do not wish to harm my friend. Nor do I wish to harm the conference organizers, nor the Chicago software community. It seems to me that they are all victims of those revolting speakers.
So, this time, I’ll let the legal options rest. Instead, I’m offering a virtual free talk at 10:00 AM CDT, on September 21st, the first day of the conference. Those who wanted to hear me speak, still can.
The last point I’d like to make is this:
Disinviting someone from a virtual conference who can draw a potentially large audience away from that virtual conference is not a particularly intelligent tactic.
At first I just hand copied the data into a spreadsheet. But that became tedious quite rapidly.
Then, in late March, I wrote a little Clojure program to extract and process the data. Every morning I pull the repo, and then run my little program. It reads the files, does the math, and prints the results.
Of course I used TDD to write this little program.
But over the last several weeks I’ve made quite a few small modifications to the program; and it has grown substantially. In making these adaptations I chose to use a different discipline: REPL Driven Design.
REPL Driven Design is quite popular in Clojure circles. It’s also quite seductive. The idea is that you try some experiments in the REPL to make sure you’ve got the right ideas. Then you write a function in your code using those ideas. Finally, you test that function by invoking it at the REPL.
It turns out that this is a very satisfying way to work. The cycle time – the time between a code experiment and the test at the REPL – is nearly as short as in TDD. This breeds lots of confidence in the solution. It also seems to save the time needed to mock and to create fake data because, at least in my case, I could use real production data in my REPL tests. So, overall, it felt like I was moving faster than I would have with TDD.
But then, in late April, I wanted to do something a little more complicated than usual. It required a design change to my basic structure. And suddenly I found myself full of fear. I had no way to ensure that those design changes wouldn’t leave the system broken in some way. If I made those changes, I’d have to examine every output to make sure that none of them had broken. So I postponed the change until I could muster the courage, and set aside the dedicated time it would require.
The change was not too painful. Clojure is an easy language to work with. But the verification was not trivial, which led me to deploy the program with a small bug – a bug I caught four days later. That bug forced me to go back and correct the data and graphs that I had generated.
Why did I need the design change? Because I was not mocking and creating fake data. My functions just read from the repo files directly. There was no way to pass them fake data. The design change I needed to make was precisely the same as the design change that I’d have needed for mocking and fake data.
Had I stuck with the TDD discipline I would have automatically made that design change, and I would not have faced the fear, the delay, and the error.
Is it ironic that the very design change that TDD would have forced upon me was the design change I eventually needed? Not at all. The decoupling that TDD forces upon us in order to pass isolated inputs and gather isolated outputs is almost always the design that facilitates flexibility and promotes change.
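In Java terms, the seam TDD pushes you toward might look something like this – a sketch with invented names (Report, totalCases), since the real program was Clojure and read Covid data files from a repo. The math is a pure function of its inputs, so a test can hand it fake lines; only a thin shell touches the disk.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

class Report {
    // Pure core: easy to call from a test with fabricated CSV lines.
    static long totalCases(List<String> csvLines) {
        return csvLines.stream()
                .skip(1)                                  // skip the header row
                .mapToLong(l -> Long.parseLong(l.split(",")[1]))
                .sum();
    }

    // Impure shell: the only place that knows about files on disk.
    static long totalCasesFromFile(Path file) throws IOException {
        return totalCases(Files.readAllLines(file));
    }
}
```

With this seam in place, changing where the data comes from never threatens the arithmetic, and the arithmetic can be verified without any repo at all.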
So I’ve learned my lesson. REPL driven development feels easier and faster than TDD; but it is not. Next time, it’s back to TDD for me.
Here are a few common utility functions:
user=> (inc 1) ; increments argument
2
user=> (dec 3) ; decrements argument
2
user=> (empty? []) ; tests for empty
true
user=> (empty? [1 2])
false
If you know Java or C# you probably know what the map function does. Here’s an example: (map inc [1 2 3]) evaluates to (2 3 4).
The first argument of map is a function. The second is a list. The map function returns a new list by applying the function to every element of the input list.
The filter function also takes a function and a list. (filter odd? [1 2 3 4 5]) evaluates to (1 3 5). From that I think you can tell what both the filter and the odd? functions do.
And so with that, let’s try a little challenge. Let’s find all the prime numbers between one and a thousand.
We’ll use a variant of TDD to do this. Our eyes will be the tests. The cycle will be the same size as in normal TDD; but we’ll write a bit of code first and then test it.
I know. Blasphemy! So sue me. ;-)
We begin like this: (defn primes [n] )
This returns nil.
user=> (primes 1000)
nil
Now let’s get all the numbers between 1 and n.
(defn primes [n]
(range 1 (inc n)))
user=> (primes 10)
(1 2 3 4 5 6 7 8 9 10)
You’ve probably figured out what range does. It just returns the list of integers from its first argument up to, but not including, its second.
OK, so now all we have to do is filter all the primes:
(defn primes [n]
(let [candidates (range 1 (inc n))]
(filter prime? candidates)))
CompilerException java.lang.RuntimeException: Unable to resolve symbol: prime? in this context, compiling:(null:3:5)
Oh, oh. We need to implement prime?
(defn prime? [n])
user=> (primes 10)
()
OK, that makes sense. But I should explain the let expression. It allows you to create names that are bound to expressions. The names exist only within the parentheses of the let expression. So it’s a way to create local variables – though the word “variable” is not quite right, because they cannot be reassigned. They are immutable.
Now how do we tell if a given integer n is prime? Well, you all know how to do that, right? The simple and naive way is to divide the integer by every number between 2 and n. But of course that’s wasteful. There’s a better upper limit to try, which is the square root of n. I’m sure you can work out why that’s true.
(defn prime? [n]
(let [sqrt (Math/sqrt n)]
sqrt))
user=> (prime? 100)
10.0
OK, that’s right. Notice that we called the Java Math.sqrt function. That’s a good example of how Clojure can call down into the Java libraries. Of course we don’t want prime? to return a number; we want it to return a boolean. But for now it’s good to see the intermediate values of our computation.
So, next we’d like to get all the integers between 2 and the square root. We already know how to do that.
(defn prime? [n]
(let [sqrt (Math/sqrt n)
divisors (range 2 (inc sqrt))]
divisors))
user=> (prime? 100)
(2 3 4 5 6 7 8 9 10)
Now which of the divisors divide n evenly? We can find out by using the map function.
(defn prime? [n]
(let [sqrt (Math/sqrt n)
divisors (range 2 (inc sqrt))
remainders (map (fn [x] (rem n x)) divisors)]
remainders))
user=> (prime? 100)
(0 1 0 0 4 2 4 1 0)
The rem function should be self-explanatory; it just returns the integer remainder of the division of n by x. The (fn [x] ...) business needs a little explanation. Notice how similar it is to (defn f [x] ...)? This is how we create an anonymous function. If you know the syntax in Java or C# for anonymous functions, then this shouldn’t be too much of a surprise to you. Anyway, the remainders list is just the list of all the remainders that result from dividing n by the divisors.
Now some of those remainders were zero, and that means those divisors divided n evenly. Therefore n (100 in this case) is not prime. Let’s try a few others.
user=> (prime? 17)
(1 2 1 2)
user=> (prime? 1001)
(1 2 1 1 5 0 1 2 1 0 5 0 7 11 9 15 11 13 1 14 11 12 17 1 13 2 21 15 11 9 9)
user=> (prime? 37)
(1 1 1 2 1 2)
OK, so if the remainders list has a zero in it, then n is not prime. Well, that should be easy, shouldn’t it?
(defn prime? [n]
(let [sqrt (Math/sqrt n)
divisors (range 2 (inc sqrt))
remainders (map (fn [x] (rem n x)) divisors)
zeroes (filter zero? remainders)]
zeroes))
user=> (prime? 100)
(0 0 0 0)
user=> (prime? 17)
()
So now all we need to do is return true if the list is empty. Right?
(defn prime? [n]
(let [sqrt (Math/sqrt n)
divisors (range 2 (inc sqrt))
remainders (map (fn [x] (rem n x)) divisors)
zeroes (filter zero? remainders)]
(empty? zeroes)))
user=> (prime? 100)
false
user=> (prime? 17)
true
user=> (primes 100)
(1 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97)
Now I want you to think carefully about how we solved this problem. No if statements. No while loops. Instead we envisioned lists of data flowing through filters and mappers. The solution was almost more of a fluid dynamics problem than a software problem. (OK, that’s a stretch, but you get my meaning.) Instead of imagining a procedural solution, we imagine a data-flow solution.
Think hard on this – it is one of the keys to functional programming.
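If the Clojure is unfamiliar, the same dataflow shape can be sketched with Java streams – my rough analogue, not part of the original article. Note that it still wrongly treats 1 as prime, just like the Clojure version (though it happens to classify 2 correctly):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Dataflow-style primality: ranges flowing through map and filter stages,
// with no explicit loops or if statements.
public class Primes {
    static boolean isPrime(int n) {
        int sqrt = (int) Math.sqrt(n);
        return IntStream.rangeClosed(2, sqrt)   // candidate divisors
                .map(d -> n % d)                // remainders
                .noneMatch(r -> r == 0);        // prime if none divide evenly
    }

    static List<Integer> primes(int n) {
        return IntStream.rangeClosed(1, n)
                .filter(Primes::isPrime)
                .boxed()
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(primes(30));  // [1, 2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    }
}
```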
(Special thanks to Stu Halloway @stuarthalloway for cluing me into the dataflow mindset way back in 2005)
Oh, and the primes between 1 and 1000?
user=> (primes 1000)
(1 3 5 7 11 13 17 19 23 29 31 37 41 43 47 53 59 61 67 71 73 79 83 89 97 101 103 107 109 113 127 131 137 139 149 151 157 163 167 173 179 181 191 193 197 199 211 223 227 229 233 239 241 251 257 263 269 271 277 281 283 293 307 311 313 317 331 337 347 349 353 359 367 373 379 383 389 397 401 409 419 421 431 433 439 443 449 457 461 463 467 479 487 491 499 503 509 521 523 541 547 557 563 569 571 577 587 593 599 601 607 613 617 619 631 641 643 647 653 659 661 673 677 683 691 701 709 719 727 733 739 743 751 757 761 769 773 787 797 809 811 821 823 827 829 839 853 857 859 863 877 881 883 887 907 911 919 929 937 941 947 953 967 971 977 983 991 997)
And, yes, there is a bug. 1 is not prime. 2 is prime. Can you fix it?
This expression: (1 2) represents the list containing the integers 1 and 2, in that order. If you want an empty list, that’s just (). And the list of the first five letters of the alphabet is just (\a \b \c \d \e).
Now you know a lot about the syntax of Clojure. Perhaps you think there’s a lot missing. Well, there are a few things missing; but far fewer than you’d think.
You might be wondering how you add two numbers. That’s easy; that’s just (+ 1 2). As it happens, that’s also just the list of the function named + followed by a 1 and a 2. You see, a function call is really just a list. The function is the first element of the list, and the arguments are the other elements of that list. When you want to call a function, you simply evaluate the list that represents that function call.
There are quite a few built-in functions in Clojure. For example there are +, -, *, and /. They do precisely what you’d think. Well, perhaps not precisely. (+ 1 2 3) evaluates to 6. (- 3 2 1) evaluates to zero. (* 2 3 4) evaluates to 24. And (/ 20 2 5) evaluates to 2. (- 5) evaluates to -5. (* 5) evaluates to 5. And, get ready for this, (/ 3) evaluates to 1/3. That last is the Clojure syntax for the rational number one-third.
(first '(1 2 3)) evaluates to 1, (second '(1 2 3)) evaluates to 2, and (last '(1 2 3)) evaluates to – you guessed it – 3.
If you’d like to see this in action you’ll need to start up a Clojure REPL. You can google how to do that. The word REPL stands for Read, Evaluate, Print Loop. It’s a very simple program that reads in an expression, evaluates that expression, prints the result, and then loops back to the read.
If you start a REPL you’ll get some kind of a prompt, perhaps like this: user=>. Then you can type an expression and see it evaluated. Here are a few from my REPL:
user=> (+ 1 2 3 4)
10
user=> (- 5 6 7 8)
-16
user=> (* 6 7 8)
336
user=> (/ 5 6 9)
5/54
If you try the expression at the very start of this article, (1 2), you’ll get a nasty surprise.
user=> (1 2)
ClassCastException java.lang.Long cannot be cast to clojure.lang.IFn user$eval1766.invokeStatic (:1)
That’s because the digit 1 is not a function; and the REPL believes that if it reads a list, that list ought to be evaluated as a function call. If you just want the list (1 2) at the REPL, you can convince the REPL not to call the list as a function by quoting it, as follows:
user=> (quote (1 2))
(1 2)
user=> '(1 2)
(1 2)
user=> (list 1 2)
(1 2)
The first invokes the quote function, which prevents its argument (1 2) from being evaluated and just returns it. The second is just a little syntactic shortcut for calling the quote function. The third invokes the function that constructs lists.
Lists are implemented as linked lists. Each element contains a value and points to the next element. That makes it very fast to add an element to the front of the list, or to walk the list one element at a time. But it makes it slow to index into the list to find the Nth element. So, for that, Clojure uses the vector data type. Here is a vector of the first three integers: [1 2 3]. That’s right, it’s the square brackets that do the trick.
A vector is a lot like a growable array. It’s easy to add to the end of it, and it’s easy to index into it. Lists make good stacks; vectors are the better choice whenever you need fast indexed access.
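A small REPL sketch of the difference. The conj function, which adds an element to a collection, puts it wherever that collection can accept it cheaply – the front of a list, the end of a vector:

```clojure
;; conj adds where it is cheap for the collection:
(conj '(1 2 3) 0)   ;; => (0 1 2 3)   front of the list
(conj [1 2 3] 4)    ;; => [1 2 3 4]   end of the vector

;; nth indexes a vector quickly; on a list it must walk from the front.
(nth [10 20 30] 2)  ;; => 30
```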
Now let’s define a function. (defn f [x] (+ (* 3 x) 1))
this defines the function named f
. It takes one argument named x
. And it calculates the formula: 3x+1
.
Now let’s take this apart one token at a time. This looks like a call to a function named defn
. We’ll let that stand for the moment, but it’s not exactly right; defn
is actually a macro – a bit more special than a function. The next token is the name of the function: f
. Names are alphanumeric with a few special characters allowed. For example +++
is a valid name. Following the name is a vector that names the function arguments. Again, these are names. Those names will be bound to the argument values when the function is called. And following the argument vector is the expression that is evaluated by the function. That expression can use the argument names.
You now know the vast majority of Clojure syntax. There’s more, of course, but you already know enough to write significant programs.
So let’s write a simple one. Let’s write the factorial function.
(defn fac [x] (if (<= x 1) 1 (* x (fac (dec x)))))
Let’s walk through this. The function is named fac
and it takes one argument named x
. Note the <=
where you might have expected =
: it is a small guard so that a call like (fac 0) doesn’t recurse forever. if
is a special form that takes three expressions. If the first evaluates to something truthy it returns the value of the second, otherwise the value of the third. So if x
is 1 (or less), the if
, and therefore the function, will return 1. Otherwise the if
will return x
times the factorial of the decrement of x
.
Let’s try it:
user=> (fac 3)
6
user=> (fac 4)
24
user=> (fac 10)
3628800
user=> (fac 20)
2432902008176640000
user=> (fac 30)
ArithmeticException integer overflow clojure.lang.Numbers.throwIntOverflow (Numbers.java:1501)
That works nicely, until we exceed 64 bits of precision. Clojure uses 64-bit integers for efficiency. But if you’d rather have unlimited precision you can use the N
suffix, which makes an integer literal a BigInt.
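A quick sketch of the N suffix at the REPL. The fac below is the same factorial as above (written with <= as a small guard so that values below 1 don’t recurse forever); passing a BigInt argument makes every intermediate product a BigInt:

```clojure
;; (* 1000000000000 1000000000000) would throw an integer-overflow
;; ArithmeticException; one BigInt literal promotes the whole computation:
(* 1000000000000N 1000000000000)  ;; => 1000000000000000000000000N

(defn fac [x] (if (<= x 1) 1 (* x (fac (dec x)))))
(fac 25N)  ;; => 15511210043330985984000000N
```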
user=> (fac 1000N)
40238726007709377354370243392300398571937486421071463254379991042993851239862902059204420848696940480047998861019719605863166687299480855890132382966994459099742450408707375991882362772718873251977950595099527612087497546249704360141827809464649629105639388743788648733711918104582578364784997701247663288983595573543251318532395846307555740911426241747434934755342864657661166779739666882029120737914385371958824980812686783837455973174613608537953452422158659320192809087829730843139284440328123155861103697680135730421616874760967587134831202547858932076716913244842623613141250878020800026168315102734182797770478463586817016436502415369139828126481021309276124489635992870511496497541990934222156683257208082133318611681155361583654698404670897560290095053761647584772842188967964624494516076535340819890138544248798495995331910172335555660213945039973628075013783761530712776192684903435262520001588853514733161170210396817592151090778801939317811419454525722386554146106289218796022383897147608850627686296714667469756291123408243920816015378088989396451826324367161676217916890977991190375403127462228998800519544441428201218736174599264295658174662830295557029902432415318161721046583203678690611726015878352075151628422554026517048330422614397428693306169089796848259012545832716822645806652676995865268227280707578139185817888965220816434834482599326604336766017699961283186078838615027946595513115655203609398818061213855860030143569452722420634463179746059468257310379008402443243846565724501440282188525247093519062092902313649327349756551395872055965422874977401141334696271542284586237738753823048386568897646192738381490014076731044664025989949022222176590433990188601856652648506179970235619389701786004081188972991831102117122984590164192106888438712185564612496079872290851929681937238864261483965738229112312502418664935314397013742853192664987533721894069428143411852015801412334482801505139969429015348307764456909907315243327828826986460278986432113908350621709500259738986355
4277196742822248757586765752344220207573630569498825087968928162753848863396909959826280956121450994871701244516461260379029309120889086942028510640182154399457156805941872748998094254742173582401063677404595741785160829230135358081840096996372524230560855903700624271243416909004153690105933983835777939410970027753472000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000N
OK, one last thing. Let’s add up all the numbers in a list. We want (sum [1 2 3 4 5])
to evaluate to 15
. First we’ll do it the hard way:
(defn sum [l] (if (empty? l) 0 (+ (first l) (sum (rest l)))))
The empty?
function does just what you’d think: it returns true if the list is empty. The rest
function returns all but the first element of a list.
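Here is the same sum laid out across several lines, with a sketch of how the recursion unwinds on a short list:

```clojure
(defn sum [l]
  (if (empty? l)
    0
    (+ (first l) (sum (rest l)))))

;; (sum '(1 2 3)) expands step by step:
;;   (+ 1 (sum '(2 3)))
;;   (+ 1 (+ 2 (sum '(3))))
;;   (+ 1 (+ 2 (+ 3 (sum '()))))
;;   (+ 1 (+ 2 (+ 3 0)))  ;; => 6
```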
Of course we could have written sum
like this: (defn sum [l] (apply + l))
. The apply
function – um – applies the function passed as its first argument to the list in its second.
We could also have written the function like this: (defn sum [l] (reduce + l))
. But that takes us to the reduce
function which (as George Carlin used to say) might go a bit too far. At least for this article.
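For the curious, here is the shortest possible peek at reduce – it folds a function across a sequence, carrying a running accumulator – offered as a teaser rather than a full explanation:

```clojure
;; Without an initial value, reduce starts from the first two elements:
(reduce + [1 2 3 4 5])      ;; => 15, computed as ((((1 + 2) + 3) + 4) + 5)

;; With an initial value, that value seeds the accumulator:
(reduce + 100 [1 2 3 4 5])  ;; => 115
```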
Dad, can you help me with my school report?
Sure son. What’s it about?
We have to do it on the great pandemic of 2020. You were there, right?
I was just a little boy. But I know a lot about it. What is it you need to know?
We’re supposed to write about the heroes.
Ah, yes. A good topic. There were so many.
OK, so… Who were they?
Well, first of all there were the healthcare workers. Day after day, week after week, they kept on working in those hospitals full of very sick people. Many of them got sick too; and quite a few of them died.
They must have been brave.
They were. Very. They were as brave as any soldier going to war. Perhaps braver, because you couldn’t see the enemy, and in those days you couldn’t fight it.
We can fight it now, can’t we Dad.
Yes son. Now we can. We have vaccines and treatments. Nobody dies of COVID-19 anymore. But back then we didn’t have vaccines or treatments. We just had nurses and doctors who tried their very best to save as many people as possible.
So they were the heroes?
Yes. But there were many more. There were the people who worked in grocery stores.
I thought everybody stayed home to work.
Many of us did. We were the lucky ones. But the people who worked in those stores had to go to work every day. People needed food; and so grocery stores needed to stay open. And the people who worked in those stores had to help hundreds, maybe even thousands of people every day. They took huge risks to keep those stores open.
Wow, I hadn’t thought of that.
And then there were the delivery people. The people who drove trucks of food to the stores and trucks of products to people’s homes. The people who worked for Amazon, and UPS, and FedEx, and the US Mail.
Who else, Dad? Who else?
Well, look son, there were so many. The police, the firemen, the sailors and soldiers, the air traffic controllers, the garbage men, the repairmen. Even though most people weren’t working, the essential parts of our civilization had to be kept running. And then there were just the everyday people who followed the rules and kept themselves at home for so many weeks. It was a huge effort that everyone had to play a part in.
But…
But what, Dad?
Well, there was one group of people who don’t often get mentioned; but without them the Pandemic would have been a hundred times worse than it was.
Really? Who?
The programmers.
Dad… You’re a programmer aren’t you?
Yes son, I am. Just like my mother – your gramma – before me. She was one of the ones who worked during the Pandemic.
Was gramma a hero Dad?
No more than anyone else, son. She worked from home. She wore masks, and kept the necessary social distance from others. I was just a little boy, but I remember those masks and how much we had to stay at home. Most programmers did just what Gramma did too. They worked from home.
So then why were they heroes, Dad? It sounds to me like they just did what everybody else did.
Well, son, think of this. It was the programmers who made it possible for people to work from home; because it was the programmers who built the software that made the internet possible. You see, this was the first full scale national emergency during which people had instant access to the news, to the government, and to each other. When the President, and the Governors told people to shelter at home, almost everybody knew about it within minutes or hours. The news was sent to their computers, to their phones, and to their watches. Not only that, but people who were stuck at home could still talk to each other using Facebook and Twitter and Facetime. People could order products on Amazon, and on so many other on-line shopping networks. People could even order food from restaurants to be delivered or picked up. Without the programmers who made those systems, people would have had a much harder time sheltering at home; and the pandemic would have been much worse.
So the programmers weren’t brave, like the doctors and nurses and police were brave. They weren’t heroes like that.
No, not like that. But without them, without the tools they created, so many more people would have died. For example, did you know that the genetic code of the virus was sequenced long before the pandemic spread? It was that RNA sequence that allowed our researchers to get a head start on the vaccines that eventually killed off the virus and saved so many people. It was programmers who built the software that ran in those RNA sequencers. Without those programmers, the vaccines might have come much too late.
Wow! What else, Dad? What else?
Well, you know that there was a time when people used paper money, right? Imagine how easily the virus would have spread if people paid for groceries or gasoline with paper money! But it was programmers who built the systems that allowed people to pay with credit cards, or by just waving their phone or watch over readers. They didn’t even have to touch anything! The virus couldn’t spread that way.
And then there was so much entertainment piped right into people’s homes. Netflix, and Amazon Prime, and Youtube, and.. Well the options were endless back then.
So people could work from home, shop from home, be entertained at home, and hardly ever had to leave their homes. And all that was because of the software written by programmers.
And that saved us, didn’t it Dad?
Well, son, it certainly played a pretty important part.
Are you glad you’re a programmer Dad?
It’s an important job, Son. I never want to be anything else. Except, of course, your Dad.
From: Robert Martin (@unclebobmartin) (unclebob@cleancoder.com)
Re: Code of Conduct case of Charles Max Wood.
Dear Linux Foundation:
I am writing to you as a concerned member of the software development community which I have enjoyed serving for the last 50 years. I am writing in public because the events I wish to describe took place in public. I fear that something has gone terribly wrong within your organization; and that it will have deep repercussions within this industry that I cherish.
The timeline of events, as far as I can determine them, is as follows:
The Linux Foundation received a public tweet sent to the @KubeCon twitter address. That tweet recommended that Kube Con discontinue their association with Charles Max Wood. The reasons given in this complaint were his request for an open and civil phone call, and a picture of Mr. Wood wearing a MAGA hat.
The Linux Foundation publicly replied from the @linuxfoundation twitter account as follows:
Hi all, We have reviewed social and videos and determined that the Event Code of Conduct was violated and his registration to the event has been revoked. Our events should and will be a safe space.
First let me say that I find it highly problematic that the complaint and the decision were public. Indeed I am surprised that LF would accept a publicly submitted code of conduct complaint. I am much more than surprised that LF would ever consider publicly responding to such a complaint. Indeed, it seems to me that the public complaint, and perhaps even the public response by LF, could be seen as public harassment – which is explicitly prohibited by the LF Code of Conduct.
It seems to me that Code of Conduct complaints made in public must be immediately rejected and viewed as Code of Conduct violations in and of themselves. Code of Conduct complaints should be submitted in private and remain private and confidential in order to prevent their use as a means of harassment. It also seems to me that while the process of accepting, reviewing, and adjudicating such complaints should be public, the proceedings and decision of each individual case should remain private and confidential in order to protect the parties from harm. Making them a public showcase is, simply, horrible.
Was the Code of Conduct actually violated by Mr. Wood? I have watched the videos in question and read the tweets and I can find no instance where Mr Wood violated the LF Code of Conduct. I understand that LF can make any decision they like about what constitutes a Code of Conduct violation. However, when both the complaint and the response are so blatantly public, it seems to me that LF owes it to the observing community to explain their decision and describe the due process that was used to make it – including the decision to make the public response that undoubtedly caused harm to Mr. Wood. To date no such explanation has been forthcoming, despite repeated requests.
The software community needs to understand how decisions like this are going to be made. Otherwise those of us who have watched this case may be forced to conclude that LF has no internal process, that no due diligence will be applied to Code of Conduct complaints and determinations, that the accused will have no rights either of appeal or privacy, that LF feels free to make its decisions based on the blowing of political winds, and will loudly announce their decisions regardless of the harm it might cause.
Therefore I have the following questions:
Why was the initial complaint accepted and acknowledged in public? It was clearly political in nature, and very clearly intended to cause harm to Mr. Wood.
Is it LF policy to accept complaints that, in and of themselves, violate the LF Code of Conduct?
Why was the Code of Conduct determination announced publicly, despite the harm it would obviously cause to Mr. Wood?
Can LF specifically justify the determination that Mr. Wood violated the Code of Conduct?
Does LF have a documented process by which Code of Conduct complaints are to be submitted, reviewed, and adjudicated?
Is it LF policy to consider political affiliation, or support of certain public officials, as Code of Conduct violations?
Is it LF policy to publicly denounce registrants who have been determined to have violated the LF Code of Conduct?
Does LF have a Code of Conduct for how it conducts itself?
In summary, it appears to this humble observer that The Code of Conduct process at The Linux Foundation went very badly off the rails with regard to Charles Max Wood. That LF owes Mr. Wood, and the Software Community at large, a profound apology. That LF should keep all future Code of Conduct complaints and decisions personal and confidential. That LF should publish and follow a well defined process for accepting, reviewing, and adjudicating future Code of Conduct complaints. And that some form of reparation be provided to Mr. Wood for the public harm that was done to him by the careless and unprofessional behavior of The Linux Foundation.
Yours
Robert C. Martin.
It’s important to remember that prior to 1946 there were no programmers, that computers themselves were virtually unknown until the late ’50s. That virtually nobody lived next door to a programmer back then.
Nowadays virtually everyone in the western world, and even in much of the developing world, is surrounded by computers. And while programming remains a mystery to many, programmers are common neighbors.
So let’s scan the last six decades and watch as the culture changes its view of just who we are and what we do.
It’s best to begin at the beginning. The first truly classic Science Fiction movie. Forbidden Planet. If you haven’t seen it, you are missing something profound and spectacular. I urge you to watch – even study – it.
There are no explicit mentions of computers or programmers in this movie. The concept was simply not something that the public could relate to. However there was a machine. A very big machine. And the implication was that it was intelligent, but not sentient.
In the movie the anti-hero Dr Morbius is marooned on the uninhabited world of Altair 4. He discovers an ancient alien machine. Two decades later rescuers arrive. He shows them the machine and states: “I have reason to believe that years ago a minor alteration was performed throughout the entire 8000 cubic miles of its own fabric.”
The programmers of that big machine are long dead; but they are described as belonging to a highly evolved and benevolent alien race.
There is another machine on this planet. It is a robot named Robby.
Robby is clearly intelligent and sentient. Robby speaks English, with the inflection of a proper British butler, rather in the manner of Carson on Downton Abbey. Dr. Morbius claims to have created the Robot; so he is clearly the programmer.
Morbius is studious, austere, even dour. He is not evil; but he is a hermit and does not particularly enjoy the company of others. He is massively intelligent but quite anti-social.
Now remember that this was the ’50s. Missiles and A-bombs. Scientists had a particular stereotype in those days, and Dr. Morbius is consistent with that; though with a hint of Captain Nemo.
Yes, I’m going backwards two years, but only to say that I did not forget this movie. I just don’t count it as significant. This was a movie made for kids, and the semi-intelligent robot is much more like Lassie than Robby. The creator of Tobor is an eminent scientist who also fits the mold of the ’50s.
With one exception, we learn very little about the programmers in Star Trek. The computer, however, is fascinating. The computer was voiced by Majel Barrett, Gene Roddenberry’s wife. She also played Nurse Chapel, and “Number One” in the pilot. She played the computer as utterly flat. The voice was monotone. The information was factual. The computer never offered an opinion, or an emotion of any kind. The computer was nothing more than a tool.
The exception was the episode entitled The Ultimate Computer in which a new intelligent computer was hooked up to the enterprise. The creator (and implicitly the programmer) of this machine was Dr. Daystrom. Both he and the computer have a simultaneous nervous breakdown, and Kirk has to “pull the plug”.
The implication is that programmers are so intelligent and driven that they eventually lose emotional stability.
This is one of the first instances of the computer acting as the villain.
HAL 9000 is the villain of this story. We know little of the programmer, Dr. Chandra, except that he taught the computer a song.
Note that during this era it is the computer that is the character. The programmers, if mentioned at all, are ancillary.
Another movie in which the computer is the hyperintelligent villain. The programmer is a scientist from the Dr. Morbius mold.
The computers are among the main characters and are essentially a race of slaves. We never meet the programmers, but it’s clear that they must be morally bankrupt.
Hero programmer defeats evil computer. This is the first time we see the programmer as a good guy who defeats the computer. The movie is also a foreshadow of The Matrix because the main character gets transported into the computer. As a programmer (though they call him a “user”) he has powers.
The hero programmer is a world famous scientist and businessman. He does not live next door.
The computer is again a character, though this time an innocent dupe. A young boy meets the programmer and psychoanalyses him in order to convince the computer to not destroy the world. The programmer is depicted as a famous scientist who is emotionally damaged. The computer is depicted as a child-like character who likes to play games.
This one is indirect. Well-meaning humans program the evil computer, Skynet, which then programs the Terminator to kill Sarah Connor. So this is a singularity prediction: the computers program the computer.
One interesting aspect of this film is the depiction of the human-like machine being so utterly focused on its mission. At first you think of the terminator as almost human. But bit by bit that humanity is lost. In the end you see only the machine, half-destroyed, missing legs and all vestiges of human form, still intent on one purpose only.
(Sigh) Johnny Five is a combat robot whose programming gets scrambled by a lightning strike. This makes the robot sentient and purely innocent. Eventually the robot invents its own moral code which is vastly superior to anything human.
So there is no programmer in this case – except nature or God or… And in that case all human flaws are exposed by the purity of the programming.
Cute movie, but very dumb.
This is our first real glimpse of a humanized programmer. Dennis Nedry is not a mad scientist, not a well respected researcher, he’s just a common ordinary programmer. And he is a flawed human. Oh, there’s a bit of the Twinkie eating, basement dwelling stereotype there; but this is the first time the movies show a programmer as someone who might live next door.
The computer is not a character at all. It is just a tool (“a Unix System”).
The main character is a programmer who must use her skills as a programmer to defeat a ruthless plot to frame her for murder and other nefarious things.
This is another case where the programmer could be someone next door.
All the human characters are programmers. They all live next door. But, given the red pill, they are transported to an alternate reality where they can “see” the code. They are engaged in an apocalyptic fight between good and evil. The main character is a type of Jesus.
Note the progression. Over the years the representation of the computer changes from Main Character (Good or evil) to supporting character to tool. The programmer changes from obscurity to mad or damaged scientist, to Nature or Skynet, to the guy next door, to hyper-aware Savior.
What does this say about society’s opinion of us? Does society really think we are the folks who live next door who are simultaneously the hidden saviors?
Well, maybe we don’t want to read too much into things. Note that I stopped this review just prior to the millennium. Have there been any movies since then in which programmers played a significant role?
Actually, I think we have transitioned off the screen and have become part of the movie industry. Virtually no movie made nowadays can be made without massive computer graphics and programming effort. So now they know us intimately. We do live next door. And they don’t need to put us on the screen anymore.
One of the services of 23andMe is that they offer to connect you to relatives who have also used 23andMe. Using this service my wife found a second cousin whom she had never met, but whose extended family had overlapped with hers. By email they were able to compare the names of aunts and uncles, and the towns where they lived. The more they talked, the more they realized that the overlap with the extended families was large.
Some years back I went through the effort of scanning all the old photo albums that we had created or inherited over the years. From that trove of digitized pictures my wife was able to find a 50 year old photograph of that extended family, taken in a little town in Mexico. She shared that photo with her relative who happened to be visiting that town at the moment.
The relative showed the picture to her aunts, uncles, cousins, and they all started pointing to people that they recognized. Many tears flowed as warm recollections were conveyed. This is apparently the only surviving photograph of that extended family; and now they all have, and cherish, it.
Now I want you to consider what made this possible.
Software. It was software that drove the connection of all those people. It was software that enabled the warm tears of recollection to flow. It was software that provided the photo to the folks in that little town in Mexico, who had not seen the faces of their loved ones in 50 years. It was Software. It was you and I – the programmers who built the systems and the connections that made that miracle happen.
Software is the circulatory system of our civilization. Software digests, filters, and sorts the constituents of the information stream. Software routes the necessary element of that stream to the right places. Software is the heart, the lungs, the vessels, the liver and kidneys, and the digestive system of our civilization. Nothing works anymore without software. Our civilization could no longer survive without software.
But software does more than support the survival of our civilization. Software also supports those moments of joy that my wife and her relatives recently experienced.
It is things like this that make me proud to be a programmer. Without us, our civilization could not survive, and the warm connections between relatives and friends could not be made.
It is things like this that also make me yearn for a deeper discipline and professionalism for our industry. Too much depends upon us now. We’re going to have to leave the wild west of software behind and civilize ourselves, so that the civilization we support will continue to prosper.