Sunday, November 22, 2015

Explanatory Development

We consider masters in a craft to know what they do -- but in a very limited sense. If they are really good, they know they do not know much about what they are doing. They are only conscious of certain kinds of things, and they do their best to make use of the means provided to them -- but they'd be fooling themselves if they thought they knew much about how they, or any human beings, actually do what they do. The best they can do is to explain what-and-how well enough that another human might be able to understand it. But what is really going on inside our heads when we do something difficult? The answers are far beyond the reach of current research.

Computer programming is also a craft, but we've built almost no tools to support explanation, because we're not in the habit of thinking of explanation as important. The lack of tools for explaining a program while developing it, to ourselves or anyone else, continues to reinforce our non-explanatory habit. This is discussed occasionally -- small tools pop up regularly -- but no consensus on the importance of explanation has even begun to emerge. This is strange, considering what we know about what we do.

A program has a direct, complex effect upon a complex machine -- an effect that humans spend much time and effort corralling and defining as carefully as they can, so that the resulting operation of the computer tends towards their expectations. Without people, everything about symbols and symbolic manipulation, whether it involves some 'automation' or not, in any of the formal sciences -- logic, mathematics, computer engineering, etc. -- is meaningless. Without people, it's not possible to know whether a program is 'correct', because the measures of 'correctness', the desiderata, let's call them the 'acceptance criteria', remain only in the heads of people.

We make code meaningful to us. The symbols in our programs are simply artifacts, markers and reminders, whose real meaning resides within our brains, or within the brains of some other people. Providing meaning to these symbols is strictly a human experience, and, more importantly, providing meaning to my code is strictly an experience of mine. I may have found a way to make a machine do something I want it to do, but the purpose and meaning of the symbols that have this effect on the machine are only understandable in human terms by another human being if we are part of a team that is somehow sharing this meaning. That is, only if I code with explanation as a primary principle.

Some of the code may be more comprehensible if we're part of a highly restricted and indoctrinated coding community. This can implicitly provide a kind of ersatz explanation, limited in duration to the programming community, or fashion, in question. Such communities and fashions don't last long.

What does endure is a broader explanation, one that keeps human universals in mind. This needs first-class status in my code, must be integrated with it, and must be rewritten continually, to keep my own thoughts straight and to keep my potential readers, colleagues, and users as completely informed as possible.

For example, say that I have some business logic in my program governing which features are available to which types of users. We often call this an 'access control layer' today. But am I making that logic visible to other human beings, such as my support staff, or my testers? How am I inventorying the "features" in my code that users have access to? If, say, I have a webapp that's essentially a dashboard, something often called a 'single-page application' today, how have I identified all the "parts" and "wholes" of this beast? Is all this comprehensible to anyone? Or is it buried in code, so that only I or a handful of people can see what's going on? Instead, I should make an accessible, running guide to the actual live features, and the actual live access layer, in the actual live code, so that I and others can see everything.
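To make that concrete, here is a minimal sketch of what such a running guide might look like. Everything in it -- the role names, the feature ids, the file paths -- is hypothetical, invented purely for illustration. The point is only that the same registry the guide is rendered from also drives the live access check, so the explanation cannot drift away from the behavior it describes.

```typescript
// A sketch of a 'running guide' to features and access rules.
// All names here are hypothetical -- not any particular framework's API.

type Role = "admin" | "analyst" | "viewer";

interface FeatureEntry {
  id: string;            // stable name used throughout the code
  explanation: string;   // what this feature is for, in plain language
  allowedRoles: Role[];  // the live access rule itself, not a copy of it
  implementedIn: string; // where a reader can find the responsible code
}

const featureGuide: FeatureEntry[] = [
  {
    id: "export-report",
    explanation: "Download the quarterly report as a spreadsheet.",
    allowedRoles: ["admin", "analyst"],
    implementedIn: "src/reports/export.ts",
  },
  {
    id: "edit-dashboard-layout",
    explanation: "Rearrange the panels on the dashboard.",
    allowedRoles: ["admin", "analyst", "viewer"],
    implementedIn: "src/dashboard/layout.ts",
  },
];

// The access layer consults the same entries the guide is rendered from.
function canAccess(role: Role, featureId: string): boolean {
  const entry = featureGuide.find((f) => f.id === featureId);
  return entry !== undefined && entry.allowedRoles.includes(role);
}

// Render the guide for support staff, testers, or anyone who asks
// "what can this kind of user actually do, and where is that decided?"
function renderGuide(): string {
  return featureGuide
    .map((f) => `${f.id}: ${f.explanation}\n  who: ${f.allowedRoles.join(", ")}\n  code: ${f.implementedIn}`)
    .join("\n");
}

console.log(canAccess("viewer", "export-report")); // false
console.log(renderGuide());
```

The data structure is beside the point; what matters is the single source. Because the guide is read from the very entries the program enforces, it is always a description of what the system actually does.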

Well, why wouldn't I use this 'guide', whatever it looks like, whatever approach I decide to take, to 'guide' my development? Why wouldn't I take my ideas about the specific system or application, and make those central, through the guide, to its actual development, maintenance, operation, and explanatory documentation, for the sake of myself and everyone else?

Of course this relates to notions in software architecture like an 'oracle' or a 'single source of truth'. But there are two ways I'd like to see this taken much further: 1) the guide should be pervasive and central to everything, from the organization and navigation of the code, to the description of the features, to the purpose of the product; 2) the guide should be geared towards people, including the programmers themselves, in their most humble state, when their most sensitive capacities as human beings are exposed. This should include an appreciation for living structure, beauty, and human limits, with a watchful eye upon our tendency to confuse models for reality.

By 'guide', of course, I'm not advocating any particular 'format'. I only mean any approach that values ideas, explains them, ties them accurately and directly to the relevant code or configuration, allows for code consolidation, and explains abstractions, with an operational "yes, we can find the code responsible for x" attitude towards making the system transparent, and any 'x' comprehensible. This puts a far greater organizing burden on the explanatory structure than you would find in Literate Programming documentation, for example.
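As one possible reading of that attitude -- again a hypothetical sketch, not a prescription of format -- the guide could also index the ideas themselves, in the team's own words, and point from each idea to the code and configuration responsible for it, so that "where is x decided?" always has an answer that can be looked up rather than remembered.

```typescript
// A hypothetical index from ideas, named the way the team talks about them,
// to the code and configuration that realize them.

interface IdeaEntry {
  idea: string;              // the concept, in the author's own words
  explanation: string;       // why it exists and what it is supposed to mean
  responsibleCode: string[]; // files, modules, or config realizing the idea
}

const ideaIndex: IdeaEntry[] = [
  {
    idea: "user access",
    explanation: "Which kinds of users may use which features, and why.",
    responsibleCode: ["src/access/rules.ts", "config/roles.yaml"],
  },
  {
    idea: "dashboard parts and wholes",
    explanation: "How the dashboard is decomposed into panels and layouts.",
    responsibleCode: ["src/dashboard/layout.ts"],
  },
];

// "Yes, we can find the code responsible for x."
function whereIs(x: string): IdeaEntry[] {
  const needle = x.toLowerCase();
  return ideaIndex.filter(
    (e) =>
      e.idea.toLowerCase().includes(needle) ||
      e.explanation.toLowerCase().includes(needle)
  );
}

console.log(whereIs("access")); // points to the access rules and role config
```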

It has nothing to do with using accepted 'definitions', accepted 'best practices', 'patterns', or any other pre-baked ideas or frameworks. It has everything to do with taking your ideas and their explanation, and using them to orient yourself and everyone else to anything in the application. 

Our development environments and platforms need to support this deeply operational explanatory activity. 

Currently, none do.
