Sunday, November 22, 2015

Explanatory Development

We consider masters in a craft to know what they do -- but in a very limited sense. If they are really good, they know they do not know much about what they are doing. They are only conscious of certain kinds of things, and they do their best to make use of the means provided them -- but they'd be fooling themselves if they thought they knew much about how they, or any human beings, actually do what they do. The best they can do is to explain what-and-how well enough so that another human might be able to understand it. But what is really going on inside our heads when we do something difficult? The answers are far beyond the reach of current research.

Computer programming is also a craft, but we've made almost no tools to help explanation, because we're not in the habit of thinking of explanation as important. The lack of tools for explaining a program while developing it, to ourselves or anyone else, continues to reinforce our non-explanatory habit. This is discussed occasionally -- small tools pop up regularly -- but no consensus on the importance of explanation has even begun to emerge. This is strange, considering what we know about what we do.

A program has a direct, complex effect upon a complex machine -- an effect that humans spend much time and effort corralling and defining as carefully as they can, so the resulting computer operation tends towards their expectations. Without people, everything about symbols and symbolic manipulation, involving some 'automation' or not, in any of the formal sciences -- logic, mathematics, computer engineering, etc. -- is meaningless. Without people, it's not possible to know whether a program is 'correct', because the measures of 'correctness', the desiderata, let's call them the 'acceptance criteria', remain only in the heads of people.

We make code meaningful to us. The symbols in our programs are simply artifacts, markers and reminders, whose real meaning resides within our brains, or within the brains of some other people. Providing meaning to these symbols is strictly a human experience, and, more importantly, providing meaning to my code is strictly an experience of mine. I may have found a way to make a machine do something I want it to do, but the purpose and meaning of the symbols that have this effect on the machine are only understandable in human terms by another human being if we are part of a team that is somehow sharing this meaning. That is, only if I code with explanation as a primary principle.

Some of the code may be more comprehensible if we're part of a highly restricted and indoctrinated coding community. This can implicitly provide a kind of ersatz explanation, limited in duration to the programming community, or fashion, in question. These don't last long.

What does endure is a broader explanation, which keeps human universals in mind. This needs a first-class status in my code, must be integrated with it, and re-written continually, to keep my own thoughts straight, and to keep my potential readers, colleagues, and users, as completely informed as possible. 

For example, say that I have some business logic in my program, regarding the access to different features provided to different types of users. We often call this an 'access control layer' today. But am I making that logic visible to other human beings, such as my support staff, or my testers? How am I inventorying the "features" in my code that users have access to? If, say, I have a webapp that's essentially a dashboard, something often called a 'single-page application' today, how have I identified all the "parts" and "wholes" of this beast? Is all this comprehensible to anyone? Or is it buried in code, so only I or a handful of people can see what's going on? Instead, I should make an accessible, running guide to the actual live features, and the actual live access layer, in the actual live code, so that I and others can see everything.
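
To make this concrete -- and this is only a sketch of my own, assuming a small Python webapp, with every name in it (FEATURE_GUIDE, register_feature, can_use) invented for the illustration -- such a running guide could start as a single registry in the live code that names each feature, explains it, points at the code responsible for it, and records who may use it:

    FEATURE_GUIDE = {}  # feature name -> explanation, allowed roles, handler

    def register_feature(name, explanation, allowed_roles):
        """Register a handler as a named, explained, access-controlled feature."""
        def decorator(handler):
            FEATURE_GUIDE[name] = {
                "explanation": explanation,
                "allowed_roles": set(allowed_roles),
                "handler": handler,
            }
            return handler
        return decorator

    @register_feature(
        "export-report",
        "Lets a manager download the monthly activity report as CSV.",
        allowed_roles=["manager", "admin"],
    )
    def export_report(user):
        return "report for %s" % user

    def can_use(feature, role):
        """The access question my support staff and testers need answered, from the live code."""
        return role in FEATURE_GUIDE[feature]["allowed_roles"]

    # Anyone on the team can ask the running system what exists and who may use it.
    for name, entry in FEATURE_GUIDE.items():
        print(name, "--", entry["explanation"], "| roles:", sorted(entry["allowed_roles"]))
    print(can_use("export-report", "support"))   # False

The particular shape doesn't matter; what matters is that the explanation, the access rules, and the running code stay attached to one another, and can be queried by anyone on the team.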

Well, why wouldn't I use this 'guide', whatever it looks like, whatever approach I decide to take, to 'guide' my development? Why wouldn't I take my ideas about the specific system or application, and make those central, through the guide, to its actual development, maintenance, operation, and explanatory documentation, for the sake of myself and everyone else?

Of course this relates to notions in software architecture like an 'oracle' or a 'single source of truth'. But there are two ways I'd like to see this taken much further: 1) the guide should be pervasive and central to everything, from the organization and navigation of the code, to the description of the features, to the purpose of the product; 2) the guide should be geared towards people, including the programmers themselves, in their most humble state, when their most sensitive capacities as human beings are exposed. This should include an appreciation for living structure, beauty, and human limits, with a watchful eye upon our tendency to confuse models for reality.

By 'guide', of course, I'm not advocating any particular 'format'. I only mean any approach that values ideas, explains ideas, ties those ideas accurately and directly to the relevant code or configuration, allows for code consolidation, and explains abstractions, with an operational "yes we can find the code responsible for x" attitude towards making the system transparent, and any 'x' comprehensible. This puts a far greater organizing burden on the explanatory structure than you would find in Literate Programming documentation, for example. 

It has nothing to do with using accepted 'definitions', accepted 'best practices', 'patterns', or any other pre-baked ideas or frameworks. It has everything to do with taking your ideas and their explanation, and using them to orient yourself and everyone else to anything in the application. 

Our development environments and platforms need to support this deeply operational explanatory activity. 

Currently, none do.

Tuesday, October 6, 2015

Our perception of 'analog' and 'digital' in the natural world

The words 'analog' and 'digital' sound rather precise. About a century ago, introspective engineers were inspired by these basic notions to create operational definitions and analytical tools that make use of these two labels. These can still be useful -- not so much in the natural sciences, as I'll explain below, but occasionally for setting goals in an engineering workplace. But these historic and highly-constrained technical terms are something of a distraction for any serious investigation into the human nature of 'analog' and 'digital'. The key to approaching these notions is to remember that 'analog' and 'digital' are ideas, brain-internal entities, universally available to normal human cognition. Like all ideas, they recruit mental faculties that interact in complex, unknown ways. The ways they interact in the brain seem, in this case, to tend towards mutual exclusion.

When the idea of 'digital mechanism' is associated with something in the world, it always entails an understanding that this 'mechanism' exists within an 'analog world'. It reacts in a not-quite-sufficiently-concomitant fashion, evidence for which is representable by discontinuous functions -- in other words, the 'digital' system reacts with 'unusual prominence' after some 'threshold' has been reached. Of course, anything in the natural world can be found to have this 'character', and often human investigators are very keen to determine the expression and causation underlying a subject's 'digital' character. 

At the same time, the same subject of investigation will reveal 'analog' characteristics -- again, when the investigator looks for them. Unfortunately, this is true of perhaps the entire range of human-observed properties of complex systems: something that is 'whole' also seems to have more-or-less 'distinct' 'parts', something that 'acts' also seems to be 'passive', something that 'flows' also appears to be 'static' ... and special training is typically required to regularly resolve these apparent contradictions when taking different conceptual-perceptual approaches. 

One could characterize 'analog' and 'digital' as 'projections' of human cognition upon our limited sensations of the world around us, real and imaginary -- the same cognition and sensation that makes us feel that we know 'the world'. When we call something 'analog' and 'digital', we're mostly looking at ourselves, using mental equipment that, when genetically intact, required stimulation during development to prevent atrophy. 'Analog' and 'digital' are universal, in the sense that, at the least, all humans can conceive of them. They are our biological inheritance.

We can test this by observing the world. What are simple examples of something in nature that is both analog and digital? Consider a system that's near its tipping point -- such as a stick precariously perched on its end, on a fence, in the wind. The 'analog' gradual movement will reach a threshold, and then 'the system', something which we define as observers, will achieve its 'next digital state'.

Another example is a 'loaded' system -- using a slingshot, pull a rock back until it is also in a precarious state, then let go, which is a gradual 'analog' thing from one perspective, but could be seen as a 'digital switch' from another perspective, in which the rock 'discretely' moves from the state of being 'in the slingshot' to being 'out of the slingshot'. Any system, since it is observed by a human, has these properties, but we shouldn't despair -- although these properties appear together, one will typically dominate at one point, and another at another point, and sometimes the simultaneous perspectives are very easy to separate and identify. That is, one perspective can help us to construct a more enlightening analysis than the other ... although these are interdependent properties, so we can't ever totally discount one.
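
To make the observer-dependence concrete, here is a toy simulation of the stick (the numbers, the gusts, and the 45-degree threshold are all invented): the same run can be read as a continuously changing 'analog' angle, or as a single 'digital' fallen-or-standing state, and nothing in the system itself forces either reading on us.

    import random

    random.seed(1)
    angle = 0.0                              # the 'analog' reading: degrees of lean
    for step in range(40):
        angle += random.uniform(0.0, 3.0)    # gradual, continuous accumulation (invented wind)
        fallen = angle > 45.0                # the 'digital' reading: a threshold we chose
        if fallen:
            print("step %d: %.1f degrees -- 'the stick has fallen'" % (step, angle))
            break
        print("step %d: %.1f degrees -- 'still standing'" % (step, angle))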

Since this is an epistemological issue, where the culprit is always human cognition, this 'finding what is important for a particular perspective' is possible even for complex affairs, far beyond the edge of what would be considered appropriate research subjects for the natural sciences. The other day, talking to someone who studies urban policy, I pointed out that a well-known government program, 'Urban Renewal', was very destructive of people's lives and neighborhoods. I gave one example, where a proposed $200 million Urban Renewal expense would have destroyed an entire neighborhood. He agreed, but then pointed out that the urban renewal fund had made one investment that was quite good. I agreed, but pointed out that this was a $2 million investment. We were both making correct points about the fund, but the massive downside of the large investment vastly outweighed the upside of the small investment, meaning we could say 'Urban Renewal' is a bad idea, by precisely two orders of magnitude.

That said, if one was studying projects that made a small number of people wealthy, with no concern for anything else, then clearly Urban Renewal would be 'good' in this sense. There are certainly people who make such assumptions.

Likewise, some natural systems have a 'digital' aspect that is far more prominent than their 'analog' aspect, given the interests of the researcher. In other cases, the 'analog' aspect is more prominent. But the key is to define 'prominent': prominent by which criteria of interest? It is possible to get 'out of our skins' a little better, and examine our interests.

In the natural sciences, 'most prominent' is 'that effect or aspect without which nothing would happen'. Again, there are multiple factors, and we bring multiple interests to the table, but we can examine these. 'Relative importance' or 'requiredness' or even 'essence' is often considered a mysterious idea. But not if we willingly examine ourselves, as part of our research. In the sciences, we need to continually re-examine our dogmas, judgments, and criteria for intelligibility, when considering what is 'important' in any investigation. And, in the case of internal, instinctive notions like 'analog' and 'digital', we need to try to recognize when our instincts are getting in the way, as they usually do, of further enlightenment regarding the world outside, and inside, ourselves.

Sunday, August 23, 2015

The Inventory Problem

We don't quite know how many cells are in the human body, because it's a hard problem, but current estimates put it in the tens of trillions. But, even if we had a 'better number', we know that the human body is more than "just cells" ... and of course cells are more than "just molecules", molecules are more than "just atoms", and atoms are more than "just energy". We know that there is a human psychological tendency, and a strong tendency among people who think they are being 'scientific', to assert that a complex system consists of "nothing but" some studied factor in its makeup. You know the sort of thing: the human body is just water and chemicals, biology is nothing but physics, etc. We used to call this 'reductionist' and 'materialist', but I think that's giving too much credit to what amounts to blind dogmatism among people who have no sense of just how little we understand about the universe.

What's the higher-order structure 'above' the level of cells? Well, it's just anything we call a 'system' that we believe is interesting. When we explore the real body we find that the boundaries and the coherence of our chosen 'interesting system', say a kidney or an immune system, are not what we expected. But, then, we should expect such surprise. Humans perceive certain things in certain ways, often in several conflicting ways, and when we decide to look at 'the visual system', or 'the nose', we are making use of some still mysterious human faculty that assigns importance to certain aspects of its environment. On examination, we are always surprised, because our unexamined intuition tends to be wrong. In fact, even our attempts to divorce ourselves from our intuition tend to be heavily suffused with human tendencies. It's a tough game to find out what's going on outside of your own perception, to turn the things your intuition 'knows' into mysteries about what's 'out there'. But that's natural science. It's hard work, especially when we're dealing with complex systems.

This means there really is no ontology of the body which is more than a kind of convenience, so that we can talk about what we're studying. Ontology is important in science, because these are our current assertions about what exists in the world. Of course they are constantly changing, and much of it is itself based on tacit knowledge that we do not understand, which is why I don't think many kinds of science make sense without a parallel study of human psychology. At any stage, our assertions are still human mental constructs. They may be more enlightening, better integrated with other theories, or more carefully constructed to avoid unnecessary stipulations, but they are only 'better'. They aren't 'complete'. Ontologies are works-in-progress, at best.

This epistemological story can be found everywhere in computing. Let's take the issue of testing a software system. In our example, let's say that we primarily care about a consistent user experience, and so the tests take place against the user interface. What is the inventory of features against which we are testing? It certainly is not the set of features we set out to build in the first place: in order to make a good product, we had to change these. The closest thing to an accurate description of the final system is the work done by the documentation team. If you have such a thing. The team has used human judgement to decide what is important for someone learning about the system. They have organized what they consider to be the 'features' of the system, and explained their purpose and behaviors, as best as they could. This is the closest thing the software company has to an inventory of features and properties against which a QA team can build a testing system. In a system where the interface is everything, and there are a lot of systems like that, and a lot of systems that should be considered like that but aren't, the only way to build reasonable tests is after-the-fact. There is a discipline called 'test-driven development', but this is only appropriate to certain internal aspects of the system; it cannot address the 'logic' that is 'externalized' for the users. There is no such logic in the code. It's a perception of the system, used to guide its development.
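
As a purely illustrative sketch -- the names below are invented stand-ins for the documentation team's work and for whatever drives the real interface -- the after-the-fact tests end up keyed to humanly written descriptions, not to anything extracted from the code:

    documented_features = [
        {"name": "Sign in",
         "documented_behavior": "shows the dashboard after a valid login"},
        {"name": "Search",
         "documented_behavior": "returns matching items for a plain keyword"},
    ]

    class FakeUI:
        """Stand-in for whatever drives the real interface, so the sketch runs on its own."""
        responses = {
            "Sign in": "shows the dashboard after a valid login",
            "Search": "returns matching items for a plain keyword",
        }
        def exercise(self, name):
            return self.responses.get(name, "")

    def check_against_documentation(feature, ui):
        """A human wrote both the description and the check; the 'inventory' is their judgment."""
        return ui.exercise(feature["name"]) == feature["documented_behavior"]

    ui = FakeUI()
    for feature in documented_features:
        print(feature["name"], "passes:", check_against_documentation(feature, ui))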

If this is true, there is no way to take a 'feature inventory' from within the software. The best one can do is study the user-interface, find out how it responds, talk to developers and product designers to work out their intentions when they're unclear, and keep a coherent-looking list that is easy-to-understand. This is literally not an inventory in any mechanistic sense. It is a thorough set of very-human judgments upon something that others have created.

The 'inventory' will be acceptable, and have descriptive adequacy, when the appropriate group of people can understand it. This might be a very different inventory for a quality-assurance team than for a training team or a support team. There are things the designers and the engineers find important that produce yet another 'inventory'. There are other kinds of inventories, for accessibility issues. The best you can do, in all these cases, is the most human job you can do, to explain the right things to the right audience. The idea that there is any kind of 'logically correct' software, achievable without human judgement, is absurd. A person needs to judge what is correct! We couldn't do any of this work without human judgment. 

Because of this epistemological fact, we rarely have the time for inventories of features. Instead, we look to eliminate 'problems', humanly judged, and polish the software system until it makes sense and does what the team and the users want it to do. The task of describing it, of explaining it, is done in a minimally descriptive way, taking advantage of innate and learned human understanding, and the ability of users to explore things for themselves. The quality-assurance team finds some set of tests that satisfies them, tests for problems that have been fixed, regression tests to make sure the problems don't recur. The notion of a 'complete' description of the system is considered 'just too much work', when, in fact, a 'complete description' is impossible, because such a description cannot exist; a description can only be adequate to our current purposes.

This epistemological problem shows up in simpler ways. One approach to preventing virus infection in computers is to add to a growing 'blacklist' of behaviors or data that indicate an 'infection'. The other approach is to make a 'whitelist': only these operations should be possible on the system. The list is only expanded when you want to do something new, not when someone else wants to attack you. This is one way of sidestepping the inventory problem.
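
A minimal sketch of the contrast, with invented operation names -- the point is only where the burden of change falls:

    BLACKLIST = {"overwrite_boot_sector", "disable_antivirus"}        # grows whenever attackers invent something
    WHITELIST = {"read_document", "save_document", "print_document"}  # grows only when we add a feature

    def allowed_by_blacklist(operation):
        # Permissive by default: anything not yet recognized as bad slips through.
        return operation not in BLACKLIST

    def allowed_by_whitelist(operation):
        # Restrictive by default: only operations we have explicitly inventoried may run.
        return operation in WHITELIST

    print(allowed_by_blacklist("encrypt_files_for_ransom"))   # True -- nobody has listed it yet
    print(allowed_by_whitelist("encrypt_files_for_ransom"))   # False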

Even more, it's reminiscent of the difference between natural science and natural history. Natural history, zoology in its older form, and even structuralism, are about cataloguing and classifying things in nature. Explain why things are the way they are? That's natural science. In derogatory terms, natural science looks to generalize and idealize and abstract, ignoring as many differences as possible. Natural history embraces diversity, and is more like butterfly collecting-and-organizing. In general, we need integrated approaches that allow for collecting diverse facts in the context of an ever-improving explanatory theory.

Approaches to building software are ever-expanding, and we are spending no effort trying to understand why, primarily because computer science is not a natural science, and doesn't approach the problem of explaining why things are one way, and not another. Most of the answers to those questions lie in a study of the human mind, not in a study of the machines that humans build. Studying software without studying cognition, is like studying animal tracks without studying the animals.

Thursday, August 20, 2015

The Wrong Tools

We are using the wrong tools to program, and the wrong criteria to judge good programs, and good programming practices. These bad practices, and bad approaches to thinking about the nature of programming, have emerged together over the last 70 years.

Our first mistake is the emphasis on code itself. I understand how high-level languages can seem very empowering, and so it seems natural that 'polishing code' is a means of achieving quality, and 'code standards' are means to improve group collaboration.

But even though these are accepted practices, they are not correct. When we make any improvements at all, we are not actually following these practices. The issue of what we are doing is not even a topic, and it's not examined in any kind of computing institution, academic or industrial. This is true despite the fact that everyone with any experience or sensitivity is absolutely certain that there's some fundamental expressiveness problem, on which they can't quite put their finger.

Let's say that code has two purposes.

On one hand, we have built machines and programs that can read the code, which does something, based on various kinds of agreements, mostly unspoken, mostly not understood, not studied, not explicit, and incomprehensible, but which maintain an illusion of explicitness, precision, consistency, and predictability -- probably because there are symbols involved, and we instinctively tend to respect symbols and construe them as meaningful in and of themselves. 

The other purpose of code is to express and explain your thoughts, your hopes, and your desires, to yourself, and to your colleagues, specifically regarding what you would like this system of loose engineering agreements to do in all kinds of circumstances.

At the heart of both of these 'uses of code', the operational and the expressive, are human meaning and ideas. These are not understood by the machine, in any sense. We take subsets of human ideas and create "instructions" for "operations" in the machine that in some way are reminiscent of these ideas, usually highly-constrained in a way that requires a great deal of explanation.

That's on the operational side! This is just as true on the expressive side, where we have new ideas that we are trying to express in highly-constrained ways that still can be read by humans, on these interlocking systems and platforms of strange high-constraint ideas. And of course -- most of you can guess -- these "two purposes" of code really are the same, because most programmers build various layers of code that are essentially new machines that define the operational platform on which they then express their application ideas.

Which means that the code is the least important part of computing. The mind-internal meaning of all these constraints needs to be explained so that they can inspire the correct meaning in the minds of the humans taking various roles relative to various parts of the software. 

Without explanation, code is meaningless. Without human minds, symbols are meaningless. Code is "operational", so we are fooled into thinking the meaning is 'in the machine'. But that meaning is also in the heads of people, who either made the machines or use them.

If this is true, then good explanation -- explanation that is genuinely useful, which genuinely makes the life of people involved easier and better -- is the heart of computing. This needs to be recognized, emphasized, and facilitated. Code is merely a kind of weak shorthand that we use, badly, to pass hopeful, incoherent indications to artifacts that other people have created.

Existing formal languages and their tools -- based on a uselessly-constrained approach to symbolic representation -- are woefully inappropriate for this, based as they are on a rather trivial formal definition of computation, which has been accepted because of the rather amusing belief -- no more than an unexamined dogma -- that anything can be 'precisely described' with symbols, boolean logic, parameterized functions, and digital representations. None of this is "true", or even sensible, and the belief in these dogmas shows how far computing is from the natural sciences.

In the meantime, we need new programming tools with which we can more completely express new concepts, and more easily express our feelings and desires.

Monday, June 1, 2015

Theory and experiment in the natural science of computing

Although outdated, I find the perceptual-conceptual division, in the history of psychology, to be quite interesting. On the one hand, we could just be dismissive of the division today. The argument runs something like this: categorical thinking is employed in a preliminary way in the sciences, but it is inevitably 'wrong', because categories are mental constructions ... the world itself, including the world we construct, isn't made from categories … so the perceptual-conceptual distinction is one of those discardable stages of science, and we should just move on.

But, still, there is a real mechanism, within the brain, which we still do not understand, which leads us to produce categories, properties, objects, and other mental constructs, about the world. This mechanism, whatever it is, 'makes' our continuous world discrete. The primary argument in favor of abandoning the distinction -- that concepts can affect perception and vice versa, making them intractably intertwined, tempting neologisms into existence like 'perconception' or 'conperception' -- means the distinction is something we need to abandon as an investigative principle, but not as a subject of investigation. "Continuous" or "field-like" conception-perception is an innate phenomenon. "Discrete" conception-perception is also an innate, and very complex, aspect of mental life, engaged in the use of symbols, for example, and probably rooted in some aspect of our language faculty. But I don't see many people studying either one, let alone the overlaps between them.

In computing, knowing that these conception-perception phenomena exist is very important -- unfortunately, the mind-internal nature of discrete descriptions seems to be continually forgotten by computing's chief practitioners, in industry and academia. This is clearly a detriment to their understanding of symbols, meanings, implementations, communications, definitions, and uses, from the perspective of the natural sciences. But it's also completely understandable that they've become victims of this problem, because the belief that we have some kind of direct mental connection to the outside world is an instinctive illusion, one of the most powerful ones -- a curtain that is difficult to lift.

Among the most fascinating of the influences of conception upon perception, is the effect that awareness of different "conceptions of perception" can have on the subject. 

There are several examples, but I'll take one that many people are familiar with. Take someone who draws or paints from real life onto a two-dimensional surface. In one way or another, an artist learns an idea, a concept, that there's an extra kind of perception, that allows them to "flatten" what is in fact a flat projection on the retina, but which our visual system works mightily to "inflate" into a perception of space. The artist learns to "reflatten" this perception and look for proportions and shapes among the results, which can be used to produce the 2d image. In fact, this is something that one can practice mentally, without drawing or painting. And the sense of proportion that's learnt by this exercise helps produce 3d sculpture, which is also interesting.

The conceptual awareness that this kind of perception is possible, makes it achievable.

I'm more interested in a somewhat different concept-to-percept influence, mentioned earlier: the perception of things as objects, categories, properties, etc. Some of this work is done innately by the visual system, of course -- for example, everyone is familiar with the color divisions of the rainbow, although there are no such divisions in the spectrum itself.

But the naming of the perceived groups of colors is an "objectifying" act, that is, something we choose to do, no matter how innate the mechanism we use to do it. From the limited impression we get within the conscious experience, it seems like the same kind of act as, say, our imagining that the nervous system is a discrete network, or  treating people as interchangeable nodes in a corporate hierarchy, or almost any kind of counting, or the mental separation of 'things' that we perceive as separable.

Because there's another way of perceiving, which we also do naturally, and those ways-of-seeing are apparently in something like a competition. We also see things as 'stuff', or 'fields', for lack of a better word, or 'centers': perceivables that are coherent but which we have not yet considered 'objects' with 'boundaries' etc. This kind of 'stuff-perception' allows us to see, or experience, the sky, or a field, or a forest ... without thinking of it as an 'object'. One can think of 'reductionist' mental transitions, for example from "forest" to "trees" to "potential logs", as a kind of gradient from "field perception" to "object conception".

Not surprisingly, awareness of the existence of these two kinds of perception can help a person to decide to use one, the other, or both. This is interesting to computing in a number of ways. 

First, it means that any task making use of a 'mode of perception' could benefit from explicit pedagogy about the psychological reality of these modes -- their mind-internal reality. Although it's visible in some research, and common in personal training, the best approaches to a pedagogy of 'modes of perception' have been studied only unrigorously. The field of architecture is a good example: Christopher Alexander created a number of books intended to help the reader perceive buildings and cities in the 'field-like' manner, but the effectiveness of the books in achieving this has not been studied. Readers just try it, and see if it works for them. That doesn't really get us anywhere.

Second, an explicit awareness of these distinct modes of perception can help us to identify particular mental factors, qualities, and effects, that enter into the use of computer interfaces, including those interfaces used by programmers, and allow us to judge the quality of the outcomes of the use of those interfaces. 

I believe these distinctions could unleash an experimental goldmine.

So now I'd like to discuss briefly one story from the history of psychological experimentation. 

There's a very good book by Robert S. Woodworth, from 1931, entitled Contemporary Schools of Psychology, which describes the various ideas and motivations of researchers that he knew or worked with personally. It describes the 'method of impression', which allows us, for example, to look at an optical illusion, like Mach Bands, and consider our impression an experimental result -- we can say 'the illusion is present in this image' or not, allowing us to test questions about the nature of the stimulus and the response.

Psychology emerges from philosophy in the 19th century, and so issues of consciousness were quite important to early experimental psychologists. When an investigator asks a subject 'which weight seems heavier?' when they are lifted in different orders, the investigator is relying on the impression of the subject. The primary interest is the effect on consciousness, even though this is a very objective experiment, when properly carried out.

But a reaction to this emerged, from the animal psychologists, who Woodworth describes as feeling "dismissed". The psychological establishment at the time felt that, although animal conscious experience likely exists, it can only be supposition, since we cannot ask them anything, and hence no rational psychological experiments could be carried out on animals. 

This reaction became Behaviorism. The behaviorists tried to define 'objective' as some activity that you could measure, without giving credence to a subject's interpretation (their claim, that this 'increase in objectivity' was achieved, was rejected by many, at the time, with the same rational epistemological arguments we would use to reject the claim today). This allowed them to experiment with factors of instinct and experience that entered into animal behavior, and put humans and animals on equal ground. Unfortunately, they also threw the baby out with the bathwater. They had one overriding theory of the animal, or the person, and that was the association of stimulus with response, whether learned or innate. Presuming the underlying mechanism, behind your subject of inquiry, is a terrible approach to theory construction, because you won't even notice the effect of this assumption on your observations.

A perfect example was the behaviorist John Watson's replacement of the term "method of impression" with "verbal report". The latter he considered 'behavior', and so it was 'acceptable science', and this way he could include previous work on Mach Bands, or afterimages, or heaviness, or taste distinction. We can see the danger Watson introduced here: the experimenter was now assigning a meaning to the verbal report. So even more subjectivity was introduced, but now it was hidden, because the theory of mind, the theory of that which generates the behavior,  was no longer explicitly part of the research. 

This methodological dogma had another effect -- the generation of many decades' worth of meaningless data and statistical analyses. When you decide that you've suddenly become objective, and turned a potentially structure-revealing impression into a datum, then you have no choice but to collect a great deal of data to support any conclusions you might make from your study, to lend it, I suppose, some kind of intellectual weight. A corollary is that this tends to make the findings rather trivial, because you're no longer constructing interesting questions, but are instead relying on data to confirm uninteresting ones. The upside of this, is that you can quickly determine whether, say, afterimages are only seen by certain people under certain conditions. The downside is that the investigator is no longer building a model of the subject of inquiry, and so tends to ignore the subtle and interesting influences on the perception of afterimages. The theories that emerge then lack both descriptive coverage and explanatory force. Of course, many investigators did build models, and also followed puzzling results, so, at best, one can only say that behaviorism had a negative influence on the study of cognition as a natural science, but not an all-consuming influence. Most behaviorists were not extreme behaviorists.

But, to return to our original theme, the negative influence is no one's fault. Natural science tends to defy our instincts, and there were innate cognitive tendencies at work here -- tendencies that are still not studied -- which led mildly dogmatic researchers to simple 'objectifying' stimulus-response models. Behaviorism expresses a point of view that we return to, often. In the computer world, you hear it a lot, and not just in the poor cognitive methodology that pervades AI. Even idioms like "data-driven" and "data is objective" are meaningless by themselves. Such phrases beg the really important questions: which data, generated how, while asking which question, based on what theory, and interpreted how, and by whom, framed by which assumptions, and making use of which human or non-human faculties? The idea that 'objective data' somehow 'exists', without an experiment, without a context for interpretation, without the influence of human meter-builders and human interpreters, is just not correct. But people tend to make such claims anyway. It's an innate tendency.

So, what would good psychology as a natural science look like, when applied to improved understanding of the mental life of computer engineers?

If we're looking to build better theories of complex systems based on evidence, we can't do much better than to look at the eponymous scene in the documentary "Is the man who is tall happy?", in which Michel Gondry interviews Noam Chomsky about linguistics.

Take the sentence "The man who is tall is happy". A native English speaker will report that this is grammatically correct, and if you're one, you can use the 'method of impression' to demonstrate that to yourself. Now, turn the sentence into a question. You can do this grammatically through movement: "Is the man who is tall happy?" You can also see that the following variant is not grammatical: "Is the man who tall is happy?" A native speaker would just say it's wrong.

But why do we move the second "is", the one that comes later in the sentence, to the beginning? Why not the first "is"? Let's just say that scientists enjoy mysteries, and puzzles about nature, and so we need to build a theory, to answer that question, and hope that the answer can be more broadly enlightening -- which, among other benefits, means the theory could predict other results.

The overall answer is that language has its own innate structure. The second "is" is actually the structurally most prominent element in the sentence (you can demonstrate this to yourself with a tree diagram), and so the easiest to move.
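
Here is a toy sketch of the point -- my own construction, not anything from the linguistics literature: if the sentence is represented as a nested structure rather than a flat string of words, the question rule only needs to say 'front the auxiliary of the top-level clause', and the "is" inside the relative clause is never even a candidate.

    # The bracketing is a crude simplification of a real syntax tree; the names are invented.
    sentence = {
        "subject": ["the", "man", {"relative_clause": ["who", "is", "tall"]}],
        "aux": "is",              # the main-clause "is" -- the structurally prominent one
        "predicate": ["happy"],
    }

    def flatten(constituent):
        """Spell out a nested constituent as a flat list of words."""
        words = []
        for item in constituent:
            if isinstance(item, dict):
                for sub in item.values():
                    words.extend(flatten(sub))
            else:
                words.append(item)
        return words

    def question(s):
        """Structure-dependent rule: front the top-level auxiliary. The rule operates
        on the structure, not on the linear string, so the embedded "is" never moves."""
        return [s["aux"]] + flatten(s["subject"]) + s["predicate"]

    print(" ".join(question(sentence)))   # is the man who is tall happy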

This would be impossible to determine simply by statistical analysis of text with no specific questions in mind. The use of statistics is motivated by our ignorance (I'm not using that word pejoratively) of the structure that underlies, that generates the surface behavior. The separation of variant and invariant structure can be made through analysis, including statistical analysis, of verbal reports, but only if there is a question in the mind of the theorist about the generating structures. Any statistical analysis that does not officially have an explicit structural question to answer, is only hiding its assumptions and stipulations about structure, by adding interpretations at the beginning and the end.

Note that these structural theories are idealizations -- nature is not transparent, and we have limited attention, so we need to know what we're asking questions about, and what we're not asking questions about.

Notice also how much further we can get in formulating structural questions when we accept the 'method of impression' along with the probing strengths of our mysterious capacity to think. There are plenty of questions about the human mind that require massive data sets to answer. But it's unlikely that any of those questions would be interesting if we weren't using, explicitly or implicitly, the method of impression, so we could narrow those questions sharply. Moving forward towards understanding the "act of programming", as a natural phenomenon, will require that everyone understands the power of this method, which has enabled so much progress in linguistics (Noam Chomsky's initiatives), and art & design (Christopher Alexander's initiatives), over the past 60 years.

Friday, May 29, 2015

The primary practical barometer of programming progress

There's really just one criterion to apply when judging whether or not an engineering environment has improved the life of engineers.

Does the environment do a better job of helping programmers create what they want to create? Their desire might be: a particular hand-crafted interface to a special set of functionality; or it might be a difficult-to-visualize network protocol; or it might be a simulation and visualization of something running within a machine, such as software or hardware. 

But the criterion of success should be the same. Does the environment move programmers towards a situation, even a very domain-specific situation, where the manner in which they think about and evaluate success is better served, and less dominated by the technical means to technical ends? Are they now better able to "make the thing" as they conceive and feel it, with fewer implementation requirements?

If not, we are simply building environments out of technical necessity. We are not moving programmers towards a better future.

Some good examples can be found occasionally among domain-specific languages; others among special-purpose interfaces. When good, they are the result of a high sensitivity to the considerations and desired outcomes of people. 

So, say one makes a precise statement about the desired behavior of a program: a repair, a new feature, a new program. This is of course an iterative process.

Let's imagine that the final effort devoted to making the desiderata precise is D. That includes time interacting with the system that demands an exact operational description, and time to evaluate whether the now-visible result is the desired result.

Everything else is unrelated technical time, or U. This is not to denigrate it. Only to measure the impression of the effort needed to express D.

The ease (E) of using the programming language or environment for the implementation of that particular story or feature can be characterized as a ratio of two measurable impressions:

                      E = D / U
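
As a toy numerical illustration (the hours are invented): if pinning down and verifying the desired behavior of a feature took two hours, while the surrounding technical work took eight, then E = 2/8 = 0.25; halving U would double E for the same desiderata.

    def ease(d_hours, u_hours):
        """E = D / U for one piece of work, as a ratio of the two measured impressions."""
        return d_hours / u_hours

    print(ease(2.0, 8.0))   # 0.25 -- most of the effort was unrelated technical work
    print(ease(2.0, 4.0))   # 0.5  -- halving U doubles E for the same desiderata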

Obviously it takes a great deal of work, research, and sensitivity, to make U smaller.

But I rarely see a drop in U, in general-purpose programming languages or integrated development environments. These tools, whose job should be to ease the expression of thought, almost never improve the overall E of programmers. They're instead a bag of replacement techniques that contribute to U, with endless renaming of nearly-equivalent logical concepts, with endless unexamined consequential logical complications ... all with nearly no impact on E. Unfortunately, these new general-purpose environments and languages are tenaciously promoted -- and so another generation of programmers is trapped and unserved.

I don't think this situation is irresolvable. But it cannot be resolved by people who are unaware of this basic issue.

Clearly this is just a first-order definition of factors, not yet sufficient for experimental confirmation. For example, when constructing an experiment we would need to eliminate experience and facility with particular notational or symbolic conventions, as best we can, and carefully evaluate what was left. We would also need to isolate motivational factors: for example, being sure that subjects are producing what they want to produce, not simply achieving some goal provided by the investigator. And we would need to provide, as best we can, homologous situations, so that we can use concomitant variation to adjust the major variables under investigation: the denotation of machine operations and the mind-internal semantic agreements.

Monday, September 8, 2014

If computing were a natural science, what might it look like?

One of the hardest-won results from the natural sciences, uncovered over the last half-millennium, is simple, but hard to keep in mind: the real world is not accessible to our cognition.

When we realize this, it makes the world a mystery, but this, in turn, makes the world interesting and exciting to explore. That's science. And since the machinery of our brain is part of this mysterious world, it too is a mystery, and we have almost no idea how it's interfering with, and relating to, our perception of the world.

Our intuition fooled us into believing that we 'understood the world'. It felt like we understood it. But then Galileo pointed out that, by our intuition, a heavy body will fall faster than a light one. But, in reality, it doesn't. And, when we thought hard about things we already knew, things that we forgot because parts of our intuition acted as obstacles, we realized that it was rational for them to fall at the same rate.

And there are always more internal obstacles.

Another intuitive part of our rationality makes use of the concept of a "clockwork universe", recruiting it as a criterion for understanding. But as every first-year physics student learns, and as Newton demonstrated in the 17th century, there is no clockwork, contact-mechanical explanation for gravity, or any other force. No system of invisible pulleys or rods can 'explain' these forces. They are simply 'qualities' of the universe, and we were forced to adjust our idea of explanatory adequacy accordingly.

In general, we can say that the goal of the natural sciences is to find out what is going on outside our perceptions -- 'outside our skins', 'out there' -- to use just a few suggestive phrases to convey a difficult idea. 

But here's the problem: we're still in the mix. We can only try to get out of our skins. And we need to keep trying. Each time we discover another way that our minds are interfering with the accuracy of our scientific theories, it allows us to work positively for a while. But there's always another horizon. And we always come across further mental interference, further stipulations that need removal, later on. Our own cognition is, ironically, the most serious obstacle in the sciences.

Another problem arises because many of our scientific interests revolve around ourselves, and our minds. It's almost impossible to find any work on the mind which isn't rife with mental interference -- but many researchers are aware of the problem, and try to sort the results from the stipulations, the external from the internal.

Here's a problem that's peculiar to the cognitive sciences: we look at ourselves from the outside, appropriately, as animals. But externally measurable behavior is superficial. So we need a human to interpret the behavior. And we need to recognize complex phenomena, the internal ones, as we experience them, so that we can experiment on ourselves, using ourselves as gauges or meters of the complex phenomena under investigation. 

So, in the classic example, even though we understand almost nothing from external measurement about the biological organ that provides us with language (the Language Faculty), we can use ourselves to test the boundaries of language phenomena, from the inside. For example, we can ask if the sentence "I'm going to the store" is experienced as grammatical, where we use a complex internal organ to make a judgement on external stimuli -- and if we ask if "I go store to" is grammatical, the different answer leads us to interesting research questions. Without this internal meter, we wouldn't know how to even approach this as a question in the natural sciences.

Clearly the human activity of computing is not something that has even begun to broach these difficulties. As a practical matter, though, computer people, and computer users, are impacted by these issues every day. The craft of computing has developed, to at least cope with the issues in a limited way, and get work done. But, in my view, no computer scientists or programmers have any awareness of what is going on in the real world of computer-human interaction, any more than a pianist knows what is happening to our brains when we hear music. We have a craft, but we have not even the beginnings of a natural science of computing.

What would a natural science of computing look like? What might programming look like, as a result?

I want to provide a taste of the kind of research needed.

There are many tools available for web programming. Generally, some engineers have found themselves doing the same thing many times, and recognizing this they attempt to create a layer, something that allows them to do the same thing they've done before, but with less effort. As a result, the higher-level view of this layer becomes a kind of notation of its own, backed by its own ideas, which can be combined and parameterized so they become a kind of general platform on which to build applications.

As a result, application development becomes rather path-dependent, making many programs effectively the same. Developers begin by imagining these elements and combining tools, and then making a stab at using them to take steps forward, towards products they have in mind.

It's important to understand that all this 'bridging' happens in the brain, and is completely dependent on mental faculties that as yet have no name or research program. 

The tools are something external, and too complex to know completely, but the programmer needs to understand them better in some way, in order to move forward in some direction. The 'direction', or 'product definition', is an exercise of the imagination, with some notes on paper, and the programmer picks some aspect of this fantasy to approach with tools that he knows, or believes he can discover something about.

To address this 'bridging problem', the programmer needs an increasing understanding of (1) the emerging 'product', from a user's perspective, (2) the emerging program, from an engineer's perspective, and (3) the frameworks, libraries, ideas, and communities that these are built upon.

So, we've begun to characterize what the programmer is doing as a human activity, one that needs to be investigated from a biological perspective: we need to pick elemental examples of these bridging moments, and study what makes it easier or harder for the mind to organize a solution.

But the state of the art is rather different. There's a kind of struggle to create support systems. Helping with (3) are the myriad websites that have emerged to let programmers share technical tricks and ideas, and endless 'branded movements' that attempt to justify and codify sets of ideas that provide various kinds of inspiration.

The focus is on technology and borrowed ideas. But none of this addresses (1) and (2). That's because it's hoped that the programmer will be inventive, and bridge the gaps. But that means the critical human dimension of software development, the study of the user's issues, and the programmer as user, suffer from lack of serious investigation.

These are questions that could be investigated from a natural science perspective. In regard to (2), what unknown mental faculties enter into the simplest internally-experienced phenomenon of 'discovery' while looking within a program? When the programmer needs to create some new infrastructure to deal with some 'type' or 'aspect' or 'property' of the problem as they perceive it, how does their programming environment make this easy or difficult?

As an example, say I'm writing a completely different kind of web application. Say that the most important thing, to me, is typography. Nothing I see in my program should be more important than the new issues before me, because the whole point of the program is to study 'typographical effectiveness' for human users. 

Unfortunately, nothing in a framework or programming environment will facilitate the creation of a fresh, new, special-purpose theory, or framework, with which to study the creation of programs that support the as-yet-unknown world of the typographer. There is general-purpose functionality, and in pre-trod areas there are special-purpose frameworks, but there is nothing to help me create the special-purpose ones, and to help me find my way around these new special-purpose theories once I've created them. All this despite the fact that most of life is quite mysterious, and hence certainly unexplored by software.

The lack of support for developing and exploring 'the new' and 'the special', is coupled with a prejudice towards 'the general', at least a particular, extremely limited interpretation of the notion of 'general'. The combined effect is the lack of imagination and sensitivity in almost all computer products and applications. Motivated people, who are exploring new territory, can still muddle through and get some limited new things to happen, but they are almost entirely on their own. Because no one is studying the problem of technical support for innovation, or bridging, we rely instead totally on the innovation of the worker in overcoming the poverty of the available tools. 

All of this derives from the total lack of a perspective within computing that falls within the natural sciences. Imagine if we observed a very mysterious behavior among spiders in the wild, and we just said "well, I don't know what's going on, but the spiders keep making webs, so it doesn't really matter!" Science doesn't do that. We want to know what's happening. But if it's us, we might care more, but we don't investigate what's actually going on. If we did, we could help. And help ourselves, because we could really use good tools for investigating new ideas. In the future, the computer, in whatever form it takes, might actually fulfill its potential, and normal people, who have better things to do than keep up with another trivial new programming environment, or another trivial programming 'paradigm', will be able to use computers to explore the endless mysteries around and within us.



Wednesday, August 27, 2014

Is it a network?

A common idealization borrowed from engineering and computing is the idea of the "network". A network may seem like very basic system geometry -- after all, it's just a 'graph' in discrete mathematics. But the idea of a network is also rather problematic, and seems to have led many scientists down the wrong road for centuries.

We need to take a closer look at our use of the idea of 'network', in every circumstance. In the same way that 'objects' and 'properties' are mental constructs, networks are too. All these ideas are of course innate, and generally useful, but when we're involved in the natural sciences, and asking questions about complex systems, we need to regularly check our epistemology. Is it a network? 

If something was a network, how would it compare to something that was not a network? Would we be able to build a meter to tell them apart? If it's a mental construct, what are its characteristics and limits?

The complexity of biological and cognitive objects of interest causes us to get lost in our own innate toolset. Post-Galilean physics has been the study of the simplest possible problem. Complexity was the enemy. The basic method was to simplify models, ask basic questions, and reduce experimental interests and influences. Really complex problems were thrown over the transom, to the chemists and the biologists, who for centuries barely even considered themselves scientists, because they were stuck, unable to put aside all these many interesting questions, which the physicists could ignore in their pursuit of foundation issues.

When dealing with the natural science of complex 'systems', we often feel we have nothing important to say unless we fall back on these highly-structured intellectual instincts. And so we have our questionable science --  'learning networks', and 'objects' and zoological-typological 'categories', and selective pressures upon 'bags of qualities' -- scientific  dead-ends whose weaknesses we've been uncovering slowly in various post-positivist enlightenments.

Let's get back to 'network' for a moment. Whether or not 'network' is a useful idea in any given research situation is of course up to the investigators. We see things that are human products which seem to have this 'network geometry': paths, roads, train systems, computer networks, etc. In the physicists' world, it's not that simple. There are many forces and gradients with varying character and various mutual influences. But nothing that could be called a 'network', except by popularizers. In the chemist's world, this becomes harder to resist, because everything under investigation is, on one level, a 'network' of elements. But in any other way, these high-energy mashes of dense mutual influences don't seem anything like the discrete signaling networks that people create. You could use them to create a network. You could use graph theory to give you approximations. But it's a mistake to consider that anything at the chemical 'level' actually is a network.

In biology, with its even more complex investigations, it gets harder to resist our tendency to put phenomena into the 'network' category. There are so many complex results that need to be integrated with one another, that it's sometimes easiest to just imagine networks of influences. These network-diagrams can look massively complex, so much so that it doesn't look like we've made any scientific progress, that is, we're not much enlightened by the result.

But, again, the diagrams are maps. The object of biological inquiry is the territory. We're doing ourselves a disservice to mistake our tools for the object of our investigations. The 'network' is a perception, a tool. It may or may not be helpful. But there should be no 'network theory' of biological systems, or the more complex ecological systems. Exploratory network diagrams, like a finite-element analysis, are at best a kind of limited simulation tool. There's no actual 'network' there, in any external sense.

Which brings us to the biggest misconception of at least the past century, and perhaps the last three centuries: that the human brain, even the animal nervous system, is a 'network'.

There's very little evidence for it. Again, these systems are so complex and difficult to understand that we immediately fall back on any intelligible characterization. And the characterization that is ready and waiting in our mind, is the network.

It's interesting how the innate concept of 'network' is integrally related to that of 'object'. Certainly we try to turn things into 'objects' as part of our instinct. But when we investigate the world, even casually, using this idea of 'objects', we immediately find 'objects within objects', 'objects relating to objects', 'objects influenced by objects' … and adding the concept of 'signaling', we get 'objects signaling objects'. That's a 'network'.

Really, I'm making a very old argument here: the more complex a system, the more we fall back on 'natural ideas' which will probably distract us from discovering what is really going on outside our percepts and concepts.

I'll still make use of networks in my simulations. But like anyone who has tried to simulate reality on a digital machine, I know there are much harder problems than 'graph theory' ahead of us. And to even approach those problems, we need to be aware of the 'network bias' in the human mind.

It's probable that calling the nervous system a 'network' is completely meaningless.

What do I mean by that? If I could tell a didactic Socratic story … 

… let's say some alien intelligences, who study our cognition, were to look at a clock, and speculate upon how we would look at a clock today. 

They might imagine that each of the clock's parts were what we call 'objects', and that the tight interrelations among the parts are what we would call a 'network'. 

Hearing this speculation, we might beg to differ with them: "no, the parts are not similar enough to be a 'network'".

They would point out, "interesting … your concept of a network is quite specific … the network 'nodes' need to share some specific, unspoken 'quality' for you to accept them as 'nodes' in a 'network'."

Our alien friends decide to add something: "You do understand that you are like this clock? Your brain, at the very least. The 'connected' 'parts' in your brain do not have the qualities that, if you were to see them, would qualify them as a 'network' in your eyes. And yet, you assume they do. Whether the nervous system or the brain is a network should be a scientific question about the natural world, but since 'network' is a mysteriously defined word in your mind, there is no way to even ask that question. And yet, even though humans know almost nothing about their minds, they say 'yes, a nervous system is a network, and so is the brain … just look at those neurons and dendrites and synapses'. They say this confidently, even though there's no evidence that this is the appropriate way to view these structures, and no evidence that they behave as 'nodes' do in the human perception of human-constructed networks."

The moral of the tale … if it's in the real world, and it's not something that a person constructed and called a 'network' … then it's not likely to be a 'network', despite the efforts of your imagination, or the structure of your simulation.

Tuesday, August 26, 2014

The Problems with Recursion

Is recursion only something we see in the world, or is it something in the world itself?

This is a question we should be poring over, and puzzling about. But instead I see people either in full thrall of some kind of pan-recursionism, or else denying that it could exist anywhere.

If we're ever able to discover a reasonable answer to this question, we may not be able to remember it for long enough to make use of it. Recursion is very appealing, and the reason seems to relate to the mechanism behind the human language faculty, something we're all born with.

Human language is some kind of cyclic composition faculty, which interacts with and recruits from the rest of the brain in surprising ways. We see superficial externalized aspects of this cyclicity in written and spoken language, such as 'phrases within phrases ad infinitum'.

If the language faculty, at its core, is a kind of cyclic composition engine, it's no wonder that we find recursion so seductive. We have a "recursion meter", if you will, in a prominent place in our cognition. We can sense when something could be perceived 'recursively'. 

It also seems to be deeply integrated with our desire for simple, comprehensible symbolic theories about the world, probably responsible for the phenomenon Charles Sanders Peirce called abductive reasoning. We look for self-similar patterns embedded within each other. The faculty forces us to look for simple theories that enlighten us, and hence explain something to us, about hopelessly complex-looking phenomena -- Newton's laws of motion are a classic example. Although the modern mathematical study of recursion is a 20th-century phenomenon, it's only a refined version of something that clearly impacted human efforts long before. Which, again, is not surprising, if it's part of the innate language faculty.

All of this doesn't mean, though, that recursion is somehow the 'best tool' for people, in all, or even any, situations. There's a reason for that. It's known as reality.

Take programming languages. From LISP to Haskell, languages that encourage the compression of computational representation into recurrence relations are easy to define, sometimes inspiring to use, but they relate poorly to the non-logical side of programming.
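As a minimal illustration of what I mean by compressing a computation into a recurrence relation -- written here in Python rather than LISP or Haskell, purely for familiarity -- consider summing a list by defining the whole in terms of a smaller whole:

```python
def total(xs):
    """A recurrence: total(xs) = first element + total(rest)."""
    if not xs:
        return 0                        # base case
    return xs[0] + total(xs[1:])        # the whole, defined via a smaller whole

print(total([1, 2, 3, 4]))   # 10
```

The definition is compact and pleasing, and it says nothing about the people who must specify, explain, test, support, and live with the program -- which is most of what programming is.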

By the 'non-logical side of programming', I mean nearly everything. That phrase will surprise most programmers.

Almost nothing in the act of programming involves 'logic', in the sense of 'rule-based symbol manipulation'. But it does involve logic in the older sense of 'good thinking', a sense that is much closer to the natural sciences, where there is very little presumption that we have somehow magically turned the world into symbols. There is far less confusion, within the natural sciences, between "the map" (formal notation, math, idealization, etc.) and "the territory" (the world outside our minds, which we have only a very limited, and specific, ability to perceive).

Think of any symbol in any programming language -- let's take the word 'if' in a formal conditional construct. What does the 'if' mean? We can construct a machine that satisfies us with its conditional-like behavior, which we can call 'if' -- but 'if', the word itself, has a mind-internal meaning. There's no way to "teach" a machine what we understand by the word 'if'. We can only inject into a machine a behavior that people will typically perceive as 'conditional', if they know it was constructed by another human being.
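To see how little the machine 'understands', notice that the behavior we label 'if' can be produced by plain arithmetic, with nothing resembling a judgement anywhere in sight. This is a toy sketch; the name `select` and the menus are invented for illustration:

```python
def select(condition, when_true, when_false):
    """Behaves like 'if', but the mechanism only ever does arithmetic on markers."""
    c = int(bool(condition))            # reduce the condition to 1 or 0
    return c * when_true + (1 - c) * when_false

# A person reads this as "if the user is an admin, show the admin menu";
# the mechanism never meant anything of the sort.
print(select(True, "admin menu", "guest menu"))   # admin menu
```

The 'conditional-ness' is in our reading of the behavior, not in the machinery.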

So even at the most elemental level, symbols mean nothing without people. We can of course force machines to react to people-assigned representations of symbols. But the satisfying reactions, the assignments, the low-level interpretation, the high-level interpretation, the user's feelings -- none of these are definable as a formal rule-based system of symbolic logic. The symbolic logic is only a shorthand, only a set of partially-ordered artifacts, little more than markings on paper, constructed by and awaiting the massively complex use and expectations of the human cognitive faculties.

Which is why recursion is so 'dangerous'. It's simply an appealing feature of symbolic logic, which appeals to a very specific aspect of a particular faculty in our minds. But we don't live in some biological cyclic compositor. We live in a complex world that we do not even slightly understand, which our complex brains impose complex interpretations upon. And the abilities of these complex brains are not only understood poorly -- that would be excusable -- but they are actively misunderstood by almost all computer scientists, who regularly reside within fallacies that were understood millennia ago.

These misunderstandings exist because computing began as an engineering discipline: automating production, and building tabulators and calculators. Within the computing discipline, people create and use formal systems. But I believe the limits of these formal systems were reached decades ago, and that computing will remain in the dark ages until the biology of programmers and users -- the role of people in the systems they create and use -- is studied from a natural science perspective, instead of from a seat-of-the-pants pragmatism exalting "whatever works", "whatever's profitable", and "whatever gets the product out the door".

… a few more points.

If a cyclic composition mechanism exists in the mind, in whatever form, that would mean it exists in nature. So a machine of some kind that exhibits perception and generation of recurrent relations is the result of natural physical laws and our human genetic endowment. It could be a very simple machine, one that is 'optimal' in some sense: the next cycle can proceed by ignoring all but the 'most important result' of the previous cycle's work. What's being optimized is not that clear. What the mechanism is, is as yet unknown, so we cannot begin to know how this 'simple' set-construction mechanism appeared.
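Nobody knows what that mechanism is, so the following is emphatically a toy, not a proposal -- but the shape of the idea can be sketched: each cycle combines two pieces into a new unit, and later cycles are allowed to consult only the unit's label, ignoring everything else about the earlier work. All the names here are invented.

```python
def label(x):
    """What later cycles are allowed to see: the label, not the internal history."""
    return x["label"] if isinstance(x, dict) else x

def merge(head, other):
    """One cycle of toy set-construction: only the head's label is carried forward."""
    return {"label": label(head), "parts": (head, other)}

noun_phrase  = merge("man", "old")          # a unit labeled 'man'
whole_phrase = merge("the", noun_phrase)    # this cycle ignores the unit's internals
print(whole_phrase["label"], noun_phrase["label"])   # the man
```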

As a candidate for symbolic computational recursion in nature, cell self-reproduction is often presented. But it's not clear yet whether the 'genetic component' (genes as narrowly defined by molecular biologists) is the decisive one in the recurrent relation, or whether these genetic 'symbols' (which are 'symbols' only by a rather extreme metaphor) merely piggy-back on biophysical reproductive behavior, such as the vesicles that Pier Luigi Luisi has pointed towards in abiogenesis research.

Even further from the mark, and further towards our recursive 'perceptual trap', if that's not putting it too strongly, are fractals.

Fractals in nature, when most compelling, mostly look like a gradient of energy, dissipating through a medium that reacts similarly at a few different scales. A bullet-hole through glass looks like this: lots of breaks close to the impact, with geometrically similar but larger and fewer breaks as the energy dissipates. 

Even in those cases, the recursive characterization is part of human cognition, not nature. That's ok, if we understand that it's only for calculation purposes.

As physical idealizations, fractals fall apart rather quickly. A tree is a good example: the fractal model implies self-similar branching at every scale, but leaves do not vary in size after they've reached maturity, so the self-similarity stops where the biology says it stops. The behavior of any organism involves a great many factors, which change radically at different scales, and cannot even be superficially characterized as fractal. Fractals are a phantom of our mental recursive trap, and in the natural sciences, they, and recursion generally, need to be recognized as a kind of potent fantasy.
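The point can be made in a few lines of code (a toy rule, not a model of any real tree): a mathematical fractal generates ever-smaller branches forever, while anything tree-like has to stop at a fixed leaf size. The self-similarity lives in our rule, and the cutoff is where the biology refuses to play along. The function name and the numbers are invented for illustration.

```python
def branch_lengths(length, ratio=0.6, leaf_size=1.0):
    """A 'fractal' branching rule, cut off where a real tree would put leaves."""
    if length < leaf_size:
        return []                           # a real tree bottoms out; the math wouldn't
    smaller = branch_lengths(length * ratio, ratio, leaf_size)
    return [length] + smaller + smaller     # two self-similar sub-branches

print(branch_lengths(8.0))   # only a handful of scales before the idealization must stop
```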

Saturday, December 21, 2013

The Coöptation of Methodology

There have always been shop rules, safety regulations, good practice, et cetera, in every engineering environment.

In software engineering, the study of good practices, methodology, is increasingly confused with one very bad practice: forcing people to adhere to particular methods. 

I'm sure this goes up and down, but in 40 years of programming I've never seen such an invasion of 'productivity-driven' corporate culture … an invasion into a naturally collegial engineering environment. When work is normal among engineers, the power of the human mind to resolve issues is understood to be of prime importance, and not to be tampered with.

But today, perhaps, engineers are so demoralized and demotivated by the sheer volume of crap produced by the computer industry that, from a manager's perspective, they "need to be monitored and artificially motivated". Their hearts and minds are not as important as their obedience.

Decades ago, there began a push for 'programming metrics' such as 'lines of code per day', and at the same time, a push for 'conformity' such as code formatting standards. These were widely ridiculed by programmers -- not because engineers ignored the relevant issues; after all, sure, it's interesting to know how a program changes in shape and size, and it's appropriate to format code so your colleagues can read it. But management's implication that judgement of such things by 'outsiders' could be anything besides trivial … was considered silly.

That is, until people realized that the metrics weren't 'silly' but 'authoritarian'. Management, under performance pressure, was asserting itself. And it was looking for tools with which to assert that authority. The managers were often former engineers themselves … so the industry was using the basic strategy for developing a colonial elite: elevating prisoners to prison guards.

Parallel to the search by the powerful for means of employee control was the fascinating internal effort, by engineering researchers, to experiment with new methods, and better understand these complex effectiveness issues. This research is methodology … the science and study of method. It's a subtle study, which involves, among other things: a sensitivity to moments when things seem to be working well; and building and testing tools to make life easier and simpler, the better to respond to increasing demands for more complex software.

I want to take an aside for a moment, and point out that while, in one important way, software has become more complex, in another important way it has not. 

Increased complexity of a user experience is not necessarily an improvement. Usually quite the opposite. We still type our thoughts into text editors almost identical to those that were available decades ago, because the straightforward facilitation of the act of typing hasn't changed. This is because we don't want to disturb the human mind while it's doing its complex work. Nothing is more frustrating than, say, the Facebook editor's difficult-to-avoid attempts to change your writing into Facebook-internal links. The inability of our engineering culture to pass along understanding of the problems with these kinds of automation is endemic to both technological optimism and corporate slavery, which promote break-neck production while sacrificing introspection and quality.  

The interesting thing is: the user hasn't really changed much, hence the interfaces can't be much more complex than they were decades ago. The humanity of the user still must be accommodated: it is their brain we are trying to empower. So the UI / UX dictum, 'keep it simple', can never change, which highlights the fact that the effective qualities of interfaces 50 years ago aren't much different from those of today.

But what goes on behind the scenes has changed dramatically, requiring massively different calculations for both the interface generation and the product's effect. Hence, despite the best efforts of the funders of computing, programmers still focus obsessively on their own methodology, whenever they can.

Unfortunately, every mildly popular experiment in methodology is coöpted by management and 'thought leaders' at the behest of the establishment -- they will literally steal ideas and turn them against people. They are trained to coöpt, and if they don't, someone else will. They are trained (often with such subtlety that they don't even notice it) to deceive all parties while turning ideas into weapons-for-hire. They have sub-industries of supportive minions who help them do so.

This is why the dotcom boom suddenly felt like "The World Turned Upside-down". There was an explosion among restless engineers who suddenly, with new technology, and new fields to explore, could escape the dull controlled world of wage-slavery and engage in activity freely, focussing not just on methodology, but on doing things with computing that really mattered -- and move computing away from supporting the corporate ripoff of consumers, taxpayers and other businesses for profit. 

In any case, after the crash there was a reaction to the dotcom boom -- much like the US establishment's reaction to the 1960s, an important civilizing era -- with post-2000 companies reasserting their power, and forcing firm lines-of-control upon product direction and engineering methodology.

I'll describe two examples of the coöptation of methodology, and then, like a good engineer, I'll address some of the existing and potential remedies.

I'll start with "Agile".

A discussion about methods of programming needs to include 'flexibility', in the sense of a 'responsiveness to change'. No one wants to write a program that is 'write-only'. It will obviously need modification -- and, as it turns out, it needs modification during its initial development. This, in turn, implies that development must be done incrementally and continually, focussing on the most important things first, with priorities re-evaluated at each step, in order to keep a program well-structured for change, well-adapted at any moment, and properly responsive to the inevitable changes in functionality.

Now, I would have written much the same paragraph above during the late 1970's, after reading The Oregon Experiment and A Pattern Language by Christopher Alexander, who had set up a system at my school, the University of Oregon, that facilitated user-design and even some user-construction, with an emphasis on coherent, satisfying, incremental improvement.

So, for me, saying "program development needs to be agile", is essentially the same as saying "programs need to be implemented by human beings". I agree, of course, that programs do need to be written by human beings! (Yes, I'm aware that a program can also be programmed to do something we could choose to call programming.)

So the new excitement about "agile development" in the late 90's seemed like some kind of propagation … a broadening of awareness about old ideas, letting new engineers know how things need to be, to do good work.

Interestingly, activities that were quite common solidified into technical terms. Which is fine. So, when I manage a difficult project, I like to have coffee in the morning with my team, and we think hard, not in an onerous way, about what we've done the previous day, about what we learned from that, and about what we should do next, then agree upon next steps.

This kind of teamwork is as old as humanity. But then it came to be called a 'scrum' by those in agile. Also, the habit of sitting down with people, to share in programming efforts, became 'pair programming'. Again, I have no problem with this. For propagation, ideas need names.

Then something happened: a Coöptation. Not that this is new, but when it happened to 'Agile', it became a real monster in service of the corporation.

I honestly don't think it's worth detailing all the problems with these new strict "rules of agile". There was immediately a very strong reaction to this attempt at prescriptive engineering in the service of the corporate power-structure. 

One group, which included programming methodologists like Kent Beck and Ward Cunningham, wrote an Agile Manifesto, which basically said "people first" -- protect people and their immense ability to solve problems from anything that even feels like an obstacle for the sake of conformity and control. By this point, much of the energy being introduced in Agile had graduated from "ideas and principles", which were helpful, to "codified workflows", which were strict, nonsense versions of the real thing. The tragedy of such coöptation is that movements intended to free people become the next means to enslave them.

Earlier this year, one highly-indoctrinated corporate manager told me that this Codified Agile even forced people to communicate using 'form sentences', which required describing the state of work using particular sentence constructions. I tried it, but nearly vomited. "Hey", I said, "if you mess with natural language, you're messing with people's minds". We aren't computers. Go program some robots, but don't try to program humans.

Agile, in this form, became a clear tool of corporate bureaucracy (from start-ups to multinationals), tracking and controlling the worker's every thought. Do that, and you can guarantee thinking will be quite limited. Looking at the products pouring into the marketplace today, the 'lack-of-innovation' approach seems to be quite successful.

Let's look at another example: Patterns. 

Interestingly, even more directly than agile, software patterns borrow from Christopher Alexander's work on the connection between the freedom to think, feel and act, and the quality of the built environment and its capacity to facilitate life. Building profoundly good stuff in a holistic way to make life genuinely better.

Patterns are generic solutions, intended to enlighten people, not to rule them. In almost all cases, there may be better solutions, more important principles to follow, etc. Patterns in Alexander's sense are simply good solutions, something that both the heart and the mind can agree upon. You can use them to inspire you to find solutions to difficult problems in the real world. This is especially true when they are conveyed as a kind of gradient of patterns that apply at various scales, from cities down to tabletops.

Not coincidentally, Beck and Cunningham formally introduced patterns to the software world, in a 1987 paper. Interestingly, this took the form of a short application sequence of patterns, a tiny pattern language of useful ideas that effectively inspired a group to design a good interface.

But by the mid 1990's, a rival pattern group tried to do something far less subtle, and advocated for "must use" design patterns. This was not only ridiculous, it alienated many very sensitive and innovative people. 

Of course, corporations then made use of these new strictures as a way to evaluate the quality of software, and to 'force' it to happen -- when in fact it couldn't possibly work like that. The enormous damage done to the minds of young programmers by this "prescriptive patterns" movement -- the MVC pattern, for example -- is only slightly offset by the continued work of the original methodologists, in the form of the Hillside Group and the PLoP conferences, who carry on, studying pattern languages that, in a more easy-going, introspective and collaborative fashion, simply suggest various approaches and principles for solving various sets of problems in various contexts.

Now, it's kind of odd for me to 'complain' that these young methodological sciences within computing were coöpted, when modern computing itself emerged funded by the establishment, in the context of funneling power and money into the hands of corporations and governments. What else would one expect?

So, finally, let's think about what we can do to change this situation. 

I'd like to divide the possible approaches into two categories: 

1) making new methodologies harder to coöpt by nature, hence protecting the topics, and people engaged in them, from the forces of controllers.

2) changing the nature and effect of the computing economy itself, so the forces of controllers are weakened.

I note again that, during the temporary economic revolutions that were the dotcom boom, and before that the personal computing boom, it seemed that (2) was possible, maybe even easy, to achieve. It doesn't seem like that now, but that doesn't mean the situation is impossible. 

And, yes, I think computing people need to all become activists.

For (1), I believe we need to:

(a) put computing on a natural science footing, as I write about here often, which would resolve some of the bitter and scientism-laden sectarianism that divides engineers.

(b) make certain that computing has a continuing moral discussion about what it does, for whom, for whose benefit, under what conditions, and why.

For (2), I believe that (1), above, can lead to a new economy of high-quality free-as-in-freedom software and hardware, where communities coöperate with each other to build the products that will satisfy actual needs, without destroying people's minds and the planet underneath them. We need technology that does what people and communities need, and not technology for corporate power and greed. We need technology that improves and adds meaning and self-fulfilment to people's lives, not technology that distracts them from their lives. 

To do this, we need a serious moral, economic, ecological, human awakening. This is always latent in everyone, and possible to encourage, if we become activists for a better, more compassionate world, and do the hard work of consciousness-raising among the entire population … including by making the best software convey this humane sensibility. Also, inside businesses and institutions, we need to actively shift the establishment in this direction.

Then we can study method in peace.