Saturday, December 21, 2013

The Coöptation of Methodology

There have always been shop rules, safety regulations, good practice, et cetera, in every engineering environment.

In software engineering, the study of good practices, methodology, is increasingly confused with one very bad practice: forcing people to adhere to particular methods. 

I'm sure this goes up and down, but in 40 years of programming I've never seen such an invasion of 'productivity-driven' corporate culture … an invasion into a naturally collegial engineering environment. When work is normal among engineers, the power of the human mind to resolve issues is understood to be of prime importance, and not to be tampered with.

But today, perhaps, engineers are so demoralized and demotivated by the sheer volume of crap produced by the computer industry that, from a manager's perspective, they "need to be monitored and artificially motivated". Their hearts and minds are not as important as their obedience.

Decades ago, there began a push for 'programming metrics' such as 'lines of code per day', and at the same time, a push for 'conformity' such as code formatting standards. These were widely ridiculed by programmers, not because the relevant issues were ignored by engineers -- after all, sure, it's interesting to know how a program changes in shape and size, and it's appropriate to format code so your colleagues can read it. But management's implication that judgement of such things by 'outsiders' could be anything besides trivial … was considered silly.

That is, until people realized that metrics weren't 'silliness' but rather 'authoritarian'. Management, under performance pressure, was asserting itself. And they were looking for tools with which to assert authority. The managers were often former engineers themselves … so the industry was using the basic strategy for developing a colonial elite, elevating prisoners to prison guards.

Parallel to the search by the powerful for means of employee control was the fascinating internal effort, by engineering researchers, to experiment with new methods, and better understand these complex effectiveness issues. This research is methodology … the science and study of method. It's a subtle study, which involves, among other actions: a sensitivity to moments when things seem to be working well; and building and testing tools to make life easier and simpler, the better to respond to increasing demands for more complex software.

I want to take an aside for a moment, and point out that while, in one important way, software has become more complex, in another important way it has not. 

Increased complexity of a user experience is not necessarily an improvement. Usually quite the opposite. We still type our thoughts into text editors almost identical to those that were available decades ago, because the straightforward facilitation of the act of typing hasn't changed. This is because we don't want to disturb the human mind while it's doing its complex work. Nothing is more frustrating than, say, the Facebook editor's difficult-to-avoid attempts to change your writing into Facebook-internal links. The inability of our engineering culture to pass along understanding of the problems with these kinds of automation is endemic to both technological optimism and corporate slavery, which promote break-neck production while sacrificing introspection and quality.  

The interesting thing is: the user hasn't really changed much, hence the interfaces can't be much more complex than they were decades ago. The humanity of the user still must be accommodated: it is their brain we are trying to empower. Hence the UI/UX dictum, 'keep it simple', can never change, and this highlights the fact that the effective qualities of interfaces 50 years ago aren't much different from those of today.

But what goes on behind the scenes has changed dramatically, requiring massively different calculations for both the interface generation and the product's effect. Hence, despite the best efforts of the funders of computing, programmers still focus obsessively on their own methodology, whenever they can.

Unfortunately, every mildly popular experiment in methodology is coöpted by management and 'thought leaders' at the behest of the establishment -- they will literally steal ideas and turn them against people. They are trained to coöpt,  and if they don't, someone else will. They are trained (often with such subtlety that they don't even notice it) to deceive all parties while turning ideas into weapons-for-hire. They have sub-industries of supportive minions who help them to do so.

This is why the dotcom boom suddenly felt like "The World Turned Upside-down". There was an explosion among restless engineers who suddenly, with new technology, and new fields to explore, could escape the dull controlled world of wage-slavery and engage in activity freely, focussing not just on methodology, but on doing things with computing that really mattered -- and moving computing away from supporting the corporate ripoff of consumers, taxpayers and other businesses for profit.

In any case, after the crash, there was a reaction to the dotcom boom -- much like the US establishment's reaction to the 1960s, an important civilizing era -- with post-2000 companies reasserting their power, and forcing firm lines-of-control upon product direction and engineering methodology.

I'll describe two examples of the coöptation of methodology, and then, like a good engineer, I'll address some of the existing and potential remedies.

I'll start with "Agile".

A discussion about methods of programming needs to include 'flexibility', in the sense of a 'responsiveness to change'. No one wants to write a program that is 'write-only'. It will obviously need modification, and, as it turns out, it needs modification during its initial development, and this, in turn, implies that development must be done incrementally, continually, focussing on the most important things first, priorities that get re-evaluated at each step, in order to keep a program well-structured for change, well-adapted at any moment, and properly responsive to the inevitable change needed in functionality.

Now, I would have written much the same paragraph above during the late 1970's, after reading The Oregon Experiment and A Pattern Language by Christopher Alexander, who had set up a system at my school, the University of Oregon, that facilitated user-design and even some user-construction, with an emphasis on coherent, satisfying, incremental improvement.

So, for me, saying "program development needs to be agile", is essentially the same as saying "programs need to be implemented by human beings". I agree, of course, that programs do need to be written by human beings! (Yes, I'm aware that a program can also be programmed to do something we could choose to call programming.)

So the new excitement about "agile development" in the late 90's seemed like some kind of propagation … a broadening of awareness about old ideas, letting new engineers know how things need to be, to do good work.

Interestingly, activities that were quite common solidified into technical terms. Which is fine. So, when I manage a difficult project, I like to have coffee in the morning with my team, and we can think hard about what we've done the previous day, not in an onerous way, and think hard about what we learned from that, and think some more about what we should do next, then agree upon next steps.

This kind of teamwork is as old as humanity. But then it came to be called a 'scrum' by those in agile. Also, the habit of sitting down with people, to share in programming efforts, became 'pair programming'. Again, I have no problem with this. For propagation, ideas need names.

Then something happened: a Coöptation. Not that this is new, but when it happened to 'Agile', it became a real monster in service of the corporation.

I honestly don't think it's worth detailing all the problems with these new strict "rules of agile". There was immediately a very strong reaction to this attempt at prescriptive engineering in the service of the corporate power-structure. 

One group, which included programming methodologists like Kent Beck and Ward Cunningham, wrote an Agile Manifesto, which basically said "people first" -- protect people and their immense ability to solve problems from anything that even feels like an obstacle for the sake of conformity and control. By this point, much of the energy being introduced in Agile had graduated from "ideas and principles", which were helpful, to "codified workflows", which were strict, nonsense versions of the real thing. The tragedy of such coöptation is that movements intended to free people become the next means to enslave them.

Earlier this year, one highly indoctrinated corporate manager told me that this Codified Agile even forced people to communicate using 'form sentences', which required the description of the state of work using particular sentence constructions. I tried it, but nearly vomited. "Hey", I said, "if you mess with natural language, you're messing with people's minds". We aren't computers. Go program some robots, but don't try to program humans.

Agile, in this form, became a clear tool of corporate bureaucracy (from start-ups to multinationals), tracking and controlling the worker's every thought. Do that, and you can guarantee thinking will be quite limited. Looking at the products pouring into the marketplace today, the 'lack-of-innovation' approach seems to be quite successful.

Let's look at another example: Patterns. 

Interestingly, even more directly than agile, software patterns borrow from Christopher Alexander's work on the connection between the freedom to think, feel and act, and the capacity of the human-built environment to facilitate life. Building profoundly good stuff in a holistic way to make life genuinely better.

Patterns are generic solutions, intended to enlighten people, not to rule them. In almost all cases, there may be better solutions, more important principles to follow, etc. Patterns in Alexander's sense are simply good solutions, something that both the heart and the mind can agree upon. You can use them to inspire you to find solutions to difficult problems in the real world. This is especially true when they are conveyed as a kind of gradient of patterns that apply at various scales, from cities down to tabletops.

Not coincidentally, Beck and Cunningham formally introduced patterns to the software world, in a 1987 paper. Interestingly, this took the form of a short application sequence of patterns, a tiny pattern language of useful ideas that effectively inspired a group to design a good interface.

But by the mid 1990's, a rival pattern group tried to do something far less subtle, and advocated for "must use" design patterns. This was not only ridiculous, it alienated many very sensitive and innovative people. 

Of course, corporations then made use of these new strictures as a way to evaluate the quality of software, and 'force' it to happen -- when in fact it couldn't possibly work like that. The enormous damage rendered in the minds of young programmers by this "prescriptive patterns" movement -- the MVC pattern, for example -- is only slightly offset by the continued work of the original methodologists, in the form of the Hillside Group and the PLoP conferences, who carry on, studying pattern languages that, in a more easy-going, introspective and collaborative fashion, simply suggest various approaches and principles for solving various sets of problems in various contexts.

Now, it's kind of odd for me to 'complain' that these young methodological sciences within computing were coöpted, when modern computing itself emerged funded by the establishment, in the context of funneling power and money into the hands of corporations and governments. What else would one expect?

So, finally, let's think about what we can do to change this situation. 

I'd like to divide the possible approaches into two categories: 

1) making new methodologies harder to coöpt by nature, hence protecting the topics, and people engaged in them, from the forces of controllers.

2) changing the nature and effect of the computing economy itself, so the forces of controllers are weakened.

I note again that, during the temporary economic revolutions that were the dotcom boom, and before that the personal computing boom, it seemed that (2) was possible, maybe even easy, to achieve. It doesn't seem like that now, but that doesn't mean the situation is impossible. 

And, yes, I think computing people need to all become activists.

For (1), I believe we need to:

(a) put computing on a natural science footing, as I write about here often, which would resolve some of the bitter and scientism-laden sectarianism that divides engineers.

(b) make certain that computing has a continuing moral discussion about what it does, for whom, for whose benefit, under what conditions, and why.

For (2), I believe that (1), above, can lead to a new economy of high-quality free-as-in-freedom software and hardware, where communities coöperate with each other to build the products that will satisfy actual needs, without destroying people's minds and the planet underneath them. We need technology that does what people and communities need, and not technology for corporate power and greed. We need technology that improves and adds meaning and self-fulfilment to people's lives, not technology that distracts them from their lives. 

To do this, we need a serious moral, economic, ecological, human awakening. This is always latent in everyone, and possible to encourage in people, if we become activists for a better, more compassionate world, and do the hard work of consciousness-raising among the entire population … including by making the best software convey this humane sensibility. Also, inside businesses and institutions, we need to actively shift the establishment in this direction.

Then we can study method in peace.

Saturday, December 7, 2013

Computing as Natural Science

"Computer Science", as we see it today, is:

(1) part formal science (i.e., categorized with logic and mathematics)
(2) part engineering, tool-building and shop-practice
(3) part corporate and institutional hype

Modern computing has a strange history, originally the work of mathematicians and engineers, in support of powerful bureaucratic institutions, corporate and governmental, and heavily shaped by lies, often called 'marketing', with no corrective mechanism acting on doctrine, except 'success'.

This is why today's "computer science" is not a "natural science" (i.e., categorized with physics, chemistry and biology) although the majority of workers in the field are confused about this. This is partly because, in order to deal with our own complex human-made artifacts, e.g. computer systems, engineers make use of exploratory methodologies, upon internal computing environments, which is similar to the work of scientists -- although the actual similarity is 'merely obvious', and so remains unexplored by the natural sciences.

Our exploratory methodology does make programming a 'science', in the sense of a 'pursuit of knowledge'. But that definition doesn't put Computing into the same category as the natural sciences. To be in that category, we'd have to try to determine what the natural world of computing is. As things stand today, the computer's only relation to natural science is as a provider of instrumentation, a recruiter of scientific work for business purposes (e.g. computational geometry or integrated circuit chemistry), and, occasionally, a provider of engineering metaphors to working scientists.

Unfortunately, many incautious computer-people make vast, vague claims of scientific legitimacy, again mostly forwarded within the context of the modern worship of power, money and success.

Computing academics and business-people regularly and wildly claim to have discovered something about (1) the human mind, (2) laws of nature, (3) human nature, (4) language, (5) vision, (6) learning, (7) society, (8) music, (9) art … the list is endless. All without the most rudimentary understanding of how hard it is to uncover principles in the natural sciences, especially about such complex phenomena in the real world. 

It's the worst kind of scientism: their claims sound scientific because formal scientific instrumentation is used in a desperately impoverished theoretical framework. It is very reminiscent of the way Behaviorism undeservedly dominated and stunted difficult complex sciences for decades … in fact, Behaviorism itself has re-emerged, under different guises, within the weak environment of this "computer scientism". 

The situation is very unsatisfactory. Computing today simply has no foundation.

So let's change this. Let's join with the natural sciences.

I propose we explore a basis for computing outside of mathematics and engineering. If we can shift computer science to a study of something that happens in nature (including people as a part of nature), then most of the disconnects and confusions would fade into the background.

Theories could finally have a concrete footing, putting the exploration of questions on the same basis as any exploration within the natural sciences. There is much confusion about theory in engineering and mathematics, most of which boils down to confusion between mind-internal and mind-external factors. The Turing Machine is a perfect example of this disconnect, which I've written about before. Semantics is another. Engineered "cognition" also falls mostly into this category.

Since there is no approach to 'theory' in computer science that stands within the natural sciences, we'll need to create such an approach. 

Let's start by being puzzled by "the obvious", and ask a simple question ...

Where is 'computation' in the natural world?

The answer is very important, and worth thinking about before you reject or accept it: 'computation' is in our minds.

By which I mean that the word 'computation' is something within our mind, an idea or concept that recruits various capacities in the brain, which some other faculty in our brain composes in some way, to form the idea. It is an idea that can be identified within our brain by an fMRI experiment.

We then use this idea (within the brain), to inspire the design and construction of machines, which help us to 'do computation' (a related concept within the brain) so those machines can be considered 'computers' (ditto). 

By making use of another still mysterious mental function, 'using metaphor', anything we think about can be described, if we so choose, as a 'machine' or a 'computer'. Those are ideas (defined within the brain), as are concepts we 'apply' them to: ideas like 'the world', 'organisms', 'my mind' … etc.

My point is that 'machine' and 'computation' are not otherwise defined. As technical terminology, we have invented a gedanken model, the Turing machine, to frame some interesting mathematical exploration. And many kinds of machine-thought-models like this have been 'built' and explored in the complexity zoo. And we have attached the words 'machine' and 'computation' to them, which is a terminological choice, much as we say that 'planes fly' but we don't say that 'submarines swim'.

But this is not the definition of 'machine' or 'computer' that's in our mind. That has yet to be explored. 

Our current theoretical machines are technical expedients. These are not natural science theories, because they don't begin to pick something out from the natural world and call it 'computation'. I mean, people have used these technical formalisms on things in the world, but are unconcerned if there are computations in the world that don't match the model -- similar to the way a writer of a successful chess-playing program wouldn't be concerned if the structure of the program has nothing to do with the structure of the human mind. After all, they say, it "plays chess", by their definition (as "planes fly"). In the same way, engineering models of 'computation' are used to 'build machines', not to explore the limits, mind-internal and mind-external, of what we might consider 'computation'.


We need to understand that 'machine' and 'computer' are not technical terms in the natural sciences. We need to see this as a problem.

(1) We need to find out what the limits of these terms are in our minds, through experiments regarding those limits.

(2) We need a separate terminology for things we consider computations outside our minds, things that a physicist could identify with an instrument.

(3) These include what the brain itself does, which we often call 'computational', although we have no idea what kind of machine the brain is, and so we don't know if its operation in the natural world has any connection to the first two.

We have not explored these three straightforward natural science approaches to computing. And this is just the beginning. The world of computing has become so diverse that it will take years to straighten out this situation. But, ultimately, this approach will simplify Computer Science, and make it more intelligible, integrated, and authentic.

Friday, November 15, 2013

Habits for any natural science of people

The natural sciences have explored 'people' as a subject since their beginnings. But this has proven to be extremely difficult to do sufficiently objectively for good results. 

'Difficult' is not the same as 'impossible', however.

Of course, complete objectivity is impossible ... but improved objectivity is very possible. Huge philosophical movements have denied this distinction, irrationally. The repercussions are still everywhere today.

I believe an important point needs to be better and more explicitly understood throughout any kind of human science: complex human features must be studied through the use of informants.

This has been the core of the new linguistics, for example. If everyone agrees on the grammaticality of something, it's not because grammar has been written down (it hasn't) and they have memorized it (they haven't), but instead because this complex faculty that evaluates grammar grows, with the same, very limited environmental stimulus, to be essentially equivalent in everyone. Otherwise we couldn't understand one another. To study this phenomenon, we simply need to accept that the language faculty is much like the stomach: primarily a genetic endowment.

What the stomach does is complex, but you can begin to know more about the stomach by thinking about it, guessing at its operation, and testing its complex reaction to situations that are carefully constructed to give interesting answers: experiments. All those steps are required for real science.

We do the same thing in studying language. If I remove the "in" from the previous sentence, we can all agree that it is still a grammatical sentence, and its meaning hasn't much changed. But if I remove "same", it is grammatical, but the meaning has changed. If I remove "do" it is no longer grammatical, but the meaning can be guessed at. These experiments tell us something about our language faculty, using this complex instinct as the experimental subject, and ourselves as informants.

To do this, one must respect the informant, that is, the experimental subject. One must, because they are helping you to understand some feature of the complex, almost-completely-mysterious human brain. We know vanishingly little about the brain in any animal, even less about our own, and even less about its very complex features.

That means that to explore and understand more about the mechanism of some very complex human capacity, say our ability to 'program smoothly', one must examine, very carefully, the subjective human impressions that arise in various situations. You must listen to the experimental subject. Their subjective impression is your experimental subject. It will help you to identify and isolate some human capacity, if you listen carefully, get them to explain what they can, and try to make it repeatable, and testable, in various ways.

That's how science works. Physicists don't ignore a lab assistant who says "we have a recurring, completely unexpected result". They don't say "oh, we understand everything" and fire the assistant and smash the laboratory's experimental apparatus. I mean, this could happen, but no one would think it good. And in the very early days of a theoretical development, they especially don't do this.

In computer science, by contrast, this set of habits has simply not developed at all -- outside of, in a very limited way, user interface design. Leading computer scientists have no time for such subtlety.

As an example, let me describe something that happened to me recently. I was having lunch with one of the major programming leaders of our time. I said that I had identified a new phenomenon, and I needed to show it to him, so he could experience it. He refused to look at it, feeling that he already understood it, even though, based on his description, I could tell he was talking about something related, but not sufficiently isolated. Now, any psychologist, or even a user experience designer, would have been interested. But because he felt this was territory he'd already been over, he didn't want to see the results. Even though we were talking about extremely complex and poorly understood human capacities.

This kind of indifference is rife throughout the computer world, though not universal. I've met the leaders of many movements in computing, who, one would think, would be very sensitive to human informants. When doing their own work, they often are. But I find most of them to be simply unaware of what we've learned in the last 50 years about the importance of informants for studying complex human phenomena, and the importance of sharing informant results. They believe that the relatively minor improvements in programming are more important, despite the fact that there's tremendous dissatisfaction and disagreement in the engineering world today … a situation which might be improved by a bit of natural science.

All of this might be partly because computing developed to respond immediately to the "drive to build". Especially the building of larger and more complex logical systems. The sensitivity of programmers who feel that something is wrong is ignored, even though a thorough, sensitive examination of such feelings will likely be the basis for moving forward to better scientific understanding of complex human capacities, and better tools for programming. We're in a state right now where a few programming movements feel they've found "the answer", and people are simply exhorted to "get it". While such attitudes are expedient when building a programming team, or even a programmer's movement, they are antithetical to science.

Friday, November 8, 2013

Computation is broader than the Turing Machine

This should be one of those things we learn in school, and yet we learn the opposite. 

The notion of computation is not identical with a Turing machine.

First, the notion of computation is a human state of mind, not a technical term.

Second, if you define 'computation' to be 'anything that can be performed on a Turing machine', then you'll never discover any other form of computation. 

This is very much like the way the word 'gene', a term coined for discussion of the source of an inherited feature, became 'DNA', due to the influence of the new molecular biology. But of course this is now known to be incorrect: there are many non-DNA factors that enter into inheritance (environment, epigenetics, natural law, et cetera). So, biologists either need to use a different word for the broad notion of 'gene', which is not advisable, or they need to reclaim it from molecular biology, so it can be used again in discussions of inheritance, features, etc.

In computing, let's look at a very simple thing, which computational systems do, but which simply cannot be captured by a Turing machine:

* Determine if two inputs are received simultaneously.

Clearly this is a computational task. It's clearly an automatable task. It's a task often performed by actual computers. But it has nothing to do with a Turing machine. You simply cannot determine simultaneity with a single input, if one construes the tape head as an input (which one shouldn't, see below). The result of a 'simultaneity determination' could be signaled to more than one additional computer at a time, through tapes if preferred, and the ramifications of this are even further from a Turing machine's capacities.
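To make the point concrete, here is a minimal sketch (in Python, with arrival times and a tolerance window invented purely for illustration) of the decision a real multi-input machine makes: whether two inputs arrived at the same time. The decision requires access to real-world time, which the classical single-tape, single-head model simply does not have.

```python
def simultaneous(t_a, t_b, tolerance=0.001):
    """Decide whether two input events, given as arrival timestamps in
    seconds, count as simultaneous within a tolerance window."""
    return abs(t_a - t_b) < tolerance

# Fabricated arrival times from two input lines, for illustration:
print(simultaneous(10.0000, 10.0004))  # True: 0.4 ms apart
print(simultaneous(10.0, 10.5))        # False: half a second apart
```

The point is not the trivial arithmetic, but that the inputs arrive stamped with real-world time: the computation is about an event in the world, not about symbols already written on a tape.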

So a 'multiple-head Turing machine' or a 'multi-tape Turing machine' (a slightly different model) can do things that we call computation, which cannot be done on a single-head Turing machine, if we add, for example, time or some other signaling capacity to this form of TM. I thought everyone understood this, but I found this on the Wikipedia page on computability:

"Here, there may be more than one tape; moreover there may be multiple heads per tape. Surprisingly, any computation that can be performed by this sort of machine can also be performed by an ordinary Turing machine, although the latter may be slower or require a larger total region of its tape."

It's surprising because it is not true, if one were to actually construct usable Turing machines. The point of a gedanken experiment is to inspire such considerations. Doing two things at once, or seeing if two things are happening at once, is clearly not possible with a single-head Turing machine. This is just the beginning of the differences … a brain, for example, is obviously capable of real-world computation that would be impossible to perform with a Turing machine. Millions of calculations by a module of the brain, which are then passed through different routes to different modules simultaneously? How is that a Turing machine?

Now, you could simulate the environment and the multi-head Turing machine together, using a single-thread computation (which is equivalent to a Turing machine). But look at what we've just done. We've defined an operation that takes place in the world as somehow 'computationally-equivalent' to a simulation of the world. They aren't equivalent. The simulation is only a tool for investigating our theory of what goes on in a world intertwined with computing. No one would say that this single-threaded machine is equivalent to a biological world that it might be simulating. Only in computing would we get so confused about the role of theory.
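The difference is easy to see in a single-threaded sketch (Python; the event times and tolerance are invented for illustration) that simulates concurrent inputs as an ordered queue of virtual timestamps. The simulation can 'detect' simultaneity, but only because we wrote the timestamps into it in advance: it manipulates a model of time rather than inhabiting time.

```python
import heapq

# A single-threaded discrete-event simulation. 'Simultaneity' here is a
# property of virtual timestamps we assigned beforehand, not a fact
# observed in the real world. (Event times and tolerance are invented.)
events = [(10.0000, "input A"), (10.0004, "input B"), (12.5, "input C")]
heapq.heapify(events)

TOLERANCE = 0.001
coincidences = []   # pairs judged 'simultaneous' in virtual time
previous = None
while events:
    t, name = heapq.heappop(events)
    if previous is not None and t - previous[0] < TOLERANCE:
        coincidences.append((previous[1], name))
    previous = (t, name)

print(coincidences)  # [('input A', 'input B')]
```

Calling this 'equivalent' to the real multi-input machine is exactly the confusion described above: it conflates the theory's model of the world with the world itself.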

Since at least the 17th century, the idea of the human brain as a kind of computer or machine has been useful for investigation of what it may be to be human. The problem: we don't know what kind of machine or computer it is. We still do not, to this day. Our technical definitions of computability will definitely need expansion, to include real computational phenomena, before we can begin to understand what biological computers do.

I believe that, at the very start, we need to introduce a de-mechanized version of the Turing machine. In the early stages of any science, the notion of intelligibility tends to be 'mechanical'. Pre-Newtonian physics, in early investigations by everyone in the 17th century, tended to make mechanical models the gold standard of a good theory. That disappeared after Newton discovered that there was no mechanical explanation of forces. In fact, his finding expanded the notion of 'mechanical' at that point, to include action at a distance.

But the very human concept of mechanics as explanatory science keeps recurring, and it has done so in computing. For multi-head and multi-track Turing machines we need to ask "what is the tape mechanism?" because it matters. If we imagine it to be a real, mechanical tape mechanism, then it is a sensor, and "simultaneity" in a "multi-sensor" Turing machine would be a computable question, in a real machine.

Computing is something that happens in the real world. So we need to ask ourselves: how can computing move from a confused formal-science / engineering hybrid to a natural science?

Friday, October 11, 2013

Vitalism and Machine Learning

There's a methodological mistake, often made in the early stages of a science. It was pointed out in 1865, relative to biology, by Claude Bernard, in his influential An Introduction to the Study of Experimental Medicine. 

Bernard wrote that we tend to over-simplify explanations. It's natural for us. In some sense, science would be impossible without our ability to guess at simple theories, as Charles Sanders Peirce pointed out, describing his notion of abductive reasoning. This is because science is about understanding and improving theories, and they need to be comprehensible, or else they can hardly be tested and improved. It's a process that makes heavy use of our intuition.

But we go overboard, and rush to judgement, quite often, ignoring Newton's lesson to us: I frame no hypotheses. By which he meant that he could not find an explanation for gravity, and so he didn't try to invent one -- he only provided a description. And this is where we are, still, today, with our theories of natural forces.

In an objective sense, science is about constructing comprehensible theories of the world, not comprehending the world itself. But we still want to understand the world itself. This is a desire for the feeling of understanding, not for the much more unnatural form of understanding in science: in the form of a theory that inspires experimentation and helps to uncover enlightening principles.

For Claude Bernard, vitalism was a good example of this. The notion of a vital force was already not a practical concern to researchers by 1865. It had recently been understood that much biological physiology could be explained by analysis and experiment, including the chemical analysis of organic tissues in the hot field of organic chemistry. Not everything could be explained, but some things could be, and much could be described -- much more than might be expected. The idea that there is a force like gravity, specific to living things, wasn't needed. 

In fact, vitalism is too simple to be correct -- in a way, it's disrespectful of nature. It's a trivial explanation based on our woefully limited understanding -- driven by our natural instincts, which are capable of distinguishing life from the inorganic, since we are living organisms ourselves.

I'll paraphrase the case Bernard makes: imagine someone sees a bird, lifting off the ground. Without further evidence, the person could easily explain that birds are suffused with 'anti-gravity substances'. But the bird's ability turns out to be layers of complex interactions between special qualities of birds and little understood physical-chemical laws. It's very complex. There's no simple explanation. That's why it's biology and not physics: physics deals with phenomena that are 'easy' to isolate, and more complex problems are tossed downstream to the chemists and biologists.

That said, there was a vital force, if you stretch the definition of force beyond the physicists' technical terminology: the teleology-like mechanism of genetics. But that guiding hand is not like gravity, which is what the 19th-century vitalists were looking for. 

So it was quite reasonable for Bernard to claim vitalism to be counter-productive. It was better to say "we don't yet understand why living creatures do certain things, like grow, reproduce, form living shapes, etc". Even in 2013, we still can't completely characterize the difference between living things and inorganic matter, let alone explain the phenomena -- but that doesn't mean we should explain away our ignorance, with something that sounds real, like a 'vital force'. Real science is a much more humble affair: we have to admit that we don't have many answers to very serious and important questions. And there's a corollary: Science must be opposed to Scientism, the offhand speculative explanation of things 'scientifically', which is comically prevalent, and exactly as irrational as relying on angels and spirits to explain features of the natural world.

Bernard's book also pointed to a mistake being made by young researchers, a different kind of 'too simple explanation', which was: "it's all chemistry". 

As Elizabeth Gasking pointed out in 1970, in The Rise of Experimental Biology, for many decades after Bernard's book, medical investigators ignored their own ignorance of biological processes, in favor of simplistic chemical explanations. Bernard's book tried to counter this by presenting the influential idea of the 'internal environment' of the organism, what we would now call its physiology. The point was that this internal environment was so poorly understood that the consequences of chemical interventions are very difficult to understand. This advice, now critically important to the biological sciences, is still ignored by, for example, the propaganda-driven pharmaceutical industry.

Let's take another common mistaken simplification, this time in our approach to the study of the human mind. 

It's common to assume that the mind is 'plastic', a tabula rasa with no a priori structure, equipped with a magical, statistically empowered general-learning mechanism that, somehow, always develops into the mind of a human being. But it's very unlikely that any creature, produced in this way, would be able to survive in the world, let alone be identifiable as the same species, capable of communicating with its own kind, etc.

The 'plastic' mistake is sometimes called behaviorism, empiricism, network-learning, or associationism. But really, it's very like vitalism, in that behaviorists are trying to find a simple, not useful, not verifiable, irrational 'force' or mechanism that 'does everything' or 'explains everything'. It didn't work, and it's completely discredited in biology. But it still hangs on, mostly in engineering, because it is easier to program based on this discredited model, than it is to program based on the unknown processes of the actual mind ... there are in fact many ways one can make a machine 'learn' in this way, some of which are useful for making products. But these techniques are not part of the natural sciences, and their employers shouldn't pretend that they are, as many do in modern technology companies.

Computer engineers should heed Bernard's lesson from biology: over-simplification of complex living systems is bound to get you in trouble. Stay aware of important mysteries. We need to keep it clear: techniques for machine learning are very distinct from the unknown operation of the human mind. This should be a basic tenet of computer engineering, like the distinction between formal and natural languages. Machine-learning techniques have nothing to do with human intelligence ... except that, of course, human intelligence invented these ideas, a point that offers a much more important insight into actual human cognition than any machine-learning technique itself.

Sunday, July 7, 2013

Intelligibility and technical definitions: explanation vs. explanatory theory

Although there are methodological similarities, engineering is not science (see the previous post). But there's an important methodological artifact that science and engineering share.

In science, theories must be intelligible. You want to explain something to other people, so they can understand it. This is true in many human endeavors, and it's true in engineering as well. 

But there's more: we program within the context of what seems to be an 'explanatory theory' of a computational model, one we have implemented as a development environment on the machine. It's not a theory about some universal principles in the natural world ... it's a 'theory' about something that we have created. In an important sense, it is not a theory ... it's an explanation of our construction. 

This explanation is similar to an explanatory theory in an important way: technical terminology only makes sense, and is only intelligible, in the context of the explanation.

So, the practice within technological work -- defining highly restricted terms within the context of an intelligible explanation -- uses the same criteria of intelligibility that scientists use, when explaining results in terms that are understood only within an explanatory theory!

No wonder science and engineering can seem so similar! Both have a need for precise jargon, in the context of a long explanation!

But one is a theory, because it attempts to explain, not something we've built, but something that we believe is going on in nature. The other, I'm sad to say, is just an explanation.

That's my pre-theoretical guess about a factor involved in the human organism's common confusion between engineering and science. There are many other factors, but no terminology, and no theory ... because there's no scientific work on this topic.

Thursday, July 4, 2013

The application of feeling

Most engineering, most science, most art, and most problem-solving, involves operations of the mind that we are rarely even dimly aware of. This is through no fault of our own: most "conscious activity" is unconscious, and we have no access to it. 

Over the millennia, we've identified a few kinds of thought, but we're quite far from understanding them, and there are clearly many other kinds of thought, mostly very far from our conscious awareness, which enter into our every living moment.

One of these is the application of feeling. This is described in Christopher Alexander's four-volume The Nature of Order, but essentially, in order to tap into our ability to create really good structure, which resonates deeply with people, we need to follow a process where we ask ourselves whether choice A or choice B has deeper feeling. I would put 'deeper feeling' in quotes, but in order for this technique to have good results, you need to ask it, of yourself, with as little interference as possible, and as honestly and simply as possible.

So, what might be going on here?

This technique seems to allow us to recognize (I almost said 'resonate with') structures effectively shaped by natural laws: the still poorly understood biophysics of living organisms. The conscious mind, and hence much of the subconscious mind, can interfere with our ability to see coherent, well-balanced, natural geometry. But 'the shortcut', an honest response to the question 'which has more feeling?', seems capable of resolving many issues like 'which of these is most profound?' This can be very useful for an artist trying to unleash their full human potential, and for an engineer trying to create a really good machine, program or user interface. It even enters into scientific work: which is the best way to represent a theoretical or experimental result? The one which has the most feeling, the one which is simple, lucid, elegant, and beautiful. This can be a guide to improve your work as you do it. Advanced mathematics, and even simple arithmetic, would be almost impossible without it. We use this criterion, to some degree, all the time. Just not enough. And we allow other forces to interfere when we shouldn't.

Now, this is an assertion. You can read The Nature of Order and test it yourself. I should point out that it's not really a training manual, or a training course, but more of a long explanation of the technique, its basis, and what it might mean. Showing someone how to apply feeling is something, I find, that is best communicated person-to-person. I'm not sure why.

Do you believe this? Many artists do, because they really need to use this technique explicitly throughout the day, to do their best work. Engineers have a harder time believing it, primarily because their working environment was formed with only limited, and rarely explicit, use of this shortcut. So we spend a lot of time wrestling with new, rather pedestrian computer tools that never seem to offer much of an advantage to engineers-as-human-beings. The programming environment designers didn't know how to apply feeling, and certainly didn't try to facilitate its use among their users.

The hard sciences have very little to say about the application of feeling. It's too complex a topic. Remember, physics is successful because it deals with the simplest imaginable phenomena. Chemistry is more complex. Biology even more complex than chemistry. And human biology is just off the scale ... we know much less than most researchers want to admit. The biology of human thought, which overlaps with psychology, is so complex that most of it is still 'soft science'.

So, it will be a very long time before we understand the application of feeling as a part of the human mind. We're not even sure what understanding it might look like. We can certainly see what kinds of structures are produced, and which are effective for people -- but their specific geometry is hard to characterize, at least in a way that a computer might recognize. Exploring that geometry is a primary topic of The Nature of Order, using a technique that's not so different from exploring universal grammar in biological linguistics.

We could probably get a decent fMRI result of the state the mind enters when this evaluation-of-feeling is taking place. It's hard to see which questions to ask, after that. We could do more fMRIs to differentiate the application of feeling from other kinds of mental activity, essentially asking "how often does the feeling-evaluation-faculty enter into other mental processes?"

But I'm mostly interested in how hard it is for engineers to use this technique, even when it is done within a perfectly reasonable research context. It could have been used to 'rate' patterns in the software patterns movement, but it wasn't, even though rating is a prominent part of the original book A Pattern Language, and a critical tool in its predecessor, The Oregon Experiment.

This has always troubled me. User interface designers have no trouble asking themselves this kind of question. Why do programmers, or programming-tool developers, have so much trouble? If we're ever to make any progress with software tools, engineers will have to be open to asking and answering these perfectly reasonable questions about their interface with the machine.

Wednesday, July 3, 2013

Three aspects of programming language design: mentation mysteries, metaphysical mathematical momentum, and "the shortcut"

Programming language design is something that humans do. 

One could examine biologically the effects of notations on the human mind, and try to create a working model of what might be called the "mentation mysteries" behind programming languages. And use that to set some kinds of criteria for future programming languages.

That's hard work. That's why it's called hard science. So people don't do this, when they design programming languages. They obviously don't need to, because plenty of programming languages exist, even though this research is still on the starting block.

So, instead of biological research, what other tools do people use to create new programming languages? 

Two tools: metaphysics and feeling.

By metaphysics I mean metaphysical mathematical momentum. 

Someone has a theory of what might be useful. It's based on some 'computational model' that they respect: Haskell programmers prefer functions that don't act on named state variables, Grogix programmers believe the world could be defined by operational grammars, LISP programmers believe that our world can be defined as ramified lists, SNOBOL programmers believe that ultimately everything is a string, etc. This is not just the ontology of their world ... made of lists, strings, grammars, monads, etc. ... there's also a metaphysics, a model of what lies hidden behind programming, in a fantasy computational system which acts as the programmer's substitute for the "real world". These language designers then develop mathematics and achieve momentum which can trample many other considerations for their tool, such as usability.

Well, there's only so much an engineer has time for ... So most aspects of most programming languages did not emerge from either biologically confirmed results or theoretical imperatives.

They emerged because they felt right.

All those open questions about the mind, and those potentially useful metaphysical systems, might yield something useful later. But good engineering (and that's what good programming language design is) needs to work now. So, we use our sensitivity, we try things out, we keep the good stuff, we throw out the bad stuff, and we try to keep the result feeling coherent, while we maintain some kind of logical consistency and pragmatism.

It's the shortcut. 

You can't prove that your "answer to the crisis of programming" is right. But, using your gut, you can build something good, something perhaps a touch idiosyncratic, and see if it works for you, your friends, and some members of the public.

That said, it's really important to do a post-mortem. Be explicit about which of the three criteria above are being used in different parts of your language design. Because, most likely, you are using these three, separately or in combination, for different aspects of your language. And the poor engineer learning this language should be able to know what was going on, in your analysis, when you invented this language feature, or that one ... It's simple transparency. To a certain degree, you could sidestep analysis by just answering "I used my feeling" for all of it, but that would be silly. We tend to think about something and use our feeling to judge something. If you can't analyze your own language in this way, I think people would be quite right in not taking the effort to understand your programming language. They may be wrong, and they may be missing out, but the onus is on the inventor to explain what's been done, and why.

If you think your programming language is so simple and pure that it requires no explanation, I beg to differ, from experience. I created a programming language (grogix) which only has one kind of statement. But it takes a great deal of work to know whether you can use that simple statement to do all the things you want to do, and do them in a way that is easier than some existing way. So an explanation and demonstration is required.

So, look at your creation, and ask yourself to rank the drive behind any given feature:

  1. cognitive science
  2. metaphysical imperative
  3. feeling

Sunday, June 30, 2013

The Biology of Computer Programming

Computer Programming (CP) is a human activity. 

Humans are organisms. So CP can be investigated biologically.

Scientific investigation makes use of any method that produces results: the further understanding of the natural world. So, let's take two complementary, established approaches to investigating human activity within the biological sciences. 

Approach 1: CP is a human experience that is partly accessible to consciousness. That is, there is some conscious experience, likely a very small part of the overall activity, which people who do CP are aware of. This is a repeatable awareness, found in common to all people who do CP, and although there isn't much outward indication, the moments of the experiences are mostly identifiable by informants. At the very least, I know when I perceive that I'm doing the various tasks that constitute CP. 

Approach 2: in the moments when these experiences are reported, we look for externalized indicators (perhaps with instrumentation).

Principles of investigation

In a nod to Franz Boas, I need to say: I couldn't investigate this question if I wasn't a computer programmer. 

In a nod to 17th & 18th century philosophers of the mind, for example David Hume or Dugald Stewart, I assume that these internal experiences are small indicators of large, complex, inherited mechanisms that shape our mental activity and our actions in the world. 

In a nod to ethologists such as Niko Tinbergen and Konrad Lorenz, I can assume that these are significantly instinctive abilities, and so can be investigated for any member of our species. 

In a nod to geneticists, I can assume that these abilities are highly enabled and shaped by our genetic endowment, with phenotypic modifications that take place during development and life.

In a nod to Noam Chomsky, I can use myself as an experimental subject, an informant in linguistic terms, and begin to ask increasingly sharp questions, and perform experiments, regarding aspects of these human experiences that we all take for granted. 

In a nod to Christopher Alexander, I can use the informant method to examine in detail extremely obscure, complex, hard to find, yet universal, human experiences. 

In a nod to recent use of fMRI techniques by cognitive neurologists, I can turn these experiments into external readings, which help us to begin to sketch out the neurological structures that are part of the activity, and the relation of these activities to other complex human experiences.

In another nod to Chomsky, we assume that any human experience, or any biological feature, is composed of some combination of these three factors (A, B, C): 

A) natural law: for example, there is no gene that says a single cell must be spheroid rather than some other shape -- this is natural law. Genetics operates within these biophysical constraints, which are still poorly understood.

B) biological inheritance: genetics, epigenetics, and various known biological mechanisms -- and poorly understood ones, for example, structure-preservation during morphogenesis, and whatever makes multicellular organisms cohere so robustly.

C) external stimulus: from the environment, during the development and life of the organism.

Approach 1

Let's begin with a general interview of our informant. How does CP feel?

CP feels like any craft, in some way, with some planning, some building of a strategy, some carrying it out, some storytelling (to oneself and others, parts of which become part of the product), a great deal of trial and error, and the use of a great many unknown mental capacities. There is also a sense that one must endeavor to keep the structure of the program coherent, and the user interface must be effective. There are little things that we do, which are quite important, like trying to keep the code well-organized.

That's a very complex human activity. So let's tease one tiny possible CP piece for testing by our informants.

A tiny theory

Here's a small, testable theory: a programmer considers one aspect of "good coding" to be the representation of the same code by the same name. I posit, in this theory, that the human "moment of recognition" of "the same code doesn't use the name" and the "resolution" of this, by "replacing the code" with the "previously defined name for that code" is carried out by people in the same way, activating the same pattern within the brain, that one would activate when putting identical objects into the same pile or while putting books into sets on a shuffled bookshelf.

I may be wrong here. It's just a theory. It also may be far too complex a comparison, requiring simplification. 

Also, there are many questions one could ask about CP mentations that only require comparisons between very similar coding activities (see the conclusion for an example). But, for broader interest, I thought I'd include a test of a broader comparison between mental activities.

The programming side of the experiment is pretty straightforward: in front of the subject, we place a few lines of code. There are a few variable definitions at the top. One complicated value sits on the right-hand side of one variable definition. The same complicated value has been placed elsewhere, within the few lines of code. We ask the informant to "clean up this code":

    var x = a.b.c.f(15);
    var y = 0;
    print x, y;
    y = z( a.b.c.f(15) );
    print x, y;
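In Python terms (a hypothetical rendering of the snippet above -- `f` and `z` below are stubs standing in for `a.b.c.f` and `z`), the cleanup we'd expect replaces the duplicated expression with its existing name:

```python
# Hypothetical stubs standing in for the snippet's a.b.c.f and z:
def f(n):
    return n * 2

def z(v):
    return v + 1

# Before cleanup: the complicated value appears twice.
x = f(15)
y = 0
y = z(f(15))  # duplicated expression

# After cleanup: the informant replaces the duplicate with its name.
y_cleaned = z(x)

assert y_cleaned == y  # behavior is unchanged by the cleanup
```

The experiment's question is whether performing this replacement feels like, and activates the same brain pattern as, sorting identical objects into a pile.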

We then create an equally simple version of the comparison activity, and have our informant perform the two, and ask if it feels like the same kind of mental activity. For this methodological discussion, let's assume that we had it right, and most of them said "yes".

Approach 2

At this point, we can bring in an fMRI, and more informant-subjects. We scan for normal brain activity for the action of typing, cutting and pasting, looking at a screen, etc. For our comparison activity, we scan for normal identifying and shifting of objects, books, etc. We then subtract these from the scans during our two experimental situations, look at the remainder, and see if they are the same.
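The subtraction logic can be sketched with toy numbers. Everything here is invented for illustration -- real fMRI analysis involves spatial registration, statistical modelling, and far more -- but the shape of the comparison is this:

```python
# Toy sketch of the subtraction paradigm: each 'scan' is a list of
# activation levels per region. All numbers are invented.
def subtract(task_scan, baseline_scan):
    return [t - b for t, b in zip(task_scan, baseline_scan)]

# Regions:           [motor,  visual, hypothesized 'grouping' region]
coding_task    =     [5.0,    4.0,    3.0]
coding_base    =     [5.0,    4.0,    1.0]   # just typing and looking
sorting_task   =     [2.0,    3.0,    3.1]
sorting_base   =     [2.0,    3.0,    1.1]   # just handling objects

coding_residue  = subtract(coding_task, coding_base)
sorting_residue = subtract(sorting_task, sorting_base)

# If the residues match (within tolerance), the theory survives this test.
same = all(abs(a - b) < 0.5
           for a, b in zip(coding_residue, sorting_residue))
```

With these made-up numbers the residues match, so the toy version of the theory survives; with real data, of course, the interesting outcome is whichever way the residues actually fall.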

Clearly, such an experiment could show the theory to be inconclusive. At that point, the theories, questions and experiments would need to be sharpened up.

The limits of comparison studies

It's quite likely that not every mental aspect of programming can be revealed through comparisons with other kinds of activities. There are many times when programming feels like nothing else, although it is possible that there is some mental activity that is simply exaggerated by programming.

In these cases we will continue to iterate between the approaches until we can more clearly identify such unique mental activities, and then probe them by constructing more subtle experiments.

The Scale

It is likely that there will be hundreds of mental theories of CP that could be identified, and confirmed or rejected, in this way. It is likely that unifying these theories, simplifying them as much as possible, will be difficult work, but hopefully revealing.


At that point, yes, we could, perhaps, begin to use this information to model human CP activity, creating a computer simulation that, with some stimulus and direction, might do something we could metaphorically call 'programming'. 

Of course, we could create such "human simulations" without doing any biological research on the actual human mind -- this has been done in computer science and in the computer industry for at least 50 years. That's the field of Artificial Intelligence, which, in its rush to a "solution", has invented hundreds of techniques that have almost nothing to do with the actual science of biology. Artificial Intelligence, and engineering in general, is not science. Science is our attempt to discover what is actually happening in the natural world. Science is not the drive to imitate nature. It's the drive to understand it.


I believe that moving the study of computing into biology can be motivated by the construction of many testable theories, something that any programmer can begin to do. The reduction and teasing out of the features of CP, as a human activity, the identification of fundamental actions, is something that is commonplace in the engineering world, as a way of passing along principles of good engineering practice: patterns being a recent example. Repositioning and refining these practical insights into biological theories is very difficult, and leads one to very different questions than engineers are used to asking.

Just using approach 1, coding activities can be reduced to increasingly interesting questions, such as, "is grouping code into a procedure a completely different activity from dividing one procedure into two, and if not, what are the overlapping mental activities?" These can be tested just using approach 1. I think if we are going to have a true science of computing in the future, it will be essential to examine these kinds of questions.

Thursday, June 6, 2013

Phenomenology as grist for cognitive biology

So, when a philosopher says something like "science cannot tell you what it's like to be thrown into life", well ...

... if philosophy (or literature or whoever) has identified some actual feeling or idea or internal process, that is, some kind of experience, which they assert as true subjective experience, then that is something that could be investigated from a biological perspective ... as a cognitive science experiment.

So, put some people in fMRI machines, ask them to evoke that experience and to tell the investigator when they're feeling it, and then the investigator can look to see what patterns of brain activity are common at that moment among many subjects, find the common localized structures, compare to other structures, find elements, field effects and compositional mechanisms, etc.

The same can be done with, say, a "sense of individuality" or a "moment of free will" or an enjoyment of "being in the moment". If the phenomena are common experiences, then we can start to identify them. We won't find out much, because understanding of the brain is in such a primitive state, but at least existentialism can be brought back under scientific review, and we can separate the good stuff from the bad.

Yes, it's a little ironic, since the original idea was to emphasize individuality. But irony shouldn't stop us from learning something new about the common human experience.