Saturday, December 21, 2013

The Coöptation of Methodology

There have always been shop rules, safety regulations, good practice, et cetera, in every engineering environment.

In software engineering, the study of good practices, methodology, is increasingly confused with one very bad practice: forcing people to adhere to particular methods. 

I'm sure this goes up and down, but in 40 years of programming I've never seen such an invasion of 'productivity-driven' corporate culture … an invasion into a naturally collegial engineering environment. When work is normal among engineers, the power of the human mind to resolve issues is understood to be of prime importance, and not to be tampered with.

But today, perhaps, engineers are so demoralized and demotivated by the sheer volume of crap produced by the computer industry that, from a manager's perspective, they "need to be monitored and artificially motivated". Their hearts and minds are not as important as their obedience.

Decades ago, there began a push for 'programming metrics' such as 'lines of code per day', and at the same time, a push for 'conformity' such as code formatting standards. These were widely ridiculed by programmers -- not because engineers ignored the underlying issues; after all, it's interesting to know how a program changes in shape and size, and it's appropriate to format code so your colleagues can read it. But management's implication that judgement of such things by 'outsiders' could be anything besides trivial … was considered silly.

That is, until people realized that metrics weren't 'silly' but rather 'authoritarian'. Management, under performance pressure, was asserting itself, and looking for tools with which to assert authority. The managers were often former engineers themselves … so the industry was using the basic strategy for developing a colonial elite: elevating prisoners to prison guards.

Parallel to the search by the powerful for means of employee control was the fascinating internal effort, by engineering researchers, to experiment with new methods and better understand these complex effectiveness issues. This research is methodology … the science and study of method. It's a subtle study, which involves, among other things: a sensitivity to moments when things seem to be working well; and the building and testing of tools to make life easier and simpler, the better to respond to increasing demands for more complex software.

I want to step aside for a moment, and point out that while, in one important way, software has become more complex, in another important way it has not.

Increased complexity of a user experience is not necessarily an improvement. Usually quite the opposite. We still type our thoughts into text editors almost identical to those that were available decades ago, because the straightforward facilitation of the act of typing hasn't changed. This is because we don't want to disturb the human mind while it's doing its complex work. Nothing is more frustrating than, say, the Facebook editor's difficult-to-avoid attempts to change your writing into Facebook-internal links. The inability of our engineering culture to pass along an understanding of the problems with these kinds of automation is endemic to both technological optimism and corporate slavery, which promote breakneck production while sacrificing introspection and quality.

The interesting thing is: the user hasn't really changed much, hence the interfaces can't be much more complex than they were decades ago. The humanity of the user still must be accommodated: it is their brain we are trying to empower.  Hence the UI / UX dictum, 'keep it simple', can never change, and this highlights the fact that the effective qualities of interfaces 50 years ago aren't much different from those of today. 

But what goes on behind the scenes has changed dramatically, requiring massively different calculations for both the interface generation and the product's effect. Hence, despite the best efforts of the funders of computing, programmers still focus obsessively on their own methodology, whenever they can.

Unfortunately, every mildly popular experiment in methodology is coöpted by management and 'thought leaders' at the behest of the establishment -- they will literally steal ideas and turn them against people. They are trained to coöpt, and if they don't, someone else will. They are trained (often with such subtlety that they don't even notice it) to deceive all parties while turning ideas into weapons-for-hire. They have sub-industries of supportive minions who help them to do so.

This is why the dotcom boom suddenly felt like "The World Turned Upside-down". There was an explosion among restless engineers who suddenly, with new technology and new fields to explore, could escape the dull controlled world of wage-slavery and engage in activity freely, focussing not just on methodology, but on doing things with computing that really mattered -- moving computing away from supporting the corporate ripoff of consumers, taxpayers and other businesses for profit.

In any case, after the crash, there was a reaction to the dotcom boom -- much like the US establishment's reaction to the 1960s, an important civilizing era -- with post-2000 companies reasserting their power, and forcing firm lines-of-control upon product direction and engineering methodology.

I'll describe two examples of the coöptation of methodology, and then, like a good engineer, I'll address some of the existing and potential remedies.

I'll start with "Agile".

A discussion about methods of programming needs to include 'flexibility', in the sense of a 'responsiveness to change'. No one wants to write a program that is 'write-only'. It will obviously need modification, and, as it turns out, it needs modification during its initial development. This, in turn, implies that development must be done incrementally and continually, focussing on the most important things first -- priorities that get re-evaluated at each step -- in order to keep a program well-structured for change, well-adapted at any moment, and properly responsive to the inevitable changes needed in functionality.

Now, I would have written much the same paragraph above during the late 1970's, after reading The Oregon Experiment and A Pattern Language by Christopher Alexander, who had set up a system at my school, the University of Oregon, that facilitated user-design and even some user-construction, with an emphasis on coherent, satisfying, incremental improvement.

So, for me, saying "program development needs to be agile" is essentially the same as saying "programs need to be implemented by human beings". I agree, of course, that programs do need to be written by human beings! (Yes, I'm aware that a program can also be programmed to do something we could choose to call programming.)

So the new excitement about "agile development" in the late 90's seemed like some kind of propagation … a broadening of awareness about old ideas, letting new engineers know how things need to be, to do good work.

Interestingly, activities that were quite common solidified into technical terms. Which is fine. So, when I manage a difficult project, I like to have coffee in the morning with my team, and we can think hard, not in an onerous way, about what we've done the previous day, about what we learned from that, and about what we should do next, then agree upon next steps.

This kind of teamwork is as old as humanity. But then it came to be called a 'scrum' by those in agile. Also, the habit of sitting down with people, to share in programming efforts, became 'pair programming'. Again, I have no problem with this. For propagation, ideas need names.

Then something happened: a Coöptation. Not that this is new, but when it happened to 'Agile', it became a real monster in service of the corporation.

I honestly don't think it's worth detailing all the problems with these new strict "rules of agile". There was immediately a very strong reaction to this attempt at prescriptive engineering in the service of the corporate power-structure. 

One group, which included programming methodologists like Kent Beck and Ward Cunningham, wrote an Agile Manifesto, which basically said "people first" -- protect people, and their immense ability to solve problems, from anything that even feels like an obstacle for the sake of conformity and control. By this point, much of the energy around Agile had degenerated from "ideas and principles", which were helpful, into "codified workflows", which were strict, nonsense versions of the real thing. The tragedy of such coöptation is that movements intended to free people become the next means to enslave them.

Earlier this year, one highly indoctrinated corporate manager told me that this Codified Agile even forced people to communicate using 'form sentences', which required describing the state of work using particular sentence constructions. I tried it, but nearly vomited. "Hey", I said, "if you mess with natural language, you're messing with people's minds". We aren't computers. Go program some robots, but don't try to program humans.

Agile, in this form, became a clear tool of corporate bureaucracy (from start-ups to multinationals), tracking and controlling the worker's every thought. Do that, and you can guarantee thinking will be quite limited. Looking at the products pouring into the marketplace today, the 'lack-of-innovation' approach seems to be quite successful.

Let's look at another example: Patterns. 

Interestingly, even more directly than agile, software patterns borrow from Christopher Alexander's work on the connection between the freedom to think, feel and act, and the capacity of the human-built environment to facilitate life. Building profoundly good stuff in a holistic way to make life genuinely better.

Patterns are generic solutions, intended to enlighten people, not to rule them. In almost all cases, there may be better solutions, more important principles to follow, etc. Patterns in Alexander's sense are simply good solutions, something that both the heart and the mind can agree upon. You can use them to inspire you to find solutions to difficult problems in the real world. This is especially true when they are conveyed as a kind of gradient of patterns that apply at various scales, from cities down to tabletops.

Not coincidentally, Beck and Cunningham formally introduced patterns to the software world in a 1987 paper. Interestingly, this took the form of a short sequence of patterns applied in order, a tiny pattern language of useful ideas that effectively inspired a group to design a good interface.

But by the mid 1990's, a rival pattern group tried to do something far less subtle, and advocated for "must use" design patterns. This was not only ridiculous, it alienated many very sensitive and innovative people. 

Of course, corporations then made use of these new strictures as a way to evaluate the quality of software, and to 'force' it to happen -- when in fact it couldn't possibly work like that. The enormous damage done to the minds of young programmers by this "prescriptive patterns" movement -- the MVC pattern, for example -- is only slightly offset by the continued work of the original methodologists, in the form of the Hillside Group and the PLoP conferences, who carry on, studying pattern languages that, in a more easy-going, introspective and collaborative fashion, simply suggest approaches and principles for solving various sets of problems in various contexts.

Now, it's kind of odd for me to 'complain' that these young methodological sciences within computing were coöpted, when modern computing itself emerged funded by the establishment, in the context of funneling power and money into the hands of corporations and governments. What else would one expect?

So, finally, let's think about what we can do to change this situation. 

I'd like to divide the possible approaches into two categories: 

1) making new methodologies harder to coöpt by nature, hence protecting the topics, and the people engaged in them, from the forces of controllers.

2) changing the nature and effect of the computing economy itself, so the forces of controllers are weakened.

I note again that, during the temporary economic revolutions that were the dotcom boom, and before that the personal computing boom, it seemed that (2) was possible, maybe even easy, to achieve. It doesn't seem like that now, but that doesn't mean the situation is impossible. 

And, yes, I think computing people need to all become activists.

For (1), I believe we need to:

(a) put computing on a natural science footing, as I write about here often, which would resolve some of the bitter and scientism-laden sectarianism that divides engineers.

(b) make certain that computing has a continuing moral discussion about what it does, for whom, for whose benefit, under what conditions, and why.

For (2), I believe that (1), above, can lead to a new economy of high-quality free-as-in-freedom software and hardware, where communities coöperate with each other to build the products that will satisfy actual needs, without destroying people's minds and the planet underneath them. We need technology that does what people and communities need, and not technology for corporate power and greed. We need technology that improves and adds meaning and self-fulfilment to people's lives, not technology that distracts them from their lives. 

To do this, we need a serious moral, economic, ecological, human awakening. This is always latent in everyone, and possible to encourage in people, if we become activists for a better, more compassionate world, and do the hard work of consciousness-raising among the entire population … including by making the best software convey this humane sensibility. Also, inside businesses and institutions, we need to actively shift the establishment in this direction.

Then we can study method in peace.

Saturday, December 7, 2013

Computing as Natural Science


"Computer Science", as we see it today, is:

(1) part formal science (i.e., categorized with logic and mathematics)
(2) part engineering, tool-building and shop-practice
(3) part corporate and institutional hype

Modern computing has a strange history, originally the work of mathematicians and engineers, in support of powerful bureaucratic institutions, corporate and governmental, and heavily shaped by lies, often called 'marketing', with no corrective mechanism acting on doctrine, except 'success'.

This is why today's "computer science" is not a "natural science" (i.e., categorized with physics, chemistry and biology), although the majority of workers in the field are confused about this. The confusion arises partly because, in order to deal with our own complex human-made artifacts, e.g. computer systems, engineers apply exploratory methodologies to internal computing environments, which resembles the work of scientists -- although the actual similarity is 'merely obvious', and so remains unexplored by the natural sciences.

Our exploratory methodology does make programming a 'science', in the sense of a 'pursuit of knowledge'. But that definition doesn't put Computing into the same category as the natural sciences. To be in that category, we'd have to try to determine what the natural world of computing is. As things stand today, the computer's only relation to natural science is as a provider of instrumentation, a recruiter of scientific work for business purposes (e.g. computational geometry or integrated circuit chemistry), and, occasionally, a provider of engineering metaphors to working scientists.

Unfortunately, many incautious computer-people make vast, vague claims of scientific legitimacy, again mostly forwarded within the context of the modern worship of power, money and success.

Computing academics and business-people regularly and wildly claim to have discovered something about (1) the human mind, (2) laws of nature, (3) human nature, (4) language, (5) vision, (6) learning, (7) society, (8) music, (9) art … the list is endless. All without the most rudimentary understanding of how hard it is to uncover principles in the natural sciences, especially about such complex phenomena in the real world. 

It's the worst kind of scientism: their claims sound scientific because formal scientific instrumentation is used in a desperately impoverished theoretical framework. It is very reminiscent of the way Behaviorism undeservedly dominated and stunted difficult complex sciences for decades … in fact, Behaviorism itself has re-emerged, under different guises, within the weak environment of this "computer scientism". 

The situation is very unsatisfactory. Computing today simply has no foundation.

So let's change this. Let's join with the natural sciences.

I propose we explore a basis for computing outside of mathematics and engineering. If we can shift computer science to a study of something that happens in nature (including people as a part of nature), then most of the disconnects and confusions would fade into the background.

Theories could finally have a concrete footing, putting the exploration of questions on the same basis as any exploration within the natural sciences. There is much confusion about theory in engineering and mathematics, most of which boils down to confusion between mind-internal and mind-external factors. The Turing Machine is a perfect example of this disconnect, which I've written about before. Semantics is another. Engineered "cognition" also falls mostly into this category.

Since there is no approach to 'theory' in computer science that stands within the natural sciences, we'll need to create such an approach. 

Let's start by being puzzled by "the obvious", and ask a simple question ...

Where is 'computation' in the natural world?

The answer is very important, and worth thinking about before you reject or accept it: 'computation' is in our minds.

By which I mean that the word 'computation' refers to something within our mind: an idea or concept that recruits various capacities in the brain, which some other faculty of the brain composes, in some way, to form the idea. It is an idea that could, in principle, be identified within the brain by an fMRI experiment.

We then use this idea (within the brain), to inspire the design and construction of machines, which help us to 'do computation' (a related concept within the brain) so those machines can be considered 'computers' (ditto). 

By making use of another still mysterious mental function, 'using metaphor', anything we think about can be described, if we so choose, as a 'machine' or a 'computer'. Those are ideas (defined within the brain), as are concepts we 'apply' them to: ideas like 'the world', 'organisms', 'my mind' … etc.

My point is that 'machine' and 'computation' are not otherwise defined. As technical terminology, we have invented a gedanken model, the Turing machine, to frame some interesting mathematical exploration. And many kinds of machine-thought-models like this have been 'built' and explored in the complexity zoo. And we have attached the words 'machine' and 'computation' to them, which is a terminological choice, much as we say that 'planes fly' but we don't say that 'submarines swim'.
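To make the gedanken model concrete: here is a minimal sketch, in Python, of the kind of machine-thought-model meant here -- a toy Turing machine whose rule table increments a binary number. The rule-table format and the names (`run`, `INCREMENT`) are mine, purely illustrative; nothing in the argument depends on them.

```python
# A toy Turing machine simulator: a sketch of the gedanken model, not any
# standard library. Rules map (state, symbol) -> (write, move, next_state).

def run(tape, rules, state="right", halt="halt", blank="_"):
    """Run the machine until it reaches the halt state; return the tape."""
    tape = dict(enumerate(tape))    # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = tape.get(pos, blank)
        write, move, state = rules[(state, symbol)]
        tape[pos] = write
        pos += move
    # read the tape back, left to right, dropping blanks at the edges
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# Binary increment: scan right to the end of the number, then carry leftward.
INCREMENT = {
    ("right", "0"): ("0", +1, "right"),
    ("right", "1"): ("1", +1, "right"),
    ("right", "_"): ("_", -1, "carry"),
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, keep carrying
    ("carry", "0"): ("1", 0, "halt"),     # absorb the carry and stop
    ("carry", "_"): ("1", 0, "halt"),     # carry off the left edge: new digit
}

print(run("1011", INCREMENT))  # eleven -> twelve, i.e. "1100"
```

The point stands: nothing here 'is' computation in the natural-science sense; the rule table is a formal expedient to which we have chosen to attach the word 'machine'.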

But this is not the definition of 'machine' or 'computer' that's in our mind. That has yet to be explored. 

Our current theoretical machines are technical expedients. These are not natural science theories, because they don't begin to pick something out from the natural world and call it 'computation'. I mean, people have used these technical formalisms on things in the world, but are unconcerned if there are computations in the world that don't match the model -- similar to the way a writer of a successful chess-playing program wouldn't be concerned if the structure of the program has nothing to do with the structure of the human mind. After all, they say, it "plays chess", by their definition (as "planes fly"). In the same way, engineering models of 'computation' are used to 'build machines', not to explore the limits, mind-internal and mind-external, of what we might consider 'computation'.

Conclusion

We need to understand that 'machine' and 'computer' are not technical terms in the natural sciences. We need to see this as a problem. (1) And then we need to find out what the limits of these terms are in our minds, through experiment. (2) And then we need a separate terminology for things we consider computations outside our minds -- things that a physicist could identify with an instrument. (3) And these include what the brain itself does, which we often call 'computational', although we have no idea what kind of machine the brain is, and so we don't know if its operation in the natural world has any connection to the first two.

We have not explored these three straightforward natural science approaches to computing. And this is just the beginning. The world of computing has become so diverse that it will take years to straighten out this situation. But, ultimately, this approach will simplify Computer Science, and make it more intelligible, integrated, and authentic.