Friday, November 15, 2013

Habits for any natural science of people

The natural sciences have explored 'people' as a subject since their beginnings. But doing this objectively enough to get good results has proven extremely difficult.

'Difficult' is not the same as 'impossible', however.

Of course, complete objectivity is impossible ... but improved objectivity is very possible. Huge philosophical movements have denied this distinction, irrationally. The repercussions are still everywhere today.

I believe an important point needs to be better understood, and more explicitly understood, throughout any kind of human science: complex human features must be studied through the use of informants. 

This has been the core of the new linguistics, for example. If everyone agrees on the grammaticality of something, it's not because grammar has been written down (it hasn't) and they have memorized it (they haven't), but instead because this complex faculty that evaluates grammar grows, with the same, very limited environmental stimulus, to be essentially equivalent in everyone. Otherwise we couldn't understand one another. To study this phenomenon, we simply need to accept that the language faculty is much like the stomach: primarily a genetic endowment.

What the stomach does is complex, but you can begin to know more about the stomach by thinking about it, guessing at its operation, and testing its complex reaction to situations that are carefully constructed to give interesting answers: experiments. All those steps are required for real science.

We do the same thing in studying language. If I remove the "in" from the previous sentence, we can all agree that it is still a grammatical sentence, and its meaning hasn't much changed. But if I remove "same", it is grammatical, but the meaning has changed. If I remove "do" it is no longer grammatical, but the meaning can be guessed at. These experiments tell us something about our language faculty, using this complex instinct as the experimental subject, and ourselves as informants.

To do this, one must respect the informant, that is, the experimental subject. One must, because the informant is helping you to understand some feature of the complex, almost completely mysterious human brain. We know vanishingly little about the brain in any animal, even less about our own, and even less about its very complex features.

That means that to explore and understand more about the mechanism of some very complex human capacity, say our ability to 'program smoothly', one must examine very carefully the subjective human impressions that arise in various situations. You must listen to the experimental subject. Their subjective impression is your experimental subject. It will help you to identify and isolate some human capacity, if you listen carefully, get them to explain what they can, and try to make it repeatable, and testable, in various ways.

That's how science works. Physicists don't ignore a lab assistant who says "we have a recurring, completely unexpected result". They don't say "oh, we understand everything", fire the assistant, and smash the laboratory's experimental apparatus. I mean, this could happen, but no one would think it good. And they especially don't do this when a theory is in its very early days of development.

In computer science, by contrast, this set of habits has simply not developed at all -- outside of, in a very limited way, user interface design. Leading computer scientists have no time for such subtlety.

As an example, let me describe something that happened to me recently. I was having lunch with one of the major programming leaders of our time. I said that I had identified a new phenomenon, and I needed to show it to him, so he could experience it. He refused to look at it, feeling that he already understood it, even though, based on his description, I could tell he was talking about something related, but not sufficiently isolated. Now, any psychologist, or even a user experience designer, would have been interested. But because he felt this was territory he'd already been over, he didn't want to see the results. Even though we were talking about extremely complex and poorly understood human capacities.

This kind of indifference is rife throughout the computer world, though not universal. I've met the leaders of many movements in computing who, one would think, would be very sensitive to human informants. When doing their own work, they often are. But I find most of them to be simply unaware of what we've learned in the last 50 years about the importance of informants for studying complex human phenomena, and the importance of sharing informant results. They believe that relatively minor improvements to programming practice are more important, despite the tremendous dissatisfaction and disagreement in the engineering world today … a situation which might be improved by a bit of natural science.

All of this might be partly because computing developed to respond immediately to the "drive to build", especially the building of larger and more complex logical systems. The sensitivities of programmers who feel that something is wrong are ignored, even though a thorough, sensitive examination of such feelings is likely to be the basis for moving forward, toward better scientific understanding of complex human capacities and better tools for programming. We're in a state right now where a few programming movements feel they've found "the answer", and people are simply exhorted to "get it". While such attitudes are expedient when building a programming team, or even a programmers' movement, they are antithetical to science.

Friday, November 8, 2013

Computation is broader than the Turing Machine


This should be one of those things we learn in school, and yet we learn the opposite. 

The notion of computation is not identical with a Turing machine.

First, the notion of computation is a human state of mind, not a technical term.

Second, if you define 'computation' to be 'anything that can be performed on a Turing machine', then you'll never discover any other form of computation. 

This is very much like the way the word 'gene', a term coined for discussing the source of an inherited feature, came to mean 'DNA', under the influence of the new molecular biology. But of course this is now known to be incorrect: there are many non-DNA factors that enter into inheritance (environment, epigenetics, natural law, et cetera). So biologists either need to use a different word for the broad notion of 'gene', which is not advisable, or they need to reclaim it from molecular biology, so it can be used again in discussions of inheritance, features, and so on.

In computing, let's look at a very simple thing, which computational systems do, but which simply cannot be captured by a Turing machine:

* Determine if two inputs are received simultaneously.

Clearly this is a computational task. It's clearly an automatable task. It's a task often performed by actual computers. But it has nothing to do with a Turing machine. One simply cannot determine simultaneity with a single input, if one construes the tape head as an input (which one shouldn't; see below). The result of a 'simultaneity determination' could be signaled to more than one additional computer at a time, through tapes if preferred, and the ramifications of this are even further from a Turing machine's capacities.
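To make the task concrete, here is a minimal sketch of a 'simultaneity determination'. The channel names and the tolerance window are my assumptions, not part of any formal model; the point is only that a real answer needs a clock, independently arriving inputs, and a tolerance -- none of which the single-tape, single-head formalism provides.

```python
# A rough sketch (not a formal definition): two hypothetical input channels,
# 'input_a' and 'input_b', and a chosen tolerance window. 'Simultaneous' here
# means 'arrived within the tolerance of one another on a real clock'.

import threading
import time

TOLERANCE_SECONDS = 0.001   # an assumed tolerance; any physical answer needs one
arrival_times = {}
lock = threading.Lock()

def on_input(channel):
    """Record the arrival time of an event on the given channel."""
    with lock:
        arrival_times[channel] = time.monotonic()

def arrived_simultaneously(a="input_a", b="input_b"):
    """True if both inputs have arrived within the tolerance window of each other."""
    with lock:
        if a not in arrival_times or b not in arrival_times:
            return False
        return abs(arrival_times[a] - arrival_times[b]) <= TOLERANCE_SECONDS

# Two threads stand in for two independent, real-world input sources.
threading.Thread(target=on_input, args=("input_a",)).start()
threading.Thread(target=on_input, args=("input_b",)).start()
time.sleep(0.1)   # give both 'inputs' time to arrive
print(arrived_simultaneously())
```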

So a 'multi-head Turing machine' or a 'multi-tape Turing machine' (a slightly different model), if we add to it, for example, time or some other signaling capacity, can do things that we call computation which cannot be done on a single-head Turing machine. I thought everyone understood this, but I found this on the Wikipedia page on computability:

Here, there may be more than one tape; moreover there may be multiple heads per tape. Surprisingly, any computation that can be performed by this sort of machine can also be performed by an ordinary Turing machine, although the latter may be slower or require a larger total region of its tape.

It's 'surprising' because it is not true if one actually constructs usable Turing machines. The point of a gedanken experiment is to inspire such considerations. Doing two things at once, or seeing whether two things are happening at once, is clearly not possible with a single-head Turing machine. And this is just the beginning of the differences … a brain, for example, is obviously capable of real-world computation that would be impossible to perform with a Turing machine. Millions of calculations by a module of the brain, passed simultaneously along different routes to different modules? How is that a Turing machine?

Now, you could simulate the environment and the multi-head Turing machine together, using a single-threaded computation (which is equivalent to a Turing machine). But look at what we've just done. We've defined an operation that takes place in the world as somehow 'computationally equivalent' to a simulation of the world. They aren't equivalent. The simulation is only a tool for investigating our theory of what goes on in a world intertwined with computing. No one would say that a single-threaded machine is equivalent to the biological world it might be simulating. Only in computing would we get so confused about the role of theory.
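To see the difference, here is a companion sketch (again with illustrative names) of that single-threaded simulation: the 'inputs' are now just timestamped records we place in a simulated event log, so the 'simultaneity' it reports is a fact about the model, not a determination made about the real world.

```python
# A single-threaded simulation of the same task. All times here are numbers
# we invented and put into the model; nothing is being measured.

SIMULATED_EVENTS = [
    (0.0000, "input_a"),   # (simulated arrival time, channel)
    (0.0004, "input_b"),
]
TOLERANCE_SECONDS = 0.001

def simulated_simultaneity(events, a="input_a", b="input_b"):
    """Compare the simulated timestamps recorded in the event log."""
    times = {channel: t for t, channel in events}
    if a not in times or b not in times:
        return False
    return abs(times[a] - times[b]) <= TOLERANCE_SECONDS

print(simulated_simultaneity(SIMULATED_EVENTS))   # True -- but only inside the model
```

The simulation answers questions about its own event log; the earlier sketch answers a question about events arriving from outside it. Conflating the two is exactly the confusion described above.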

Since at least the 17th century, the idea of the human brain as a kind of computer or machine has been useful for investigating what it may be to be human. The problem: we don't know what kind of machine or computer it is. We still do not, to this day. Our technical definitions of computability will definitely need expansion, to include real computational phenomena, before we can begin to understand what biological computers do.

I believe that, at the very start, we need to introduce a de-mechanized version of the Turing machine. In the early stages of any science, the notion of intelligibility tends to be 'mechanical'. Pre-Newtonian physics, in the early investigations by everyone in the 17th century, tended to make mechanical models the gold standard of a good theory. That disappeared after Newton discovered that there was no mechanical explanation of forces. In effect, his finding expanded the notion of the 'mechanical' to include action at a distance.

But the very human conception of mechanics as explanatory science keeps recurring, and it has done so in computing. For multi-head and multi-track Turing machines we need to ask "what is the tape mechanism?", because it matters. If we imagine it to be a real, mechanical tape mechanism, then it is a sensor, and "simultaneity" in a "multi-sensor" Turing machine would be a computable question, in a real machine.

Computing is something that happens in the real world. So we need to ask ourselves: how can computing move from a confused hybrid of formal science and engineering to a natural science?