But, still, there is a real mechanism within the brain, one we still do not understand, which leads us to produce categories, properties, objects, and other mental constructs about the world. This mechanism, whatever it is, 'makes' our continuous world discrete. The primary argument in favor of abandoning the distinction -- that concepts can affect perception and vice versa, making them intractably intertwined, tempting neologisms like 'perconception' or 'conperception' into existence -- means the distinction is something we need to abandon as an investigative principle, but not as a subject of investigation. "Continuous" or "field-like" conception-perception is an innate phenomenon. "Discrete" conception-perception is also an innate, and very complex, aspect of mental life, engaged in the use of symbols, for example, and probably rooted in some aspect of our language faculty. But I don't see many people studying either one, let alone the overlaps between them.
In computing, knowing that these conception-perception phenomena exist is very important -- unfortunately, the mind-internal nature of discrete descriptions seems to be continually forgotten by computing's chief practitioners, in industry and academia. This is clearly a detriment to their understanding of symbols, meanings, implementations, communications, definitions, and uses, from the perspective of the natural sciences. But it's also completely understandable that they've become victims of this problem, because the belief that we have some kind of direct mental connection to the outside world is an instinctive illusion, one of the most powerful ones -- a curtain that is difficult to lift.
Among the most fascinating influences of conception upon perception is the effect that awareness of different "conceptions of perception" can have on the subject.
There are several examples, but I'll take one that many people are familiar with. Take someone who draws or paints from real life onto a two-dimensional surface. In one way or another, an artist learns an idea, a concept: that there's an extra kind of perception available, one that lets them "flatten" what is in fact already a flat projection on the retina, but which our visual system works mightily to "inflate" into a perception of space. The artist learns to "reflatten" this perception and look for proportions and shapes among the results, which can be used to produce the 2d image. In fact, this is something that one can practice mentally, without drawing or painting. And the sense of proportion that's learnt by this exercise helps produce 3d sculpture, which is also interesting.
The conceptual awareness that this kind of perception is possible makes it achievable.
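To put that "flattening" in computing terms, here is a minimal sketch of my own (an artist obviously computes nothing of the kind, and the focal length is an arbitrary illustrative constant): a pinhole-camera projection maps each point of a 3d scene onto a 2d plane by dividing by depth, and the resulting proportions are roughly what the artist trains themselves to see.

```python
# A minimal pinhole-camera sketch of the "flattening" an artist learns
# to recover: a 3d scene point lands on a 2d plane, scaled by depth.

def project(point3d, f=1.0):
    """Project a 3d point (x, y, z) onto an image plane at distance f."""
    x, y, z = point3d
    return (f * x / z, f * y / z)

near_post = (0.0, 2.0, 4.0)   # a post 2 units tall, 4 units away
far_post = (0.0, 2.0, 8.0)    # an identical post, twice as far

print(project(near_post))   # (0.0, 0.5)
print(project(far_post))    # (0.0, 0.25): half the height on the page
```

Two posts of equal height project to different heights on the page; measuring that ratio, instead of the 'known' equal heights, is exactly the reflattening skill.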
I'm more interested in a somewhat different concept-to-percept influence, mentioned earlier: the perception of things as objects, categories, properties, etc. Some of this work is done innately by the visual system, of course -- for example, everyone is familiar with the color divisions of the rainbow, although there are no such divisions in the spectrum itself.
But the naming of the perceived groups of colors is an "objectifying" act, that is, something we choose to do, no matter how innate the mechanism we use to do it. From the limited impression we get within conscious experience, it seems like the same kind of act as, say, our imagining that the nervous system is a discrete network, or treating people as interchangeable nodes in a corporate hierarchy, or almost any kind of counting, or the mental separation of 'things' that we perceive as separable.
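Here's a toy sketch of that objectifying act in code. The band boundaries below are rough conventional values chosen purely for illustration; the spectrum itself contains no such cut points.

```python
# Imposing discrete categories on a continuous spectrum. The upper bound
# of each named band is in nanometres; the cut points are conventional
# and approximate, and nothing in the physics picks them out.
BANDS = [
    (450, "violet"),
    (495, "blue"),
    (570, "green"),
    (590, "yellow"),
    (620, "orange"),
    (750, "red"),
]

def name_color(wavelength_nm):
    """Impose a discrete name on a continuous wavelength."""
    for upper_bound, name in BANDS:
        if wavelength_nm < upper_bound:
            return name
    return "infrared"  # past the visible range

# 520 and 560 nm are as far apart as 560 and 600 nm, yet the first pair
# shares a name and the second pair does not. The category is ours.
print(name_color(520), name_color(560), name_color(600))
# -> green green orange
```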
I say 'choose' because there's another way of perceiving, which we also do naturally, and these ways-of-seeing are apparently in something like a competition. We also see things as 'stuff', or 'fields', for lack of a better word, or 'centers': perceivables that are coherent but which we have not yet considered 'objects' with 'boundaries', etc. This kind of 'stuff-perception' allows us to see, or experience, the sky, or a field, or a forest ... without thinking of it as an 'object'. One can think of 'reductionist' mental transitions, for example from "forest" to "trees" to "potential logs", as a kind of gradient from "field perception" to "object conception".
Not surprisingly, awareness of the existence of these two kinds of perception can help a person to decide to use one, the other, or both. This is interesting to computing in a number of ways.
First, it means that any task making use of a 'mode of perception' could benefit from explicit pedagogy about their psychological reality -- their mind-internal reality. Although it's visible in some research, and common in personal training, the pedagogy of 'modes of perception' has not been rigorously studied. The field of architecture is a good example: Christopher Alexander wrote a number of books intended to help the reader perceive buildings and cities in the 'field-like' manner, but the effectiveness of the books in achieving this has not been studied. Readers just try it, and see if it works for them. That doesn't really get us anywhere.
Second, an explicit awareness of these distinct modes of perception can help us to identify particular mental factors, qualities, and effects, that enter into the use of computer interfaces, including those interfaces used by programmers, and allow us to judge the quality of the outcomes of the use of those interfaces.
I believe these distinctions could unleash an experimental goldmine.
So now I'd like to discuss briefly one story from the history of psychological experimentation.
There's a very good book by Robert S. Woodworth, from 1931, entitled Contemporary Schools of Psychology, which describes the various ideas and motivations of researchers that he knew or worked with personally. It describes the 'method of impression', which allows us, for example, to look at an optical illusion, like Mach Bands, and consider our impression an experimental result -- we can say 'the illusion is present in this image' or not, allowing us to test questions about the nature of the stimulus and the response.
Psychology emerged from philosophy in the 19th century, and so issues of consciousness were quite important to early experimental psychologists. When an investigator asks a subject 'which weight seems heavier?' when they are lifted in different orders, the investigator is relying on the impression of the subject. The primary interest is the effect on consciousness, even though this is a very objective experiment, when properly carried out.
But a reaction to this emerged, from the animal psychologists, who Woodworth describes as feeling "dismissed". The psychological establishment at the time felt that, although animal conscious experience likely exists, it can only be supposition, since we cannot ask them anything, and hence no rational psychological experiments could be carried out on animals.
This reaction became Behaviorism. The behaviorists tried to define 'objective' as any activity that you could measure, without giving credence to a subject's interpretation (their claim that this 'increase in objectivity' was achieved was rejected by many at the time, with the same rational epistemological arguments we would use to reject it today). This allowed them to experiment with factors of instinct and experience that entered into animal behavior, and put humans and animals on equal ground. Unfortunately, they also threw the baby out with the bathwater. They had one overriding theory of the animal, or the person, and that was the association of stimulus with response, whether learned or innate. Presuming the underlying mechanism behind your subject of inquiry is a terrible approach to theory construction, because you won't even notice the effect of this assumption on your observations.
A perfect example was the behaviorist John Watson's replacement of the term "method of impression" with "verbal report". The latter he considered 'behavior', and so it was 'acceptable science', and this way he could include previous work on Mach Bands, or afterimages, or heaviness, or taste distinction. We can see the danger Watson introduced here: the experimenter was now assigning a meaning to the verbal report. So even more subjectivity was introduced, but now it was hidden, because the theory of mind, the theory of that which generates the behavior, was no longer explicitly part of the research.
This methodological dogma had another effect -- the generation of many decades' worth of meaningless data and statistical analyses. When you decide that you've suddenly become objective, and turned a potentially structure-revealing impression into a datum, then you have no choice but to collect a great deal of data to support any conclusions you might make from your study, to lend it, I suppose, some kind of intellectual weight. A corollary is that this tends to make the findings rather trivial, because you're no longer constructing interesting questions, but are instead relying on data to confirm uninteresting ones. The upside of this, is that you can quickly determine whether, say, afterimages are only seen by certain people under certain conditions. The downside is that the investigator is no longer building a model of the subject of inquiry, and so tends to ignore the subtle and interesting influences on the perception of afterimages. The theories that emerge then lack both descriptive coverage and explanatory force. Of course, many investigators did build models, and also followed puzzling results, so, at best, one can only say that behaviorism had a negative influence on the study of cognition as a natural science, but not an all-consuming influence. Most behaviorists were not extreme behaviorists.
But, to return to our original theme, the negative influence is no one's fault. Natural science tends to defy our instincts, and there were innate cognitive tendencies at work here -- tendencies that are still not studied -- which led mildly dogmatic researchers to simple 'objectifying' stimulus-response models. Behaviorism expresses a point of view that we return to, often. In the computer world, you hear it a lot, and not just in the poor cognitive methodology that pervades AI. Even idioms like "data-driven" and "data is objective" are meaningless by themselves. Such phrases beg the really important questions: which data, generated how, while asking which question, based on what theory, and interpreted how, and by whom, framed by which assumptions, and making use of which human or non-human faculties? The idea that 'objective data' somehow 'exists', without an experiment, without a context for interpretation, without the influence of human meter-builders and human interpreters, is just not correct. But people tend to make such claims anyway. It's an innate tendency.
So, what would good psychology as a natural science look like, when applied to improved understanding of the mental life of computer engineers?
If we're looking to build better theories of complex systems based on evidence, we can't do much better than to look at the eponymous scene in the documentary "Is the Man Who Is Tall Happy?", in which Michel Gondry interviews Noam Chomsky about linguistics.
Take the sentence "The man who is tall is happy". A native English speaker will report that this is grammatically correct, and if you're one, you can use the 'method of impression' to demonstrate that to yourself. Now, turn the sentence into a question. You can do this grammatically through movement: "Is the man who is tall happy?" You can also see that the following variant is not grammatical: "Is the man who tall is happy?" A native speaker would just say it's wrong.
But why do we move the second "is", the one nearer the end of the sentence, to the beginning? Why not the first "is"? Let's just say that scientists enjoy mysteries, and puzzles about nature, and so we need to build a theory to answer that question, and hope that the answer can be more broadly enlightening -- which, among other benefits, means the theory could predict other results.
The overall answer is that language has its own innate structure. The second "is" is actually the most structurally prominent element in the sentence (you can demonstrate this to yourself with a tree diagram), and so the easiest to move.
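Here is a deliberately simplified sketch of that structural point. The bracketed 'tree' and the two rules are my own toy constructions, not the formal analysis: a linear rule that fronts the first "is" in the word string produces the ungrammatical variant, while a structural rule that fronts the main-clause "is" -- the one at the top of the tree, outside the subject -- produces the grammatical question.

```python
# "The man who is tall is happy", with the relative clause "who is tall"
# nested inside the subject. A deliberate simplification of the real
# syntactic structure, but enough to make the contrast visible.
sentence = [["the", "man", ["who", "is", "tall"]], ["is", "happy"]]

def flatten(node):
    """Read the words of a bracketed tree off in linear order."""
    return [w for part in node
            for w in (flatten(part) if isinstance(part, list) else [part])]

def front_first_is(tree):
    """The linear rule: move the first 'is' in the word string."""
    words = flatten(tree)
    words.remove("is")  # removes the linearly first occurrence
    return ["is"] + words

def front_main_clause_is(tree):
    """The structural rule: the main-clause 'is' sits at the top of the
    tree, outside the subject subtree, so it is the one that moves."""
    subject, predicate = tree
    return ["is"] + flatten(subject) + predicate[1:]

print(" ".join(front_first_is(sentence)))
# -> is the man who tall is happy   (ungrammatical)
print(" ".join(front_main_clause_is(sentence)))
# -> is the man who is tall happy   (grammatical)
```

The linear rule is simpler to state, yet speakers never try it; that is the kind of puzzle the structural theory is built to explain.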
This would be impossible to determine simply by statistical analysis of text with no specific questions in mind. The use of statistics is motivated by our ignorance (I'm not using that word pejoratively) of the structure that underlies, and generates, the surface behavior. The separation of variant and invariant structure can be made through analysis, including statistical analysis, of verbal reports, but only if there is a question in the mind of the theorist about the generating structures. Any statistical analysis that does not officially have an explicit structural question to answer, is only hiding its assumptions and stipulations about structure, by adding interpretations at the beginning and the end.
Note that these structural theories are idealizations -- nature is not transparent, and we have limited attention, so we need to know what we're asking questions about, and what we're not asking questions about.
Notice also how much further we can get in formulating structural questions when we accept the 'method of impression' along with the probing strengths of our mysterious capacity to think. There are plenty of questions about the human mind that require massive data sets to answer. But it's unlikely that any of those questions would be interesting if we weren't using, explicitly or implicitly, the method of impression, so we could narrow those questions sharply. Moving forward towards understanding the "act of programming", as a natural phenomenon, will require that everyone understands the power of this method, which has enabled so much progress in linguistics (Noam Chomsky's initiatives), and art & design (Christopher Alexander's initiatives), over the past 60 years.