I started using this phrase, which popped into my head some 30 years ago, as a kind of expression of optimism in the face of long odds. I was writing about successful grassroots community projects, local ones, those that seemed hopelessly positive or naïve to people living in the Reagan/Thatcher years, but which were in fact quite straightforward. Actually, the more ideal and positive they were, the more humane and sensitive they were, the more straightforward and possible they became. This is because an ideal project is defined as one produced by a community through achievable, agreed-upon, positive steps.
It's perhaps even a kind of mathematical expression, describing a cycle of imagination, pursuit, discovery, and reformulation: a principle from the practice of natural science, wrapped into a form that invites good questions. "Anything that's truly good is possible." So, what is 'good'? What is 'possible'? These are left to the listener, but the equation itself is intended to poke anyone who hears it, so that their answers, even as they change, can act as a guide to better work: in community organizing, personal responsibility, public-interest engineering, and theory formation in the natural sciences.
The counterexamples are instructive. If a marketing executive at a pop-technology company promises something that is a lie, typically through advertising, and seduces people with the 'idea' of the product using spectacle and stimulants and money, then they're selling the impossible. That's not good. It even falls outside any real discussion about pursuing good or possible projects. So those not-even-false promises are not something a responsible, community-minded person should participate in.
The same is true for things we know to be bad: injustice, war, destruction of people and the environment, bad products, bad effects, and so on. No one will ever claim that these are in themselves truly good. The facts need to be disguised, with dishonest rhetoric or with ideas that are disconnected from, and uninformed by, their bad consequences. It's surely possible to do bad things under these circumstances. But, back to the equation: it's not what was promised, and it will not be sustainable, as the current state of the world testifies. Any bad consequence simply cannot last. Which means plutocracy, technocracy, kleptocracy, privilege, destruction, and injustice are ultimately unsustainable. The idealists, the ones who believe in intelligent, beautiful, compassionate, careful, creative lives for all people, in harmony with nature, are the most practical.
If we're removing parking lots and freeways, turning them into gardens and wild areas, having fun doing this fulfilling work, and entertaining a 'crazy' notion like "wouldn't it be great if everyone had the opportunity to do this?" ... well, we're describing a good thing. Perhaps even a necessary thing. And it certainly seems possible. So, we'll figure it out. And we can continue to use this phrase as a guide to finding the good ways to do the good things.
That's how an activist uses this equation.
It works in the natural sciences as well, partly because 'truly good' is how we describe a truly enlightening and solid idea or result. That's what we're spending our days looking for: enlightenment. So it had better be possible!
This brings me to two interrelated topics which, at first, will seem hard to square with each other: 1) how we might be able to write software in a more humane manner, and 2) the difficult natural science of uncovering the structure of concepts within the human brain.
I'll start with a story about patterns and pattern languages.
About 55 years ago, among a subgroup of people, the word 'patterns' started to be used to describe truly good ideas: something that is helpful in improving a situation, but not so specific that you'd call it a 'trick' or a 'hack' or a 'tip' in a technical manual. It's a general good idea which makes sense to both your head and your heart. An example is a 'transportation interchange': a well-known, highly visible public place where people can transfer from one form of public transport to another. Good idea.
So, a pattern language is a set of patterns that a community finds good and useful, at various levels of scale and consideration, to help them make a better world. It's a language full of good, practical ideas.
Doesn't sound far from the equation, does it? Good = possible.
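For programmers who like to see things written down, here is a minimal sketch of a single pattern, in the shape such patterns are usually recorded: a name, a context, a problem, a solution, and links to related patterns at larger and smaller scales. The field names, the Python form, and the neighbouring pattern names are only my illustration, not a canonical schema.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pattern:
    """One good, general idea, written so both head and heart can check it."""
    name: str            # e.g. "Transportation Interchange"
    context: str         # where the pattern applies
    problem: str         # the recurring tension it addresses
    solution: str        # the general (not over-specific) resolution
    larger: List[str] = field(default_factory=list)   # patterns it helps complete
    smaller: List[str] = field(default_factory=list)  # patterns that help complete it

interchange = Pattern(
    name="Transportation Interchange",
    context="A town served by several forms of public transport.",
    problem="People cannot easily move from one form of transport to another.",
    solution="Provide a well-known, highly visible public place where the "
             "different forms of transport meet and transfers are easy.",
    larger=["Web of Public Transportation"],
    smaller=["Small Public Squares", "Bus Stop"],
)
```

The links are what turn a pile of patterns into a language: each pattern helps complete larger ones, and is completed by smaller ones.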
Patterns of this sort were proposed and explored by the philosopher and architect Christopher Alexander, in books such as Notes on the Synthesis of Form and A Pattern Language. We were friends, and one time, while taking a train to some appointment somewhere in England, I gave him my "Anything that's truly good is possible" phrase. He became quiet and lost in thought for 20 minutes. Then I think he said something like "that's good".
Patterns started to interest computer people as early as the 1960s, but this interest really picked up steam in the late 1980s and early 1990s, when the object-oriented programming community found them a useful vehicle for sharing ideas.
During this time, a number of interesting notions emerged and faded out, in the normal tumult that takes place in computer-industry discussions, where ideas and taglines and nomenclature are like fashion, and people, in the name of a pragmatism that disguises self-interest and capitalism, can be tribal and fickle and dogmatic.
One interesting idea that became lost: groups of interlinked patterns, the pattern languages, should be generative.
They meant a number of things by this, which I'll get to in a moment, but the word itself comes from Noam Chomsky's work on generative grammar, which means a mathematical system that generates all grammatically correct sentences. Chomsky was the first to point out (while inventing several important ideas in computer science) that we have no idea how to generate all correct sentences, even meaningless ones. And this should not surprise us, since grammar is a very complex biological system. A bit of progress has been made on this problem in 60 years, but there still is no complete written grammar of any natural human language. The theory that would be necessary to even write such a grammar down, if we could, is still in its early stages.
A generative grammar would generate all sentences of a natural language, and not include any sentence that is not in that natural language.
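To make 'generative' concrete, here is a toy sketch in Python of a tiny invented grammar: a handful of rewrite rules that generate every sentence of its little language and nothing else. The grammar and its vocabulary are made up for illustration; a real natural-language grammar is nothing like this small or this simple, which is exactly the point.

```python
import itertools

# A toy generative grammar: rewrite rules for a tiny invented language.
# It generates every sentence of that language, and only those sentences.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["gardener"], ["parking lot"]],
    "V":  [["removes"], ["plants"]],
}

def generate(symbol="S"):
    """Yield every word sequence the grammar derives from `symbol`."""
    if symbol not in GRAMMAR:          # a terminal word: it derives only itself
        yield [symbol]
        return
    for expansion in GRAMMAR[symbol]:  # try each rewrite rule for this symbol
        parts = [list(generate(sym)) for sym in expansion]
        for combo in itertools.product(*parts):
            yield [word for part in combo for word in part]

for sentence in generate():
    print(" ".join(sentence))
# the gardener removes the gardener
# the gardener removes the parking lot
# ... eight sentences in total: all of them, and only them.
```

Notice that even this toy produces sentences like 'the parking lot plants the gardener': well-formed by its rules, nonsense to us. Scaling this up honestly, to all and only the sentences people judge grammatical, is the problem no one has solved.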
So, 'good sentences' means 'possible sentences'. Good as judged by people as 'grammatical', possible in the sense of generated by a human's natural language faculty. That's our equation again, reformulated as a statement of interest in how the black box of human language works.
This is a formulation that reflects the motivation behind every theory in the natural sciences, no matter what stage or state the theory is in.
So a generative pattern language would allow a person to generate good programs. That is, 'the good' becomes 'the possible'. Would such a pattern language be inspiring? Yes, a generative pattern language would need to inspire the creation of good programs, much as good ideas need to be tested for their genuine goodness, and their genuine possibility, in any field. (A Pattern Language is a very inspiring book, and the bestselling book on architecture of all time.) But 'generative' could also be a stronger criterion, defining the ways in which good patterns can be combined to form new good patterns.
At this point we might wonder: where does this equation come from?
Another example might offer a clue. It comes from the history of thinking about thinking, and tying these strands together might help us to make some real progress.
A pattern is just an idea. Well, it's a good idea. But if it's really good, it can be used as a metaphor for other good ideas. And if it's really, really good, when you 'compose' it or 'integrate' it with other good ideas, the result is a good idea. This is similar to the strong definition of a 'generative pattern language' mentioned above. But now we're talking about all concepts.
In reality, the nature of 'compose' and 'integrate' here is not some simple rule, like those in formal logic. In fact, for this to work, to get a good idea from two good ideas, you would need to include judgment of the 'truly good' during and after the act of composition. There's no getting around that.
That 'moment of judgment' is the generative bit the pattern people were looking for. Honestly, its absence is why object-oriented patterns fell apart as a guide to good work, just as any formal logic, reliant as it is on one fixed set of ideas and rules, falls apart. The composition of ideas into new ideas needs to include new judgment, and that includes testing the new idea for whether it is 'humane' or 'natural' or 'truly good'. And far too many software patterns, intended simply to codify an implementation of some 'theory' of software, didn't take this seriously enough to do a good job creating good 'higher-level' structure. And that's why our software tools continue to suck.
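Here is a sketch, under my own assumptions, of what restoring that moment of judgment might look like: composition stays mechanical and only ever produces a candidate, and nothing joins the language until a person, not a rule, judges it good. The types and names below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Idea:
    """A deliberately thin stand-in for a pattern: a named idea and its intent."""
    name: str
    intent: str

def compose(a: Idea, b: Idea) -> Idea:
    """Formal composition: purely mechanical, so the result is only a candidate."""
    return Idea(name=f"{a.name} + {b.name}",
                intent=f"{a.intent}, woven together with: {b.intent}")

def judged_good(candidate: Idea) -> bool:
    """The step no formalism supplies: a human asks whether the candidate is
    humane, natural, truly good. Here it is literally a question put to a person."""
    answer = input(f"Is '{candidate.name}' a good, natural idea? [y/n] ")
    return answer.strip().lower().startswith("y")

def admit_to_language(a: Idea, b: Idea, language: list) -> None:
    """Only compositions that pass human judgment join the pattern language."""
    candidate = compose(a, b)
    if judged_good(candidate):
        language.append(candidate)

language = []
admit_to_language(
    Idea("Transportation Interchange", "make transfers easy, visible, public"),
    Idea("Small Public Squares", "give people a place to pause and meet"),
    language,
)
```

However the details are arranged, the judgment is the part that cannot be computed away.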
But, this gives us a clue as to how to do better, and a clue as to how we should be studying human ideas from the point of view of the cognitive sciences.
It's an old notion, possibly as old as the human brain, that there are innate ideas, which other animals have as well. Our genetic endowment includes ideas like 'grab' and 'climb' and 'twist' and 'move' and 'stuff' and 'hold' and 'thing', and some easy compositions like 'thing I hold' and 'stuff you grab' and 'thing I climb'. But we have something else, an indefinite kind of idea-composition, which also lets us name new ideas, and when we do name these composed ideas, they join our intuition, so they feel rather like innate ideas (something that has misled philosophers from Locke to Quine). We can see this because the idea that comes to mind when you say 'screwdriver' is the same for every speaker of every language. This was recently demonstrated with fMRI imaging by Marcel Just and Tom Mitchell at Carnegie Mellon, but Aristotle pointed it out 2500 years ago: he said we wouldn't be able to translate anything unless we had the same ideas in our heads.
So, the nature of this "composition of new ideas" is very important. Its criteria are more difficult than we yet know. The composition cannot be defined with simple formalisms, and certainly not with theories of symbolic manipulation that are untested against the human brain.
So, for example, when an object-oriented programmer does a simple composition of one type of object with another, maybe one pattern with another, they may say that 'it works' because the machine performed, at that moment, the way they wanted. But that's not good enough. There's no consideration of whether the ideas and goals in the head of the programmer have been sufficiently explained, expressed, noted, and judged. In fact, the tendency will be not to do that, because of some belief that the formal composition of working code must produce 'working code'. Well, it may run, and do what you want for now, but it's not really working, since it doesn't express a human idea. We do not know the rules for the composition of human ideas, so all we can do is judge the new compositions as they occur to us, and see if they are human and truly good.
But we don't do that. We use simpler formal composition mechanisms with simple consistency rules, and almost no judgment. This results in "write-only" code. Even if someone else can make sense of it -- and typically the programmer cannot make sense of it themselves after a time -- it has not been judged to be good, human, or natural. So it's not good and, in a sense, it's impossible and impractical.
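A small, invented illustration of the difference: both functions below make the machine do exactly the same thing, but only the second expresses a human idea that someone else, or I myself a year from now, could actually read and judge. The names and the scenario are made up.

```python
from dataclasses import dataclass

@dataclass
class Site:
    area_sq_m: float
    open_to_public: bool

# Write-only: it runs, and did what its author wanted at the time,
# but it expresses no idea that anyone can later read and judge.
def f(xs, t):
    return [x for x in xs if x.area_sq_m > t and x.open_to_public]

# The same behaviour, written so the idea itself can be read and judged.
def plantable_sites(sites, minimum_area_sq_m):
    """Sites we could turn into gardens: big enough, and open to the public."""
    return [s for s in sites
            if s.area_sq_m > minimum_area_sq_m and s.open_to_public]
```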
Let's go back to theories of ideas. If we are to find the innate ideas, and begin to untangle the mechanism for the composition of ideas, we can do so in part through fMRI confirmation of the Just/Mitchell type, and in part by judging the new ideas (using the method of impression) that are generated by our theory of idea composition.
Are those ideas any good? Do they make sense? Would they ever occur to you? We can begin to create a theory of innate ideas and their composition in human beings only if we abandon anything analogous to the formal compositions of objects and functions in programming languages, mathematics, and formal logic. Because we know those do not generate the right results. They do not generate good, natural ideas. We should remember that Aristotle's syllogisms and Boole's operators were actually just working theories about how the mind worked, or 'should' work, somewhere along the descriptive-prescriptive spectrum. They need updating with the considerations I've mentioned.
The same is true for engineering. We must abandon the dictatorship of the formalism, the tyranny of the parts over the whole, and create software that is judged on whether the formalisms used make sense as human thoughts. We try to pepper code with comments and good names. It's not enough. We need to escape the formalisms that are stopping us from thinking about how to best express our thoughts.
One consequence of what I'm saying: until we do this, no program, or even an idea, or even a result, created through machine learning, could ever become 'inspectable' or 'explainable', by us or machines. Because we have not even begun to create a theory of what is comprehensible! And we won't be able to do that until we start using our internal judgment of what is good to constrain what is done, and therefore continually discover what is possible to do well.
Consequently, I have some proposals for people in the computer industry who might be interested in the natural sciences, who might be interested in doing some good for the world, who might be interested in helping to uncover something about how the brain actually operates, and who might want to fill the considerable gaps in our understanding of how the brains of software professionals could do better, easier, and more natural work.
The first would be to avoid irresponsible work in technology. Since all work in technology these days is pretty extraordinarily irresponsible, this will take quite a bit of self-education and inspiration on the part of the reader.
The second would be to correct irresponsible pseudo-science in the computer industry. While most of this is the result of marketing and self-deception among the successful, it's really rather galling that the computer industry has been the center of a revival of behaviorist and associationist 'theories' which were rightly buried decades ago in the cognitive sciences. This will take quite a bit of research and thought on the part of the reader.
The third would be to help with actual research on modeling and testing the construction of human ideas from innate ideas, using innate mechanisms. Essentially, we're trying to find out more about the human mind, and trying to find out how the brain-internal idea for the word 'screwdriver', the one visible in fMRI, was constructed from other ideas, innate or previously abstracted. If we could focus on this collective effort, we might make some progress on this problem before the species becomes extinct.
The fourth would be to avoid these simplistic ideas that pretend to be the solution, the 'end of programming': functions, rewriting, categories, objects, classes, ML ... it's really quite nauseating to even hear this kind of fashionista religious dogma spoken by engineers. Engineering is something done by the human mind, and until we start approaching it as an internal issue, an issue with mental comfort, and begin to map out an approach to working more naturally, no progress will be made on making better software.
I understand it is incumbent upon me to better elucidate these initiatives.
More soon.
Sunday, August 11, 2019