What's the higher-order structure 'above' the level of cells? Well, it's just anything we call a 'system' that we believe is interesting. When we explore the real body we find that the boundaries and the coherence of our chosen 'interesting system', say a kidney or an immune system, are not what we expected. But, then, we should expect such surprise. Humans perceive certain things in certain ways, often in several conflicting ways, and when we decide to look at 'the visual system', or 'the nose', we are making use of some still mysterious human faculty that assigns importance to certain aspects of its environment. On examination, we are always surprised, because our unexamined intuition tends to be wrong. In fact, even our attempts to divorce ourselves from our intuition tend to be heavily suffused with human tendencies. It's a tough game to find out what's going on outside of your own perception, to turn the things your intuition 'knows' into mysteries about what's 'out there'. But that's natural science. It's hard work, especially when we're dealing with complex systems.
This means there really is no ontology of the body which is more than a kind of convenience, so that we can talk about what we're studying. Ontology is important in science, because ontologies are our current assertions about what exists in the world. Of course they are constantly changing, and much of any ontology is itself based on tacit knowledge that we do not understand, which is why I don't think many kinds of science make sense without a parallel study of human psychology. At any stage, our assertions are still human mental constructs. They may be more enlightening, better integrated with other theories, or more carefully constructed to avoid unnecessary stipulations, but they are only 'better'. They aren't 'complete'. Ontologies are works-in-progress, at best.
This epistemological story can be found everywhere in computing. Let's take the issue of testing a software system. In our example, let's say that we primarily care about a consistent user experience, and so the tests take place against the user interface. What is the inventory of features against which we are testing? It certainly is not the set of features we set out to build in the first place: in order to make a good product, we had to change those. The closest thing to an accurate description of the final system is the work done by the documentation team. If you have such a thing. The team has used human judgement to decide what is important for someone learning about the system. They have organized what they consider to be the 'features' of the system, and explained their purpose and behaviors, as best they could. This is the closest thing the software company has to an inventory of features and properties against which a QA team can build a testing system. In a system where the interface is everything, and there are a lot of systems like that, and a lot of systems that should be considered like that but aren't, the only way to build reasonable tests is after-the-fact. There is a discipline called 'test-driven development', but it is only appropriate to certain internal aspects of the system; it cannot address the 'logic' that is 'externalized' for the users. There is no such logic in the code. It's a perception of the system, used to guide its development.
If this is true, there is no way to take a 'feature inventory' from within the software. The best one can do is study the user-interface, find out how it responds, talk to developers and product designers to work out their intentions when they're unclear, and keep a coherent-looking list that is easy-to-understand. This is literally not an inventory in any mechanistic sense. It is a thorough set of very-human judgments upon something that others have created.
The 'inventory' will be acceptable, and have descriptive adequacy, when the appropriate group of people can understand it. This might be a very different inventory for a quality-assurance team than for a training team or a support team. There are things the designers and the engineers find important that produce yet another 'inventory'. There are other kinds of inventories, for accessibility issues. The best you can do, in all these cases, is the most human job you can do, to explain the right things to the right audience. The idea that there is any kind of 'logically correct' software, achievable without human judgement, is absurd. A person needs to judge what is correct! We couldn't do any of this work without human judgment.
Because of this epistemological fact, we rarely have the time for inventories of features. Instead, we look to eliminate 'problems', humanly judged, and polish the software system until it makes sense and does what the team and the users want it to do. The task of describing it, of explaining it, is done in a minimally descriptive way, taking advantage of innate and learned human understanding, and the ability of users to explore things for themselves. The quality-assurance team finds some set of tests that satisfies them: tests for problems that have been fixed, and regression tests to make sure the problems don't recur. The notion of a 'complete' description of the system is considered 'just too much work', when, in fact, a 'complete description' is impossible. Such a description cannot exist; it can only be adequate to our current purposes.
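A regression test of this kind might look like the sketch below. Everything here is hypothetical (the function, the bug number, the fix); the point is that the test encodes a human judgment that a past behavior was a problem, rather than deriving correctness from the code itself.

```python
# Hypothetical regression test. The bug report and function are invented
# for illustration; no real codebase is being quoted.

def format_price(cents):
    """Format a price in cents as a dollar string."""
    # Bug #1042 (hypothetical): negative prices once rendered incorrectly
    # because of naive string concatenation. The fix keeps the sign in front.
    sign = "-" if cents < 0 else ""
    cents = abs(cents)
    return f"{sign}${cents // 100}.{cents % 100:02d}"

def test_regression_negative_price():
    # Nothing in format_price says the old output was wrong;
    # that judgment lives only in this test.
    assert format_price(-50) == "-$0.50"
    assert format_price(1999) == "$19.99"

test_regression_negative_price()
```

The test says nothing about whether the feature as a whole is 'correct'; it only pins down one humanly-judged problem so it cannot silently return.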
This epistemological problem shows up in simpler ways. One approach to preventing virus infection in computers is to add to a growing 'blacklist' of behaviors or data that indicate an 'infection'. The other approach is to make a 'whitelist': only these operations should be possible on the system. The whitelist is only expanded when you want to do something new, not when someone else wants to attack you. This sidesteps the inventory problem: you never need a complete catalogue of attacks, only a catalogue of your own intentions.
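The asymmetry between the two stances can be sketched in a few lines. The operation names here are made up for illustration; the point is which list has to grow when something unanticipated appears.

```python
# Sketch of the two stances. A blacklist rejects known-bad operations and
# must grow with every new attack; a whitelist permits known-good
# operations and grows only when *you* want to do something new.
# All operation names are hypothetical.

BLACKLIST = {"exec_shell", "overwrite_boot_sector"}     # grows with attackers
WHITELIST = {"read_file", "write_file", "open_window"}  # grows with features

def blacklist_permits(op):
    # Permissive by default: anything not recognized as bad gets through.
    return op not in BLACKLIST

def whitelist_permits(op):
    # Restrictive by default: anything not explicitly intended is refused.
    return op in WHITELIST

# A brand-new, unanticipated operation:
op = "patch_kernel_memory"
blacklist_permits(op)  # True  -- the blacklist hasn't heard of it yet
whitelist_permits(op)  # False -- refused until someone decides to allow it
```

The blacklist is an open-ended inventory of a hostile world; the whitelist is a short inventory of your own purposes, which is the only inventory you can actually hope to keep.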
Even more, it's reminiscent of the difference between natural science and natural history. Natural history, zoology in its older form, and even structuralism, are about cataloguing and classifying things in nature. Explain why things are the way they are? That's natural science. In derogatory terms, natural science looks to generalize and idealize and abstract, ignoring as many differences as possible. Natural history embraces diversity, and is more like butterfly collecting-and-organizing. In general, we need integrated approaches that allow for collecting diverse facts in the context of an ever-improving explanatory theory.
Approaches to building software are ever-expanding, and we are spending no effort trying to understand why, primarily because computer science is not a natural science, and doesn't approach the problem of explaining why things are one way, and not another. Most of the answers to those questions lie in a study of the human mind, not in a study of the machines that humans build. Studying software without studying cognition, is like studying animal tracks without studying the animals.