One of the hardest-won results from the natural sciences, uncovered over the last half-millennium, is simple, but hard to keep in mind: the real world is not accessible to our cognition.
When we realize this, it makes the world a mystery, but this, in turn, makes the world interesting and exciting to explore. That's science. And since the machinery of our brain is part of this mysterious world, it too is a mystery, and we have almost no idea how it's interfering with, and relating to, our perception of the world.
Our intuition fooled us into believing that we 'understood the world'. It felt like we understood it. Then Galileo pointed out that, by our intuition, a heavy body should fall faster than a light one -- but in reality, it doesn't. (His thought experiment: tie a light body to a heavy one; the combination is heavier, so it should fall faster, yet the light body should also act as a drag and slow the heavy one down. The contradiction dissolves only if both fall at the same rate.) And when we thought hard about things we already knew, things we had forgotten because parts of our intuition acted as obstacles, we realized that it was rational for them to fall at the same rate.
And there are always more internal obstacles.
Another intuitive part of our rationality makes use of the concept of a "clockwork universe", recruiting it as a criterion for understanding. But as every first-year physics student learns, and as Newton demonstrated in the 17th century, there is no clockwork, contact-mechanical explanation for gravity, or any other force. No system of invisible pulleys or rods can 'explain' these forces. They are simply 'qualities' of the universe, and we were forced to adjust our idea of explanatory adequacy accordingly.
In general, we can say that the goal of the natural sciences is to find out what is going on outside our perceptions -- 'outside our skins', 'out there' -- to use just a few suggestive phrases to convey a difficult idea.
But here's the problem: we're still in the mix. We can only try to get out of our skins. And we need to keep trying. Each time we discover another way that our minds are interfering with the accuracy of our scientific theories, it allows us to work positively for a while. But there's always another horizon. And we always come across further mental interference, further stipulations that need removal, later on. Our own cognition is, ironically, the most serious obstacle in the sciences.
Another problem arises because many of our scientific interests revolve around ourselves, and our minds. It's almost impossible to find any work on the mind which isn't rife with mental interference -- but many researchers are aware of the problem, and try to sort the results from the stipulations, the external from the internal.
Here's a problem that's peculiar to the cognitive sciences: we look at ourselves from the outside, appropriately, as animals. But externally measurable behavior is superficial. So we need a human to interpret the behavior. And we need to recognize complex phenomena, the internal ones, as we experience them, so that we can experiment on ourselves, using ourselves as gauges or meters of the complex phenomena under investigation.
So, in the classic example, even though we understand almost nothing from external measurement about the biological organ that provides us with language (the Language Faculty), we can use ourselves to test the boundaries of language phenomena, from the inside. For example, we can ask if the sentence "I'm going to the store" is experienced as grammatical, where we use a complex internal organ to make a judgement on external stimuli -- and if we ask if "I go store to" is grammatical, the different answer leads us to interesting research questions. Without this internal meter, we wouldn't know how to even approach this as a question in the natural sciences.
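To make this 'internal meter' concrete, here is a minimal sketch, in Python, of how such judgments might be collected. The sentences and the 1-to-7 scale are illustrative assumptions, not an established protocol; the point is that the human rater, not the program, is the measuring instrument.

```python
# A minimal sketch of collecting grammaticality judgments.
# The human rater -- not the program -- is the meter here.
# The stimuli and the 1-7 scale are illustrative assumptions.

stimuli = [
    "I'm going to the store",   # expected: judged grammatical
    "I go store to",            # expected: judged ungrammatical
]

def collect_judgments(sentences):
    """Present each sentence and record the rater's acceptability score."""
    judgments = {}
    for sentence in sentences:
        score = input(f"Rate 1 (unacceptable) to 7 (natural): {sentence!r} > ")
        judgments[sentence] = int(score)
    return judgments

if __name__ == "__main__":
    print(collect_judgments(stimuli))
```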
Clearly, the human activity of computing has not even begun to grapple with these difficulties. As a practical matter, though, computer people, and computer users, are impacted by these issues every day. The craft of computing has developed enough to at least cope with the issues in a limited way, and get work done. But, in my view, no computer scientists or programmers have any awareness of what is going on in the real world of computer-human interaction, any more than a pianist knows what is happening to our brains when we hear music. We have a craft, but we have not even the beginnings of a natural science of computing.
What would a natural science of computing look like? What might programming look like, as a result?
I want to provide a taste of the kind of research needed.
There are many tools available for web programming. Generally, engineers find themselves doing the same thing many times and, recognizing this, attempt to create a layer: something that allows them to do what they've done before, but with less effort. The higher-level view of this layer becomes a notation of its own, backed by its own ideas, which can be combined and parameterized until they become a kind of general platform on which to build applications.
As a result, application development becomes rather path-dependent, making many programs effectively the same. Developers begin by imagining these elements and combining tools, and then make a stab at using them to take steps forward, towards products they have in mind.
It's important to understand that all this 'bridging' happens in the brain, and is completely dependent on mental faculties that as yet have no name or research program.
The tools are something external, and too complex to know completely, but the programmer needs to understand them better in some way, in order to move forward in some direction. The 'direction', or 'product definition', is an exercise of the imagination, with some notes on paper, and the programmer picks some aspect of this fantasy to approach with tools that he knows, or believes he can discover something about.
To address this 'bridging problem', the programmer needs an increasing understanding of (1) the emerging 'product', from a user's perspective, (2) the emerging program, from an engineer's perspective, and (3) the frameworks, libraries, ideas, and communities that these are built upon.
So, we've begun to characterize what the programmer is doing as a human activity, one that needs to be investigated from a biological perspective: we need to pick elemental examples of these bridging moments, and study what makes it easier or harder for the mind to organize a solution.
But the state of the art is rather different. There's a kind of struggle to create support systems. Helping with (3) are the myriad websites that have emerged to let programmers share technical tricks and ideas, and endless 'branded movements' that attempt to justify and codify sets of ideas that provide various kinds of inspiration.
The focus is on technology and borrowed ideas. But none of this addresses (1) and (2). That's because it's hoped that the programmer will be inventive, and bridge the gaps. But that means the critical human dimension of software development -- the study of the user's issues, and of the programmer as user -- suffers from a lack of serious investigation.
These are questions that could be investigated from a natural science perspective. With regard to (2), what unknown mental faculties enter into the simplest internally-experienced phenomenon of 'discovery' while looking within a program? When the programmer needs to create some new infrastructure to deal with some 'type' or 'aspect' or 'property' of the problem as they perceive it, how does their programming environment make this easy or difficult?
As an example, say I'm writing a completely different kind of web application. Say that the most important thing, to me, is typography. Nothing I see in my program should be more important than the new issues before me, because the whole point of the program is to study 'typographical effectiveness' for human users.
Unfortunately, nothing in a framework or programming environment will facilitate the creation of a fresh, new, special-purpose theory, or framework, with which to study the creation of programs that support the as-yet-unknown world of the typographer. There is general-purpose functionality, and in well-trodden areas there are special-purpose frameworks, but there is nothing to help me create the special-purpose ones, and to help me find my way around these new special-purpose theories once I've created them. All this despite the fact that most of life is quite mysterious, and hence certainly unexplored by software.
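As a sketch of what such support might look like, here is a hypothetical, minimal special-purpose 'theory' for studying typographical effectiveness, in Python. Every name in it -- Setting, Trial, effectiveness -- is invented for illustration, and the scoring function is a deliberately naive stand-in for a measure that doesn't yet exist.

```python
# A hypothetical special-purpose vocabulary for studying typographical
# effectiveness. All names are invented; no existing framework offers this.
from dataclasses import dataclass

@dataclass
class Setting:
    """One typographic treatment of a passage."""
    typeface: str
    size_pt: float
    leading_pt: float

@dataclass
class Trial:
    """A passage shown to a reader under one setting."""
    passage: str
    setting: Setting
    seconds_to_read: float
    comprehension: float  # 0.0-1.0, from a follow-up question

def effectiveness(trial: Trial) -> float:
    """A deliberately naive stand-in for an as-yet-unknown measure:
    reading speed weighted by comprehension."""
    words = len(trial.passage.split())
    return trial.comprehension * words / trial.seconds_to_read

trial = Trial("The quick brown fox jumps over the lazy dog.",
              Setting("Garamond", 11.0, 14.5), 2.8, 1.0)
print(f"comprehension-weighted words per second: {effectiveness(trial):.2f}")
```

The point is not this particular vocabulary, but that nothing in our tools helps the programmer grow a vocabulary like it, or find their way around it once it exists.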
The lack of support for developing and exploring 'the new' and 'the special' is coupled with a prejudice towards 'the general' -- at least a particular, extremely limited interpretation of the notion of 'general'. The combined effect is a lack of imagination and sensitivity in almost all computer products and applications. Motivated people, who are exploring new territory, can still muddle through and get some limited new things to happen, but they are almost entirely on their own. Because no one is studying the problem of technical support for innovation, or bridging, we rely totally on the ingenuity of the worker to overcome the poverty of the available tools.
All of this derives from the total lack of a perspective within computing that falls within the natural sciences. Imagine if we observed a very mysterious behavior among spiders in the wild, and we just said "well, I don't know what's going on, but the spiders keep making webs, so it doesn't really matter!" Science doesn't do that. We want to know what's happening. But when it's us, even though we might care more, we don't investigate what's actually going on. If we did, we could help. And help ourselves, because we could really use good tools for investigating new ideas. In the future, the computer, in whatever form it takes, might actually fulfill its potential, and normal people, who have better things to do than keep up with another trivial new programming environment, or another trivial programming 'paradigm', will be able to use computers to explore the endless mysteries around and within us.
Wednesday, August 27, 2014
Is it a network?
A common idealization borrowed from engineering and computing is the idea of the "network". A network may seem like very basic system geometry -- after all, it's just a 'graph' in discrete mathematics. But the idea of a network is also rather problematic, and seems to have led many scientists down the wrong road for centuries.
It's probable that calling the nervous system a 'network' is completely meaningless.
We need to take a closer look at our use of the idea of 'network', in every circumstance. In the same way that 'objects' and 'properties' are mental constructs, networks are too. All these ideas are of course innate, and generally useful, but when we're involved in the natural sciences, and asking questions about complex systems, we need to regularly check our epistemology. Is it a network?
If something were a network, how would it compare to something that was not a network? Would we be able to build a meter to tell them apart? If it's a mental construct, what are its characteristics and limits?
The complexity of biological and cognitive objects of interest causes us to get lost in our own innate toolset. Post-Galilean physics has been the study of the simplest possible problems. Complexity was the enemy. The basic method was to simplify models, ask basic questions, and reduce experimental interests and influences. Really complex problems were thrown over the transom, to the chemists and the biologists, who for centuries barely even considered themselves scientists, because they were stuck, unable to put aside the many interesting questions that the physicists could ignore in their pursuit of foundational issues.
When dealing with the natural science of complex 'systems', we often feel we have nothing important to say unless we fall back on these highly-structured intellectual instincts. And so we have our questionable science -- 'learning networks', and 'objects' and zoological-typological 'categories', and selective pressures upon 'bags of qualities' -- scientific dead-ends whose weaknesses we've been uncovering slowly in various post-positivist enlightenments.
Let's get back to 'network' for a moment. Whether or not 'network' is a useful idea in any given research situation is of course up to the investigators. We see things that are human products which seem to have this 'network geometry': paths, roads, train systems, computer networks, etc. In the physicists' world, it's not that simple. There are many forces and gradients with varying character and various mutual influences. But nothing that could be called a 'network', except by popularizers. In the chemist's world, this becomes harder to resist, because everything under investigation is, on one level, a 'network' of elements. But in any other way, these high-energy mashes of dense mutual influences don't seem anything like the discrete signaling networks that people create. You could use them to create a network. You could use graph theory to give you approximations. But it's a mistake to consider that anything at the chemical 'level' actually is a network.
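To make the 'approximation' point concrete, here is a minimal sketch: a continuous heat gradient approximated by a discrete chain of nodes. The number of nodes and the transfer coefficient are arbitrary choices of ours -- the graph lives in our bookkeeping, not in the physical process.

```python
# Approximating a continuous diffusion gradient with a discrete 'network'.
# The 10 nodes and the coefficient k are our choices, not nature's.
N = 10
temps = [100.0] + [0.0] * (N - 1)   # heat injected at one end
k = 0.25                            # transfer coefficient between neighbors

for step in range(200):
    flows = [k * (temps[i] - temps[i + 1]) for i in range(N - 1)]
    for i, flow in enumerate(flows):
        temps[i] -= flow
        temps[i + 1] += flow

print([round(t, 1) for t in temps])  # a smooth gradient, despite our discrete graph
```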
In biology, with its even more complex investigations, it gets harder to resist our tendency to put phenomena into the 'network' category. There are so many complex results that need to be integrated with one another that it's sometimes easiest to just imagine networks of influences. These network-diagrams can look massively complex, so much so that it doesn't look like we've made any scientific progress -- that is, we're not much enlightened by the result.
But, again, the diagrams are maps. The object of biological inquiry is the territory. We're doing ourselves a disservice to mistake our tools for the object of our investigations. The 'network' is a perception, a tool. It may or may not be helpful. But there should be no 'network theory' of biological systems, or the more complex ecological systems. Exploratory network diagrams, like a finite-element analysis, are at best a kind of limited simulation tool. There's no actual 'network' there, in any external sense.
Which brings us to the biggest misconception of at least the past century, and perhaps the last three centuries. That the human brain, even the animal nervous system, is a 'network'.
There's very little evidence for it. Again, these systems are so complex and difficult to understand that we immediately fall back on any intelligible characterization. And the characterization that is ready and waiting in our mind, is the network.
It's interesting how the innate concept of 'network' is integrally related to that of 'object'. Certainly we try to turn things into 'objects' as part of our instinct. But when we investigate the world, even casually, using this idea of 'objects', we immediately find 'objects within objects', 'objects relating to objects', 'objects influenced by objects' … and adding the concept of 'signaling', we get 'objects signaling objects'. That's a 'network'.
Really, I'm making a very old argument here: the more complex a system, the more we fall back on 'natural ideas' which will probably distract us from discovering what is really going on outside our percepts and concepts.
I'll still make use of networks in my simulations. But like anyone who has tried to simulate reality on a digital machine, I know there are much harder problems than 'graph theory' ahead of us. But to even approach these problems, we need to be aware of the 'network bias' in the human mind.
What do I mean when I say that calling the nervous system a 'network' is probably meaningless? If I could tell a didactic Socratic story …
… let's say some alien intelligences, who study our cognition, were to look at a clock, and speculate upon how we would look at a clock today.
They might imagine that each of the clock's parts were what we call 'objects', and that the tight interrelations among the parts are what we would call a 'network'.
Hearing this speculation, we might beg to differ with them: "no, the parts are not similar enough to be a 'network' ".
They would point out, "interesting … your concept of a network is quite specific … the network 'nodes' need to share some specific, unspoken 'quality' for you to accept them as 'nodes' in a 'network'."
Our alien friends decide to add something: "You do understand that you are like this clock? Your brain, at the very least. The 'connected' 'parts' in your brain do not have these qualities that, if you were to see them, would qualify as a 'network' to you. And yet, you assume they do. Whether the nervous system or the brain is a network should be a scientific question about the natural world, but since 'network' is a mysteriously defined word in your mind, there is no way to even ask that question. And yet, even though humans know almost nothing about their minds, they say 'yes, a nervous system is a network, and so is the brain … just look at those neurons and dendrites and synapses'. They say this confidently, even though there's no evidence that this is the appropriate way to view these structures, and no evidence that they behave as 'nodes' do in the human perception of human-constructed networks."
The moral of the tale … if it's in the real world, and it's not something that a person constructed and called a 'network' ... then it's not likely to be a 'network', despite the efforts of your imagination, or the structure of your simulation.
Tuesday, August 26, 2014
The Problems with Recursion
Is recursion only something we see in the world, or is it something in the world itself?
This is a question we should be poring over, and puzzling about. But instead I see people either in full thrall to some kind of pan-recursionism, or else denying that it could exist anywhere.
If we're ever able to discover a reasonable answer to this question, we may not be able to remember it for long enough to make use of it. Recursion is very appealing, and the reason seems to relate to the mechanism behind the human language faculty, something we're all born with.
Human language is some kind of cyclic composition faculty, which interacts with and recruits from the rest of the brain in surprising ways. We see superficial externalized aspects of this cyclicity in written and spoken language, such as 'phrases within phrases ad infinitum'.
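As a toy illustration of that external cyclicity, here is a three-rule grammar, invented purely for illustration, in which a noun phrase may contain a prepositional phrase that in turn contains another noun phrase:

```python
# A toy sketch of 'phrases within phrases': the grammar is invented
# for illustration and says nothing about the faculty itself.
import random

def noun_phrase(depth):
    np = random.choice(["the cat", "a book", "the shelf"])
    if depth > 0 and random.random() < 0.7:
        np += " " + prepositional_phrase(depth - 1)
    return np

def prepositional_phrase(depth):
    return random.choice(["on", "near", "behind"]) + " " + noun_phrase(depth)

random.seed(3)
print(noun_phrase(depth=4))   # prints a nested phrase, e.g. "the cat near a book on the shelf"
```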
If the language faculty, at its core, is a kind of cyclic composition engine, it's no wonder that we find recursion so seductive. We have a "recursion meter", if you will, in a prominent place in our cognition. We can sense when something could be perceived 'recursively'.
It also seems to be deeply integrated with our desire for simple, comprehensible symbolic theories about the world, probably responsible for the phenomenon Charles Sanders Peirce called abductive reasoning. We look for self-similar patterns embedded within each other. The faculty forces us to look for simple theories that enlighten us, and hence explain something to us, about hopelessly complex-looking phenomena -- Newton's laws of motion are a classic example. Although the modern mathematical study of recursion is a 20th-century phenomenon, it's only a refined version of something that clearly impacted human efforts long before. Which, again, is not surprising, if it's part of the innate language faculty.
All of this doesn't mean, though, that recursion is somehow the 'best tool' for people, in all, or even any, situations. There's a reason for that. It's known as reality.
Take programming languages. From LISP to Haskell, languages that encourage the compression of computational representation into recurrence relations are easy to define, sometimes inspiring to use, but they relate poorly to the non-logical side of programming.
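A small example of that compression, written here in Python for continuity: the sum of a list defined as a recurrence on a smaller list.

```python
# The 'compression into a recurrence relation' described above:
# the sum of a list defined in terms of the sum of a smaller list.
def total(xs):
    return 0 if not xs else xs[0] + total(xs[1:])

print(total([1, 2, 3, 4]))   # 10
```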
By the 'non-logical side of programming' I mean nearly everything. That will surprise most programmers.
Almost nothing in the act of programming involves 'logic', in the sense of 'rule-based symbol manipulation'. But it does involve logic in the older sense of 'good thinking', a sense that is much closer to the natural sciences, where there is very little presumption that we have somehow magically turned the world into symbols. There is far less confusion, within the natural sciences, between "the map" (formal notation, math, idealization etc.) and "the territory" (the world outside our minds, which we've a very limited, and specific, ability to perceive.)
Think of any symbol in any programming language -- let's take the word 'if' in a formal conditional construct. What does the 'if' mean? We can construct a machine that satisfies us with its conditional-like behavior, which we can call 'if' -- but 'if', the word itself, has a mind-internal meaning. There's no way to "teach" a machine what we understand by the word 'if'. We can only inject into a machine a behavior that people will typically perceive as 'conditional', if they know it was constructed by another human being.
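One way to see that 'if' is injected behavior rather than understood meaning is the old Church-encoding trick, where 'conditional behavior' is nothing but function application -- a minimal sketch:

```python
# 'If' as pure injected behavior: Church-encoded booleans. The machine
# exhibits behavior we perceive as 'conditional'; the word 'if' and
# its meaning stay in our heads.
TRUE  = lambda a, b: a   # 'true' selects its first argument
FALSE = lambda a, b: b   # 'false' selects its second

def if_(condition, then_value, else_value):
    return condition(then_value, else_value)

print(if_(TRUE,  "took the branch", "didn't"))   # took the branch
print(if_(FALSE, "took the branch", "didn't"))   # didn't
```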
So even at the most elemental level, symbols mean nothing without people. We can of course force machines to react to people-assigned representations of symbols. But the satisfying reactions, the assignments, the low-level interpretation, the high-level interpretation, the user's feelings -- none of these are definable as a formal rule-based system of symbolic logic. The symbolic logic is only a shorthand, only a set of partially-ordered artifacts, little more than markings on paper, constructed by and awaiting the massively complex use and expectations of the human cognitive faculties.
Which is why recursion is so 'dangerous'. It's simply an appealing feature of symbolic logic, which appeals to a very specific aspect of a particular faculty in our minds. But we don't live in some biological cyclic compositor. We live in a complex world that we do not even slightly understand, which our complex brains impose complex interpretations upon. And the abilities of these complex brains are not only understood poorly -- that would be excusable -- but they are actively misunderstood by almost all computer scientists, who regularly reside within fallacies that were understood millennia ago.
These misunderstandings exist because computing began as an engineering discipline: automating production, and building tabulators and calculators. Within the computing discipline, people create and use formal systems. But I believe the limits of these formal systems were reached decades ago, and that computing will remain in the dark ages until the biology of programmers and users -- the role of people in the systems they create and use -- is studied from a natural science perspective, instead of from a seat-of-the-pants pragmatism, exalting "whatever works", "whatever's profitable", and "whatever gets the product out the door".
… a few more points.
If a cyclic composition mechanism exists in the mind, in whatever form, that would mean it exists in nature. A machine of some kind that exhibits perception and generation of recurrent relations would then be the result of natural physical laws and our human genetic endowment. It could be a very simple machine, one that is 'optimal' in some sense: the next cycle can proceed by ignoring all but the 'most important result' of the previous cycle's work. What's being optimized is not clear. What the mechanism is remains unknown, so we cannot begin to know how this 'simple' set-construction mechanism appeared.
Cell self-reproduction is often presented as a candidate for symbolic computational recursion in nature. But it's not yet clear whether the 'genetic component' (genes as narrowly defined by molecular biologists) is the decisive one in the recurrent relation, or whether these genetic 'symbols' (which are 'symbols' only by a rather extreme metaphor) merely piggy-back on biophysical reproductive behavior, such as the vesicles that Pier Luigi Luisi has pointed towards in abiogenesis research.
Even further from the mark, and further towards our recursive 'perceptual trap', if that's not putting it too strongly, are fractals.
Fractals in nature, when most compelling, mostly look like a gradient of energy, dissipating through a medium that reacts similarly at a few different scales. A bullet-hole through glass looks like this: lots of breaks close to the impact, with geometrically similar but larger and fewer breaks as the energy dissipates.
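A crude sketch of that picture -- self-similar 'breaks' whose count falls and whose size grows as energy dissipates outward. Every constant here is invented; it's a cartoon of the geometry, not a physical model.

```python
# Energy dissipating through a medium that reacts similarly at a few
# scales: many small breaks near the impact, fewer and larger ones
# further out. All constants are invented.
def breaks_at(radius, energy):
    if energy < 1.0:
        return
    count = int(energy)               # more energy -> more breaks
    size = radius / (1 + energy)      # less energy, more room -> larger breaks
    print(f"r={radius:5.1f}: {count:3d} breaks, ~{size:.2f} units long")
    breaks_at(radius * 2, energy / 3) # next scale out: fewer, larger

breaks_at(1.0, 81.0)
```

Note that even here, the recursion is in the description, not in the glass.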
Even in those cases, the recursive characterization is part of human cognition, not nature. That's ok, if we understand that it's only for calculation purposes.
As physical idealizations, fractals fall apart rather quickly. A tree is a good example. Leaves do not vary in size after they've reached maturity. The behavior of any organism involves a great many factors, which change radically at different scales, and cannot even be superficially characterized as fractal. Fractals are a phantom of our mental recursive trap, and in the natural sciences, they, and recursion generally, need to be recognized as a kind of potent fantasy.