Re-Unifying the Subjective and the Objective


Linas Vepstas <[email protected]>
July 2001


Standard Disclaimer: I am a novice at philosophy, and presumably, it shows. This is also a draft, not a finished article. You have been warned.

Summary: The following is a casual exploration of some ideas on which post-modern philosophy/theology seems to have settled. My attempts, as always, are to see whether concepts from modern mathematics and physics can shed light on some old, arguably outstanding problems of philosophy.

Realism, Absolutism, Dualism, and Linguistics

It has been argued that the only things one can really know are subjective: those things that one feels, one senses, and those things that can be discovered either through the application of rational thought to these subjective sensations, or through (divine) inspiration. Until modern times, Western thought believed in dualism: that there is a 'something-out-there' that is distinct from the 'in-here-with-me'. Post-modern thinking has pretty much dispensed with the need for dualism, replacing it with linguistic analysis [e.g. Don Cupitt, Mysticism After Modernity]. However, dualism persists in much thinking, especially in that of scientists and psychologists, in part because the sense of the 'out-there' is so strong, and because of the tremendous successes of science in determining the detailed workings of the 'out-there'. So far, the physicists have fared the best: one need only point at the flowering of modern technology to see that they do indeed seem to have grasped something 'out there', and have been able to build tremendous machines based on an understanding of its workings.

By contrast, the ethicists have been completely routed: there is no science of ethics, because there seems to be no ethical statement that is equally true and discernible everywhere. Western ethics is not Islamic ethics, and Buddhism turns it all on end. Ethical truths seem to be submerged in a morass of cultural relativism; ethical laws can be violated in a way that the law of gravity cannot. Although this cultural mess seems to be of Durkheim's making, there is a very strong sense that forward progress can nonetheless be made through the ideas of structural anthropology introduced by Claude Lévi-Strauss. The notion here is that while there may not be absolute truths, we can uncover structural relationships between beliefs. There is a blurry line between structural anthropology and linguistic literary analysis.

Let us pause to review the concept of structural analysis (structural anthropology) with a few examples. Modern man has RGB or HSV color spaces, in which we can assign three numbers, representing hue, saturation and value, to any visible color; but this is not how primitive societies do it. When you go out into the field and ask a native speaker 'what color is this?' (i.e. what is the word in your language for this color?), one finds that the most primitive cultures have three words: white, black and red. Probing deeper, one might find a fourth: a greenish-blue. Soon the vocabulary runs out. What one discovers is that the HSV space has been carved up into regions. How the space is carved up depends on the culture (although the white-black-red structure seems universal). It is this carving up that is the 'structure' of structural anthropology. That the carving up is culturally relative is perhaps better illustrated by phonemes. English-speaking tourists in Russia will discover theatre posters advertising Shakespeare's Gamlet. This is no typo: Russians can't hear the difference between G and H, and the correct spelling, in Russia, is Gamlet. They also have trouble with F and P: Parisians speak Prench, and the given name Francis is pronounced Prancis. The Russians aren't stupid: Americans have trouble getting the French accent right, and all hope is lost if you're a European wanting to learn Chinese. The point here is that the 'structures' are deeply ingrained in the brain, and are not lightly overcome.
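
To make this notion of 'carving up a space' concrete, here is a minimal sketch in Python. The boundary values are invented purely for illustration (no field study uses these numbers): a culture's color vocabulary is just a partition function on HSV space, assigning exactly one word to every visible color.

    # A toy 'color vocabulary' as a partition of HSV space.
    # h = hue in degrees [0, 360); s, v = saturation and value in [0, 1].
    # The thresholds below are invented, purely for illustration.
    def color_word(h, s, v):
        if v < 0.2:
            return 'black'
        if s < 0.2 and v > 0.8:
            return 'white'
        if h < 40 or h >= 330:        # the reddish wedge of the hue circle
            return 'red'
        return 'grue'                 # a single word spanning green and blue

    print(color_word(10, 0.9, 0.9))    # -> 'red'
    print(color_word(200, 0.8, 0.7))   # -> 'grue'

A different culture would draw the boundaries elsewhere; it is the partition itself, not the individual words, that is the 'structure'.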

Russians conjugate their verbs the way other Europeans do, but they also decline their nouns. Declension is a strange concept if one grew up without it: one carves up the space of relationships between things in seven ways: What? Whose? For whom? About what? With what? Where? Aha! (Not just Slavs, but also Balts do this; Latin does too, though with six cases.) Of course, there is an infinite number of relationships between objects, and in the English language we feel we can express all of them. However, by limiting our nouns to singular and plural, it's as if we were color primitives limited to white, red and black: we can describe any color, but we must use novel constructions that are the linguistic equivalent of 'lighter, and the opposite of red'. Again, the fact that we labour under this restriction is not obvious to the native speaker of, say, English. While the common man perceives past, present and future, the linguist perceives that there is also the future perfect, and the infinitive, and others as well. We have carved up all possible time and action relationships into a half-dozen domains: this is the 'paradigm', the 'structure' of structural anthropology.

I've dwelt on this notion of 'structure' because it will be important to my arguments later. Structure is like syntax: it defines a (culturally relative) relationship between things. Structure (or syntax) must be distinguished from semantics, 'the meaning of things'. Semantics is much harder than syntax. Syntax is 'easy' and broadly deployed: you can write down syntax rules for spoken languages, and in many other places. Przemyslaw Prusinkiewicz has shown how to use syntactic systems (more precisely, Lindenmayer systems) to describe plant morphology (i.e. the shape and development and flowering of those living green things in your garden). Coupled to diffusion (differential) equations, he is well on the way to describing the finest details of growth and flowering, and it doesn't take much imagination to realize that the genetic expression of DNA, regulated by the diffusion of enzymes and factors and lysing proteins, must follow a very, very similar path. Just in case I lost you there, and you are wondering 'what does he mean by "syntax", then?', it's just what you think: that business of 'diagramming sentences' they taught you in grade school. If you're a programmer, think parsing, think BNF (Backus-Naur Form). Think lex, the standard unix lexical analyser. Maybe Chomsky can take credit for the notion of syntax in linguistics, but in fact the notion of syntax is deep in the heart of mathematics. Logicians know that one proves theorems with (semi-)Thue systems; they also know that semi-Thue systems are equivalent to Church's lambda calculus (whence the computer programming languages Lisp and Scheme), which in turn is provably equivalent to Turing machines. There is the statement 'if it's sayable, it's sayable in lambda calculus', which is in fact completely equivalent to the statement that 'if it's computable, it's computable with a Turing machine.' This is because tables of syntax rules and syntactic parsers can be shown to map one-to-one to Turing machines. It is no accident that AI researchers program in LISP.
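
To make 'syntactic system' concrete, here is a minimal Python sketch of Lindenmayer's original L-system, his model of algae growth (the plant-morphology systems of Prusinkiewicz are elaborations of the same rewriting idea):

    # Lindenmayer's original L-system: two symbols, two rewrite rules,
    # applied in parallel to every symbol at each generation.
    # A -> AB, B -> A; the string lengths grow as the Fibonacci numbers.
    rules = {'A': 'AB', 'B': 'A'}

    def next_generation(s):
        return ''.join(rules[c] for c in s)

    s = 'A'
    for n in range(6):
        print(n, s)        # 0 A / 1 AB / 2 ABA / 3 ABAAB / ...
        s = next_generation(s)

Pure syntax, and yet it generates the branching, organic-looking growth patterns of a living thing.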

The point here is that one can go very, very far with structural analysis: one can make powerful statements about linguistics, one can talk about the expression of DNA with the same concept, and the mathematicians top them all: 'anything sayable is sayable with lambda calculus.' And yet there is also a limit to syntax: it's called semantics. Syntax doesn't seem to carry any meaning. AI still can't deal with 'meaning' (although AI believers think that this is so only because the 'frame problem' is so hard). Textual analysis (whether literary criticism, biblical analysis, the 'ineffableness' of poetry, or the analysis of mystical texts) conflates the search for structure with dancing about meaning. That syntax and structural analysis are limited, and do not encompass semantics as a proper subset, is perhaps most clearly shown in mathematics, where Gödel's work blew the doors off theorem proving: there are statements (Gödelian statements) which are true (and, oddly, can (easily) be seen by humans to be true), but are not provable (viz. no AI (i.e. Turing) machine will ever deduce them).

Let us return to our theme of exploring the philosophical concept of 'Realism' and its implied duality of 'subjective-objective'. Realism, under the name of 'platonic reality', is the working viewpoint of the active mathematician. It seems that 2+2=4 everywhere on the planet, independent of cultural upbringing. A trained mathematician can discover truths and make statements that can be verified by any other mathematician, at any other point in space and time. Out of the subjective reality of 'what one feels, what one senses, what one deduces', all mathematicians can always deduce the correctness of any given theorem. It's not like ethics, where 'good' and 'bad' depend on your upbringing. It seems that math, somewhat like physics, is 'out there', waiting to be discovered. At first blush, it seems that these truths are inviolable: indeed, can God change the value of pi, or make 2+2 equal anything other than 4? But on closer examination, mathematics seems to be built on thin air. One must work from a set of axioms, say the Zermelo-Fraenkel system, and one must believe that they are true. The 'truth' of some axioms may be doubtful: the axiom of choice is troubling, and so one might say only that one merely hopes, or believes, that the axioms are self-consistent, and attempts to derive true statements based on a working set of assumptions (e.g. ZFC). But in math there seem to be no absolute truths; the Gödelian paradox seems to have robbed the platonic world of certainty. To put it more bluntly: we can only know that 2+2=4 if we first believe in Peano's axioms; it seems impossible to prove that the axioms are true in any absolute sense.
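
To see exactly what 'believing in Peano's axioms' buys us, here is the derivation of 2+2=4, spelled out as a worked equation. Write S for the successor function, so that 2 = S(S(0)) and 4 = S(S(S(S(0)))); addition is defined by the two axioms a + 0 = a and a + S(b) = S(a + b). Then:

    2 + 2 = 2 + S(S(0))            by the definition of 2
          = S(2 + S(0))            by a + S(b) = S(a + b)
          = S(S(2 + 0))            by a + S(b) = S(a + b), again
          = S(S(2))                by a + 0 = a
          = S(S(S(S(0)))) = 4      by the definitions of 2 and 4

Every step is a purely syntactic rewrite licensed by an axiom; nothing in the computation certifies the axioms themselves.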

Anyway, if that was meant to be an introduction, it was a bit general. But how else to make the topic accessible?

The Search for Meaning

Well, we're finally rolling around to the purported main thesis: that there is a plausible physical mechanism for meaning (although it's a stretch). Or maybe I should say, 'hey, I've got this great idea, and maybe it's hokey to you, but it's great to me'. The idea proceeds through an analogy. Recent study of the statistical mechanics of chaotic systems shows that there is a sharp divide between the discrete and the continuum. Traditionally, in chaos theory, one follows the point trajectory of a particle in phase space. For iterated equations, the time steps are discrete, and for a suitable iterator, the trajectory is chaotic. However, one can also ask about the continuum mechanics (or the statistical mechanics) of the same system. Surprisingly, the continuum mechanics is very, very different. The time evolution of a continuous, differentiable density under a chaotic map doesn't look 'chaotic' at all: densities quickly settle down to a uniform distribution. The details of this might be debatable were it not for the existence of a particular exactly solvable (set of) cases: the time evolution of the Bernoulli map (also the logistic map, and, importantly, the Baker transform) [Dean Driebe, of the Prigogine school, Fully Chaotic Maps and Broken Time Symmetry, Kluwer Academic Publishers, 1999]. The point trajectories are fully chaotic; the continuous, differentiable densities can be decomposed into eigenstates with decay modes. Yada yada yada.
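
Rather than yada-yada past it, here is a small numpy sketch of the divide (my own toy illustration, not Driebe's calculation). The Bernoulli map is x -> 2x mod 1. Followed point by point, nearby trajectories separate exponentially; followed as a density under the Frobenius-Perron (transfer) operator, a smooth initial density relaxes to the uniform one, its deviation halving at each step (the slowest decay mode, the Bernoulli polynomial B1(x) = x - 1/2, has eigenvalue 1/2):

    import numpy as np

    # Point dynamics: two nearby points under x -> 2x mod 1.
    # (Floats shed one bit per doubling, so we run only a few steps.)
    x1, x2 = 0.314159265358979, 0.314159265358979 + 1e-12
    for t in range(12):
        x1, x2 = (2 * x1) % 1.0, (2 * x2) % 1.0
    print('separation after 12 steps:', abs(x1 - x2))   # ~ 2^12 times larger

    # Density dynamics: the Frobenius-Perron operator of the same map,
    # (L rho)(x) = ( rho(x/2) + rho(x/2 + 1/2) ) / 2,
    # applied to a smooth, non-uniform initial density on a grid.
    N = 1024
    grid = (np.arange(N) + 0.5) / N
    rho = 2.0 * grid
    for t in range(8):
        rho = 0.5 * (np.interp(grid / 2, grid, rho)
                     + np.interp(grid / 2 + 0.5, grid, rho))
        print(t, round(rho.max() - rho.min(), 6))   # halves every step

The same map, watched through points, is fully chaotic; watched through densities, it is a tame exponential relaxation toward equilibrium.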

Anyway, this part is TBD. The basic leap here is that we should consider the output of a Turing machine as a special kind of chaotic process. We are used to seeing the 'point trajectories' of Turing machines: viz. the 'ones and zeros' of the process. Yet perhaps we should be asking what the continuum, statistical mechanics of the 'Turing process' should be. We might try this in a number of ways: what is the 'average' action of a single Turing machine on all possible inputs? Alternately, what is the 'average' action of all possible Turing machines on a single input? This is a hard problem: we are immediately beset by two difficulties. First, what do we mean by 'average', and what precisely is the sense of a 'distribution'? Second, we have the halting problem: Omega, the fraction of all Turing machines that halt, is 'unknowable'. That this is hard in practice, not just in theory, is highlighted by the fact that even the tiniest 'busy beavers' can compute for a very long time. But in fact we do have classes of Turing machines for which we can begin an exploration. These are the machines which use the 'genetic algorithm' to compute solutions to NP problems. For these machines, the set of inputs can be arranged in an ordered way, and we can define meaningful 'averages' and 'distributions'. (The class of Turing machines for which we can do this is in fact much broader: not only algorithms that perform simulated annealing or Monte Carlo methods, but also (random) networks of cellular automata.) For this class of algorithms, we already talk quite naturally of continuum distributions; we typically don't talk about the 'point trajectory' of any given instance of a computation.
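
As a toy illustration of such averages (a hypothetical stand-in for the genetic-algorithm machines above, not anyone's published model), here is an ensemble of random bit-flipping hill-climbers on the 'count the ones' (OneMax) problem. Any single run is a jagged, history-dependent 'point trajectory'; the mean fitness over the ensemble is a smooth, reproducible curve:

    import random

    BITS, RUNS, STEPS = 64, 500, 100

    def hill_climb(rng):
        # One 'point trajectory': flip a random bit, keep the flip only
        # if the fitness (the count of ones) does not go down.
        genome = [rng.randint(0, 1) for _ in range(BITS)]
        fitness = sum(genome)
        trace = []
        for _ in range(STEPS):
            i = rng.randrange(BITS)
            genome[i] ^= 1
            if genome[i]:               # flipped 0 -> 1: uphill, keep it
                fitness += 1
            else:                       # flipped 1 -> 0: downhill, revert
                genome[i] ^= 1
            trace.append(fitness)
        return trace

    rng = random.Random(1)
    traces = [hill_climb(rng) for _ in range(RUNS)]
    for t in range(0, STEPS, 20):
        mean = sum(tr[t] for tr in traces) / RUNS
        print(t, 'single run:', traces[0][t], 'ensemble mean:', round(mean, 2))

It is the smooth ensemble curve, not any one run, that plays the role of the 'continuum mechanics' of the computation.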

The next leap is this: is it possible that this continuum limit of the action of a Turing machine, this 'statistical mechanics', is so different that it is distinct from what we usually talk about when we say 'syntax', and bridges over to what we mean when we say 'semantics'? There are any number of poems that we can write about 'love', but no one poem, from the linguistic, literary, structural point of view, can ever truly define 'love', however well it succeeds in evoking it. It seems somewhat crazy to state that 'love' is that thing which is the average over all possible poems about love, but post-modern structural analysis seems to drive us to that point. Indeed, that is what Don Cupitt seems to attempt to do: there is nothing there but those words, and he rejects the notion that the thought of the ineffable comes first, followed only later by the words that try to capture the essence of the ineffable through poetic expression. Rather, the expression in language is primary, and in a sense the 'only' thing. But we know that half of language is syntax, on which we have a good handle, and the rest is semantics, which is a deep mystery. Is it possible that Cupitt is in a sense 'right', and that the 'meaning' we derive in our brains is due to a certain continuum average that our neurons are able to take over all possible syntactical expressions of an idea? Is that in fact what an idea is: the continuum, statistical average of all possible linguistic expressions of that idea? Thus we seem to be able to bridge between the Kantian notion of absolutes (the idea as a 'thing-in-itself') and the post-modern idea that linguistic expression is primary.

This idea may sound nutty, but it is pinned on the fact that we already know that there are systems (chaotic systems) that look very, very different depending on whether one is exploring their point dynamics or their continuum mechanics. We also know that syntactic systems permeate all living organisms deeply. Finally, we also know, thanks to Gödel, but especially as elaborated by Roger Penrose [The Emperor's New Mind and its sequel], that syntax is 'dead' and 'thoughtless'. Penrose tries to find the 'missing magic ingredient' in quantum mechanics. It may well lie there: quantum mechanics is a magnificent machine for taking the averages of things. It might be argued that the reason 'chaotic quantum systems' seems to be a misnomer, a bust, is precisely because the quantum systems have taken the 'continuum average' and only yield Driebe's mechanics of continuous, differentiable densities. Whether or not quantum mechanics has anything to do with human cognition, I don't know. It's quite possible that a classical neural net can act as the 'averaging' element over syntactical expressions. At the risk of repeating myself, one must not expect this average to look at all like the action of any one given Turing machine: something different arises when we bridge over to the continuum.

Now, in the above, dozens of questions lie begging; but as it is late, I am going to sleep. More chapters later.



Linas Vepstas
July 2001
Copyright (c) 2001 Linas Vepstas
All rights reserved.