Sunday, August 2, 2009

Comments on an Evo Psych Primer

I can't figure out a good introduction, and I'm sure you can do without one.
"The brain is a physical system whose operation is governed solely by the laws of chemistry and physics. What does this mean? It means that all of your thoughts and hopes and dreams and feelings are produced by chemical reactions going on in your head (a sobering thought)."
This is untrue, but of course there's no way for them to know this. You may want to contemplate this and the idea of scientific arrogance next to each other.
"To say that the function of your brain is to generate behavior that is "appropriate" to your environmental circumstances is not saying much, unless you have some definition of what "appropriate" means. What counts as appropriate behavior? "Appropriate" has different meanings for different organisms."
Time to hawk my definition of life: life has goals. (Also time to use a word that isn't 'appropriate.') For a brain to produce intelligent reactions to the environment, it first has to figure out, explicitly or implicitly, what an intelligent reaction is.*

*(Logically redundant. The human brain - specifically yours - is not strictly logical, so the qualifier is grammatically necessary.)

The main action of the brain is simply to aid the life-form in continuing to pursue and defend goals. However, this is the action of every organ; typically, an intelligent action is one that supports one of the sub-goals. So, instead of "'intelligent' has different meanings for different organisms," say "different organisms pursue different goals."

A good philosophical definition makes everything obvious. Also note that the concept 'behaviour' easily generalizes to non-intelligent reactions like plant immune responses...even though the whole point of the word is to distinguish brain-reactions from the non-brain kind within biology.

"Realizing that the function of the brain is information-processing has allowed cognitive scientists to resolve (at least one version of) the mind/body problem. For cognitive scientists, brain and mind are terms that refer to the same system, which can be described in two complementary ways -- either in terms of its physical properties (the brain), or in terms of its information-processing operation (the mind)."

Scientific arrogance is negligible compared to philosophical arrogance.
Though the fault here is a complete misunderstanding of the mind/body problem. As far as philosophy is concerned, the information-processing is a physical property, which is probably why cogsci has found that its 'mind' and 'brain' are identical. On the other hand, note that Cosmides and Tooby acknowledge that this is only 'one version' of the problem.
"Principle 3. [...] In other words, our intuitions can deceive us."
I guess my intuition is just really good. When I examine my consciousness to ask how I see, it tells me that it doesn't know. Trying it again to make sure, I just found out it's practically impossible to even direct my awareness at the problem. I can think about what I'm seeing, or I can think about the thoughts these sights give me, but my mind's eye is blind to anything upstream or in between.

Your incompetence at epistemology can deceive you. Your consciousness rarely does.
This is basically religious dogma on the part of scientists - that your intuition is just about useless.
Generally this is because scientists refuse to relinquish their prejudices about what the intuition can do, and therefore cannot acknowledge its limitations and use it for what it is actually good for.

"A basic engineering principle is that the same machine is rarely capable of solving two different problems equally well. We have both screw drivers and saws because each solves a particular problem better than the other. Just imagine trying to cut planks of wood with a screw driver or to turn screws with a saw."

That's what is so amazing about general-purpose computers, actually. Essentially they're math machines, doing simple operations on binary numbers. And yet, they can solve basically any information problem. (Purpose-built circuits are more efficient in their domain, though.)
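
A toy sketch of my own (nothing to do with the primer) to make that concrete: a single primitive operation on bits is enough to build every logical function, and from those, arithmetic.

    # Python sketch: NAND alone builds every logical function, and
    # from those, arithmetic - the root of the 'math machine.'

    def nand(a, b):
        # The single primitive: 1 unless both inputs are 1.
        return 0 if (a and b) else 1

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor(a, b):  return and_(or_(a, b), nand(a, b))

    def half_adder(a, b):
        # Adds two bits, returning (sum, carry).
        return xor(a, b), and_(a, b)

    for a in (0, 1):
        for b in (0, 1):
            print(a, '+', b, '=', half_adder(a, b))

Chain enough of these together and you get an adder, a multiplier, and eventually a machine that will run anything you can state precisely.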

"To solve the adaptive problem of finding the right mate, our choices must be guided by qualitatively different standards than when choosing the right food, or the right habitat. Consequently, the brain must be composed of a large collection of circuits, with different circuits specialized for solving different problems."

And here's where the above fact comes in. No, the brain doesn't have to be composed that way, but specialization is more efficient. I suspect that when your general-purpose circuits can properly solve a problem according to some qualitative standard, they send out the feeling we label 'understanding.' When you understand a goal, you can reason effectively around it. To understand, then, is to apply the proper meaning to various stimuli.

"You can think of each of these specialized circuits as a mini-computer that is dedicated to solving one problem. Such dedicated mini-computers are sometimes called modules. There is, then, a sense in which you can view the brain as a collection of dedicated mini-computers -- a collection of modules."

I have a math module. It sleeps most of the time and takes many seconds to boot up. From a standing start I can barely count. Once it's up, calculus is my bitch.

"(E.g., human color constancy mechanisms are calibrated to natural changes in terrestrial illumination; as a result, grass looks green at both high noon and sunset, even though the spectral properties of the light it reflects have changed dramatically.)"

Mine seem to be dramatically overpowered; it wasn't until nearly adulthood that I noticed that well-lit coloured objects throw colour stains onto nearby objects. Not long after, I found out - by reading about it - that you can't see colour in the dark. I immediately went into a dark room and had trouble confirming it, because my brain automatically assigned everything a colour, though I suspect it would have been easier in a room that wasn't full of familiar objects. I rarely notice the colour of lighting unless I specifically attend to it. For example, as a kid I had red curtains that turned my room red during the day when they were closed, but I could only tell everything was red when I specifically asked myself about it.

So perhaps my intuition is deceiving me? Perhaps I just think I can see colour, and I'm fooling myself...well, it's actually highly testable: when a light is turned on in a dark or discoloured room, am I surprised by the revealed colour? That has happened, three or four times. The system is powerful, but it does like to guess at things it can't actually know.

"The more crib sheets a system has, the more problems it can solve. A brain equipped with a multiplicity of specialized inference engines will be able to generate sophisticated behavior that is sensitively tuned to its environment. In this view, the flexibility and power often attributed to content-independent algorithms is illusory. All else equal, a content-rich system will be able to infer more than a content-poor one."

Philosophically, I've found the best way of looking at this is that the brain learns things both through the senses, in single organisms, and through evolution, across ancestry. Even with the crib sheets, logical reasoning is necessary to produce true inferences, which means that the privileged hypotheses are essentially just innate lessons.

This transforms the last statement into "Systems with more knowledge can infer more." Good philosophy tends to make everything obvious.
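
A minimal sketch of the now-obvious version, with numbers I've invented purely for illustration: two Bayesian learners see the same evidence, but the one with an informative (innate) prior reaches a confident conclusion far sooner.

    # Python sketch (all numbers invented): a 'content-rich' learner with an
    # innate prior outpaces a 'content-poor' one on identical evidence.

    def posterior(prior_safe, n_harmless_tastings):
        # P(berry type is safe | n harmless tastings), assuming an unsafe
        # type has a 50% chance per tasting of causing sickness.
        p_data_if_safe = 1.0
        p_data_if_unsafe = 0.5 ** n_harmless_tastings
        num = p_data_if_safe * prior_safe
        return num / (num + p_data_if_unsafe * (1 - prior_safe))

    content_poor = 0.5   # uniform prior: no crib sheet
    content_rich = 0.9   # innate lesson: berries like this are usually safe

    for n in range(5):
        print(n, round(posterior(content_poor, n), 3),
                 round(posterior(content_rich, n), 3))

Same rule, same data; the only difference is the innate lesson baked into the prior.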

What I'm trying to say here is that if a philosophy is being obtuse, it's probably because it's bad philosophy and you can ignore it as a source of truth. At worst you skip inefficient learning. In general, really, the job of being understood falls mostly to the speaker or writer.

"Having no crib sheets, there is little they can deduce about a domain; having no privileged hypotheses, there is little they can induce before their operation is hijacked by combinatorial explosion."

I guess this answers a question I've had: how did I learn philosophy? I certainly wasn't taught it, and I didn't read anything specifically calling itself philosophy. I did, however, read a lot, and I paid attention.
The above statement is identical to one I made last post. ("...if-then") Good philosophy ignores evidence until the final stages, because otherwise you get combinatorial explosion. Instead, work from assumptions and simply check later whether those assumptions make any sense.

"Machines limited to executing Bayes's rule, modus ponens, and other "rational" procedures derived from mathematics or logic are computationally weak compared to the system outlined above (Tooby and Cosmides, 1992)."

Fascinating that they think this. (BTW, a check: compare the rate at which toddlers learn words to the rate of incoming information. I found the ratio is gargantuan.) Generalize the concept 'machine,' and science is a machine, so this idea reflects back on it: science needs intuition. 'Scientific' findings can indeed be very interesting and very powerful, but for the most part the reach of science is limited. This is the basic reason I keep pointing out this particular flawed dogma in science culture. Until scientists recognize it, there will remain two kinds of scientists: the ones who keep using the data to support things it doesn't actually say (nutritional science), and the ones who refuse to believe that we can find truth unless some data tells us so first. (New atheists. Also anti-historian sentiment: "Many of these accusations revolve around the idea that we cannot prove anything about the past, so evolutionary claims cannot be verified.")
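
For the curious, here's the back-of-envelope version of that check; every figure is an assumption of mine, not a measurement.

    # Python back-of-envelope (all figures assumed, for illustration only):
    # rate of raw incoming information vs. rate of word learning.

    waking_seconds_per_day = 14 * 3600
    sensory_bits_per_second = 10_000_000    # assumed order of magnitude
    words_learned_per_day = 10              # assumed ballpark for a toddler

    incoming_bits = waking_seconds_per_day * sensory_bits_per_second
    print('%.1e bits of input per word learned'
          % (incoming_bits / words_learned_per_day))   # ~5e10 : 1

A content-free induction machine would have to sift all of that; a content-rich one already knows where to look.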

"experts can solve problems faster and more efficiently than novices because they already know a lot about the problem domain."

Very good. It's more that experts can solve problems at all, though.

"In other words, our modern skulls house a stone age mind."

Evolution can happen much faster than this phrase implies. Civilization has certainly impacted the stone-age template; if nothing else, look at lactose tolerance. Small adaptations are no less likely in the brain.

"For this reason, evolutionary psychology is relentlessly past-oriented."

All knowledge is past-oriented. The whole point is to use the past, which we can see, to understand the future, which we can't see except by using the past.

"The premises that underlie these debates are flawed, yet they are so deeply entrenched that many people have difficulty seeing that there are other ways to think about these issues."

Cosmides and Tooby are really doing a good job, overall.

After reading this, I'd have to say that I have an EP hypothesis. Specifically, that all humans are endowed with not one but at least two general-purpose learning and reasoning architectures. I call them the 'rational logic system' and the 'emotional logic system' simply because of the way they appear to present results. Basically, one is "I think that" and the other is "I feel like." The first can solve math problems. The second seems primarily interested in causation, using correlation to try to detect it.
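
A toy sketch of what 'using correlation to try to detect causation' might look like, in my own framing (a crude contingency heuristic, not anything Cosmides and Tooby propose):

    # Python sketch of a correlation-as-causation hunch: compare outcome
    # frequency with and without the suspected cause.

    from collections import Counter

    def hunch(events):
        # events: list of (cue_present, outcome_present) observations.
        # Returns P(outcome | cue) - P(outcome | no cue).
        c = Counter(events)
        cue = c[(True, True)] + c[(True, False)]
        no_cue = c[(False, True)] + c[(False, False)]
        if not cue or not no_cue:
            return 0.0   # need both kinds of observation to compare
        return c[(True, True)] / cue - c[(False, True)] / no_cue

    # Ate the red berries and felt sick, most of the time:
    observations = ([(True, True)] * 8 + [(True, False)] * 2 +
                    [(False, True)] * 1 + [(False, False)] * 9)
    print(hunch(observations))   # ~0.7: a strong 'I feel like they're bad'

Note that a confound fires this heuristic just as readily as a real cause, which may be why the output presents itself as a feeling rather than a proof.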

I can't think of a good conclusion either, and I think you can do without one. In fact, make up your own, because it will be tailored to you.

3 comments:

nick said...

Nick from Unenumerated here. Great blog you've got. I just discovered it. On scientific arrogance versus intuition, I'd love to read what you have to say about Bayesian evangelists and their ideas that various intuitions that depart from Bayesianism are "fallacies". They've come up with long lists of supposed fallacies in which common intuition departs from supposedly rational Bayesian reasoning.

Most of the experiments supposedly uncovering these fallacies, IMHO, amount to experimental subjects not understanding the instructions, or not showing the expected respect for them, or both.

Example: the gambler's fallacy, wherein people read patterns into random data, and specifically think that future rolls of the dice will include "good luck" that makes up for earlier "bad luck". Obviously false for a truly random sequence.

But in nature, there are many complex phenomena that are not completely random but have cyclic components: migration seasons, hunting seasons, weather, and probably most importantly, human appetites. If somebody is hungry now, for example, and they eat regularly, they are more likely to be satiated later. In contrast, fair dice are a rare phenomenon. Now if we're dealing with a specific situation where important parts of reality are indeed truly random, like people losing their money in Vegas, it could properly be called fallacious thinking, but it's quite the fallacy to think that "luck cycles" thinking is fallacious generally.

Likewise, I suspect that most other supposed "fallacies" are of this nature -- adaptive kinds of thinking that the experimenter doesn't understand, and for which he substitutes his own, often unreal, abstract model.

Alrenous said...

Thank you for your comments and welcome!

I'd love to read what you have to say about Bayesian evangelists and their ideas that various intuitions that depart from Bayesianism are "fallacies".

And I'd love to write about it.

The summary is easy, though: human intuitions are Bayesian, in that evolution has guided them to be the best guess in a variety of situations.

I'll have to look at an actual list to confirm, but it's likely one of two things. First, they're targeting the intuitions that happen to no longer apply. Second, they're trying to twist 'Bayesian' into the general epistemological standard, and showing how these intuitions don't entirely match up.

I suppose there's also the honed-intuition front, which is interesting.

I'll have a look around and you'll likely see an article on it as my next one.

Most of the experiments supposedly uncovering these fallacies, IMHO, amount to experimental subjects not understanding the instructions, or not showing the expected respect for them, or both.

As a counterpoint: the study conclusions may not be wrong, but the design of the experiments makes it impossible to tell the difference.

But in nature, there are many complex phenomena that are not completely random

Indeed, turn it around: in what situation would such thinking be useful? As you point out, there are several situations where such thinking leads to rational behaviour.
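
To make it concrete, here's a toy simulation of my own construction: for a memoryless process a miss predicts nothing, but for a replenishing resource a miss genuinely presages a hit.

    # Python sketch: 'luck cycles' thinking fails on fair dice but
    # succeeds on a replenishing resource.

    import random

    def hit_rates(step, n=200_000):
        # Returns (overall hit rate, hit rate immediately after a miss).
        hits = after_miss_hits = after_miss_total = 0
        prev_miss = False
        for _ in range(n):
            hit = step()
            hits += hit
            if prev_miss:
                after_miss_hits += hit
                after_miss_total += 1
            prev_miss = not hit
        return hits / n, after_miss_hits / after_miss_total

    # Fair die: hit = rolling a six. Memoryless; both rates come out ~1/6.
    die = lambda: random.randrange(6) == 0

    # Berry patch: every miss leaves it fuller, every hit empties it.
    fullness = 0.0
    def patch():
        global fullness
        hit = random.random() < fullness
        fullness = 0.0 if hit else min(1.0, fullness + 0.1)
        return hit

    print('die:   overall %.3f, after a miss %.3f' % hit_rates(die))
    print('patch: overall %.3f, after a miss %.3f' % hit_rates(patch))

For the die the two numbers match; for the patch, a run of bad luck really does mean good luck is coming.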

The trick, which the Bayesians never seem to catch, is to know when to use it and when to ignore it. Growing up, I thought this was something every mature person knew, more or less.

nick said...

Many mature people believe many very strange things. I look forward to your article!