Friday, December 19, 2008

More on Journalists Suck; Thoughts About Consciousness

This, however, isn't the journalist's fault. Their sources are equally wrong. On consciousness, basically everyone sucks. But again, I like New Scientist. Despite the terrible reasoning, the actual data they display kicks ass all over the place.

"Specifically, what is it that makes the human mind so special? Like many people, I have always believed that the answer lies in our capacity for conscious thought."

"In fact, far from playing second fiddle to the conscious mind, subconscious thought processes may play a crucial role in many of the mental facilities we prize as uniquely human, including creativity, memory, learning and language."

None of these things - not one - is unique to humans. I formed each as a hypothesis, sequentially, and falsified them all. Cetaceans use language. Apes can be taught symbols. Creativity can be found in elephants. Memory? Are you even serious? Consciousness, in every non-magic theory, cannot be confined to humans.

As mentioned somewhere else entirely, humour is also not unique.

(There is a small chance that specialized neurons which are only found in humans are required for the mind node. However, even house flies sport recognizable emotions, chiefly panic, making this doubtful. Similarly, fruit flies can pay attention.)

Humans do, however, clearly have a drastically higher degree of consciousness than any other organism on the planet. For consciousness to exist it must interact; it must do something. Whatever this is, humans are much better (and probably more versatile) at doing it.

(Or, I reject epiphenomenalism as magic.)

Our technological superiority (also not unique, see Caledonian crows) is the same kind of thing as our cultural and linguistic superiority; a difference in degree, not in kind. Quantitative, not qualitative.
"Our subconscious is not an unthinking autopilot that needs to be subjugated by rationality, but a purposeful, active and independent guide to behaviour."
Which you can interrogate through your emotional problem-solving system.
"Some scientists go so far as to believe that it is responsible for the vast majority of our day-to-day activity and that we are nothing more than "zombies" guided by our subconscious."
Note that the second part of the sentence contradicts the first: you aren't a zombie if anything less than all of your activity is non-conscious. Second, the fact that I experience anything is disproof that I'm a zombie, and so the second part is simply wrong. To say that humans are zombies, you must misuse the concept 'consciousness.' (Or be a solipsist.)
"But as yet you cannot simply look at an image of the brain and say what kind of thought process is being used."
Since consciousness isn't physical this will be a long time in coming. Of course if it were, you would be able to simply measure the interactions of the property or particle of consciousness, but such a thing requires strong emergence to be not magic.

"What this suggests is that our brains constantly monitor our internal and external environment such that when the input becomes important enough, the subconscious decides to engage the conscious and we become aware of what is there. This is certainly what neurobiologist Michael Shadlen from the University of Washington in Seattle believes. "We suspect that the normal unconscious brain monitors the environment for cues that prompt it to decide whether to awaken and engage... The decision to engage at all is, in effect, an unconscious decision to be conscious.""

So if consciousness is illusory or doesn't do anything, why on earth would it spend all this time and effort deciding to be conscious or not?

Note that this parallels the situation with free will. At some point it would have to have been determined that the next event was subject to choice. (Again, I think both concepts are invalid.)

"Dayan says that our behaviour is often driven by more than one of the four controllers - the various types of explicit and implicit thought process may be actively integrated, and this is especially true when we are learning something new where the balance between ignorance and experience changes. Importantly, the subconscious isn't the dumb cousin of the conscious, but rather a cousin with different skills."

This integration is why it's called 'subconscious,' not simply 'unconscious.' The one shades into the other.

"Dijksterhuis is convinced that subconscious thought processes are superior in many situations - including most social interactions - because they allow us to integrate complex information in a more holistic way than can be managed by rational thought processes."

As I recently detailed in the piece on emotional logic, this is what I have found as well.

"Studies on rats and monkeys indicate that they too consign skills to subconscious control once they become expert. "Still, we may have a greater capacity for this," says Dayan, "since we have the huge advantage of being able to use language to boost our goal-directed control and so provide a much richer substrate for acquiring habitual skills.""

Dayan is not a philosopher, obviously. Like the dilettante historian previously, Dayan has no better idea than you do if or how humans are better at...consigning skills to the subconscious? It isn't even clear what, exactly, we're supposed to have a better capacity for.

Note again the contradiction with the earlier idea that consciousness is in some way unique to humans, seeing that apparently rats have a subconscious to which to consign skills. (This of course falls under the other-minds problem. Do rats have a subconscious, or just something analogous?)


Anyway, new article, same stupidity.
"It can store information for more than a century if you live that long"
Not likely. Recall that every time you remember a memory, it's opened for editing, in a sense creating it anew. A future version of the hard drive may be able to hold a magnetic imprint readably for a century, in isolation. The brain doesn't need to; it can remember your remembering, rather than the original occurrence itself.
"INTELLIGENCE is a slippery concept to define,"
Intelligence is the ability to adapt without evolution. For a biological entity, this means you can change with your environment without having to go through a genetic reshuffling. For a machine, it means being able to change with your environment without human intervention; it does not have to be rebuilt or reprogrammed.

This definition rules out machines that simply discriminate between various objects, such as 'smart' bombs, because they do not remain smart in novel situations; they are simply a complicated linkage of dumb components. Similarly with robotic arms: even if they can physically make other products than the one they are currently making, they need to be reprogrammed to take advantage of that capability. However, by this definition, we have already achieved artificial intelligence with other machines.
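A minimal sketch of the definition in Python. Everything here (the drifting environment, both controllers, the update rule) is my own hypothetical illustration, not anything from the article:

```python
# Toy contrast between a 'dumb linkage' and an adaptive machine, per the
# definition above. All names and numbers here are hypothetical.

def environment(t):
    """A drifting optimum the machine must track; the world changes over time."""
    return 10.0 + 0.05 * t

class FixedController:
    """A 'smart bomb': tuned once at build time, then frozen."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def act(self):
        return self.setpoint

class AdaptiveController:
    """Adapts without reprogramming: nudges its estimate toward feedback."""
    def __init__(self, setpoint, rate=0.2):
        self.setpoint = setpoint
        self.rate = rate

    def act(self):
        return self.setpoint

    def learn(self, observed):
        self.setpoint += self.rate * (observed - self.setpoint)

fixed = FixedController(10.0)
adaptive = AdaptiveController(10.0)
for t in range(200):
    adaptive.learn(environment(t))  # feedback from the world, no human in the loop

print(f"fixed error:    {abs(fixed.act() - environment(200)):.2f}")
print(f"adaptive error: {abs(adaptive.act() - environment(200)):.2f}")
# The fixed controller drifts arbitrarily far from the changing world; the
# adaptive one tracks it. By the definition above, only the latter is intelligent.
```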
"so not surprisingly it has been tricky to pin it down in the brain."
...

"It is sometimes possible to train working memory with practice, and doing so may benefit IQ, especially fluid intelligence - the ability to solve new problems. However, this may just be a short cut to better IQ test scores rather than an indication of brain structures that confer intelligence."

Impossible. It's just as dangerous as saying that some branch of math is useless. There is, somewhere, a situation that mimics the conditions of this working-memory test, and if you practise you will deal with it better. Of course whether the training is more expensive than the reward cannot be known in advance. This is what price signals are for.
"Some can remember entire books and some can rattle off a piano concerto after a single hearing. Yet others can draw perfect circles. What leads to such islands of intelligence?"
They also analyze Einstein's brain, and make some statistical errors combined with a focus on the 'more.' They should also look at how his brain might be 'less' - as in less inhibited, like these savants.

First, the statistical errors. Einstein's brain is 15% more round than the average. But of course this is meaningless; perhaps brains of ~100 IQ normally vary by 15%. Or, perhaps not. All of the examples are like this; Einstein's brain is more integrated or more tightly packed, but aside from the merging of two particular folds, none of this is meaningful without statistical perspective. But Einstein might also have been 'less.' He was less verbally skilled, anecdotally, which may point to some mild savantism.

The mystery of savantism is not mysterious at all, especially considering you can temporarily activate savantism in normal subjects with transcranial magnetic fields. (This is actually in the article as well, immediately below.) The subconscious brain is chock-full of powerful mathematical and statistical modules, not the least of which is the fact that stereoscopic vision requires trigonometric calculations. (Also, dogs can do differential calculus to find the shortest routes over combinations of land and water; see the sketch below.) If some wires get crossed and it starts passing this detailed information to the consciousness, you get savantism. This also explains why usually it isn't passed; savants are, aside from their abilities, impaired versus their pre-injury state. Similarly, the woman with perfect episodic memory is more tormented than served by her gift.
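The dog result is a standard minimization, and it's worth seeing how little machinery it takes. A sketch in my own notation: the ball lands z metres down the beach and w metres out in the water; the dog runs at speed r and swims at speed s, with s < r, entering the water y metres before the point on the shore opposite the ball.

```latex
% Total travel time: run along the beach, then swim the hypotenuse.
T(y) \;=\; \frac{z - y}{r} \;+\; \frac{\sqrt{y^2 + w^2}}{s}
% Minimize by setting the derivative to zero:
\frac{dT}{dy} \;=\; -\frac{1}{r} \;+\; \frac{y}{s\sqrt{y^2 + w^2}} \;=\; 0
\quad\Longrightarrow\quad
y \;=\; \frac{s\,w}{\sqrt{r^2 - s^2}}
% Note the optimal entry point is independent of z. Measured dogs reportedly
% land near this optimum, without any conscious trigonometry.
```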

Perhaps if civilization persists, in suitable millennia we will find mutants that can turn their savantism on and off at will, so as to quickly calculate some dates before returning to a more generally functional state.
"A good memory requires effort and attention not special grey matter."
They are wrong. Certainly it helps, but the situation is the same as in savantism. If you cross the right wires you can get extraordinary abilities, and most people don't have them because they come at an extraordinary cost.

I've had a ridiculous memory all my life. In certain fields I do not have to take notes, and actually, study and repetition degrade my performance, because I get bored and resentful. Unless someone taught me the proper efforts to make and attention to pay before I was old enough to remember them doing it, I have special grey matter.

Now there is a bit of technique to it, but I do it all instinctually as far as I can tell. That is, I can with conscious effort stop myself, but otherwise I just naturally memorize things swiftly, easily, and accurately, while simultaneously filtering out everything that's irrelevant.

The only problem is that this system considers all social information like birthdays and names to be irrelevant details...

Wednesday, December 17, 2008

"Arbitrary" Is Arbitrary; Conscious Math Is Arbitrary

The definition of arbitrary is very screwed up.

Background;

Consider half of a sliced bagel. Also consider a bread crust of the same bread, of roughly the same thickness. I assume the purpose of either is to be consumed for enjoyment and nourishment; bagels and bread should have nutrition and a pleasant flavour. (And their, say, aerodynamics are irrelevant for all intents and purposes.)

There is no real difference between the two; there is no purpose-significant action which is served better by one or the other. They have the same nutrients because they're the same bread, they can both hold spreads at roughly the same ratio, and they can both be properly toasted in a toaster with a 'bagel' setting, which only toasts one side. No matter what you intend for that chunk of dough, your purpose will be served. The only possible difference is if you just have a raw preference for having a hole in your bread, or simply prefer 'b' noises.

(While Anglophone society considers these subjective purposes to be non-practical, the union of subjective and objective makes clear that a subjective, non-rational preference is an intent or purpose. Nevertheless, it is true that this is the only purpose against which the bagel and bread crust differ.)

In sum, the distinction between a bread crust and a bagel is an arbitrary distinction. There is no functional difference marked by this distinction. No matter how you define the difference, it makes no difference; the properties of the bread crust and the bagel remain identical. (If we were to try to systematize the difference, to strictly define it, we would find we needed a different name for every chunk of dough ever, because none of them are exactly the same shape, and even still this is an 'irrelevant' and 'irrational' difference, with regard to the purposes of food, assuming we drop blatantly repellent forms like swastika-bread.)

And now I can break the concept for you. Consider instead half a positive parabola. You can, if you want, define an upper portion where the slope is very large and a lower portion where the slope is smaller. Again, however, the choice is arbitrary; as you slide the definition up and down, there is no quality that suddenly changes for it to catch onto. However, this kind of arbitrary is completely the opposite of the previous kind of arbitrary. Instead of the difference making no difference, every tiny change makes a difference. Every time you move the difference between steep and gentle, the functions of both change.

In fact, I can say that the first example is arbitrary because it makes no difference where you put the distinction, and the second is arbitrary because it's never arbitrary where you put the distinction. It makes me giggle so I'm typing it again; it's arbitrary because it's never arbitrary.
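To put the second kind in symbols (my notation, a sketch):

```latex
% The half-parabola y = x^2, x >= 0, has slope
y'(x) = 2x
% 'Steep' relative to an arbitrary cutoff c > 0:
\text{steep}_c \;=\; \{\, x \ge 0 : 2x > c \,\} \;=\; \left(\tfrac{c}{2},\,\infty\right)
% The slope varies continuously; there is no jump for the definition to catch
% onto, so no cutoff c is privileged. And yet
c_1 \neq c_2 \;\Longrightarrow\; \text{steep}_{c_1} \neq \text{steep}_{c_2}
% Every tiny move of the cutoff changes the partition: the second kind of
% 'arbitrary.'
```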


Consciousness vs. Math
And thus part of my problem trying to explain why, if one equation is conscious, every equation is conscious. Yes, it makes a difference where you put it; but it is never arbitrary in the first sense, which means it is arbitrary in the second sense. No matter where on the parabola you define 'steep,' to properly define it, every part of the parabola is steep, to a degree, even at the base, where that degree is zero at one and only one point.

So I guess if one equation is conscious, the situation is even worse than 'every equation is conscious.' Rather, almost every equation is conscious, except a few which completely change depending on which equation you define as conscious first. Even if you found something in nature described by a zero-consciousness equation, it would be more like sleeping than dead, and you'd be able to awaken it with a single poke. It would be very unstable. On top of all this, consciousness would still be either acausal or an epiphenomenon. It would have to either change the behavior of the equations that were highly conscious, that is, away from their purely mathematical behavior, or it would be unnecessary for calculating dynamics.

This is because adding consciousness in as a fifth force in nature forms an infinite regression. The existing equations of motion would all be assigned a consciousness number, which would determine their interactions with the consciousness field. But this field would also be described by an equation, which would have such a number, but the interaction of the field with itself would change the equation, changing the number. Which would cause an interaction of the field with itself, changing the number. And so on. Attempting to make consciousness mathematical either annihilates consciousness into epiphenomena, or leads to the equation tying itself into knots until it disappears.
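Schematically (this recurrence is entirely my own construction, with F standing in for whatever the hypothetical self-interaction does to the number):

```latex
% Assign the consciousness field's own equation a consciousness number n.
% Its self-interaction revises the equation, hence the number:
n_{k+1} = F(n_k)
% A well-defined assignment needs a fixed point n^* = F(n^*). If the
% self-interaction genuinely changes the equation, F(n) \neq n for every n
% (e.g. F(n) = n + \epsilon, \epsilon \neq 0), then no fixed point exists and
n_0,\; n_1,\; n_2,\; \ldots
% never terminates: the infinite regression.
```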

A further problem is where to divvy up nature into equations, so that we can assign them numbers.

So my inner critic is bugging me, but it has a good point. I need to make this explicit. Let us assume that we have a fully mathematical formula for consciousness. First, determinism is strictly true. The equation cannot deviate from itself; your 'choices' in each moment are a function of the state the moment before. Choices do not truly exist, just a sensation we erroneously call 'choice.'

Second, 'happy' = '3.' Perhaps '90=theory of relativity.' (Of course single numbers would be much more fundamental units of thought, but this isn't material to my argument.)

Okay. Why? Why is 'happy' 3 and not 4? Or 70? These things are first, arbitrary in the first sense, and second, you do not need these labels to fully describe the physics.

This is important, so let's do a second example. What about operators? You have the Schrodinger wave function of consciousness, and you use the 'happiness' operator on it to find the degree of happiness. This number then feeds into interactions with other parts of consciousness. But, again, this label is arbitrary. It may empirically be that 'happiness' corresponds to this operator, but there is no reason to use this label. We don't need to know what sensation it corresponds to; we just need the operator and the mathematics to describe its interactions. Consciousness becomes merely epiphenomenon.

Consider the opposite situation, the momentum operator, p = (ħ/i)(d/dx). We also don't need to know that it is momentum for it to work, but this is fine because it is a number that only references other numbers. Its essence of momentum-ness isn't critical to its definition, the way the essence of happiness is critical to its definition. The definition is, relatively, completely backwards; momentum is defined as the thing (ħ/i)(d/dx) picks out, while what picks out happiness would have to be empirically tested. If we've made a mistake with (ħ/i)(d/dx) and find that what it picks out doesn't have the properties of momentum we want, we don't need to change it at all; we just pick a new name, use it for exactly the same things as we did before, and find the operator that we do want to call momentum. If we find that the happiness operator doesn't pick out the properties of happiness, we immediately realize all our data indicate that it does and we're just screwed. And, as expected from epiphenomenal consciousness, it doesn't matter; oh, this happiness indicator that corresponds to reported happiness isn't happiness, but all our predictions come out true anyway. This property truly is a frivolous extra property.
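For contrast, a standard textbook check (not from the article) of how little work the momentum label does:

```latex
% The momentum operator acting on a plane wave \psi_k(x) = e^{ikx}:
\hat{p}\,\psi_k \;=\; \frac{\hbar}{i}\frac{d}{dx}\,e^{ikx}
            \;=\; \frac{\hbar}{i}\,(ik)\,e^{ikx}
            \;=\; \hbar k\,\psi_k
% The eigenvalue \hbar k is fixed entirely by the mathematics; the label
% 'momentum' adds nothing the equations need. A 'happiness' eigenvalue could
% not work this way, because its label is the whole point.
```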

Consciousness cannot be mathematical. Therefore, it cannot be physical.

Tuesday, December 16, 2008

Notes From My Daily Reading

Reading Dalrymple again.

So.

Are the proponents and providers of vaccines responsible for the adverse side-effects? This is the exact case that an examination of my anti-Christ(special) is supposed to make clear. We have, for the sake of the preservation of the lives of many, damage to the lives of few, and a different few. It's the hard choice; how do you feel when the child you just administered a 'life-saving' rubella vaccine to goes into convulsions and dies?* How much worse is it when you realize that child may never have gotten sick with rubella at all?

*(I have no idea how long they actually take to die.)

I think vaccines are really amazing. They're low-tech, extremely effective, and such side-effects would be vanishingly rare except that death doesn't exactly vanish for quite some time.

So it's rather important; are they responsible? Obviously, even if they are, we cannot let them be sued for murder or even malpractice.

Reading New Scientist.

"It's giving neuroscientists something of a headache. Most of what we know about the brain comes from studies of male animals and male human volunteers. If even a small proportion of what has been inferred from these studies does not apply to females, it means a huge body of research has been built on shaky foundations."

No. In addition, no! We had to do that research anyway. Now we've just realized that it's half the research, not the whole.

Also, this sits poorly with the idea that men and women's mental aptitudes are identical or close enough. There are going to be some very uncomfortable scientists in the near future.

"Give a man a sheet of paper printed with tangled streets and he can tell you where you need to go. But don't be afraid to ask a woman for directions. Chances are she'll get you there, too, but using a different technique. Drawing on her hippocampus, she'll offer you physical cues like the bakery, the post office and the Chinese restaurant."

One of those techniques is better than the other. But you can count on scientists and journalists swearing up and down that they're the same.


Journalists suck.
"To do that, he would have to venture into the nascent field of social neuroscience, a discipline that has been described as the next big thing by the founder of cognitive neuroscience, Mike Gazzaniga at the University of California, Santa Barbara."
We are completely bombarded with predictions by all these dilettante historians. Apparently, putting 39 almost meaningless letters after Gazzaniga's name turns this groundless prediction into something other than a waste of time. Are you going to UofC soon? Going to track him down? No? Then you're like nearly everyone who will read this article, and all you need to know is that this guy has no more idea of future trends than you do.

And I like New Scientist.


Scientists suck.

I've already covered how journalists suck, so I won't go over how the title is retardedly sensationalist.

"Haynes also raises the prospect of "neural marketing", where advertisers might one day be able to read the thoughts of passers by and use the results to target adverts. "This [new research] specifically doesn't lead to this - but the whole spirit in which this is done is in line with brain reading and the applications that come with that," he says."

No. No. Noooooooooooooooooooooooo. No.

To see why, re-read this bit.

"Kamitani starts by getting someone to look at a selection of images made up of black and white squares on a 10 by 10 square grid, while having their brain scanned. Software then finds patterns in brain activity that correspond to certain pixels being blacked out. It uses this to record a signature pattern of brain activity for each pixel."

Add this;
"Subjects were shown 400 random 10 x 10 pixel black-and-white images for a period of 12 seconds each."
So, to get shitty pictures of 'neuron' in black and white, it took 4800 seconds of individual training, or 80 minutes each. (The images were shown one letter at a time, if you didn't notice.) To get high-resolution images of abstract colours would require the individual to sit in a machine and train the advertiser's computer for thousands of hours.
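A back-of-envelope check in Python. The 400-image and 12-second figures are from the article; the linear scaling with pixel count is my own, rather charitable, assumption:

```python
# Back-of-envelope on the training time. The 400-images and 12-seconds figures
# are from the article; the linear-scaling assumption is mine, and charitable.
images, seconds_per_image = 400, 12
training_s = images * seconds_per_image       # 4800 s, i.e. 80 minutes
pixels_demo = 10 * 10                         # the article's 10x10 B/W grid
pixels_ad = 1000 * 1000 * 3                   # hypothetical hi-res ad, 3 colour channels
scaled_s = training_s * pixels_ad / pixels_demo
print(f"demo: {training_s / 60:.0f} minutes of scanning per subject")
print(f"ad:   {scaled_s / 3600:,.0f} hours ({scaled_s / 86400:,.0f} days) in the machine")
# Roughly 40,000 hours; 'thousands of hours' is, if anything, an understatement.
```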

Even without all this, the technique is fMRI, which requires enormous magnetic fields. Like, don't-bring-your-keys-into-the-room-with-the-machine magnetic fields, let alone your cellphone. You going to set up these fields so that random passers-by are going to walk through them?

Next, look how bad it is. The encoding is all over the board because it's basically ad-hoc. Brains are the opposite of standardized.

Even beyond all this, it isn't actually detecting their thoughts. It's just detecting their ocular impulses. To detect me thinking about the image of the word 'neuron' would be a different beast entirely. If for some reason someone tries to make you sit in a room for a month so they can train their lie detector on you, just think contrary thoughts, and you'll defeat it. To make a lie detector out of this requires you to have a lie detector so you can calibrate it...

Oh God. Haynes is so wrong. It hurts. Make it stop.

How wrong is Haynes? And, by extension, New Scientist?
"But it shouldn't be possible to do this for commercial purposes."
Frankly, fuck you, Haynes. Science is not somehow the opposite of commercial activity. I read elsewhere that someone didn't like a shopping cart icon because it sent the wrong metaphor for 'shopping' for classes, as if these classes were somehow above that. All bullshit. Do the classes cost money? Then they're commercial. Even if they're free they have monetary opportunity costs; every moment you're in a class you're not flipping burgers for minimum wage, and thus sacrificing money. It shows Haynes' incredible lack of thought; there are simply too many ways that science and education are commercial for me to even conveniently list them all, and yet he hasn't stumbled upon even one of them.

What Haynes means is that you shouldn't read someone's mind against their will. Well, duh. You also shouldn't read someone's diary or even talk to them against their will. Immoral act is immoral. And hence, fuck you Haynes. And second, 'it shouldn't be possible.' Oh, you want God to come down and make it impossible? Oh, you mean the State should stop it. That's not impossible. That's just illegal. (And then people get confused when someone realizes progressivism is a religion.)


There's another one from the same set, but since I go into it in depth I'm going to make a new post out of it.

Monday, December 15, 2008

Loop Quantum Gravity and Infinities

First, let me clear something up. My 'proofs' of the No Infinities Principle aren't proofs. They're more like suggestions; the NIP is an empirical principle. We find regularities in nature. In particular, I find that every time a physicist's equation throws up an infinity, it doesn't mean there's a singularity in nature; it just means one of their assumptions, usually a simplifying assumption, was wrong. There is no black hole singularity and there is no Big Bang singularity.

As I said originally, physicists already use this principle, and they're doing so again, they're just not consistent about it. In this piece, Loop Quantum Cosmology replaces the singularity of the Big Bang with a Planck density, exactly as the NIP predicts. This Planck density will also apply to black holes, though since the ball of matter at this density can form an event horizon larger than its radius, we should still expect regular black holes.

(Note that this is also a serious problem for any possible beginning of our universe. It would be behind its own event horizon. This is partly why you can a priori validate inflation; something must be there to stop the Big Bang from immediately forming a universe-sized black hole. No, you don't get out of this by presuming this is what the inside of a black hole looks like; it simply repeats the problem. We would be starting with a point of infinite density inside a point of infinite density, forming a black hole inside a black hole, ad nauseam.

(I think this is also the beginning of my proof that infinite regression is a fallacy. If it isn't a fallacy, you have to be able to resolve infinite series like these.

(Anyway, I suspect I know where the antimatter went. There is a centre of the universe, and though it's still not a preferred reference frame, there's a half-universe-sized antimatter black hole there; the inflation either doesn't work properly on antimatter, or it wasn't strong enough to pull all matter out of the Big Bang's initial configuration, and it just so happened that what we now call antimatter mostly remained behind. [Do note that the article, while it has good data, also has multiple errors in reasoning.])

The problem with LQC is that they don't realize eternal time is physically impossible.

Set t = 0 to be now. As this drifts into the past [tick, t = 1; tick, t = 2], set this moment t = m. But time is relative, so let's make a coordinate transformation.

lim m→−∞ (m). What time is it now, specifically?

This operation is legal. This is what eternal, non-beginning time means. (Non-ending doesn't make any sense; the end is the present moment, though you'll notice the present is moving forward, and will continue to do so infinitely.) It means that time is meaningful no matter how far you go into the past, and since time is relative, you can set your temporal origin to any of those points.

What I immediately find is that for eternal time to be meaningful it cannot be relative. There must be a preferred origin. But we have empirically shown that time cannot be absolute, and we have a contradiction. So I can categorically predict that the 'bounce' will be found to erase all information. Our universe will be indistinguishable from one which began at that moment.

But let me be explicit. Once I've sent the origin infinitely far into the past, there is no transformation that can get it back to the present. lim m→∞ (m) sends it infinitely far into the future, not to the present. We go from, relative to the origin, t = ∞ to t = −∞, and time remains meaningless. (However, with an absolute origin, the present time will never actually be infinite.)
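Put slightly more formally (my notation, a sketch of the same move):

```latex
% Set t = 0 now. Relativity of time makes any shift of origin admissible:
t \;\mapsto\; t' = t - m
% A beginningless past makes every origin m < 0 admissible, so 'now' must
% carry a meaningful label relative to each, including in the limit:
m \to -\infty \quad\Longrightarrow\quad t'_{\text{now}} = -m \to \infty
% An infinite label is not a time, and no admissible shift brings it back to
% a finite 'now': eternal relative time leaves the present undefined.
```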

Hopefully when, as they discuss, they stop using general relativity as the jumping-off point for LQC, they will find something that prevents time from being infinite, because otherwise they're simply replacing one singularity with another of a different kind.

Zagnets and Communication

"I asked them if it mattered to them what kind of computer their software selves would run on. No, they replied, it doesn't matter. All computers are considered to be equivalent by virtue of the Church-Turing Hypothesis. If they and their classmates were implemented on a vacuum tube computer, or on a computer made of mechanically-linked Lego blocks, they would still feel the occasional rush of adrenaline as a desired mate strolled by, and the agony of a parental visit."

"If computers are to definitely exist we should know that we could someday build an instrument to find them. Scientific instruments can lack accuracy, but they must be able to distinguish between phenomena. If there was no conceivable device that could distinguish heat from other phenomena like gravity, for example, heat would not be a useful concept, and science would pursue a parameter that could be measured."

I re-read this, and I thought, "Oh, I get it now! This is much better than the way I said it."

And then I thought, "I had this discussion with Marcus! I'm going to go use this on him."

For some moments, I was content. Then I realized what I'd thought - I agreed, and I still didn't get it, yet I was going to use it on people who strongly disagree! Then I started laughing helplessly. In fact, I just laughed again. Oh my, that's a fine joke!

Once you realize what language is, exactly - communication by a system of shared symbols - it shows up how difficult communication really is. Especially nowadays, when, as Orwell predicted, language is used in interchangeable blocks of phrases instead of individual words. Communicating new ideas to someone who has only used these set phrases is basically impossible. (Another reason to declare 'public education' a literal atrocity, as it schools the opposite of good thinking; though you should always expect as much perversity from tax-funded institutions.)

So, when a Ph.D. thinks my writing is rambling (again)? They just don't get it. Most likely, they haven't overcome the liability that is a tax-funded Ph.D.

Notes:

Information Integration Theory neatly answers many of the questions Lanier raises, aside from this one; "Hypothesizing an infinite cloud of slightly different consciousnesses floating around each person seems like an ultimately severe violation of Occam's razor."


There are an awful lot of arguments that parents are forgivable, that they 'did the best they could' etcetera, such as on Liberating Minds. All of this is belied in one phrase;
"the agony of a parental visit"
This is just accepted; yes, of course parental visits are agonizing. As in, it's common and understandable that your parents willingly, knowingly, and avoidably cause you agony for no purpose but their whim. Their 'best?' Misses the point. The point is that if your parents are causing you agony, make them stop, no matter what means you have to use. They're supposed to love you; they should support your goal of lessening your pain. Supposedly.


"Even if it interprets the meteor shower as having the functionality of a brain, that could only be true for a limited period of time. Certainly after a very short while Newton and Einstein would take over again and the brain would dissipate."
I am amused at how we speak as if Newton and Einstein ascended into godhood upon their deaths. (And then we wonder why Christians who read the Bible don't like science.) This image of Newton's spirit forcing particles to obey F=ma by sheer force of will just gets me. (And then people are baffled when I declare that everyone is dualist, even if they have concluded otherwise.)


"Zombies probably think that I am a mystical dualist of some stripe. I can accept that, but I don't act like a mystical dualist. I am enthused by progress in neuroscience. [...] In fact, I'm thrilled to think about brains. I must appear to be a monstrous anti-zombie to the zombies; someone who claims to have ineffable subjective experience and yet acts just like them."
This mildly edited quote describes me well, too.

"Let's imagine a society in the future in which neuroscience has gotten as good as, say, quantum electrodynamics is today, that is to say essentially complete within its framework. Would every educated person be a zombie? Would the consciousness debate still exist? Would it have any practical consequences?

This is an entertaining future to imagine. [... ] And of course that means that inside every zagnet's brain would be seen some little gizmo comprising the thoughts of self-experience."
Yes, I imagine so.

"So, if the consciousness problem has little consequence and will not yield to further physical study, why do zagnets like me care about it? I might ask the same question of some of the zombies."
And here we see the essence of the things I've edited out. Lanier is so very close to right, but not quite. His qualia dial section may be worth reading, but it makes the exact mistake he accuses others of making, and ends up being incoherent. The basic problem is the same as in any physical system; of the many possibilities, how does it decide on this one? Lanier's dial requires a second layer to do this; he is dualist, even if he thinks he isn't.


Dennett's Consciousness Explained simply misuses the word 'consciousness' to describe something that isn't the concept 'consciousness,' and thus Dennett fools himself into thinking he's solved the problem. I think it's the Ph.D. again.

Thursday, December 4, 2008

My Bias Versus A Good Idea; Censorship by Glut

This is enormously good for my ego.

Which sadly means I have to be extremely skeptical of it.

It would be nice to hear any comments you might have on the relationship between this idea and this blog, though.

I have a couple comments that I hope are orthogonal to the issue.
"The authors summed it up: "In general, the 'best' songs never do very badly, and the 'worst' songs never do extremely well, but almost any other result is possible.""
Multiple possible effects here. One is that there is a principled threshold below which certain people will never download a song, no matter how popular. A second is that it's all probabilistic, where the goodness of a song is simply one factor among many.
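A toy sketch of that second reading, in Python. The blending rule and all the numbers are my assumptions, loosely in the spirit of the download experiment described:

```python
import random

# Toy cumulative-advantage market. Each listener picks a song by a blend of
# its intrinsic quality and its popularity so far; all parameters are made up.

def simulate_world(qualities, n_downloads=5000, social_weight=0.5, seed=None):
    """One 'world': download counts under quality-plus-popularity choice."""
    rng = random.Random(seed)
    q_total = sum(qualities)
    counts = [0] * len(qualities)
    for _ in range(n_downloads):
        total = sum(counts) + len(counts)  # plus-one smoothing
        weights = [(1 - social_weight) * q / q_total
                   + social_weight * (c + 1) / total
                   for q, c in zip(qualities, counts)]
        counts[rng.choices(range(len(qualities)), weights=weights)[0]] += 1
    return counts

qualities = [i + 1 for i in range(20)]  # song 19 is 'best', song 0 'worst'
for s in range(8):
    counts = simulate_world(qualities, seed=s)
    ranking = sorted(range(20), key=lambda i: -counts[i])
    print(f"world {s}: top three {ranking[:3]}, bottom three {ranking[-3:]}")
# Typically the top comes from the high-quality songs and the bottom from the
# low-quality ones, but the middle of the ranking varies wildly from world to
# world: 'best never do very badly, worst never do extremely well,' with
# goodness as one probabilistic factor among many.
```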

"Since the definition is circular, the premise could never be disproved by any amount of counter-evidence -- even if an act that used to be popular, suddenly falls under the radar, that could be seen as "proof" that they lost whatever magic touch they used to have, not as evidence of the arbitrariness of the market!"

This stinks of anti-market bias. He clearly doesn't realize that any mechanism to improve the situation would simply become part of the market, like the stock market. And I agree; such a service would greatly improve efficiency.

There is one counterfactual, and that is using pure violence; penalties for not sitting on some independent media-judging panel, for example. This would not be part of the market, but simply a distortion.

Personally, for the purpose of essay-like information, I still want Uberfact, but it doesn't look like people actually have the motivation for it. Perhaps I should write out a full spec, just to give it the best chance possible...

Tuesday, December 2, 2008

The Anti-Christ(special) versus Responsibility. Are You?

I think the regular idea of the anti-Christ is pretty boring. I have a special version.

Also includes a short discussion about Anglophone anti-crime measures, including a second question, and references a study on framing.

My anti-Christ has exactly the same message as the original version; don't kill, don't steal, be nice, and so on. However, unlike the original, he is not somewhat convincing, or even very convincing; he is absolutely 100% convincing. It is physically impossible to argue with him and not be convinced to be 100% moral for the rest of your life, in such a way that you absolutely never give in to temptation.

He uses these magical persuasive skills to convince people to spread his message and let him convince everyone on Earth, dropping crime rates in every metropolis and every god-forsaken hole to zero. All wars cease. Trade flourishes, and everyone is just generally super-nice to everybody else.

The first downside is that he IS the anti-Christ. He tortures, he kills, he steals. In fact, sometimes in the middle of (successfully) convincing a crowd not to kill, he whips out an Uzi and starts gunning people down. "Furthermore, killing hurts." B-B-B-B-B-B-B-B-BANG BANG BANG "See? It's not just terrible in the ways I've said before. Also..." He provides for himself entirely through theft. If he has to buy something, it's always with stolen money. If you offer him a donation he'll rip your arm off, beat you to death with it, then steal the money from your children. He is not just evil, he is infinitely hypocritical, and therefore infinitely evil. No matter how repugnant an act you imagine, he will one-down you and do something worse.

The second downside is that as soon as he dies, nobody will be 100% successful at convincing everyone to be moral. A new crop of brutal barbarians and criminals will grow up and go back at it as usual.

At first I used this to prove that hypocrisy cannot be evil per se. Evil, in my view, was evil, regardless of what you told others about it, and if you advocated not-evil, then that was inherently a good thing to do. Now that I have proved the opposite, it remains a proof that ad hominem is a fallacy.

Now I have a different question. A question of responsibility, of choosing who lives and who dies.

Assume you have the opportunity to kill the anti-Christ. He has injured himself, and as a surgeon, you could refuse to operate and he would simply bleed out and die.

If you don't, are you responsible for the death of the people he murders? If you do, are you responsible for the deaths that would not have otherwise occurred?

It sucks to be faced with that choice, even on a small scale. If you can kill (or, I suppose, let die) one fairly healthy but socially disrespected person to save Stephen Hawking, the littlest cancer patient, and Mother Theresa* with their organs, are you responsible?

*(Her perception, not her reality, which was grotesque.)

Certainly, in all cases, you're responsible in the sense that you are part of the causal chain of those deaths. Absent a surgeon, his murders cannot take place. Absent a surgeon who refuses to operate, all those war-deaths cannot occur.

Now, in the real world the answer is a lot easier, because you cannot fully predict the actions of human beings. For instance, if you leave your bike unlocked somewhere, absent a thief it cannot be stolen. This is the proof that rape victims cannot have been 'asking for it' because absent a rapist they could not have been raped. Leaving a bike, or even a million bucks, on a pedestal on a busy road, no matter how little security you put on it, doesn't make it not theft to take it. You did not agree to have it taken.* It is impossible to give license for evil. If you as surgeon operate on a real world criminal, it doesn't make you responsible for his future crimes, even though they would have been impossible without you. He could have repented, but didn't, a fact which is not your fault.

*(On the other hand, there are non-moral considerations. If you do this, yes, it's theft to have it stolen, but expecting to keep it is just dumb. The cops should spend no effort tracking it down. I am uneasy about what this might mean regarding rape, but the upshot is that you should always provide yourself with security, to whatever degree is necessary to reasonably ensure security. Going to the cops after a rape is much, much worse than being able to stop it yourself. Which I guess means cops are a very last resort, as security goes, and should never be relied upon.)

This option is not available for the anti-Christ. He is by definition infinitely evil, and will never repent. He is more like a natural disaster than a person. But the question still applies, and without all these real-world distractions, I can actually analyze moral theory to see if it's consistent.


From various places, most recently this;

"Framing effects were first explored by Tversky and Kahneman (1981). In a famous experiment, they asked some subjects this question:

Imagine that the U.S. is preparing for an outbreak of an unusual Asian disease which is expected to kill 600 people. Two alternative programs to fight the disease, A and B, have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: If program A is adopted, 200 people will be saved. If program B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved. Which program would you choose?

The same story was told to a second group of subjects, but these subjects had to choose between these programs:

If program C is adopted, 400 people will die. If program D is adopted, there is a 1/3 probability that nobody will die and a 2/3 probability that 600 will die.

It should be obvious that programs A and C are equivalent, as are programs B and D. However, 72% of the subjects who chose between A and B favored A, but only 22% of the subjects who chose between C and D favored C. More generally, subjects were risk-averse when results were described in positive terms (such as “lives saved”) but risk-seeking when results were described in negative terms (such as “lives lost” or “deaths”)."
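For reference, the equivalence is one of expected value (standard arithmetic, not in the quote):

```latex
% Expected survivors out of 600 under each program:
E[A] = 200
E[B] = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200
E[C] = 600 - 400 = 200
E[D] = \tfrac{1}{3}(600) + \tfrac{2}{3}(0) = 200
% All four save 200 people in expectation; only the framing differs.
```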
They have completely neglected this idea. In the first, option B (possibly)* makes you responsible for 200 deaths, where option A has no possibility of this. In the second, C makes you (possibly) responsible for 400 deaths, whereas D isn't really clear.

*(Depends directly on the answer to this question.)

To see this, take no option: all 600 people die. Option A definitely saves 200, whereas B possibly doesn't. Taking no option in the second scenario is not possible; the framing is such that you might save everyone, but option C makes you directly responsible (possibly) for 400 deaths. It suggests that your treatment is somehow killing people, and that others are just fortunately immune, though if you think about it this can't be the case.

Shockingly, people answer these questions differently! While there is other research to back up the idea that people are risk-averse, including my own personal experience, this particular experiment is at best neutral on the question.

(Incidentally, this illustrates why philosophy is so important. In real life, responsibility is far from clear in many similar situations. Ambiguity begets uncertainty which allows the unscrupulous to steer the application of responsibility to their own ends. Any examples you can think of would be welcome.)

But originally I brought this up to illustrate the hideous choice. I'm going to slightly modify it; in no scenario do you save everyone. Your choice is definitely saving 200, or else killing everyone or saving 500. (To make it perfect, I would also have to modify the probabilities.) You know in advance which 200 will be saved, and it's different from the 500 who will be saved; half of the 200 will die. It is not statistical, as is the assumption in epidemiology; you know for certain. Along with this you can find out any other information you want about the 600 people in question.
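For the record, the matching probability is easy to pin down (my arithmetic; one way to 'make it perfect'):

```latex
% Certain option: 200 saved. Risky option: 500 saved with probability p,
% everyone dies with probability 1 - p. Parity of expected values requires
500\,p \;=\; 200 \quad\Longrightarrow\quad p = \tfrac{2}{5}
```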

So, do you risk the lives of those 200 people for a chance at saving half of them plus another 400 people? Or do you sacrifice the possible lives of 400 people for the sake of definite lives of 200?

How do you decide who lives, and who dies?


Hopefully your answer also solves this apparent inconsistency; my intuition is that you're still not responsible for the anti-Christ, either way. (Of course you should let him live, regardless.)

However, in all situations where it's not who lives versus who dies, you are responsible. Going back to my comment about how cops should be last-line defence against crime, nobody in our society is doing the medical analogue of first-line defence: preventing criminals from forming in the first place, analogous to not eating carcinogens or avoiding mosquitoes in malaria areas. (The second line being vaccines, which prevent disease even after exposure; the third being early-stage treatment, like chemo on a tumour; and the fourth and final, analogous to cops, full-blown surgery.)

I think that we are responsible for the crimes that are committed because nobody is doing first-line defence. I think that makes us, as Anglophone society, evil, and the longer we don't, the more crimes we are responsible for. It's difficult to pin responsibility down to exact individuals, because life is messy, but the fact remains that somebody is responsible. I think that we are responsible for crimes that occur because very few people are doing second-line defence, making crime difficult to pull off at all. And finally, we are responsible for the crimes that occur because our third-line defences tend to either empower or victimize our young criminals, rather than doing much of anything that might result in them being less criminal.

But actually, the reason I bring this up is that despite the lack of these defences, I have never personally witnessed a crime, nor has anyone I know personally been a victim. (Wait! That's not strictly true. My bike's back wheel was stolen once. This was, however, as with the bikes above, mostly my fault. It's a bike and I totally don't think it counts.) I have heard of a couple of burglaries third-hand, but those were parts of a series of burglaries, which just increases the likelihood of being caught.

Something is doing first-line defence, but it's not being done intentionally by anybody; the expertise doesn't exist, so it's impossible. I want to know what it is, because it's like we already had such an anti-Christ, who did convince lots of people not to commit crimes. If so, then it would be highly advantageous to expand this first-line defence. With just this, it may even be possible to entirely replace the State.

Monday, December 1, 2008

Deception, a Few Notes

As I just mentioned on Enigman's blog, it's trivially easy to deceive without lying, especially if you really commit to not lying. (Except, as per usual, in self-defence.)

For personal growth; if you really insist on not lying, your brain eventually learns and adapts. The urge to lie disappears, but is replaced by self-deception; it activates your hypocrisy circuits and simply prevents you from thinking what you actually believe. Hence, I sometimes say some very stupid but very self-serving things.

Deception is wrong because it's hypocritical. Unless you want to be deceived, you cannot morally deceive others. Moreover, since you want your values respected, if they value not being deceived, you cannot deceive them, even if you personally do value being deceived.