
Wednesday, 8 October 2014

Holy Crap... I'm a Dualist!

One of the benefits of being in a program like Arizona's is the variety and quality of discussions that occur outside the classroom. Take, for instance, a recent lunchtime discussion among myself, Benji Kozuch, and David Glick, an addendum to a previous discussion about the problem of qualia for AI-CogSci. Earlier, I had argued pace Benji that artificial intelligence would never be able to duplicate in machines the subjective, personal experience (in the fullest phenomenal sense of the word) that I experience in my waking life.

Benji, appealing to my physicalist intuitions, argued that I have no reason for thinking that, in principle, a cleverly embedded super-microchip could not produce the same sort of conscious, phenomenal character as I presently experience. If, he says, we could develop a microchip capable of duplicating every physical function of a neural circuit (insert whatever physiological component you see as necessary or sufficient for consciousness), then, given the physicalist doctrine, we should expect said microchip to reproduce phenomenal consciousness.

My response has since earned me the label of "biological chauvinist." See, my (apparently prejudicial) intuition is that the matter of a mind is just as important as its architecture or logical structure or what-have-you. Though it's a crude analogy, grey matter seems to me to be essential to a mind in the same way that steel is essential to a katana. We can create something like a samurai sword out of aluminium, but aluminium and steel are just too different (in tensile strength and elasticity; in fineness of edge and ability to retain that edge; etc.) to classify them together. I believe it's fairly well accepted amongst sword enthusiasts that no other metal can perform the same functions as steel. It seems reasonable to me that the same can be said of minds: beyond the structure and logical relations, the material composition is essential to mental functioning; anything else just can't have the same function.

But Benji was not convinced (I should point out that I did not present him with the sword analogy, merely my intuition that the material is somehow essential and that altering the material would be enough to change the function of the neuron/"chip"). In principle, he believes, the functions of neurons can be duplicated down to the finest detail, just maybe not in this lifetime. He certainly seems amenable to the possibility of creating computer models (digital "brains") which are theoretically capable of recreating perfectly functionally-isomorphic virtual neurons. That is, the digital information will map perfectly--down to the finest grain--onto the neural information. And if this is true, and if physicalism is true, then we should expect computer-modelled brains to experience qualia. We have no reason, besides some sort of biological chauvinism, to deny it.

I think I have a couple of reasons why digital brain scenarios are not satisfactory, though one of them is fairly weak as objections go. First, and weakly, I see no reason why digital models should actualize any of the characteristics of the real-world entities they are designed to model. Returning to the sword analogy, a perfectly modeled virtual katana cannot cut paper or tatami mats, though it could certainly predict the pattern a real sword might produce while slicing through a tatami mat. Models are useful for prediction and elucidation, but they do not actualize the properties of that after which they are modeled.

Secondly, I see no way of determining whether or not a virtual brain would have a conscious experience with phenomenal quality. We could present it with stimuli and judge whether its responses are convincing enough to suggest both intelligence (a la Turing's test) and qualitative experience (is there even a test for such a thing?). The problems with this are numerous and obvious, and I see no way around them. Call it the problem of other virtua-minds. How can we be certain that our virtual mind-brain hasn't simply modeled the brain of one of Chalmers's zombies? Certainly, we could tap into the optical/sonic/tactile feeds of the program, but would that give us the "what it's like" for the program (if such a thing could be said to exist)? I doubt it, but even if it did, I doubt that it would resemble anything like the phenomenal character of human experience.

All in all, I'm perfectly willing to be labelled a biological chauvinist. I think it's a reasonable position to assume until computer science can convince us that it's flawed, even if it does make me a property dualist.

Tuesday, 13 August 2013

Patterned Norway Lemming


The boldly patterned Norway Lemming is active both day and night, alternating periods of activity with short spells of rest. Mosses, along with grasses and shrubs, make up the bulk of its diet; in winter it clears runways under the snow at the ground surface in its hunt for food. These lemmings begin to breed in spring, under the snow, and may produce several litters of 6 young each over the course of the summer.

Lemmings are fabled for their striking population explosions, which crop up about every three or four years. It is still not known what causes them, but a fine, warm spring following two or three years of low population generally triggers an explosion that year or the next. As the local population rises, lemmings are forced into surrounding areas.

Slowly more and more are driven out, down the mountains and into the valleys. Many are eaten by predators, and more lose their lives while crossing rivers and lakes. Lemmings do not knowingly commit suicide.

Monday, 4 March 2013

Norway lemming


The Norway lemming, Lemmus lemmus, is a common species of lemming found in northern Fennoscandia. It is the only vertebrate species endemic to the region. The Norway lemming dwells in tundra and fells, and prefers to live near water. Adults feed primarily on sedges, grasses and moss. They are active at both day and night, alternating naps with periods of activity.

Tuesday, 5 September 2006

On The Folk as Intellectualists

For those who are interested: over the weekend I participated in a lively discussion of some data on folk intuitions about the "know how"/"able to" distinction. The discussion revolves around a paper/study by John Bengson, Marc Moffett, and Jen Wright, which discusses Alva Noe's position in "Against Intellectualism"; see the exchange here.

Saturday, 19 August 2006

Newcomb's Paradox

During a brief semi-lull in today's graduate orientation (the last day!), Matt Bedke introduced a paradox that I had not previously been aware of. The quick and dirty version of Newcomb's paradox (as it is called) goes something like this:
You are given a choice between taking either the content of Box A alone or the contents of both Box A and Box B. The content of Box B is known to you to be $1000, but you do not have the option of taking only the content of Box B. The content of Box A is determined prior to your arrival by a psychic: if the psychic foresees you choosing both boxes, he places nothing in Box A; if the psychic foresees you choosing just Box A, he places a million dollars inside it. The degree of the paradox is somewhat contingent upon the degree of accuracy attributed to the psychic by the storyteller, but in this instance Matt reported a 99% statistical accuracy rate.

The question arises: which choice is the most reasonable? If one takes both boxes, there's a 99% probability that they've chosen $1000 over $1 million; if one takes just Box A, there's a 1% chance that they've chosen nothing over $1000. Intuitions vary from person to person, and for some people from time to time. As I see it, it speaks to whether one reasons according to Bayesian probabilities or according to non-Bayesian deduction. The Bayesian sees 99% as near enough to certainty that choosing both boxes would be foolhardy; the chance that you are the one time in a hundred that the psychic is wrong is relatively slim. The non-Bayesian (among whom I think I count myself) reasons more according to the old maxim that "a bird in the hand is better than two in the bush." Allow me to lay out the two lines of reasoning more formally.

The Bayesian reasoner looks at the accuracy of the psychic and weighs the options accordingly. Since the psychic is 99% accurate, choosing both boxes means that there's a 99% chance that he did not place the million dollars in the box; likewise, choosing just Box A means that there's a 99% chance that he did place the million dollars inside. Thus, if one takes both boxes, it's a near certainty that they've chosen $1000 over $1 million, whereas if one takes just Box A, it's a near certainty that they've chosen $1 million over $1000. The latter scenario is far more reasonable than the former, so it seems more reasonable to always take just Box A.
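
To make the arithmetic concrete, here is a minimal sketch of that expected-value comparison (my own illustration, using only the dollar amounts and the 99% figure from the setup above):

```python
# Expected-value comparison for Newcomb's paradox, assuming the psychic
# is right 99% of the time (figures taken from the setup above).
accuracy = 0.99

# One-boxing: with probability 0.99 the psychic foresaw it and put
# $1,000,000 in Box A; otherwise Box A is empty.
ev_one_box = accuracy * 1_000_000 + (1 - accuracy) * 0

# Two-boxing: with probability 0.99 the psychic foresaw it and left Box A
# empty (leaving only the $1,000 in Box B); otherwise you get both prizes.
ev_two_box = accuracy * 1_000 + (1 - accuracy) * (1_000_000 + 1_000)

print(ev_one_box)  # roughly $990,000
print(ev_two_box)  # roughly $11,000
```

On these numbers, one-boxing comes out far ahead, which is just the Bayesian intuition stated above.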

Ignoring the accuracy of prediction previously achieved by the psychic, one can look at the situation as a basic dilemma: either Box A contains a million dollars or it does not. If it does not, then if one takes both boxes, they will have $1000, and if they take just Box A they will have nothing; if it contains the million dollars, then if one takes both boxes, they will have $1.001 million, and if they take just Box A, they will have $1 million. Either way, whether the psychic has placed the million in Box A or not, one is always better off taking both boxes. It seems quite reasonable to always take both boxes.
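
For comparison, the dominance reasoning can be sketched the same way (again my own illustration, not anything from Nozick): whichever state Box A is actually in, taking both boxes pays $1000 more.

```python
# Dominance argument: compare payoffs case by case, ignoring the prediction.
for box_a in (0, 1_000_000):        # Box A is either empty or holds $1,000,000
    one_box = box_a                 # payoff for taking only Box A
    two_box = box_a + 1_000         # payoff for taking both boxes
    print(box_a, one_box, two_box)  # two-boxing is $1,000 ahead in both cases
```

The tension, of course, is that each line of reasoning seems perfectly sound on its own terms.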

I'm sure there are other arguments for and against both mindsets, but these strike me as the most straightforward. I haven't read Nozick's article ("Newcomb's Problem and Two Principles of Choice," in Essays in Honor of Carl G. Hempel, ed. Nicholas Rescher, Synthese Library (Dordrecht, the Netherlands: D. Reidel), p. 115), but I would be curious to see just how he fleshes this out and which approach he endorses.

Tuesday, 15 August 2006

From Thought Experiment to Experiment

In 1998, Andy Clark and David Chalmers published an article called "The Extended Mind" which began by presenting (roughly) the following intuition pump:

Situation 1: A subject sits in front of a computer and is asked to determine the fit of two geometric shapes. The shapes are arranged on the screen such that determining their fit requires the subject to mentally imagine the shapes rotating until their sockets align.

Situation 2: A subject sits in front of a computer and is given the same task, only this time there is a joystick with which the subject can rotate the shapes on the computer screen. The subject can either mentally rotate the shapes as in Situation 1 or use the joystick. The assumption is that the joystick provides an advantage in speed.

Situation 3: A subject sits in front of a computer and is given the same task, only this time superscientists from the cyberpunk future have equipped the subject with a neural implant that allows her to rotate the on-screen objects with only a thought. Again, the subject can either mentally rotate the shapes as in Situation 1, or she can use the implant to do so on-screen.

Clark and Chalmers suggest, interestingly, that all three cases involve similar levels of cognition. That is, rotating the shapes via the neural implant is just as mental an action as doing so through pure imagination; and since the joystick involves the same sort of computational structure as the neural implant, the joystick manipulation is just as cognitive as the purely imaginative rotation. Ultimately, the conclusion is (roughly) that the mind is extended into whatever objects and/or procedures are active in cognitive processes, which are not limited to the skin/skull/body/brain.

Setting aside the theoretical disputes I have with this thesis (and there are a few), it seems that we have a real-life version of Situation 3. Or, at least, a real-life example of something resembling Situation 3. Matt Nagle, a tetraplegic who volunteered to have 96 electrodes implanted in his brain's motor cortex, closely resembles the subject described in Clark and Chalmers's third situation. The implanted electrodes monitor Matt's "brain noise" and interpret it as motor efforts. These interpretations are fed into a computer that allows Matt to control (loosely) an on-screen cursor. So far, Matt has limited control: he can play Pong on a good day, but still has difficulty drawing a circle (even a massively imperfect closed figure can exhaust him). Experimenters hope, however, that in the future these implants will be part of a complete system that will allow tetra-, quadri-, and paraplegics full use of their limbs, bypassing the damaged spinal cord that prevents the brain's messages from reaching the appropriate nerve centers.

The question that arises is whether or not Clark and Chalmers are right about the similarity in level and degree of cognition between their situations, and Matt Nagle gives us a real-life example by which to better judge the supposed similarity. My intuition is that the similarity is illusory at best, though I have yet to argue for it in print. I'm interested to see how other philosophers react to this (and any similar) experiment.

Sunday, 13 August 2006

Natural Temporal Quantifiers

In a discussion with a fellow grad student here at Arizona I learned something very interesting about the Indonesian language (as well as its cousin tongues, like Malay): it has no tenses or tense-specific verb conjugation. This immediately struck me as a natural language that resembles the kind of formalization suggested by Arthur Prior's tense logic.

Prior, as I understand things, really provided the idea of a temporal quantifier, which functions analogously to the modal quantifier in modal logic. Thus, the sentence "Nixon was president" can be formalized as "[It was the case that] Nixon is president." Likewise, from the way it was explained to me, Indonesian makes statements about the past by adverbially quantifying present-tense sentences. Thus the same sentence about Nixon (if translated literally) would look something like "1970-ly, Nixon is president." I found this resemblance terribly interesting.
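
For concreteness, here is roughly how the Nixon example looks when written out in Prior's notation (my own rendering; what I've been loosely calling temporal quantifiers are usually written as the tense operators P, "it was the case that," and F, "it will be the case that," playing the role the diamond plays in modal logic):

```latex
% Prior-style tense logic, sketching the example above:
%   P\varphi  -- it was the case that \varphi    (past)
%   F\varphi  -- it will be the case that \varphi (future)
% "Nixon was president" is the past operator applied to the
% present-tense sentence "Nixon is president":
P(\text{Nixon is president})
% Compare the modal analogue \Diamond\varphi, "it is possible that \varphi".
```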

I'm not sure, however, if we can extrapolate any assumptions about the metaphysical commitments of the typical Indonesian speaker. That is, it's hard to say, based solely on the grammar of the language, whether time-sensitive propositions are taken to be factual declarations (i.e., given an eternalist reading) or treated similarly to the idiomatic and/or literary devices often utilized in English (e.g., "Jason is giving a talk next week"). This might be an empirical question that can be answered by interviewing Indonesian/Malaysian speakers.

Even if we can determine what sorts of metaphysical commitments are engendered by this grammar, I'm skeptical as to what sort of weight it might lend to any philosophical position. Could this actually be used as evidence in support of Eternalism or Temporalism? I don't think so, but it's nonetheless interesting to see a real-life and natural example of a theoretical, formalized logic.