Tuesday, 5 September 2006
On The Folk as Intellectualists
For those who are interested, over the weekend I participated in a lively discussion of some data on folk intuitions about the "know how"/"able to" distinction. The discussion revolved around a paper/study by John Bengson, Marc Moffett, and Jen Wright, which discusses Alva Noë's position in "Against Intellectualism"; see the exchange here.
Saturday, 19 August 2006
Newcomb's Paradox
During a brief semi-lull in today's graduate orientation (the last day!), Matt Bedke introduced a paradox that I had not previously been aware of. The quick and dirty version of Newcomb's paradox (as it is called) goes something like this:
You are given a choice between taking either the contents of Box A alone or the contents of both Box A and Box B. Box B is known to contain $1000, but you do not have the option of taking Box B alone. The contents of Box A are determined prior to your arrival by a psychic: if the psychic foresees you choosing both boxes, he places nothing in Box A; if the psychic foresees you choosing just Box A, he places a million dollars inside. The force of the paradox is somewhat contingent upon the degree of accuracy attributed to the psychic by the storyteller, but in this instance Matt reported a 99% accuracy rate.
The question arises: which choice is the most reasonable? If one takes both boxes, there's a 99% probability that they've chosen $1000 over $1 million; if one takes just Box A, there's a 1% chance that they've chosen nothing over $1000. Intuitions vary from person to person, and for some people from time to time. As I see it, it speaks to whether one reasons according to Bayesian probabilities or according to non-Bayesian deduction. The Bayesian sees 99% as near enough to certainty that choosing both boxes would be foolhardy; the chance that you are the one time in a hundred that the psychic is wrong is relatively slim. The non-Bayesian (which I think includes myself) reasons more according to the old maxim that "a bird in the hand is worth two in the bush." Allow me to lay out the two lines of reasoning more formally.
The Bayesian reasoner looks at the accuracy of the psychic and weighs the options accordingly. Since the psychic is 99% accurate, choosing both boxes means there's a 99% chance that he did not place the million dollars in Box A; likewise, choosing just Box A means there's a 99% chance that he did place the million dollars inside. Thus, if one takes both boxes, it's a near certainty that they've chosen $1000 over $1 million, whereas if one takes just Box A, it's a near certainty that they've chosen $1 million over $1000. The latter scenario is far preferable to the former, so it seems more reasonable to always take just Box A.
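A quick back-of-the-envelope calculation makes the Bayesian case vivid. Here is a minimal sketch in Python of the expected payoffs, using the 99% accuracy figure and the dollar amounts from the story (the code and variable names are mine, purely for illustration):

# Expected payoffs in Newcomb's paradox, assuming the psychic's prediction
# is correct with probability 0.99 and the payouts given in the story.

ACCURACY = 0.99          # probability the psychic predicted your actual choice
BOX_B = 1_000            # Box B always holds $1000
MILLION = 1_000_000      # what goes in Box A if the psychic foresees one-boxing

# If you take only Box A, the psychic (99% of the time) foresaw this and filled it.
ev_one_box = ACCURACY * MILLION + (1 - ACCURACY) * 0

# If you take both boxes, the psychic (99% of the time) foresaw this and left Box A empty.
ev_two_box = ACCURACY * BOX_B + (1 - ACCURACY) * (MILLION + BOX_B)

print(f"Expected value, Box A only: ${ev_one_box:,.0f}")   # roughly $990,000
print(f"Expected value, both boxes: ${ev_two_box:,.0f}")   # roughly $11,000

On those numbers, one-boxing comes out nearly ninety times better in expectation.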
Ignoring the accuracy of prediction previously achieved by the psychic, one can look at the situation as a basic dilemma: either Box A contains a million dollars or it does not. If it does not, then taking both boxes yields $1000, while taking just Box A yields nothing; if it does, then taking both boxes yields $1.001 million, while taking just Box A yields $1 million. Either way, whether the psychic has placed the million in Box A or not, one is always better off taking both boxes. It seems quite reasonable to always take both boxes.
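The same dominance reasoning can be checked case by case. Here is another minimal Python sketch (same assumed amounts, again mine and purely illustrative):

# Dominance reasoning: compare two-boxing with one-boxing in each possible
# state of Box A, using the amounts from the story.

BOX_B = 1_000
for box_a in (0, 1_000_000):          # Box A is either empty or holds $1 million
    one_box = box_a                   # take only Box A
    two_box = box_a + BOX_B           # take both boxes
    print(f"Box A holds ${box_a:,}: one-box -> ${one_box:,}, "
          f"two-box -> ${two_box:,} (two-boxing is ${two_box - one_box:,} better)")

In both states of the world, two-boxing beats one-boxing by exactly the $1000 in Box B, which is the dominance argument in miniature.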
I'm sure there are other arguments for and against both mindsets, but these strike me as the most straightforward. I haven't read Nozick's article ("Newcomb's Problem and Two Principles of Choice," in Essays in Honor of Carl G. Hempel, ed. Nicholas Rescher, Synthese Library (Dordrecht, the Netherlands: D. Reidel), p. 115), but I would be curious to see just how he fleshes this out and which approach he endorses.
Tuesday, 15 August 2006
From Thought Experiment to Experiment
In 1998, Andy Clark and David Chalmers published an article called "The Extended Mind," which began by presenting (roughly) the following intuition pump:
Situation 1: A subject sits in front of a computer and is asked to determine the fit of two geometric shapes. The shapes are arranged on the screen such that determining their fit requires the subject to mentally imagine the shapes rotating until their sockets align.
Situation 2: A subject sits in front of a computer and is given the same task, only this time there is a joystick by which the subject can rotate the shapes on the computer screen. The subject can either mentally rotate the shapes as in Situation 1 or use the joystick. The assumption is that the joystick provides an advantage in speed.
Situation 3: A subject sits in front of a computer and is given the same task, only this time superscientists from the cyberpunk future have equipped the subject with a neural implant that allows her to rotate the on-screen objects with only a thought. Again, the subject can either mentally rotate the shapes as in Situation 1, or she can use the implant to do so on-screen.
Clark and Chalmers suggest, interestingly, that all three cases involve similar levels of cognition. That is, rotating the shapes via the neural implant is just as mental an action as doing so through pure imagination; and since the joystick relies on the same sort of computational structure as the neural implant, the joystick manipulation is just as cognitive as the imaginative one. Ultimately, the conclusion is (roughly) that the mind is extended into whatever objects and/or procedures are active in cognitive processes, which are not limited to the skin/skull/body/brain.
Setting aside the theoretical disputes I have with this thesis (and there are a few), it seems that we have a real-life version of Situation 3. Or, at least, a real-life example of something resembling Situation 3. Matt Nagle, a tetraplegic who volunteered to have 96 electrodes implanted in his brain's motor cortex, closely resembles the subject described in Clark and Chalmers's third situation. The implanted electrodes monitor Matt's "brain noise" and interpret it as motor efforts. These interpretations are fed into a computer that allows Matt to control (loosely) an on-screen cursor. So far, Matt has limited control: he can play Pong on a good day, but still has difficulty drawing a circle (even a massively imperfect closed figure can exhaust him). Experimenters hope, however, that in the future these implants will be part of a complete system that gives tetraplegics and paraplegics full use of their limbs again, bypassing the damaged spinal cord that prevents the brain's messages from reaching the appropriate nerve centers.
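For readers curious what "monitoring brain noise and interpreting it as motor efforts" might amount to computationally, here is a toy Python sketch of the general idea: a linear decoder that maps recorded firing rates onto a 2-D cursor velocity. The channel count comes from the post; the weights, the simulated "spikes," and the linear-decoding scheme are purely illustrative assumptions on my part, not a description of the actual system implanted in Nagle.

import numpy as np

# Toy illustration of a linear cursor decoder: firing rates recorded from
# implanted electrodes are mapped to a 2-D cursor velocity.

N_CHANNELS = 96                      # the post mentions 96 electrodes
rng = np.random.default_rng(0)

# Hypothetical decoding weights; in a real system these would be fit to data
# recorded while the subject imagines particular movements.
weights = rng.normal(size=(2, N_CHANNELS)) * 0.01

def decode_velocity(firing_rates):
    """Map a vector of per-electrode firing rates to an (x, y) cursor velocity."""
    return weights @ firing_rates

# Simulate one timestep of "brain noise" and update a cursor position.
firing_rates = rng.poisson(lam=10, size=N_CHANNELS)   # made-up spike counts per bin
cursor = np.zeros(2)
cursor += decode_velocity(firing_rates)
print("cursor moved to", cursor)

The point of the sketch is only that the "interpretation" is a mapping from neural activity to intended movement, however that mapping is actually learned.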
The question that arises is whether or not Clark and Chalmers are right about the similarity in level and degree of cognition between their situations, and Matt Nagle gives us a real-life example by which to better judge the supposed similarity. My intuition is that the similarity is illusory at best, though I have yet to argue for it in print. I'm interested to see how other philosophers react to this (and any similar) experiment.
Sunday, 13 August 2006
Natural Temporal Quantifiers
In a discussion with a fellow grad student here at Arizona, I learned something very interesting about the Indonesian language (as well as its cousin tongues, like Malay): it has no tenses or tense-specific verb conjugation. This immediately struck me as a natural language that resembles the kind of formalization suggested by Arthur Prior's tense logic.
Prior, as I understand things, introduced the idea of a temporal quantifier (more commonly called a tense operator), which functions analogously to the operators of modal logic. Thus, the sentence "Nixon was president" can be formalized as "[It was the case that] Nixon is president." Likewise, from the way it was explained to me, Indonesian makes statements about the past by adverbially quantifying present-tense sentences. Thus the same sentence about Nixon (if translated literally) would look something like "1970-ly, Nixon is president." I found this resemblance terribly interesting.
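To make the parallel explicit, here is a small LaTeX sketch of the Prior-style rendering, using the standard past-tense operator P, read "it was the case that"; the date-indexed line is only my informal gloss of the adverbial construction, not official notation:

% P\varphi reads "it was the case that \varphi"
\[ \text{``Nixon was president''} \;\Longrightarrow\; P(\text{Nixon is president}) \]
% an informal gloss of the Indonesian-style adverbial construction
\[ \text{``1970-ly, Nixon is president''} \;\approx\; \mathrm{At}_{1970}(\text{Nixon is president}) \]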
I'm not sure, however, if we can extrapolate any assumptions about the metaphysical commitments of the typical Indonesian speaker. That is, it's hard to say, based solely on the grammar of the language, whether time-sensitive propositions are taken to be factual declarations (i.e., given an eternalist reading) or treated similarly to the idiomatic and/or literary devices often utilized in English (e.g., "Jason is giving a talk next week"). This might be an empirical question that can be answered by interviewing Indonesian and Malay speakers.
Even if we can determine what sorts of metaphysical commitments are engendered by this grammar, I'm skeptical as to what sort of weight it might lend to any philosophical position. Could this actually be used as evidence in support of Eternalism or Temporalism? I don't think so, but it's nonetheless interesting to see a real-life and natural example of a theoretical, formalized logic.