One of the benefits of being in a program like Arizona's is the variety and quality of discussions that occur outside the classroom. Take, for instance, a recent lunchtime discussion between Benji Kozuch, David Glick, and me, an addendum to a previous discussion about the problem of qualia for AI-CogSci. Earlier, I had argued, pace Benji, that artificial intelligence would never be able to duplicate in machines the subjective, personal experience (in the fullest phenomenal sense of the word) that I enjoy in my waking life.
Benji, appealing to my physicalist intuitions, argued that I have no reason for thinking that, in principle, a cleverly embedded super-microchip could not produce the same sort of conscious, phenomenal character that I presently experience. If, he says, we could develop a microchip capable of duplicating every physical function of a neural circuit (insert whatever physiological component you take to be necessary or sufficient for consciousness), then, given the physicalist doctrine, we should expect said microchip to reproduce phenomenal consciousness.
My response has since earned me the label of "biological chauvinist." See, my (apparently prejudicial) intuition is that the matter of a mind is just as important as its architecture or logical structure or what-have-you. Though it's a crude analogy, grey matter seems to me to be essential to a mind in the same way that steel is essential to a katana. We can create something like a samurai sword out of aluminium, but aluminium and steel are just too different (in tensile strength and elasticity, in fineness of edge and the ability to retain that edge, etc.) to classify them together. I believe it's fairly well accepted amongst sword enthusiasts that no other metal can perform the same functions as steel. It seems reasonable to me that the same can be said of minds: beyond the structure and logical relations, the material composition is essential to mental functioning; anything else just can't have the same function.
But Benji was not convinced. (I should point out that I did not present him with the sword analogy, merely my intuition that the material is somehow essential and that altering the material would be enough to change the function of the neuron/"chip".) In principle, he believes, the functions of neurons can be duplicated down to the finest detail, just maybe not in this lifetime. He certainly seems amenable to the possibility of creating computer models (digital "brains") that are theoretically capable of recreating perfectly functionally isomorphic virtual neurons. That is, the digital information would map perfectly, down to the finest grain, onto the neural information. And if this is true, and if physicalism is true, then we should expect computer-modelled brains to experience qualia. We have no reason, besides some sort of biological chauvinism, to deny it.
I think I have a couple of reasons why digital-brain scenarios are not satisfactory, though one of them is fairly weak as objections go. First, and weakly, I see no reason why digital models should actualize any of the characteristics of the real-world entities they are designed to model. Returning to the sword analogy, a perfectly modeled virtual katana cannot cut paper or tatami mats, though it could certainly predict the pattern a real sword might produce while slicing through a tatami mat. Models are useful for prediction and elucidation, but they do not actualize the properties of the things they model.
Secondly, I see no way of determining whether or not a virtual brain would have a conscious experience with phenomenal quality. We could present it with stimuli and judge whether its responses are convincing enough to suggest both intelligence (à la Turing's test) and qualitative experience (is there even a test for such a thing?). The problems with this approach are numerous and obvious, and I see no way around them. Call it the problem of other virtua-minds. How can we be certain that our virtual mind-brain hasn't simply modeled the brain of one of Chalmers's zombies? Certainly, we could tap into the optical/sonic/tactile feeds of the program, but would that give us the "what it's like for the program" (if such a thing could be said to exist)? I doubt it, but even if it did, I doubt that it would resemble anything like the phenomenal character of human experience.
All in all, I'm perfectly willing to be labelled a biological chauvinist. I think it's a reasonable position to assume until computer science can convince us that it's flawed, even if it does make me a property dualist.