Post by hordur08 on Jan 23, 2009 13:55:14 GMT -5
Since Birna Íris was discussing consciousness, I'd like to mention a few interesting theories about consciousness in humans.
1) Consciousness arose with language. The ability to make a sentence about oneself allows us to have ideas about ourselves and to think about ourselves. According to this view, an animal/person/computer is conscious to the degree of its language capabilities. That actually makes sense if one thinks about how little one remembers from early childhood, and children who have not yet learned to speak possibly have very limited consciousness.
2) Many functions of the brain have been localized, but no researcher has ever found the "seat" of consciousness. This has led some people to revert to a form of dualism: the belief that the mind is something different from the body, that it is non-physical yet can have an effect on a physical body.
Others have concluded that consciousness most likely arises in a functioning brain when many areas of the brain are active at the same time. In my opinion this is a much more interesting theory than dualism, and a much more plausible one. It has also been supported by research in which people connected to an EEG (a machine that records brain activity) show many areas of the brain active simultaneously when they recognize an ambiguous figure.
Both of these theories might be of interest to computer scientists and philosophers considering whether a machine can be conscious, and they pose a few questions. Is a machine capable of natural language conscious, and if so, is a machine with limited language capabilities conscious to a certain degree?
I think the second theory is more interesting, though, and perhaps more likely. It seems that this parallelism in the brain is what gives rise to consciousness. It has yet to be proven, but experimental results seem to favor this view. Is this the key to making a machine conscious? Does it have to run parallel processes?
Even if we can make a machine conscious, how do we know that it is conscious? Will we ever have better evidence for consciousness than reasoning by analogy? The only reason we believe another human being is conscious is that we are conscious, and since another human being is similar to us, it must be conscious too. This will also be a problem when deciding whether a machine is conscious. Maybe we will have to settle for making machines that behave like humans and leave the question of consciousness open for debate.