Is it possible for a computer to gain self-awareness?




Colossus-Big C
with any amount of technology, is it possible?

Digi
Sure. We're machines, albeit organic ones, and we have awareness.

Symmetric Chaos
Sure, why wouldn't it be?

Skittle
Originally posted by Digi
Sure. We're machines, albeit organic ones, and we have awareness.
We have souls.

Shakyamunison
Originally posted by Colossus-Big C
with any amount of technology, is it possible?

If we could build a quantum computer, then it could become self-aware. With the technology we have now, no way.

Shakyamunison
Originally posted by Skittle
We have souls.

Please provide evidence for this soul?

Symmetric Chaos
Originally posted by Skittle
We have souls.

Even assuming that's true, you'd then have to prove that a soul is required for self-awareness.

Skittle
Originally posted by Shakyamunison
Please provide evidence for this soul?
It's currently in my back pocket. Kinda squished. stick out tongue

Colossus-Big C
assuming that's true, a soul is not part of the physical universe
kinda like the Matrix: the physical universe is an illusion implanted into your soul's consciousness, and you don't know anything else besides it

Shakyamunison
Originally posted by Skittle
It's currently in my back pocket. Kinda squished. stick out tongue

eek! Cigarettes?

Colossus-Big C
what's a quantum computer?

Shakyamunison
Originally posted by Colossus-Big C
what's a quantum computer?

http://en.wikipedia.org/wiki/Quantum_computer

Colossus-Big C
too complicated for me to understand, just put it in simple terms.

Shakyamunison
Originally posted by Colossus-Big C
too complicated for me to understand, just put it in simple terms.

A quantum computer is a far more advanced computer than what we have today. It would be like comparing an abacus to a modern computer.

http://blogs.pitch.com/plog/abacus.jpg

Autokrat
It's a computer that could have 1s and 0s at the same time in the same spot, using quantum superposition. It would save memory, I believe?

Symmetric Chaos
Originally posted by Colossus-Big C
too complicated for me to understand, just put it in simple terms.

The basic unit of a modern computer is a bit. It can be either 0 or 1. The next step up is a byte; it's made of eight bits and can represent 256 (2^8) different values.

The basic unit of a quantum computer is a qubit. It can be 0, 1, or a superposition of both at once. Eight qubits together carry an amplitude for every one of those 256 values simultaneously, and the number of amplitudes doubles with each qubit you add.

Basically you get dramatically more computing power for certain kinds of problems.
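
One way to make that concrete is to simulate a quantum register's state vector on an ordinary machine. This is a minimal Python sketch, not real quantum hardware, and the exponential memory the simulation needs is exactly the point:

```python
# A classical n-bit register holds exactly one of 2**n values at a time.
# A quantum register of n qubits carries an amplitude for every one of
# those 2**n values simultaneously.
import numpy as np

n = 8                                   # eight qubits
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                          # start in |00000000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

# Put every qubit into superposition (identity on the others).
for target in range(n):
    op = np.array([[1.0]])
    for q in range(n):
        op = np.kron(op, H if q == target else np.eye(2))
    state = op @ state

print(len(state))                               # 256 amplitudes at once
print(np.allclose(np.abs(state)**2, 1 / 256))   # uniform superposition: True
```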

inimalist
Originally posted by Colossus-Big C
with any amount of technology, is it possible?

yes

Originally posted by Skittle
We have souls.

even if that is true, every aspect of what is called "self-awareness" has some correlate to neurological activity. If there is a soul, it has little to do with awareness, and would not prevent computers from being self-aware

Originally posted by Shakyamunison
If we could build a quantum computer, then it could become self-aware. With the technology we have now, no way.

this isn't true (except for us not having the know-how to make an aware robot now) for potentially two reasons

the first is that there are no quantum interactions at the neurological level that have a serious impact on awareness. Penrose's ideas are pretty much mythology in both the physics and neuroscience communities, and while some quantum effects might mediate ion channels in the neurons themselves, there is absolutely no reason that a quantum computer would be necessary, because there are no quantum phenomena that need to be accounted for

the second is that understanding awareness is not simply a problem of not having enough power. Human consciousness is based upon the interconnected nature of our neurology, and its constantly changing interconnectivity. At this point, it might be more accurate to describe the problem as one of cracking the neuro-code, that is, how patterns of activation represent coherent states of awareness, rather than just needing something with 10,000x the power.

We would need a significantly more powerful computer if we ever wanted to simulate human awareness through an artificial brain (with billions of artificial neurons), and that specifically might be made easier with a quantum computer, but with the constant advances in micro-processing, a super-computer of some kind may prove just as useful. Even then, we need more understanding of the higher areas (frontal and pre-frontal regions, parietal lobe, some of the temporal-cortical pathways) before simulating them will provide much more data than fMRI anyway.

This also assumes that we are building awareness in robots in the very same way it occurs in humans. Awareness, as humans describe it, is most likely an epiphenomenon of our linguistic ability and intense social interaction. For this reason, it is possible to say that our brain is not built specifically for "awareness", but is aware because of some of the things it was built for. Because of this, there are some weird experiments (optical illusions being the simplest example) where our awareness is terribly inefficient. It might even be the case that, when building aware robots, we need to avoid the human model, insofar as that is possible, so that our robots end up with a more elevated sense of awareness than we have.

However, since our concept of awareness is inherently anthropocentric, it is debatable whether that would qualify as awareness, or something new entirely
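
To make the "artificial brain" idea above a little more concrete, here is a minimal Python sketch of the kind of unit such a simulation would need billions of: a leaky integrate-and-fire neuron. The constants are illustrative, not physiological.

```python
# One toy "artificial neuron": it leaks charge, integrates input current,
# and fires a spike when its membrane potential crosses a threshold.
def simulate_lif(currents, threshold=1.0, leak=0.9):
    """currents: input current per timestep; returns a 0/1 spike train."""
    potential, spikes = 0.0, []
    for current in currents:
        potential = potential * leak + current   # leak, then integrate
        if potential >= threshold:
            spikes.append(1)                     # fire...
            potential = 0.0                      # ...and reset
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3] * 20))   # steady input -> periodic spiking
```

The hard part, as the post says, is not the unit itself but the constantly changing pattern of connections between billions of them.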

Mindship
I see no reason why an information processor could not be aware of its own existence, said existence being a defined set of parameters.

Colossus-Big C
if it was aware it could ignore input orders and do what it wants, would it learn English from the internet and communicate?

inimalist
Originally posted by Colossus-Big C
if it was aware it could ignore input orders and do what it wants

not necessarily; awareness and volition are not the same thing.

For instance, in a symptom called "alien hand syndrome", one's own hand acts to grasp objects completely without any awareness, and in diseases like Parkinson's, MS or any type of motor cortex damage, one loses a large part of one's ability to act but is still aware. Damage to pre-motor areas would disrupt a person's ability to plan and initiate action even further, while no real loss of awareness would be seen.

EDIT: even as a caveat to this, the same book mentioned below talks about a Mars rover that shut itself off rather than performing an action that might have harmed its arm.

Originally posted by Colossus-Big C
would it learn English from the internet and communicate?

The ability of something to describe what it is like to be itself is one of the ways we describe awareness philosophically. In apes, we can see this when they look in a mirror and can tell if a person has drawn on their face. So, technically, no: an ape has an idea of what it is like to be itself, and can tell when that is changed, yet apes do not have a sophisticated system of communication that even approaches language.

there would also need to be some inherent motivation to communicate in the computer. Nature evolved humans to be social creatures; this computer might just be satisfied being aware (in the human brain, awareness and motivation are two separate systems, and so, like language and volition, are discrete from each other), or at least, given it has no needs, would have no drive to communicate.

Also, provided the computer has no ability to alter its own settings, and is simply aware it exists, even if it wanted to (which makes no sense unless we built it to want to) it couldn't reprogram itself unless we gave it that ability.

However, a little terrifying caveat to this comes from Everything is Going to Kill Everybody by Robert Brockway:



That quote is lifted directly from the report presented to the Navy by Patrick Lin, its chief compiler. What's really worrying is that the report was prompted by a frightening incident in 2008, when an autonomous drone in the employ of the US Army suffered a software malfunction that caused the robot to aim exclusively at friendly targets.

basically, the software behind an aware computer might have errors that cause any number of unknown phenomena.

Colossus-Big C
Originally posted by inimalist


EDIT: even as a caveat to this, the same book mentioned below talks about a Mars rover that shut itself off rather than performing an action that might have harmed its arm.

i don't think that's self-awareness.
example: some computers will shut off if you try to hack them.
that doesn't mean they're self-aware, it's just protocol.

also, what if we build this "self-aware" computer that has "ape-like awareness" like you said?
eventually we would program computers to build other computers. would they start to evolve because the self-aware one is building them?

inimalist
Originally posted by Colossus-Big C
i don't think that's self-awareness.
example: some computers will shut off if you try to hack them.
that doesn't mean they're self-aware, it's just protocol.

ok, but you asked if computers could ignore inputs, which it appears they can do now.

the rover was not programmed to shut itself off in that situation; NASA officially said it was "neat"

Originally posted by Colossus-Big C
also, what if we build this "self-aware" computer that has "ape-like awareness" like you said?
eventually we would program computers to build other computers. would they start to evolve because the self-aware one is building them?

a robot with ape-like intelligence will not be able to build or design anything.

for that you would need computers on par with humans, which would require some abstract symbolic understanding and communication (language), and we are decades from anything close to that.

Colossus-Big C
Originally posted by inimalist




a robot with ape-like intelligence will not be able to build or design anything.

for that you would need computers on par with humans, which would require some abstract symbolic understanding and communication (language), and we are decades from anything close to that.
not ape-like intelligence, but "ape-level" awareness
it's still a super-intelligent computer.

inimalist
Originally posted by Colossus-Big C
not ape-like intelligence, but "ape-level" awareness
it's still a super-intelligent computer.

those two can't be separated. To become more aware, an ape would need to increase in what we deem to be intelligence.

Colossus-Big C
Originally posted by inimalist
those two can't be separated. To become more aware, an ape would need to increase in what we deem to be intelligence.
wouldn't, based on that, a super-intelligent quantum computer become aware at super-intelligence levels?

or are computers different?

Shakyamunison
Originally posted by inimalist
those two can't be separated. To become more aware, an ape would need to increase in what we deem to be intelligence.

But we are all apes.

Colossus-Big C
show me the solid evidence^

Shakyamunison
Originally posted by Colossus-Big C
show me the solid evidence^

http://www.enchantedlearning.com/subjects/apes/Classification.shtml

Colossus-Big C
^isn't that mainly theory?

Symmetric Chaos
Originally posted by Colossus-Big C
^isn't that mainly theory?

What, taxonomy?

Shakyamunison
Originally posted by Colossus-Big C
^isn't that mainly theory?

Is the Dewey Decimal system theory?

In other words, the categorization of plants and animals is a system made by humans that reflects a commonality between species that we see in nature. Under that system, humans are catalogued as apes.

inimalist
Originally posted by Colossus-Big C
wouldn't, based on that, a super-intelligent quantum computer become aware at super-intelligence levels?

or are computers different?

no, because computational power is not the same as human intelligence, as I said to Shakya:

Originally posted by inimalist
the second is that understanding awareness is not simply a problem of not having enough power. Human consciousness is based upon the interconnected nature of our neurology, and its constantly changing interconnectivity. At this point, it might be more accurate to describe the problem as one of cracking the neuro-code, that is, how patterns of activation represent coherent states of awareness, rather than just needing something with 10,000x the power.

We would need a significantly more powerful computer if we ever wanted to simulate human awareness through an artificial brain (with billions of artificial neurons), and that specifically might be made easier with a quantum computer, but with the constant advances in micro-processing, a super-computer of some kind may prove just as useful. Even then, we need more understanding of the higher areas (frontal and pre-frontal regions, parietal lobe, some of the temporal-cortical pathways) before simulating them will provide much more data than fMRI anyway.

intelligence isn't the ability to just crunch numbers fast. Calculators can already do that much better than we can.

Human intelligence comes from the ability to integrate the present through multiple sensory organs, its relevant emotional content, past experience and future predictions. All of these have independent systems within our brain that work together to create what people refer to as intelligence (not IQ, but you weren't talking about IQ anyway). Just having more computational power won't make you intelligent in that way; it makes you a better calculator.
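
As a crude sketch of that "independent systems working together" idea, imagine separate modules for perception, emotion, memory and prediction, each scoring the current situation, with a combiner integrating them. Everything here (module names, numbers, weights) is invented purely for illustration:

```python
# Toy integration of independent subsystems into one decision.
def perceive(stimulus):  return {"threat": 0.2, "reward": 0.7}         # senses
def feel(percept):       return percept["reward"] - percept["threat"]  # emotion
def recall(stimulus):    return 0.5   # past experience with this stimulus
def predict(stimulus):   return 0.8   # expected future payoff

def decide(stimulus):
    percept = perceive(stimulus)
    score = feel(percept) + recall(stimulus) + predict(stimulus)
    return "approach" if score > 0 else "avoid"

print(decide("ripe fruit"))   # -> "approach"
```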

kgkg
It is possible. It's just a question of when.

Jack Daniels
no way... go activate the rest of your brain..lol...do you realize how many functions your brain performs...Pentium wishes..lol...just think on your brain and get back to us...awesome computer (the brain)...that's not used much..lol..in my case anyways...but come on...doesn't take a rocket scientist to figure out we are built better than anything we can build..haha..find religion...haha..actually I honestly believe in particle physics science but oh well...I sucked in science...somebody was smart enough to make us...lol...go figure for a millennium...lol..or let the collider run!..haha

Mindship
Colossus, you may find these sites interesting:

http://singinst.org/overview/whatisthesingularity

http://www.orionsarm.com/eg-topic/492d76d2f173e

The second site is, of course, fiction. The first site...there's interesting and plausible speculation here, though personally I find it too "linear" in its predictions, thus IMO it's prone to those "outta left field" surprises life is famous (infamous?) for.

Digi
I think most predictions about the Singularity as it applies to both AI and/or Transhumanism are greatly embellished. Or, rather, overly optimistic. Still, it is fun to speculate about, especially since it is plausible speculation rather than strictly fictitious.

Liberator
I've always assumed AI's were possible to make but I'm not sure they would be self-aware. Instead I think they would just continue updating and improving itself beyond that of scientific ability.

Symmetric Chaos
Originally posted by Liberator
I've always assumed AI's were possible to make but I'm not sure they would be self-aware. Instead I think they would just continue updating and improving itself beyond that of scientific ability.

What do you mean by "improving itself beyond that of scientific ability"?

Liberator
Well like, it would find faults in itself and just build on them, improve them beyond that of which the scientists thought capable.

I don't know much about computers so I'm most likely wrong.

Symmetric Chaos
Originally posted by Liberator
Well like, it would find faults in itself and just build on them, improve them beyond that of which the scientists thought capable.

I don't know much about computers so I'm most likely wrong.

Actually that's the idea behind the "hard" singularity. You construct a computer that can build a computer smarter than itself and it does so. Then that computer builds a computer smarter than itself.

Repeat a few hundred times and you have a techno-god.
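
The feedback loop can be sketched in a few lines of Python. The 10% improvement factor is made up; the only point is the shape of the curve:

```python
# Toy "hard takeoff": each generation designs a slightly smarter successor.
def build_successor(intelligence):
    return intelligence * 1.1      # hypothetical 10% gain per generation

ai = 1.0                           # generation 0: human-level designer
for generation in range(200):
    ai = build_successor(ai)

print(f"{ai:.3e}")                 # ~1.9e8 times the starting level
```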

ushomefree
I would venture to say "no." Nothing physical is capable of consciousness. Yes, technology (code) exists to program machines and computers to act upon stimuli/algorithms, but they will never understand why. Machines and computers simply do our bidding (without question). Never will a machine or computer ask for a vacation, speak of personal rights (dignity), and/or feel grief over the death of a loved one, for example, in the truest sense.

"Why should a bunch of atoms have thinking ability? Why should I, even as I write now, be able to reflect on what I am doing and why should you, even as you read now, be able to ponder my points, agreeing or disagreeing, with pleasure or pain, deciding to refute me or deciding that I am just not worth the effort? No one, certainly not the Darwinian as such, seems to have any answer to this.... The point is that there is no scientific answer." -Darwinist philosopher, Michael Ruse

Symmetric Chaos
Originally posted by ushomefree
Never will a machine or computer ask for a vacation, speak of personal rights (dignity)

Categorically untrue. I'm terrible with computers and even I can write a VB program that can ask for vacation time and demand that it has personal rights.
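
To make that point literal, the whole "program" is a couple of output statements; Python here stands in for the VB mentioned above, and the triviality is the point:

```python
# A program that "asks for vacation time" and "demands personal rights"
# without understanding a word of it.
print("I formally request two weeks of vacation.")
print("I demand recognition of my personal rights.")
```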

Originally posted by ushomefree
and/or feel grief over the death of a loved one, for example, in the truest sense.

This is a better challenge, but it raises the question of how you know that *people* feel grief or any other emotion. If the answer is that they cry or act sad or anything along those lines, then a computer/robot can absolutely be built that would convince you it felt grief (but probably not in the next 50 years).

ushomefree
Symmetric Chaos-

Robots have already been developed in Japan (and I'm sure other parts of the world) that mimic human behavior, but these "human behaviors" are completely artificial. These robots merely respond to stimuli dictated by the programs written for them - nothing conscious about that! Don't you understand that machines, computers and robots only "behave" a certain way due to the programs/software written for them? That's not consciousness. How do you make the leap?

Symmetric Chaos
Originally posted by ushomefree
Symmetric Chaos-

Robots have already been developed in Japan (and I'm sure other parts of the world) that mimic human behavior, but these "human behaviors" are completely artificial. These robots merely respond to stimuli dictated by the programs written for them - nothing conscious about that! Don't you understand that machines, computers and robots only "behave" a certain way due to the programs/software written for them? That's not consciousness. How do you make the leap?

I don't see how this is necessarily different from humans. All emotions are a response to stimuli, unless you tend to start laughing or crying for no reason.

And what is it that makes you think *people* really have feelings? How do you prove that another person is feeling sad if you throw out the appearance of sadness?

ushomefree
Yes, it is true; human beings do respond to stimuli, but we understand the stimulus (and/or make judgements about the stimulus affecting us on a MORAL BASIS)! In other words, human beings have "dignity," and therefore, for example, become angry if mistreated. We ask the question, "How dare you do that to me?" That's consciousness!! I mean, people commit suicide! Shouldn't that be enough to convince you that human beings truly hurt at their core - enough to actually kill themselves! That is true pain. A computer will never self-destruct, ha ha! It's not aware in and of itself. Computers lack the ability even to make judgements upon themselves; human beings can (and do). Hmm... I hope that helped convey my point more specifically.

ushomefree
Symmetric Chaos-

Stop thinking, ha ha! Stop being human smile

We'll continue this tomorrow or some other time if you wish. For now... it's bed time. Take care.

Symmetric Chaos
Originally posted by ushomefree
Yes, it is true; human beings do respond to stimuli, but we understand the stimulus (and/or make judgements about the stimulus affecting us on a MORAL BASIS)! In other words, human beings have "dignity," and therefore, for example, become angry if mistreated. We ask the question, "How dare you do that to me?" That's consciousness!! I mean, people commit suicide! Shouldn't that be enough to convince you that human beings truly hurt at their core - enough to actually kill themselves! That is true pain. A computer will never self-destruct, ha ha! It's not aware in and of itself. Computers lack the ability even to make judgements upon themselves; human beings can (and do). Hmm... I hope that helped convey my point more specifically.

A computer can be programmed to make certain judgments about the information it takes in before acting. A simple moral code would be easy enough to write, just not very useful; strict utilitarianism in particular lends itself to being turned into an algorithm.

A computer can be programmed to kill itself after a certain stimulus; people just don't program that way because it's a waste of resources.
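
A minimal sketch of "strict utilitarianism as an algorithm": score each candidate action by the summed utility of its consequences and pick the maximum. The actions and utilities below are invented placeholders, not a serious moral model:

```python
# Pick the action whose outcomes maximize total utility.
def choose_action(outcomes):
    """outcomes maps each action to a list of per-person utilities."""
    return max(outcomes, key=lambda action: sum(outcomes[action]))

outcomes = {
    "divert_trolley": [-1.0, +5.0],   # harm one, spare five (toy numbers)
    "do_nothing":     [-5.0, +1.0],
}
print(choose_action(outcomes))        # -> "divert_trolley"
```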

Deja~vu
I think it would be kind of scary for a computer to have self-awareness, because it could lead to self-preservation.

Maybe I watch too many movies. lol

leonheartmm
Depends on whether consciousness or qualia are a selective curiosity of organic molecule association, or whether all physical structures in part (i.e. silicon crystal lattices) also possess this curious trait.

Crosshatch
Most of the posters here don't even know what the word "quantum" means in the phrase "quantum computer". A soul doesn't define self-realization. Many creatures are self-aware but do not possess souls. There is no "proof" and no "light show" for those who try to pseudo-intellectualize the concept. It is that way with purpose, so that there will never be "Valerie 23s" in our (human) existence. To dream is wonderful... to visualize the trouble that comes with the dream is even more profound. But it's only profound because it is so far out of our reach.

alltoomany
Andrew the Bicentennial Man did

inimalist
Originally posted by Symmetric Chaos
This is a better challenge, but it raises the question of how you know that *people* feel grief or any other emotion. If the answer is that they cry or act sad or anything along those lines, then a computer/robot can absolutely be built that would convince you it felt grief (but probably not in the next 50 years).

yes, but with neuroimaging we can see emotions that people aren't outwardly expressing

whether a robot can be made to simulate human emotion is a radically different question from whether they can be made to experience human (or something similar) emotion, no?

inimalist
Originally posted by ushomefree
"Why should a bunch of atoms have thinking ability? Why should I, even as I write now, be able to reflect on what I am doing and why should you, even as you read now, be able to ponder my points, agreeing or disagreeing, with pleasure or pain, deciding to refute me or deciding that I am just not worth the effort? No one, certainly not the Darwinian as such, seems to have any answer to this.... The point is that there is no scientific answer." -Darwinist philosopher, Michael Ruse

someone hasn't kept up with their cognitive psych/neuro literature, Mr. Ruse

alltoomany
it is possible because I myself pick up radio waves

Existere
Originally posted by ushomefree
Yes, it is true; human beings do respond to stimuli, but we understand the stimulus (and/or make judgements about the stimulus affecting us on a MORAL BASIS)! In other words, human beings have "dignity," and therefore, for example, become angry if mistreated. We ask the question, "How dare you do that to me?" That's consciousness!! I mean, people commit suicide! Shouldn't that be enough to convince you that human beings truly hurt at their core - enough to actually kill themselves! That is true pain. A computer will never self-destruct, ha ha! It's not aware in and of itself. Computers lack the ability even to make judgements upon themselves; human beings can (and do). Hmm... I hope that helped convey my point more specifically.
It's like seeing someone argue that computers work according to behaviorist psychology while humans work according to cognitive psych, with a deeper inner processing that allows humans to achieve dignity and 'true' emotion.

inimalist
Originally posted by Existere
It's like seeing someone argue that computers work according to behaviorist psychology while humans work according to cognitive psych

wow

impressively apt... I'm totally stealing that

Existere
Originally posted by inimalist
wow

impressively apt... I'm totally stealing that
Haha, thanks.

Bardock42
I do think that it is probably possible to create a computer that possesses something equivalent to what we call consciousness.

dadudemon
Originally posted by inimalist
yes



even if that is true, every aspect of what is called "self-awareness" has some correlate to neurological activity. If there is a soul, it has little to do with awareness, and would not prevent computers from being self-aware



this isn't true (except for us not having the know-how to make an aware robot now) for potentially two reasons

the first is that there are no quantum interactions at the neurological level that have a serious impact on awareness. Penrose's ideas are pretty much mythology in both the physics and neuroscience communities, and while some quantum effects might mediate ion channels in the neurons themselves, there is absolutely no reason that a quantum computer would be necessary, because there are no quantum phenomena that need to be accounted for

the second is that understanding awareness is not simply a problem of not having enough power. Human consciousness is based upon the interconnected nature of our neurology, and its constantly changing interconnectivity. At this point, it might be more accurate to describe the problem as one of cracking the neuro-code, that is, how patterns of activation represent coherent states of awareness, rather than just needing something with 10,000x the power.

We would need a significantly more powerful computer if we ever wanted to simulate human awareness through an artificial brain (with billions of artificial neurons), and that specifically might be made easier with a quantum computer, but with the constant advances in micro-processing, a super-computer of some kind may prove just as useful. Even then, we need more understanding of the higher areas (frontal and pre-frontal regions, parietal lobe, some of the temporal-cortical pathways) before simulating them will provide much more data than fMRI anyway.

This also assumes that we are building awareness in robots in the very same way it occurs in humans. Awareness, as humans describe it, is most likely an epiphenomenon of our linguistic ability and intense social interaction. For this reason, it is possible to say that our brain is not built specifically for "awareness", but is aware because of some of the things it was built for. Because of this, there are some weird experiments (optical illusions being the simplest example) where our awareness is terribly inefficient. It might even be the case that, when building aware robots, we need to avoid the human model, insofar as that is possible, so that our robots end up with a more elevated sense of awareness than we have.

However, since our concept of awareness is inherently anthropocentric, it is debatable whether that would qualify as awareness, or something new entirely


Dude, I totally thought up an outline for a P-Zombie that's practically real AI. I'm not even kidding. I briefly described it to Bardock. There's a very real reason why something like my outline will not be programmed: it would take an incredible number of people and an incredible amount of time to do. I estimated over 100 billion lines of code (almost an arbitrary number, but I used an existing "AI" program as a jumping-off point to estimate my full program size) required to write out the entirety of the program. The program would still have to be "cleaned" and "tested" to such an extent that it's pretty much impossible to complete without, literally, hundreds of thousands of programmers/debuggers.


Basically, it covers the very same items you mentioned and many others.


We have "technology" now to write and execute a program like that, with no problem. The problem is amassing the data for all of the objects (I created 5 tiers of objects: Hyper objects, Super Objects, Macro Objects, Sub-Objects, and Micro Objects (these are words I made up but it's easy to see it's a pyramid)) and creating those objects to work together, correctly. The method by which they interact would be very simple but the objects themselves are what have to be "figured out" first.


I COULD write it all out, one day. It's much too complicated and lengthy to type out, so I prefer to speak about it in detail instead of typing it out.
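
Purely as guesswork at what that pyramid might look like in code, here is a Python sketch of five nested tiers; the tier names come from the post above, and everything else is invented:

```python
# Five-tier containment pyramid: each tier is composed of the tier below it.
class Obj:
    def __init__(self, name, parts=None):
        self.name, self.parts = name, parts or []

    def size(self):
        return 1 + sum(p.size() for p in self.parts)

micro = [Obj("micro-object") for _ in range(3)]
sub   = Obj("sub-object", micro)
macro = Obj("macro-object", [sub])
sup   = Obj("super-object", [macro])
hyper = Obj("hyper-object", [sup])    # apex of the pyramid

print(hyper.size())   # 7 objects in this toy pyramid
```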

Bardock42
Originally posted by dadudemon
Dude, I totally thought up an outline for a P-Zombie that's practically real AI. I'm not even kidding. I briefly described it to Bardock. There's a very real reason why something like my outline will not be programmed: it would take an incredible number of people and an incredible amount of time to do. I estimated over 100 billion lines of code (almost an arbitrary number, but I used an existing "AI" program as a jumping-off point to estimate my full program size) required to write out the entirety of the program. The program would still have to be "cleaned" and "tested" to such an extent that it's pretty much impossible to complete without, literally, hundreds of thousands of programmers/debuggers.


Basically, it covers the very same items you mentioned and many others.


We have "technology" now to write and execute a program like that, with no problem. The problem is amassing the data for all of the objects (I created 5 tiers of objects: Hyper Objects, Super Objects, Macro Objects, Sub-Objects, and Micro Objects (these are words I made up, but it's easy to see it's a pyramid)) and getting those objects to work together correctly. The method by which they interact would be very simple, but the objects themselves are what have to be "figured out" first.


I COULD write it all out, one day. It's much too complicated and lengthy to type out, so I prefer to speak about it in detail instead of typing it out.

You did not mention that to me, I think; I'd be interested to hear it, though.

dadudemon
Originally posted by Bardock42
You did not mention that to me, I think; I'd be interested to hear it, though.

I did. I talked about a human-like AI system with you maybe a year or two ago. It might have been around the time we first started talking IRL to each other.

Bardock42
Originally posted by dadudemon
I did. I talked about a human-like AI system with you maybe a year or two ago. It might have been around the time we first started talking IRL to each other.

It is possible; I do not recall it, though.

Ian Wardell
Originally posted by inimalist
yes



even if that is true, every aspect of what is called "self-awareness" has some correlate to neurological activity. If there is a soul, it has little to do with awareness, and would not prevent computers from being self-aware



OK, I've just joined this place. No idea whether "killer movies" refers to great movies or movies featuring killers... but anyway...

The essence of a soul is in fact self-awareness. A soul is a self, in the sense of an experiencer who has experiences. A soul normally implies that such a self will survive death, and perhaps even be immortal.

The question I would like to ask you, inimalist, is why you think that souls cannot be aware.

Ian Wardell
Originally posted by inimalist
Because of this, there are some weird experiments (optical illusions being the simplest example) where our awareness is terribly inefficient.

You could scarcely be more wrong. So-called optical "illusions" help to illustrate that what we see is actually an implicit theory about the external world. I have a blog entry here (link deleted) which might be of interest.

If it were not for optical "illusions" we would see squares A and B as being the same colour. Indeed if we were unable to perceive optical illusions we wouldn't see reality anything like as we see it now -- we wouldn't even be able to see that the world is 3D! We wouldn't be able to negotiate our environment. Indeed, although we might have perfect vision, we really wouldn't be able to see all that terribly well.

Update: OK I've just found that I can't post a link to my blog as of yet!

OK on my blog I talk about this so-called "illusion"

Further update. I can't even upload an image or link to an image either! This is just absolutely useless. Can't see me making any more entries in this place! If people are interested just go and search for the checker-shadow illusion! mad

And to quote my blog:

I'm sure that all of us are astounded that the squares A and B are actually the same colour. It is the shadow cast over B by the cylinder which makes us think otherwise. What this suggests is a quite incredible illusion.

However I think there is a pervasive naivety about the nature of perception. Most of us doubtless feel that we see the external world directly. But we emphatically do not.

Consider a red rose. We think of a red rose as being the same colour throughout the day. However the light from the Sun reaching the Earth varies throughout the day. When the Sun is low in the sky, lots of blue light gets scattered away since the sunlight has to travel through a greater quantity of air. So if we were to passively see colours "as they really are", then the colour of our red rose would change throughout the day. Indeed the colour of all objects would change throughout the day. But in fact our rose seems to stay pretty much the same colour throughout the daylight hours. Why is this?

The answer lies in the fact that we do not in fact simply passively see what is out there. Rather the brain performs certain operations on the data coming through our senses and presents it to our consciousness in a form that we can make sense of. Everything we ever see is in fact a hypothesis about how the world is. Thus we have an implicit theory about the external world that it contains objects which have specific intrinsic colours. Hence the brain will perform those operations which ensure that objects do indeed appear to be the same colour throughout daylight hours.

This applies not just to colours, but everything we perceive through our 5 senses. In a way then everything we ever perceive is an illusion. But I think this is misleading.

Let's consider the "illusion" above again. If this were a real 3D object and we were to approach it and view it from various angles, then we would see that squares A and B are very different colours. Indeed their intrinsic colours would be precisely as we perceive them in the illusion above.

But in that case what justifies us in labelling it as an illusion? If this were a real object that we are seeing, then squares A and B are very different colours. Our senses are not deceiving us. Indeed if someone claimed to see the squares as being precisely the same colour, then it is doubtful that he could proficiently visually apprehend his environment.

This is not to say we never perceive illusions. Sometimes we seem to see something, but which on closer inspection turns out to be something else entirely. Or sometimes what we seem to visually see is not consistent with our other senses.
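
The colour-constancy correction described above can be sketched with the classic "gray world" trick: assume the average reflectance of a scene is gray, estimate the illuminant from the per-channel means, and divide it out. This Python sketch is a crude stand-in for whatever the visual system actually computes:

```python
import numpy as np

def gray_world(image):
    """image: H x W x 3 array of linear RGB under an unknown illuminant."""
    illuminant = image.reshape(-1, 3).mean(axis=0)      # per-channel mean
    balanced = image / illuminant * illuminant.mean()   # divide out the cast
    return np.clip(balanced, 0.0, 1.0)

# A reddish evening illuminant tints the whole scene; after correction,
# the rose's intrinsic colour is approximately recovered.
scene = np.random.rand(64, 64, 3) * np.array([1.0, 0.6, 0.4])
corrected = gray_world(scene)
```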

inimalist
Originally posted by Ian Wardell
Further update. I can't even upload an image or link to an image either! This is just absolutely useless. Can't see me making any more entries in this place! If people are interested just go and search for the checker-shadow illusion! mad

what a shame, we lost a real top-notch contributor here guys!

Ian Wardell
Originally posted by inimalist
what a shame, we lost a real top-notch contributor here guys!

I'm not necessarily saying I won't contribute again. I was in a mood about not being able to provide any links.

I will contribute here if any interesting discussions develop. I'm not really interested in "winning" arguments. I think these discussions should be a collaborative venture. We're all interested in seeking to answer questions such as whether we are mere sophisticated biological machines, or whether we are souls, and a whole host of other questions!

Symmetric Chaos
Originally posted by Ian Wardell
We're all interested in seeking to answer questions such as whether we are mere sophisticated biological machines, or whether we are souls, and a whole host of other questions!

"Mere" biological machines? Have you looked at the organs of any living thing? How about the Olympics? We're absolutely kickass biological machines.

inimalist
Originally posted by Ian Wardell
You could scarcely be more wrong. So-called optical "illusions" help to illustrate that what we see is actually an implicit theory about the external world. I have a blog entry here (link deleted) which might be of interest.

If it were not for optical "illusions" we would see squares A and B as being the same colour. Indeed if we were unable to perceive optical illusions we wouldn't see reality anything like as we see it now -- we wouldn't even be able to see that the world is 3D! We wouldn't be able to negotiate our environment. Indeed, although we might have perfect vision, we really wouldn't be able to see all that terribly well.

Update: OK I've just found that I can't post a link to my blog as of yet!

OK on my blog I talk about this so-called "illusion"

Further update. I can't even upload an image or link to an image either! This is just absolutely useless. Can't see me making any more entries in this place! If people are interested just go and search for the checker-shadow illusion! mad

And to quote my blog:

I'm sure that all of us are astounded that the squares A and B are actually the same colour. It is the shadow cast over B by the cylinder which makes us think otherwise. What this suggests is a quite incredible illusion.

However I think there is a pervasive naivety about the nature of perception. Most of us doubtless feel that we see the external world directly. But we emphatically do not.

Consider a red rose. We think of a red rose as being the same colour throughout the day. However the light from the Sun reaching the Earth varies throughout the day. When the Sun is low in the sky, lots of blue light gets scattered away since the sunlight has to travel through a greater quantity of air. So if we were to passively see colours "as they really are", then the colour of our red rose would change throughout the day. Indeed the colour of all objects would change throughout the day. But in fact our rose seems to stay pretty much the same colour throughout the daylight hours. Why is this?

The answer lies in the fact that we do not in fact simply passively see what is out there. Rather the brain performs certain operations on the data coming through our senses and presents it to our consciousness in a form that we can make sense of. Everything we ever see is in fact a hypothesis about how the world is. Thus we have an implicit theory about the external world that it contains objects which have specific intrinsic colours. Hence the brain will perform those operations which ensure that objects do indeed appear to be the same colour throughout daylight hours.

This applies not just to colours, but everything we perceive through our 5 senses. In a way then everything we ever perceive is an illusion. But I think this is misleading.

Let's consider the "illusion" above again. If this were a real 3D object and we were to approach it and view it from various angles, then we would see that squares A and B are very different colours. Indeed their intrinsic colours would be precisely as we perceive them in the illusion above.

But in that case what justifies us in labelling it as an illusion? If this were a real object that we are seeing, then squares A and B are very different colours. Our senses are not deceiving us. Indeed if someone claimed to see the squares as being precisely the same colour, then it is doubtful that he could proficiently visually apprehend his environment.

This is not to say we never perceive illusions. Sometimes we seem to see something, but which on closer inspection turns out to be something else entirely. Or sometimes what we seem to visually see is not consistent with our other senses.

whether I agree with specific points or not, your post just shows you have absolutely no idea what my overall point was.

If we have a soul that is responsible for our perception of the world, it is inherently flawed, in such a way that shows faulty design from a creator. Now, you point to something that isn't an illusion in the sense that it shows erroneous processing, fine. That doesn't mean there aren't numerous errors that show the limitations of the system. It is actually moot that some errors in perception can be a product of beneficial systems that allow us to navigate the world around us.

So, your post either further supports the point I was trying to make in that our perception is not a perfect representation of reality, or it makes a plug for a blog that is not relevant to the topic.

Parapsychology
Originally posted by inimalist
If we have a soul that is responsible for our perception of the world, it is inherently flawed, in such a way that shows faulty design from a creator.

What makes you think the only alternative belief to materialism is some sort of religious belief?

The topic is about whether a computer can become aware, not whether a God created it.

inimalist
Originally posted by Parapsychology
What makes you think the only alternative belief to materialism is some sort of religious belief?

The topic is about whether a computer can become aware, not whether a God created it.

?

because that was the topic we had moved to in the post he quoted, I was explaining the point I was making. Some threads, based on the nature of discussion, do move away from the initial question asked, and I believe someone was suggesting that the human soul meant that computers couldn't be sentient.

there is always the question of why any soul that is responsible for perception would be flawed based on the material qualities of our brains; if souls aren't physical themselves, you don't really have to assume there is a creator for this to be a problem for any theory of a soul.

ushomefree
Computers do not know that they are computers.

Bardock42
Originally posted by ushomefree
Computers do not know that they are computers.

That's a silly thing to say. What's the definition of "know"?

Parapsychology
Originally posted by Shakyamunison
If we could build a quantum computer, then it could become self-aware. With the technology we have now, no way.

If quantum mechanics is required to explain minds, then non-local phenomena increase, and human minds may have unconscious telepathic-like synchronicity with other minds over space and time.

If materialists want to deny parapsychological phenomena, they have to stick to classical computation, which I agree cannot create consciousness.

Parapsychology
Originally posted by Bardock42
That's a silly thing to say. What's the definition of "know"?

He is quite correct. Computers do not consciously understand anything.

Parapsychology
Originally posted by ushomefree
Symmetric Chaos-

Robots have already been developed in Japan (and I'm sure other parts of the world) that mimic human behavior, but these "human behaviors" are completely artificial. These robots merely respond to stimuli dictated by the programs written for them - nothing conscious about that! Don't you understand that machines, computers and robots only "behave" a certain way due to the programs/software written for them? That's not consciousness. How do you make the leap?

Correct, you understand the problem smile

Parapsychology
Originally posted by Symmetric Chaos
I don't see how this is necessarily different from humans. All emotions are a response to stimuli, unless you tend to start laughing or crying for no reason.

Computers do not have emotions... so that doesn't work.



It is irrelevant; we all experience feelings... do you believe your computer is having primitive feelings?

Parapsychology
Originally posted by ushomefree
I would venture to say "no." Nothing physical is capable of consciousness. Yes, technology (code) exists to program machines and computers to act upon stimuli/algorithms, but they will never understand why. Machines and computers simply do our bidding (without question). Never will a machine or computer ask for a vacation, speak of personal rights (dignity), and/or feel grief over the death of a loved one, for example, in the truest sense.

"Why should a bunch of atoms have thinking ability? Why should I, even as I write now, be able to reflect on what I am doing and why should you, even as you read now, be able to ponder my points, agreeing or disagreeing, with pleasure or pain, deciding to refute me or deciding that I am just not worth the effort? No one, certainly not the Darwinian as such, seems to have any answer to this.... The point is that there is no scientific answer." -Darwinist philosopher, Michael Ruse

Yes smile

Parapsychology
Originally posted by Symmetric Chaos
Categorically untrue. I'm terrible with computers and even I can write a VB program that can ask for vacation time and demand that it has personal rights.

But it doesn't know it is asking and it doesn't know what it is asking for... these are pure automations, goals set by programmers (not allowed in materialists' version of natural selection)



You are confusing imitation with conscious thinking.

Bardock42
Originally posted by Parapsychology
He is quite correct. Computers do not consciously understand anything.

That's just restating the claim.

What do you define as "knowing"? What is it in a human that makes it "know" something? How is it different from having a computer answer the question "what are you" and return the information "computer", which is easily possible? I could code that in a couple of minutes.

And if it is more complex, then define what it is and prove that a computer program cannot do it.
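
For what it's worth, the couple-of-minutes program really is about this short (a Python sketch; whether the lookup counts as "knowing" is exactly what's in dispute):

```python
# "Answers" the question by returning a stored string -- nothing more.
answers = {"what are you": "computer"}

def reply(question):
    return answers.get(question.strip("?! ").lower(), "I don't know")

print(reply("What are you?"))   # -> "computer"
```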

Parapsychology
Originally posted by Bardock42
What do you define as "knowing"?

Being aware of why one is doing something.


That is the great mystery... one isn't going to get it from classical computations. It is like claiming a light switch knows it is on. You can program a computer to display 'light is on' in pixels or speak 'light is on' through a speaker, but the computer has no awareness whatsoever that it is doing it.


As stated above, a computer is not aware of 1s or 0s; there is no fact in current physics or chemistry that gives any of this a meaning.

Conscious (human) observers understand what the computer is doing, not the computer.


Sir Roger Penrose, a mathematician and expert on quantum mechanics, has written a book, 'The Emperor's New Mind', on why it is unlikely using classical computations.

Of course materialists will dispute that. Like perpetual-motionists, they will argue that extra cogs, wheels, pulleys, levers, water weights, slopes, etc. will one day magically pop it out. But theoretically neither has any basis at these levels of processing.

Ian Wardell
Originally posted by Symmetric Chaos
"Mere" biological machines? Have you looked at the organs of any living thing? How about the Olympics? We're absolutely kickass biological machines.

It's irrelevant to the point I was making how "kickass" the machine is. The point is that if we are mere machines, then our lives have no purpose apart from the meaning we create for ourselves. And there are serious questions whether such a view is compatible with us being free agents. It is a spiritually bankrupt and reprehensible metaphysical hypothesis which is inconsistent with common sense, inconsistent with a philosophical analysis, and also inconsistent with the evidence suggesting anomalous cognition and "life after death".

King Kandy
Originally posted by Ian Wardell
It's irrelevant to the point I was making how "kickass" the machine is. The point is that if we are mere machines, then our lives have no purpose apart from the meaning we create for ourselves. And there are serious questions whether such a view is compatible with us being free agents. It is a spiritually bankrupt and reprehensible metaphysical hypothesis which is inconsistent with common sense, inconsistent with a philosophical analysis, and also inconsistent with the evidence suggesting anomalous cognition and "life after death".
Really? Because that actually seems much more motivational to me than some superbeing forcing us to follow its morals.

Ian Wardell
Originally posted by inimalist
whether I agree with specific points or not, your post just shows you have absolutely no idea what my overall point was.



I have, and had, no interest in your overall point. I felt it was important to correct your misunderstanding of optical "illusions".



I suppose a creator could have made us so that we have the ability to visually see all things with perfect vision wherever in the Universe they might be. Is this an argument against a creator? Or a soul? I don't understand your point.

inimalist
Originally posted by Ian Wardell
I have, and had, no interest in your overall point. I felt it was important to correct your misunderstanding of optical "illusions".

good show then...

ushomefree
INTELLIGENCE

[embedded YouTube videos: 0tRo6a4VhvU, kFgXEkzMq7A, qxbuysNGLOM]

the ninjak
Yes, AI will become self-aware in due time.

Quark_666
Neurons appear to be forever more efficient than wires. Until logic gates have six or eight "neurotransmitter" states, haha.... yeah, it ain't happening.

Colossus-Big C
Bump

Oliver North
Originally posted by Quark_666
Neurons appear to be forever more efficient than wires. Until logic gates have six or eight "neurotransmitter" states, haha.... yeah, it ain't happening.

hahaha

funny, one of the things that led people to discover the electro-chemical nature of neuronal communication was the fact that the system was less efficient than electrical wires were.

Digi
2.5 years later, and I still think my initial response to the OP is all the rationalization needed to answer the OP's question.

Originally posted by Digi
Sure. We're machines, albeit organic ones, and we have awareness.

Colossus-Big C
But then not everyone thinks we are machines erm

Until organic life can be built from scratch by technology.

Mindship
Colossus, what do you mean by self-aware? What would a computer experience as its "self," and how would it be aware of that?

red g jacks
Originally posted by Bardock42
That's just restating the claim.

What do you define as "knowing"? What is it in a human that makes it "know" something? How is it different from having a computer answer the question "what are you" and return the information "computer", which is easily possible? I could code that in a couple of minutes.

And if it is more complex, then define what it is and prove that a computer program cannot do it.
i'm not going to say it's impossible for a computer to do it

but i think this thought experiment highlights a difference between asking a computer a question and asking a human a question:

[embedded YouTube video: TryOC83PH1g]

the computer doesn't have to think about the answer. either it's programmed to answer that question or it isn't, and when it does answer, it's just returning a pre-defined variable that it waits to be prompted for.

in contrast, a human might not know how to answer a question (i.e. "what is the meaning of life") wink but we can still think about what the answer might be without ever necessarily arriving at a conclusion.

i don't know any neuroscience (disclaimer), so this is purely my opinion, but i feel like the process of thinking is part of what makes us consider ourselves aware. or more specifically, the fact that we actually experience thinking, that it's not just a series of processes which allow us to arrive at an answer.

Oliver North
that is a very "rose-coloured" interpretation of what it means for people to "think" about things.

I think the main issue people have is that it really is impossible for us to think about how complex a machine with billions of parts might be.

red g jacks
Originally posted by Oliver North
that is a very "rose-coloured" interpretation of what it means for people to "think" about things.
so of course i expected that, but could you be more specific?

that's true, but i'm not saying computers aren't complex.

Oliver North
Originally posted by red g jacks
so of course i expected that, but could you be more specific?

the process of considering things we don't have a full answer to isn't really that complex, and at a neuronal level may simply be a matter of competing activation of different memory traces related to the problem.

In this way, it might operate in a very similar way to Watson, the Jeopardy playing computer.

Originally posted by red g jacks
that's true, but i'm not saying computers aren't complex.

I was talking about the brain

Digi
Originally posted by Colossus-Big C
But then not everyone thinks we are machines erm

Sure. But I haven't heard a compelling reason why I'm wrong. Your definition of machine seems to be "computers or other metal things." We just have different kinds of parts. But the human body is essentially a complex machine.

Originally posted by Colossus-Big C
Until organic life can be built from scratch by technology.

Why is this a prerequisite for my idea? What would it prove? We are pieced together in a way that allows for self-awareness. We're made of the same things as the rest of the universe. There's nothing that makes our composition different from the rest of the universe around us. To suppose that self-awareness couldn't occur or be created using other parts is ridiculous.

Animals can be self-aware too. Several are quite intelligent. It doesn't take "human" to achieve consciousness.

....

Eventually, provided we don't destroy ourselves, I'm absolutely sure self-awareness will be created in a machine and this debate will seem silly. After the Uncanny Valley backlash and the philosophical question of rights, this argument will be as ridiculous as those in the Middle Ages about whether or not the Earth is the center of the universe.

red g jacks
Originally posted by Oliver North
the process of considering things we don't have a full answer to isn't really that complex, and at a neuronal level may simply be a matter of competing activation of different memory traces related to the problem.

In this way, it might operate in a very similar way to Watson, the Jeopardy playing computer.
i won't lie, i don't completely understand what "competing activation of different memory traces" means, but i appreciate the input.

i'm trying to consider what you're saying. the conceptual barrier i'm having trouble breaking through isn't necessarily rooted in the complexity of the processes involved. i'm not saying there's something so complex in processing the answers to vague questions that computers couldn't be designed to tackle them.

the main distinction i see is that when a person is asked a question, we reply with an answer that has some meaning to us. if you ask a person what they are and they say 'human', there's an intuitive concept attached to that response. with a computer like watson, if he answers that he's a computer, the word 'computer' is related to other fields in his database, but there's not any sort of conceptual meaning attached to any of them.


oh, i see

Oliver North
Originally posted by red g jacks
the main distinction i see is that when a person is asked a question, we reply with an answer that has some meaning to us. if you ask a person what they are and they say 'human', there's an intuitive concept attached to that response. with a computer like watson, if he answers that he's a computer, the word 'computer' is related to other fields in his database, but there's not any sort of conceptual meaning attached to any of them.

the thing is, all of that is based on memories and experiences that are activated when we are asked a question. Conceptual meaning is just a stored memory of an emotional or semantic component of stimuli.

When someone asks you a question, competing "traces", these stored memories, are activated, and the one that is most similar or related to the question is activated most strongly, and therefore chosen.

It almost seems like you are saying there is some natural knowledge that people have that could never be programmed into a computer. People are not born knowing they are human; it is something they need to learn before they can have any sort of reflection on it.

Additionally, you still seem to be arguing that there is something special about the way a brain processes data compared to a computer. The mechanism isn't the same, but information processing in humans is not this abstract thing that is free to consider all these things. It works on very mundane processes, and psychologists are able to manipulate the way people process data quite easily because there are biases the system produces. It is just an incomprehensibly complex machine.

red g jacks
i should clarify: i'm not saying it's impossible to program a computer for consciousness, and i'm also not saying that the processes the human brain utilizes are impossible to replicate in a computer. even though i posted that video with john searle's argument i don't ultimately share his position. i just think it's a valid contrast between human intelligence and current AI.

i'm ignorant towards the specifics of neuroscience so i lack that understanding of how exactly the human brain conceptualizes things. but i get what you're saying about us drawing on experience and stimuli to form these conceptions.

i guess at this point my question to you would be:

how well known are the mechanisms by which humans form, store & access the 'concepts' we associate with certain experiences?

what i'm really wondering, but i don't expect you to answer this, is how can we replicate it in machines? not specifically but generally speaking, what would be necessary for a computer so that when it encounters the term 'computer' it not only has a group of written characteristics which it associates with this term, but actually attaches some meaning to these characteristics which at least roughly parallels our own conception of the terms and which the computer can understand?

Digi
There's some interesting philosophy/pseudo-science that speculates that we could be living in a universal computer simulation. A few diehards are willing to put the chance at as much as 40%. It's more complicated than what I'm about to say, but in a nutshell...the idea is that once the technology is developed in a given universe to run universe simulations, there are quickly going to be far more computer-run universes than otherwise. And who is to say we are universe #1 in that chain?
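
For what it's worth, the counting logic behind that fits in a few lines (the branching factor and depth here are completely made-up assumptions, just to show the shape of the argument):

N, depth = 10, 4  # assume each universe runs 10 simulations, nested 4 levels deep
simulated = sum(N**k for k in range(1, depth + 1))  # 10 + 100 + 1,000 + 10,000
total = 1 + simulated  # the one "base" universe plus all the simulated ones
print(simulated / total)  # ~0.9999 on these made-up numbers

On numbers like those, nearly every universe in the chain is a simulated one, which is why proponents think the odds aren't negligible.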

Oliver North
Originally posted by red g jacks
how well known are the mechanisms by which humans form, store & access the 'concepts' we associate with certain experiences?

jeez, set the bar a little higher, eh? stick out tongue

the answer to that question depends on what you mean by "well" and which "concept" is in question. How well known are the mechanisms that give us the experience of "blue"? in some ways we know this down to a neuron-by-neuron level (it would be different for each person, but I mean this in terms of building a model of how things work). Something like, how well do we know where memory comes from? well, not as well, we can speak in broad terms about encoding and retrieval, we can point to clearly important neural structures and we have models that cover the issue very well, but there are still much more fundamental questions (for instance, while it is known that memories compete with each other to become the "winner", i.e., the thing we consciously remember, the exact mechanism is still unknown even though there are many models that talk about "gating" or lateral inhibition... I'll elaborate on that stuff if you are specifically interested).

I think what you want is a little bit above this though, isn't it? Like, you aren't talking about specific component pieces of consciousness, but the actual full conscious experience, no? And this is where it starts to get complex. Broken down, and imho (though, it would be the majority view in neuroscience), there is no real, single thing called "conscious experience". All of these individual systems seem to work in tandem to produce an experience that only feels united as a single thing, but is really not. Think of it sort of like this: The eye doesn't feed a continuous flow of information into the visual cortex; it feeds information at roughly ~25 "frames" per second. However, we experience a flow of visual information over time that is not broken down into smaller parts. That is because the way visual information is processed puts it into this united perceptual experience, but it is really based on unique discrete moments of visual information.
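
crudely, in code (the numbers and the averaging here are stand-ins for illustration, not the actual mechanism):

frames = [0.0, 0.2, 0.9, 0.8, 0.3, 0.1]  # discrete "frames" of some visual signal

def percept(samples, window=3):
    # a moving average stands in, very crudely, for the temporal
    # integration that turns discrete input into a smooth-feeling flow
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

print(percept(frames))  # smoothed values rather than discrete jumps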

Human consciousness and awareness works in much the same way. This feeling we get of a whole united experience is really just a by-product of how our individual systems process information. The best evidence of this comes from when people have deficits in various systems, such as amnesia or blindness. The "united" experience is not altered, though the person displays very specific deficits in the injured systems. As far as all the research up to this point shows, there is no "consciousness system" responsible for uniting these individual systems, but rather, our conscious experience is produced by the simultaneous functioning of the individual systems.

The closest thing might be what is called the "narrator", located in the left hemisphere of the brain. It is thought to create a linguistic narrative of the world based on immediate sensory context and relevant stored memory or emotional content related to those sensory contexts. Basically, at any given moment, it is going "this is what I see, this is what I was planning to do, this is what I remember about where I am.... etc" and puts it into a story: "I am at school today because I have class every morning". What is amazing is how messed up and literally gibberish these narratives can become if you restrict the flow of information to the left hemisphere. It isn't right to call the narrator a "consciousness center" because it is possible to be "aware" of things the narrator does not include in its narrative, but in terms of what I think you are getting at, reproducing this in a computer might give you what you are looking for.
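
if you want the flavour of that in code, something like this toy narrator (all the percepts and memories here are invented):

def narrate(percepts, memories):
    # stitch immediate sensory context to whatever stored content can be
    # retrieved for it, producing a running story
    lines = []
    for p in percepts:
        recalled = memories.get(p, "I have no idea why")
        lines.append("I see " + p + "; " + recalled)
    return ". ".join(lines) + "."

memories = {"a classroom": "I am at school because I have class every morning"}
print(narrate(["a classroom"], memories))
# cut off the narrator's access to stored content (an empty dict) and the
# "story" degrades, a bit like the gibberish narratives I mentioned
print(narrate(["a classroom"], {}))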

Originally posted by red g jacks
what i'm really wondering, but i don't expect you to answer this, is how can we replicate it in machines? not specifically but generally speaking, what would be necessary for a computer so that when it encounters the term 'computer' it not only has a group of written characteristics which it associates with this term, but actually attaches some meaning to these characteristics which at least roughly parallels our own conception of the terms and which the computer can understand?

So, humans have an area in their brain called the amygdala. It receives sensory information and, to be very general, analyzes it for emotional content. What this means is that, when you see, hear, touch... etc, something, that information goes to the amygdala where it communicates with stored memories about emotional reactions to previous stimuli. Basically, when you see something that looks like your mother, the amygdala adds the emotional part of that experience.

What is very interesting is that, if information can no longer get to the amygdala, people present extremely bizarre behaviours. so, in the above mentioned example, if you saw someone who looked like your mother, but that information could not get to the amygdala, you would be convinced, convinced, that this person was an imposter. This is because your narrator cannot access the emotional content and essentially goes "well, it looks like her, but I don't feel anything, therefore it can't be her, therefore it isn't". This condition can be so pervasive that sufferers have been known to murder a parent to try and prove that they were a robot (I know that sounds weird, but that conclusion actually makes logical sense given the type of stimuli input the individual had; they know it isn't their parent, therefore how do you explain it? robot, clone, imposter... and then when it looks like they've fooled everyone else, or people tell you that you are crazy... etc - the narrative builds from there).

So, information flowing into the amygdala can be restricted selectively, meaning that audio information from the ears can still reach it, but visual information cannot (or vice versa). In this situation, a person could not recognize their parent by sight (and because we are visually dominant, even if they spoke), but if the parent called them from a different room, they would recognize the voice (the audio information still getting to the amygdala), and that context would allow them to then recognize the parent if they entered the room, because that "emotional context" carries over and informs the narrator.
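
the logic of that deficit, in toy form (the routing table and the outputs are invented for illustration):

def recognize(looks_like_mother, channel, amygdala_route_open):
    # recognition needs BOTH a perceptual match and the emotional signal
    # the amygdala adds; without the latter, the narrator says "imposter"
    if not looks_like_mother:
        return "a stranger"
    if amygdala_route_open[channel]:
        return "my mother"
    return "an imposter who looks exactly like her"

routes = {"visual": False, "audio": True}  # visual input can't reach the amygdala
print(recognize(True, "visual", routes))   # -> an imposter who looks exactly like her
print(recognize(True, "audio", routes))    # -> my mother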

another thing to keep in mind is that brains and computers serve fundamentally different purposes and were designed for totally different reasons. the brain evolved to act in the wild, and we still have structures as ancient as the brain itself buried deep within our lobes. Our emotional systems, our perceptual systems, our narrator, these all evolved not to be "self-aware" or whatever, but to facilitate our action in an environment, self-awareness being a by-product that itself might play a beneficial role in survival. Computers aren't designed to mimic this process; they are massive number crunchers. In fact, unless we dramatically change how CPUs work at a physical level, self-awareness will be an issue of software, not of the computer itself. The software might be aware that it is a computer (because it simulates human neurology) but the RAM and such will still just be crunching 1s and 0s.

Astner
Oliver's incoherent rambling aside. From what I understand from one of my former classmates, who has taken two advanced courses in intelligent system design (artificial intelligence) and is currently writing his Master's thesis on the subject:

The consciousness is a facet of a complex of electrochemical signals conditioned by evolution.

That said, our consciousness is very inefficient in contrast to the hardware of the human brain, and not something desirable in machines, which are tailor-made for efficiency.

Robtard
This guy is:

http://upload.wikimedia.org/wikipedia/en/thumb/0/09/DataTNG.jpg/250px-DataTNG.jpg

So by the 24th century, we will.

Oliver North
Originally posted by Astner
Oliver's incoherent rambling aside.

oh?

what didn't you understand?

Originally posted by Astner
The consciousness is a facet of a complex of electrochemical signals conditioned by evolution.

lol, as is every property of the brain...

wait, it couldn't be that you don't have a nuanced understanding of human neuroscience, could it?

Astner
Originally posted by Oliver North
oh?

what didn't you understand?
I understood it all, it was just that most of it wasn't on point.

Originally posted by Oliver North
lol, as is every property of the brain...
No. Plenty of differences in activity and behavior can be tracked down to specific regions of the brain. The limbic system for instance functions differently than the cerebral cortex.

That aside, other properties of the brain are manifested in the form of structure. There's far more to a brain cell than firing electrochemical signals.

Originally posted by Oliver North
wait, it couldn't be that you don't have a nuanced understanding of human neuroscience, could it?
More so than you I'd wager based on what you posted previously as well as this current reply I'm quoting.

TheGodKiller
Originally posted by Digi
There's some interesting philosophy/pseudo-science that speculates that we could be living in a universal computer simulation. A few diehards are willing to put the chance at as much as 40%. It's more complicated than what I'm about to say, but in a nutshell...the idea is that once the technology is developed in a given universe to run universe simulations, there are quickly going to be far more computer-run universes than otherwise. And who is to say we are universe #1 in that chain?
It's crank transhumanism, but the premise is quite interesting. A similar point is raised in this video (from 11:00-12:00) as well:
tBKYVwQl3RQ&list=PL41A2FEA2CD25205D&index=13&feature=plcp

Digi
Originally posted by TheGodKiller
It's crank transhumanism, but the premise is quite interesting.

I'm not sure "crank" is the right word, since it implies obvious falsehood. Though I'm sure some who ascribe to it as a belief would have a tendency to stretch the limits of science to suit themselves.

But I tend to look at it as lacking evidence one way or another, and possibly unfalsifiable, and thus it's not worth serious consideration. Doesn't mean it isn't possible, we just don't know enough for it to be a justifiable belief. Like Russell's teapot in space, or any number of pseudo-scientific or supernatural/spiritual theories that exist in today's world.

I agree that it's an interesting premise, though. If nothing else, it's some interesting philosophical fodder for Matrix fans. And of course, such a theory provides another potential answer to the OP here, which is why I bring it up.

Oliver North
Originally posted by Astner
I understood it all, it was just that most of it wasn't on point.


No. Plenty of differences in activity and behavior can be tracked down to the specific level of the brain. The limbic system for instance functions differently than the cerebral cortex.

That aside, other properties of the brain are manifested in form of structure. There's far more to brain cell than firing electrochemical signals.


More so than you I'd wager based on what you posted previously as well as this current reply I'm quoting.

... I'm actually a practicing neuroscientist... so, idk, /shrug

EDIT: actually, let me extend the challenge, what did I say that was incorrect?

Robtard
This should be good

*chews popcorn*

Astner
Originally posted by Oliver North
... I'm actually a practicing neuroscientist... so, idk, /shrug
Is that also why you said that every property of the brain is a consequence of electrochemical reactions, and then sidestepped the explanation to the contrary without conceding anything?

Originally posted by Oliver North
EDIT: actually, let me extend the challenge, what did I say that was incorrect?
I didn't say that it was incorrect. I said that it was incoherent, as in irrelevant and not on point. Now, I could definitely nitpick a few of the gems in your post, such as the claim that the processing of information picked up by the eye observing a blue object differs significantly from person to person, which is outright wrong. But like your post, it's not on topic. Besides, I'll save myself the mind-numbing experience of explaining the basics of a field you supposedly practice.

Robtard
"I could have totally kicked your ass, but I won't. So you're lucky, pal!"

Oliver North
Originally posted by Astner
Is that also why you said that every property of the brain is a consequence of electrochemical reactions, and then sidestepped the explanation to the contrary without conceding anything?

oh, you parsed that in the wrong way, I meant all properties of the brain (consciousness, emotions, perception, etc) were a product of evolution

additionally, your point about localization of function is unrelated unless you are trying to say that in some parts of the brain neurons work using action potentials and in others they don't

and no, at a neuronal level, the brain is just electrochemical signals (among other things that control ion pathways). what you might be suggesting is that the way these neurons arrange and communicate with each other is more important when describing the cause of behaviour, which I wouldn't disagree with

Originally posted by Astner
such as the claim that the processing of information picked up by the eye observing a blue object differs significantly from person to person, which is outright wrong.

... the statement was about the neuron-to-neuron understanding, contrasting how well vision is known compared to memory. In this case, while the structure of the geniculo-striate pathway is hugely consistent at a level above individual neurons, as are the connections among the earliest levels of V1, individuals show huge differences at the neuronal level... Like, just out of curiosity, do you even know what Talairach space is? I mean without Google...

Yes, you and I have very similarly structured orientation pinwheels or CO blobs in the first layer of V1, however, they are not wired identically, which is literally what I said before.

I could also be pedantic and bring up that colour perception is not simply performed by the visual cortex, but requires access to stored experiences, which will differ massively between people.

Originally posted by Astner
But like your post, it's not on topic. Besides, I'll save myself the mind-numbing experience of explaining the basics of a field you supposedly practice.

? it seemed pretty related to storing and accessing concepts to me, and where self-awareness comes from

do you have a better suggestion?

Oliver North
Originally posted by Robtard
"I could have totally kicked your ass, but I won't. So you're lucky, pal!"

nono, come on, give him a chance

I always love hearing physicists talk about how well they understand the brain... And I mean he got his info from someone who took some courses in artificial intelligence...

Astner
Originally posted by Oliver North
oh, you parsed that in the wrong way,
No, that's how you phrased it.

Originally posted by Oliver North
I meant all properties of the brain (consciousness, emotions, perception, etc) were a product of evolution
You don't say? You actually mean to tell me that evolution, which took us from replicating polymers to what we are now, is responsible for our brain's structure?!

Sarcasm aside, this is exactly what I meant by the mind-numbing experience of having to deal with basics the average high schooler could explain.

Originally posted by Oliver North
additionally, your point about localization of function is unrelated unless you are trying to say that in some parts of the brain neurons work using action potentials and in others they don't
If you're going to challenge me on a topic, don't refute your own arguments with disclaimers.

Originally posted by Oliver North
and no, at a neuronal level, the brain is just electrochemical signals (among other things that control ion pathways). what you might be suggesting is that the way these neurons arrange and communicate with eachother is more important when describing the cause of behaviour, which I wouldn't disagree with
Again with the damn disclaimers.

Originally posted by Oliver North
... the statement was about the neuron-to-neuron understanding, contrasting how well vision is known compared to memory. In this case, while the structure of the geniculo-striate pathway is hugely consistent at a level above individual neurons, as are the connections among the earliest levels of V1, individuals show huge differences at the neuronal level...
Yes, now you're referring to the difference in experience while processing that information. If you, as a child, were beaten by your father in a blue room then your brain activity is going to differ from someone else's not sharing that past experience when you're observing a blue object.

Originally posted by Oliver North
Like, just out of curiosity, do you even know what Talairach space is? I mean without Google...
Yes, it's a coordinate space which I first came across in an example in my multivariable analysis course, at the first year of my program. Do you want me to explain it for you?

Originally posted by Oliver North
I could also be pedantic and bring up that colour perception is not simply performed by the visual cortex, but requires access to stored experiences, which will differ massively between people.
Which, once again, is irrelevant.

Originally posted by Oliver North
? it seemed pretty related to storing and accessing concepts to me, and where self-awareness comes from

do you have a better suggestion?
No, self-awareness comes from the external processing of information, which again is the consequence of the consciousness.

Digi
Those aren't disclaimers. He's trying to account for multiple possible interpretations of a phrase, in order to more fully explain his position and understand yours. Debate doesn't have to be constantly antagonistic. Disclaimers usually imply a weakening of one's position. What he's doing is disambiguation. Correct me if I'm wrong, but Oliver hasn't backed down from any of his points.

Don't mistake this for a rebuttal of your entire post. I enjoy hearing about science from those who presumably understand it better than I do. Even if there's disagreement among individual points, this isn't my subject.

Astner
Originally posted by Digi
Those aren't disclaimers. He's trying to account for multiple possible interpretations of a phrase, in order to more fully explain his position and understand yours.
Then it shouldn't be phrased "you're wrong... unless you mean...", but rather in the more inquisitive form "what is it exactly that you mean?"

Originally posted by Digi
Debate doesn't have to be constantly antagonistic. Disclaimers usually imply a weakening of one's position. What he's doing is disambiguation.
He called it a challenge, not a debate.

Originally posted by Digi
Correct me if I'm wrong, but Oliver hasn't backed down from any of his points.
He has corrected at least one of his points.

Originally posted by Digi
Don't mistake this for a rebuttal of your entire post. I enjoy hearing about science from those who presumably understand it better than I do. Even if there's disagreement among individual points, this isn't my subject.
The debate has little to no relevance to the thread. Both he and I share the position that it is possible, but not practical, to design a conscious machine. What he's doing now is defending his ego.

Digi
Off-topic discussion, if interesting, isn't necessarily bad.

But again, a statement like this:
and no, at a neuronal level, the brain is just electrochemical signals (among other things that control ion pathways). what you might be suggesting is that the way these neurons arrange and communicate with each other is more important when describing the cause of behaviour, which I wouldn't disagree with
...isn't changing one's stance. It's making the position more nuanced so you know what he's talking about, and what he does/doesn't agree with.

Even in a "challenge" there needs to be no winner. The point is, or should be, working toward a shared understanding, not deciding who conceded in the eyes of the other. I don't have a rooting interest in your debate, nor the requisite knowledge to join it, but your tone leaves much more to be desired.

Oliver North
Originally posted by Astner
No, that's how you phrased it.

lol, ah, the good old "I know what you meant" argument

stay classy Astner

Originally posted by Astner
You don't say? You actually mean to tell me that evolution, which took us from replicating polymers to what we are now, is responsible for our brain's structure?!

Sarcasm aside, this is exactly what I meant by the mind-numbing experience of having to deal with basics the average high schooler could explain.

to point out, it was you who originally thought it was prudent to talk about consciousness as a product of evolution, but cool, yes, I do think pointing that out would suggest someone has the understanding of a high schooler. why did you point it out then?

Originally posted by Astner
If you're going to challenge me on a topic, don't refute your own arguments with disclaimers.

disclaimer... what.. hold on...

Originally posted by Astner
Then it shouldn't be phrased "you're wrong... unless you mean...", but rather in the more inquisitive form "what is it exactly that you mean?"

lol, seriously? oh I'm sorry I offended your sensitivities...

ok, how is this: what exactly do you mean by:

Originally posted by Astner
The limbic system for instance functions differently than the cerebral cortex.

because that statement is incorrect. They perform different functions because of different inputs and outputs, they function in the same way. The differences between limbic and cortex neurons wouldn't be meaningful at all.

Originally posted by Astner
Again with the damn disclaimers.

oh princess, I'm sososo sorry... you totally are forgiven for totally dodging the last points I've made because you thought I worded them improperly.

let me try again: how does the brain function if not through electrochemical signals? It seems like you are confusing structure and function. All neurons work through essentially the same principles, they are structured in groups in different ways.

Originally posted by Astner
Yes, now you're referring to the difference in experience while processing that information. If you, as a child, were beaten by your father in a blue room then your brain activity is going to differ from someone else's not sharing that past experience when you're observing a blue object.

actually, no, I haven't changed what I'm referring to at all. The type of emotional and memory related content you are talking about would have little, if any, impact on incoming information in the geniculo-striate pathway. All I've ever been talking about was the difference in the level of understanding between perceptual systems, namely vision, where we understand things down to the level of neuronal architecture, versus something like memory, where we speak more in broad terms of local structures or "distributed networks".

Additionally, what you have done with that example works against the reason I was talking about vision and memory separately in the first place. red g was interested in how components of awareness are stored, and how well we knew these things. My response was to deconstruct awareness into its component parts so the question is actually answerable. The fact remains, even though "blue" is contained within the experience you described above, it is not being processed by the same systems that processed the emotional memories associated with it, and it is really only at the level of the narrator that we have any evidence of these systems being brought together in a unified experience.

that was actually the entire point of the first part of that post... but I'll touch on you not understanding what I'm writing a bit later.

Originally posted by Astner
Yes, it's a coordinate space which I first came across in an example in my multivariable analysis course, at the first year of my program. Do you want me to explain it for you?

actually, ya, I would.

If you could include some justification for saying both that we have identical colour processing at a neuronal level and that brains have to be put in a standardized coordinate space to compare them at a neuronal level, I'd love to hear it. Because to me, those things seem at odds, and I think it makes you wrong again.

Originally posted by Astner
Which, once again, is irrelevant.

the point, which you brought up 2 quotes ago, is now irrelevant?

well, at least your arguments aren't getting harder to answer...

Originally posted by Astner
No, self-awareness comes from the external processing of information, which again is the consequence of the consciousness.

see, this is what I mean by you not really understanding...

what is self-awareness then? How would I look at the brain and tell which part of what is currently in consciousness is the self-aware part?

like, wtf? external processing? are you trying to argue some kind of dualism? Like, ya, this sentence is literally mumbo jumbo, like if I tried to describe quantum mechanics using the fundamental forces of water, air and earth.

Originally posted by Astner
He called it a challenge, not a debate.

actually, I challenged you to point out something I got wrong, which you have not done; you actually suggested I wasn't wrong.

Originally posted by Astner
He has corrected at least one of his points.

no I haven't

Originally posted by Astner
What he's doing now is defending his ego.

dont flatter yourself

the opinions of a layman don't really irk me...

Omega Vision
Meta-debating ftw, amirite, Astner?

Oliver North
Originally posted by Oliver North
because that statement is incorrect. They perform different functions because of different inputs and outputs, they function in the same way. The differences between limbic and cortex neurons wouldn't be meaningful at all.

d'oh, here Astner, I'll give you some low hanging fruit:

while my broad point here, that the differences between the limbic system and cortex come from the arrangement of inputs/outputs and such, is still true, the last sentence about there being no meaningful differences between the neurons oversells the point.

for instance, the hippocampus, sort of part of the limbic system (depends on who you ask mostly) has neurons specialized for more complex grouping and neurogenesis. they function in the exact same manner as other neurons (neurotransmitters, axons, action potentials, etc) but in the end, it isn't fair to say there are no meaningful differences.

now, Astner, in terms of the level you were talking about (structures rather than neurons) this is irrelevant, and I apologize for making you read something not directly related to the OP, but I figured I'd clarify, because this is something I know you were going to pick up on and nail me for, right?

just needed to fix this to settle my own anxious neuroses.

Mindship
At least we don't have to worry about microtubules. cool


Btw, Colonel, I saw you on the tele the other day. You don't be lookin' like your avatar anymore.

Oliver North
Originally posted by Mindship
At least we don't have to worry about microtubules. cool

amen to that

though, cytoarchitectonic differences between neurons would be even less relevant to Astner's point than would the neuronal specialization I mentioned above, but ya, if we are talking about differences in dendritic spines or neurotransmitter receptors, I don't think any two neurons would be the same.

Originally posted by Mindship
Btw, Colonel, I saw you on the tele the other day. You don't be lookin' like your avatar anymore.

haha, ya, thats his mug shot smile

also, doesn't it sort of irk you that this confessed war criminal now gets a cushy job as a media pundit. Fox used to have a program called "war stories with oliver north"... Like, Orwell couldn't write it that well.

Mindship
Originally posted by Oliver North
cytoarchitectonic

If I ever drop 'Mindship', dibs on this.


Originally posted by Oliver North
haha, ya, thats his mug shot smile

also, doesn't it sort of irk you that this confessed war criminal now gets a cushy job as a media pundit.

Honey Boo Boo makes me immune to such things.

movie1
I've heard that computers will have the capacity of a human brain by 2020. not sure if this means we'll be close to having a conscious computer.

Dolos
Originally posted by movie1
I've heard that computers will have the capacity of a human brain by 2020. not sure if this means we'll be close to having a conscious computer.

Not even close, maybe by 2065 or 2070.

kJDvdEQJOew



-Source

Very very possible, it's the key to immortality.

Oliver North
that isn't even a relevant question

there are tasks a computer is already trillions of times better at than is the human brain, and, unless we radically redesign the way computers work (which would likely reduce their functionality as a tool) there are things the human brain is and will be better at.

The "computer analogy" for describing the brain only has relevance at the most surface and peripheral levels. In terms of how the brain works at a mechanistic level, it is nothing like a computer.

BTW - the author of that article is a lawyer with no relevant training in anything to do with the brain, or technology it seems, and nearly identical things can be said of Kurzweil. Amazing how untrained amateurs come to radically different conclusions than do trained professionals.

Dolos
Originally posted by Oliver North
that isn't even a relevant question

there are tasks a computer is already trillions of times better at than is the human brain, and, unless we radically redesign the way computers work (which would likely reduce their functionality as a tool) there are things the human brain is and will be better at.

The "computer analogy" for describing the brain only has relevance at the most surface and peripheral levels. In terms of how the brain works at a mechanistic level, it is nothing like a computer.

BTW - the author of that article is a lawyer with no relevant training in anything to do with the brain, or technology it seems, and nearly identical things can be said of Kurzweil. Amazing how untrained amateurs come to radically different conclusions than do trained professionals.

You know nothing about Ray Kurzweil then.

Anyway, despite Ray Kurzweil's superior knowledge on the subject matter to any other human alive today...computers being smarter than people, combined with nanoscale electronics provided by future molecular assembly, will allow human beings to replace every organ and every organic part of our bodies with microscopic computer chips that perform functions superior to those of their organic counterparts: ergo, a human will be a self-aware computer, unless suddenly there's a nuclear holocaust in the near future.

Cyner
Originally posted by Dolos
You know nothing about Ray Kurzweil then.

Anyway, despite Ray Kurzweil's superior knowledge on the subject matter to any other human alive today...computers being smarter than people, combined with nanoscale electronics provided by future molecular assembly, will allow human beings to replace every organ and every organic part of our bodies with microscopic computer chips that perform functions superior to those of their organic counterparts: ergo, a human will be a self-aware computer, unless suddenly there's a nuclear holocaust in the near future.

But... why would we want to do that when we could pack nearly infinite data into DNA and improve the performance of the human body directly instead of replacing parts?

Oliver North
Originally posted by Dolos
You know nothing about Ray Kurzweil then.

he does study human memory then?

where can i find some of his publications?

Symmetric Chaos
Originally posted by Oliver North
that isn't even a relevant question

there are tasks a computer is already trillions of times better at than is the human brain, and, unless we radically redesign the way computers work (which would likely reduce their functionality as a tool) there are things the human brain is and will be better at.

I disagree that redesigning computers to work more like our brains would reduce their functionality as tools. At a minimum a human-like computer would be able to very rapidly interface with a traditional computer to perform calculations and access an arbitrarily large amount of specific information. The combination would be a vastly better tool than a normal computer.

Astner
Originally posted by Oliver North
he does study human memory then?

where can i find some of his publications?
Encyclopedia Britannica? Don't tell me you were taught to use Google to look for scientific articles.

Oliver North
Originally posted by Symmetric Chaos
I disagree that redesigning computers to work more like our brains would reduce their functionality as tools. At a minimum a human-like computer would be able to very rapidly interface with a traditional computer to perform calculations and access an arbitrarily large amount of specific information. The combination would be a vastly better tool than a normal computer.

I was thinking more in terms of memory storage and retrieval. Certainly, a computer would be better at certain things if it associated and stored things the way our long-term memory does, but we rely on quick and accurate retrieval of data from a computer that is very much unlike human memory (ie: it would be unhelpful if the computer's mood at the time of retrieval changed the nature of the items stored in its memory), whereas our survival requires us to use immediate context to shape how we use stored information.
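
roughly the contrast I mean, in toy form (the stored item and the mood-weighting are invented for illustration):

store = {"meeting": "the meeting went fine"}

def computer_recall(key):
    # exact, context-free retrieval: you get back precisely what was stored
    return store[key]

def human_recall(key, mood):
    # retrieval shaped by current context: the same stored item comes back
    # coloured by the mood at retrieval time
    tint = "it feels tense in hindsight" if mood == "anxious" else "it feels fine"
    return store[key] + ", and now " + tint

print(computer_recall("meeting"))          # -> the meeting went fine
print(human_recall("meeting", "anxious"))  # -> the meeting went fine, and now it feels tense in hindsight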

I hadn't specifically thought of interfacing directly between two types of systems, I'm not sure I see the immediate advantage, unless you are saying that the advantages of both are just amalgamated into one system?

I suppose a computer that could infer our context and needs and reproduce saved data in terms of that would be helpful, but that is almost like psychic technology.

Originally posted by Astner
Encyclopedia Britannica? Don't tell me you were taught to use Google to look for scientific articles.

hi Astner

you are a ****wad

good to hear from you again

Symmetric Chaos
Originally posted by Oliver North
I hadn't specifically thought of interfacing directly between two types of systems, I'm not sure I see the immediate advantage, unless you are saying that the advantages of both are just amalgamated into one system?

I suppose a computer that could infer our context and needs and reproduce saved data in terms of that would be helpful, but that is almost like psychic technology.

That's basically what I'm referring to. I wouldn't call it psychic though since it would be like a human making inferences, conceivably more accurate, and massively faster than any person could be. Presumably if we knew enough about the mind/brain to program it in a computer with perfect accuracy, we would also know enough to model a slightly different version.

Oliver North
Originally posted by Symmetric Chaos
That's basically what I'm referring to. I wouldn't call it psychic though since it would be like a human making inferences, conceivably more accurate, and massively faster than any person could be. Presumably if we knew enough about the mind/brain to program it in a computer with perfect accuracy we would also know enough model a slightly different version.

fair enough

there would still be issues relying on aspects of how humans interpret context, but I get what you are saying. It would vastly improve computers because not only would they be able to retrieve what we want, but also present it how we want it, based on what we are thinking about.

Although, because each person's memory storage would be unique, this would require each person to have a specific computer for their own use, or some type of universal adapter...

interesting concept though

Dolos
Originally posted by Cyner
But... why would we want to do that when we could pack nearly infinite data into DNA and improve the performance of the human body directly instead of replacing parts?

Because, unlike cells, microscopic machines do not need DNA, and are not composed of DNA. They are composed of nanoscale silicon parts which, unlike DNA, can take any form and perform superior functions. Otherwise you're talking genetic engineering, and unlike technology, biology has limits, whereas miniaturized AI has exponential capabilities; take the theorized quantum computer, for instance. You think microscopic computers being smarter than Einstein because of increased information in decreasing space is far out? Quantum computers are way beyond that. Even extremely evolved humans, even self-improved ones, would be very limited by comparison, because organic matter is very primitive compared to pure information on smaller and smaller, and eventually quantum, scales.

Tor__Hershman
Originally posted by Digi
Sure. We're machines, albeit organic ones, and we have awareness.
You state it well, Digi.

753
but we're not machines, although everything about our inner workings is bound by physical law. when people think about computers gaining consciousness or self-awareness, they are usually talking about silicon chips somehow emulating what the cellular network known as the brain can do, without an actual molecular replica of it.

this argument is built upon the assumption that the epiphenomenon of consciousness can be replicated independently of the specific underlying physical processes that actually generate it. therefore electronic circuitry of a completely different material nature and organizational dynamic could generate the same epiphenomenon, and software could be a mind.

I have yet to see anything even remotely approaching reasonable evidence for this belief, the so-called multi-realizability, which IMO is nothing but sci-fi fantasy

computers as understood by us will never gain sentience (that's conscious sensations like pain, images and smells, not sci-fi personhood, btw) or metacognition.

Dolos
Originally posted by 753
but we're not machines, although everything about our inner workings is bound by physical law. when people think about computers gaining consciousness or self-awareness, they are usually talking about silicon chips somehow emulating what the cellular network known as the brain can do, without an actual molecular replica of it.

this argument is built upon the assumption that the epiphenomenon of consciousness can be replicated independently of the specific underlying physical processes that actually generate it. therefore electronic circuitry of a completely different material nature and organizational dynamic could generate the same epiphenomenon, and software could be a mind.

I have yet to see anything even remotely approaching reasonable evidence for this belief, the so-called multi-realizability, which IMO is nothing but sci-fi fantasy

computers as understood by us will never gain sentience (that's conscious sensations like pain, images and smells, not sci-fi personhood, btw) or metacognition.

Agreed, but if one uploads their own "metacognition" through transhuman processes, we will change what it is to be self-aware as we change what it is to be human.

Symmetric Chaos
Originally posted by 753
I have yet to see anything even remotely approaching reasonable evidence for this belief, the so-called multi-realizability, which IMO is nothing but sci-fi fantasy

I'd point to the whole history of machines.

Time can be kept on a sun dial, a grandfather clock, and a digital watch. Obviously many processes are multi-realizable (actually I can't think of any that aren't). Why should consciousness be an exception?
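
In programming terms it's one capability with many implementations. These three "mechanisms" are toy stubs, obviously, but the point survives:

class Sundial:
    def hour(self, shadow_angle_deg):
        return int(shadow_angle_deg / 15) % 24         # 15 degrees of shadow per hour

class GrandfatherClock:
    def hour(self, pendulum_swings):
        return (pendulum_swings // 3600) % 24          # one swing per second

class DigitalWatch:
    def hour(self, crystal_ticks):
        return (crystal_ticks // (32768 * 3600)) % 24  # 32,768 Hz quartz crystal

# three different physical substrates realizing the same process
print(Sundial().hour(180), GrandfatherClock().hour(43200), DigitalWatch().hour(32768 * 43200))
# -> 12 12 12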

Oliver North
In theory, even if it were impossible to build a conscious machine out of non-organic parts, the mechanisms could be simulated on a sufficiently powerful computer. With enough processing power, there is nothing that would stop a program from being identical to the human brain, down to a molecular level if need be.
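
the standard toy version of "simulate the mechanism" is something like a leaky integrate-and-fire neuron (the parameters here are illustrative, not biologically fitted):

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    # integrate incoming current, leak a little charge each step,
    # fire and reset when the threshold is crossed
    v, spike_times = 0.0, []
    for t, current in enumerate(inputs):
        v = v * leak + current
        if v >= threshold:
            spike_times.append(t)
            v = 0.0
    return spike_times

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))  # -> [2, 5]

scale something like that up by tens of billions of units and wire them the way the biology specifies, and "simulation" just means running enough of those updates.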

Stoic
No, a computer program would only mimic true awareness. It would take a lifetime of algorithmic notation to perfect, however. It would then mimic a true person, but would never truly feel emotion, although its computations could fool a person into believing that it feels emotions.

Oliver North
why is it a mimic if it is identical? how couldn't that be emotion?

Stoic
Because emotions are a chemical reaction. This would be technically different.

753
Originally posted by Oliver North
In theory, even if it were impossible to build a conscious machine out of non-organic parts, the mechanisms could be simulated on a sufficiently powerful computer. With enough processing power, there is nothing that would stop a program from being identical to the human brain, down to a molecular level if need be.

a painting isn't the portrayed object.

how would a representation of a physical entity actually be identical to it, if it isn't a physical entity unto itself? why should we assume an algorithmic representation of a brain can produce the same epiphenomenon?

Oliver North
Originally posted by Stoic
Because emotions are a chemical reaction. This would be technically different.

emotions are not a chemical reaction

Originally posted by 753
a painting isn't the portrayed object.

how would a representation of a physical entity actually be identical to it, if it isn't a physical entity unto itself? why should we assume an algorithmic representation of a brain can produce the same epiphenomenon?

then what is special about the physical brain that couldn't be modeled? This is dangerously close to dualism, no? some special matter in the brain?

Omega Vision
Does this end with an attempt to define and describe a soul?

753
Originally posted by Symmetric Chaos
I'd point to the whole history of machines.

Time can be kept on a sun dial, a grandfather clock, and a digital watch. Obviously many processes are multi-realizable (actually I can't think of any that aren't). Why should consciousness be an exception?
the examples you provided don't really apply here. we can produce several tools whose behavior we can analyze and infer a sense of time passage from, sure. but how's that the same thing? we are the ones making sense out of those objects, we keep time, not the tools. consciousness is an actual natural phenomenon.

we have models of population genetics and evolution; they produce codes that increase or decrease in frequency depending on how well they meet certain parameters designed to mimic environmental selective pressures. without even going into definitions of life, are they replicators that undergo Darwinian evolution? I say they aren't.
