Originally posted by Scoobless
So how do you decide if someone has a soul or not?
Who that is an expert on the subject would even state that? ... Dr Strange is about the only person who might know, and I don't think he's ever met Ultron.
A lot of people are called "soulless" even though they are humans or superhumans; that's more a reflection of their actions than a comment on their actual soul (or lack thereof)
Originally posted by Scoobless
Who that is an expert on the subject would even state that? ... Dr Strange is about the only person who might know, and I don't think he's ever met Ultron.
A lot of people are called "soulless" even though they are humans or superhumans; that's more a reflection of their actions than a comment on their actual soul (or lack thereof)
Chaos theory is nothing like it has been described in this thread, lol. Read Gleick's Chaos; it has nothing to do with predestination, lol. From Jacques Hadamard, Pierre Duhem, and the father of chaos theory, Henri Poincaré, you will find it has to do with the idea of chance being the determining factor in dynamic systems because of some factor in the initial conditions that we didn't know about. This is the basis of the mathematics behind the Mandelbrot and Julia sets. Chaos and predestination - hilarious.
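As a side note, the sensitivity Poincaré was pointing at is easy to see in code. Here is a minimal Python sketch of the standard textbook toy model (the logistic map, which is a cousin of the Mandelbrot iteration); nothing here comes from the thread, it is just the usual demonstration:

```python
# Two orbits of the chaotic logistic map x -> r*x*(1-x), started a hair
# apart. The tiny initial difference is amplified at every step, which is
# the "some factor in the beginning we didn't know about" idea.

def divergence(x0, y0, r=4.0, steps=60):
    """Return the largest gap seen between two logistic-map orbits."""
    x, y, gap = x0, y0, 0.0
    for _ in range(steps):
        x = r * x * (1.0 - x)
        y = r * y * (1.0 - y)
        gap = max(gap, abs(x - y))
    return gap

# Starting points differ by one part in a billion, yet the orbits
# end up wildly far apart.
print(divergence(0.3, 0.3 + 1e-9))
```

The system is fully deterministic - run it twice and you get the same numbers - which is why chaos is about practical unpredictability, not predestination.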
Back to topic: can a machine be alive? Not so far! Unless you go down the route of organisms as biological machines, which is always interesting and dates back to Leonardo da Vinci. Posthumanism... hmmm... That's the phrase you're all looking for.
Vernor Vinge and the Singularity... hmmm...
Can machines be alive in fiction? Of course. Can a machine have compassion or show emotion? We really have no way of knowing until one does; Vinge's ideas are perhaps the most famous on how this might come about.
By Crom!
🙂
Originally posted by Scoobless
I was looking for one in particular then noticed a few others that could still have some life in them (no pun intended ..... honest .... :shifty: )
Sure 🙄
Anyways, by comic book standards I think it deals more with how they interact with the input we know all synthetic beings receive.
In short, can they make unconditioned responses from conditioned input? Meaning, can they self-expand their initial parameters?
Or can they not do anything past their programmed parameters?
Originally posted by Newjak
Sure 🙄
Anyways, by comic book standards I think it deals more with how they interact with the input we know all synthetic beings receive.
In short, can they make unconditioned responses from conditioned input? Meaning, can they self-expand their initial parameters?
Or can they not do anything past their programmed parameters?
This is going to be hard for me to vocalize without sounding very circular, but here goes:
It isn't as cut and dried as you say here. There is a distinction that can be made between computers that are created to be artificially intelligent, and those that spontaneously spark artificial intelligence.
The ones that are created with the intention of being artificially intelligent are programmed to expand their own programming, so to speak. Hence, by going past their programmed instructions, they are following their programmed instructions, if you get my meaning.
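That circularity can be sketched in a few lines. A toy Python illustration - every name here is invented for the example, nothing is from the thread - of a program whose fixed instructions include the instruction to write new rules:

```python
# A toy agent whose fixed program contains one meta-instruction:
# "when you meet an unknown stimulus, write a new rule for it."
# Expanding its own rule set IS its programming - the circularity
# described above.

class LearningAgent:
    def __init__(self):
        # Initial programming: a single hand-written rule.
        self.rules = {"greeting": "respond politely"}

    def react(self, stimulus):
        if stimulus not in self.rules:
            # The "expansion" step is itself a programmed instruction.
            self.rules[stimulus] = "observe and record"
        return self.rules[stimulus]

agent = LearningAgent()
agent.react("loud noise")  # unknown stimulus -> a new rule is written
print(len(agent.rules))    # the rule set has grown, by following the rules
```

So when the agent goes "past" its original rules, it is still only doing what it was told to do, which is the point being made above.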
Originally posted by Soljer
This is going to be hard for me to vocalize without sounding very circular, but here goes:
It isn't as cut and dried as you say here. There is a distinction that can be made between computers that are created to be artificially intelligent, and those that spontaneously spark artificial intelligence.
The ones that are created with the intention of being artificially intelligent are programmed to expand their own programming, so to speak. Hence, by going past their programmed instructions, they are following their programmed instructions, if you get my meaning.
I understand what you mean. That is why I added unconditioned responses to input.
For instance, Vision falling in love with someone. It is an unconditioned response because there exists no emotional or biological link tying the stimulus to the response.
It happens outside of the programming of what they can execute, or what they can learn to execute.
Ok, let me try to say this without sounding circular either. Now, you use the example of an A.I. designed to expand its initial parameters. In short, it can adapt.
Ok, but what I was getting at is that a true A.I., even an adapting A.I., can only generate one response to a given input, and then it can only base its behavior off that input without other options being viable.
Ok, let's say we have an A.I. whose initial programming does not include anything about guns. Its primary programming is to go out into the world, learn, and expand from there - and also that it cannot hurt humans.
Then it meets someone with a gun, and that someone shoots the A.I. Well, that A.I. would expand its programming to state that humans with guns can hurt A.I. Now the input is that guns hurt, but it came from a human.
Would it expand its programming to hurt the human, or would it simply run? A machine would choose one or the other as the viable response to the input given to it.
Either way it expanded its programming: either by choosing to run from the input of guns, or by disregarding its initial programming and carrying on with new programming.
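The gun scenario reads naturally as a decision table. Here is a hypothetical Python sketch - all names, rules, and strings are invented for illustration - of an adapting machine that commits to exactly one learned response per stimulus:

```python
# A toy adapting machine for the gun scenario above. It starts with no
# rule about guns and a hard constraint against hurting humans; after
# being harmed it writes ONE new rule and sticks to it forever.

class ToyAI:
    def __init__(self):
        self.constraints = {"never_hurt_humans"}  # initial programming
        self.learned = {}  # stimulus -> the single chosen response

    def experience(self, stimulus, caused_harm):
        if caused_harm and stimulus not in self.learned:
            # The machine picks one viable option and commits to it.
            if "never_hurt_humans" in self.constraints:
                self.learned[stimulus] = "run"
            else:
                self.learned[stimulus] = "retaliate"
        return self.learned.get(stimulus, "ignore")

ai = ToyAI()
print(ai.experience("human with gun", caused_harm=True))   # learns: run
print(ai.experience("human with gun", caused_harm=False))  # still runs - one fixed response
```

Once the rule is written, the same stimulus always produces the same answer, which is exactly the limitation being described.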
Well, ok, but here is something: what if it instead decides to expand and add both? Once again referring to Vision: at least one version would run from, say, a bunch of misguided police thinking he was bad, yet he would disarm a bad guy looting a bank because he is waving a gun at a kid.
There is no emotional or biological response driving that input; a true thinking A.I. would only adopt one way or the other as the conditioned response to that input. A - I'll say weird - A.I. would consider that the same input can generate different responses in different circumstances, and that is something that simply cannot be programmed in or taught.
This may not be enough to constitute life, but it is human thinking, and if an A.I. could be considered alive I think that would be a very important step.
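For contrast, the weird-AI behavior described above can at least be mimicked from the outside by keying responses on circumstances as well as stimulus. A hypothetical Python sketch - hard-coded, which is precisely what the post argues a genuinely thinking machine wouldn't need; every name here is invented:

```python
# Same stimulus, different response depending on context: a crude,
# hand-coded lookup keyed on (stimulus, context) pairs rather than on
# the stimulus alone.

def weird_ai_response(stimulus, context):
    table = {
        ("human with gun", "police pursuing me"): "run",
        ("human with gun", "threatening a child"): "disarm",
    }
    return table.get((stimulus, context), "observe")

print(weird_ai_response("human with gun", "police pursuing me"))   # run
print(weird_ai_response("human with gun", "threatening a child"))  # disarm
```

A table like this only covers the contexts its author thought of in advance; the claim in the thread is that human-like thinking handles contexts nobody enumerated.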
Originally posted by Newjak
I understand what you mean. That is why I added unconditioned responses to input.
For instance, Vision falling in love with someone. It is an unconditioned response because there exists no emotional or biological link tying the stimulus to the response.
It happens outside of the programming of what they can execute, or what they can learn to execute.
Ok, let me try to say this without sounding circular either. Now, you use the example of an A.I. designed to expand its initial parameters. In short, it can adapt.
Ok, but what I was getting at is that a true A.I., even an adapting A.I., can only generate one response to a given input, and then it can only base its behavior off that input without other options being viable.
Ok, let's say we have an A.I. whose initial programming does not include anything about guns. Its primary programming is to go out into the world, learn, and expand from there - and also that it cannot hurt humans.
Then it meets someone with a gun, and that someone shoots the A.I. Well, that A.I. would expand its programming to state that humans with guns can hurt A.I. Now the input is that guns hurt, but it came from a human.
Would it expand its programming to hurt the human, or would it simply run? A machine would choose one or the other as the viable response to the input given to it.
Either way it expanded its programming: either by choosing to run from the input of guns, or by disregarding its initial programming and carrying on with new programming.
Well, ok, but here is something: what if it instead decides to expand and add both? Once again referring to Vision: at least one version would run from, say, a bunch of misguided police thinking he was bad, yet he would disarm a bad guy looting a bank because he is waving a gun at a kid.
There is no emotional or biological response driving that input; a true thinking A.I. would only adopt one way or the other as the conditioned response to that input. A - I'll say weird - A.I. would consider that the same input can generate different responses in different circumstances, and that is something that simply cannot be programmed in or taught.
This may not be enough to constitute life, but it is human thinking, and if an A.I. could be considered alive I think that would be a very important step.
You're talking about the difference, as I noted, between spontaneous artificial intelligence and programmed artificial intelligence - and I hate to give such a short reply to one that seems quite thought out, but I don't have a lot of time, and really - therein lies the rub, so to speak.
You're talking about a limited AI - something way more realistic than the theoretical AI I'm describing. I'm talking about something that was programmed with only the intention of learning, almost like a child. It learns good and bad from whatever 'teaches' it, just like a child. It learns about pain and pleasure (assuming said computer is fitted with receptors for both inputs) the same as a child.
I'm not talking about an adapting robot. I'm talking about THE adapting robot - an entirely artificial analogue to a human being.
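The child-like learner described above could be caricatured as nothing more than a pleasure/pain tally. A bare-bones Python sketch under that assumption - the class, its "receptors", and the numeric signals are all invented for the example:

```python
# A learner programmed only to associate actions with pleasure or pain
# signals, the way the post imagines a child learning "good" and "bad"
# from whatever teaches it.

from collections import defaultdict

class ChildlikeAI:
    def __init__(self):
        self.values = defaultdict(int)  # action -> accumulated reward

    def teach(self, action, signal):
        """signal > 0 models pleasure, signal < 0 models pain."""
        self.values[action] += signal

    def prefers(self, a, b):
        """Pick whichever action has accumulated more reward."""
        return a if self.values[a] >= self.values[b] else b

child = ChildlikeAI()
child.teach("share toy", +1)     # praised
child.teach("hit sibling", -1)   # scolded
print(child.prefers("share toy", "hit sibling"))  # share toy
```

Whether stacking enough of this kind of learning ever adds up to "THE adapting robot" is, of course, exactly the open question in this thread.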
Originally posted by Soljer
You're talking about the difference, as I noted, between spontaneous artificial intelligence and programmed artificial intelligence - and I hate to give such a short reply to one that seems quite thought out, but I don't have a lot of time, and really - therein lies the rub, so to speak.
You're talking about a limited AI - something way more realistic than the theoretical AI I'm describing. I'm talking about something that was programmed with only the intention of learning, almost like a child. It learns good and bad from whatever 'teaches' it, just like a child. It learns about pain and pleasure (assuming said computer is fitted with receptors for both inputs) the same as a child.
I'm not talking about an adapting robot. I'm talking about THE adapting robot - an entirely artificial analogue to a human being.
It's ok. I described both a realistic AI and your theoretical one.
In fact, that is what I described with my weird AI.
Here is where I think we kind of miss each other:
you think that because it was programmed to learn like a human, it is therefore still AI. That's fine; that is exactly what my weird AI is.
Now comes the question: if that weird AI is capable of making conscious decisions like a human, then could it not be treated like a human being?
Now, does that grant it life? Maybe not, but then again I feel the only way to find out if an AI could have life is if it thought like a human being - being that it can make unconditioned responses to stimuli like a human can.