Originally posted by Bardock42
Would it be a bad thing if humans were killed and robots took over instead? And if so, why?
Well, it would be the end for humans: the world would no longer hold any biological creature with our level of intellect. Robots would most likely take over the whole world, killing things and releasing a huge amount of gas into the ozone layer. But if they could find a way around the gases, and they acted like humans, they would most likely be better for Earth. 😖
It's not so much robots per se that will challenge us, it's artificial intelligence. I believe every computer expert on the planet will tell you that before this century ends, there will be machines more intelligent than human beings. Among other things, AI will begin to design and improve itself, all at speeds far faster than our biological snail's pace.
The Singularity cometh.
Originally posted by Mindship
It's not so much robots per se that will challenge us, it's artificial intelligence. I believe every computer expert on the planet will tell you that before this century ends, there will be machines more intelligent than human beings. Among other things, AI will begin to design and improve itself, all at speeds far faster than our biological snail's pace. The Singularity cometh.
And cometh fast; Vinge is the prophet for the 21st century!
Originally posted by Mindship
It's not so much robots per se that will challenge us, it's artificial intelligence. I believe every computer expert on the planet will tell you that before this century ends, there will be machines more intelligent than human beings. Among other things, AI will begin to design and improve itself, all at speeds far faster than our biological snail's pace. The Singularity cometh.
I don't think AI (in the sense we are talking about, not ordinary AI) can be achieved just by making better algorithms. I mean, AI is not just a matter of logic: you will need a new kind of hardware, a new kind of logic, and a new kind of mathematics to rule over that logic. So, in my opinion, computer specialists don't really know much about creating AI; that's more a problem for physicists.
Originally posted by Atlantis001
I don't think AI (in the sense we are talking about, not ordinary AI) can be achieved just by making better algorithms. I mean, AI is not just a matter of logic: you will need a new kind of hardware, a new kind of logic, and a new kind of mathematics to rule over that logic. So, in my opinion, computer specialists don't really know much about creating AI; that's more a problem for physicists.
No doubt, a whole new approach to understanding cognitive processes will have to be involved. I also find--in literature regarding AI, the Singularity, and dangers inherent--that there seems to be a confusion of intelligence with consciousness and/or motivation. Just because you have an artificial brain which can think faster and even perform far more parallel operations than a human brain, it doesn't necessarily mean it is "conscious" (not operationally defining "consciousness" for the moment), or that it will wanna take over the world (inherently have that motivation).
Still, once artificial brains start designing better artificial brains, we humans will, to a large extent, be "out of the loop."
Originally posted by Mindship
No doubt, a whole new approach to understanding cognitive processes will have to be involved. I also find--in literature regarding AI, the Singularity, and dangers inherent--that there seems to be a confusion of intelligence with consciousness and/or motivation. Just because you have an artificial brain which can think faster and even perform far more parallel operations than a human brain, it doesn't necessarily mean it is "conscious" (not operationally defining "consciousness" for the moment), or that it will wanna take over the world (inherently have that motivation). Still, once artificial brains start designing better artificial brains, we humans will, to a large extent, be "out of the loop."
I agree. Consciousness is something that cannot be achieved by logic; it's impossible to write an algorithm that would be equivalent to consciousness. Consciousness is non-algorithmic, or, in the terminology of the mathematician and physicist Roger Penrose, non-computable, which means no algorithm can compute it.
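To make "non-computable" concrete, the standard example is Turing's halting problem. Here is a rough Python sketch of the classic argument (the `halts` oracle here is hypothetical; that's the whole point):

```python
# Sketch of Turing's diagonalization: assume, for contradiction,
# that some algorithm halts(f, x) always decides whether f(x) halts.

def halts(func, arg):
    # Hypothetical oracle. No real implementation can exist;
    # this stub only marks the assumption.
    raise NotImplementedError("no algorithm can implement this")

def paradox(func):
    # Halts exactly when halts() says func(func) runs forever.
    if halts(func, func):
        while True:
            pass
    return "halted"

# paradox(paradox) would halt if and only if it does not halt,
# so the assumed oracle cannot exist: halting is non-computable.
```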
To create AI you would need an algorithm that makes "choices" (like we do), and it is impossible to make one. Many mathematicians have tried to define choice mathematically, and they concluded it is impossible: there is no mathematical function that "chooses" an element from a given set; it always needs to follow a rule to pick the element. Choice, in that free sense, is a concept that does not exist in mathematics and logic, and choice is needed for consciousness. This is interesting because it means that whoever wants consciousness will have to give up logic.
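To show what I mean in code (a minimal sketch; the `choose` function is just a made-up example): any choice function we can actually write has to follow some rule, so it picks the same element every time.

```python
# Any "choice" function we can actually implement follows a rule:
# given the same set, it always returns the same element.

def choose(s):
    # One possible rule: take the minimum. Some rule (min, first in
    # iteration order, smallest hash, ...) is always needed; a pick
    # with no rule at all cannot be written down.
    return min(s)

s = {7, 3, 9}
assert choose(s) == choose(s)  # deterministic, never "free"
print(choose(s))               # 3, on every run
```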
I've always viewed intelligence as an operation which takes place in the "field" of consciousness. Or put another way, mind is figure in the larger ground of consciousness. In any event, the two are not synonymous. It just seems that way because we identify so closely with our minds that that "figure" takes up the whole ground. We fail to see the forest for the trees.
Originally posted by Vinny Valentine
What if they didn't have to make choices, though, and just did what they needed to do at all times?
It would mean that we are like robots without free will: everything I am doing right now would be as if I had been programmed to do it. This is related to what Alan Turing suggested with his Turing test:
Definition taken from Wikipedia:
"The Turing test is a proposal for a test of a machine's capability to perform human-like conversation. Described by Alan Turing in the 1950 paper "Computing machinery and intelligence", it proceeds as follows: a human judge engages in a natural language conversation with two other parties, one a human and the other a machine; if the judge cannot reliably tell which is which, then the machine is said to pass the test. It is assumed that both the human and the machine try to appear human."
The test basically means that if it is impossible to distinguish between human and machine in a test like this, then in theory it is possible to create consciousness just by logic, or, more precisely, to reduce consciousness to an algorithm. Human consciousness would be nothing more than very good algorithms, humans would not be much different from machines, and our brains would work the same way as any other computer. But I don't think so.
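Just to make the setup concrete, here is a minimal Python sketch of the protocol (the names `run_imitation_game`, `human`, `machine`, and `judge` are placeholders I made up; real participants would be people and a chat program):

```python
import random

def run_imitation_game(questions, human, machine, judge):
    # human and machine map a question string to an answer string;
    # judge maps the transcript to a guess ("A" or "B") of which
    # anonymous channel hides the machine.
    channels = {"A": human, "B": machine}
    if random.random() < 0.5:                      # hide who is who
        channels = {"A": machine, "B": human}

    transcript = [(q, {label: reply(q) for label, reply in channels.items()})
                  for q in questions]

    guess = judge(transcript)
    truth = "A" if channels["A"] is machine else "B"
    return guess != truth   # True: machine fooled the judge (passed)
```

Repeated over many judges and conversations, a machine that fools the judges about as often as chance would be said to pass.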
Actually, there is a way to simulate our ability to choose, and it would use quantum mechanics. In quantum mechanics, subatomic particles seem to choose their trajectories: it is impossible to determine exactly what an electron will do; the outcome is completely random, as if the electron were free to choose its path. A quantum computer would be capable of using this randomness in its algorithms, and maybe that could produce consciousness. In this picture, our brain would be a quantum computer.
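As a rough sketch of the contrast with the rule-following `choose` above (here Python's `secrets` entropy pool is only a classical stand-in for a quantum random source; a quantum computer would get its randomness from measurement outcomes instead):

```python
import secrets

def quantum_style_choose(options):
    # The pick is drawn from an unpredictable entropy source instead
    # of a fixed rule; on a quantum computer the randomness would come
    # from measuring qubits, which is (as far as we know) genuinely
    # non-deterministic.
    return secrets.choice(sorted(options))

print(quantum_style_choose({7, 3, 9}))  # may differ from run to run
```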
The only problem is: does the randomness we see in quantum mechanics mean freedom to make decisions? If your answer is yes, you are saying that this randomness and free will are the same thing. It is not senseless to think so, but to get the entire picture you will need to know quantum mechanics, and you will not get that in just one day.