A.I.: Mankind's next step or pitfall?

Started by mac11586

I'm glad everyone is enjoying this thread. I have just been catching up. It sounds like all of you have had a mouthful to say, but most of it is not even remotely close to what I was asking.

I want to know if you think it is right for us to create A.I. (I don't care whether it is possible now). If we do create it, how will we know whether it is beneath us or on par with us? Do we treat it as a race equal to humans? I want to hear the big ideas.

So if you could stop arguing like three-year-olds, I would greatly appreciate it. Can we return to a sensible, rational, and logical debate, like the true Matrix fans we are?

In my opinion, who's to say that an A.I. organism would want to conquer? After all, in the end they are still machines without emotions, which are the cause of the anger and greed that bring us humans to war. It's my belief that A.I. would have no need for us; it would leave us alone to our own demise. Unless feelings and emotions come with the A.I. package, in which case I'll agree with most of you. But the way I see it, A.I. is something like Data on Star Trek: while he was superior to humans in every way, he had no desire to conquer us.

That is a good point. A.I. would want to assimilate as much knowledge as possible to feed its purpose. I would say it would want to keep humans around, because each human experiences life in a different way. Could you imagine learning from 6 billion different sources? I say that without emotion, A.I. would be good.

The A.I. in the Matrix is far more extensive than mere intelligence per se. It is artificial life. These machines don't merely exhibit intelligence; they also simulate life. They can perceive, discern, comprehend, and react — thus, they can learn. But they also maintain, repair, and replicate themselves, and even evolve or improve their "species."

A.I. alone can be programmed in many ways based upon many underlying theories of intelligence. The outcome, however, can only be as good as the base theory — garbage in, garbage out, as they say. Programmers are going to need a more advanced understanding of intelligence before they can create artificial intelligence even approaching what we see in the Matrix. Targeting a particular task or situation is possible, with limited results. But a good simulation of intelligence that addresses a broad, changing environment and simulates life is a way off, I would guess.

Mac11586:

Sorry about the digression. It's just so tempting to ponder "what ifs," sci-fi, and technology, I suppose.

But, to answer your question, I think developing A.I. or any other technology is fine if our ethical knowledge and moral courage can keep up with our scientific knowledge, to ensure it is used in ways that are good and just. I suppose we should be a couple of steps ahead of any technology we produce or unleash on the world, however, to ensure we can control our creations...

I also think that even if our creations become "intelligent life forms" by accident, we should treat them with the same respect that we would any organic life form. Intelligent life is intelligent life, and as Kant said, all are due respect on that basis alone. Watching the two Animatrix shorts on man's interactions with the machines, I cannot help thinking man was wrong to alienate them and attempt to destroy them. It sure didn't work out so well.

Honda has created a robot that marks the beginning of A.I. It is on a very insignificant scale compared to the Matrix, but the robot can learn just from hearing, feeling, and seeing, and reprograms itself according to its circumstances. It is similar to a dog at this point, I suppose. However, scientists have predicted that in 100 years A.I. will have its own communities and territories to live in. They have also said that the A.I. will be programmed to die before ever harming a human. The problem with this is that if A.I. becomes smart enough, it may be able to program itself for defense and attack.