Originally posted by Newjak
Nope, it cannot process uncertainty. Could life be fully digitized? Hmm, that is a hard one.
That would be funny.
And my thinking on the subject is this: what makes a computer the great piece of equipment it is? Its precision and its ability to do massive amounts of work in little time.
What makes a computer so precise, though, is that it has a sort of built-in barrier.
I'll create a scenario. Say you want a computer to add 1 a billion times. The computer will only add when the variable is equal to one. So if, for some reason (data corruption or a bad input), the number changes, the computer evaluates its condition as false because the number isn't one. Therefore: error, cannot proceed, please change the number given.
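The scenario above can be sketched in a few lines. This is purely illustrative (the function name and the error message are invented): the machine validates its input on every step, and any value that is not exactly 1 trips the check and halts rather than silently producing a wrong total.

```python
def add_one_repeatedly(step, times):
    """Add `step` to a running total, but only if `step` is exactly 1."""
    total = 0
    for _ in range(times):
        if step != 1:  # the "built-in barrier": reject anything that isn't 1
            raise ValueError(f"cannot proceed: expected 1, got {step}")
        total += step
    return total

print(add_one_repeatedly(1, 1000))  # 1000
# add_one_repeatedly(2, 1000) would raise ValueError on the first step
```

The strict equality check is the whole point: the machine would rather refuse to work than work imprecisely.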
Its precision is a by-product of its detail-oriented, perfectionist nature. But once you introduce the idea that something can be true, false, or both, it loses its built-in precision judge. This could potentially leave the computer open to flaws. Now, instead of refusing the two and accepting only the one, a true AI might say, "Well, it could be true." Thus imprecision can take place, and AI could potentially make the computer worthless for its intended purpose.
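To make the "well, it could be true" failure mode concrete, here is a hypothetical contrast with the strict check (all names and thresholds are invented for illustration): the machine assigns a degree of belief instead of a hard true/false, accepts anything "plausible enough", and so a corrupted input slips through and the total drifts.

```python
def plausibility(value, target=1.0, tolerance=0.5):
    """Degree of belief (0..1) that `value` is 'really' the target."""
    return max(0.0, 1.0 - abs(value - target) / tolerance)

def fuzzy_add(step, times, threshold=0.5):
    """Add `step` repeatedly, accepting any value deemed plausible enough."""
    total = 0.0
    for _ in range(times):
        if plausibility(step) >= threshold:  # "well, it could be true..."
            total += step                    # imprecision creeps in
        else:
            raise ValueError(f"too implausible: {step}")
    return total

print(fuzzy_add(1.0, 1000))  # 1000.0, same as before
print(fuzzy_add(1.2, 1000))  # roughly 1200 -- accepted, but wrong
```

The second call is the poster's worry in miniature: nothing errored, yet the answer is no longer what was intended.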
I think we have gotten a little off track with this discussion. No, it will still be comprised of many circuits that basically do math. Because it is electronic, it is inherently prone to constant small errors. A computer is built to get around those errors. (For example, DRAM needs to be constantly refreshed because it loses its charge so quickly: a flaw with a built-in workaround.)
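The DRAM point can be modeled as a toy (the numbers here are made up, not real DRAM timings): each cell's stored charge leaks away every tick, so a periodic refresh rewrites the value before it decays below the read threshold.

```python
THRESHOLD = 0.5  # charge level below which a stored 1 reads back as 0
LEAK = 0.8       # fraction of charge retained per tick (invented value)

class DramCell:
    def __init__(self, bit):
        self.bit = bit
        self.charge = 1.0 if bit else 0.0

    def tick(self):
        self.charge *= LEAK  # charge leaks away constantly

    def read(self):
        return 1 if self.charge > THRESHOLD else 0

    def refresh(self):
        # the workaround: read the value and rewrite it at full strength
        self.charge = 1.0 if self.read() else 0.0

cell = DramCell(1)
for t in range(10):
    cell.tick()
    if t % 2 == 1:  # refresh every 2 ticks, before the charge decays too far
        cell.refresh()
print(cell.read())  # 1 -- the bit survives; without refresh it would read 0
```

Skip the refresh and the charge falls to 0.8**10 (about 0.11) after ten ticks, below the threshold, and the bit is lost. The "flaw" is constant; the machine is simply engineered around it.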
The hardware that houses the AI can still work with explicit values and extrapolate explicit results. It is the interpretation of that data and how the AI works with that data that would define its status of "AI".
Also, AI would be a software program run on very advanced hardware (advanced relative to our current standards). This software should still be programmed with self-programmable parameters. You would not want an AI program to alter certain portions of its own code, but you would want it to be adaptable enough to call it AI. You would actually allot specific attributes as "modifiable".
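A minimal sketch of that allotment idea, with every name invented for illustration: the agent keeps an explicit allow-list of self-modifiable attributes, and all self-modification goes through one gate that consults it, so adaptive knobs can change while core directives cannot.

```python
class Agent:
    MODIFIABLE = {"learning_rate", "strategy"}  # attributes allotted as modifiable

    def __init__(self):
        self.learning_rate = 0.1
        self.strategy = "explore"
        self.core_directive = "serve operators"  # deliberately not self-editable

    def self_modify(self, attribute, value):
        if attribute not in self.MODIFIABLE:
            raise PermissionError(f"{attribute!r} is not self-modifiable")
        setattr(self, attribute, value)

agent = Agent()
agent.self_modify("strategy", "exploit")        # allowed: this is adaptation
# agent.self_modify("core_directive", "idle")   # would raise PermissionError
```

Whether a real AI could be boxed in this cleanly is exactly the open question, but it shows the shape of "adaptable, yet with portions of itself off-limits."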
The above could all be rubbish as we learn more about AI. But what good is AI if it does you no good because it programs itself to just sit around and smoke weed all day?
The writers of Mass Effect hit it on the head a little better when they drew a distinction between two kinds of computer intelligence: Virtual Intelligence vs. Artificial Intelligence. A Virtual Intelligence has parameters that prevent it from doing certain things. This stops the machine from becoming truly sentient but still allows it to adapt to the work/tasks it has been assigned.
Again, I am talking out of my ass because we are not even in our infancy when it comes to AI... but I believe ethical laws will have to be drawn up, like in Mass Effect.