Originally posted by inimalist
maybe. I think the issue here would be specialization. In theory, you could probably design a computer that would outscore most people on the Stanford-Binet, or whatever measure of intelligence you want, but it is likely only going to be "intelligent" in the things and contexts it is specifically designed for. One of the cornerstones of human intelligence is the ability to carry over from one thing to the next, that is, being able to use problem-solving strategies in new, never previously experienced contexts.
Insects are quite easy to emulate with AI. Very few insects have "reasoning" abilities that make them difficult to program for. Even the "intelligent" navigation abilities of the bumblebee can be emulated in software: they have a knack for finding the optimal path through an array of flowers, and they actually improve their path over subsequent trips to the same field, which increases energy efficiency (this kind of stuff is simply astounding to me).
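To give a feel for how easy that bee behavior is to emulate in software, here is a toy sketch: start with a naive visiting order on the first "trip", then refine the route on each later trip with 2-opt segment reversals. This is purely illustrative (real bees obviously don't run 2-opt), but the observed pattern of the route getting shorter trip after trip looks just like this kind of iterative refinement.

```python
import math
import random

def route_length(route, flowers):
    """Total distance of visiting the flowers in the given order."""
    return sum(math.dist(flowers[route[i]], flowers[route[i + 1]])
               for i in range(len(route) - 1))

def improve_once(route, flowers):
    """One 'trip': try reversing every segment, keep the best improvement."""
    best, best_len = route, route_length(route, flowers)
    for i in range(1, len(route) - 1):
        for j in range(i + 1, len(route)):
            candidate = route[:i] + route[i:j][::-1] + route[j:]
            cand_len = route_length(candidate, flowers)
            if cand_len < best_len:
                best, best_len = candidate, cand_len
    return best

random.seed(1)
flowers = [(random.random(), random.random()) for _ in range(8)]
route = list(range(8))                   # naive first-trip order
lengths = [route_length(route, flowers)]
for trip in range(5):                    # each later trip refines the path
    route = improve_once(route, flowers)
    lengths.append(route_length(route, flowers))
```

After a few trips the path length stops shrinking, i.e. the route has settled, much like the bees' paths do.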
The smartest "bug" is probably Portia labiata. This little sh*t has been compared to the big cats when it comes to hunting smarts. It actually improvises hunting tactics and learns from its mistakes. It has forward-thinking abilities and even anticipates the hunt (almost like excitement/joy). It has a sort of sign language it uses with other members of the same genus. This little guy cannot be emulated yet, as far as I know; it is far too complex. True, I moved the bar too high because I changed the "measure" from insects to arachnids, but I feel I am keeping in the spirit of the conversation.
I do not think we have AI that could pass the Stanford-Binet test. Well, we MIGHT be able to do so if we recycled questions and then used "Watson" for the wild ones, but that is not really AI: that's just regurgitating data in a not-so-intelligent way.
IMO, for an AI program to be considered "smart" for passing the SB5, it needs to answer questions that are dissimilar to any test it has seen before, OR to pass the test without ever having been "prepped" for that kind of test...just like a human who has never taken it.
Originally posted by inimalist
It's sort of like the ASIMO robot. The latest version is able to adjust for weight changes in a thermos as it pours a liquid, and is able to scale its grip appropriately to lift various glasses or containers. Technically, it could probably be designed to do this specific function in a way superior to humans, but the generalizability of those actions is almost nil. ASIMO can't generalize these rules of interacting with the environment, or at least, it can't do it in a way even approaching most organisms.
You're right: the minute adjustments that a robot can make, with some AI running it, are more accurate than a human's (sometimes by thousands or even millions of times, as in our micro-computing manufacturing processes (MCMP)). It is the aggregation of such programming that would/will culminate in a true AI result. Combine the fine-tuned robotics AI (MCMP), the linguistic AI (Watson), and the visual intelligence present in the F-22 and prototype F-35 (the visual AI in those machines is ridiculously advanced; I would say those two machines represent the most sophisticated computer-human interfacing of anything on the planet...even our particle accelerators), then vastly improve our "reasoning" AI for things like feelings, and we have a super-intelligent and functioning AI.
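For the curious, the "adjust grip as the thermos gets lighter" trick boils down to an ordinary feedback loop. The sketch below is hypothetical (the constants and the controller are mine, not Honda's actual ASIMO code): grip force tracks the object's measured weight, scaled by friction and a safety margin, and a simple proportional update nudges the force toward that target every control tick.

```python
def update_grip(current_force, measured_weight,
                friction_coeff=0.5, margin=1.2, gain=0.3):
    """One proportional-control step: move grip force toward
    (weight / friction) * margin. All constants are illustrative."""
    target = (measured_weight / friction_coeff) * margin
    return current_force + gain * (target - current_force)

force = 0.0
weights = [10.0, 8.0, 6.0, 4.0]   # thermos getting lighter as it pours
for w in weights:
    for _ in range(20):           # let the controller settle at each weight
        force = update_grip(force, w)
```

The point is that this one loop is superhumanly precise at its one job, yet it "knows" nothing about cups, thermoses, or why slipping is bad; that is exactly the specialization problem being discussed.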
The last one is the largest hurdle, obviously. And Watson still has its (or "his"?) problems. Those two need to be perfected and integrated before we can claim an age of reasoning machines.
Originally posted by inimalist
For sure, we have computers far smarter than insects though (for, I suppose, the general measures of intelligence... insects probably have better insect specific intelligence than our computers).
Well, it really does depend on what is being talked about. Generally, we can emulate pretty much all aspects of insect intelligence, even some of the superhuman abilities like sensing other parts of the EM spectrum, super-sensitive smell, etc.
It's when you scale up to small animals that we run into the wall of our current AI abilities.
Rats? We still cannot simulate more than a few neurons (I don't remember how many...it could be a dozen or it could have been 200...just don't remember) of a rat's brain. That tells me we literally have an exponential way to go before we start making any sort of decent progress. Sure, that exponential growth is predicted to occur in the next two decades...but I remain skeptical until I see a leap in AI that makes Watson look like a novelty toy.
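For a sense of what "simulating a neuron" even means at the simplest level, here is a single leaky integrate-and-fire (LIF) neuron, one of the most basic point-neuron models used in brain-simulation work. The parameters are illustrative, not taken from any real rat study; and note that real cortical simulations use far richer models than this, which is part of why scaling up is so hard.

```python
def simulate_lif(input_current, steps=1000, dt=0.1,
                 tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0,
                 resistance=10.0):
    """Euler integration of dV/dt = (-(V - v_rest) + R*I) / tau.
    When V crosses threshold, count a spike and reset. Returns spike count."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        dv = (-(v - v_rest) + resistance * input_current) / tau
        v += dv * dt
        if v >= v_thresh:
            spikes += 1
            v = v_reset
    return spikes

quiet = simulate_lif(0.5)   # weak input: stays below threshold
active = simulate_lif(3.0)  # strong input: fires repeatedly
```

Even this toy is only a caricature of one neuron; a rat cortex has tens of millions of them, densely interconnected, which gives a feel for the exponential gap being described.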