Did You Know?

Started by Symmetric Chaos

Originally posted by Newjak
I would like to point out that CPUs are stupid not because we can make mistakes,

but because we can be inexact and still accomplish our goals. A computer cannot begin to cope with something that is not programmed into it.

Programmers are working on that problem right now. One of the current ways that AI is hoped to be achieved is by creating a child-like system that learns and applies information.

Originally posted by Symmetric Chaos
Programmers are working on that problem right now. One of the current ways that AI is hoped to be achieved is by creating a child-like system that learns and applies information.
I know, it's interesting stuff, but the problem is that if it runs into too many problems it could crash and reboot, losing everything.

Still interesting stuff.

Originally posted by Newjak
I know, it's interesting stuff, but the problem is that if it runs into too many problems it could crash and reboot, losing everything.

Still interesting stuff.

At least that would prove that it was capable of discovering and trying to solve problems.

Originally posted by Symmetric Chaos
At least that would prove that it was capable of discovering and trying to solve problems.
That is the most interesting part though.

Because we need a way for the program to essentially program itself into new programming parameters. That is a lot harder to do than people think.

Originally posted by DigiMark007
Humans will eventually be more than human anyway, if we allow evolution to continue to do its work over unfathomably long periods of time. This is just speeding up the process.

I see it as a testament to human ingenuity, intelligence, as well as the power and majesty of science. Both are awe-inspiring.

...

But let's look at it another way: say an alien species came along that was vastly superior to us both mentally and physically. Like humans to dogs. But they embraced us and shepherded us to new frontiers of the mind and body. Is this bad?

Hopefully not (for most, at least). And this is the same thing. Technology won't become our master (presumably), and we are the creators behind their mechanisms, so we can guide them to ever greater heights... which will bring humans up several levels in the process.

So quit being such a species-ist. I for one look forward to possibly having a conversation with a non-human sentience in my lifetime.

OMG! you mean we could create our own non-human sentient being?

Originally posted by Newjak
Because we need a way for the program to essentially program itself into new programming parameters.
If I understand this correctly, you need a program which can expand itself by anticipating/creating properties which, in its current state, do not exist. Put another way: you need an arithmetic program which can self-evolve into calculus.

Originally posted by Mindship
If I understand this correctly, you need a program which can expand itself by anticipating/creating properties which, in its current state, do not exist. Put another way: you need an arithmetic program which can self-evolve into calculus.
Kind of.

But to expand on it more.

A computer is only as smart as a human tells it to be. To have AI we need a computer that can take in information and then create a program to execute on that input.

That is a lot harder to do than people realize. Why? Because current computers cannot do that, or they can but only within the parameters humans give them.

For instance, having an AI that can see danger coming and then create a program out of the blue that deals with that danger. Like someone coming up to it with a gun or an axe.

Originally posted by ~Forever*Alone~
OMG! you mean we could create our own non-human sentient being?

A nice ****-buddy.

Originally posted by Newjak
Kind of.

But to expand on it more.

A computer is only as smart as a human tells it to be. To have AI we need a computer that can take in information and then create a program to execute on that input.

That is a lot harder to do than people realize. Why? Because current computers cannot do that, or they can but only within the parameters humans give them.

For instance, having an AI that can see danger coming and then create a program out of the blue that deals with that danger. Like someone coming up to it with a gun or an axe.


Seems we may require a science of synergy or emergent properties (chaos theory?).

That vid was amazing.

I also found it kinda funny.

Originally posted by Mindship
Seems we may require a science of synergy or emergent properties (chaos theory?).
Perhaps, but that still may not get past the fundamental problem.

That being: at some point any AI will be forced to go past any human parameters or input. That bridge between human input and self-input is a major bridge that needs to get crossed. If someone can get past it, then it's all observation and information gathering.

And one of the most interesting concepts I enjoy about this: if we do bridge that gap, would the AI lose its fundamental ability to do logic, and thus lose anything it may have over a human being?

Originally posted by Newjak
...at some point any AI will be forced to go past any human parameters or input. That bridge between human input and self-input is a major bridge that needs to get crossed.
This is what I'm wondering. What factors / properties are required for this gap to be crossed? If one looks to the functioning of the human brain (and I will keep this on a purely empirical level) for an answer, one could argue that emergent properties arise from synergistic genetic coding (I suppose one could argue even further for a role quantum mechanics may play), which enable a person (as they mature) to move from simple image processing (sensorimotor functioning) through simple symbolic processing all the way to complex symbolic/metacognitive functioning.

And one of the most interesting concepts I enjoy about this: if we do bridge that gap, would the AI lose its fundamental ability to do logic, and thus lose anything it may have over a human being?
Good question. Generally speaking: very often, when we think we have a phenomenon figured out and then get an opportunity to test or observe it, factors arise out of the blue (e.g., what we found regarding the moons of the outer planets vs. what we thought we'd find; or cloning: weren't there unexpected problems with telomeres?).

Originally posted by Mindship
This is what I'm wondering. What factors / properties are required for this gap to be crossed? If one looks to the functioning of the human brain (and I will keep this on a purely empirical level) for an answer, one could argue that emergent properties arise from synergistic genetic coding (I suppose one could argue even further for a role quantum mechanics may play), which enable a person (as they mature) to move from simple image processing (sensorimotor functioning) through simple symbolic processing all the way to complex symbolic/metacognitive functioning.

Good question. Generally speaking: very often, when we think we have a phenomenon figured out and then get an opportunity to test or observe it, factors arise out of the blue (e.g., what we found regarding the moons of the outer planets vs. what we thought we'd find; or cloning: weren't there unexpected problems with telomeres?).

I would go even simpler than that.

A human can make decisions based off of true, not true, OR possibly both. A computer as-is can only work if something is True or False. It cannot deal with that middle ground. I think any attempt at making a true AI will have to bridge that gap: the ability to be imprecise when true or false is not known.

And I know what you mean about problems coming out of the blue. Most people, when thinking about true AI, picture a computer as it is now but with human thinking. The problem is that a computer is precise, detailed, and unyielding in its demand for perfection. To the point that not a single thing can be undecided or wrong inside its program.

The very nature of a true AI, though, leads to a computer being imperfect. It leads to an interesting question: how can a computer function as a computer when precision has to be removed from its required logic?

In essence, by making it more human, do we in fact make it less like a computer, so much so that it cannot function as a computer anymore?
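The "true, not true, or possibly both" idea above maps roughly onto three-valued (Kleene) logic, where a third value stands in for "unknown." A minimal sketch in Python, using None as the unknown value (the function names here are my own, purely for illustration):

```python
# Three-valued (Kleene) logic: True, False, or None ("unknown").
# A plain boolean program must decide; this one can defer.

def and3(a, b):
    # False dominates: False AND anything is False.
    if a is False or b is False:
        return False
    # Both known True -> True; otherwise the result stays unknown.
    if a is True and b is True:
        return True
    return None

def or3(a, b):
    # True dominates: True OR anything is True.
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None

def not3(a):
    # The negation of "unknown" is still unknown.
    return None if a is None else not a

print(and3(True, None))  # None: cannot decide yet
print(or3(True, None))   # True: decided despite the unknown
```

The interesting property is that uncertainty propagates instead of crashing the evaluation: the program carries the "middle ground" along until enough is known to resolve it.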

Originally posted by Newjak
A human can make decisions based off of true, not true, OR possibly both... I think any attempt at making a true AI will have to bridge that gap: the ability to be imprecise when true or false is not known.

The computer can't process uncertainty/mystery.

The very nature of a true AI, though, leads to a computer being imperfect. It leads to an interesting question: how can a computer function as a computer when precision has to be removed from its required logic?
Very interesting question. Humans can make decisions with uncertainty as a variable because, well, that's how life is: ultimately, uncertain, and we are life. Could life be fully digitized?

In essence, by making it more human, do we in fact make it less like a computer, so much so that it cannot function as a computer anymore?
Wouldn't that be a kick in the teeth... 😮‍💨

Originally posted by Mindship
The computer can't process uncertainty/mystery.

Very interesting question. Humans can make decisions with uncertainty as a variable because, well, that's how life is: ultimately, uncertain, and we are life. Could life be fully digitized?

Wouldn't that be a kick in the teeth... 😮‍💨

Nope, it cannot process uncertainty.

Could life be fully digitized? Hmmm, that is a hard one.

That would be funny.

And my thinking on the subject is this. What makes a computer the great piece of equipment it is? Its precision and its ability to do massive amounts of work in little time.

What makes a computer so precise, though, is that it has a sort of built-in barrier.

I'll create a scenario. Say you want to have a computer add 1 a billion times. The computer will only add when the variable is equal to one. So if, for some reason (data corruption or a bad input), the number changes, the computer evaluates its parameters as false because the number isn't one. Therefore: error, cannot do, please change the number given.

Its precision is a by-product of its detail- and perfection-oriented nature. But once you introduce the idea that something can be true, false, or both, it loses its built-in precision judge. This could potentially leave the computer open to flaws. Now, instead of refusing the two and adding only the one, a true AI may go "well, it could be true." Thus imprecision can take place, and thus AI could potentially make the computer worthless at its intended purpose.
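The "add 1 a billion times" scenario above can be sketched directly: a strict routine refuses any value that breaks its parameters, while a lenient "it could be true" routine keeps going and gives up precision. A rough Python illustration (the function names are mine, purely hypothetical):

```python
def strict_add(values):
    # The conventional computer: the input must be exactly 1,
    # or the whole computation refuses to proceed.
    total = 0
    for v in values:
        if v != 1:
            raise ValueError(f"cannot do, please change the number given: {v}")
        total += v
    return total

def lenient_add(values):
    # The imagined "true AI" stance: a corrupted value "could be true,"
    # so it is accepted -- and the built-in precision judge is gone.
    return sum(values)

data = [1, 1, 2, 1]  # one corrupted entry
print(lenient_add(data))  # 5 -- imprecise, but it kept working
try:
    strict_add(data)
except ValueError as e:
    print(e)  # the precise machine stops instead
```

The trade-off in the post is visible here: the strict version is useless in the face of any deviation, while the lenient version quietly produces a wrong total.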

Originally posted by Newjak
Nope, it cannot process uncertainty.

Could life be fully digitized? Hmmm, that is a hard one.

That would be funny.

And my thinking on the subject is this. What makes a computer the great piece of equipment it is? Its precision and its ability to do massive amounts of work in little time.

What makes a computer so precise, though, is that it has a sort of built-in barrier.

I'll create a scenario. Say you want to have a computer add 1 a billion times. The computer will only add when the variable is equal to one. So if, for some reason (data corruption or a bad input), the number changes, the computer evaluates its parameters as false because the number isn't one. Therefore: error, cannot do, please change the number given.

Its precision is a by-product of its detail- and perfection-oriented nature. But once you introduce the idea that something can be true, false, or both, it loses its built-in precision judge. This could potentially leave the computer open to flaws. Now, instead of refusing the two and adding only the one, a true AI may go "well, it could be true." Thus imprecision can take place, and thus AI could potentially make the computer worthless at its intended purpose.

I think we have gotten a little off track with this discussion. No, it will still be comprised of many circuits that basically do math. Because it is electronic, it will inherently be in a state of constant mistakes. A computer is built to get around those mistakes. (Such as DRAM needing to be constantly refreshed because it loses its charge so fast... a flaw with a built-in workaround.)

The hardware that houses the AI can still work with explicit values and extrapolate explicit results. It is the interpretation of that data and how the AI works with that data that would define its status of "AI".

Also, AI would be a software program run on very advanced hardware. (Advanced relative to our current standards.) This software would still, or rather it should, be programmed with self-programmable parameters. You would not want an AI program to alter certain portions of its own code, but you would want it to be adaptable enough to call it AI. You would actually allot specific attributes as "modifiable".
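The "allot specific attributes as modifiable" idea could look something like this: the program may rewrite a whitelisted set of its own parameters, but the whitelist itself, and everything off it, stays locked. A hypothetical sketch (class and attribute names are my own invention):

```python
class AdaptiveAgent:
    # Only these parameters may be self-modified; the core code
    # and this whitelist itself are off limits.
    MODIFIABLE = {"threshold", "learning_rate"}

    def __init__(self):
        self.threshold = 0.5
        self.learning_rate = 0.1

    def self_modify(self, name, value):
        # Adaptation is allowed only within the allotted attributes.
        if name not in self.MODIFIABLE:
            raise PermissionError(f"{name!r} is not allotted as modifiable")
        setattr(self, name, value)

agent = AdaptiveAgent()
agent.self_modify("threshold", 0.8)       # allowed: adapts within its parameters
try:
    agent.self_modify("MODIFIABLE", set())  # blocked: cannot widen its own leash
except PermissionError as e:
    print(e)
```

This is the Virtual Intelligence idea from the post in miniature: the system adapts to its tasks, but a hard parameter boundary keeps it from rewriting the boundary itself.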

The above could all be rubbish as we learn more about AI. But what good is AI if it doesn't do you any good because it programs itself to just sit around and smoke weed all day?

The writers of Mass Effect hit it on the head a little better when they described a distinction between two kinds of computer intelligence: Virtual Intelligence vs. Artificial Intelligence. A Virtual Intelligence has parameters that prevent it from being able to do certain things. This would stop the machine from becoming truly sentient but allow it to still adapt to the work/tasks it has been assigned.

Again, I am talking out of my ass because we are not even in our infancy when it comes to AI... but I believe ethical laws will have to be drawn up, like in Mass Effect.