Did You Know?

Text-only Version: Click HERE to see this thread with all of the graphics, features, and links.



Mindship
I had to share this. The music alone makes it worthwhile.

http://www.youtube.com/watch?v=ljbI-363A2Q

dadudemon
Originally posted by Mindship
I had to share this. The music alone makes it worthwhile.

http://www.youtube.com/watch?v=ljbI-363A2Q

Awesome vid.

I think Japan already has 10 terabit backbones in place.

DigiMark007
Cool stuff. Actually, "cool stuff" doesn't cover it. Sh*t like that is absolutely exciting for our planet, regardless of what good and bad things come of it.

jaden101
wow...that's pretty amazing stuff...i like the bit about the fact that more information has been created in the last year than in the 5,000 years before it

Zeal Ex Nihilo
I remember reading a short story by Isaac Asimov like this. Humans forgot how to do math because technology became so advanced. There was a war going on, however, and one scientist discovered mathematics--addition, subtraction, multiplication, and he was working on division. Because the technology was so advanced, the entire war was at a stalemate; the computers were too evenly matched. Thus, a solution was born: humans should pilot the projectile weaponry (missiles, rockets, etc.) because a computer couldn't anticipate the human mind.

Not that this has much to do with anything, but I think it's far from exciting that humans will one day be completely inferior to our technology. I think it's horrifying.

DigiMark007
Originally posted by Zeal Ex Nihilo
Not that this has much to do with anything, but I think it's far from exciting that humans will one day be completely inferior to our technology. I think it's horrifying.

Humans will eventually be more than human anyway, if we allow evolution to continue to do its work over unfathomably long periods of time. This is just speeding up the process.

I see it as a testament to human ingenuity, intelligence, as well as the power and majesty of science. Both are awe-inspiring.

...

But let's look at it another way: say an alien species came along that was vastly superior to us both mentally and physically. Like humans to dogs. But they embraced us and shepherded us to new frontiers of the mind and body. Is this bad?

Hopefully not (for most, at least). And this is the same thing. Technology won't become our master (presumably), and we are the creators behind their mechanisms, so we can guide them to ever higher heights....which will bring humans up several levels in the process.

So quit being such a species-ist. I for one look forward to possibly having a conversation with a non-human sentience in my lifetime.

Zeal Ex Nihilo
Originally posted by DigiMark007
So quit being such a species-ist.
That is an unbelievable level of retardation. Don't say stupid shit like that.

Quiero Mota
laughing out loud

xmarksthespot
I see no need to fear a technological singularity... although it's relatively off topic...

DigiMark007
Originally posted by Zeal Ex Nihilo
That is an unbelievable level of retardation. Don't say stupid shit like that.

Why? It was kind of in jest, but I fail to see what's so stupid about it....especially since the quote is contextualized in a post that explains my position.

Anyway, you did nothing to address my points. Instead, you insulted me.

chithappens
Originally posted by Zeal Ex Nihilo


Not that this has much to do with anything, but I think it's far from exciting that humans will one day be completely inferior to our technology. I think it's horrifying.

For once, I agree with you

Zeal Ex Nihilo
Originally posted by DigiMark007
Why? It was kind of in jest, but I fail to see what's so stupid about it....especially since the quote is contextualized in a post that explains my position.

Anyway, you did nothing to address my points. Instead, you insulted me.
Because you implied that I implied that the human species is superior to another species (aside from animals).

Mindship
I'm hoping that when we do have AIs more intelligent than people, this will help to answer some questions, such as...

1. Exactly, what is intelligence? Ie, what will make these machines "smarter" than us? Will they just be better math-based problem-solvers? What about nonverbal/nondigital problem-solving? Creativity and insight? Do these play a role in human intelligence? How will we end up defining intelligence?

2. How does intelligence relate to consciousness? Motivation? Eg, will being smarter automatically instill a self-preservation "instinct?" What will AIs be "inherently driven" to figure out for themselves? (Personally, I don't think they will automatically deem us inferior and/or a threat to their existence.) And will machine intelligence highlight any paths to take in exploring/defining "consciousness?"

BTW, I don't recall anyone ever mentioning this site, but this...
www.orionsarm.com
...is a scifi site which takes the concept of AIs to very cool extremes.

Symmetric Chaos
Originally posted by Mindship
I'm hoping that when we do have AIs more intelligent than people, this will help to answer some questions, such as...

1. Exactly, what is intelligence? Ie, what will make these machines "smarter" than us? Will they just be better math-based problem-solvers? What about nonverbal/nondigital problem-solving? Creativity and insight? Do these play a role in human intelligence? How will we end up defining intelligence?

I figure if they start taking over, they're smarter in the ways that count.

Originally posted by Mindship
2. How does intelligence relate to consciousness? Motivation? Eg, will being smarter automatically instill a self-preservation "instinct?" What will AIs be "inherently driven" to figure out for themselves? (Personally, I don't think they will automatically deem us inferior and/or a threat to their existence.) And will machine intelligence highlight any paths to take in exploring/defining "consciousness?"

Depends on the form of the AI. A nonvolitional AI would have no such drive, of course, and a normal one could be programmed not to think about that (I assume). If they did think about superiority I don't see why they wouldn't deem themselves superior.

Originally posted by Mindship
BTW, I don't recall anyone ever mentioning this site, but this...
www.orionsarm.com
...is a scifi site which takes the concept of AIs to very cool extremes.

Have you ever read Hyperion? It has a great (if pessimistic) take on AIs that are so far above us as to exist as what are essentially Planck-tech beings.

Blax_Hydralisk
Originally posted by Symmetric Chaos
I figure if they start taking over, they're smarter in the ways that count.


This is why we keep you around laughing out loud

Mindship
Originally posted by Symmetric Chaos
I figure if they start taking over, they're smarter in the ways that count.
*thinking back to the 2000 Prez election...*

Have you ever read Hyperion? It has a great (if pessimistic) take on AIs that are so far above us as to exist as what are essentially Planck-tech beings.

I read the first book a long time ago. Great story, though IIRC the first book focused more on the pilgrims to the Time Tombs than on AI. But yeah, I understand the series has terrific AI stuff.

DigiMark007
Originally posted by Zeal Ex Nihilo
Because you implied that I implied that the human species is superior to another species (aside from animals).

No, I didn't. You said you were terrified of us being inferior to machine intelligence (in many ways we already are, btw). It didn't imply a species superiority but an inherent fear of another species (in this case, a synthetic AI being) trumping our capabilities. So yeah, it was slightly species-ist, just not in the way you thought I meant it.

I didn't mean it as an overt insult, however, just a jibe at your horror that I find to be a bit misplaced. Sorry you took it so harshly.


...

As for Mindship's musings about intelligence, most CPUs would generally wax the floor with us on standard IQ tests. Ironically enough, the exactitude of computers is something that we occasionally see as a fault. As if our penchant for making errors somehow makes us more intelligent (though it does add a certain seeming randomness to behavior that makes the human machine impossible to predict accurately).

But when we talk about obstacles in AI, we generally mean consciousness, and machine AI's (seeming) lack of it. But the fact remains that at some point along our evolutionary lineage we weren't consciously aware, and at another point we were. It obviously wasn't an on/off sort of thing, but gradual steps of consciousness...like how a dog is probably conscious but not at the level of humans.

Therefore, it's only a matter of complexity. Unfortunately, computer AI lacks the processor-on-processor complexity of the billions of interconnected units that humans possess, so it's many orders of magnitude away from achieving a human level of consciousness. But it isn't outlandish to imagine that we will be able to construct something with at least rudimentary awareness of itself within a lifetime or two.

We could argue until we die about what consciousness is: separate from or the same as the physical processes that give rise to it. But that isn't my point. Whichever one it is, the operative idea is that our physical nature gives rise to consciousness (regardless of whether it is itself physical or not), and so it is possible (though difficult) to create such beings artificially rather than having them grown over hundreds of millennia via evolution.

Mindship
Originally posted by DigiMark007
the operative idea is that our physical nature gives rise to consciousness (regardless of whether it is itself physical or not), and so it is possible (though difficult) to create such beings artificially rather than having them grown over hundreds of millennia via evolution.

Well, this is part of what I am wondering. Certainly, if we live in a fundamentally material universe (matter gives rise to consciousness), then it is just a matter of time before the complexity of our machines exceeds the complexity of the human brain and we will have AI superconsciousness.

However, if the mystical/transcendent paradigm is correct (Consciousness precedes and emerges through material complexity, not from it), then there may very well be an element to Consciousness that no machine, no matter how complex it is, will be able to replicate/possess.

I don't intend to open up a What Is Reality discussion here (Lord knows, we have enough threads on that topic). I was just elucidating where I was coming from in my musings.

DigiMark007
Originally posted by Mindship
Well, this is part of what I am wondering. Certainly, if we live in a fundamentally material universe (matter gives rise to consciousness), then it is just a matter of time before the complexity of our machines exceeds the complexity of the human brain and we will have AI superconsciousness.

However, if the mystical/transcendent paradigm is correct (Consciousness precedes and emerges through material complexity, not from it), then there may very well be an element to Consciousness that no machine, no matter how complex it is, will be able to replicate/possess.

I don't intend to open up a What Is Reality discussion here (Lord knows, we have enough threads on that topic). I was just elucidating where I was coming from in my musings.

I think the former is a much more realistic way of looking at things, as the latter theory requires a certain amount of belief/faith in the presence of mystical forces to accept.

In any case, I have no problem accepting the possibility that consciousness is separate from physical forces. But I see it as rather clearly a bottom-up construction rather than top-down.

Of course, the creation of computer AI would vindicate this position, but until then it's just educated hypotheses on both sides.

Newjak
Originally posted by DigiMark007
No, I didn't. You said you were terrified of us being inferior to machine intelligence (in many ways we already are, btw). It didn't imply a species superiority but an inherent fear of another species (in this case, a synthetic AI being) trumping our capabilities. So yeah, it was slightly species-ist, just not in the way you thought I meant it.

I didn't mean it as an overt insult, however, just a jibe at your horror that I find to be a bit misplaced. Sorry you took it so harshly.


...

As for Mindship's musings about intelligence, most CPUs would generally wax the floor with us on standard IQ tests. Ironically enough, the exactitude of computers is something that we occasionally see as a fault. As if our penchant for making errors somehow makes us more intelligent (though it does add a certain seeming randomness to behavior that makes the human machine impossible to predict accurately).

But when we talk about obstacles in AI, we generally mean consciousness, and machine AI's (seeming) lack of it. But the fact remains that at some point along our evolutionary lineage we weren't consciously aware, and at another point we were. It obviously wasn't an on/off sort of thing, but gradual steps of consciousness...like how a dog is probably conscious but not at the level of humans.

Therefore, it's only a matter of complexity. Unfortunately, computer AI lacks the processor-on-processor complexity of the billions of interconnected units that humans possess, so it's many orders of magnitude away from achieving a human level of consciousness. But it isn't outlandish to imagine that we will be able to construct something with at least rudimentary awareness of itself within a lifetime or two.

We could argue until we die about what consciousness is: separate from or the same as the physical processes that give rise to it. But that isn't my point. Whichever one it is, the operative idea is that our physical nature gives rise to consciousness (regardless of whether it is itself physical or not), and so it is possible (though difficult) to create such beings artificially rather than having them grown over hundreds of millennia via evolution.

I would like to point out that CPUs are stupid, not because we can make mistakes.


But because we can be inexact and still accomplish our goals. Computers cannot begin to cope with anything that is not programmed into them.

Symmetric Chaos
Originally posted by Newjak
I would like to point out that CPUs are stupid, not because we can make mistakes.


But because we can be inexact and still accomplish our goals. Computers cannot begin to cope with anything that is not programmed into them.

Programmers are working on that problem right now. One of the current ways AI is hoped to be achieved is by creating a child-like system that learns and applies information.
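To make the "child-like system" idea concrete, here is a minimal and entirely hypothetical sketch: a one-neuron perceptron that is never given the rule for the AND function, only labeled examples, and picks up the rule by adjusting its own weights.

```python
# Hypothetical toy: a one-neuron perceptron that is never given the rule
# for AND, only labeled examples, and learns by adjusting its own weights.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when right; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The AND function, shown only as examples, never written out as rules.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The point isn't the algorithm (perceptrons date to the 1950s) but the shape of the idea: the behavior comes from training data, not from a programmer enumerating every case.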

Newjak
Originally posted by Symmetric Chaos
Programmers are working on that problem right now. One of the current ways AI is hoped to be achieved is by creating a child-like system that learns and applies information.

I know, it's interesting stuff, but the problem is that if it runs into too many problems it could crash and reboot, losing everything.

Still interesting stuff.

Symmetric Chaos
Originally posted by Newjak
I know, it's interesting stuff, but the problem is that if it runs into too many problems it could crash and reboot, losing everything.

Still interesting stuff.

At least that would prove that it was capable of discovering and trying to solve problems.

Newjak
Originally posted by Symmetric Chaos
At least that would prove that it was capable of discovering and trying to solve problems.

That is the most interesting part though.


Because we need a way for the program to essentially program itself into new programming parameters. That is a lot harder to do than people think.

~Forever*Alone~
Originally posted by DigiMark007
Humans will eventually be more than human anyway, if we allow evolution to continue to do its work over unfathomably long periods of time. This is just speeding up the process.

I see it as a testament to human ingenuity, intelligence, as well as the power and majesty of science. Both are awe-inspiring.

...

But let's look at it another way: say an alien species came along that was vastly superior to us both mentally and physically. Like humans to dogs. But they embraced us and shepherded us to new frontiers of the mind and body. Is this bad?

Hopefully not (for most, at least). And this is the same thing. Technology won't become our master (presumably), and we are the creators behind their mechanisms, so we can guide them to ever higher heights....which will bring humans up several levels in the process.

So quit being such a species-ist. I for one look forward to possibly having a conversation with a non-human sentience in my lifetime.

OMG! you mean we could create our own non-human sentient being?

Mindship
Originally posted by Newjak
Because we need a way for the program to essentially program itself into new programming parameters.

If I understand this correctly, you need a program which can expand itself by anticipating/creating properties which, in its current state, do not exist. Put another way: you need an arithmetic program which can self-evolve into calculus.

Newjak
Originally posted by Mindship
If I understand this correctly, you need a program which can expand itself by anticipating/creating properties which, in its current state, do not exist. Put another way: you need an arithmetic program which can self-evolve into calculus.

Kind of.

But to expand on it more.

A computer is only as smart as a human tells it to be. To have AI we need a computer that can take in information and then create a program to execute on that input.

That is a lot harder to do than people realize, because current computers cannot do that, or they can but only within the parameters humans give them.

For instance, having an AI that can see danger coming and then create a program out of the blue that deals with that danger, like someone coming at it with a gun or an axe.

Zeal Ex Nihilo
Originally posted by ~Forever*Alone~
OMG! you mean we could create our own non-human sentient being?
A nice ****-buddy.

Mindship
Originally posted by Newjak
Kind of.

But to expand on it more.

A computer is only as smart as a human tells it to be. To have AI we need a computer that can take in information and then create a program to execute on that input.

That is a lot harder to do than people realize, because current computers cannot do that, or they can but only within the parameters humans give them.

For instance, having an AI that can see danger coming and then create a program out of the blue that deals with that danger, like someone coming at it with a gun or an axe.
Seems we may require a science of synergy or emergent properties (chaos theory?).

Wålshy
that vid was amazing

i also found it kinda funny

Newjak
Originally posted by Mindship
Seems we may require a science of synergy or emergent properties (chaos theory?).

Perhaps, but that still may not get past the fundamental problem.

That being: at some point, any AI will be forced to go past any human parameters or input. That gap between human input and self-input is a major bridge that needs to get crossed. If someone can get past it, then it's all observation and information gathering.

And one of the most interesting concepts I enjoy about this is: if we do bridge that gap, would the AI lose its fundamental ability to do logic and thus lose anything it may have over a human being?

Mindship
Originally posted by Newjak
...at some point any AI will be forced to go past any human parameters or input. That gap between human input and self-input is a major bridge that needs to get crossed.

This is what I'm wondering. What factors/properties are required for this gap to be crossed? If one looks to the functioning of the human brain (and I will keep this on a purely empirical level) for an answer, one could argue that emergent properties arise from synergistic genetic coding (I suppose one could argue even further for a role quantum mechanics may play), which enable a person (as they mature) to move from simple image processing (sensorimotor functioning) through simple symbolic processing all the way to complex symbolic/metacognitive functioning.

And one of the most interesting concepts I enjoy about this is: if we do bridge that gap, would the AI lose its fundamental ability to do logic and thus lose anything it may have over a human being?

Good question. Generally speaking: very often, when we think we have a phenomenon figured out and then get an opportunity to test/observe it, factors arise out of the blue (eg, what we found regarding the moons of the outer planets vs what we thought we'd find; cloning--weren't there unexpected problems with telomeres?).

Newjak
Originally posted by Mindship
This is what I'm wondering. What factors / properties are required for this gap to be crossed? If one looks to the functioning of the human brain (and I will keep this on a purely empirical level) for an answer, one could argue that emergent properties arise from synergistic genetic coding (I suppose one could argue even further for a role quantum mechanics may play), which enable a person (as they mature) to move from simple image processing (sensorimotor functioning) through simple symbolic processing all the way to complex symbolic/metacognitive functioning.

Good question. Generally speaking: very often, when we think we have a phenomenon figured out and then get an opportunity to test/observe it, factors arise out of the blue (eg, what we found regarding the moons of the outer planets vs what we thought we'd find; cloning--weren't there unexpected problems with telomeres?).

I would go even simpler than that.

A human can make decisions based off of true, not true, OR possibly both. A computer as-is can only work if something is true or false. It cannot deal with that middle ground. I think any attempt at making a true AI will have to bridge that gap: the ability to be imprecise when true or false is not known.


And I know what you mean about problems coming out of the blue. Most people, when thinking about true AI, picture a computer as it is now but with human thinking. The problem is that a computer is precise, detailed, and unyielding in its demand for perfection, to the point that not a single thing can be undecided or wrong inside its program.

The very nature of a true AI, though, leads to a computer being imperfect. It leads to an interesting question: how can a computer function as a computer when precision has to be removed from its required logic?

In essence, by making it more human, do we in fact make it less like a computer, so much so that it cannot function as a computer anymore?
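For what it's worth, the true/false/"possibly both" middle ground described above has a standard formalization: Kleene's strong three-valued logic. A minimal sketch (using Python's None as the "unknown" value) shows a program representing and propagating uncertainty instead of choking on it:

```python
# Kleene's strong three-valued logic, with None standing in for "unknown".
# False dominates AND, True dominates OR, and unknowns propagate otherwise.

def and3(a, b):
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def or3(a, b):
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

def not3(a):
    return None if a is None else not a

print(and3(True, None))   # None: can't decide without more information
print(or3(True, None))    # True: decided despite the unknown input
```

SQL's NULL behaves essentially this way, so "a computer cannot deal with that middle ground" is a little too strong: it can represent the middle ground explicitly, it just has to be designed to.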

Mindship
Originally posted by Newjak
A human can make decisions based off of true, not true, OR possibly both... I think any attempt at making a true AI will have to bridge that gap: the ability to be imprecise when true or false is not known.
The computer can't process uncertainty/mystery.

The very nature of a true AI, though, leads to a computer being imperfect. It leads to an interesting question: how can a computer function as a computer when precision has to be removed from its required logic?

Very interesting question. Humans can make decisions with uncertainty as a variable because, well, that's how life is: ultimately uncertain, and we are life. Could life be fully digitized?

In essence, by making it more human, do we in fact make it less like a computer, so much so that it cannot function as a computer anymore?

Wouldn't that be a kick in the teeth... smokin'

Newjak
Originally posted by Mindship
The computer can't process uncertainty/mystery.

Very interesting question. Humans can make decisions with uncertainty as a variable because, well, that's how life is: ultimately, uncertain, and we are life. Could life be fully digitized?

Wouldn't that be a kick in the teeth... smokin'

Nope, it cannot process uncertainty.

Could life be fully digitized, hmmm that is a hard one.


That would be funny.

And my thinking on the subject is this. What makes a computer the great piece of equipment it is? Its precision and its ability to do massive amounts of work in little time.

What makes a computer so precise, though, is that it has a sort of built-in barrier.

I'll create a scenario. Say you want to have a computer add 1 a billion times. The computer will only add when the variable is equal to one. So if, for some reason (data corruption or a bad input), the number changes, the computer evaluates its parameters as false because the number isn't one. Therefore: error, cannot do, please change the number given.

Its precision is a by-product of its detail- and perfection-oriented nature. But once you introduce the idea that something can be true and false, or both, it loses its built-in precision judge. This could potentially leave the computer open to flaws. Instead of refusing the two and adding only the one, a true AI may go "well, it could be true." Thus imprecision can take place, and thus AI could potentially make the computer worthless for its intended purpose.
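The add-1 scenario above can be sketched directly. In this hypothetical toy (scaled down from a billion), the strict loop behaves like the classic computer, halting on any value that isn't exactly 1, while the tolerant loop behaves like the imprecise "true AI" described, accepting near-1 values and quietly skipping clearly bad data:

```python
# Hypothetical toy, scaled down from "add 1 a billion times": the strict
# loop halts on any value that isn't exactly 1; the tolerant loop accepts
# anything close to 1 and quietly skips clearly bad data.

def strict_sum(values):
    total = 0
    for v in values:
        if v != 1:
            raise ValueError(f"expected 1, got {v!r}; please change the number given")
        total += v
    return total

def tolerant_sum(values, tol=0.1):
    total = 0
    for v in values:
        if abs(v - 1) <= tol:   # "well, it could be true" -- accept it
            total += 1
    return total

stream = [1, 1, 1.05, 2, 1]     # 1.05: a slightly corrupted 1; 2: bad data
print(tolerant_sum(stream))      # 4 -- the near-1 counts, the 2 is skipped
# strict_sum(stream) would raise ValueError at the 1.05
```

The trade-off is exactly the one in the post: the tolerant version keeps working through corruption, but it can no longer guarantee its answer is exact.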

dadudemon
Originally posted by Newjak
Nope, it cannot process uncertainty.

Could life be fully digitized, hmmm, that is a hard one.

That would be funny.

And my thinking on the subject is this. What makes a computer the great piece of equipment it is? Its precision and its ability to do massive amounts of work in little time.

What makes a computer so precise, though, is that it has a sort of built-in barrier.

I'll create a scenario. Say you want to have a computer add 1 a billion times. The computer will only add when the variable is equal to one. So if, for some reason (data corruption or a bad input), the number changes, the computer evaluates its parameters as false because the number isn't one. Therefore: error, cannot do, please change the number given.

Its precision is a by-product of its detail- and perfection-oriented nature. But once you introduce the idea that something can be true and false, or both, it loses its built-in precision judge. This could potentially leave the computer open to flaws. Instead of refusing the two and adding only the one, a true AI may go "well, it could be true." Thus imprecision can take place, and thus AI could potentially make the computer worthless for its intended purpose.

I think we have gotten a little off track with this discussion. No, it will still be composed of many circuits that basically do math. Because it is electronic, it will inherently be in a state of constant mistakes; a computer is built to get around those mistakes. (Such as DRAM needing to be constantly refreshed because it loses its charge so fast... a flaw that has a built-in workaround.)
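A parity bit is about the simplest example of that "built-in workaround" idea: the hardware accepts that bits will sometimes flip and adds redundancy so the error can at least be detected.

```python
# Even parity: append one extra bit so the total number of 1s is even.
# Hardware can then detect (though not locate or correct) a single flipped bit.

def add_parity(bits):
    return bits + [sum(bits) % 2]

def check_parity(word):
    return sum(word) % 2 == 0

word = add_parity([1, 0, 1, 1])
print(check_parity(word))   # True: stored word is intact

word[2] ^= 1                # a single bit flips in storage
print(check_parity(word))   # False: the corruption is detected
```

Real ECC memory uses stronger codes (e.g. Hamming SECDED) that can also correct single-bit errors, but the principle is the same: accept that the substrate makes mistakes and engineer around them.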

The hardware that houses the AI can still work with explicit values and extrapolate explicit results. It is the interpretation of that data and how the AI works with that data that would define its status of "AI".

Also, AI would be a software program run on very advanced hardware. (Advanced... relative to our current standards.) This software would still, or rather it should, be programmed with self-programmable parameters. You would not want an AI program to alter certain portions of its own code, but you would want it to be adaptable enough to call it AI. You would actually allot specific attributes as "modifiable".
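The "allot specific attributes as modifiable" idea can be sketched as a whitelist: the program may retune listed parameters, but touching anything else is refused. All the names here (learning_rate, safety_rules, and so on) are made up for illustration:

```python
# Sketch of a self-modification whitelist: the agent may retune the
# attributes listed in MODIFIABLE, but anything else is refused.
# All names here are hypothetical.

class AdaptiveAgent:
    MODIFIABLE = {"learning_rate", "exploration"}

    def __init__(self):
        self.learning_rate = 0.1
        self.exploration = 0.5
        self.safety_rules = "never harm operators"   # deliberately protected

    def self_modify(self, name, value):
        if name not in self.MODIFIABLE:
            raise PermissionError(f"{name} is not a modifiable attribute")
        setattr(self, name, value)

agent = AdaptiveAgent()
agent.self_modify("learning_rate", 0.05)   # allowed: the agent tunes itself
print(agent.learning_rate)                 # 0.05
# agent.self_modify("safety_rules", "") would raise PermissionError
```

This is the same shape as the Virtual Intelligence idea mentioned below the quote: adaptability inside hard parameters the program cannot rewrite.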

The above could all be rubbish as we learn more about AI. But what good is AI if it doesn't do you any good because it programs itself to just sit around and smoke weed all day?

The writers of Mass Effect hit it on the head a little better when they described a distinction between two kinds of computer intelligence: Virtual Intelligence vs. Artificial Intelligence. A Virtual Intelligence has some sort of parameters that prevent it from being able to do certain things. This would stop the machine from becoming truly sentient but allow it to still adapt to the work/tasks it has been assigned.

Again, I am talking out of my ass because we are not even in our infancy when it comes to AI...but I believe ethical laws will have to be drawn up like in Mass Effect.
