On The Technological Singularity

Oneness
The Singularity does not necessitate Transhumanism (cybernetic brains) because machines will never be able to think unless we program them with emotions.

Technological transhumanism is also unnecessary for transhumanism in general, since it isn't needed for us to achieve biological immortality. Nucleotide sequences can be altered and repurposed using embryonic stem cells, which can then be introduced into a human to rewrite his or her DNA code, while nanotechnology constantly trims away parts of the strands that degrade or fail, maintaining this rechristened DNA code.

Transhumanism does, however, ensure that a technological utopia that accommodates, or even tolerates, humans will never be achieved. And to program emotions into these computers would be an act of genocide, because emotions are the only incentives that aren't arbitrated by rules and regulations.

Still, the singularity can happen without self-aware artificial intellects. Computers are already superior at carrying out any task a human could. Machines and computers are interchangeable; they are physically limitless in application, vastly more far-seeing in calculation, self-sustaining, and ever-improving.

Lastly, the singularity is necessary for a technological utopia, which is in turn necessary to make human life liberated, fair, prosperous, and simple enough for the golden rule to come into effect.

Lord Lucien
Ray Kurzweil predicts it to happen by 2045, so... huzzah!

zrh23
Layman's terms? That's a whole lot of jargon talk for something as easy as saying we don't need emotional computers to reach an event horizon.

Oneness
Originally posted by Lord Lucien
Ray Kurzweil predicts it to happen by 2045, so... huzzah!

So we'll have the technology to design and build a self-sustaining, self-improving, work-free, money-less, classless industrial infrastructure within 31 years.

How long do you think it would take to remove the modern greed-based society's caste system to allow for said machine-run classless society?

Maybe, say, 17 years of exposing the wrong business deals to the right people via global spying? Illegally cracking every database on earth for the sake of spying on everyone may be wrong, but information about the greedy can be useful in the right hands, with the right propaganda and exposure.

It used to be called muckraking, only these days a filthy-rich, stupendously brilliant young man can do far more damage to, say, General Motors, or to the current officials who compose the House of Representatives, than a billion-strong army could.

There will be no hesitation, there will be no mistakes.

Oneness
Originally posted by zrh23
Layman's terms? That's a whole lot of jargon talk for something as easy as saying we don't need emotional computers to reach an event horizon.

It's more informative than the aforementioned simplistic explanation.

zrh23
Absolutely no doubt. I read your threads, always entertaining. No disrespect meant.

loaderharrison
Yeah, it's correct.

Shakyamunison
Originally posted by Oneness
The Singularity does not necessitate Transhumanism (cybernetic brains) because machines will never be able to think unless we program them with emotions.

Technological transhumanism is also unnecessary for transhumanism in general, since it isn't needed for us to achieve biological immortality. Nucleotide sequences can be altered and repurposed using embryonic stem cells, which can then be introduced into a human to rewrite his or her DNA code, while nanotechnology constantly trims away parts of the strands that degrade or fail, maintaining this rechristened DNA code.

Transhumanism does, however, ensure that a technological utopia that accommodates, or even tolerates, humans will never be achieved. And to program emotions into these computers would be an act of genocide, because emotions are the only incentives that aren't arbitrated by rules and regulations.

Still, the singularity can happen without self-aware artificial intellects. Computers are already superior at carrying out any task a human could. Machines and computers are interchangeable; they are physically limitless in application, vastly more far-seeing in calculation, self-sustaining, and ever-improving.

Lastly, the singularity is necessary for a technological utopia, which is in turn necessary to make human life liberated, fair, prosperous, and simple enough for the golden rule to come into effect.

I don't believe in the Technological Singularity. New discoveries will make it obsolete.

dadudemon
Originally posted by Shakyamunison
I don't believe in the Technological Singularity. New discoveries will make it obsolete.

Would not new discoveries lead to the technological singularity?

Shakyamunison
Originally posted by dadudemon
Would not new discoveries lead to the technological singularity?

Maybe, but most likely not. We humans are not very good at predicting the future. It always turns out different than we imagine.

dadudemon
Originally posted by Shakyamunison
Maybe, but most likely not. We humans are not very good at predicting the future. It always turns out different than we imagine.

I'm the opposite. I see it as a probabilistic inevitability. Unless humans do something to curb the growth of our AI technologies, there's really no avoiding it. Even if we did that, someone, somewhere, will make true AI and it will be too late before we can do anything about stopping it.

I cannot see a way around this short of humans destroying themselves, first, or humans strangely agreeing to not do 1 particular thing with technological development.

Shakyamunison
Originally posted by dadudemon
I'm the opposite. I see it as a probabilistic inevitability. Unless humans do something to curb the growth of our AI technologies, there's really no avoiding it. Even if we did that, someone, somewhere, will make true AI and it will be too late before we can do anything about stopping it.

I cannot see a way around this short of humans destroying themselves, first, or humans strangely agreeing to not do 1 particular thing with technological development.

Why would AI be something that would destroy us? What if it became dependent? Imagine a laptop that loves you. wink

Oneness
Originally posted by Shakyamunison
Why would AI be something that would destroy us? What if it became dependent? Imagine a laptop that loves you. wink

Our destruction would be logical.

There's a difference between a program that literally cannot achieve sentience and a human mind in a virtual substrate. But there'd be plans within plans against that: a preemptive Butlerian Jihad. Schools would be built to train Mentats.

In all probability, that kind of preemptive control is working now for WMDs; it can work for AI if we recognize the risk before it's too late.

Transcendence, with Johnny Depp, got it wrong in that everyone who undergoes apotheosis into binary will not join one man's ego or become extensions of it. It's very much a preferable way to go. Your chances of survival are increased googols-fold, and you can set up any experience you want. It'd be The Garden.

Sci-fi has never been spot-on in its depictions of what mind-uploading would truly be like. The in vivo approach does not break one's stream of consciousness, so you are literally going into cyberspace; the you in cyberspace is not a copy of you like in Transcendence. It's truly paradise in reality.

It's the most beautiful and exciting thing: every human left on Earth liberated from suffering the instant it happens.

Oneness
Perhaps there are misunderstandings about what I'm writing, because it is the kind of thing that, no matter who you are, should make the hairs on the back of your neck stand up once you get the idea. By the time there are about 1 trillion humans, long, long after the first computers can fully mimic an individual's thinking patterns and upload that conscious entity into cyberspace, self-improving intelligence, strong AI, will have the sophistication to carry out an in vivo approach (something I argued diligently about with a psychologist here over a year ago, on whether it will ever be possible). The in vivo approach is to send nanites into a live human brain and slowly kill off neurons, replacing them with artificial ones. The human himself won't die; his brain will simply change from being composed of organic molecules to the silicon that comprises the artificial neurons.

The issue is that the biochemical neurons in the human brain would basically be tasered by these artificial neurons, because the artificial ones are silicon-based and the normal firing of synapses goes out the window. Perhaps a more sophisticated intellect could make them compatible. And anything is possible given ample time; the laws of thermodynamics themselves change over a googols-of-years time-frame.
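
To make the gradual-replacement idea concrete, here is a toy "ship of Theseus" sketch in Python. The neuron count is a standard rough figure; the daily replacement rate is purely an assumption for illustration, not anything from this thread.

# Toy model of in vivo neuron replacement: swap a tiny fraction of
# neurons per day, so the brain as a whole never goes offline even
# though every component eventually becomes artificial.
total_neurons = 86_000_000_000   # rough count for a human brain
rate_per_day = 0.001             # assumed: replace 0.1% of what's left each day

organic = float(total_neurons)
days = 0
while organic >= 1:              # until no organic neurons remain
    organic *= (1 - rate_per_day)
    days += 1

print(f"Full conversion after ~{days} days (~{days / 365:.0f} years),")
print("with >99.9% of the network functioning at every step.")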

Oneness
This is where it gets spooky. The doomsday argument implies that by the time there are around 1 trillion humans, spread across many habitable worlds and space stations supported by super-human AI, humanity will cease to exist.
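
To put rough numbers on that, here is a minimal Python sketch of the Carter-Leslie doomsday argument; the ~100 billion birth-rank figure is a common outside estimate, not something from this thread.

# Doomsday argument sketch: if your birth rank n is uniformly
# distributed among the N humans who will ever live, then with
# confidence c we get N < n / (1 - c).
def doomsday_upper_bound(birth_rank, confidence=0.95):
    return birth_rank / (1.0 - confidence)

n = 100e9  # assumed: ~100 billion humans born to date
print(f"95% upper bound on all humans ever: {doomsday_upper_bound(n):.1e}")
# -> 2.0e+12, i.e. roughly 2 trillion, which is why totals near
#    1 trillion keep coming up in doomsday-argument discussions.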

That seems contradictory: if humans have biological immortality, with nanites that can pump fresh molecules into the DNA and RNA strands so they never degrade at all, and if, technologically, we're at our peak, most capable and able to survive, then how in the hell do we just drop dead?

Answer: the in vivo approach. Nobody would die; they'd just cease to be human.

Conversely, cosmic expansion may make it impossible for humans to exist. However, we could even survive that by utilizing zero-point energy for time travel: since the smallest unit of space cannot support more than x amount of energy, exceeding that limit generates a rift that, when passed through, takes you to the universe at an earlier point in time. Therefore, humans don't die; they just disappear from the current timeline.

Shakyamunison
Originally posted by Oneness
Our destruction would be logical...

And this is from the most illogical person I know.

I didn't read the rest.

Oneness
I said logical. Think about all the upkeep; we'd be a bit of a distraction to keep around, the way a pet costs some money to keep.

But no conscious mind thinks like that. Only a simple program.

There's real reason to fear something more cognitively powerful than every human mind combined.

dadudemon
Originally posted by Shakyamunison
Why would AI be something that would destroy us?

I didn't state or imply that, but that is one of the possible outcomes of the "Technological Singularity."

I said:

"I cannot see a way around this short of humans destroying themselves, first, or humans strangely agreeing to not do 1 particular thing with technological development."

In the above sentence, "this" = Technological Singularity.

To state it even more directly, one of the few ways we can prevent the Technological Singularity is if we completely wipe out all humans before we create true AI.

The only other probable way I can see to prevent the Technological Singularity is all humans, for the rest of human existence, agreeing not to create AI beyond a certain point.


I cannot think of any other probable, non-God-interference ways to prevent it from happening.

Originally posted by Shakyamunison
What if it became dependent? Imagine a laptop that loves you. wink

As a kid, I used to speculate that humans created God via AI. Then that AI transcended time and space and started interacting with humans throughout history. smile

Shakyamunison
Originally posted by dadudemon
...I cannot think of any other probable, non-God-interference ways to prevent it from happening...

But what if it doesn't happen? I think that AI will take millions of years to evolve into existence.

dadudemon
Originally posted by Shakyamunison
But what if it doesn't happen? I think that AI will take millions of years to evolve into existence.

We already have advanced AI, so now what are your thoughts?


We are just shy of making human-like intelligence. Seriously. There are people who study these things. On the current path of improvement, we are looking at it happening around 2024-2025. Check out this guy:

http://en.wikipedia.org/wiki/Nigel_Shadbolt


I don't know how to word it any better: it is currently inevitable. Unless something drastic changes, we are right on track. Google may have already obtained "near-human" AI with one of their projects, DeepMind. Also, check out IJCAI.


The only reason people seem skeptical of AI is that it seems like sci-fi. It's not. It is here, now. AI has taken off in the last 10-15 years. This is no different than improving our particle accelerators over the last 20 years. People seem to have no problem with particle physicists working on better particle accelerators that are on track to come online in 10-15 years (such as the Large Hadron Collider, which took 20 years to build and a few more years after that to bring the equipment up to fully operational status). People can buy that. They can digest it and accept it despite the fact that it was the very cutting edge of particle physics. Why do people accept that and not the AI projects? Because particles do not think back at you. AI does. This scares people. So people become skeptics and doubters.

People are going to shit themselves and be utterly shocked when we release a "very near" human-like AI. All we have to do is create an AI that is close enough to human-like intelligence that it can start improving itself at a decent pace (meaning better than what we have now, not necessarily anywhere close to what a human can do... just good enough to resemble human performance, because computers do not tire, so they can continue to work while we poop or sleep).
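
Here's a minimal sketch of that feedback loop in Python; every number is an assumption picked purely for illustration.

# Toy model of recursive self-improvement: capability compounds each
# cycle once the AI can improve itself at all. Numbers are arbitrary.
capability = 0.9        # assumed: just below a human baseline of 1.0
rate = 0.05             # assumed: 5% gain per improvement cycle
cycles = 0
while capability < 1000:            # arbitrary "far beyond human" mark
    capability *= 1 + rate          # each cycle builds on the last
    cycles += 1
print(f"{cycles} cycles to reach 1000x the human baseline.")
# ~144 cycles -- and unlike us, the machine never stops to sleep,
# so nothing bounds how fast those cycles come.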

Shakyamunison
dadudemon, I think we are getting better at imitating intelligence, but I don't think it is true AI.

dadudemon
Originally posted by Shakyamunison
dadudemon, I think we are getting better at imitating intelligence, but I don't think it is true AI.

Regardless of whether or not it ends up being a P-Zombie, as long as we develop AI that can improve itself faster than most humans can, it will cause us to end up in the Technological Singularity situation...unless we put a stop to it.

Shakyamunison
Originally posted by dadudemon
Regardless of whether or not it ends up being a P-Zombie, as long as we develop AI that can improve itself faster than most humans can, it will cause us to end up in the Technological Singularity situation...unless we put a stop to it.

Or unless the future unfolds in a different way than we can foresee.

dadudemon
Originally posted by Shakyamunison
Or unless the future unfolds in a different way than we can foresee.

That's always a possibility. As of right now, we are on an inevitable path for "product development."


I was going to write a paper, in college, on the specifics of what it would take to write a software program that fits the AI community's definition of "True AI." I selected a different project for that class, however, and wrote about memristors (which was, surprisingly, related to AI).

Basically, the only reason we didn't create this stuff back in the 80s or 90s, when the computing power was there, is that we did not have the human resources. My initial estimate, from 3 years ago, was that about 2 million programmers would be required to complete the project in 2 years or less. That was a very liberal estimate, given my limited knowledge of everything we have available.

I am sure it would take far fewer programmers than that to pull this off. Also, I say programmers, but you would have lead architects and engineers involved, too (perhaps).

Basically, the only thing stopping us, now, from creating an AI that is much, much smarter than a human is that no company or country can spend that much time and effort to pull something like this off. Even just a million programmers would cost $120,000,000,000 for a 2-year project, and that's excluding facility resources and hardware. I'm sure the federal government could afford to put out those funds; they've done so with the F-35. But getting several senior architects and hundreds of thousands of programmers together to complete the project's various objectives would be insurmountable, currently. What is holding us back is code that writes itself. We need more languages that do this for us in the programming world. When we get to a certain level, the number of programmers involved can drop drastically. We need AI to create AI, so to speak.
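
As a quick sanity check on that figure, using only the numbers from this post:

# $120B for a million programmers over two years implies this
# fully loaded cost per programmer per year.
programmers = 1_000_000
years = 2
total_cost = 120_000_000_000
print(f"${total_cost / (programmers * years):,.0f} per programmer-year")  # $60,000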

Oneness
An important thing to consider: with super-advanced genetic augmentation, intensive breeding on a global-population scale, a Venus Project-esque civilization with collaborative-economy aspects, and full utilization of the increased cognitive surplus such a civilization offers, tens of billions of super-humans could theoretically compete with a full-on strong AI on a cognitive level, long enough for a permanent remedy to mortality (neuron-to-artificial conversion in a live human brain) to be found.

Shakyamunison
dadudemon, where are the flying cars? wink

Oneness
Originally posted by Shakyamunison
dadudemon, where are the flying cars? wink

Transcontinental magnetic-tube vac-trains can go 2,000 miles per hour at two percent of the energy cost of the airline industry.
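
A back-of-the-envelope check using the speed figure above; the coast-to-coast distance is my assumption, not from the post.

# Travel time at the claimed 2,000 mph over an assumed New York
# to Los Angeles distance of ~2,800 miles.
distance_miles = 2_800
speed_mph = 2_000
print(f"Coast to coast in ~{distance_miles / speed_mph:.1f} hours")  # ~1.4 hours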

Oneness
With our technology, a society free of fossil fuels and nuclear waste, where nothing could possibly cost more than a few cents, is entirely doable.

But it won't happen, for the simple fact that it requires massive collaborative planning, and the parts for the cities' cutting-edge technology would cost a fortune (even though the autonomy of these cities would save boundless amounts of money). Everyone but me acts on the precipice; my predecessors, you Homo sapiens, seem too lazy to do anything about it, unlike my Homo-superior kin.

Shakyamunison
Originally posted by Oneness
Transcontinental magnetic-tube vac-trains can go 2,000 miles per hour at two percent of the energy cost of the airline industry.

The point: in the '50s, people imagined the skies of the future (the year 2000) filled with flying cars. It just goes to show that we are not very good at predicting the future.

Oneness
The issue with the civilization I am talking about is that it requires world peace. It's wide open to hospital bombers, terrorists, etc., ad infinitum. Right now the biggest problem is that all the money goes into defending ourselves from ourselves, and into invading all private boundaries to do so; greedy rich people not only exploit our federal protection, investment, and technologies, they exploit them to the point of making everyone beneath them close to slaves. That happened, and it's still happening. It's a paroxysmal attack on humanity: most humans sacrifice countless liberties and luxuries to benefit the few wealthiest humans on the planet.

Perhaps a good thing that could come out of the dangers of AI is the society I'd like. AI's risk could unite us in that regard. Convince the jihadists, driven by fear and hatred of these new inorganic lifeforms, that our only viable option is genetic augmentation and a collaborative restructuring of the world, and bang, you get the cooperation to make this a world worth living in.

The enemy of my enemy is my friend.

dadudemon
Originally posted by Shakyamunison
dadudemon, where are the flying cars? wink


We have them. We have had them for decades. In 1957, 3 prototypes were built and delivered to the US Army, but the Army decided they didn't have practical military application and abandoned the project.

Edit - Also, based on the stuff the FAA has already done for planes, we could actually scale up the autopilot technology for single- or double-passenger planes (flying cars, basically). Literally, planes could take off and land all over the US just by scaling up the autopilot infrastructure the FAA already has in place.

Sure, that would be a multi-hundred-billion-dollar project. But it can be done now, with the technology we have now. Just like with True AI, the problem is convincing enough high-ups in the government or commercial sector to fund programs like those. no expression

Oneness
Originally posted by dadudemon
convincing enough high-ups in the government or commercial sector to fund programs like those. no expression

eek! eek! eek!

laughing laughing laughing

You crazy mother****er

The problem is that we haven't yet marched on them, quite brutally, with billions of disgruntled citizens.

dadudemon
Originally posted by Oneness
eek! eek! eek!

laughing laughing laughing

You crazy mother****er

The problem is that we haven't yet marched on them, quite brutally, with billions of disgruntled citizens.

I don't know what you're talking about but what I was talking about has been proposed to the higher-ups in the DoT and FAA, several times, and vestiges of those suggested programs are being developed or have been developed, already.

Here is an example of one of those programs, now:

https://www.faa.gov/nextgen/


My brother-in-law was one of the project leads for this program. smile

Oneness
Originally posted by dadudemon
I don't know what you're talking about but what I was talking about has been proposed to the higher-ups in the DoT and FAA, several times, and vestiges of those suggested programs are being developed or have been developed, already.

Here is an example of one of those programs, now:

https://www.faa.gov/nextgen/


My brother-in-law was one of the project leads for this program. smile

I say, since there's no way to know for sure who's behind exploiting our federal data for personal gain and global economic sabotage, we band together and hang every rich person and right-wing elected official in the country, plus some of the Democrats (really everyone but Obama and his goons and the Green Party).

Except for that one multi-billionaire who said we should only work 4 hours a day, 4 days a week. I forget who it was; he was on 60 Minutes.

dadudemon
Originally posted by Oneness
I say, since there's no way to know for sure who's behind exploiting our federal data for personal gain and global economic sabotage, we band together and hang every rich person and right-wing elected official in the country, plus some of the Democrats (really everyone but Obama and his goons and the Green Party).

You do realize that saying we should commit mass murder of thousands of people is not protected speech, right? This is not something that the First Amendment offers protection for. This puts you on watch lists. This creates probable cause for law enforcement to get a warrant and search your home, your person, your place of employment, and your car. Suggesting the murder of anyone, regardless of the forum, is a very stupid idea...unless it is the murder of enemy military combatants of the US...but then it is not murder.

My advice is to never state or even allude to something like this, again.


Regardless, everything you state here is only very, very tangentially related to what I stated.

Originally posted by Oneness
Except for that one multi-billionaire who said we should only work 4 hours a day, 4 days a week. I forget who it was; he was on 60 Minutes.

Sounds like something Warren Buffett would say.

Oneness
Because of that mouth, I've been on the watch-list, bro.

They know I say crazy shit every once in a while; they know I do drugs but for some reason haven't come after me for that; they know my phuck-ups. And if I were to actually meet with the wrong people or start indicating I'm actually planning a senseless attack, I'm done. But I'm not going to do that, metaphysics avail me.

Oneness
That's the sacrifice I make for greatness: my privacy. A fed literally threatened to get me put in a psych ward on another forum. I might be there right now if I hadn't realized who he was.

Though in a hundred years this government might not be in place anymore, I'll still be here, if I don't say or do the wrong thing. It's difficult to purchase super-foods and afford nano-surgery when you're in Gitmo.

Oneness
All this being said, I can't wait for AI to shut this show down. I'll outlive the show.

To me? Modern society (this show) is mad.

I think I've said it all.
