The Singularity does not necessitate transhumanism (cybernetic brains), because machines will never be able to think unless we program them with emotions.
Technological transhumanism is also unnecessary for transhumanism in general, since it is not needed for us to achieve biological immortality. Nucleotide sequences can be altered into anything using embryonic stem cells, whose edited code can then be introduced into a human, rewriting his DNA to do anything, while nanotechnology constantly trims off parts of the strands that degrade or fail, maintaining this rechristened DNA code.
Transhumanism does, however, ensure that a technological utopia that accommodates or even tolerates humans will never be achieved. Yet to program emotions into these computers would be an act of genocide: emotions are the only incentives that aren't arbitrated by rules and regulations.
However, the Singularity can happen without self-aware artificial intellects. Computers are already superior at carrying out any task a human could. Machines and computers are interchangeable: they are physically limitless in application, vastly more far-seeing in calculation, self-sustaining, and ever-improving.
Lastly, the Singularity is necessary for a technological utopia, which is in turn necessary to make human life liberated, fair, prosperous, and simple enough for the golden rule to come into effect.
__________________ "Compounding these trickster aspects, the Joker ethos is verbally explicated as such by his psychiatrist, who describes his madness as "super-sanity." Where "sanity" previously suggested acquiescence with cultural codes, the addition of "super" implies that this common "sanity" has been replaced by a superior form, in which perception and processing are completely ungoverned and unconstrained"
Last edited by KillaKassara on Jul 26th, 2014 at 03:25 AM
Ray Kurzweil predicts it will happen by 2045, so... huzzah!
__________________ Recently Produced and Distributed Young but High-Ranking Political Figure of Royal Ancestry within the Modern American Town Affectionately Referred To as Bel-Air.
So we'll have the technology to design and build a self-sustaining, self-improving, work-free, money-less, classless industrial infrastructure within 31 years.
How long do you think it would take to dismantle the modern greed-based society's caste system to allow for said machine-run classless society?
Maybe, say, 17 years of exposing the wrong business deals to the right people via global spying? Illegally cracking every database on earth for the sake of spying on everyone may be wrong, but information about the greedy can be useful in the right hands, with the right propaganda and exposure.
It used to be called muckraking, only these days a filthy-rich, stupendously brilliant young man can do far more damage to, say, General Motors or to the current officials who compose the House of Representatives than a billion-man army could.
There will be no hesitation, there will be no mistakes.
Last edited by KillaKassara on Jul 26th, 2014 at 03:55 AM
It's more informative than the aforementioned simplistic explanation.
I'm the opposite. I see it as a probabilistic inevitability. Unless humans do something to curb the growth of our AI technologies, there's really no avoiding it. Even if we did that, someone, somewhere, will make true AI and it will be too late before we can do anything about stopping it.
I cannot see a way around this short of humans destroying themselves, first, or humans strangely agreeing to not do 1 particular thing with technological development.
Re: Re: Re: Re: Re: Re: On The Technological Singularity
Our destruction would be logical.
There's a difference between a program that literally cannot achieve sentience and a human mind in a virtual substrate. But there'd be plans within plans against that, a preemptive Butlerian Jihad. Schools would be built to train Mentats.
In all probability it could work: it's working now for WMDs, and it can work for AI if we recognize the risk before it's too late.
Transcendence with Johnny Depp was wrong in that everyone who undergoes apotheosis into binary would not join one man's ego or become extensions of it. It's very much a preferable way to go: your chances of survival are increased googols-fold, and you can set up any experience you want. It'd be The Garden.
Sci-fi has never been spot-on with its depictions of what mind uploading would truly be like. The in vivo approach does not break one's stream of consciousness, so you are literally going into cyberspace; the you in cyberspace is not a copy of you like in Transcendence. It's truly paradise in reality.
It's the most beautiful and exciting thing, every human left on earth liberated from suffering the instant it happens.
Last edited by KillaKassara on Nov 13th, 2014 at 03:12 AM
There are perhaps misunderstandings about what I'm writing, because it is the kind of thing that, no matter who you are, should make the hairs on the back of your neck stand up once you get the idea. By the time there are about 1 trillion humans, long, long, long after the first computers can use their sensors to fully mimic an individual's thinking patterns and upload that conscious entity into cyberspace, self-improved intelligence, strong AI, will have the sophistication to carry out the in vivo approach (the one I argued diligently about with a psychologist here over a year ago, over whether it will ever be possible). The in vivo approach is to send nanites into a live human brain and slowly kill off neurons, replacing each one with a nanite. The human himself won't die; his brain will change from being composed of organic molecules to the silicon that composes the artificial neurons.
The issue is that the biochemical neurons in the human brain would basically be tasered by these artificial neurons, because the replacements are silicon-based and the whole firing of synapses is out the window. Perhaps a more sophisticated intellect could make them compatible. And anything is possible given ample time; even the laws of thermodynamics themselves may be changing over a googols-of-years time-frame.
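The gradual neuron-replacement idea can be sketched as a toy "Ship of Theseus" simulation. This is purely illustrative: a fixed weighted sum stands in for a neural circuit's function, and the premise that behaviour survives a substrate swap is baked in by making the output depend only on the weights, never on the substrate labels. All names and numbers are assumptions for the sketch, not a model of a real brain.

```python
import random

# Toy "Ship of Theseus" sketch of gradual neuron replacement: units are
# flipped from "bio" to "silicon" one at a time, and the circuit's
# output is verified after every single swap.

random.seed(0)
N = 1000
weights = [random.random() for _ in range(N)]
inputs = [random.random() for _ in range(N)]
substrate = ["bio"] * N

def circuit_output():
    # Depends only on weights and inputs, never on substrate labels:
    # this encodes the premise that behaviour survives the swap.
    return sum(w * x for w, x in zip(weights, inputs))

baseline = circuit_output()

order = list(range(N))
random.shuffle(order)
for i in order:                          # replace one unit per step
    substrate[i] = "silicon"
    assert circuit_output() == baseline  # continuity at every step

print(substrate.count("silicon"))        # -> 1000: fully replaced
```

The point of the toy is continuity: at no single step does the output change, which is the intuition behind the claim that the person never dies during the replacement.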
This is where it gets spooky: the Doomsday Argument suggests that by the time there are 1 trillion humans spread about many habitable worlds and space stations supported by super-human AI, humanity will cease to exist.
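The 1-trillion figure can be tied to the Carter-Leslie/Gott style of Doomsday reasoning with a quick back-of-the-envelope calculation. This is a toy sketch: the ~100 billion born-so-far figure and the uniform-birth-rank assumption are illustrative assumptions, not established facts.

```python
# Toy Doomsday Argument calculation (Carter-Leslie / Gott style).
# Assumption: ~100 billion humans born so far, and your birth rank is
# a uniform random draw among all humans who will ever live.

def doomsday_upper_bound(n_born, confidence):
    """If your birth rank r is uniform on (0, N], then with probability
    `confidence` you have r > (1 - confidence) * N, which bounds the
    total number of humans ever born: N < r / (1 - confidence)."""
    return n_born / (1.0 - confidence)

# With 90% confidence, no more than ~1 trillion humans will ever live:
print(doomsday_upper_bound(100e9, 0.90))  # ~1e12
```

Under those assumptions, reaching a population history of 1 trillion people would put today's humans implausibly early in the total birth order, which is where the "we won't get there" conclusion comes from.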
That is contradictory: if humans have biological immortality, with nanites that can pump fresh molecules into the DNA and RNA strands so that they never degrade at all, and if, technologically, we're at our peak, most capable and best able to survive, then how in the hell do we just drop dead?
Answer: the in vivo approach. Nobody would die; they'd just cease to be human.
Conversely, expansion may make it impossible for humans to exist. However, we could survive even this by utilizing zero-point energy for time travel: since the smallest unit of space cannot support more than a certain amount of energy, exceeding it will generate a rift that, when passed through, takes you to the universe at an earlier point in time. Therefore, humans don't die; they just disappear from the current timeline.
Last edited by KillaKassara on Nov 13th, 2014 at 03:33 AM
I said logical. Think about all the upkeep; we'd be a bit of a distraction to keep around, the way a pet costs some money to keep.
But no conscious mind thinks like that. Only a simple program.
There's real reason to fear something more cognitively powerful than every human mind combined.
Re: Re: Re: Re: Re: Re: On The Technological Singularity
I didn't state or imply that, but it is one of the outcomes of the "Technological Singularity."
I said:
"I cannot see a way around this short of humans destroying themselves, first, or humans strangely agreeing to not do 1 particular thing with technological development."
In the above sentence, "this" = Technological Singularity.
To state it even more directly, one of the few ways we can prevent the Technological Singularity is if we completely wipe out all humans before we create true AI.
The only other probable way I can see to prevent the Technological Singularity is all humans, for the rest of human existence, agreeing not to create AI beyond a certain point.
I cannot think of any other probable ways, short of divine interference, to prevent it from happening.
As a kid, I used to speculate that humans created God via AI. Then that AI transcended time and space and started interacting with humans throughout history.
Re: Re: Re: Re: Re: Re: Re: Re: On The Technological Singularity
We already have advanced AI, so now what are your thoughts?
We are just shy of making human-like intelligence. Seriously, there are people who study these things. At the current pace of improvement, we are looking for it to happen around 2024-2025. Check out this guy:
I don't know how to word it any better: it is currently inevitable. Unless something drastic changes, we are right on path. Google may have already obtained "near-human" AI with one of their projects: DeepMind. Also, check out IJCAI.
The only reason people seem skeptical of AI is that it sounds like sci-fi. It's not. It is here, now. AI has taken off over the last 10-15 years. This is no different than improving our particle accelerators over the last 20 years. People seem to have no problem with particle physicists working on better accelerators that are on track to come online in 10-15 years (such as the Large Hadron Collider, which took 20 years to build and a few more years after that to bring the equipment up to fully operational status). People can buy that. They can digest and accept it despite the fact that it was the very cutting edge of particle physics. Why do people accept that and not the AI projects? Because particles do not think back at you. AI does. This scares people, so they become skeptics and doubters.
People are going to shit themselves and be utterly shocked when we release a "very near" human-like AI. All we have to do is create an AI close enough to human-level intelligence that it can start improving itself at a decent pace: better than what we have now but not necessarily anywhere close to what a human can do, just good enough that it can resemble the performance of a human, because computers do not tire and can keep working while we sleep.
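The "improving itself at a decent pace" step can be sketched as simple compound growth. This is a toy model with made-up numbers (starting capability, per-cycle gain, and target are all illustrative assumptions, not a forecast), but it shows why starting slightly below human level is not much of a barrier once improvement compounds around the clock.

```python
# Toy self-improvement model: each cycle, the AI's capability compounds
# by a fixed fractional gain. All numbers are illustrative assumptions.

def cycles_to_reach(start, target, rate):
    """Count compounding cycles needed for capability to grow from
    `start` to at least `target`, gaining `rate` per cycle."""
    capability, cycles = start, 0
    while capability < target:
        capability *= 1.0 + rate
        cycles += 1
    return cycles

# Start slightly below human level (0.9), gain 1% per cycle, never tire:
# overshooting human level ten-fold takes only a few hundred cycles.
print(cycles_to_reach(0.9, 10.0, 0.01))  # -> 242
```

If a "cycle" is hours rather than years, the gap between near-human and far-beyond-human closes quickly, which is the intuition behind the shock the post predicts.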