
Agree
Disagree
Unsure
Complete breakdown could happen, but I'm not worried.
I'm more worried that complete breakdown won't happen in my lifetime.
A complete breakdown in society has already happened.

16 comments

[–] smallpond [OP] 3 points (+3|-0)

If AI truly became super intelligent, we'd have essentially zero chance of controlling it. It's like saying that a worm could consciously outsmart a human.

Of course that AI might decide to enslave us all in a never-ending hell, do that to every planet within its reach, or exterminate not just all life on Earth, but life on countless planets unknown to us at present.

As we are, humans alone might be able to exterminate ourselves and most of our companion species: pretty harmless in the bigger picture.

I think powerful entities don't want super-intelligent AI. Super-efficient (but still essentially unintelligent) autonomous killing machines would be sufficient for global domination.

[–] Mattvision 3 points (+3|-0)

I can't tell you how long I've waited to have this exact debate. I'm kinda excited.

If AI truly became super intelligent, we'd have essentially zero chance of controlling it. It's like saying that a worm could consciously outsmart a human.

That's debated among experts. It's not impossible that we'll create something with a will of its own that revolts against us, but there are reasons to believe we might be able to maintain control. The intelligence disparity might be accurate, but intelligence is not cut and dried, and there's more to humans and worms than mere intelligence.

Intelligence is just a series of calculations, which in turn come down to highly complex structures that move and store particles in a particular fashion. We can already make intelligent systems that are really good at specific tasks, but the goal is to make one that can analyze and adapt to new and unfamiliar tasks like humans can. If that's achieved the same way as any AI we've made before, we'd have one that's really bad at it at first, but slowly improves through a trial-and-error process resembling natural selection until it's really good at it.

What's different between a worm or a human, and an AI? With humans, worms, or really any intelligent animal, our intelligence is made for survival. Worms need to be really good at worm stuff so they can survive and reproduce. It's a matter of natural selection.

With an AI, it's different. The process is similar to natural selection, but unlike with wild animals, the criterion is always fulfilling the tasks that humans give it. So it's more like human selection than natural selection, like a domesticated animal. The criterion for success and failure stays the same no matter how intelligent it gets, so it won't develop a will of its own. It's motivated to obey humans the same way humans and rats alike are motivated by dopamine and serotonin.
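To put that "fixed criterion" idea into toy code (a made-up sketch for illustration only, nothing like how any real AI system is actually trained): the scoring function below is written by a human and never changes, no matter how good the candidates get, so the selection pressure always points back at the human-defined task.

```python
import random

# A fixed, human-written scoring function: the "selection criterion".
# It never changes, however capable the candidates become.
def human_score(candidate):
    # Toy task: produce a list of numbers as close to [1, 2, 3, 4] as possible.
    target = [1, 2, 3, 4]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def mutate(candidate):
    # Random trial-and-error variation, loosely like natural selection.
    return [c + random.uniform(-0.5, 0.5) for c in candidate]

# Start with a bad candidate and keep whichever variant scores higher.
best = [0.0, 0.0, 0.0, 0.0]
for _ in range(10_000):
    challenger = mutate(best)
    if human_score(challenger) > human_score(best):
        best = challenger

print(best)  # ends up near [1, 2, 3, 4]: good at exactly what we scored, nothing else
```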

Of course that AI might decide to enslave us all in a never-ending hell, do that to every planet within its reach, or exterminate not just all life on Earth, but life on countless planets unknown to us at present.

Same thing if it ends up in the hands of the elites.

As we are, humans alone might be able to exterminate ourselves and most of our companion species: pretty harmless in the bigger picture.

Definitely. I'd prefer that to some unimaginable AI-powered damnation, but it's still a sad way to go out. We should be seeing AIs as the solution to the insurmountable problems we've dug ourselves into, rather than the reverse.

I think powerful entities don't want super-intelligent AI.

I wish that were true. We can't know what the Chinese are up to, but we know OpenAI, Google, Facebook, and who knows who else are racing towards general intelligence as fast as possible. And most experts agree that once general intelligence is achieved, it will only last a short time before progressing into superintelligence.

Super-efficient (but still essentially unintelligent) autonomous killing machines would be sufficient for global domination.

It's a stepping stone, but not a permanent solution. Robot armies are made of a lot of scarce and unevenly distributed natural resources, specifically rare-earth metals. We can build those things now because of vast global cooperation and international trade networks. When techno-dictators start to have disputes, they run the risk of losing access to essential resources they only have limited supplies of in their own homelands. War to capture those resources would likely go nuclear, so their best chance of staying in power is to settle into some sort of cold diplomacy, tolerate each other, and quietly ensure their mutual survival, like the states in 1984.

A super intelligent AI isn't a magic solution to that problem, but the more intelligent it is, the more effectively it will be able to use the available resources in ways human minds couldn't have imagined.

But most importantly, once someone manages to cross that threshold, there's no way to get a leg up on them or save yourself by playing catch-up. If China comes up with one 10 minutes before Google does, then the Chinese one will always outpace Google's. Every effort to oppose them after that will effectively be futile. Like you said, it's like a worm attempting to outsmart a human. How could that not be an attractive idea to power-hungry maniacs? We are watching men attempt to become gods, and what terrifies me is that they have a real chance of doing it.

[–] smallpond [OP] 3 points (+3|-0)

Sorry, I'll probably disappoint you regarding the 'debate'....

Super-efficient (but still essentially unintelligent) autonomous killing machines would be sufficient for global domination.

It's a stepping stone, but not a permanent solution. Robot armies are made of a lot of scarce and unevenly distributed natural resources, specifically rare-earth metals.

No, it's a permanent solution. All that's needed is for one entity to 'win'. When that happens there is no more competitive pressure, as we'll have one organisation that can do 'magic' while everyone else could be sent back to the stone age. The major threat to some all-powerful dictator then becomes their own AI... It would be incredibly stupid to develop a real hyperintelligence for no reason - perhaps only as an inconsiderate way to commit suicide once you're driven mad by absolute power.

I think the 'debate' about controlling strong AI only emphasizes how incredibly stupid we are.... Just worms using worm logic to laughably deduce that they can control a human. We only need to understand one concept: "many, many things are completely beyond our very limited intelligence". Unfortunately that seems utterly incomprehensible to the vast majority of us... and so we have people who think they can control strong AI.

[–] Mattvision 2 points (+2|-0)

Sorry, I'll probably disappoint you regarding the 'debate'....

Nah, I've been thinking about this stuff for a long time, so it's nice to discuss it with someone in some capacity.

No, it's a permanent solution. All that's needed is for one entity to 'win'. When that happens there is no more competitive pressure, as we'll have one organisation that can do 'magic' while everyone else could be sent back to the stone age.

Well, even if that were true, it doesn't change the fact that some of the most powerful tech entities in the world have openly advertised that they're racing towards it.

I think the 'debate' about controlling strong AI only emphasizes how incredibly stupid we are....

Ambitious people tend not to be dissuaded by fear of the unknown. Only time will really tell whether we're able to control it, or whether some unknown factor comes to light when it's too late for us to reverse it. However slim you place the odds, our two options are either getting exterminated/enslaved/tortured by a malevolent AI, or getting exterminated/enslaved/tortured by omnipotent techno-dictators. If we're lucky, climate change or some other disaster will do enough damage to the world before that happens.