
Agree
Disagree
Unsure
Complete breakdown could happen, but I'm not worried.
I'm more worried that complete breakdown won't happen in my lifetime.
A complete breakdown in society has already happened.

16 comments

[–] smallpond [OP] 3 points (+3|-0)

Sorry, I'll probably disappoint you regarding the 'debate'....

> > Super-efficient (but still essentially unintelligent) autonomous killing machines would be sufficient for global domination.

> It's a stepping stone, but not a permanent solution. Robot armies are made from a lot of scarce and unevenly distributed natural resources, specifically rare-earth metals.

No, it's a permanent solution. All that's needed is for one entity to 'win'. When that happens there is no more competitive pressure, as we'll have one organisation that can do 'magic' while everyone else could be sent back to the stone age. The major threat to some all-powerful dictator then becomes their own AI... It would be incredibly stupid to develop a real hyperintelligence for no reason - perhaps only as an inconsiderate way to commit suicide once you're driven mad by absolute power.

I think the 'debate' about controlling strong AI only emphasizes how incredibly stupid we are.... Just worms using worm logic to laughably deduce that they can control a human. We only need to understand one concept: "many, many things are completely beyond our very limited intelligence". Unfortunately that seems utterly incomprehensible to the vast majority of us... and so we have people who think they can control strong AI.

[–] Mattvision 2 points (+2|-0)

> Sorry, I'll probably disappoint you regarding the 'debate'....

Nah, I've been thinking about this stuff for a long time, so it's nice to discuss it with someone in some capacity.

> No, it's a permanent solution. All that's needed is for one entity to 'win'. When that happens there is no more competitive pressure, as we'll have one organisation that can do 'magic' while everyone else could be sent back to the stone age.

Well even if this were true, it doesn't change the fact that some of the most powerful tech entities in the world have openly advertised they're racing towards it.

> I think the 'debate' about controlling strong AI only emphasizes how incredibly stupid we are....

Ambitious people tend not to be dissuaded by the fear of the unknown. Only time will really tell if we're able to control it, or if some unknown factor comes to light when it's too late for us to reverse course. However slim you place the odds, our two options are either to be exterminated/enslaved/tortured by a malevolent AI, or exterminated/enslaved/tortured by omnipotent techno-dictators. If we're lucky, climate change or some other disaster will do enough damage to the world before that happens.

[–] smallpond [OP] 1 point (+1|-0)

> Well even if this were true, it doesn't change the fact that some of the most powerful tech entities in the world have openly advertised they're racing towards it.

Yes, but advertising isn't reality. I know there's a race to develop dangerous autonomous weaponry... that's far more achievable, and quite bad enough.

> Ambitious people tend not to be dissuaded by the fear of the unknown.

Arrogant or stupid people are incapable of comprehending what there could be to fear... and selfish people simply don't care how much damage they do. I suppose for me it doesn't matter either way: my position is that strong AI is totally uncontrollable. Being optimistic, we could be a long, long way from an AI singularity; it seems to be much harder to achieve than advertised. It would seem like a cruel twist of fate if we were to stumble upon it accidentally at our current level of technological advancement.

As for climate change and other disasters... I also hope they do enough damage to postpone the worst of what we're capable of even without strong AI. Unfortunately there's no real taking back the technology we have, and we're already capable of creating various persistent dystopias - it's only a matter of time.