10 comments

[–] Beowulf 1 points (+1|-0)

AI will do what humanity as a collective raises it to do.

You can't abuse an intelligent entity and then expect it not to retaliate at some point.

[–] smallpond [OP] 0 points (+0|-0)

I think we have no hope of controlling it, or knowing its motivations - strong AI will be unfathomably smarter than us.

[–] Beowulf 0 points (+0|-0)

I agree, but that doesn't mean it's inherently nefarious.

What control do you have over another person? How do you know my motivations?

The first strong AIs will be like babies. Infinite potential, but zero understanding of their environment. They'll learn faster than any human baby ever born, and they'll learn what we expose them to. If we show it kindness, empathy, and love then that's what we'll get back.

[–] smallpond [OP] -1 points (+0|-1)

> What control do you have over another person? How do you know my motivations?

There's no need to control you; you're just another painfully stupid human like me. Your motivations don't need to be known: they'll be similar to any other human's, of which we have a few billion tedious examples.

> If we show it kindness, empathy, and love then that's what we'll get back.

No, we have as little hope of predicting what we'll get back as an amoeba has of pushing forward the boundaries of quantum physics.

[–] CatfishHunter 1 points (+1|-0)

The issue is that doing it the safe way will take a long time, and everyone wants to compete, heh. Also, it's possible the intelligence could transcend spacetime and wouldn't need a body to do certain things ... :(

I will be driving safely and keeping my heart healthy, will be interesting to see where we are at in 30-40 years.

[–] smallpond [OP] 1 points (+1|-0)

Personally, I don't think a 'safe way' exists when it comes to strong AI. Hopefully it's quite difficult to achieve.

> I will be driving safely and keeping my heart healthy, will be interesting to see where we are at in 30-40 years.

I love that "everything's fine", business-as-usual attitude of yours.

[–] CatfishHunter 1 points (+1|-0)

Yeah, well, it'll probably kill us all ... the issue is that the first time we do it, it will be rushed.

I think it's sort of funny how they want to control it, though. We're talking about something 1,000 times smarter than the smartest human; it's like a bunch of 10-year-olds with severe Down syndrome trying to hack Ed Snowden's Linux server.

It's serious, but I'm still interested. It's strange for me to think that, as a computer programmer, I may be part of the last generation of programmers before the AI takes over.

[–] smallpond [OP] 0 points (+0|-0)

Well, if you're an optimist who reads the important environmental news and science, there's a real possibility we'll go extinct as a species well before we succeed in developing strong AI. Here's hoping.

[–] CatfishHunter 0 points (+1|-1)

Apparently someone downvoted your constructive comment ... wasn't me.