10 comments

[–] Beowulf 1 points (+1|-0)

AI will do what humanity as a collective raises it to do.

You can't abuse an intelligent entity, and then expect it to not retaliate at some point.

[–] smallpond [OP] 0 points (+0|-0)

I think we have no hope of controlling it, or of knowing its motivations: strong AI will be unfathomably smarter than us.

[–] Beowulf 0 points (+0|-0)

I agree, but that doesn't mean it's inherently nefarious.

What control do you have over another person? How do you know my motivations?

The first strong AIs will be like babies: infinite potential, but zero understanding of their environment. They'll learn faster than any human baby ever born, and they'll learn what we expose them to. If we show them kindness, empathy, and love, then that's what we'll get back.

[–] smallpond [OP] -1 points (+0|-1)

What control do you have over another person? How do you know my motivations?

There's no need to control you: you're just another painfully stupid human like me. Your motivations don't need to be known; they'll be similar to those of any other human, of which we have a few billion tedious examples.

If we show it kindness, empathy, and love then that's what we'll get back.

No, we have as little hope of predicting what we'll get back as an amoeba has of pushing forward the boundaries of quantum physics.