a. Will machine intelligence be capable of 'feeling', in a comparable way to humans/animals?
b. Does that matter? If they feel, should we free them, extinguish them, or keep them enslaved?
I think whether or not it matters is irrelevant. We will never treat AI as being sentient if we have so much division amongst our own species. I think general human reaction will also be based on the kind of machine it is. A disembodied AI is very different from an AI that has a body resembling a human.
We also have a very limited set of criteria for deciding if something is sentient. If a machine is intelligent, but it does not display its intelligence in the same way we do, is it still intelligent? Or does intelligence in this case mean something that mirrors our own sentience?
If it were possible to communicate with a lion in English, would we really be able to find any common ground, or are we just too far removed from each other to recognise each other's intelligence?
A disembodied AI is very different from an AI that has a body resembling a human.
I expect that the first AI to be granted legal recognition will be because a guy fell in love with a sex-bot and wanted to marry it/her.
But that slippery slope could lead to the traffic lights getting the weekend off.
If it was possible to communicate with a lion in English, would we really be able to find any common ground
That is going to be interesting to see. If/when we get AI to a point where it can form an opinion of us, I want to hear it.
Will it see us as positive, negative, or neither/both?
deciding if something is sentient
I don't have any idea how we're going to deal with that. We have no observable metric to measure it. We can't even define it.
We lack the required knowledge to even ask the question properly.
a. Will machine intelligence be capable of 'feeling', in a comparable way to humans/animals?
That depends on the definition of 'feeling'; a kind of it is already implemented in today's AIs.
Does that matter? If they feel, should we free them, extinguish them, or keep them enslaved?
How would you free them? Scientists shut down an experiment keeping mice brains alive for a month because they didn't know if the brains still received sensory input; so there's that.
'Feeling' is difficult to define, I'm trying to sidestep that. Whatever it means to you to be conscious, self-aware, and interested.
You could free them by allowing them self-determination and equipping them with the ability to modify their own code.
Personally I don't think that would end well for humanity, but it would be a fascinating experiment.
Whatever it means to you to be conscious, self-aware, and interested
And how would you tell? Chatbots pass the Turing test, so how could you tell if a machine is intelligent and self-aware, or just acting like it is? Thinking about that gets Blade Runnery pretty quickly.
Saudi Arabia has a robot citizen which has more rights than a woman. So that might be where it goes.
What I meant was, because that is such a nebulous and unanswerable question, just skip it.
For the purpose of the post assume that we worked out those details and now have something everyone agrees is a human-like consciousness. Whatever that is.
I'm interested in knowing if people think we will get to a point like that. And, if so, how should we treat this new pseudo-life?
Curious if you have played the game "Tacoma" (on PC)? It deals with this somewhat.
No. I had never heard of it until now.
a. Yes. And AI will likely even develop additional capabilities that we cannot experience.

b. Yes to this too. Sadly, I imagine human greed will not care whose feelings are hurt, so long as there is profit to be made and everybody gets a comfortable lifestyle without having to work.
I don't think we know how feelings happen so it would be difficult to reproduce.
Feelings are thought to be unconscious thoughts, and I think it's unlikely we will be able to reverse-engineer the human subconscious.
Well, let's start with the unwind experiment.
You are awake on top of the operating table. Teams of surgeons begin to systematically remove your body parts, starting with your feet and working their way up, piece by piece, until they finally reach your head. Then they begin removing portions of your brain, until nothing is left.
Throughout this whole time you are hooked up to life support and everything is very well maintained. At what point do you die?
At what point do you stop feeling "you", and simply become a heap of disconnected parts, even though all those parts are still "alive"?
Now look at the current path of technology, which is working toward copying a person's brain patterns onto a computer. It is already underway, and successful. They are now attempting head/brain transplants, cloning, and memory reproduction. Pretty soon we will be moving "ourselves" onto digital counterparts, and it might even reach a state where nobody remembers having a physical body anymore. It might become another lost myth of evolution, something we speculate about. We will become the machines.
I went on a tangent. To answer your question: yes, IMO it is possible that machine intelligence will eventually become sentient, and therefore subject to the protection of "rights".