a. Will machine intelligence be capable of 'feeling', in a comparable way to humans/animals?
b. Does that matter? If they feel, should we free them, extinguish them, or keep them enslaved?
I've read some of the older books dealing with AI (e.g. Asimov), but still have a lot of reading to do with newer authors.
There was one, I can't remember the name or author, about a robot that came online for the first time after his box got lost in a forest.
He only had basic bootstrapping software with troubleshooting abilities. He had intelligence and the ability to reason, but very limited knowledge.
He had a basic definition of 'human' but no pictures.
Jumping forward, he observed that wolves were the smartest creatures in the forest, so he assumed they were the most likely to be human.
It was an interesting perspective. The robot accidentally became the pack leader, then eventually discovered real humans.
His reaction and view of humans after that was also an interesting take.
Fuck, I wish I could remember the name of the book.
I haven't read that one. Sounds interesting. Please let me know if you remember the title.
You might be interested in the short story "I, Row-Boat" by Cory Doctorow; I think it's on Feedbooks.
IMHO we should still categorize them as machines, legally and emotionally.
They might kill me later for this comment.
If you're interested in the matter, there's lots of cyberpunk/scifi out there exploring many outcomes.