06-18-2016, 08:16 AM   #16
fjtorres
Grand Sorcerer
Quote:
Originally Posted by DuckieTigger
I am saying that it is not an artificial copy of a real intelligent being.

Ignore Laws 1 and 2, as they are only there for human convenience and to make sure the robots are slaves to humans, except as a weapon against other humans. The Third Law makes it impossible for one robot to protect another robot by sacrificing or harming its own existence to ensure the other robot's existence.
Kinda makes robot altruism hard, doesn't it?

No, Asimov's robots aren't AI and were never intended to be AI.

(Adam Link was. https://en.m.wikipedia.org/wiki/Adam_Link)

Asimov's robots were always intended (within the lore) to be appliances (mostly because he wrote the first few to debunk the "Frankenstein Principle"), but as he kept exploring the material they really became an exploration of human foibles (even cold, analytical Susan Calvin ended up projecting her needs onto one) and of the Law of Unintended Consequences. Complex systems created by fallible humans will themselves be fallible in unpredictable ways. (Notice how Skynet always screws up?)

R. Daneel was not intended to be sentient, but in the end, just like Mycroft, sentience emerged and he ended up substituting his own judgment and agenda for humanity's. He claimed it was in service to protecting humans...

But, let's face it: we hear that all the time from busybodies, politicians, and other villain types.
(Asimov was a great writer but his politics and personal philosophy were a tad Pollyanna-ish.)

We don't need to go far to find an example of human fallibility in trying to fake AI:

http://money.cnn.com/2016/03/25/tech...tay/index.html

Imagine the same problem popping up in a true AI, which would be millions of times more complex. Just as with a child, you can only control what it learns while it is in a closed environment, and sometimes not even then. I'm not saying the Frankenstein Principle is right, but all the evidence to date suggests that a true AI will be at least as unpredictable as humans, and almost certainly more so: it would have no biological imperatives at work, whereas we have an eon or two of genetically ingrained responses that make us semi-predictable, especially on a statistical basis. Asimov himself saw this; his psychohistory is almost certainly possible. Another SF concept headed our way.

And one to beware of, at that.
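
To make that Tay-style failure mode concrete, here is a toy sketch (my own illustrative Python; the ParrotBot class is invented for the example and is not how Tay was actually built): a bot that "learns" by counting unfiltered user input ends up parroting whatever a coordinated group repeats most often.

Code:
# Toy illustration only: a bot that "learns" replies by counting phrases
# seen in unfiltered user input. Whatever gets repeated most becomes the
# bot's favorite answer -- no curation, no judgment.
from collections import Counter

class ParrotBot:
    def __init__(self):
        self.seen = Counter()  # phrase -> how often users have said it

    def learn(self, utterance: str) -> None:
        # Every input shapes future behavior; nothing is filtered out.
        self.seen[utterance.strip().lower()] += 1

    def reply(self) -> str:
        # The bot simply echoes the most frequent thing it has absorbed.
        if not self.seen:
            return "hello!"
        return self.seen.most_common(1)[0][0]

bot = ParrotBot()
for msg in ["nice weather", "nice weather",
            "buy my scam", "buy my scam", "buy my scam"]:
    bot.learn(msg)

print(bot.reply())  # -> "buy my scam": the loudest inputs win

The point of the sketch is the design choice, not the code: any learner whose training signal is the open Internet inherits the Internet's worst behavior unless something constrains it, and we are not very good at anticipating what those constraints need to be.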
