Old 08-12-2011, 04:55 AM   #88
Nancy Fulda
I write stories.
Nancy Fulda ought to be getting tired of karma fortunes by now.
 
 
Posts: 700
Karma: 16437432
Join Date: Jul 2011
Location: Northern Germany
Device: kindle
Quote:
Originally Posted by Rob87
Artificial neural networks have been around a long time now, and it seems not much progress has been made in creating human-like intelligence.

Probably because they require too much hand-holding: from my limited understanding, it sounds like you need to tell them what to learn and how to assess their performance at it, and through lots of feedback they eventually improve at what they've been coded to do.

If that's the best we can do, human-like A.I. is miles off.
Back when I was writing my Master's thesis, one of the biggest problems with neural nets (and most other learning algorithms) was getting the reward structure right. Computerized systems are very, VERY good at learning exactly what you tell them to do.

If you give a mobile robot negative feedback whenever it bumps into a wall, guess what it will learn to do? Yup. Sit there like a rock. So the researcher has to add in a positive reinforcement for moving around, at which point the robot will learn to spin in endless circles -- it's moving, but it's not hitting any walls, see? So then you start layering on things like rewards for being 'curious' or exploring new territory, rewards for reaching certain goal locations, and so forth. The whole process is dreadfully complicated, but in a very real sense, it's not the neural network's fault. We just don't know the magic recipe for laying out the right reinforcements.
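To make the problem concrete, here's a toy sketch of the reward layering described above. Everything in it is hypothetical (the cell coordinates, the bonus values, the three canned behaviors); it just scores a path of (cell, bumped-into-wall) steps under a penalty for wall contact, a bonus for moving, and an optional novelty bonus for visiting new cells:

```python
def episode_reward(path, bump_penalty=-1.0, move_bonus=0.1, novelty_bonus=0.0):
    """Sum layered rewards along a path of (cell, bumped) steps."""
    visited = set()
    total = 0.0
    prev = None
    for cell, bumped in path:
        if bumped:
            total += bump_penalty      # negative feedback for hitting a wall
        elif prev is not None and cell != prev:
            total += move_bonus        # positive reinforcement for moving around
        if cell not in visited:
            total += novelty_bonus     # optional reward for exploring new territory
            visited.add(cell)
        prev = cell
    return total

# "Sit there like a rock": never moves, never bumps.
sit = [((0, 0), False)] * 8

# "Spin in endless circles": loops through the same four cells, never bumps.
spin = [((0, 0), False), ((0, 1), False), ((1, 1), False), ((1, 0), False)] * 2

# "Explore": covers eight distinct cells but pays one bump penalty on the way.
explore = ([((0, c), False) for c in range(4)]
           + [((1, 3), True)]
           + [((1, c), False) for c in reversed(range(3))])

# With only the bump penalty and move bonus, spinning out-scores exploring.
print(episode_reward(spin) > episode_reward(explore))
# Add a novelty bonus, and exploring finally wins.
print(episode_reward(explore, novelty_bonus=0.5)
      > episode_reward(spin, novelty_bonus=0.5))
```

The point of the sketch is that the learner isn't misbehaving: spinning really is the optimal policy under the first two reward terms, and only the hand-added novelty term changes that.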

Well, okay, then there are also generalization problems, but that's a different kettle of fish.