I recently posted a similar question over on Stack Overflow but they just weren't having it, so I figured I'd bring it over here.
I'm developing a model for a general AI based roughly on Douglas Hofstadter's theory of analogy/categorization as the core of cognition. I'm imagining a mobile robot with stereoscopic vision, a microphone, and a speaker. Using well-established machine learning techniques, it shouldn't be too difficult to spoon-feed the robot a series of useful "symbols" to start off with: what a chair looks like, what a desk looks like, what a computer looks like, and so on. There could even be a mirror in the room, and you could teach it that when it sees its reflection it's seeing itself, which would in essence make it a self-conscious artificial intelligence. Maybe that last bit is a leap of wishful thinking, but let's run with it for now.
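To be concrete about the "well-established techniques" part, I'm picturing something as simple as running each camera frame through an off-the-shelf pretrained classifier and treating its labels as the starter symbols. A minimal sketch, assuming PyTorch/torchvision (the function name and overall framing are just mine, not a real design):

```python
import torch
from torchvision import models
from PIL import Image

# Pretrained ImageNet classifier standing in for the robot's vision system.
weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

def name_object(frame: Image.Image) -> str:
    """Return the most likely label ("symbol") for one camera frame."""
    with torch.no_grad():
        logits = model(preprocess(frame).unsqueeze(0))
    return weights.meta["categories"][logits.argmax(dim=1).item()]
```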
So we have this sentient robot, basically running an object-oriented program in which it roams around the room recognizing and categorizing things (if you're familiar with OOP, the fit with AI is actually quite natural, assuming you agree with Hofstadter's take on cognition), so according to Hofstadter it would have some degree of "understanding" of the world around it. For instance, you could walk into the room, hold up a piece of paper, and say "what is this, robot?" and it would say "paper." Then, if there are enough examples of paper in the room, you could ask it "what is paper used for?" and it might respond, "to write things on." There's a rough sketch of that object-oriented world model below.

Great, we may now have the world's best general AI in our possession, but here is where I get stuck. As humans, we have an innate tendency to want to explore the world a bit, and I think that boils down to the fact that we have needs and emotions. We get hungry, so we go explore the kitchen; we get horny, so we go explore the clubs (or, more likely, the internet). But our robot would need none of that: as long as it's plugged into an outlet, I don't see why it would have any desire to do anything other than sit there like a perfect little monk, thoughtless, just existing.
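Here's that sketch. Every name in it is hypothetical, and the thing to notice is that nothing in it gives the robot any reason to move:

```python
from dataclasses import dataclass, field

@dataclass
class Symbol:
    """A learned category, e.g. "paper"."""
    name: str
    uses: list[str] = field(default_factory=list)   # e.g. ["to write things on"]

@dataclass
class Percept:
    """One sighting of a symbol somewhere in the room."""
    symbol: Symbol
    location: tuple[float, float, float]

class WorldModel:
    def __init__(self) -> None:
        self.symbols: dict[str, Symbol] = {}
        self.percepts: list[Percept] = []

    def categorize(self, label: str, location: tuple[float, float, float]) -> Percept:
        """File a new observation under its symbol, creating the symbol if needed."""
        symbol = self.symbols.setdefault(label, Symbol(label))
        percept = Percept(symbol, location)
        self.percepts.append(percept)
        return percept

    def what_is_it_for(self, label: str) -> str:
        """Answer "what is X used for?" from whatever has been taught so far."""
        symbol = self.symbols.get(label)
        return symbol.uses[0] if symbol and symbol.uses else "I don't know yet."

# The robot's entire "life", as far as this design goes: perceive, categorize,
# answer questions when asked -- and otherwise do nothing at all.
```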
So, if you're with me so far and you have any background in programming, how would you go about putting incentives into the code to make the robot actually do something? You could hard-code it to seek out new objects, or to roam around categorizing things and accumulating knowledge, but because that behavior is hard-coded it feels like a sham, like breaking the rules of what it means to have a general AI. Although I guess that's what I argued our DNA does for us, so maybe that is the answer.
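The least unsatisfying idea I've come up with is to hard-code not the behavior itself but a single scalar "drive", say curiosity scored as novelty, and let a planner maximize it. A toy sketch with made-up names and scoring, just to show the shape of it:

```python
import math
from collections import Counter

class CuriosityDrive:
    """Scores possible actions by how unfamiliar their expected outcome is."""

    def __init__(self) -> None:
        self.seen = Counter()   # how many times each symbol has been perceived

    def observe(self, label: str) -> float:
        """Record a sighting and return a reward that shrinks with familiarity."""
        self.seen[label] += 1
        return 1.0 / math.sqrt(self.seen[label])

    def choose(self, options: dict[str, str]) -> str:
        """Given {action: predicted symbol}, pick the action promising the most novelty."""
        return min(options, key=lambda action: self.seen[options[action]])
```

Which is still hard-coding, just pushed up one level: the drive is innate, but what the robot does about it isn't. That's more or less the DNA argument again.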
I apologize that the question is vague. I'm looking partly for some sort of algorithm that would incentivize the robot to act more like a human and less like a giant Amazon Echo, but I'd also welcome general feedback: whether this would work at all, improvements to what I've suggested, whatever.