This is interesting.
Well, first off, you'd have to define what you actually mean by "intelligence" in order to create an artificial one.
So, what is it? Is it the ability to learn how to recognize, interact with, and deal appropriately with any situation? Yes, I'd say it's the "program rewriting itself" that is actually key to learning, which should be the first goal...
The problem is that, in existing "intelligences", the rewriting routines also change, i.e. are themselves rewritten. That amounts to something like a particular perception of the situation one is in, so the same situation can have different impacts on different sentient beings.
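To make that concrete, here's a toy sketch (in Python, with purely illustrative names; this is my own example, not anyone's actual AI design): a "learner" adjusts its behaviour, and the routine doing the adjusting is itself adjusted as a side effect of experience.

```python
# Toy sketch of a program that "rewrites itself" on two levels:
# it changes its behaviour, and it changes its own rewriting routine.

def loss(x):
    """The 'situation' the learner must deal with: get x close to 0."""
    return x * x

def run(steps=50):
    x = 10.0   # the learner's current behaviour
    lr = 0.4   # a parameter of the rewriting routine itself
    prev = loss(x)
    for _ in range(steps):
        x = x - lr * 2 * x  # first-order rewrite: change the behaviour
        cur = loss(x)
        # Second-order rewrite: the rewriting routine changes itself
        # based on experience, so the same situation hits it
        # differently at different points in its history.
        lr = lr * 1.1 if cur < prev else lr * 0.5
        prev = cur
    return x, lr

x, lr = run()
print(abs(x) < 1e-3)  # the behaviour has adapted
```

The point of the sketch is only the two levels: `x` is what the program does, `lr` is how it rewrites what it does, and both drift with experience.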
The other problem is that the learning process has to be self-sufficient. Controlled learning is beside the point and isn't truly "AI", the way I see it. But if the learning process isn't sufficiently stable, there's no guarantee that your AI will actually "evolve" in the classical sense; instead there's an increasing probability of it falling into some bizarre state of evolutionary standstill. That would be a program glitch, of course. So you'd actually need a routine just to check that things are working properly, i.e. that your AI hasn't gone "insane" (not in the classical movie style, of course; I mean an evolutionary breakdown, like an endless loop).
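Such a checking routine could be as crude as a watchdog over the system's recent states. A minimal sketch, again with made-up names and under the assumption that the system's state can be snapshotted as a number:

```python
# Toy watchdog: flag an "evolutionary breakdown" when the system's
# state has stopped changing, or is cycling with a period that
# divides the window (this crude check misses longer cycles).

def watchdog(states, window=4, eps=1e-9):
    """Return True if the last `window` states are (nearly) identical,
    or if the last `window` states repeat the `window` before them."""
    if len(states) < window:
        return False
    recent = states[-window:]
    stalled = all(abs(s - recent[0]) < eps for s in recent)
    cycling = (len(states) >= 2 * window
               and states[-window:] == states[-2 * window:-window])
    return stalled or cycling

# A "learner" stuck in an endless loop between two states:
history = []
s = 1.0
for _ in range(20):
    history.append(s)
    s = -s  # flips forever: no real evolution

print(watchdog(history))  # True: the loop is detected
```

Of course, this just pushes the question back a level: the watchdog itself is a fixed, hand-written routine, which is exactly the kind of thing a self-rewriting system would eventually rewrite.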
But given machinery able to run what could be called a sentient program, how do we come up with such a thing? Someone has to program it too. What makes it rewrite itself? How does it know how to rewrite itself, or what to rewrite? The rewriting process has to be driven by the interaction data it has stored. And how can we ensure that the program will have the necessary stability and enough heuristic ability to warrant being called "AI"?
So, what <i>are</i> we talking about here?
I have no idea. It's just too big an issue for me to handle right now. I'm hungry, and I'm off to lunch.