Artificial intelligence in computer games covers the behaviour and
decision-making process of game-playing opponents. In classic
analytical games, such as chess, checkers and go, the strongest
game-playing programs rely mostly on fast search techniques, whereas
in commercial games, such as action games, role-playing games and
strategy games, the behaviour of opponents is commonly implemented as
simple rule-based systems. With a few exceptions, machine-learning
techniques are rarely applied to state-of-the-art computer games.
Machine-learning techniques may provide game-playing programs with the
ability to improve their performance by learning from mistakes and
successes, to automatically adapt to the strengths and weaknesses of a
human player, to learn from their opponents by imitating their
tactics, or to discover new knowledge by analysing game collections or
perfect move databases.
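As a concrete (hypothetical) illustration of the "adapt to the
strengths and weaknesses of a human player" idea above, a game could
track the player's recent win rate and nudge a difficulty parameter
toward a target. This is only a sketch; the function and constant
names are invented for illustration:

```python
# Minimal sketch of online difficulty adaptation (illustrative only):
# nudge an AI skill parameter so the player's recent win rate drifts
# toward a target value, a simple form of adapting to player strength.

TARGET_WIN_RATE = 0.5
LEARNING_RATE = 0.1

def adapt(difficulty: float, recent_results: list[int]) -> float:
    """recent_results: 1 where the player won, 0 where the AI won."""
    win_rate = sum(recent_results) / len(recent_results)
    # Player winning too often -> raise difficulty, and vice versa.
    difficulty += LEARNING_RATE * (win_rate - TARGET_WIN_RATE)
    return min(1.0, max(0.0, difficulty))

d = 0.5
d = adapt(d, [1, 1, 1, 1, 0])   # player won 4 of 5: difficulty rises
print(round(d, 2))               # 0.53
```

A real game would of course use a richer signal than raw win rate, but
even this crude loop captures the "adapt automatically" idea the track
is after.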
There is a relatively small group of enthusiastic researchers who
investigate the use of machine-learning techniques to enhance computer
games. Our aim is to bring them together at the CGAIDE 2004
conference, by having a special track on "Learning and Adaptation in
Games", with a good selection of high quality papers in this research
area. We also strive to use this track to increase the computer-games
industry's awareness of machine-learning techniques.
Topics of interest:
The special track on "Learning and Adaptation in Games" will cover the
application of machine-learning techniques to all aspects of computer
games. The track is limited neither to specific types of games, nor to
specific machine-learning techniques.
Draft paper submission: 30 July 2004
Notification of acceptance: 23 August 2004
Camera-ready submission deadline: 13 September 2004
Accepted papers will be published in the conference proceedings.
Authors of the best of the accepted papers will be invited to publish
their papers in the on-line International Journal of Intelligent Games
and Simulation (http://www.scit.wlv.ac.uk/~cm1822/ijigs.htm).
Curt Bererton wrote:
> But really smart AI is not the same as a fun AI.
Well, it's just not really smart about the correct problem domain.
> The other problem is that very few academics have ever been
> a game designer... they don't know what is fun to play against.
Heh! Maybe this will slowly change?
> My perspective is that the goal for academic AI research applied to
> games should be twofold: 1. Make an AI that's fun to play against 2.
> If #1 is hard to define, make an AI that is easy for a game designer
> to tune to their specifications.
Yes, a *lot* of valuable work could be done on #2, for game-design
usability. We'd really like systems that don't take forever to implement or
test. Actually that's true of all software development.
I have seen one piece of AI work that may be aimed in the right
direction for games. It was a first-round entry in the Independent
Games Festival 2004: "Pearl Demon" from Zoesis Studios
(www.zoesis.com). The demon avatar reacted to the player's actions,
got angry, did responsive stuff, etc. My main problem with the game,
and the reason I don't think it advanced to the final round despite
all the (claimed) technology dumped into it, is that the game was
trivial in length and scope. One could have coded it up easily enough
in any scripting language. I wrote to the game's authors and told them
I'd like to see a large-scale demonstration of their technology,
something that proves their (supposedly) automated systems are a big
'win' over hand-cobbled scripts.
On Wed, 02 Jun 2004 17:17:46 -0400, Curt Bererton wrote:
>I certainly agree that a good chunk of academia hasn't figured out that
>thrashing the player is not the desired goal for computer games.
No, but for a strategy game, putting up a halfway creditable fight
without cheating would be a good start. Honestly, "this game's no fun
because the AI always kicks the player's arse" would be a very nice
problem to have.
(To repeat: I'm talking about strategy games; FPS AI always cheats by
virtue of the interface between it and the game engine, so it's both a
more complicated and less interesting problem.)
>My perspective is that the goal for academic AI research applied to
>games should be twofold: 1. Make an AI that's fun to play against 2. If
>#1 is hard to define, make an AI that is easy for a game designer to
>tune to their specifications.
>Since there are no well defined metrics for #1, I would recommend
>pursuing #2 until we can figure out how to transfer some game designer
>knowledge into an automated method.
There are no well defined metrics for #2 either.
>I'm going to a "challenges in game AI" workshop at AAAI in San Jose in
>July (same idea except slightly broader to the advertisement for
>CGAIDE). The question for folks in this group is: Do you disagree with
>my statement of goals for academic AI research applied to games? Tell me
>why, and maybe I can bring some of that response from the game community
>back to academia in July.
I disagree, for the above reasons. Improving quality of play (to
repeat again, I'm talking about strategy games here, not the likes of
FPS) would be far and away the best step forward, and it also has the
huge advantage of having reasonably well defined criteria for success.
But I'd definitely be interested in hearing the feedback you get!
"Sore wa himitsu desu."
> No, but for a strategy game, putting up a halfway creditable fight
> without cheating would be a good start. Honestly, "this game's no fun
> because the AI always kicks the player's arse" would be a very nice
> problem to have.
> (To repeat: I'm talking about strategy games; FPS AI always cheats by
> virtue of the interface between it and the game engine, so it's both a
> more complicated and less interesting problem.)
OK, I certainly agree about putting up a halfway creditable fight
without cheating. I was recently talking to some RTS folks, and what
they were saying was that typically, a strategy game AI would totally
crush a player without cheating, *until* the players found the flaw that
the AI programmer didn't foresee, at which point they would
systematically crush it using the same technique until they got bored.
Certainly there is progress to be made in strategy games. However,
rule-based and state-machine-based AI (the most common in the
industry) seems to be great until the players find the flaw... what
techniques can we apply? Stochastic games and POMDPs seem far from
being able to solve anything the size of a real-time strategy game
(correct me if I'm wrong). Is there a solid method available now that
can really beat the player without cheating at all? By not cheating, I
mean using realistic models of how the AI's units can observe the
internal game state.
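To make the "not cheating" criterion concrete, here is a minimal
sketch (all names hypothetical) of a rule-based opponent whose rules
only fire on what its units can actually see, i.e. an observation
model instead of reading the internal game state directly:

```python
# Sketch of a non-cheating rule-based opponent: the AI's two-state
# rule consults a fog-of-war visibility check rather than the full
# game state. All names are invented for illustration.
from dataclasses import dataclass

@dataclass
class Unit:
    x: float
    y: float
    owner: str  # "ai" or "player"

def visible(observer: Unit, target: Unit, sight_range: float = 8.0) -> bool:
    """Fog-of-war check: the AI may only react to units in sight range."""
    dx, dy = observer.x - target.x, observer.y - target.y
    return dx * dx + dy * dy <= sight_range * sight_range

def ai_state(own_units: list[Unit], enemies: list[Unit]) -> str:
    """Two-state rule: attack if any enemy is actually seen, else patrol.
    An AI that consulted `enemies` without `visible()` would be cheating."""
    seen = [e for e in enemies for o in own_units if visible(o, e)]
    return "attack" if seen else "patrol"

ai = [Unit(0, 0, "ai")]
far_enemy = [Unit(50, 50, "player")]
near_enemy = [Unit(3, 4, "player")]
print(ai_state(ai, far_enemy))   # patrol: enemy outside sight range
print(ai_state(ai, near_enemy))  # attack: enemy within range 8
```

The exploitability complaint above fits the same sketch: once players
discover the fixed sight range and the fixed two-state rule, they can
systematically bait it, which is exactly the flaw hand-written rules
can't patch on their own.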
The other problem is that the game industry avoids learning methods
because developers cannot predict the outcome of the learner in every
situation, leading to a quality-assurance nightmare. Black & White
uses a very limited model of learning, where the developers can
actually test the end result for everything the creature can learn.
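That QA-friendly constraint can be sketched (illustratively, not as
Black & White's actual code) as a learner whose weights are clamped to
a small discrete range, so the set of behaviours it can ever reach is
finite and enumerable for testing:

```python
# Sketch of a QA-testable learner: feedback nudges per-action weights,
# but weights are clamped to a small discrete range, so every
# reachable behaviour table can be enumerated and tested in advance.
# All names and constants are invented for illustration.

CLAMP_MIN, CLAMP_MAX, STEP = -2, 2, 1

class BoundedLearner:
    def __init__(self, actions):
        self.weights = {a: 0 for a in actions}

    def feedback(self, action, reward_sign):
        """reward_sign is +1 (praise) or -1 (punishment)."""
        w = self.weights[action] + reward_sign * STEP
        self.weights[action] = max(CLAMP_MIN, min(CLAMP_MAX, w))

    def choose(self):
        # Deterministic argmax (alphabetical tie-break) keeps the
        # behaviour reproducible for QA.
        return max(self.weights, key=lambda a: (self.weights[a], a))

learner = BoundedLearner(["eat", "attack", "sleep"])
for _ in range(10):            # repeated punishment cannot escape the clamp
    learner.feedback("attack", -1)
learner.feedback("eat", +1)
print(learner.choose())        # eat
```

With three actions and five possible weight values each, there are
only 5**3 = 125 behaviour tables to certify, which is the whole point
of bounding the learner.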
Secondly, I disagree with the implication that strategy games are
lower-hanging fruit for good AI than FPS or RPG games. Check out what
I'm presenting at AAAI: (avi movie on my website)
The basic idea is that by removing some of the knowledge that the FPS
agent uses to cheat, we really can make something smarter. Not only
that, wouldn't it be cool to show the player exactly how the AI beat
them without cheating?
>>My perspective is that the goal for academic AI research applied to
>>games should be twofold: 1. Make an AI that's fun to play against 2. If
>>#1 is hard to define, make an AI that is easy for a game designer to
>>tune to their specifications.
>>Since there are no well defined metrics for #1, I would recommend
>>pursuing #2 until we can figure out how to transfer some game designer
>>knowledge into an automated method.
> There are no well defined metrics for #2 either.
Good point, "easy to tune by the game designer" does leave something to
be desired in terms of metrics. I guess what I should say is that #2 is
an easier problem than #1. I imagine it would also be much easier to
develop metrics for #2 than #1.
> But I'd definitely be interested in hearing the feedback you get!
Well, I'll let you know. Typical feedback is: "well that couldn't
possibly run fast enough to be used in a real game". We'll see how it