Archived from groups: comp.ai.games
Christoph Nahr wrote:
> On Fri, 10 Dec 2004 13:55:33 -0500, Carl Burke <cburke@mitre.org>
> wrote:
>
>
>>Since the descendants function depends on the currently open node,
>>the choices along one path don't affect choices along another path.
>>You can calculate which nodes are a single move away from a location
>>for a unit type given your rules for movement, the static map data,
>>and any mobile obstructions or dynamic terrain changes (like other units).
>
>
> Yes, correct. I have an algorithm to do that, but recursively
> computing *all* reachable locations from all previous locations, until
> we hit the target, would take an inordinate amount of time. As I
> said, I need an algorithm that allows me to plan 20+ turn moves on
> maps with 10,000 locations or more.
>
>
>>That may introduce some complications during coding, since you might
>>need to do more than just a simple lookup of static links out of a
>>map location, but it isn't going to be a big deal.
>
>
> My A* implementation already is dynamic, in that sense. Units are
> queried at every step whether they can enter a given descendant.
>
> That's not the issue. The problem, as I see it, is that A* optimizes
> its path by joining path segments and choosing the less expensive
> path. But what if the complete path that was constructed from such
> segments cannot be traversed due to the end-of-turn conditions, or is
> suboptimal under such conditions? If I ignore these conditions I
> might end up with invalid paths; if I use them as cut-off conditions
> when looking for descendants I might end up with suboptimal paths.
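One way to resolve that dilemma, which neither poster spells out, is to make the search state richer than a bare location: search over (location, movement-points-left) pairs, so the end-of-turn rule is checked exactly on the hex where a turn actually ends, instead of being used as a blanket cut-off. The following is a minimal sketch under assumed rules (a fixed movement-point allowance per turn, a `can_end_turn` callback, and per-step entry costs); all names are illustrative, not anyone's actual implementation:

```python
import heapq
from math import inf

def turn_aware_astar(start, goal, successors, can_end_turn, h, mp_per_turn):
    """A* over (location, movement-points-left) states.

    g counts elapsed movement points, and ending a turn early 'pays'
    for the unused points, so fewer wasted points means a better path.
    """
    start_state = (start, mp_per_turn)
    best = {start_state: 0}
    open_heap = [(h(start), 0, start_state)]
    while open_heap:
        f, g, (loc, mp) = heapq.heappop(open_heap)
        if loc == goal:
            return g
        if g > best.get((loc, mp), inf):
            continue                      # stale queue entry
        # Option 1: step to a neighbour within the current turn.
        cand = [(loc2, mp - c, g + c)
                for loc2, c in successors(loc) if c <= mp]
        # Option 2: end the turn here (wasting 'mp' points) -- only
        # legal where the end-of-turn conditions allow it.
        if mp < mp_per_turn and can_end_turn(loc):
            cand.append((loc, mp_per_turn, g + mp))
        for nloc, nmp, ng in cand:
            if ng < best.get((nloc, nmp), inf):
                best[(nloc, nmp)] = ng
                heapq.heappush(open_heap, (ng + h(nloc), ng, (nloc, nmp)))
    return None                           # goal unreachable
```

Because the turn boundary is part of the state, paths that must detour to a legal end-of-turn hex are compared fairly against direct ones, and the usual A* optimality argument still applies (given an admissible h in movement points).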
>
>
>>You'll need to
>>attach board configuration information to the nodes anyway so that you
>>can evaluate each of the intermediate positions (because the opponent(s)
>>will react to your moves) and so that you can evaluate moves in order
>>of quality rather than in strict depth-first order (which would let you
>>keep a single board you would modify for each candidate).
>
>
> That's not relevant to my application. On the level of the
> pathfinding algorithm, the rules are completely unknown. The
> algorithm just moves an agent along a graph, with some callback
> function that tells the algorithm where the agent can go. Any enemy
> reactions are considered at the AI level, not the pathfinding level.
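A rule-agnostic pathfinder of the kind described here boils down to a small callback interface that the unit implements. This is a hypothetical sketch of such an interface, with a toy unit on a one-dimensional strip to show the shape (all names are illustrative):

```python
from typing import Hashable, Iterable, Protocol, Tuple

class PathAgent(Protocol):
    """The pathfinder knows nothing about the game rules; it queries
    the agent at every step for the locations it may enter."""

    def successors(self, loc: Hashable) -> Iterable[Tuple[Hashable, float]]:
        """Yield (neighbour, entry_cost) pairs legal for this agent."""
        ...

class StripUnit:
    """Toy implementation: a unit on a 1-D strip of cells that cannot
    enter blocked cells."""

    def __init__(self, length: int, blocked: set):
        self.length = length
        self.blocked = blocked

    def successors(self, loc):
        for nxt in (loc - 1, loc + 1):
            if 0 <= nxt < self.length and nxt not in self.blocked:
                yield (nxt, 1.0)
```

Keeping the algorithm behind such an interface is what lets the same search code serve many unit types and rule sets.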
>
>
>>At least it seems that way to me. I've never tried to use A* with any
>>explicit consideration of turn-taking. Instead, I've used it to calculate
>>the best path with the current state of knowledge, holding the world frozen,
>>and replanning when necessary. When you start including turn-taking,
>>you've really changed the nature of the algorithm into something like
>>an alpha-beta search.
>
>
> Apparently I didn't express myself too clearly. I intend to offer
> multi-turn pathfinding strictly as a way to determine whether a given
> target location can be reached at all, and if so, in what time.
>
> I'm not talking about games like chess that are played on a tiny board
> densely packed with pieces. I'm talking about wargames played on huge
> maps that are mostly empty. When a multi-turn move is planned at all,
> it's because there are no opposing units nearby, so enemy actions are
> likely irrelevant to the found path, at least initially. If later
> path sections become invalid, well, I don't care about that. The
> AI completely re-plans all of its actions every turn anyway.
>
> The situations that would block a unit from occupying a hexagon are
> typically *not* occupation by enemy units but perhaps occupation by
> friendly units (i.e. stacking limits) or the requirement to occupy a
> specific terrain at the end of each turn. This is certainly part of
> the current state of knowledge, but it's a specific part that A*
> apparently cannot take into account, as far as I can tell.
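The two blocking rules named above (stacking limits among friendly units and a required terrain class at turn end) fold naturally into a single end-of-turn predicate that a turn-aware search could call on the hex where a turn would end. A minimal sketch, with assumed data shapes and hypothetical names throughout:

```python
def make_end_turn_check(friendly_count, terrain, stack_limit, allowed_terrain):
    """Build a predicate answering: may a unit legally end its turn here?

    friendly_count: dict hex -> number of friendly units already there
    terrain:        dict hex -> terrain class string
    """
    def can_end_turn(hex_id):
        under_limit = friendly_count.get(hex_id, 0) < stack_limit
        good_ground = terrain.get(hex_id) in allowed_terrain
        return under_limit and good_ground
    return can_end_turn
```

Because the predicate is a closure over the current state of knowledge, it can be rebuilt each turn when the AI re-plans, without touching the search code.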
Christoph,
It definitely seems to me that you are mixing things up. Carl is
absolutely right; A* works the way he describes. And if you want to
keep track of "other agents' decisions", you have to move to other
types of algorithms like alpha-beta and its optimized variants:
Falpha-beta, Palpha-beta, Lalpha-beta, Test, MTD(f) and the like.
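For readers unfamiliar with the jump from single-agent pathfinding to the adversarial search recommended here, this is a minimal negamax form of alpha-beta; the moves/apply_move/evaluate callbacks are hypothetical stand-ins for a real game interface:

```python
from math import inf

def alphabeta(state, depth, alpha, beta, moves, apply_move, evaluate):
    """Negamax alpha-beta: evaluate() scores a position from the
    perspective of the player about to move."""
    legal = list(moves(state))
    if depth == 0 or not legal:
        return evaluate(state)
    value = -inf
    for m in legal:
        value = max(value,
                    -alphabeta(apply_move(state, m), depth - 1,
                               -beta, -alpha, moves, apply_move, evaluate))
        alpha = max(alpha, value)
        if alpha >= beta:
            break  # cutoff: the opponent has a better option elsewhere
    return value
```

The variants listed above (MTD(f) and friends) are refinements of exactly this loop, mostly differing in how the (alpha, beta) window is chosen.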