Pathfinding with partial occupation?

Archived from groups: comp.ai.games

Note: "Partial occupation" is a term I just made up. :p

What I'm looking for is a pathfinding algorithm for turn-based games
that can handle movement rules where pieces can *occupy* only a subset
of those squares that they can *move over*.

Example: the knight in chess can move over squares occupied by other
units but can only come to rest on squares not occupied by friendly
pieces.

A* appears to be unable to handle this situation because its path
optimization relies on the assumption that the question of permanent
occupation of a location is irrelevant to pathfinding. This assumption
is invalid for multi-turn movements because the moving units would
have to rest somewhere along the path.
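A toy illustration of the failure mode (hypothetical map and names; the
'x' cell may be crossed but never rested on, and the unit has 1 movement
point per turn):

```python
import heapq

# 'x' = may be crossed but never occupied at end of turn; '#' = impassable.
MAP = [".x.",
       "###",
       "..."]

def passable(r, q):
    return 0 <= r < 3 and 0 <= q < 3 and MAP[r][q] != '#'

def shortest_path(start, goal):
    """Plain A* with a Manhattan heuristic, ignoring occupation rules."""
    frontier = [(0, [start])]
    seen = {start}
    while frontier:
        _, path = heapq.heappop(frontier)
        r, q = path[-1]
        if (r, q) == goal:
            return path
        for step in ((r - 1, q), (r + 1, q), (r, q - 1), (r, q + 1)):
            if passable(*step) and step not in seen:
                seen.add(step)
                h = abs(step[0] - goal[0]) + abs(step[1] - goal[1])
                heapq.heappush(frontier, (len(path) + h, path + [step]))
    return None

path = shortest_path((0, 0), (0, 2))
print(path)                       # [(0, 0), (0, 1), (0, 2)]
# With 1 movement point per turn, every cell after the first is an
# end-of-turn stop -- and (0, 1) is a cell the unit may not rest on,
# so the "shortest" path cannot actually be executed.
bad_stops = [c for c in path[1:] if MAP[c[0]][c[1]] == 'x']
print(bad_stops)                  # [(0, 1)]
```

The map-level shortest path exists, but the turn structure makes it
untraversable.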

Now there is always exhaustive search, but that's probably not a good
idea for wargame maps with 10,000 hexagons... any better ideas?
--
http://www.kynosarges.de
 

Christoph Nahr wrote:
> Note: "Partial occupation" is a term I just made up. :p
>
> What I'm looking for is a pathfinding algorithm for turn-based games
> that can handle movement rules where pieces can *occupy* only a subset
> of those squares that they can *move over*.
>
> Example: the knight in chess can move over squares occupied by other
> units but can only come to rest on squares not occupied by friendly
> pieces.
>
> A* appears to be unable to handle this situation because its path
> optimization relies on the assumption that the question of permanent
> occupation of a location is irrelevant to pathfinding. This assumption
> is invalid for multi-turn movements because the moving units would
> have to rest somewhere along the path.
>
> Now there is always exhaustive search, but that's probably not a good
> idea for wargame maps with 10,000 hexagons... any better ideas?

Sorry, man, but A* (and other heuristic search algorithms like IDA*,
RBFS, RTA*, etc.) does not rely on any such condition! That has
nothing to do with the search algorithm itself.

On the other hand, A* is an algorithm belonging to the class known as
single-agent search algorithms. Therefore, it should never be applied
to chess or games where more than one player is involved.

Finally, your problem seems to be related only to the "descendants"
function used by *ANY* search algorithm. At some point, *ANY* search
algorithm will eventually have to expand a selected node. This is
usually accomplished by invoking a "descendants" method which returns
all the descendants of the specified node. You are free to introduce
there as many constraints as necessary for computing all the positions
to which a piece (or unit) can move.
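For instance, a minimal sketch of such a "descendants" function might
look like this (all names invented for illustration; a unit on a square
grid with impassable cells and cells that may be crossed but not ended
on):

```python
def descendants(cell, board, blocked, no_stop, ends_turn):
    """Cells a unit may move to from `cell`.

    blocked   -- cells it can never enter (impassable terrain)
    no_stop   -- cells it may cross but not end a turn on
    ends_turn -- True when this step would end the unit's turn
    """
    r, q = cell
    for nxt in ((r - 1, q), (r + 1, q), (r, q - 1), (r, q + 1)):
        if nxt not in board or nxt in blocked:
            continue
        if ends_turn and nxt in no_stop:
            continue                 # the constraint lives here, not in A*
        yield nxt

board = {(r, q) for r in range(3) for q in range(3)}
print(sorted(descendants((0, 0), board, {(1, 1)}, {(0, 1)}, False)))
# [(0, 1), (1, 0)]
print(sorted(descendants((0, 0), board, {(1, 1)}, {(0, 1)}, True)))
# [(1, 0)]
```

The search algorithm itself never changes; only the expansion rule does.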

Best regards,
 

On Fri, 10 Dec 2004 15:12:10 +0100, Carlos Linares Lopez
<carlos.linares@uc3m.es> wrote:

>Sorry, man, but A* (and other heuristic search algorithms like IDA*,
>RBFS, RTA*, etc.) does not rely on any such condition! That has
>nothing to do with the search algorithm itself.

I'd say you're wrong if I even understood what exactly you're
referring to...

>On the other hand, A* is an algorithm belonging to the class known as
>single-agent search algorithms. Therefore, it should never be applied
>to chess or games where more than one player is involved.

That's definitely complete nonsense.

>Finally, your problem seems to be related only to the "descendants"
>function used by *ANY* search algorithm. At some point, *ANY* search
>algorithm will eventually have to expand a selected node. This is
>usually accomplished by invoking a "descendants" method which returns
>all the descendants of the specified node. You are free to introduce
>there as many constraints as necessary for computing all the positions
>to which a piece (or unit) can move.

Sure, I could introduce extra conditions when adding descendants, but
that would later throw off the A* optimization: descendants that I
discarded because they were not valid placement locations along the
first path might be perfectly acceptable along another path.
--
http://www.kynosarges.de
 

Christoph Nahr wrote:
> On Fri, 10 Dec 2004 15:12:10 +0100, Carlos Linares Lopez
> <carlos.linares@uc3m.es> wrote:
....
> >Finally, your prob seems to be related only to the "descendants"
> >function used by *ANY* search algorithm. At some point, *ANY* search
> >algorithm will ever have to expand a selected node. This is usually
> >accomplished by invoking a method descendants which returns all the
> >descendants of the specified node. You are free to introduce there as
> >many constraints as neccessary for computing all the positions where a
> >piece (or unit) can move to.
>
> Sure, I could introduce extra conditions when adding descendants, but
> that would later throw off the A* optimization: descendants that I
> discarded because they were not valid placement locations along the
> first path might be perfectly acceptable along another path.

Since the descendants function depends on the currently open node,
the choices along one path don't affect choices along another path.
You can calculate which nodes are a single move away from a location
for a unit type given your rules for movement, the static map data,
and any mobile obstructions or dynamic terrain changes (like other units).
That may introduce some complications during coding, since you might
need to do more than just a simple lookup of static links out of a
map location, but it isn't going to be a big deal. You'll need to
attach board configuration information to the nodes anyway so that you
can evaluate each of the intermediate positions (because the opponent(s)
will react to your moves) and so that you can evaluate moves in order
of quality rather than in strict depth-first order (which would let you
keep a single board you would modify for each candidate).

At least it seems that way to me. I've never tried to use A* with any
explicit consideration of turn-taking. Instead, I've used it to calculate
the best path with the current state of knowledge, holding the world frozen,
and replanning when necessary. When you start including turn-taking,
you've really changed the nature of the algorithm into something like
an alpha-beta search.

--
Carl Burke
cburke@mitre.org
 

On Fri, 10 Dec 2004 13:55:33 -0500, Carl Burke <cburke@mitre.org>
wrote:

>Since the descendants function depends on the currently open node,
>the choices along one path don't affect choices along another path.
>You can calculate which nodes are a single move away from a location
>for a unit type given your rules for movement, the static map data,
>and any mobile obstructions or dynamic terrain changes (like other units).

Yes, correct. I have an algorithm to do that, but recursively
computing *all* reachable locations from all previous locations, until
we hit the target, would take an inordinate amount of time. As I
said, I need an algorithm that allows me to plan 20+ turn moves on
maps with 10,000 locations or more.

>That may introduce some complications during coding, since you might
>need to do more than just a simple lookup of static links out of a
>map location, but it isn't going to be a big deal.

My A* implementation already is dynamic, in that sense. Units are
queried at every step whether they can enter a given descendant.

That's not the issue. The problem, as I see it, is that A* optimizes
its path by joining path segments, and choosing the less expensive
path. But what if the complete path that was constructed from such
segments cannot be traversed due to the end-of-turn conditions, or is
suboptimal under such conditions? If I ignore these conditions I
might end up with invalid paths; if I use them as cut-off conditions
when looking for descendants I might end up with suboptimal paths.
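To illustrate the second horn of that dilemma on a toy map (hypothetical
names; 'x' may be crossed mid-turn but not occupied at end of turn):

```python
from collections import deque

# 'x' may be crossed mid-turn but not occupied at end of turn; '#' impassable.
MAP = [".x.",
       ".#.",
       "..."]

def steps(start, goal, prune_x):
    """Breadth-first shortest path length; optionally drop 'x' cells
    from the descendants entirely, as a crude cut-off would."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        (r, q), d = queue.popleft()
        if (r, q) == goal:
            return d
        for nr, nq in ((r - 1, q), (r + 1, q), (r, q - 1), (r, q + 1)):
            if not (0 <= nr < 3 and 0 <= nq < 3) or (nr, nq) in seen:
                continue
            if MAP[nr][nq] == '#' or (prune_x and MAP[nr][nq] == 'x'):
                continue
            seen.add((nr, nq))
            queue.append(((nr, nq), d + 1))
    return None

# A unit with 2 movement points per turn could cross the 'x' cell and
# finish in a single turn; pruning 'x' from the descendants forces a
# six-step detour (three turns) even though the short route was legal.
print(steps((0, 0), (0, 2), prune_x=False))   # 2
print(steps((0, 0), (0, 2), prune_x=True))    # 6
```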

>You'll need to
>attach board configuration information to the nodes anyway so that you
>can evaluate each of the intermediate positions (because the opponent(s)
>will react to your moves) and so that you can evaluate moves in order
>of quality rather than in strict depth-first order (which would let you
>keep a single board you would modify for each candidate).

That's not relevant to my application. On the level of the
pathfinding algorithm, the rules are completely unknown. The
algorithm just moves an agent along a graph, with some callback
function that tells the algorithm where the agent can go. Any enemy
reactions are considered at the AI level, not the pathfinding level.

>At least it seems that way to me. I've never tried to use A* with any
>explicit consideration of turn-taking. Instead, I've used it to calculate
>the best path with the current state of knowledge, holding the world frozen,
>and replanning when necessary. When you start including turn-taking,
>you've really changed the nature of the algorithm into something like
>an alpha-beta search.

Apparently I didn't express myself too clearly. I intend to offer
multi-turn pathfinding strictly as a way to determine whether a given
target location can be reached at all, and if so, in what time.

I'm not talking about games like chess that are played on a tiny board
densely packed with pieces. I'm talking about wargames played on huge
maps that are mostly empty. When a multi-turn move is planned at all,
it's because there are no opposing units nearby, so enemy actions are
likely irrelevant to the found path, at least initially. If later
path sections become invalid, well, I don't care about that. The
AI completely re-plans all of its actions every turn anyway.

The situations that would block a unit from occupying a hexagon are
typically *not* occupation by enemy units but perhaps occupation by
friendly units (i.e. stacking limits) or the requirement to occupy a
specific terrain at the end of each turn. This is certainly part of
the current state of knowledge, but it's a specific part that A*
apparently cannot take into account, as far as I can tell.
--
http://www.kynosarges.de
 

Christoph Nahr wrote:
....
> Apparently I didn't express myself too clearly. I intend to offer
> multi-turn pathfinding strictly as a way to determine whether a given
> target location can be reached at all, and if so, in what time.
....
> The situations that would block a unit from occupying a hexagon are
> typically *not* occupation by enemy units but perhaps occupation by
> friendly units (i.e. stacking limits) or the requirement to occupy a
> specific terrain at the end of each turn. This is certainly part of
> the current state of knowledge, but it's a specific part that A*
> apparently cannot take into account, as far as I can tell.

Christoph,

It definitely seems to me that you are mixing things up. Carl is
absolutely right, A* works the way he mentions. And, just in case you
want to keep track of "other agents' decisions", you have to move to
other types of algorithms like Alpha-beta and its optimized versions:
Falpha-beta, Palpha-beta, Lalpha-beta, test, MTD(f) and the like.
 

On Mon, 13 Dec 2004 14:53:09 +0100, Carlos Linares Lopez
<carlos.linares@uc3m.es> wrote:

>It definitely seems to me that you are mixing things up. Carl is
>absolutely right, A* works the way he mentions.

Since you always quote entire discussions with no indication what
specific points you're replying to, I have no idea where you see a
difference between Carl and me about how A* works. I don't see one.

To reiterate: I'm already using A* for finding paths that will take
multiple turns to execute. Works just great.

>And, just in case you want to
>keep track of "other agents' decisions"

I thought I'd written specifically that I do NOT want to do this...
--
http://www.kynosarges.de
 

Christoph Nahr wrote:
....
> My A* implementation already is dynamic, in that sense. Units are
> queried at every step whether they can enter a given descendant.
>
> That's not the issue. The problem, as I see it, is that A* optimizes
> its path by joining path segments, and choosing the less expensive
> path. But what if the complete path that was constructed from such
> segments cannot be traversed due to the end-of-turn conditions, or is
> suboptimal under such conditions? If I ignore these conditions I
> might end up with invalid paths; if I use them as cut-off conditions
> when looking for descendants I might end up with suboptimal paths.
....
> The situations that would block a unit from occupying a hexagon are
> typically *not* occupation by enemy units but perhaps occupation by
> friendly units (i.e. stacking limits) or the requirement to occupy a
> specific terrain at the end of each turn. This is certainly part of
> the current state of knowledge, but it's a specific part that A*
> apparently cannot take into account, as far as I can tell.

Ok, I think I have a clearer idea of what you're looking for.
Unfortunately, I don't know how to solve the problem. I can think of some
workarounds by which you can add information for use during route planning,
and I'm sure you can too, but I don't know of a specific algorithm that
does simultaneous route planning for multiple units.

One possible approach to handling stacking limits might be to plan routes
for your units in some reasonable order, maybe based on priority or on
strictness of constraints, and then use that information when routing
later units. For example, the descendants function might allow movement
into a hex unless it had N occupants whose paths were already going to be
stopping in that hex on that turn. (Or you could modify the cost function
to vastly increase the cost of entry.) The first units whose paths are plotted
will get the best routes, while later units would take paths that
were optimal given the presence of higher-priority movement. But that does
assume that you are able and willing to modify either the cost function
or descendant function to reflect additional state.
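A minimal sketch of that reservation idea (all names invented; units
planned in priority order record where their path stops each turn):

```python
from collections import defaultdict

STACK_LIMIT = 2                       # assumed stacking rule
reservations = defaultdict(int)       # (hex, turn) -> planned occupants

def reserve(end_of_turn_hexes):
    """Record the end-of-turn stops of a freshly planned path."""
    for turn, hex_id in enumerate(end_of_turn_hexes, start=1):
        reservations[(hex_id, turn)] += 1

def may_stop(hex_id, turn):
    """Consulted by the descendants (or cost) function for later units."""
    return reservations[(hex_id, turn)] < STACK_LIMIT

reserve(["A2", "B3"])                 # first unit: stops in A2, then B3
reserve(["A2", "C1"])                 # second unit: also stops in A2 turn 1
print(may_stop("A2", 1))              # False -- hex full on turn 1
print(may_stop("A2", 2))              # True  -- free again on turn 2
```

The cost-inflation variant would consult the same table but return a
penalty instead of a hard refusal.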

Stopping in specific terrains is more difficult, since you would need to
plan your route move-by-move and treat each of those single-move paths
as an A* node in the overall path.

Hope this helps in some way.

--
Carl Burke
cburke@mitre.org
 

On Mon, 13 Dec 2004 18:11:56 -0500, Carl Burke <cburke@mitre.org>
wrote:

>Ok, I think I have a clearer idea of what you're looking for.
>Unfortunately, I don't know how to solve the problem. I can think of some
>workarounds by which you can add information for use during route planning,
>and I'm sure you can too, but I don't know of a specific algorithm that
>does simultaneous route planning for multiple units.

Simultaneous planning is not required. Moving one unit (or partial
stack) after another is quite okay. The movement order is controlled
by the computer player, either by unit/target priority or by
generating alternative command sequences to see which order is best.

>One possible approach to handling stacking limits might be to plan routes
>for your units in some reasonable order, maybe based on priority or on
>strictness of constraints, and then use that information when routing
>later units. For example, the descendants function might allow movement
>into a hex unless it had N occupants whose paths were already going to be
>stopping in that hex on that turn.

My algorithms already do that. :) The callback method that informs
the pathfinder whether the currently moving units can enter a given
target location will evaluate any stacking limits or other known
conditions, and respond accordingly that they cannot enter.

(The AI immediately executes any planned command on its own internal
copy of the configuration, so any prior movement of friendly units has
already been executed and is known to the currently moving units.)

The only problem is that this method cannot take into account the
difference between moving across a hexagon and occupying it...

>Stopping in specific terrains is more difficult, since you would need to
>plan your route move-by-move and treat each of those single-move paths
>as an A* node in the overall path.

The "stopping" part is the whole extent of my trouble. But your idea
here is quite interesting! The descendants would then be the entirety
of hexagons that can be reached in one turn, and I'd try to find an
optimal path between end-of-turn positions in a higher-level A* run.
That should work, I think. Thanks!
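A rough sketch of that two-level scheme on a toy square grid (all names
and rules invented; 'x' cells can be crossed mid-turn but not occupied
at end of turn):

```python
import heapq

MAP = ["....",
       ".xx.",
       "...."]

def passable(c):
    r, q = c
    return 0 <= r < len(MAP) and 0 <= q < len(MAP[0]) and MAP[r][q] != '#'

def can_occupy(c):
    return passable(c) and MAP[c[0]][c[1]] != 'x'

def neighbors(c):
    r, q = c
    return [n for n in ((r - 1, q), (r + 1, q), (r, q - 1), (r, q + 1))
            if passable(n)]

def one_turn_stops(start, mp):
    """Low level: cells reachable within `mp` steps this turn on which
    the unit may also end its turn."""
    dist, frontier = {start: 0}, [(0, start)]
    while frontier:
        d, cell = heapq.heappop(frontier)
        if d > dist[cell]:
            continue
        for nxt in neighbors(cell):
            if d + 1 <= mp and d + 1 < dist.get(nxt, mp + 1):
                dist[nxt] = d + 1
                heapq.heappush(frontier, (d + 1, nxt))
    return {c for c in dist if c != start and can_occupy(c)}

def multi_turn_path(start, goal, mp):
    """High level: A* whose nodes are end-of-turn positions and whose
    edges each cost one turn."""
    def h(c):  # admissible: minimum turns ignoring every obstacle
        return (abs(c[0] - goal[0]) + abs(c[1] - goal[1]) + mp - 1) // mp
    g, parent = {start: 0}, {start: None}
    frontier = [(h(start), start)]
    while frontier:
        _, cell = heapq.heappop(frontier)
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]          # successive end-of-turn cells
        for stop in one_turn_stops(cell, mp):
            if g[cell] + 1 < g.get(stop, g[cell] + 2):
                g[stop] = g[cell] + 1
                parent[stop] = cell
                heapq.heappush(frontier, (g[stop] + h(stop), stop))
    return None

path = multi_turn_path((1, 0), (1, 3), mp=2)
print(path)               # three legs, every stop an occupiable cell
print(len(path) - 1, "turns")
```

Every node of the outer search is a legal resting place by construction,
so the end-of-turn conditions can no longer invalidate the result.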
--
http://www.kynosarges.de