public inbox for xconq7@sourceware.org
 help / color / mirror / Atom feed
* Re: AI now goes after bases
@ 2004-01-03  6:06 Hans Ronne
  2004-01-03  6:27 ` Eric McDonald
  0 siblings, 1 reply; 12+ messages in thread
From: Hans Ronne @ 2004-01-03  6:06 UTC (permalink / raw)
  To: xconq7

>> First, it would make it easier to write and test different AIs, using the
>> existing plug-in structure (e.g. mplayer vs. iplayer). Right now, there is
>> so much ai code in plan.c and task.c that a new ai would not make much of a
>> difference.
>
>Yes, but aside from all the little utility calculations and
>evaluators, some of that AI code is directly associated with plans
>or with tasks. Plan-related generic AI should either be in plan.c
>or else a new file aiplan.c.

Agreed. My point was essentially that the latter file already exists as
ai.c. That's where I would like to move some more ai stuff from plan.c.

>So how do you reconcile which function to use?
>On initialization, you create registries of function pointers for
>different families of functionality, and initially the generic AI
>would register all of the ai_* functions in their appropriate
>slots. Then as AI-players did their initializations, they could
>opt to use the generic AI functions, or register their own
>substitutes in the appropriate slots.

Good point. And easy to implement. We could just use the mplayer as the
default. It's even easier to make a copy of the mplayer, though, and hack
it. That's how we made the iplayer.

>Then, things like plan_offensive simply become wrappers for
>invoking a function pointer based on a lookup in, say, a plan
>registry, based on the calling AI.
>
>> Second, a better separation of AI and task level code is desirable since
>> the latter is used also by the human interface. The task code should
>> therefore be only about how to implement specific orders, whether given by
>> the ai or a human. Accordingly, a task should just be a logical chain of
>> actions without any tactical or strategic considerations.
>
>> This can be done already now, at the ai level
>
>Yeah, but now we essentially have duplicated code with
>go_after_victim and ai_go_after_victim. I personally would just
>rename things like go_after_victim to ai_go_after_victim, leave
>them in plan.c, and go from there.

Agreed. Except I think ai_go_after_victim should stay where it is in ai.c.

Hans


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
  2004-01-03  6:06 AI now goes after bases Hans Ronne
@ 2004-01-03  6:27 ` Eric McDonald
  2004-01-03 14:40   ` Hans Ronne
  0 siblings, 1 reply; 12+ messages in thread
From: Eric McDonald @ 2004-01-03  6:27 UTC (permalink / raw)
  To: Hans Ronne; +Cc: xconq7

On Sat, 3 Jan 2004, Hans Ronne wrote:

> default. It's even easier to make a copy of the mplayer, though, and hack
> it. That's how we made the iplayer.

True. I will concede that it is easier, but:
(1) If someone else makes [positive] changes to the template file 
(say mplayer.c), then the AI hacker does not get the benefits of 
those in, say, megaplayer.c.
(2) As a corollary, if a bug is discovered in mplayer.c, then the 
fix must be done not only in mplayer.c, but also iplayer.c and 
megaplayer.c, provided that they still have the affected code in 
common with mplayer.c.
(3) The duplicated code bloats the size of the Xconq sources with 
little additional benefit. One could say that Xconq's entropy is 
unduly increased. :-)

> >Yeah, but now we essentially have duplicated code with
> >go_after_victim and ai_go_after_victim. I personally would just
> >rename things like go_after_victim to ai_go_after_victim, leave
> >them in plan.c, and go from there.
> 
> Agreed. Except I think ai_go_after_victim should stay where it is in ai.c.

That is fine with me, as long as we are then saying that ai.c is 
in some sense "coequal" to plan.c and task.c, and not a layer 
above them. The "layer above" mentality is what was driving me to 
argue for the taxonomy that I did....

Eric

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
  2004-01-03  6:27 ` Eric McDonald
@ 2004-01-03 14:40   ` Hans Ronne
  0 siblings, 0 replies; 12+ messages in thread
From: Hans Ronne @ 2004-01-03 14:40 UTC (permalink / raw)
  To: Eric McDonald; +Cc: xconq7

>(2) As a corollary, if a bug is discovered in mplayer.c, then the
>fix must be done not only in mplayer.c, but also iplayer.c and
>megaplayer.c, provided that they still have the affected code in
>common with mplayer.c.

Absolutely. We have already been down that path. We used to have another
copy of the mplayer, the oplayer (old mplayer). The idea was to update it
less frequently than the mplayer, but let it have all bug fixes. I finally
got fed up with checking in each bug fix twice (or even thrice counting the
iplayer) and threw out the oplayer.

>> Agreed. Except I think ai_go_after_victim should stay where it is in ai.c.
>
>That is fine with me, as long as we are then saying that ai.c is
>in some sense "coequal" to plan.c and task.c, and not a layer
>above them. The "layer above" mentality is what was driving me to
>argue for the taxonomy that I did....

What I'm after is not organizing things in layers as much as physical
separation of ai code from code used by all players. Ideally, I would like
to have a bunch of files that we could put in a separate ai folder because
they contain *only* ai code. This would include the mplayer and ai.c, but
not task.c. plan.c is in between. Right now, only ai players use plans, but
I have considered a semiautomatic mode where a human player would set a
unit's plan and then let the low level ai code run it. This has been
discussed several times on the list, so I won't repeat myself.

Hans


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
  2004-01-04 12:57       ` Peter Garrone
@ 2004-01-05  0:27         ` Eric McDonald
  0 siblings, 0 replies; 12+ messages in thread
From: Eric McDonald @ 2004-01-05  0:27 UTC (permalink / raw)
  To: Peter Garrone; +Cc: xconq7

On Sun, 4 Jan 2004, Peter Garrone wrote:

> I suppose it's mutual support, really. The ai takes little account of one unit
> supporting another unit.

Practically no account. That is part of why I was considering the 
tactical coordinator objects that I mentioned yesterday.

> Also the combat model does not really support
> this either. I mean that generally for an attack the user selects a
> single attacking unit, when usually in these sorts of games the idea is
> to coordinate your side spatially so that simultaneous attacks with
> multiple units have advantage over individual uncoordinated attacks.

I did notice in doc/PROJECTS, when reading it a while back, 
that someone (Stan?) had proposed creating a battle container 
object. It had a bunch of thoughts on commitment levels, but I 
think it also mentioned the ability to bind multiple units from a 
single side into a battle.

> Generally, for an adjacent enemy, no movement is necessary. So if an AA
> unit were not adjacent to an aircraft, usually it should not be assigned
> to move to the aircraft and attack it. But if it were adjacent, it
> should be attacked. That is the concept I was struggling for.

Fair enough. Can't really argue with that.

Eric

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
  2004-01-04 12:57       ` Peter Garrone
@ 2004-01-04 14:46         ` Hans Ronne
  0 siblings, 0 replies; 12+ messages in thread
From: Hans Ronne @ 2004-01-04 14:46 UTC (permalink / raw)
  To: Peter Garrone; +Cc: xconq7

>On Sat, Jan 03, 2004 at 06:09:06AM +0100, Hans Ronne wrote:
>> >I don't see much benefit from saying cities are more important than
>> >mobile units, by a factor of 10. If an opponent were composed entirely
>> >of cities, it would be an easy job to defeat them, so it cannot be said
>> >that cities are more important than mobile units.
>>
>> I disagree. In most games cities (or more specifically any units that can
>> build other units) are easily 10 times more important than any mobile
>> units. Provided, of course that new units are built at a reasonable rate.
>
>Depends on the time frame. If a battle is going to be over before even a
>single unit is built, then a city isn't very important.

That's what I meant by a reasonable rate. In the roman game, where building
is slow, cities are less important for that reason (though still important
for other reasons). In the standard or advances games, the side that has
the most cities will usually win.

Hans


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
  2004-01-03  5:11     ` Eric McDonald
@ 2004-01-04 12:57       ` Peter Garrone
  2004-01-05  0:27         ` Eric McDonald
  0 siblings, 1 reply; 12+ messages in thread
From: Peter Garrone @ 2004-01-04 12:57 UTC (permalink / raw)
  To: Eric McDonald; +Cc: xconq7

On Sat, Jan 03, 2004 at 12:11:05AM -0500, Eric McDonald wrote:
> Hi Peter, others,
> 
> On Sat, 3 Jan 2004, Peter Garrone wrote:
> 
> >and the different ai-controlled sides
> > tend to lose spatial organization.
> 
> Hmmm.... Not sure that I follow you here.
> If anything, the increased number of rejection criteria have the 
> benefit of spatially concentrating attackers on the fewer number 
> of victims considered worthwhile.

Sorry. Think of World War One, with one lot on one side and the
other lot on the other. That's spatially organised. 

I suppose it's mutual support, really. The ai takes little account of one unit
supporting another unit. Also the combat model does not really support
this either. I mean that generally for an attack the user selects a
single attacking unit, when usually in these sorts of games the idea is
to coordinate your side spatially so that simultaneous attacks with
multiple units have advantage over individual uncoordinated attacks.

> 
> > Adjacent enemy units should always be attacked. 
> 
> Why?

I disagree with myself. Indeed, why? I was having a problem in the
roman game where legions were disposed to attack the enemy on the
other side of the adriatic rather than immediately adjacent units,
hence the hyperbole. This problem appeared to be present with the
pre-pathfinding code as well.

Generally, for an adjacent enemy, no movement is necessary. So if an AA
unit were not adjacent to an aircraft, usually it should not be assigned
to move to the aircraft and attack it. But if it were adjacent, it
should be attacked. That is the concept I was struggling for.

Peter

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
  2004-01-03  5:10     ` Hans Ronne
  2004-01-03  5:39       ` Eric McDonald
@ 2004-01-04 12:57       ` Peter Garrone
  2004-01-04 14:46         ` Hans Ronne
  1 sibling, 1 reply; 12+ messages in thread
From: Peter Garrone @ 2004-01-04 12:57 UTC (permalink / raw)
  To: Hans Ronne; +Cc: xconq7

On Sat, Jan 03, 2004 at 06:09:06AM +0100, Hans Ronne wrote:
> >I dont see much benefit from saying cities are more important than
> >mobile units, by a factor of 10. If an opponent were composed entirely
> >of cities, it would be an easy job to defeat them, so it cannot be said
> >that cities are more important than mobile units.
> 
> I disagree. In most games cities (or more specifically any units that can
> build other units) are easily 10 times more important than any mobile
> units. Provided, of course that new units are built at a reasonable rate.

Depends on the time frame. If a battle is going to be over before even a
single unit is built, then a city isnt very important. It may be
important as a supply point though, especially with aerial combat.

Peter

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
  2004-01-03  5:10     ` Hans Ronne
@ 2004-01-03  5:39       ` Eric McDonald
  2004-01-04 12:57       ` Peter Garrone
  1 sibling, 0 replies; 12+ messages in thread
From: Eric McDonald @ 2004-01-03  5:39 UTC (permalink / raw)
  To: Hans Ronne; +Cc: Peter Garrone, xconq7

Hi Hans,

On Sat, 3 Jan 2004, Hans Ronne wrote:

> That being said, the low level ai code could be more sophisticated. 

Yes. Much more....

>There
> are a lot of hacks right now, not only in victim_here, but also at other
> points in the low level ai code, where factors 2, 5 or 10 are used rather
> arbitrarily. I have replaced some of this stuff with either pre-computed
> worth functions or doctrines, but much remains to be done.

Yes. I even introduced a few of the above-mentioned factors simply 
because the infrastructure did not exist for choosing optimal 
values in a given situation. Of course, I introduced them with the 
mental note that they would be replaced with more sophisticated 
evaluators or perhaps even outputs from a neural net or some such 
in the future.

> The advantages of broadcasting only actions in network games is something
> that most of us agree on. However, even if that is implemented, I would
> like to move more ai code from the plan and task level to the ai proper.
> There are several good reasons for that:
> 
> First, it would make it easier to write and test different AIs, using the
> existing plug-in structure (e.g. mplayer vs. iplayer). Right now, there is
> so much ai code in plan.c and task.c that a new ai would not make much of a
> difference.

Yes, but aside from all the little utility calculations and 
evaluators, some of that AI code is directly associated with plans 
or with tasks. Plan-related generic AI should either be in plan.c 
or else a new file aiplan.c. Similarly for task-related generic 
AI. AI-player code should not be there, though. If someone 
modifies mplayer and wants it to use something different from 
plan_offensive (or, more properly, ai_plan_offensive), then 
mplayer_plan_offensive should go in mplayer.c.

So how do you reconcile which function to use?
On initialization, you create registries of function pointers for 
different families of functionality, and initially the generic AI 
would register all of the ai_* functions in their appropriate 
slots. Then as AI-players did their initializations, they could 
opt to use the generic AI functions, or register their own 
substitutes in the appropriate slots.

Then, things like plan_offensive simply become wrappers for 
invoking a function pointer based on a lookup in, say, a plan 
registry, based on the calling AI.
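In outline, the registry idea could look like the following C sketch. Everything here is hypothetical: names such as ai_registry, ai_plan_offensive, and mplayer_plan_offensive are invented for illustration and are not the actual Xconq functions.

```c
#include <assert.h>
#include <stddef.h>

struct unit;  /* opaque here; the real unit struct lives elsewhere */

/* One function-pointer slot per overridable family of functionality. */
typedef void (*plan_fn)(struct unit *);

struct ai_registry {
    plan_fn plan_offensive;
    plan_fn plan_defensive;
};

static int generic_calls, mplayer_calls;

/* Generic AI fallbacks, registered at initialization. */
static void ai_plan_offensive(struct unit *u) { (void)u; ++generic_calls; }
static void ai_plan_defensive(struct unit *u) { (void)u; ++generic_calls; }

/* An AI player's substitute for one slot. */
static void mplayer_plan_offensive(struct unit *u) { (void)u; ++mplayer_calls; }

/* Initially every slot points at the generic AI. */
static void init_registry(struct ai_registry *r)
{
    r->plan_offensive = ai_plan_offensive;
    r->plan_defensive = ai_plan_defensive;
}

/* plan_offensive becomes a thin wrapper that dispatches through the
   registry of the calling AI. */
static void plan_offensive(struct ai_registry *r, struct unit *u)
{
    r->plan_offensive(u);
}
```

An AI player that wants its own behavior would then simply overwrite one slot (e.g. `r->plan_offensive = mplayer_plan_offensive;`) during its initialization, and all callers going through the wrapper pick it up automatically.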

> Second, a better separation of AI and task level code is desirable since
> the latter is used also by the human interface. The task code should
> therefore be only about how to implement specific orders, whether given by
> the ai or a human. Accordingly, a task should just be a logical chain of
> actions without any tactical or strategic considerations.

I think some tactical considerations are okay, provided they do 
not make any major assumptions. But, I agree that strategic 
considerations do not belong at the task level in the long run.

> Third, it is in my experience very important to consolidate all ai code
> that does one thing in the same place.

I agree, and that is my argument for leaving generic AI code in 
plan.c and task.c.

> have had ample examples of such problems in xconq. For this reason, I am a
> little wary about expanding the ai code in victim_here

I personally believe that the generic victim_here 
(ai_victim_here) and other tactical decision-making code should 
be as sound as possible, even if that means expanding it, or 
refactoring the problem. I would like to see someone be able to 
throw together an AI by initially relying on the same generic AI 
code that could be used from an UI, and then providing 
replacements as development on the new AI progresses. The generic 
AI should be strong enough for this purpose and to provide a user 
with reasonable behavior from a UI.

> This can be done already now, at the ai level

Yeah, but now we essentially have duplicated code with 
go_after_victim and ai_go_after_victim. I personally would just 
rename things like go_after_victim to ai_go_after_victim, leave 
them in plan.c, and go from there.

  Just my US$0.02,
    Eric

^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
  2004-01-03  4:21   ` Peter Garrone
  2004-01-03  5:10     ` Hans Ronne
@ 2004-01-03  5:11     ` Eric McDonald
  2004-01-04 12:57       ` Peter Garrone
  1 sibling, 1 reply; 12+ messages in thread
From: Eric McDonald @ 2004-01-03  5:11 UTC (permalink / raw)
  To: Peter Garrone; +Cc: xconq7

Hi Peter, others,

On Sat, 3 Jan 2004, Peter Garrone wrote:

> I think that bit of code does some silly things sometimes, like telling
> sea-bound troop transports to attack aircraft in the middle of a
> continent.

It probably does. There is really nothing preventing it from doing 
so at this point. I will look into addressing this issue after the 
7.5 release. Basically, my changes were intended to address a 
behavioral issue that I found quite annoying in the AI. 
There are more, and I have much broader, bolder plans for AI 
revision once I feel at liberty to do them (after 7.5). You 
certainly mentioned a relevant area to look into.

> However I have found this discussion beneficial in that I will endeavour
> to not change this code while trying to get the ai to fight properly
> with plan_transport and refueling while pathfinding. Some observations

Well, aside from the little discussion I had with Jim, I don't 
intend to make any more functionality changes to it until after 
the next release. If bug fixes need to be made, I will certainly 
make them. Also, I still need to farm some of the logic out into 
separate generic evaluator functions, since victim_here is 
currently looking a bit bloated (and there is a lot more that 
could be done with it).

> I dont see much benefit from saying cities are more important than
> mobile units, by a factor of 10.

By cities, are we talking about all builders or just immobile 
builders?

As far as factors go, I left the actual victim scoring code alone; 
it is the same as before. I have plenty of ideas for that as well; 
victim_here may not even be the place to do the scoring, just the 
weeding....

> Selecting different units based on too many criteria leads to
> disorganised melee-like warfare, 

I was more into _rejecting_ units based on many criteria. :-)
Selecting the best ones is another matter entirely, and I mostly 
agree that the scoring should be fairly straightforward without a 
lot of twists and turns.

>and the different ai-controlled sides
> tend to lose spatial organization.

Hmmm.... Not sure that I follow you here.
If anything, the increased number of rejection criteria have the 
benefit of spatially concentrating attackers on the fewer number 
of victims considered worthwhile.

> Adjacent enemy units should always be attacked. 

Why?

>But if an enemy unit is
> not adjacent, then only units having equal or greater mobility should be sent
> after them, perhaps. 

Also depends on whether a lesser mobility unit has enough ACP to 
reach and attack the defender that turn before the defender can 
move (assuming sequential play).
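The check described above could be as simple as comparing remaining ACP against the cost of closing the distance plus the attack itself. This is only a sketch under assumed flat per-cell move costs; real Xconq ACP and move-cost accounting is more involved.

```c
#include <assert.h>

/* Can a unit with acp_left close dist_cells and still attack this
   turn, before the defender gets to move (sequential play assumed)?
   The cost model here is a deliberate simplification. */
static int can_strike_this_turn(int acp_left, int dist_cells,
                                int move_cost_per_cell, int attack_cost)
{
    return acp_left >= dist_cells * move_cost_per_cell + attack_cost;
}
```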

>I think mobility is the key criterion here, and
> ability to operate on the same terrain as the opponent unit.

I agree on the latter point. I have already done some speculative 
thinking on both, but I don't want to swing the wrecking 
ball before 7.5, and possibly cause breakage that would further 
delay the release.

> Looking at the victim_here code, I think the initial idea was to give
> all possible targets a rating

Correct.

> I disagree with the implementation of code to return immediately
> when certain targets are detected,

As do I. This, however, was in the code that existed prior to my 
changes, and I opted to leave it rather than have additional 
changes in go_after_victim, and so on in ever widening circles. 
There is a big difference between my vision for this code, and 
what I did. The reason is simple: we are trying to get a release 
out the door, and changing gobs of code does not particularly 
help that effort in most cases.

> because it tends to detract from the
> code organisation. It should simply assign a rating to the target.

Agreed.

> Perhaps something could be done at the doctrine level, if not the

A damage ratio threshold might be a nice addition to the unit 
doctrines. I already do a damage ratio calculation in the new 
victim_here code, but perhaps it is not aggressively tuned enough 
or there are other behavioral factors elsewhere?

> planning level, so that if the odds are too great,
> the unit will run away, to fight another day.

The damage ratio calculation doesn't compel a unit to run away, 
but is supposed to prevent it from going after something that can 
kick the crap out of it. But knowing when to retreat is something 
I want to deal with as well (not in victim_here or 
go_after_victim, of course).
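A doctrine-level damage-ratio threshold of the kind discussed here might look something like this; the struct and field names are invented for illustration and do not come from the actual doctrine code.

```c
#include <assert.h>

/* A unit doctrine with a minimum acceptable damage ratio in percent:
   200 would mean "only attack if I expect to deal at least twice the
   damage I take".  Hypothetical names throughout. */
struct doctrine {
    int min_damage_ratio_pct;
};

/* Integer-only comparison: dealt/taken >= pct/100, cross-multiplied
   to avoid floating point. */
static int worth_attacking(const struct doctrine *d,
                           int expected_dealt, int expected_taken)
{
    if (expected_taken == 0)
        return 1;  /* a free shot is always acceptable */
    return expected_dealt * 100 >= d->min_damage_ratio_pct * expected_taken;
}
```

A planner could consult such a check before committing a unit, which addresses the suicide-mission complaint without touching the victim-scoring code itself.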

> With a bit of care, some self-organising mass-attack behavior might occur.

I have been considering having each AI side maintain a list of 
tactical coordination objects that could be searched by unit 
planners. The tactical coordinators would have things like the 
unit (or list of units) requesting the coordination, and those 
willing to provide it. Tactical coordinators could come in 
different flavors, such as TC_FORCE_PROTECTION, TC_GANG_ATTACK, 
etc... Vulnerable units could put in force protection requests. 
Units that could take down a more powerful unit, given sufficient 
numbers, could put in a gang attack request. So on and so forth.... 
Just an idea I have been toying with; need to think it through 
more.
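A minimal data-structure sketch of that idea follows; every name in it (tcoord, TC_GANG_ATTACK, and so on) is hypothetical, since none of this exists in the sources yet.

```c
#include <assert.h>
#include <stddef.h>

/* Flavors of tactical coordination a unit might request. */
enum tc_flavor { TC_FORCE_PROTECTION, TC_GANG_ATTACK };

#define MAX_PROVIDERS 8

/* One coordination request; an AI side would keep a linked list of
   these for its unit planners to search. */
struct tcoord {
    enum tc_flavor flavor;
    int requester_id;                 /* unit asking for help */
    int provider_ids[MAX_PROVIDERS];  /* units that have signed on */
    int num_providers;
    struct tcoord *next;
};

/* A unit volunteering to provide the requested coordination. */
static int tc_volunteer(struct tcoord *tc, int unit_id)
{
    if (tc->num_providers >= MAX_PROVIDERS)
        return 0;
    tc->provider_ids[tc->num_providers++] = unit_id;
    return 1;
}

/* Find the first open request of a given flavor in a side's list. */
static struct tcoord *tc_find(struct tcoord *head, enum tc_flavor flavor)
{
    for (; head != NULL; head = head->next)
        if (head->flavor == flavor && head->num_providers < MAX_PROVIDERS)
            return head;
    return NULL;
}
```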

> I have discussed not sharing plan and task level stuff, only action
> level stuff, with networked games. When this is done,
> differing planning and task code will be allowed for different network
> clients playing the same game, and different ai strategies can be tested
> directly by combat, which should be fun.

I agree. I share your stated goal of thoroughly separating the 
action (referee) level things from the task and plan (AI) level 
things. I have also previously mentioned that I would like to 
invoke an AI as a separate client process, which would be the true 
test of such a separation.

> I am not advocating doing this now, though if someone were to implement
> these or other ideas, I would not object.

I have expressed interest in doing this post-7.5, and am willing 
to work with you on it.

How are the changes to the pathfinder coming along?

Eric


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
  2004-01-03  4:21   ` Peter Garrone
@ 2004-01-03  5:10     ` Hans Ronne
  2004-01-03  5:39       ` Eric McDonald
  2004-01-04 12:57       ` Peter Garrone
  2004-01-03  5:11     ` Eric McDonald
  1 sibling, 2 replies; 12+ messages in thread
From: Hans Ronne @ 2004-01-03  5:10 UTC (permalink / raw)
  To: Peter Garrone; +Cc: xconq7

>However I have found this discussion beneficial in that I will endeavour
>to not change this code while trying to get the ai to fight properly
>with plan_transport and refueling while pathfinding. Some observations
>though.

So how is it going with the path-finding? Any hope of fixing the remaining
bugs?

>I don't see much benefit from saying cities are more important than
>mobile units, by a factor of 10. If an opponent were composed entirely
>of cities, it would be an easy job to defeat them, so it cannot be said
>that cities are more important than mobile units.

I disagree. In most games cities (or more specifically any units that can
build other units) are easily 10 times more important than any mobile
units. Provided, of course that new units are built at a reasonable rate.

That being said, the low level ai code could be more sophisticated. There
are a lot of hacks right now, not only in victim_here, but also at other
points in the low level ai code, where factors 2, 5 or 10 are used rather
arbitrarily. I have replaced some of this stuff with either pre-computed
worth functions or doctrines, but much remains to be done.

>Selecting different units based on too many criteria leads to
>disorganised melee-like warfare, and the different ai-controlled sides
>tend to lose spatial organization. If the attack criterion were to hit the
>nearest unit irrespective, then that does lead to more organised sides,
>I think. Unfortunately this doesn't work for things like aircraft
>intrusions.
>
>Adjacent enemy units should always be attacked. But if an enemy unit is
>not adjacent, then only units having equal or greater mobility should be sent
>after them, perhaps. I think mobility is the key criterion here, and
>ability to operate on the same terrain as the opponent unit.

I agree with that. This is why I wrote the action-reaction code, which
works exactly like that: by attacking adjacent enemy units as quickly as
possible.  In fact, some of the recent discussion about victim_here is kind
of moot since short range fighting is run by the action-reaction code
instead.

>I have discussed not sharing plan and task level stuff, only action
>level stuff, with networked games. When this is done,
>differing planning and task code will be allowed for different network
>clients playing the same game, and different ai strategies can be tested
>directly by combat, which should be fun.

The advantages of broadcasting only actions in network games is something
that most of us agree on. However, even if that is implemented, I would
like to move more ai code from the plan and task level to the ai proper.
There are several good reasons for that:

First, it would make it easier to write and test different AIs, using the
existing plug-in structure (e.g. mplayer vs. iplayer). Right now, there is
so much ai code in plan.c and task.c that a new ai would not make much of a
difference.

Second, a better separation of AI and task level code is desirable since
the latter is used also by the human interface. The task code should
therefore be only about how to implement specific orders, whether given by
the ai or a human. Accordingly, a task should just be a logical chain of
actions without any tactical or strategic considerations.

Third, it is in my experience very important to consolidate all ai code
that does one thing in the same place. Otherwise, you will end up with race
situations where different parts of the ai code compete with each other. We
have had ample examples of such problems in xconq. For this reason, I am a
little wary about expanding the ai code in victim_here and other low level
ai functions too much. I would rather try to improve the tactical code in
ai.c and the mplayer.

Hans

This can be done already now, at the ai level


^ permalink raw reply	[flat|nested] 12+ messages in thread

* Re: AI now goes after bases
       [not found] ` <Pine.LNX.4.44.0312311340460.31528-100000@leon.phy.cmich.edu>
@ 2004-01-03  4:21   ` Peter Garrone
  2004-01-03  5:10     ` Hans Ronne
  2004-01-03  5:11     ` Eric McDonald
  0 siblings, 2 replies; 12+ messages in thread
From: Peter Garrone @ 2004-01-03  4:21 UTC (permalink / raw)
  To: Eric McDonald; +Cc: xconq7

On Wed, Dec 31, 2003 at 02:10:25PM -0500, Eric McDonald wrote:
> Hi Jim, others,
> 
> On Wed, 31 Dec 2003, Jim Kingdon wrote:
> 
> > playing the standard game, and I noticed that the AI will now flatten
> > my bases (using fighters and bombers), and it generally didn't used to
> > do that.
> 
> Hmmm... The new victim_here code is also supposed to gauge whether 
> a capturable transport (mobile or immobile) can withstand getting 
> roughed up a little to see whether a unit can give it a "shake 
> down" to discover/kill any occupants the transport might have. Of 
> course, I had Bellum Towns in mind when I wrote that part of the 
> code, and those Towns regenerate some lost HP every turn. Also, 
> Bellum Towns are builder units, whereas I don't think Standard 
> Bases are. If I remember the code correctly, I let it go ahead and 
> mark non-builder capturable transports as potential victims 
> indefinitely (pretend Transports were capturable in the Standard 
> game; you would probably want to pound those until they are below 
> sea level, hopefully with occs included). But perhaps the victim 
> finder should make an assumption that capturable, immobile 
> transports are "facility" or "base" units and spare them an 
> untimely death before being assimilated by the AI's side?
> 
> > been using to beat the AI.  If the AI played to capture the bases
> > instead of flatten them, that would probably be even better.
> 
> Agreed. Unless someone sees a problem with assuming capturable, 
> immobile transports are worth saving, I will modify the AI's 
> victim finder to account for this. (And violate my self-imposed 
> Xconq feature freeze yet again.)
> 
> Eric

I think that bit of code does some silly things sometimes, like telling
sea-bound troop transports to attack aircraft in the middle of a
continent.

However I have found this discussion beneficial in that I will endeavour
to not change this code while trying to get the ai to fight properly
with plan_transport and refueling while pathfinding. Some observations
though.

I don't see much benefit from saying cities are more important than
mobile units, by a factor of 10. If an opponent were composed entirely
of cities, it would be an easy job to defeat them, so it cannot be said
that cities are more important than mobile units.

Selecting different units based on too many criteria leads to
disorganised melee-like warfare, and the different ai-controlled sides
tend to lose spatial organization. If the attack criterion were to hit the
nearest unit irrespective, then that does lead to more organised sides,
I think. Unfortunately this doesn't work for things like aircraft
intrusions.

Adjacent enemy units should always be attacked. But if an enemy unit is
not adjacent, then only units having equal or greater mobility should be sent
after them, perhaps. I think mobility is the key criterion here, and
ability to operate on the same terrain as the opponent unit.

Looking at the victim_here code, I think the initial idea was to give
all possible targets a rating, and to hit the target with the highest
rating. I disagree with the implementation of code to return immediately
when certain targets are detected, because it tends to detract from the
code organisation. It should simply assign a rating to the target.
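The rate-everything-and-pick-the-best shape described above can be sketched as follows. The struct and the scoring formula are placeholders, not the real victim_here logic; the point is the absence of early returns.

```c
#include <assert.h>

/* Placeholder candidate record; the real code works on actual units. */
struct victim {
    int worth;     /* intrinsic value of the target */
    int distance;  /* cells away */
};

/* Illustrative rating: worth discounted by distance.  The formula is
   arbitrary; what matters is the shape of the search. */
static int rate_victim(const struct victim *v)
{
    return v->worth - 2 * v->distance;
}

/* Rate every candidate and return the index of the best one, with no
   special-cased early returns for particular target types. */
static int best_victim(const struct victim *vs, int n)
{
    int best = -1, best_rating = 0, i, r;

    for (i = 0; i < n; ++i) {
        r = rate_victim(&vs[i]);
        if (best < 0 || r > best_rating) {
            best = i;
            best_rating = r;
        }
    }
    return best;  /* -1 if there were no candidates */
}
```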

The AI also tends to send units on suicide missions.
Perhaps something could be done at the doctrine level, if not the
planning level, so that if the odds are too great,
the unit will run away, to fight another day.
With a bit of care, some self-organising mass-attack behavior might occur.

I have discussed not sharing plan and task level stuff, only action
level stuff, with networked games. When this is done,
differing planning and task code will be allowed for different network
clients playing the same game, and different ai strategies can be tested
directly by combat, which should be fun.

I am not advocating doing this now, though if someone were to implement
these or other ideas, I would not object.

Peter

^ permalink raw reply	[flat|nested] 12+ messages in thread

* AI now goes after bases
@ 2003-12-31 23:18 Jim Kingdon
       [not found] ` <Pine.LNX.4.44.0312311340460.31528-100000@leon.phy.cmich.edu>
  0 siblings, 1 reply; 12+ messages in thread
From: Jim Kingdon @ 2003-12-31 23:18 UTC (permalink / raw)
  To: xconq7

This is based on anecdotal observation of a single game, but I was
playing the standard game, and I noticed that the AI will now flatten
my bases (using fighters and bombers), and it generally didn't used to
do that.

This is an improvement.  Building bases is one of the "loopholes" I've
been using to beat the AI.  If the AI played to capture the bases
instead of flatten them, that would probably be even better.

The cause is, I suppose, the victim_here changes.

^ permalink raw reply	[flat|nested] 12+ messages in thread

end of thread, other threads:[~2004-01-05  0:27 UTC | newest]

Thread overview: 12+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2004-01-03  6:06 AI now goes after bases Hans Ronne
2004-01-03  6:27 ` Eric McDonald
2004-01-03 14:40   ` Hans Ronne
  -- strict thread matches above, loose matches on Subject: below --
2003-12-31 23:18 Jim Kingdon
     [not found] ` <Pine.LNX.4.44.0312311340460.31528-100000@leon.phy.cmich.edu>
2004-01-03  4:21   ` Peter Garrone
2004-01-03  5:10     ` Hans Ronne
2004-01-03  5:39       ` Eric McDonald
2004-01-04 12:57       ` Peter Garrone
2004-01-04 14:46         ` Hans Ronne
2004-01-03  5:11     ` Eric McDonald
2004-01-04 12:57       ` Peter Garrone
2004-01-05  0:27         ` Eric McDonald

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).