Reputation: 11
I am trying to create a model that simulates agents moving from an origin point to an end point across a calorie surface imported from GIS. The calorie surface shows the relative difficulty of movement through any patch on the landscape, expressed as a number between 1 (easiest) and 10 (most difficult) and stored in a patch variable called "difficulty".
I want to have agents prefer the easier direction while moving generally towards the end point. I have been able to import the GIS landscape and assign the "difficulty" variable. I also know how to have the agents move once a target has been identified. My issue is having the agent select the appropriate patch.
What I think I need to do is assign temporary variables to the 8 patches around an agent, where each patch's value would be the difficulty value of the raster minus X, with X being a number that is higher the closer the patch is to the end point. So if an agent is standing in an area where every patch is equally difficult, the patch closest to the end point would get an X of 8, the next closest would get 7, and so on down to 1 for the farthest patch. The agent would then pick the patch with the lowest resulting value and move to it.
I suspect I need to do something like having the agent ask the patches in-radius 1 to check their distance to the end point, have the patches compare those distances to rank themselves from 1 to 8, and store those ranks in a temporary variable. Then subtract the temporary distance rank from each patch's difficulty value to get a score the agent can use to decide where to move.
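Something like this is roughly what I have in mind (untested; I am guessing neighbors is better than in-radius 1, since in-radius 1 would miss the diagonal patches, and target stands in for however I end up storing the end point):

    ;; rough sketch of what I mean (untested); "target" stands in for my end
    ;; point, and "score" is the temporary variable. Assumes the agent is not
    ;; on the world edge, where neighbors has fewer than 8 patches.
    patches-own [ difficulty score ]

    to choose-next-patch                               ;; turtle procedure
      let ranked sort-on [distance target] neighbors   ;; closest first
      (foreach ranked (range 8 0 -1) [ [p r] ->        ;; ranks 8 (closest) .. 1
        ask p [ set score difficulty - r ]
      ])
      move-to min-one-of neighbors [score]             ;; lowest score wins
    end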
Upvotes: 1
Views: 54
Reputation: 1736
Whether it's clear to you or not, your agent is trying to solve a tradeoff problem. It needs to behave in a way that pursues two objectives at once: reducing movement difficulty and getting to the destination. Tradeoffs are fundamental to real-world decisions, but they are not trivial to model and are rarely included in ABMs.
The classical way to model your problem is as a mathematical optimization: define a specific objective function, such as minimizing the total difficulty of moving from the starting point to the target, then use a technique like dynamic programming to find the optimal path.
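For illustration, a rough value-iteration sketch of that classical approach in NetLogo might look like this (untested; target is assumed to be the destination patch, and difficulty is treated as the cost of entering a patch):

    patches-own [ cost-to-go ]

    to compute-cost-field
      ask patches [ set cost-to-go 99999999 ]          ;; effectively infinity
      ask target [ set cost-to-go 0 ]
      let changed? true
      while [changed?] [                               ;; relax until no change
        set changed? false
        ask patches [
          let best (min [cost-to-go] of neighbors) + difficulty
          if best < cost-to-go [
            set cost-to-go best
            set changed? true
          ]
        ]
      ]
    end

    to follow-field                                    ;; turtle procedure:
      move-to min-one-of neighbors [cost-to-go]        ;; descend the field
    end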
The problem with the classical approach is that it only works if you assume your agent knows the difficulty of all the patches, and that there aren't any other agents out there changing the difficulty for them, or other reasons why patch difficulty changes. But it sounds like you do not assume your agents "know" their whole world and can do dynamic programming. That's why we use ABMs: because we know such assumptions are unrealistic.
My first suggestion is to write out your model in words using the ODD protocol (http://jasss.soc.surrey.ac.uk/23/2/7.html), paying special attention to its Design Concepts. That will force you to be explicit about exactly what your agents can sense, what their objectives are, and so on. You can't write a good model of the agent's behavior until you decide exactly what problem it is trying to solve and what its constraints are.
You could cook up an ad-hoc behavior approach, such as weighting the difficulty of adjacent patches by how much they move the agent closer to or farther from the target (a sketch of one such rule follows below). You could have fun tweaking the weighting parameters to make the approach work better. And if the calorie surface is random, you might not be able to do any better than that.
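One way such a weighting rule might look in NetLogo (untested sketch; w is a tuning parameter you would pick, and target is the destination):

    to step-toward-target [w]                          ;; turtle procedure
      let here-dist distance target
      move-to min-one-of neighbors [
        ;; difficulty, adjusted by how much stepping here shrinks (negative)
        ;; or grows (positive) the remaining distance to the target
        difficulty + w * (distance target - here-dist)
      ]
    end

With w = 0 the agent ignores the target entirely; the larger w gets, the more it beelines regardless of difficulty, so the interesting behavior is somewhere in between.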
But if your calorie surface is not random and has trends or channels in it, then a smarter approach could be more efficient (and perhaps more realistic, depending on how smart you want your agent to be). We wrote a book on modeling tradeoff decisions in ABMs, using the same conceptual idea as optimization but with realistic limitations on what agents know, and assuming they use predictions and approximations to make good decisions when optimization is impossible. It is here: https://press.princeton.edu/books/paperback/9780691195285/modeling-populations-of-adaptive-individuals.
Upvotes: 3