Reputation: 339
I'm building a model based on a very large GIS map. I'm developing it on a smaller portion of the map because the full shapefile won't even load in the NetLogo installation on my computer (I have already increased the JVM heap size in the settings).
I have access to a large computing platform, a cluster of processors (as I understand it, one model runs on one processor only). To be able to use the cluster, I must estimate how much RAM and time the model will need.
How can I monitor RAM usage in NetLogo, the same way we monitor time?
Do you have any advice or rules of thumb for extrapolating RAM and runtime from a small simulated dataset to a larger one? I believe I should do it procedure by procedure and, depending on the commands, multiply by the number of extra patches.
For background, I'm not a computing engineer; I'm a researcher in agronomy/hydrology who does modelling. Thanks for your time!
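To illustrate the extrapolation idea, here is a minimal sketch (in Python, outside NetLogo) of one common approach: time and measure the model at several small patch counts, fit a power law on a log-log scale, and extrapolate to the full map size. All numbers below are made-up placeholders, and the power-law assumption is only a rough heuristic; the true scaling depends on your procedures.

```python
import numpy as np

# Hypothetical measurements from small test runs: patch counts versus
# observed runtime (seconds) and peak memory (MB). Replace with real data.
patches = np.array([10_000, 40_000, 90_000, 160_000])
runtime = np.array([12.0, 50.0, 115.0, 210.0])     # roughly linear in patches
memory  = np.array([300.0, 520.0, 900.0, 1450.0])  # includes a fixed JVM baseline

def extrapolate(x, y, x_target):
    """Fit y ~ c * x^k on a log-log scale and predict y at x_target."""
    k, log_c = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(log_c) * x_target ** k, k

t_pred, t_exp = extrapolate(patches, runtime, 2_000_000)
m_pred, m_exp = extrapolate(patches, memory, 2_000_000)
print(f"runtime scaling exponent ~ {t_exp:.2f}, predicted ~ {t_pred:.0f} s")
print(f"memory  scaling exponent ~ {m_exp:.2f}, predicted ~ {m_pred:.0f} MB")
```

A caveat with memory: the JVM has a fixed baseline footprint, so memory often looks sub-linear at small sizes; measuring several sizes (rather than one) helps separate the baseline from the per-patch cost.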
Upvotes: 2
Views: 158
Reputation: 21
Maybe this package by Gary Polhill will work for you. By calling mgr:mem and mgr:cpu-time during a BehaviorSpace run, you can measure the distribution of CPU time and memory use across different BehaviorSpace runs.
https://github.com/garypolhill/netlogo-jvmgr
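A minimal sketch of how those reporters could be recorded, assuming (from the reporter names above) that the extension loads under the `mgr` prefix; check the repository README for the exact extension name and reporter signatures:

```netlogo
extensions [mgr]  ;; prefix assumed from mgr:mem / mgr:cpu-time above

to-report memory-used
  ;; add this as a BehaviorSpace metric to log memory per run
  report mgr:mem
end

to-report cpu-time-used
  ;; add this as a BehaviorSpace metric to log CPU time per run
  report mgr:cpu-time
end
```

In the BehaviorSpace dialog you would then list `memory-used` and `cpu-time-used` as metrics so each run's values end up in the output table.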
Upvotes: 0
Reputation: 339
From the netlogo-users Google group, I got the answer that I should just use the Windows Task Manager and monitor NetLogo's memory usage while the model is running.
Upvotes: 1