Reputation: 324
So, the title describes almost everything needed to answer my question. Just one more thing: please only mention libraries that are installed with Python by default, as the app I'm developing is part of the Ubuntu App Showdown.
Running Python 2.7, Ubuntu 12.04.
Upvotes: 0
Views: 246
Reputation: 1130
You are asking for a number that is nearly impossible to calculate and has very little value.
Any Linux system that has been running for a while will have hardly any 'free' RAM available. Just cat /proc/meminfo: the MemFree entry is usually on the order of just a few megabytes.
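Since the question asks for stock-Python answers only, here is a minimal sketch that reads that figure straight out of /proc/meminfo with nothing but the standard library (this assumes a Linux-style /proc filesystem; the values there are reported in kB):

```python
def meminfo():
    """Parse /proc/meminfo into a dict of {field: kB} (Linux only)."""
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, value = line.split(':', 1)
            # Values look like "123456 kB"; keep just the number.
            info[key] = int(value.split()[0])
    return info

print(meminfo()['MemFree'])  # free RAM in kB, usually surprisingly small
```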
So, where did that memory go?
The kernel caches all disk access, for starters. That's usually visible in the Cached entry. Disk cache will be pruned when you require more memory, so you could add that number to MemFree.
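For a rough "effectively free" figure, a common heuristic is therefore MemFree plus Buffers plus Cached, since the kernel gives those pages back under memory pressure. A minimal sketch along those lines (the helper name is mine; newer kernels expose a MemAvailable field that does this job better, but the kernel shipped with Ubuntu 12.04 predates it):

```python
def estimated_free_kb():
    """Crude estimate of usable RAM: free pages plus reclaimable cache, in kB."""
    info = {}
    with open('/proc/meminfo') as f:
        for line in f:
            key, value = line.split(':', 1)
            info[key] = int(value.split()[0])
    return info['MemFree'] + info.get('Buffers', 0) + info.get('Cached', 0)

print(estimated_free_kb())
```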
But if an application allocates (malloc() in C) 2 gigabytes on a system with exactly 2 gigabytes of RAM, that will usually just be granted: you get a valid pointer back. However, none of the RAM is actually reserved for your application at that point. That only happens once your application starts touching memory pages; each touched page is allocated on demand.
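You can watch that lazy allocation from Python itself. A sketch (Python 2 as in the question, Linux only; VmRSS is read from /proc/self/status) that reserves a large anonymous mapping and shows the resident size only growing as pages are written:

```python
import mmap

def rss_kb():
    """Resident set size of this process, in kB (Linux only)."""
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith('VmRSS:'):
                return int(line.split()[1])

size = 256 * 1024 * 1024           # ask for 256 MiB of anonymous memory
buf = mmap.mmap(-1, size)          # granted immediately, but no RAM used yet
print(rss_kb())                    # resident size: barely changed

for offset in xrange(0, size, mmap.PAGESIZE):
    buf[offset] = 'x'              # touch one byte per page, faulting it in
print(rss_kb())                    # now roughly 256 MB larger
```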
The maximum size you can ask for is available as the CommitLimit entry in the same file.
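CommitLimit can be read with the same parsing approach; together with Committed_AS it tells you how much headroom is left under strict overcommit accounting (note that the kernel's default heuristic overcommit mode does not hard-enforce this limit). A sketch:

```python
with open('/proc/meminfo') as f:
    fields = dict((line.split(':')[0], int(line.split()[1])) for line in f)
# Headroom in kB when strict accounting (vm.overcommit_memory=2) is enabled.
print(fields['CommitLimit'] - fields['Committed_AS'])
```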
But the application code itself might not be in RAM either: the binary file and libraries are mmap()ed, so again only the pages that are touched are loaded into RAM.
If you run a tool like top, you get all kinds of memory info per process, including VIRT, RES and SHR.
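The same per-process numbers that top shows are exposed under /proc/&lt;pid&gt;/status, so you can also get at them from stock Python; a sketch for the current process (VmSize roughly corresponds to VIRT and VmRSS to RES):

```python
def vm_stats():
    """VIRT/RES-style numbers for this process, in kB (Linux only)."""
    stats = {}
    with open('/proc/self/status') as f:
        for line in f:
            if line.startswith(('VmSize:', 'VmRSS:')):
                key, value = line.split(':', 1)
                stats[key] = int(value.split()[0])
    return stats

print(vm_stats())   # e.g. {'VmSize': 31284, 'VmRSS': 5120}
```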
So, what is the value of knowing how much memory is available? You can start an application that could require significantly more RAM than your system has, and yet it runs. You might even be able to run the application two or three times, since code pages are shared anyway...
Note: the above answer cuts quite a few corners; the real mechanisms are significantly more complex. And I haven't even started bringing swap space into the story. But this will do for you, I hope...
Upvotes: 1