Reputation: 2530
Software will use memory, no big surprise, but how do you keep that usage to a minimum relative to how big your program is?
The best example I can think of would be Firefox. Some users have experienced it, others haven't, but it's pretty safe to say that all the previous versions of Firefox used much more memory than the current version. Yet functionality still expands and options are added. I'd expect memory usage to go up as extra options and such get added.
So in other words, there must be methods by which to make sure your program doesn't use up the memory of the computer.
So I'm turning this into a "best-practices" question, asking all of you what your little tricks and tweaks are to make your program do what it does with less memory than you'd normally expect. And also, what to most certainly avoid.
A little side-question here: I came across something in a book about C#. Apparently, when declaring an enum, it's possible to set the size of its underlying type. With a large enum you should probably let the compiler handle it, but for an enum that only holds 2 or 3 items, you can do this:
public enum HTMLTYPE : sbyte
{
    HTML401,
    XHTML10,
    XHTML11
}
For those of you who don't know why: apparently the underlying type of any enum in C# defaults to an integer (4 bytes). In other words, that amount of memory is going to be reserved. But when your enum defines so few values, an integer is a waste of space. The book claimed that this could cut down the amount of memory used by the program. I'm wondering if this is true.
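For what it's worth, the effect on a single value can be checked directly: `sizeof` on an enum type (which requires an unsafe context, so compile with `/unsafe`) reports the size of its underlying type. A minimal sketch, where `HTMLTYPE_DEFAULT` is a made-up name for the same enum with the default underlying type:

```csharp
using System;

// The enum from the question, backed by sbyte (1 byte)...
public enum HTMLTYPE : sbyte { HTML401, XHTML10, XHTML11 }

// ...and the same enum with the default underlying type (int, 4 bytes).
public enum HTMLTYPE_DEFAULT { HTML401, XHTML10, XHTML11 }

class EnumSizeDemo
{
    static unsafe void Main()
    {
        // sizeof(T) on an enum reports the size of the underlying type.
        Console.WriteLine(sizeof(HTMLTYPE));         // 1
        Console.WriteLine(sizeof(HTMLTYPE_DEFAULT)); // 4
    }
}
```

Note that the per-value saving only really shows up in large arrays of the enum; a lone field tends to be padded out by alignment anyway, as one of the answers below points out.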
EDIT: indeed, it should be memory, darn me. Changed all entries.
Upvotes: 4
Views: 11104
Reputation:
There is a good online book about this:
http://www.cix.co.uk/~smallmemory/
Upvotes: 2
Reputation: 198617
When it comes to optimization, it's usually best to focus on algorithm design first. If you still don't get the performance you want, then it's time to look into micro-optimizations.
A disclaimer though: optimizing for memory usage may not always be a good thing. Unfortunately, a lot of the time you have to make decisions about whether to optimize for time (CPU usage) or space (RAM or disk space). While you can sometimes have your cake and eat it too, it's not always that simple.
A little side-question here: I came across something in a book about C#. Apparently, when declaring an enum, it's possible to set the size of its underlying type. With a large enum you should probably let the compiler handle it, but for an enum that only holds 2 or 3 items, you can do this:
...
For those of you who don't know why: apparently the underlying type of any enum in C# defaults to an integer (4 bytes). In other words, that amount of memory is going to be reserved. But when your enum defines so few values, an integer is a waste of space. The book claimed that this could cut down the amount of memory used by the program. I'm wondering if this is true.
I'm not 100% sure about this, but that might not necessarily be true. In fact, that probably isn't true if you're using Mono and deploying this application on other systems. The reason being that different operating systems and processors have different memory alignment requirements. Thus, even though you declare it as an sbyte, it might get coerced into a 32-bit or 64-bit integer by the time it actually goes into memory anyway (OS X is particularly picky about memory alignment).
Now, I could be totally missing the point here and the book could have been totally right in this case. But my point here is more to say "it's a little bit more complicated than that" and to point out that such optimizations may not be portable to other platforms (different OSes, processors, and programming languages).
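The alignment point can be illustrated with struct layout: the same fields in a different order can produce a different padded size, which is why a 1-byte field doesn't always save what you'd expect. A minimal sketch (the struct names are made up; the exact sizes are typical for sequential layout but platform-dependent):

```csharp
using System;
using System.Runtime.InteropServices;

// Same three fields, different order. The long must be 8-byte aligned,
// so Padded gets padding after A and after C.
struct Padded  { public byte A; public long B; public byte C; }
struct Compact { public long B; public byte A; public byte C; }

class AlignmentDemo
{
    static void Main()
    {
        Console.WriteLine(Marshal.SizeOf<Padded>());  // typically 24
        Console.WriteLine(Marshal.SizeOf<Compact>()); // typically 16
    }
}
```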
Upvotes: 2
Reputation: 14531
All of these are good algorithmic approaches to the memory problem, but in practice you also want to run your code through a profiler to see where the memory and CPU resources are actually being consumed.
Upvotes: 2
Reputation: 23200
The best way to save memory is to first write the code in a clean way, so that you can see the design of the application in it.
Then make a new version of the code that's tweaked for memory. That way you ensure that future releases don't have to work with obfuscated code.
And yes, as memory and CPUs keep getting better, you will think about such optimizations less and less.
Upvotes: 2
Reputation: 20824
One technique is to use lazy initialization to create objects only when you need them.
Also, be sure to dispose (or set to null) objects that you no longer need, so they can be garbage collected.
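A minimal sketch of the lazy-initialization idea using `Lazy<T>` (the class and field names here are made up for illustration): the expensive object isn't allocated until something actually asks for it.

```csharp
using System;

class ReportCache
{
    // The 10 MB buffer is not allocated here; the factory delegate
    // runs only on the first access of _data.Value.
    private readonly Lazy<byte[]> _data =
        new Lazy<byte[]>(() => new byte[10_000_000]);

    // First call to Size triggers the allocation.
    public int Size => _data.Value.Length;

    // True only after the value has actually been created.
    public bool Loaded => _data.IsValueCreated;
}
```

If the report is never requested, the 10 MB is never spent.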
Upvotes: 5
Reputation: 234514
First, you're probably confusing CPU and RAM (aka memory). CPU is the processor, i.e., what runs your code against your data. Memory is where that code and data are stored.
That enum trick should actually be avoided. First, sbyte isn't CLS-compliant. Second, it can limit future expansion. And then there's the fact that the CPU works with entire words anyway (int on 32-bit architectures, long on 64-bit architectures). You lose all that, and for what gain? A few bytes off your memory footprint.
More to the point, follow these wise words: Premature optimization is the root of all evil.
That means, only optimize when it is really required. Measure things first. You'll quite likely realize it's not those three bytes from the enum that need cutting back.
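One crude way to "measure things first" on the managed heap, short of a real profiler, is `GC.GetTotalMemory` around an allocation. This is a rough sketch, not a precise measurement tool:

```csharp
using System;

class MeasureDemo
{
    static void Main()
    {
        // forceFullCollection: true makes the GC settle first,
        // so the delta mostly reflects our allocation.
        long before = GC.GetTotalMemory(forceFullCollection: true);

        var data = new int[1_000_000]; // roughly 4 MB of ints

        long after = GC.GetTotalMemory(forceFullCollection: true);
        Console.WriteLine($"~{after - before} bytes allocated");

        // Keep the array alive so it isn't collected before measuring.
        GC.KeepAlive(data);
    }
}
```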
Upvotes: 9
Reputation: 8447
It is true and it isn't true. There is a reason behind using int: the processor handles it naturally (except when running x64 Windows, where Int64 would be even more natural). This is because the CPU's general-purpose registers are 32 bits wide (or 64 bits in x64 mode).
Besides, let's face it: .NET isn't exactly about efficiency, in either memory or CPU. There are practices for avoiding big mistakes (like string concatenation in a loop rather than using StringBuilder), but shrinking an enum from 4 bytes to 1 byte isn't worth it.
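The StringBuilder point can be sketched like this (the helper names are illustrative): repeated `+=` on a string allocates a brand-new string each time and copies everything accumulated so far, while StringBuilder appends into a growable buffer.

```csharp
using System.Text;

class ConcatDemo
{
    // Each += allocates a new string and copies all previous characters:
    // O(n^2) total work for n parts, plus n intermediate garbage strings.
    public static string JoinSlow(string[] parts)
    {
        string result = "";
        foreach (var p in parts)
            result += p;
        return result;
    }

    // StringBuilder appends into an internal buffer and builds
    // the final string exactly once.
    public static string JoinFast(string[] parts)
    {
        var sb = new StringBuilder();
        foreach (var p in parts)
            sb.Append(p);
        return sb.ToString();
    }

    static void Main()
    {
        System.Console.WriteLine(JoinFast(new[] { "a", "b", "c" })); // abc
    }
}
```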
Upvotes: 2