Xian

Reputation: 76591

Performance anti patterns

I am currently working for a client who is petrified of changing lousy, untestable, and unmaintainable code because of "performance reasons". It is clear that many misconceptions are running rife, and that the reasons behind them are not understood but merely followed with blind faith.

One such anti-pattern I have come across is the need to mark as many classes as possible as sealed internal...

*RE-Edit: I see marking everything as sealed internal (in C#) as a premature optimisation.*

I am wondering what are some of the other performance anti-patterns people may be aware of or come across?
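
For illustration, here is a minimal hypothetical sketch of the pattern I mean (class and method names are invented): every type gets marked `internal sealed` on the assumption that it helps the runtime, regardless of whether the design calls for it.

```csharp
// The anti-pattern: everything is marked "internal sealed" purely for assumed
// performance benefits, which then blocks subclassing and mocking from tests
// in other assemblies.
internal sealed class OrderProcessor
{
    internal decimal CalculateTotal(decimal price, int quantity)
    {
        return price * quantity;
    }
}

// The same class left open. Any JIT gain from sealing is rarely measurable,
// while the design cost of sealing everything is paid every day.
public class OpenOrderProcessor
{
    public virtual decimal CalculateTotal(decimal price, int quantity)
    {
        return price * quantity;
    }
}
```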

Upvotes: 14

Views: 2504

Answers (18)

PEZ

Reputation: 17004

Exploiting quirks of your programming language. Things like using exception handling instead of if/else just because in PLSnakish 1.4 it's faster. Guess what? Chances are it's not faster at all, and two years from now someone maintaining your code will be furious with you: you obfuscated the code, and it now runs much slower because in PLSnakish 1.8 the language maintainers fixed the problem and if/else is ten times faster than the exception-handling trick. Work with your programming language and framework, not against them!
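
A small C# sketch of the kind of trick I mean (my example): using an exception for ordinary control flow where a plain conditional says the same thing more clearly.

```csharp
using System;

class ControlFlowExample
{
    // The "clever" version: an exception used as ordinary control flow.
    static int ParseOrZeroWithExceptions(string input)
    {
        try
        {
            return int.Parse(input);
        }
        catch (FormatException)
        {
            return 0;
        }
    }

    // The straightforward version: a conditional check via TryParse.
    static int ParseOrZero(string input)
    {
        return int.TryParse(input, out int value) ? value : 0;
    }

    static void Main()
    {
        Console.WriteLine(ParseOrZeroWithExceptions("abc")); // 0
        Console.WriteLine(ParseOrZero("42"));                // 42
    }
}
```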

Upvotes: 4

Glazius

Reputation: 739

Michael A. Jackson gives two rules for optimizing performance:

  1. Don't do it.
  2. (experts only) Don't do it yet.

If people are worried about performance, tell them to make it concrete: what counts as good performance, and how do you test for it? Then if your code doesn't perform up to their standards, at least it's a standard the code writer and the application user agree on.

If people are worried about the non-performance costs of rewriting ossified code (for example, the time sink), then present your estimates and demonstrate that it can be done within the schedule. Assuming it can.
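
One way to "make it real", sketched below with a hypothetical agreed target of 200 ms: settle on a concrete threshold and measure against it, rather than arguing about feelings.

```csharp
using System;
using System.Diagnostics;

class PerformanceTarget
{
    static void Main()
    {
        // Hypothetical agreed target: the operation must finish within 200 ms.
        const long targetMilliseconds = 200;

        var stopwatch = Stopwatch.StartNew();
        DoTheWorkUnderDiscussion();
        stopwatch.Stop();

        Console.WriteLine($"Elapsed: {stopwatch.ElapsedMilliseconds} ms " +
                          $"(target: {targetMilliseconds} ms)");
    }

    // Placeholder for whatever operation the performance argument is about.
    static void DoTheWorkUnderDiscussion()
    {
        System.Threading.Thread.Sleep(50);
    }
}
```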

Upvotes: 2

Kristopher Johnson

Reputation: 82545

Some developers believe a fast-but-incorrect solution is sometimes preferable to a slow-but-correct one. So they will ignore various boundary conditions or situations that "will never happen" or "won't matter" in production.

This is never a good idea. Solutions always need to be "correct".

You may need to adjust your definition of "correct" depending upon the situation. What is important is that you know/define exactly what you want the result to be for any condition, and that the code gives those results.

Upvotes: 2

Simon Gibbs

Reputation: 4808

Julian Birch once told me:

"Yes but how many years of running the application does it actually take to make up for the time spent by developers doing it?"

He was referring to the cumulative amount of time saved during each transaction by an optimisation that would take a given amount of time to implement.

Wise words from the old sage... I often think of this advice when considering a funky optimisation. You can extend the same notion a little further by considering how much developer time is being spent dealing with the code in its present state versus how much time is saved for the users. You could even weight the time by the hourly rate of the developer versus that of the user if you wanted.

Of course, sometimes it's impossible to measure. For example, if an e-commerce application takes one second longer to respond, you will lose some small percentage of revenue from users getting bored during that second. To win back that second you need to implement and maintain optimised code. The optimisation impacts gross profit positively and net profit negatively, so it's much harder to balance. You could try, given good statistics.
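
A rough back-of-the-envelope sketch of Julian Birch's question, with made-up numbers: how long must the optimisation run in production before the time it saves exceeds the time it cost to build?

```csharp
using System;

class BreakEvenSketch
{
    static void Main()
    {
        // All numbers below are illustrative assumptions, not measurements.
        double developerHoursSpent = 40;      // time to implement and maintain the optimisation
        double secondsSavedPerCall = 0.2;     // time saved per transaction
        double callsPerDay         = 10_000;  // production transaction volume

        double hoursSavedPerDay = secondsSavedPerCall * callsPerDay / 3600.0;
        double daysToBreakEven  = developerHoursSpent / hoursSavedPerDay;

        Console.WriteLine($"Hours of user time saved per day: {hoursSavedPerDay:F2}");
        Console.WriteLine($"Days until the optimisation pays for itself: {daysToBreakEven:F1}");
    }
}
```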

Upvotes: 4

Mike Dunlavey

Reputation: 40679

Once I had a former client call me asking for any advice I had on speeding up their apps.

He seemed to expect me to say things like "check X, then check Y, then check Z", in other words, to provide expert guesses.

I replied that you have to diagnose the problem. My guesses might be wrong less often than someone else's, but they would still be wrong, and therefore disappointing.

I don't think he understood.

Upvotes: 2

Laserallan

Reputation: 11312

  1. Using #defines instead of functions to avoid the penalty of a function call. I've seen code where the expansion of the defines turned out to generate huge and really slow code, and of course it was impossible to debug as well. Inline functions are the way to do this, but they should be used with care too.

  2. I've seen code where independent tests have been converted into bits in a word so they can be used in a switch statement. A switch can be really fast, but when people turn a series of independent tests into a bitmask and start writing some 256 optimized special cases, they had better have a very good benchmark proving that this gives a performance gain. It's really a pain from a maintenance point of view, and treating the different tests independently makes the code much smaller, which also matters for performance (see the sketch below).
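
A hedged C# sketch of the second point (flag names invented): packing independent conditions into a bitmask so they can feed a switch, versus simply testing them independently.

```csharp
using System;

class BitmaskSwitchExample
{
    // The "optimized" style: independent conditions packed into a bitmask,
    // with one case per combination. Three flags already need 8 cases;
    // eight flags would need 256.
    static string DescribeWithSwitch(bool isAdmin, bool isActive, bool isTrial)
    {
        int mask = (isAdmin ? 4 : 0) | (isActive ? 2 : 0) | (isTrial ? 1 : 0);
        switch (mask)
        {
            case 0b111: return "admin, active, trial";
            case 0b110: return "admin, active";
            case 0b101: return "admin, trial";
            case 0b100: return "admin";
            case 0b011: return "active, trial";
            case 0b010: return "active";
            case 0b001: return "trial";
            default:    return "none";
        }
    }

    // The plain style: each test handled independently, which stays small
    // no matter how many flags are added.
    static string Describe(bool isAdmin, bool isActive, bool isTrial)
    {
        var parts = new System.Collections.Generic.List<string>();
        if (isAdmin)  parts.Add("admin");
        if (isActive) parts.Add("active");
        if (isTrial)  parts.Add("trial");
        return parts.Count > 0 ? string.Join(", ", parts) : "none";
    }

    static void Main()
    {
        Console.WriteLine(DescribeWithSwitch(true, true, false)); // admin, active
        Console.WriteLine(Describe(true, true, false));           // admin, active
    }
}
```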

Upvotes: 6

Florian Greinacher

Reputation: 14784

Using design patterns just to have them used.

Upvotes: 6

Qubeuc

Reputation: 982

Do not refactor or optimize while writing your code. It is extremely important not to try to optimize your code before you finish it.

Upvotes: 4

Patrick Cuff

Reputation: 29806

Changing more than one variable at a time. This drives me absolutely bonkers! How can you determine the impact of a change on a system when more than one thing's been changed?

Related to this, making changes that are not warranted by observations. Why add faster/more CPUs if the process isn't CPU bound?

Upvotes: 3

Rob K

Reputation: 8926

One that I've run into was throwing hardware at seriously broken code in an attempt to make it fast enough, sort of the converse of Jeff Atwood's article mentioned in Rulas' comment. I'm not talking about the difference between speeding up a sort that uses a basic, correct algorithm by running it on faster hardware versus using an optimized algorithm. I'm talking about using a not-obviously-correct, home-brewed O(n^3) algorithm when an O(n log n) algorithm is in the standard library. There are also things like hand-coding routines because the programmer doesn't know what's in the standard library. That one's very frustrating.
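
An illustrative C# contrast (my own simplified example, with a quadratic rather than cubic hand-rolled routine): reinventing something the standard library already provides.

```csharp
using System;

class HandRolledVsLibrary
{
    // A hand-rolled quadratic exchange sort: easy to get subtly wrong,
    // and redundant when the standard library already provides sorting.
    static void HomeBrewSort(int[] values)
    {
        for (int i = 0; i < values.Length; i++)
        {
            for (int j = i + 1; j < values.Length; j++)
            {
                if (values[j] < values[i])
                {
                    (values[i], values[j]) = (values[j], values[i]);
                }
            }
        }
    }

    static void Main()
    {
        var a = new[] { 5, 3, 8, 1 };
        var b = new[] { 5, 3, 8, 1 };

        HomeBrewSort(a);   // reinvented, O(n^2)
        Array.Sort(b);     // standard library, O(n log n)

        Console.WriteLine(string.Join(",", a)); // 1,3,5,8
        Console.WriteLine(string.Join(",", b)); // 1,3,5,8
    }
}
```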

Upvotes: 6

dsimcha

Reputation: 68750

The elephant in the room: Focusing on implementation-level micro-optimization instead of on better algorithms.
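
An illustrative C# contrast (my example, not dsimcha's): micro-tweaking the inside of a quadratic lookup buys far less than switching to a data structure with better complexity.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class AlgorithmVsMicroOptimization
{
    // Quadratic: for each item, List.Contains scans the whole list.
    // No amount of micro-tuning inside the loop changes the O(n*m) shape.
    static int CountMatchesSlow(List<int> items, List<int> wanted)
    {
        int count = 0;
        foreach (int item in items)
        {
            if (wanted.Contains(item)) count++;
        }
        return count;
    }

    // Better algorithm: a HashSet makes each lookup O(1) on average.
    static int CountMatchesFast(List<int> items, List<int> wanted)
    {
        var wantedSet = new HashSet<int>(wanted);
        return items.Count(item => wantedSet.Contains(item));
    }

    static void Main()
    {
        var items  = new List<int> { 1, 2, 3, 4, 5 };
        var wanted = new List<int> { 2, 4, 6 };
        Console.WriteLine(CountMatchesSlow(items, wanted)); // 2
        Console.WriteLine(CountMatchesFast(items, wanted)); // 2
    }
}
```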

Upvotes: 29

dsimcha

Reputation: 68750

Appending to an array using (for example) push_back() in the C++ STL, ~= in D, etc., when you know how big the array is supposed to be ahead of time and can pre-allocate it.
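
The same idea expressed in C# terms, as an analogy to the C++/D examples in the answer: giving List&lt;T&gt; its final size up front avoids repeated internal re-allocations as it grows.

```csharp
using System.Collections.Generic;

class PreAllocationExample
{
    const int KnownSize = 100_000;

    static List<int> BuildWithoutCapacity()
    {
        var list = new List<int>();           // grows by re-allocating and copying
        for (int i = 0; i < KnownSize; i++) list.Add(i);
        return list;
    }

    static List<int> BuildWithCapacity()
    {
        var list = new List<int>(KnownSize);  // single allocation up front
        for (int i = 0; i < KnownSize; i++) list.Add(i);
        return list;
    }

    static void Main()
    {
        System.Console.WriteLine(BuildWithoutCapacity().Count);
        System.Console.WriteLine(BuildWithCapacity().Count);
    }
}
```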

Upvotes: 1

annakata

Reputation: 75852

General solutions.

Just because a given pattern/technology performs better in one circumstance does not mean it does in another.

StringBuilder overuse in .NET is a frequent example of this one.
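
A small C# sketch of the point (my example): StringBuilder pays off for many appends in a loop, but for a handful of known concatenations plain string concatenation is simpler and often no slower.

```csharp
using System.Text;

class StringBuilderOveruse
{
    // Overkill: three known pieces do not need a StringBuilder.
    static string GreetWithBuilder(string name)
    {
        var sb = new StringBuilder();
        sb.Append("Hello, ");
        sb.Append(name);
        sb.Append("!");
        return sb.ToString();
    }

    // Simpler, and the compiler turns this into a single string.Concat call.
    static string Greet(string name)
    {
        return "Hello, " + name + "!";
    }

    // Where StringBuilder genuinely helps: many appends in a loop.
    static string JoinNumbers(int count)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < count; i++) sb.Append(i).Append(' ');
        return sb.ToString();
    }

    static void Main()
    {
        System.Console.WriteLine(Greet("world"));
        System.Console.WriteLine(GreetWithBuilder("world"));
        System.Console.WriteLine(JoinNumbers(5));
    }
}
```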

Upvotes: 2

Xian

Reputation: 76591

I believe it is a common myth that super lean code "close to the metal" is more performant than an elegant domain model.

This was apparently debunked by the creator/lead developer of DirectX, who rewrote the C++ version in C# with massive improvements. [source required]

Upvotes: 1

krosenvold

Reputation: 77191

Lack of clear program structure is the biggest code-sin of them all. Convoluted logic that is believed to be fast almost never is.

Upvotes: 4

PEZ

Reputation: 17004

Premature performance optimizations come to mind. I tend to avoid performance optimizations at all costs, and when I decide I do need them I pass the issue around to my colleagues for several rounds, trying to make sure we put the obfu... eh, optimization in the right place.

Upvotes: 8

Sebastian Dietz

Reputation: 5706

The biggest performance anti-pattern I have come across is:

  • Not measuring performance before and after the changes.

Collecting performance data will show if a certain technique was successful or not. Not doing so will result in pretty useless activities, because someone has the "feeling" of increased performance when nothing at all has changed.
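
A minimal before/after measurement harness in C# (the workloads and iteration count are placeholders): without numbers like these, "it feels faster" is all anyone has.

```csharp
using System;
using System.Diagnostics;

class MeasureBeforeAndAfter
{
    static long TimeIt(Action action, int iterations = 5)
    {
        // Warm up once so JIT compilation does not distort the measurement.
        action();

        var stopwatch = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++) action();
        stopwatch.Stop();

        return stopwatch.ElapsedMilliseconds / iterations;
    }

    static void Main()
    {
        // Placeholder workloads standing in for the code before and after the change.
        long before = TimeIt(() => System.Threading.Thread.Sleep(30));
        long after  = TimeIt(() => System.Threading.Thread.Sleep(20));

        Console.WriteLine($"Before: {before} ms, after: {after} ms");
    }
}
```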

Upvotes: 71

Kev

Reputation: 16321

Variable re-use.

I used to do this all the time, figuring I was saving a few cycles on the declaration and lowering the memory footprint. Those savings were of minuscule value compared with how unruly it made the code to debug, especially if I ended up moving a code block around and the assumptions about starting values changed.
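
A small C# illustration of the trap (names made up): re-using one variable for two unrelated purposes makes a later re-ordering of the code silently change its meaning.

```csharp
using System;

class VariableReuseExample
{
    static void Main()
    {
        // Re-used variable: "total" first holds an order subtotal, then is
        // recycled as a tax amount. Moving either line later breaks the
        // assumption about what "total" contains at that point.
        decimal total = 100m;      // subtotal
        total = total * 0.2m;      // now suddenly the tax
        Console.WriteLine($"Tax (reused variable): {total}");

        // Separate, descriptively named variables cost nothing measurable
        // and keep each value's meaning stable.
        decimal subtotal = 100m;
        decimal tax = subtotal * 0.2m;
        Console.WriteLine($"Tax (separate variables): {tax}");
    }
}
```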

Upvotes: 17
