vtrubnikov

Reputation: 2095

How would using BigDecimal affect application performance?

I want to use BigDecimal to represent arbitrary precision numbers like prices and amounts in a low-latency trading application with thousands of orders and execution reports per second.

I won't be doing many math operations on them, so the question is not about the performance of BigDecimal arithmetic per se, but rather about how large volumes of BigDecimal objects would affect the performance of the application.

My concern is that a huge number of short-lived BigDecimal objects will put a strain on the GC and result in longer stop-the-world pauses with the CMS collector - and that is definitely something I would like to avoid.

Can you confirm my concerns and suggest alternatives to using BigDecimal? And if you think my concerns are unfounded, please explain why.

Update:

Thanks to all who answered. I am now convinced that using BigDecimal will hurt the latency of my application (even though I still plan to measure it).

For the time being we decided to stick with a "very non-OOP" solution (but without the accuracy hit): use two ints, one for the mantissa and one for the exponent. The rationale is that primitives are not separate heap objects (locals live on the stack, and primitive fields are stored inline in their containing object), so they create no garbage for the collector.
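A minimal sketch of what that representation can look like; the class, helper names and rescaling logic below are illustrative, not part of the actual system:

    // A sketch of the two-int representation described above; names are illustrative.
    // 123.45 with an exponent of -2 is stored as mantissa = 12345, exponent = -2.
    public final class Prices {

        private Prices() {}

        // Rescale a mantissa to a common (smaller or equal) exponent before adding or comparing.
        // Only downward rescaling (shift >= 0) is handled in this sketch.
        static long toScaled(int mantissa, int exponent, int targetExponent) {
            int shift = exponent - targetExponent;
            long scaled = mantissa;
            for (int i = 0; i < shift; i++) {
                scaled *= 10;
            }
            return scaled;
        }

        // Only for display/logging; the hot path never allocates.
        static String format(int mantissa, int exponent) {
            return java.math.BigDecimal.valueOf(mantissa, -exponent).toPlainString();
        }
    }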

Upvotes: 16

Views: 13821

Answers (7)

oxbow_lakes

Reputation: 134310

If you are developing a low-latency trading program and you genuinely want to compete in latency terms, then BigDecimal is not for you, it is as simple as that. Where microseconds matter, object creation and any decimal math is just too expensive.

I would argue that for almost everyone else, using BigDecimal is a no-brainer because it will have little visible impact on application performance.

In latency-critical systems making trading decisions, any unpredictable garbage-collection pauses are completely out of the question, so whilst the current garbage-collection algorithms are fantastic in normal use, they are not necessarily appropriate when a delay of 5 milliseconds may cost you a lot of money. I would expect that such systems are written in a very non-OOP style, with few or no objects used aside from some interned Strings (for codes and the like).

You'll certainly need to use double (or even float) and take the accuracy hit, or else use long and measure all amounts in cents, tenths of a cent or satoshis (whatever the smallest unit of account is).
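For illustration, a sketch of the long-in-cents idea (the class name, variables and amounts are made up):

    public class CentsExample {
        public static void main(String[] args) {
            long priceCents = 1050;                       // $10.50
            long quantity   = 200;
            long notionalCents = priceCents * quantity;   // 210000 cents = $2100.00, no allocation

            // Formatting is pushed to the edges (logging, GUIs), off the hot path.
            // This simple formatter only handles non-negative amounts.
            System.out.printf("%d.%02d%n", notionalCents / 100, notionalCents % 100);
        }
    }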

Upvotes: 16

ryan

Reputation: 11

I work on a team that conducts performance assessments and optimizations on applications, and we recently had one that was using Java's BigDecimal. We observed significant performance issues with memory utilization. We later switched to a Newton-Raphson approach, which let us keep the accuracy of the calculations while showing significantly better performance than BigDecimal.

Just to add: when we used doubles we saw a massive loss in accuracy, as expected.
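The answer does not say which calculation Newton-Raphson replaced; purely as an illustration of the technique, here is a Newton-Raphson square root on plain doubles:

    // Newton-Raphson iteration for f(x) = x^2 - a; converges quadratically.
    static double newtonSqrt(double a) {
        if (a == 0.0) return 0.0;
        double x = a > 1.0 ? a : 1.0;   // crude initial guess, assumes a >= 0
        double prev;
        do {
            prev = x;
            x = 0.5 * (x + a / x);      // x_{n+1} = (x_n + a / x_n) / 2
        } while (Math.abs(x - prev) > 1e-12 * x);
        return x;
    }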

Upvotes: 1

kms333

Reputation: 3257

Why don't you use a long with an implied number of decimal places? For example, with 8 implied decimal places, 0.01 would be stored as 1,000,000.
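A sketch of what arithmetic looks like with 8 implied decimal places (the scale and values below are just examples); note that multiplying two scaled values doubles the implied scale, so one factor of 10^8 has to be divided back out:

    public class ImpliedScale {
        static final long SCALE = 100_000_000L;          // 10^8, i.e. 8 implied decimal places

        public static void main(String[] args) {
            long price    = 1_000_000L;                  // 0.01
            long quantity = 250 * SCALE;                 // 250.00000000

            // price * quantity carries 16 implied decimals; rescale back to 8.
            // Watch for overflow with large values - the intermediate product must fit in a long.
            long notional = (price * quantity) / SCALE;  // 250_000_000 = 2.50000000
            System.out.println(notional);
        }
    }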

Upvotes: 2

Michael Borgwardt

Reputation: 346457

The big question is: do you actually need arbitrary-precision decimal calculations? If the calculations are only done to analyze the data and make decisions based on it, then rounding and binary-representation artifacts in the least significant bits are probably irrelevant to you; just go ahead and use double (and analyze your algorithms for numerical stability).
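As a quick illustration of the kind of artifact meant here (harmless for analysis, fatal for bookkeeping):

    import java.math.BigDecimal;

    public class Artifacts {
        public static void main(String[] args) {
            System.out.println(0.1 + 0.2);                                        // 0.30000000000000004
            System.out.println(new BigDecimal("0.1").add(new BigDecimal("0.2"))); // 0.3
        }
    }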

If you're actually doing transactions where the numbers have to add up and precision matters absolutely, then double is not an option. Perhaps you can separate these two parts of your app and use BigDecimal only in the transaction part.

If that is not possible, then you're pretty much out of luck. You'd need a BCD math library, and I don't think Java has one. You can try writing your own, but it will be a lot of work and the result may still not be competitive.

Upvotes: 5

Tadeusz Kopec for Ukraine

Reputation: 12413

I'm not sure what your requirements are, but generally when doing financial calculations you cannot afford the accuracy hit caused by floating-point types. When dealing with money, accuracy and proper rounding usually matter more than efficiency.
If you don't have to deal with percentages and all of the amounts are integral, you can use integer types (int, long or even BigInteger) with 1 meaning 0.01 of your currency unit.
And even if you think you can afford the accuracy hit of double, it may be worth trying BigDecimal first and checking whether it's really too slow for you.
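For example (the fee rate and amount below are made up), percentages are the case where BigDecimal's explicit rounding control earns its keep:

    import java.math.BigDecimal;
    import java.math.RoundingMode;

    public class FeeExample {
        public static void main(String[] args) {
            BigDecimal amount  = new BigDecimal("10333.335");
            BigDecimal feeRate = new BigDecimal("0.003");     // 0.3 %

            // Rounding to cents is explicit and auditable, not an accident of binary floating point.
            BigDecimal fee = amount.multiply(feeRate).setScale(2, RoundingMode.HALF_UP);
            System.out.println(fee);                          // 31.00
        }
    }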

Upvotes: 1

Tom Hawtin - tackline

Reputation: 147154

BigDecimal performs very much worse than, say, long, double or even Long. Whether that makes a significant difference to your application's performance depends on the application.

I suggest finding the slowest part of your application and doing a comparative test on that. Is it still fast enough? If not, you might want to write a small immutable class containing a single long, possibly checking for overflows.
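A rough sketch of such a class; the name Fixed4 and the fixed scale of 4 decimal places are assumptions, not anything from the answer:

    // Immutable wrapper around a single long with 4 implied decimal places.
    public final class Fixed4 {
        private final long raw;                 // value * 10_000

        private Fixed4(long raw) { this.raw = raw; }

        public static Fixed4 ofRaw(long raw) { return new Fixed4(raw); }

        public Fixed4 plus(Fixed4 other) {
            // Math.addExact throws ArithmeticException on overflow instead of silently wrapping.
            return new Fixed4(Math.addExact(raw, other.raw));
        }

        public Fixed4 times(long integerFactor) {
            return new Fixed4(Math.multiplyExact(raw, integerFactor));
        }

        @Override
        public String toString() {
            return java.math.BigDecimal.valueOf(raw, 4).toPlainString();
        }
    }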

Upvotes: 5

Brian Agnew

Reputation: 272367

JVMs are pretty good nowadays in terms of handling the creation and destruction of short-lived objects, so that's not the worry it once was.

I would recommend building a mock-up of what you want to do, and measuring it. That's going to be worth a lot more than any 'theoretical' answers you may get :-)

Looking at your particular problem domain, similar systems I've worked on in the past have worked very well using doubles for the data you want to use BigDecimal for, so it may be worth re-examining your thinking in this area. A cursory glance at BigDecimal shows it has 5 or 6 fields, and the extra memory consumption compared with a single double may outweigh any benefit in functionality.
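In that spirit, a deliberately crude mock-up sketch (the volume and price are invented; for real numbers you would prototype your actual message flow, or use a harness like JMH):

    import java.math.BigDecimal;

    public class RoughMeasure {
        public static void main(String[] args) {
            int orders = 1_000_000;                       // adjust to your expected volume
            for (int i = 0; i < 5; i++) {                 // let the JIT warm up a little
                run(orders);
            }
            long start = System.nanoTime();
            BigDecimal total = run(orders);
            long elapsed = System.nanoTime() - start;
            System.out.printf("total=%s, %.1f ns/order%n", total, (double) elapsed / orders);
        }

        static BigDecimal run(int orders) {
            // One short-lived BigDecimal per order, roughly matching the usage in the question.
            BigDecimal total = BigDecimal.ZERO;
            BigDecimal price = new BigDecimal("10.25");
            for (int i = 0; i < orders; i++) {
                total = total.add(price.multiply(BigDecimal.valueOf(i % 100)));
            }
            return total;
        }
    }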

Upvotes: 8
