Reputation: 7435
I have two decimal fields, one with a precision of 18, and another with a precision of 200. I would like to have calculations of the first not care about places past 18, but I need to consider that for the larger number.
If I use decimal.getcontext() I'm affecting the global, shared context. Is there something I'm missing about per-operation or per-decimal precision?
Edit: I'm using Python's decimal module.
Upvotes: 3
Views: 56
Reputation: 281988
There's a global context, but you don't need to use that one. You can construct additional contexts and use them, either explicitly on a per-operation basis:
z = ctx.add(x, y)
b = a.ln(ctx)
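As a sketch of the per-operation approach, here is a self-contained example using two independent contexts like the ones in the question (the 18- and 200-digit precisions are taken from the question; the variable names are illustrative):

```python
import decimal
from decimal import Decimal

# Two independent contexts; creating them does not touch the
# global context returned by decimal.getcontext().
ctx18 = decimal.Context(prec=18)
ctx200 = decimal.Context(prec=200)

x = Decimal(1)
y = Decimal(3)

# Context methods perform the operation at that context's precision.
q18 = ctx18.divide(x, y)    # 18 significant digits of 1/3
q200 = ctx200.divide(x, y)  # 200 significant digits of 1/3

print(q18)   # 0.333333333333333333
```

Because the precision travels with the context object rather than the thread's current context, the same pair of operands can be combined at either precision in the same block of code.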
or by setting a temporary local context with decimal.localcontext:
with decimal.localcontext(ctx):
z = x + y
b = a.ln()
The first option is less likely to leak into operations you don't intend it to affect, such as library routines or coroutines; the second reduces the chance of accidentally omitting the context from one operation when you perform many operations under the same context. Either way, the context won't leak into other threads, because each thread has its own current context.
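A minimal runnable sketch of the localcontext pattern, showing that the global precision is restored when the with block exits (the sqrt call is just an illustrative high-precision operation, not from the question):

```python
import decimal
from decimal import Decimal

before = decimal.getcontext().prec  # default global precision (28)

with decimal.localcontext() as ctx:
    ctx.prec = 200
    # All decimal arithmetic in this block uses 200 digits.
    wide = Decimal(2).sqrt()

# On exit, the previous global context is automatically restored.
assert decimal.getcontext().prec == before
```

Since Python 3.11 you can also pass keyword arguments directly, e.g. decimal.localcontext(prec=200), instead of mutating ctx inside the block.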
Upvotes: 3