benbo

Reputation: 1528

Avoiding numerical instability when computing 1/(1+exp(x)) in Python

I would like to compute 1/(1+exp(x)) for (possibly large) x. This is a well-behaved function whose values lie between 0 and 1. I could just do

import numpy as np
1.0/(1.0+np.exp(x))

but in this naive implementation np.exp(x) will overflow to infinity for large positive x (raising an overflow warning) and underflow to 0 for large negative x. Are there functions available in Python that will help me out here?

I am considering implementing a series expansion and series acceleration, but I am wondering if this problem has already been solved.

Upvotes: 3

Views: 2115

Answers (2)

Warren Weckesser

Reputation: 114841

You can use scipy.special.expit(-x). It computes the logistic function in a numerically stable way and avoids the overflow warnings generated by 1.0/(1.0 + np.exp(x)).
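A minimal demonstration of this approach (note that expit(t) computes 1/(1+exp(-t)), so the original expression 1/(1+exp(x)) corresponds to expit(-x)):

```python
import numpy as np
from scipy.special import expit

x = np.array([-1000.0, 0.0, 1000.0])

# Naive version: np.exp(1000.0) overflows to inf (with a RuntimeWarning),
# while expit(-x) evaluates the same quantity without any warning.
print(expit(-x))
# [1.  0.5 0. ]
```

For x = 1000 the true value is about exp(-1000), which is smaller than the smallest positive float64, so expit correctly underflows to 0.0 rather than producing a warning.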

Upvotes: 6

ali_m

Reputation: 74182

Fundamentally you are limited by floating-point range. For example, if you are using 64-bit floats:

fmax_64 = np.finfo(np.float64).max  # the largest representable 64 bit float
print(np.log(fmax_64))
# 709.782712893

If x is larger than about 709 then np.exp(x) overflows to infinity in a 64-bit float, and the naive expression 1. / (1 + np.exp(x)) collapses to 0 with an overflow warning rather than being computed from the true value of np.exp(x).
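A quick sanity check of that cutoff (a minimal sketch; the errstate context just silences the expected overflow warning):

```python
import numpy as np

fmax_64 = np.finfo(np.float64).max  # ~1.798e308

with np.errstate(over='ignore'):
    print(np.exp(709.0))  # ~8.2e307 -- still below fmax_64
    print(np.exp(710.0))  # inf: exceeds fmax_64, so float64 overflows
```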

You could use an extended precision float (i.e. np.longdouble):

fmax_long = np.finfo(np.longdouble).max
print(np.log(fmax_long))
# 11356.5234063

The precision of np.longdouble varies by platform: on x86 it is usually 80-bit extended precision, which would allow you to work with x values up to about 11356:

def func(x):
    return 1. / (1. + np.exp(np.longdouble(x)))

print(func(11356))
# 1.41861159972e-4932

Beyond that you would need to rethink how you compute the expression, or else use something like mpmath, which supports arbitrary-precision arithmetic. However, this usually comes at the cost of much worse runtime performance than numpy, since vectorization is no longer possible.
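A sketch of the mpmath route, assuming mpmath is installed (the working precision of 50 digits and the test value x = 20000 are arbitrary choices for illustration):

```python
import mpmath

mpmath.mp.dps = 50  # work with 50 decimal digits of precision

x = mpmath.mpf(20000)  # far beyond the float64 and longdouble exponent range
result = 1 / (1 + mpmath.exp(x))

# The true value is ~exp(-20000) ~ 1e-8686: far below any hardware float's
# range, but representable in mpmath's arbitrary-precision format.
print(result)
```

Note that this operates on one scalar at a time; applying it elementwise to a large numpy array loses the speed of vectorized hardware arithmetic.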

Upvotes: 2
