Sometimes, when you have a series of numbers representing the log-probability of something, you need to add up the probabilities. Perhaps to normalise them, or perhaps to weight them… whatever. You end up writing (or Mike ends up writing):
logsumexp = lambda x: np.log(sum([np.exp(xi) for xi in x]))
Which is going to suck when the members of x are small. Small enough that the precision of the 64-bit float you’re holding them in runs out, and they exponentiate to zero (somewhere near -700). Your code is going to barf when it gets to the np.log part, and finds it can’t log zero.
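Here is a minimal sketch of the failure mode. The values are hypothetical, chosen to sit past float64’s underflow point, and `naive_logsumexp` is just the one-liner above under a made-up name:

```python
import numpy as np

# The naive log-sum-exp from the post, under a hypothetical name.
naive_logsumexp = lambda x: np.log(sum([np.exp(xi) for xi in x]))

# Hypothetical log-probabilities, well past the ~-700 underflow point for float64.
x = [-750.0, -751.0, -752.0]
print(np.exp(-750.0))        # underflows to 0.0
print(naive_logsumexp(x))    # log(0) -> -inf, with a RuntimeWarning
```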
One solution is to add a constant to each member of x, so that you don’t work so close to the limits of the precision, and remove the constant later:
def logsumexp(x):
    x += 700
    x = np.sum(np.exp(x))
    return np.log(x) - 700
Okay, so my choice of 700 is a little arbitrary, but that (-700) is where the precision starts to run out, and it works for me. Of course, if your numbers are way smaller than that, you may have a problem.
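To see the shift doing its job, here is the same idea as a sketch (the function name and test values are mine, not from the post), applied to values where the naive version underflows:

```python
import numpy as np

# The constant-shift trick from the post, written out as a function.
def logsumexp_shift(x):
    x = np.asarray(x) + 700.0
    return np.log(np.sum(np.exp(x))) - 700.0

# Hypothetical values just past the underflow point: the naive version
# gives -inf here, but after the +700 shift everything is representable.
x = np.array([-750.0, -751.0, -752.0])
print(logsumexp_shift(x))    # finite, roughly -749.59
```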
Edit: grammar. And I’m getting used to this whole weblog shenanigan. Oh, and <code>blah</code> looks rubbish: I'm going to stop doing that.
Is it OK if x is constrained to be … ?

Comment by mikedewar — January 26, 2009 @ 2:20 pm
Yep. Then … More often though, x are values of the log-likelihood and as such are x ∈ [-∞, 0].

Comment by jameshensman — January 26, 2009 @ 2:51 pm
Thanks for the tip, James. Here is my new way:
logsumexpr = lambda x: pylab.log(sum([pylab.exp(xi+700) for xi in x]))-700
which is essentially what you said, except:
a) it’s a one liner with the λ-calculus and therefore uber geek cred goes to ME
b) it works when x is a list! Yours only works on numpy arrays
Incidentally the ‘r’ in logsumexpr is for ROBUST (oh yeah!)
Comment by mikedewar — January 26, 2009 @ 5:28 pm
Yikes! This only works for a while. As x gets smaller and smaller (a longer and longer chain gets less and less likely, no matter what), the amount you need to add on to x so that the log() doesn’t barf -inf increases. The trouble is, of course, that if you add much more than 700 to the x variable, the exp() barfs inf. So while adding on a constant works for a while, it doesn’t work forever!

Comment by mikedewar — January 26, 2009 @ 6:22 pm
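Mike’s point can be seen with a sketch (the function name and test values are hypothetical): once the values fall below roughly -1400, a fixed +700 shift still leaves exp() underflowing to zero.

```python
import numpy as np

# The fixed-constant shift from the post, under a hypothetical name.
def logsumexp_shift(x):
    x = np.asarray(x) + 700.0
    return np.log(np.sum(np.exp(x))) - 700.0

# Hypothetical values below -1400: even after adding 700, exp(-800)
# underflows to zero, and the log still barfs.
x = np.array([-1500.0, -1501.0])
print(logsumexp_shift(x))    # still -inf
```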
Yeah, you’re right. You can write a little bit of code to work out the ‘best’ constant. You’re still screwed if you’ve got a really big range in x: more than about 1400 (I think) and it barfs even if you add a ‘clever’ constant.

Comment by jameshensman — January 26, 2009 @ 10:15 pm
Typically the best constant is just -max(x). Shouldn’t matter if the range is large: if x_i / x_j is e^{1400}, then x_j for the purposes of this function can be treated as zero.
Comment by chas — December 15, 2009 @ 1:49 am
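chas’s suggestion above, shifting by max(x), can be sketched like this (the function name is mine): the largest term exponentiates to exactly 1, so the sum never underflows to zero whatever the magnitude of x.

```python
import numpy as np

# Max-shift log-sum-exp: subtract max(x) before exponentiating, add it
# back after the log. The largest shifted term is exp(0) = 1.
def logsumexp_robust(x):
    x = np.asarray(x, dtype=float)
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

# Works even where the fixed +700 shift fails.
print(logsumexp_robust([-1500.0, -1501.0]))   # finite, roughly -1499.69
```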