James Hensman’s Weblog

January 29, 2009

ipython %bg coolness

Filed under: python — jameshensman @ 3:17 pm

In IPython, I just found the magic function ‘%bg’, or background. If you want to run a long process but keep the session active, this runs the task in a thread and stores the result for you to pick up later, e.g.

In [1]: from numpy import matlib as ml

In [2]: %bg ml.rand(2,2)
Starting job # 0 in a separate thread.

In [3]: %bg ml.zeros(3)
Starting job # 1 in a separate thread.

In [4]: jobs[0].result
Out[4]:
matrix([[ 0.97556473,  0.67794221],
        [ 0.9331659 ,  0.78887001]])

In [5]: jobs[1].result
Out[5]: matrix([[ 0., 0., 0.]])

Mucho coolness, especially if you have a long process (or lots of them) to complete. I wonder if it’s actually multi-threaded, as in using both of my CPUs?
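Under the hood this is presumably just a thread that stashes its return value somewhere. Here’s a rough sketch of the same idea using the standard library (not IPython’s actual implementation, just the gist; BackgroundJob is my own name for it):

import threading

class BackgroundJob(threading.Thread):
    """Run a function in a thread and stash its return value."""
    def __init__(self, func, *args, **kwargs):
        threading.Thread.__init__(self)
        self.func, self.args, self.kwargs = func, args, kwargs
        self.result = None
        self.start()

    def run(self):
        self.result = self.func(*self.args, **self.kwargs)

from numpy import matlib as ml
job = BackgroundJob(ml.rand, 2, 2)   # like %bg ml.rand(2,2)
job.join()                           # or keep working and check job.result later
print(job.result)

On the two-CPUs question: a plain Python thread shares the interpreter’s global lock (the GIL), so pure-Python work won’t run on both cores at once, although numpy can release the GIL inside some of its heavy C routines.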


python.copy()

Filed under: python — jameshensman @ 10:48 am

Assignment in Python is exactly that: assignment, not copying. For those of us who have switched to the language of the gods from some inferior mortal language (Matlab), this can lead to some frustration.

For example, within my Variational Factor Analysis (VBFA) class, I need to keep a record of something I’m calling b_phi_hat. One of the methods in the class involves the update of this little vector, which depends on its initial (prior) value, b_phi. Like this:

import numpy as np
class VBFA:
    def __init__(self):
        self.b_phi = np.mat(np.zeros((5,1)))
        #blah blah...
    
    def update_phi(self):
        self.b_phi_hat = self.b_phi
        for i in range(5):
            self.b_phi_hat[i] = self.something()

update_phi() gets called hundreds of times when the class is used. Spot the problem? It’s on line 8, where b_phi_hat is bound to b_phi. When the loop runs on the next two lines, it modifies the original, not just a copy of the original; i.e. after the first call, line 8 doesn’t ‘refresh’ b_phi_hat, it just re-binds it to the (now modified) b_phi.

What I should have written is:

import numpy as np
class VBFA:
    def __init__(self):
        self.b_phi = np.mat(np.zeros((5,1)))
        #blah blah...
    
    def update_phi(self):
        self.b_phi_hat = self.b_phi.copy()
        for i in range(5):
            self.b_phi_hat[i] = self.something()

which explicitly makes a copy of the original on line 8, refreshing b_phi_hat on every call.
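To see the difference in isolation, here’s a minimal demonstration (a, b and c are just illustrative names):

import numpy as np

a = np.mat(np.zeros((3,1)))
b = a               # assignment: b is just another name for the same object
b[0] = 1.0
print(a[0,0])       # 1.0, the ‘original’ has changed too

c = a.copy()        # copy: c is a new, independent matrix
c[1] = 2.0
print(a[1,0])       # still 0.0, a is untouched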

January 28, 2009

Some notes on Factor Analysis (FA)

Filed under: Uncategorized — jameshensman @ 9:21 am

Factor analysis is a statistical technique which can uncover (linear) latent structures in a set of data.  It is very similar to PCA, but with a different noise model.  The model consists of a set of observed variables \{\mathbf{x}_n\}_{n=1}^N (this is your collected data) and some latent variables \{\mathbf{z}_n\}_{n=1}^N, with distribution p(\mathbf{z}_n) = \mathcal{N}(\mathbf{0}, \mathbf{I}).  There exists a noisy linear map \mathbf{A} from the latent space to the observed variables: p(\mathbf{x}_n \mid \mathbf{z}_n) = \mathcal{N}\left(\mathbf{A}\mathbf{z}_n, \mathbf{\Psi}\right), where \mathbf{\Psi} is a diagonal matrix.

It’s also possible to assume a mean vector for the observed variables, but I’m going to ignore that for a moment for clarity.

Some simple algebra (marginalising over \mathbf{z}_n) yields p(\mathbf{x}_n) = \mathcal{N}\left(\mathbf{0}, \mathbf{A}\mathbf{A}^\top + \mathbf{\Psi}\right). It should now be clear that we are modelling the distribution of \mathbf{x} as a Gaussian with a limited number of degrees of freedom.
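As a quick sanity check of that marginal, here’s a small numpy sketch (the dimensions and variable names are my own choices) which samples from the generative model and compares the empirical covariance of the data with \mathbf{A}\mathbf{A}^\top + \mathbf{\Psi}:

import numpy as np

N, D, K = 100000, 4, 2               # samples, observed dim, latent dim (arbitrary)
A = np.random.randn(D, K)            # linear map from latent to observed space
Psi = np.diag(np.random.rand(D))     # diagonal noise covariance

Z = np.random.randn(N, K)                              # z_n ~ N(0, I)
noise = np.random.randn(N, D) * np.sqrt(np.diag(Psi))  # diagonal Gaussian noise
X = Z.dot(A.T) + noise                                 # x_n ~ N(A z_n, Psi)

print(np.cov(X.T))          # empirical covariance: should approach...
print(A.dot(A.T) + Psi)     # ...the model covariance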

Variational Approach

The variational approach involves placing conjugate priors over the model parameters (\mathbf{A} and \mathbf{\Psi}), and finding a factorised approximation to the posterior.

The variational approach to FA yields a distinct advantage: by placing an ARD prior over the columns of \mathbf{A}, unnecessary components get ‘switched off’, and the corresponding column of \mathbf{A} goes to zero. This is dead useful: you don’t need to know the dimension of the latent space beforehand, it just drops out of the model.
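Concretely, the usual ARD construction (my notation here, following the standard variational PCA/FA setup) gives each column \mathbf{a}_k of \mathbf{A} its own precision hyperparameter:

p(\mathbf{A} \mid \boldsymbol{\alpha}) = \prod_{k=1}^{K} \mathcal{N}\left(\mathbf{a}_k \mid \mathbf{0}, \alpha_k^{-1}\mathbf{I}\right), \qquad p(\alpha_k) = \mathrm{Gamma}(\alpha_k \mid a_0, b_0).

When the posterior over \alpha_k concentrates on large values, the corresponding column is pulled towards zero and component k is effectively switched off.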

Papers
There is a super Master’s thesis on variational FA here by a chap called Frederik Brink Nielsen.

An immediate extension of factor analysis springs to mind: if the distribution of the data is just a Gaussian, why not have a mixture of them? It seems Ghahramani was there first: GIYF

More recently, Zhao and Yu proposed an alteration to the variational FA model which apparently achieves a tighter bound by making \mathbf{A} dependent on \mathbf{\Psi}; this is said to make the model less prone to under-fitting (i.e. it drops factors more easily). Neural Networks Journal

January 26, 2009

Working with log likelihoods

Filed under: python — jameshensman @ 1:41 pm

Sometimes, when you have a series of numbers representing the log-probability of something, you need to add up the probabilities. Perhaps to normalise them, or perhaps to weight them… whatever.  You end up writing (or Mike ends up writing):

logsumexp = lambda x: np.log(sum([np.exp(xi) for xi in x]))

Which is going to suck when the members of x are small. Small enough that the precision of the 64 bit float you’re holding them in runs out, and they exponentiate to zero (somewhere near -700).  Your code is going to barf when it gets to the np.log part and finds it can’t take the log of zero.

One solution is to add a constant to each member of x, so that you don’t work so close to the limits of the precision, and remove the constant later:

import numpy as np

def logsumexp(x):
    x = np.asarray(x) + 700   # note: not +=, which would modify the caller's array in place
    x = np.sum(np.exp(x))
    return np.log(x) - 700

Okay, so my choice of 700 is a little arbitrary, but that (-700) is where the precision starts to run out, and it works for me. Of course, if your numbers are way smaller than that, you may have a problem.
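For what it’s worth, the usual generalisation of this trick is to shift by the maximum of x rather than a fixed constant, which works at any scale (not what I wrote above, but the standard version):

import numpy as np

def logsumexp(x):
    x = np.asarray(x)
    m = np.max(x)       # shift so the largest exponent is exp(0) = 1
    return m + np.log(np.sum(np.exp(x - m)))

print(logsumexp([-1000.0, -1000.0]))   # about -999.307; the naive version gives -inf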

Edit: grammar. And I’m getting used to this whole weblog shenanigan. Oh, and <code>blah</code> looks rubbish: I'm going to stop doing that.

January 23, 2009

A place for putting things

Filed under: Uncategorized — jameshensman @ 5:19 pm

This is a place for putting things, so that they can be found.  Some of these things include thoughts on what I’m doing at the moment, which consists largely of

  • python
  • probabilistic models of data
  • learning
  • teaching

I’m hoping that by storing my thoughts, they’ll begin to make more sense. I may even start writing thoughts on my thoughts, but mostly I’m just going to put them here for safekeeping.
