Saturday, November 26, 2011

The Sensor Calibration problem

I often refer to the calibration problem, but I noticed that I never stated it clearly anywhere on the blog: it is simply the problem of figuring out the real transform between a state of interest (it could be a scene) and its measurements (the image of that scene).

Let's imagine you have a measurement device that, applied to a state/input x, yields a measurement y. Ideally, the transform is linear (if it is nonlinear, one can linearize around a set point) and we have

y = A x
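
Here is a minimal numerical sketch of that ideal linear model in Python; the dimensions and the Gaussian A are illustrative placeholders, not a model of any particular instrument:

import numpy as np

# Minimal sketch of the ideal linear measurement model y = A x.
n = 1024          # size of the scene / state x
m = 128           # number of measurements (m << n in a compressive setup)

rng = np.random.default_rng(0)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # assumed measurement operator
x = rng.standard_normal(n)                     # scene / input

y = A @ x                                      # ideal, noiseless measurements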

It turns out that A is either given or has been computed, but it is imprecisely known. For instance, let us imagine a coded aperture placed in front of a webcam: we know the generic operator A, but sometimes the holes in the aperture are not perfect, or are not located exactly where they are supposed to be, breaking a symmetry assumed in the model... In the generic case, we want to solve for a more precise A, so we consider that any measurement taken by the setup really follows:

y = (A + E) x + epsilon

where E, epsilon and sometimes the x's are unknowns. The calibration problem then amounts to figuring out E and epsilon with as few measurements y and inputs x as possible, i.e. we don't want to spend hours of calibration for one eventual measurement.
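
In the easiest setting, where the calibration inputs x are known, a hedged sketch of the estimation of E looks like a (regularized) least-squares problem: stacking the inputs as X and the measurements as Y, the residual Y - A X is approximately E X, so E can be fit directly. All sizes and the regularization weight below are made up for illustration.

import numpy as np

def estimate_perturbation(A, X, Y, lam=1e-3):
    """Ridge least-squares estimate of E given known A, inputs X (n x K), measurements Y (m x K)."""
    R = Y - A @ X                      # part of the data not explained by the nominal A
    n = X.shape[0]
    G = X @ X.T + lam * np.eye(n)      # regularized Gram matrix of the calibration inputs
    return R @ X.T @ np.linalg.inv(G)  # E_hat, shape (m, n)

# toy usage with a synthetic ground-truth perturbation
rng = np.random.default_rng(1)
m, n, K = 64, 256, 400
A = rng.standard_normal((m, n)) / np.sqrt(m)
E_true = 0.01 * rng.standard_normal((m, n))
X = rng.standard_normal((n, K))
Y = (A + E_true) @ X + 1e-4 * rng.standard_normal((m, K))

E_hat = estimate_perturbation(A, X, Y)
print(np.linalg.norm(E_hat - E_true) / np.linalg.norm(E_true))  # relative error on E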

There are different ways to go about this, but certainly the most ambitious is when you don't know much about the inputs x except for some of their properties. Multiple problem settings and tools come into play, including additive noise, multiplicative noise, dictionary learning, blind deconvolution..., depending on what you know about E, x, epsilon, or all of these parameters. If, for instance, you choose the x's within certain families of inputs, the problem becomes one of low-rank recovery.
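
To make that blind setting concrete, here is a rough sketch of a naive alternating scheme under an assumed low-rank structure on the collection of inputs: fix E and fit a low-rank X to the data, then fix X and re-estimate E as above. The function names, the rank and the number of iterations are hypothetical choices; this only illustrates the structure of the problem, not a method with recovery guarantees.

import numpy as np

def low_rank_project(M, r):
    """Keep only the top-r singular components of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def blind_calibrate(A, Y, rank, n_iter=20, lam=1e-3):
    """Alternate between estimating the unknown inputs X and the perturbation E."""
    m, n = A.shape
    E = np.zeros((m, n))
    for _ in range(n_iter):
        B = A + E
        # least-squares estimate of the inputs given the current operator,
        # followed by a projection enforcing the assumed low-rank structure
        X = np.linalg.lstsq(B, Y, rcond=None)[0]
        X = low_rank_project(X, rank)
        # re-estimate the perturbation with the current inputs (ridge as before)
        E = (Y - A @ X) @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(n))
    return E, X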

Why is this an issue? After all, we have been doing calibration for centuries, and least squares has been good to us so far. In today's measurement systems, including the random measurement systems advocated in compressive sensing, we are facing A's that are very rectangular, i.e. the dimension of the scene x is very large compared to the number of measurements we have the patience to acquire. Moreover, with the advent of compressive sensing, A is generally not sparse: your point-and-shoot camera has a very sparse A; the random lens imager, not so much. For diverse reasons, A may not be well known, and the question becomes: how much time and work is needed to get a better estimate of A and of its attendant reduced measurements?

Ideally, we also want to investigate measurement drift, i.e. to pinpoint how A drifts over a certain timescale. For instance, if you consider lucky imaging or any instance of imaging with nature, knowing how A changes over time (slowly compared to all the measurements you have been able to take) gives an idea of the turbulence in the air: information that can be used for other purposes....

