Thursday, September 20, 2012

A single-photon sampling architecture for solid-state imaging

This issue of defining compressive sensing as not this or not that is interesting because it shows compressive sensing to be some sort of direct consequence of well-studied and known subject areas. Yes, dictionary learning is not specific to compressive sensing. Yes, image reconstruction is not specific to compressive sensing. Yes, feature learning (aka signal manifold processing) is not specific to compressive sensing. We all agree on that because we know deep down that the tools developed in those areas are an opportunity to make our job faster. Every once in a while, though, we are reminded that compressive sensing is not just part of the back-end process arising in signal processing or machine learning, but that it can take the front seat when it comes to the actual acquisition of physical signals. Many of the instances listed in the compressive sensing hardware list, however, still rely on off-the-shelf electronics, and few (like this or this instance) look into changing the architecture. Today we have another instance of a changed architecture in the paper that follows. Let us note in the meantime that when it comes to high-energy photons like gamma rays, nobody is messing with the Anger logic yet.

Which brings us to today's paper, introduced by Ewout van den Berg:

Dear Igor,
It's a pleasure to announce our new paper called "A single-photon sampling architecture for solid-state imaging sensors". The paper presents an architecture for silicon photomultiplier sensor arrays designed to determine with high accuracy the time and location of each detected photon. The design exploits the fact that the photon arrival on the sensor is temporally sparse, especially when the photon flux is low and sampling is done over sufficiently short time intervals. At first glance this seems to be an ideal setting for compressed sensing. However, given the binary nature of the signals and the high temporal resolution desired, a perfect fit is found instead in group testing. Our design uses group-testing-based interconnection networks to connect subsets of pixels to time-to-digital converters (TDCs), which record a time stamp to memory whenever an event is detected on their input during a sampling interval. The paper gives detailed constructions of efficient group-testing designs that guarantee fast and unique recovery of signals up to a certain sparsity level. We compare the number of TDCs used in these designs with theoretical upper and lower bounds on the minimum number of TDCs required. Finally, we show the efficacy of the design based on realistic simulations of scintillation events in clinical positron emission tomography.
The paper is available on arXiv at http://arxiv.org/abs/1209.2262
Best regards,
Ewout
Thanks, Ewout! I note that some of the authors have already shown up on my radar screen earlier this summer.
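
To make the group-testing idea in Ewout's description concrete, here is a minimal sketch (in Python/NumPy, with toy sizes and a random wiring pattern, not the optimized constructions from the paper) of how pixels could be multiplexed onto a small number of TDC lines through a binary interconnection matrix: a TDC records a time stamp whenever any of the pixels wired to it fires during a sampling interval.

import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64          # toy sensor size (the paper's example uses a 120x120 array)
n_tdcs = 16            # number of TDC lines, chosen arbitrarily for illustration
wires_per_pixel = 4    # each pixel is routed to this many TDC lines

# Binary interconnection matrix: A[t, p] = True if pixel p is wired to TDC t.
A = np.zeros((n_tdcs, n_pixels), dtype=bool)
for p in range(n_pixels):
    A[rng.choice(n_tdcs, size=wires_per_pixel, replace=False), p] = True

# A temporally sparse sampling interval: only two pixels fire.
firing = np.zeros(n_pixels, dtype=bool)
firing[rng.choice(n_pixels, size=2, replace=False)] = True

# Each TDC records a time stamp if any pixel wired to it fired (a logical OR).
tdc_fired = A[:, firing].any(axis=1)
print("TDC lines that recorded a time stamp:", np.flatnonzero(tdc_fired))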


Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as LiDAR and positron emission tomography.
The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatio-temporal resolution, causing many contemporary designs to severely underutilize the technology's full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs, thereby also reducing both cost and power consumption. The design relies on a multiplexing technique based on binary interconnection matrices. We provide optimized instances of these matrices for various sensor parameters and give explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals.
To illustrate the strength of the proposed architecture, we note that a 120x120 photodiode sensor on a 30 µm x 30 µm pitch, with a 40 ps time resolution and an estimated fill factor of approximately 70%, can be digitized using only 161 TDCs. The design guarantees registration and unique recovery of up to 4 simultaneous photon arrivals using a fast decoding algorithm. In a series of realistic simulations of scintillation events in clinical positron emission tomography, the design was able to recover the spatio-temporal location of 98.6% of all photons that caused pixel firings.
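
As an illustration of how unique recovery from the OR-ed TDC outputs could work, here is a sketch of the standard "naive" group-testing decoder: a pixel is declared to have fired if every TDC line it is wired to recorded an event in that sampling interval. When the interconnection matrix is d-disjunct, this recovers up to d simultaneous arrivals exactly; the paper's optimized matrices provide that guarantee for d = 4 with 161 TDCs on a 120x120 array, whereas the random matrix below is only a toy stand-in with no such guarantee.

import numpy as np

def decode(A, tdc_fired):
    # Naive group-testing decoder: keep every pixel whose wired TDC lines all fired.
    consistent = A <= tdc_fired[:, None]   # wherever A[t, p] is True, require tdc_fired[t]
    return np.flatnonzero(consistent.all(axis=0))

rng = np.random.default_rng(1)
n_tdcs, n_pixels, wires_per_pixel = 24, 256, 6   # toy sizes, not the paper's design
A = np.zeros((n_tdcs, n_pixels), dtype=bool)
for p in range(n_pixels):
    A[rng.choice(n_tdcs, size=wires_per_pixel, replace=False), p] = True

true_firing = np.sort(rng.choice(n_pixels, size=2, replace=False))  # a sparse interval
tdc_fired = A[:, true_firing].any(axis=1)        # OR of the firing pixels' columns

print("fired pixels:", true_firing)
print("decoded     :", decode(A, tdc_fired))     # matches when A is disjunct enough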



Image Credit: NASA/JPL/Space Science Institute
W00075464.jpg was taken on September 17, 2012 and received on Earth September 17, 2012. The camera was pointing toward SATURN at approximately 1,494,101 miles (2,404,522 kilometers) away, and the image was taken using the CB2 and CL2 filters. 
