Thursday, October 31, 2013

Nuit Blanche In Review (October 2013)

Since the last Nuit Blanche In Review (September 2013), we had quite a few entries, and membership grew in the Google+ Community (670), the CompressiveSensing subreddit (156), the LinkedIn Compressive Sensing group (2554) and the Advanced Matrix Factorization group (769). The Paris-based Machine Learning group now has 291 members and the attendant LinkedIn Paris Machine Learning group has 74 members.

This month, several implementations were made available by their respective authors:

Focused entries:

Sunday Morning Insights:
Saturday Morning Videos
Startups
CfP
Other

Image Credit: NASA/JPL-Caltech
This image was taken by Front Hazcam: Right B (FHAZ_RIGHT_B) onboard NASA's Mars rover Curiosity on Sol 439 (2013-10-31 06:46:58 UTC).

Join the CompressiveSensing subreddit or the Google+ Community and post there!
Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

Wednesday, October 30, 2013

Simultaneous reconstruction of absorption and scattering using joint sparsity in diffuse optical tomography

Jong Chul Ye just sent me the following

Hi Igor,

Hope you are doing well. I would like to bring your attention to our new paper on compressed sensing application to diffuse optical tomography (DOT).

This paper is an extension of our previous work on Compressive DOT (http://dx.doi.org/10.1109/TMI.2011.2125983) for recovery of absorption parameter variations. However, the main breakthrough in our new paper is that we are the first to demonstrate that the joint sparsity principle is so general that it can even be used for the simultaneous reconstruction of both absorption and scattering parameters without updating the Green's function. Contrary to the common belief that cross-talk artifacts between absorption and scattering images are unavoidable in the case of CW modulation, our joint sparse recovery approach significantly reduces such cross-talk artifacts. Moreover, our joint sparse recovery approach is faster than the conventional linearized approach even though our approach is exact and does not use any Born approximation.

While this paper is for DOT problems, the same algorithmic approach can be used for electromagnetic inverse scattering problems from soft and hard obstacles. Enjoy!

-Jong

==========================================================

Professor
Dept. of Bio and Brain Engineering
KAIST
373-1 Guseong-Dong, Yuseong-Gu
Daejon 305-701, Korea
==========================================================
Thanks Jong

Some optical properties of a highly scattering medium, such as tissue, can be reconstructed non-invasively by diffuse optical tomography (DOT). Since the inverse problem of DOT is severely ill-posed and nonlinear, iterative methods that update Green’s function have been widely used to recover accurate optical parameters. However, recent research has shown that the joint sparse recovery principle can provide an important clue in achieving reconstructions without an iterative update of Green’s function. One of the main limitations of the previous work is that it can only be applied to absorption parameter reconstruction. In this paper, we extended this theory to estimate the absorption and scattering parameters simultaneously when the background optical properties are known. The main idea for such an extension is that a joint sparse recovery step gives us unknown fluence on the estimated support set, which eliminates the nonlinearity in an integral equation for the simultaneous estimation of the optical parameters. Our numerical results show that the proposed algorithm reduces the cross-talk artifacts between the parameters and provides improved reconstruction results compared to existing methods.
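Readers who want a feel for the joint sparsity principle invoked above can look at the multiple measurement vector (MMV) setting, where several unknown vectors share one sparse support. Below is a minimal sketch of one classic MMV solver, simultaneous OMP, on a made-up problem; it only illustrates the principle and is not the authors' algorithm or code:

```python
import numpy as np

rng = np.random.default_rng(0)

# MMV model: Y = A X, where the columns of X share one sparse support.
n, m, L, k = 20, 40, 5, 3            # measurements, atoms, snapshots, sparsity
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)       # unit-norm columns

true_support = sorted(rng.choice(m, size=k, replace=False).tolist())
X = np.zeros((m, L))
X[true_support] = rng.standard_normal((k, L))
Y = A @ X

def somp(A, Y, k):
    """Simultaneous OMP: greedily pick the atom most correlated with the
    residual across *all* snapshots at once, then refit jointly."""
    support, R = [], Y.copy()
    for _ in range(k):
        score = np.linalg.norm(A.T @ R, axis=1)   # joint correlation score
        score[support] = 0.0                      # never pick an atom twice
        support.append(int(np.argmax(score)))
        coef, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ coef
    return sorted(support)

print("true support     :", true_support)
print("estimated support:", somp(A, Y, k))
```

Pooling the correlation scores across snapshots is what makes the joint approach more robust than recovering each snapshot independently.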

Tuesday, October 29, 2013

Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard

Abhinav Jha just sent me the following intriguing email

Hello Igor,
My name is Abhinav Jha, Research Fellow, Department of Radiology, Johns Hopkins University. I have visited your blog on many occasions. It is a great source of information and I would like to thank you for all your efforts.
I write this mail to let you know about one of my publications on task-based evaluation of image analysis algorithms in the absence of a gold standard. This work is not directly related to compressive sensing (CS), but I like to view CS as task-specific imaging (along similar lines as Dr. Mark Neifeld's approach). In this context, it becomes essential to evaluate CS systems and algorithms based on the task for which the image has been acquired, and that is where my work comes in. What I suggest is a paradigm to evaluate image analysis algorithms based on the task for which the image has been acquired. Also, often in these systems, we do not know the gold standard, and therefore the evaluation task becomes complicated. The technique that my paper suggests can work in the absence of a gold standard. Here is a general summary of the paper:
With the development of task-specific imaging systems and algorithms, there is a requirement to evaluate these systems/algorithms based on the task for which they were designed. The standard evaluation methodologies are often incapable of performing this task-based evaluation. In this paper, we suggest a framework to perform the task-based evaluation of image analysis algorithms, where we specifically target the problem of evaluating segmentation algorithms. Evaluation of these image-analysis algorithms may require a gold standard to compare against, but that is often unavailable. The evaluation technique that we suggest takes this issue into account.
Please let me know if you need any other documents.
Thanks,
Abhinav
Thanks Abhinav

In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both the ensemble mean square error and precision. We also propose consistency checks for this evaluation technique.
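The central point of the abstract, that region-overlap scores and task performance can disagree, is easy to illustrate with a toy 1-D example. All numbers below (ADC values, lesion location, segmentations) are hypothetical and chosen only to make the point; this is not the paper's method:

```python
import numpy as np

# Toy 1-D "image": lesion voxels have ADC 1.0e-3, background 2.0e-3
# (hypothetical values; real ADC maps come from fitting DW-MRI data).
lesion = np.arange(40, 60)                 # true lesion voxels
adc = np.full(100, 2.0e-3)
adc[lesion] = 1.0e-3
true_adc = adc[lesion].mean()

def dice(a, b):
    """Region-overlap (Dice) score between two voxel sets."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

seg1 = np.arange(43, 57)   # algorithm 1: slightly eroded, stays inside the lesion
seg2 = np.arange(44, 64)   # algorithm 2: shifted, leaks into the background

for name, seg in [("eroded", seg1), ("shifted", seg2)]:
    est = adc[seg].mean()
    print("%-7s Dice = %.2f   ADC error = %.1e" %
          (name, dice(lesion, seg), abs(est - true_adc)))
```

The two segmentations have nearly identical Dice scores (about 0.82 vs 0.80), yet only the shifted one biases the ADC estimate, which is precisely why a task-based figure of merit can rank algorithms differently than an overlap measure.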


Monday, October 28, 2013

Randomized Items: Uncompressing CS, Les Houches school on "Statistical physics, Optimization, Inference and Message-Passing algorithms", Upcoming workshop on "Statistical Issues in Compressive Sensing"

The Statistical physics, Optimization, Inference and Message-Passing algorithms school at Les Houches ended on October 11, 2013; here are the presentations of some of the speakers as well as the titles of the posters. I am told that notes were taken and will be made available in the future. Stay tuned, in the meantime:
List of lecturers:
First week:

Second week:
• Marc Mezard (ENS Paris): Cavity method: message passing from a physics perspective.
• Andrea Montanari (Stanford Univ.): idem...
• Rudiger Urbanke (EPFL): Error correcting codes and spatial coupling
• Amin Coja-Oghlan (Frankfurt Univ.): Phase transitions in random discrete structures: a rigorous approach

List of posters:
Tuesday:
1. Marco Mondelli: Scaling Exponent of List Decoders with Applications to Polar Codes
2. Andrei Giurgiu: A proof of the exactness of the replica symmetric formula for LDPC codes above the MAP threshold.
3. El-Khatib Rafah: Displacement Convexity - A Framework for the Analysis of Spatially Coupled Codes
4. Santhosh Kumar Vanaparthy: Threshold Saturation for Spatially-Coupled LDPC and LDGM Codes on BMS Channels
5. Hamed Hassani: New lower bounds for CSPs using spatial coupling
6. Jean Barbier: Robust error correction for real-valued signals via message-passing decoding and spatial coupling
7. Francesco Caltagirone: Dynamics and termination cost of spatially coupled mean-field models
8. Cohen Or: Low temperature expansion of steady-state measures of nonequilibrium Markov chains via graphical representation
9. Aurelien Decelle: Belief Propagation inspired Monte Carlo
10. Yuji Sakai: Markov Chain Monte Carlo Method with Skew Detailed Balance Condition
11. Satoshi Takabe: Typical behavior of the linear programming method for combinatorial optimization problems
12. Masahiko Ueda: Calculation of 1RSB transition temperature of spin glass models on regular random graphs under replica symmetric ansatz
13. Wang Chuang: Tensor renormalization group method for spin glass
14. Gino Del Ferraro: Mean field spin glasses treated with PDE techniques
15. Flaviano Morone: Large deviations of correlation functions in random magnets
16. Andre Manoel: Statistical mechanics for the analysis and design of information hiding algorithms
17. Ayaka Sakata: Time evolution of autocorrelation function in dynamical replica theory
18. Alexander Mozeika: Projected generalized free energies for non-equilibrium states
19. Stefan Falkner: A Renormalization Group Approach to Quantum Walks
20. Moffatt, Iain: The Potts-Tutte connection in an external field
Wednesday:
21. Jonas Dittmann: TV regularized tomographic reconstruction from few (X-Ray) projections based on CS
22. Yingying Xu: Statistical Mechanics Approach to 1-Bit Compressed Sensing.
23. Christophe Schulke: Blind Calibration in Compressed Sensing using Message Passing Algorithms
24. Wang Chuang: Partition function expansion for generalized belief propagation.
25. Harrison Elizabeth: Probabilistic Control in Smart-Grids.
26. Maksym Girnyk: A Statistical Mechanical Approach to MIMO Channels
27. Ulugbek Kamilov: Wavelet-Domain Approximate Message Passing for Bayesian Image Deconvolution
28. Rémi Lemoy: Variable-focused local search on Random 3-SAT
29. Alberto Guggiola: Mapping between sequence and response space in the olfactory system in Drosophila
30. Marcus Benna: Long-term memory with bounded synaptic weights
31. Or Zuk: TBA
32. Jack Raymond: Utilizing the Hessian to improve variational inference
33. Andrey Lokhov: Dynamic message-passing equations and application to inference of epidemic origin
34. Aurelien Decelle: Decimation based method to improve inference using the Pseudo-Likelihood method
35. Alejandro Lage Castellanos: TBA
36. Munik Shrestha: Spread of reinforcement driven trends in networks with message-passing approach
37. Dani Martí: Scalability properties of multimodular networks with dynamic gating
38. Abigail Zoe Jacobs: Latent space models for network structure: an application to ecology
39. Shunsuke Watanabe: The analysis of degree-correlated networks based on cavity method
40. Caterina De Bacco: Shortest non-overlapping routes on random graphs.

The DFG/SNF Research Group FOR916 and the DFG Research Training Group 1023 are organizing a workshop on Statistical Issues in Compressive Sensing at the University of Göttingen, Germany, November 11-13, 2013. The list of abstracts is below:

Image Credit: NASA/JPL/Space Science Institute
N00217830.jpg was taken on October 21, 2013 and received on Earth October 22, 2013. The camera was pointing toward SATURN-DRING at approximately 1,463,952 miles (2,356,003 kilometers) away, and the image was taken using the CL1 and CL2 filters.


WCSCM Special Session: “Advances in data acquisition and processing techniques for health monitoring of civil infrastructure"

Petros Boufounos just sent me the following:
Igor,

I hope all is well. This special session should be very interesting to compressive sensing researchers. Can you please post the announcement on your blog? A link to the special session on the conference webpage is here: http://www.ma3.upc.edu/6wcscm/minisymposia.html

Thanks!

Petros
Thanks Petros
6th World Conference on Structural Control and Monitoring July 15-17, 2014, Barcelona, Spain (http://www.6wcscm.es/)
Special Session 08: “Advances in data acquisition and processing techniques for health monitoring of civil infrastructure” organized by
Agathoklis Giaralis (City University London, UK)
Satish Nagarajaiah (Rice University, USA)
Scope
Integrated solutions for the condition assessment and for the health monitoring of contemporary civil infrastructure involve the use of large networks of data sensing and processing units commonly connected wirelessly. In this context, the effectiveness of structural health monitoring (SHM) applications relies heavily on the efficiency of such wireless sensor networks (WSNs) to acquire, process, and transmit large quantities of different types of data measurements at a minimum monetary and power consumption cost: an issue of concern for all applications involving the use of large unattended WSNs.
Aim
This special session aims to bring together researchers from diverse fields in engineering and applied mathematics to address the issue of reducing the various costs of WSNs for SHM applications by considering recent advances in data acquisition and processing techniques.
Topics
An indicative, non-exhaustive list of pertinent topics includes: representation, modelling, acquisition, and analysis of sparse signals in time and/or in space for SHM applications; non-uniform, random, and sub-Nyquist analog data sampling techniques; modelling and processing of imprecise measurement data; efficient data compression and reconstruction algorithms in support of wireless data transmission; condition assessment and damage detection algorithms undertaken within WSNs in a decentralised manner; data fusion techniques for processing and management of SHM measurements within WSNs; robust data processing algorithms for SHM applications assuming non-conventional sampling techniques and data compression algorithms.
Call
Technical papers discussing the algorithmic and theoretical aspects of techniques related to the above indicative topics and their tailoring to meet the practical needs of SHM of different civil structures (buildings, bridges, dams, tunnels, etc.) are mostly welcome. Further, contributions on technological sensing solutions in support of recent algorithmic advancements are also welcome. Finally, opinion papers targeting the issue of exploiting novel theoretical/mathematical advancements and technological tools from diverse fields to facilitate the timely and challenging problem of using WSNs for SHM of civil infrastructure are invited.

Sunday, October 27, 2013

Sunday Morning Insight: Structured Sparsity and Structural DNA Folding Information

In Application of compressed sensing to genome wide association studies and genomic selection, one realizes that connecting GWAS and phenotypes is difficult. One of the underlying reasons may lie in the loss of structural information that comes with linear DNA sequencing (such as nanopore sequencing). To comprehend this structural information better, I gathered several figures from the interwebs in the series below. As can be seen, the same DNA can be folded differently and yield different outcomes, because non-DNA elements such as histones have a direct impact on allowing certain parts of the DNA to be active (or not). The folding itself may also be the reason why some diseases are connected to a large number of genes. Given all this, one wonders if a structured sparsity approach to GWAS studies might be more fruitful, in that it could potentially highlight elements of the DNA that are closer to each other and therefore point to folding information. At the very least, there ought to be a trace of the cyclicality induced by the 147 base pairs wrapped around each nucleosome.
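To make the structured sparsity suggestion slightly more concrete, here is a hedged sketch of a group-lasso penalty, which selects blocks of nearby loci together instead of isolated ones, fitted by proximal gradient descent. The design matrix, group structure, and all parameter values are invented for illustration; this is not an actual GWAS pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy design: p loci in contiguous groups of size g; the phenotype
# depends on one whole group (e.g. loci brought together by folding).
n, p, g = 60, 100, 10
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[30:40] = 1.0                       # one active group (index 3)
y = X @ beta + 0.01 * rng.standard_normal(n)

groups = [np.arange(i, i + g) for i in range(0, p, g)]

def group_soft(v, t):
    """Block soft-thresholding: shrink the whole group toward zero."""
    nrm = np.linalg.norm(v)
    return np.zeros_like(v) if nrm <= t else (1.0 - t / nrm) * v

def group_lasso(X, y, groups, lam, iters=500):
    n = len(y)
    step = n / np.linalg.norm(X, 2) ** 2    # 1 / Lipschitz const. of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        b = b - step * (X.T @ (X @ b - y)) / n   # gradient step on 0.5/n ||Xb-y||^2
        for grp in groups:                        # proximal step, group by group
            b[grp] = group_soft(b[grp], step * lam)
    return b

b_hat = group_lasso(X, y, groups, lam=0.1)
norms = [float(np.linalg.norm(b_hat[grp])) for grp in groups]
print("group norms:", np.round(norms, 2))
```

The block soft-thresholding either zeroes out a whole group or keeps it, so a set of loci that act together (as folding would suggest) is detected as a unit rather than as scattered individual hits.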

For some related information, we had a small discussion around Mohammed AlQuraishi's blog entry, CASP10 and the Future of Structure in Biology, a while back.

Different scales in the DNA all the way to the chromosome [2]

Different scales in the DNA all the way to the chromosome [4]

More detailed scales in the DNA all the way to the chromosome [2]

Detailed Sketch of the Histone-DNA complex[1]

Nucleosomes can slide along DNA. When nucleosomes are spaced closely together (top), transcription factors cannot bind and gene expression is turned off. When the nucleosomes are spaced far apart (bottom), the DNA is exposed. Transcription factors can bind, allowing gene expression to occur. Modifications to the histones and DNA affect nucleosome spacing. From [3]

Same figure as before but with different scales [5]

In transformed cells, this scenario is disrupted by the loss of the 'active' histone-marks on tumour-suppressor gene promoters, and by the loss of repressive marks such as the trimethylation of K20 of H4 or trimethylation of K27 of histone H3 at subtelomeric DNA and other DNA repeats. This leads to a more 'relaxed' chromatin conformation in these regions. [6]

Saturday, October 26, 2013

Sunday Morning Insight: Watching P vs NP

Last year, I somehow argued that it would be difficult for Quantum Computing to compete with the Steamrollers even if it meant they could solve NP problems (which we don't really know if they can [6]). This time, it is a little different. See, if you recall, sharp phase transitions are now part of our landscape. By landscape, I do not only mean their discovery [3], but also their use, starting with [1] in 2010 and as recently as last week [4]. We collectively yearn for those limits to move (see The Wall in Sunday Morning Insight: Game of Thrones and the History of Compressive Sensing [5]).

And like we said last week [2]

or like in the case of Manjhi, move a mountain.

Go read Tim Gowers' What I did in my summer holidays on TiddlySpace and his proposal for Polymath9, as well as Terry Tao's G+ post on Gilles Pisier's paper on Grothendieck's Theorem, past and present [7] (a more recent version of that paper is here), set up on the SelectedPapers network.

It might happen right in front of our eyes.

[7] Grothendieck's Theorem, past and present by Gilles Pisier
Probably the most famous of Grothendieck's contributions to Banach space theory is the result that he himself described as "the fundamental theorem in the metric theory of tensor products". That is now commonly referred to as "Grothendieck's theorem" (GT in short), or sometimes as "Grothendieck's inequality". This had a major impact first in Banach space theory (roughly after 1968), then, later on, in $C^*$-algebra theory (roughly after 1978). More recently, in this millennium, a new version of GT has been successfully developed in the framework of "operator spaces" or non-commutative Banach spaces. In addition, GT independently surfaced in several quite unrelated fields: in connection with Bell's inequality in quantum mechanics, in graph theory where the Grothendieck constant of a graph has been introduced, and in computer science where the Grothendieck inequality is invoked to replace certain NP hard problems by others that can be treated by "semidefinite programming" and hence solved in polynomial time. In this expository paper, we present a review of all these topics, starting from the original GT. We concentrate on the more recent developments and merely outline those of the first Banach space period since detailed accounts of that are already available, for instance the author's 1986 CBMS notes.
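For readers unfamiliar with the statement, the inequality in question can be written as follows (real case; $K_G$ denotes the Grothendieck constant):

```latex
% Grothendieck's inequality (real case): for every real matrix (a_{ij})
% and every Hilbert space H,
\[
\sup_{\substack{x_i, y_j \in H \\ \|x_i\| = \|y_j\| = 1}}
  \Big| \sum_{i,j} a_{ij} \, \langle x_i, y_j \rangle \Big|
\;\le\;
K_G \;
\sup_{s_i, t_j \in \{-1, +1\}}
  \Big| \sum_{i,j} a_{ij} \, s_i t_j \Big|
\]
```

The right-hand side is the NP-hard bilinear optimization over signs; the left-hand side can be computed by semidefinite programming, which is exactly the relaxation invoked in the computer science applications mentioned in the abstract.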

Saturday Morning Videos: This is why we fly; a phase separator in different gravity levels; concurrent vertical two phase flow

H/t Christian and Cable

Friday, October 25, 2013

Startup News: GraphLab, InView Corp., Centice

Advice for startups: Please make it easy for people who like what you do. Set up an RSS feed.

Sparse FFT implementation: FFAST, Fast Fourier Aliasing-based Sparse Transform

ClickThroughs on an implementation mentioned here on Nuit Blanche

Think of Nuit Blanche as a crossroads. Oftentimes, I request access to implementations so that you can become a rockstar (Nobody Cares About You and Your Algorithm). Back in May, I asked Kannan Ramchandran and Sameer Pawar whether an implementation of their paper would be made available.

"Kannan t[old] me ... that an implementation should be out by the end of the summer. woohoo!"
Sameer and Kannan followed through on that e-mail this week with:
Dear Igor,
It's been a while since we communicated. At our end, we have made some progress towards making the implementation of FFAST accessible to people. Although we promised a summer release of the code, it took us more time to get the C++ implementation ready than we estimated. Releasing the FFAST code to the general public still needs some more work in terms of a web/user interface. But to get started, we thought we could at least release it to you. So for now, we have created a guest login account on one of our servers at Berkeley. You can login remotely as follows:....
Unfortunately, I don't have much time to kick the tires on this implementation, but Kannan let me know that if you want access to the FFAST implementation as is, you can get it by contacting Sameer directly at: spawar@eecs.berkeley.edu
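In the meantime, the idea behind FFAST is easy to illustrate: uniformly subsampling a signal in time folds (aliases) its DFT, so a sparse spectrum lands in a short DFT with few collisions, and several co-prime subsampling patterns let the algorithm identify and peel off the nonzero coefficients. A toy numpy check of the aliasing identity (my own illustration, not the FFAST code):

```python
import numpy as np

rng = np.random.default_rng(2)

# A length-N signal whose DFT is k-sparse.
N, k = 20, 3
Xf = np.zeros(N, dtype=complex)
Xf[rng.choice(N, size=k, replace=False)] = (
    rng.standard_normal(k) + 1j * rng.standard_normal(k)
)
x = np.fft.ifft(Xf)

# Subsampling by d folds the N DFT bins onto M = N // d bins:
#   DFT(x[::d])[j] = (1/d) * sum_m Xf[j + m*M]
d = 4
M = N // d
Y = np.fft.fft(x[::d])                     # DFT of the subsampled signal
folded = Xf.reshape(d, M).sum(axis=0) / d  # the folded sparse spectrum
print(np.allclose(Y, folded))              # the aliasing identity holds
```

With k = 3 nonzeros spread over M = 5 bins, most bins of Y receive at most one coefficient; FFAST combines a few such subsamplings with co-prime short lengths so that singleton bins can be identified and subtracted, which is the source of its low sample and computational cost.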

Related:

Thursday, October 24, 2013

Robust Sparse Signal Recovery for Compressed Sensing with Sampling and Representation Uncertainties - implementation -

Yipeng Liu just sent me the following

Dear Igor,
....We propose one method that is more robust to multiplicative uncertainty when recovering sparse signals from compressive measurements. I guess it may be interesting to some of your Nuit Blanche blog readers. It is available online at: ftp://ftp.esat.kuleuven.ac.be/pub/SISTA//yliu/rl0.pdf
Thank you very much for the daily posting! Your blog really helps me a lot!
Best Regards,
2013-10-23

Yipeng Liu (刘翼鹏), PhD, Research Fellow
ESAT-STADIUS, Department of Electrical Engineering, University of Leuven
Email: yipeng.liu@esat.kuleuven.be, dr.yipengliu@gmail.com
Kasteelpark Arenberg 10, box 2446, 3001 Heverlee, Belgium

Here is the report:

Compressed sensing (CS) shows that a signal having a sparse or compressible representation can be recovered from a small set of linear measurements. In classical CS theory, the sampling matrix and dictionary are assumed to be known exactly in advance. However, uncertainties exist due to sampling distortion, finite grids of the parameter space of the dictionary, etc. In this paper, we take a generalized sparse signal model, which simultaneously considers the sampling and dictionary uncertainties. Based on the new signal model, a new optimization model for robust sparse signal recovery is proposed. This optimization model can be deduced via stochastic robust approximation analysis. Both a convex relaxation and a greedy algorithm are used to solve the optimization problem. For the convex relaxation method, a sufficient condition for recovery and the uniqueness of the solution are given too; the greedy sparse algorithm is realized by the introduction of a pre-processing of the sensing matrix and the measurements. In numerical experiments, results based on both simulated data and real-life ECG data show that the proposed method has a better performance than the current methods.
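To see what a stochastic robust approximation can look like in the simplest setting: if the sensing-matrix error ΔA has i.i.d. zero-mean entries, then E‖(A + ΔA)x − y‖² = ‖Ax − y‖² + ρ‖x‖², with ρ set by the uncertainty level, so robust recovery becomes an ℓ1-plus-ridge program. Below is a hedged sketch solved with ISTA; the problem sizes and parameter values are invented, and this is a generic illustration rather than the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sparse ground truth and a sensing matrix known only up to a
# multiplicative (relative) error of level sigma.
n, m, k = 50, 100, 4
A = rng.standard_normal((n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = (
    rng.choice([-1.0, 1.0], k) * (1.0 + rng.random(k))   # magnitudes in [1, 2]
)

sigma = 0.05
A_pert = A + sigma * rng.standard_normal((n, m)) / np.sqrt(n)
y = A_pert @ x_true            # data generated with the *perturbed* matrix

# E||(A + dA)x - y||^2 = ||Ax - y||^2 + rho ||x||^2 for i.i.d. dA,
# so robust recovery = l1 + ridge, solved here with ISTA.
lam, rho = 0.05, sigma ** 2

def ista(A, y, lam, rho, iters=2000):
    L = np.linalg.norm(A, 2) ** 2 + rho   # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x - (A.T @ (A @ x - y) + rho * x) / L              # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

x_hat = ista(A, y, lam, rho)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print("relative recovery error: %.3f" % rel_err)
```

Setting ρ = 0 recovers plain lasso; the ridge term is exactly what averaging over the matrix uncertainty contributes to the objective.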

The implementation is here.
"In Folder A, the codes for Fig. 3 are given. Figs. 4 and 5 can be obtained by changing just a few parameters. In Folder B, all the data and figures can be generated by the codes."
Thank you Yipeng


Wednesday, October 23, 2013

1-Bit Matrix Completion - implementation -

Here is a newer version of a paper with an attendant implementation: 1-Bit Matrix Completion by Mark Davenport, Yaniv Plan, Ewout van den Berg, Mary Wootters
In this paper we develop a theory of matrix completion for the extreme case of noisy 1-bit observations. Instead of observing a subset of the real-valued entries of a matrix M, we obtain a small number of binary (1-bit) measurements generated according to a probability distribution determined by the real-valued entries of M. The central question we ask is whether or not it is possible to obtain an accurate estimate of M from this data. In general this would seem impossible, but we show that the maximum likelihood estimate under a suitable constraint returns an accurate estimate of M when ||M||_{\infty} <= \alpha, and rank(M) <= r. If the log-likelihood is a concave function (e.g., the logistic or probit observation models), then we can obtain this maximum likelihood estimate by optimizing a convex program. In addition, we also show that if instead of recovering M we simply wish to obtain an estimate of the distribution generating the 1-bit measurements, then we can eliminate the requirement that ||M||_{\infty} <= \alpha. For both cases, we provide lower bounds showing that these estimates are near-optimal. We conclude with a suite of experiments that both verify the implications of our theorems as well as illustrate some of the practical applications of 1-bit matrix completion. In particular, we compare our program to standard matrix completion methods on movie rating data in which users submit ratings from 1 to 5. In order to use our program, we quantize this data to a single bit, but we allow the standard matrix completion program to have access to the original ratings (from 1 to 5). Surprisingly, the approach based on binary data performs significantly better.
Of note in the conclusion:
However, matrix completion from noiseless binary measurements is extremely ill-posed, even if one collects a binary measurement from all of the matrix entries. Fortunately, when there are some stochastic variations (noise) in the problem, matrix reconstruction becomes well-posed.
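For intuition about the observation model, here is a heavily hedged toy: the paper's estimator is a convex program with nuclear-norm and infinity-norm constraints, whereas the sketch below maximizes the same logistic log-likelihood over a non-convex rank-r factorization with spectral initialization, which is only a loose stand-in:

```python
import numpy as np

rng = np.random.default_rng(4)

# Low-rank ground truth, observed only through 1-bit measurements:
# Y_ij = 1 with probability f(M_ij), where f is the logistic function.
d1, d2, r = 40, 40, 2
M = rng.standard_normal((d1, r)) @ rng.standard_normal((r, d2)) / np.sqrt(r)
f = lambda t: 1.0 / (1.0 + np.exp(-t))

mask = rng.random((d1, d2)) < 0.8                  # ~80% of entries observed
Y = (rng.random((d1, d2)) < f(M)).astype(float)    # the 1-bit data

# Spectral initialization from the centered binary matrix, then gradient
# ascent on the logistic log-likelihood over the factorization A @ B.T.
U, s, Vt = np.linalg.svd(mask * (2 * Y - 1), full_matrices=False)
A = U[:, :r] * np.sqrt(s[:r])
B = Vt[:r].T * np.sqrt(s[:r])
step = 0.02
for _ in range(500):
    G = mask * (Y - f(A @ B.T))      # dLogLik/dX, restricted to observed entries
    A, B = A + step * G @ B, B + step * G.T @ A

agree = np.mean(np.sign(A @ B.T) == np.sign(M))
print("sign agreement with M: %.2f" % agree)
```

Even from single-bit observations, the fitted low-rank matrix recovers the sign pattern of M on a large fraction of entries; recovering the scale of M is what relies on the stochastic flips, in line with the conclusion quoted above.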


Tuesday, October 22, 2013

CSJob: Postdoc, imaging techniques for earthquake science and geophysics, Harvard

Brendan Meade just sent me the following:
Igor-
Brendan
Professor of Earth & Planetary Sciences
Harvard University
Cambridge, MA 02138

The Department of Earth & Planetary Sciences at Harvard University invites applications from prospective postdoctoral scholars for positions in the application of contemporary imaging techniques to problems in earthquake science and geophysics.  We seek novel applications of sparse recovery and machine learning algorithms to the analysis of geodetic data networks and imaging with physics based kernels.  The duration of the fellowship is one year and renewable for up to two additional years with satisfactory progress and continued availability of funding. Applicants should send (1) a statement of experience and interests, (2) curriculum vitae, and (3) the names and addresses of three references electronically to Professor Brendan Meade (c/o Bridget Mastandrea, bmastandrea@fas.harvard.edu).  Review of applications will begin on December 19th, 2013. Harvard University is an affirmative action/equal opportunity employer and applications from women and minorities are encouraged.
Thanks Brendan !

Monday, October 21, 2013

The Red Wedding in Science and Technology

At meetups and other places, I am always surprised that the idea of science and technology being at the center of epic fights of ideas is not more widely known. I think it probably has to do with the lack of scientific background of most science reporters. Then again, not everybody is really that interested in hearing about those battles, so it might just be a bias. In the end, most people are customers of technologies and rarely care about how they got in front of them. Yet it is a story I should have described in Game of Thrones and the History of Compressive Sensing.

A year ago, I wrote about the steamrollers (see Predicting the Future: The Steamrollers), i.e. a set of "laws" that we are all following without realizing it. The resurgence of neural networks in recent years, or simply the ability to do complex Bayesian computations, is just an outgrowth of this phenomenon. On the hardware side of things, I commented earlier (see Do Not Mess with CMOS) that it is always risky to bet the farm on a technology that competes with one of these steamrollers, i.e. CMOS. And yesterday, I noted Vladimir's feed showing off a video of the IMEC hyperspectral camera.

Outdoor demonstration of IMEC's hyperspectral imaging camera from IMEC imaging on Vimeo.

This technology is said to enable new possibilities for non-scanning, real-time acquisition of hyperspectral image data-cubes at video rates.
We've seen a similar set-up here on Nuit Blanche, namely a video summary of the work on coded apertures (Video Compressive Sensing by Larry Carin, compressive hyperspectral camera and Compressive video). What is the difference between these two approaches? They both use CMOS technology! Well, not exactly in the same manner: one fully uses CMOS with no care for signal compression and expects economies of scale to build very large FPAs, while the other restricts itself to a certain CMOS size and expects an intelligent approach to get more information.

In the IMEC camera, the signal is directly acquired and stored in memory, while in the second approach, we rely on the coded aperture and hope to reconstruct the hyperspectral cubes faster than the blink of an eye. In effect, the compressive approach is contingent on having faster reconstruction algorithms; in simpler terms, it embodies the tension between dumb economies of scale and an intelligent approach to dealing with data.
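To make the trade-off concrete, here is a minimal toy sketch (not the pipeline of any of the cameras above; all sizes and parameters are made up for illustration): a sparse spectral signal is sensed through far fewer random coded projections than there are bands, then recovered with a basic l1 solver (iterative soft thresholding). The point is that the "intelligent" approach trades extra silicon for reconstruction compute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: a hyperspectral pixel with n bands, only k of them active (sparse).
n, m, k = 256, 80, 5
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)

# Coded-aperture-style sensing: m << n random projections instead of n direct reads.
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

def ista(Phi, y, lam=0.01, iters=500):
    """Iterative soft thresholding: a basic solver for the l1-regularized problem."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    xhat = np.zeros(Phi.shape[1])
    for _ in range(iters):
        grad = Phi.T @ (Phi @ xhat - y)      # gradient of the data-fit term
        z = xhat - grad / L                  # gradient step
        xhat = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return xhat

xhat = ista(Phi, y)
print(np.linalg.norm(xhat - x) / np.linalg.norm(x))  # small relative error
```

A direct-acquisition camera would simply read all n values; the compressive one takes m measurements and pays for it in solver time, which is why solver speed decides the race described below.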

In the end, the race for faster solvers is all about winning a war. For spectators who have sometimes invested too much of their thought processes in a particular technology, a win by the dumb CMOS solution might trigger the same reactions as those of the numerous watchers of Game of Thrones's Red Wedding (do not click on the previous link, nor watch the video below, if you haven't seen Episode 9, Season 3).

Which one do you think will win ?

PS: I am not saying that CS cannot be included in the IMEC camera; it surely can. But if the architecture is easy to build, there is no reason to expect that this or similar cameras will get more information through CS, as they will always look to add silicon rather than improve the algorithms.


CSJob: Shell-MIT Postdoctoral Researcher

Piotr Indyk just sent me the following:

Hi Igor,
Hope things are going well with you. I was wondering if it would be possible for you to announce our new postdoc position on your blog ? It did wonders last time:)
Thanks!
Piotr

No problem Piotr! Here it is:
Shell, a global group of energy and petrochemicals companies, and Massachusetts Institute of Technology seek applications for a postdoctoral researcher to pursue energy research at MIT in Cambridge, Massachusetts with MIT faculty on a Shell-sponsored research project. The goal of the project is to examine applications of compressive sensing and machine learning methods to geophysical and petroleum engineering problems. This postdoctoral candidate will focus specifically on applications of these methods to estimating reservoir fluid flow parameters given observations in drilled wells.
Application due date: November 22, 2013.
Eligibility Requirements:
* Ph.D. and excellent academic record
* Strong background in one or more of the following fields:
- Petroleum and reservoir engineering
- Control theory
- Data assimilation
- Statistical inference, probability theory
- Numerical methods
We seek candidates with demonstrated experience and interest in energy research; an excellent academic record; and academic background in the appropriate field(s) specified above. Candidates should have interest in integrating the following focus areas of the current project into their research: machine learning; compressive sensing/sparse approximations; convex optimization. Successful candidates should have received their Ph.D. within three years of the post-doc start date. Postdoctoral Researchers must be in residence in Cambridge and must participate in Shell-MIT Postdoctoral Researcher activities coordinated by the MIT Energy Initiative (MITEI).
Each Shell-MIT Postdoctoral Researcher in Energy will be hired as an MIT employee and will receive an annual stipend of $50,000-$52,000 including benefits. This is a one-year position with opportunity for one-year renewal, contingent on satisfactory performance. Target start date is February 2014.
Applications should be submitted directly to the MIT Energy Initiative. Program description and application information can be found at http://mitei.mit.edu/shell-mit-postdoc