Thursday, January 19, 2012

MIT-FFT: Please think of the marketers, they are people too!


Nuit Blanche has been scooped by TechCrunch! My excuse? I was too busy asking questions at MIA 2012 (more on that later). So let us provide some context: ever since Empirical Evaluation of a Sub-Linear Time Sparse DFT Algorithm by Mark Iwen, Anna Gilbert and Martin Strauss, I had not seen much activity on one of the cornerstone algorithms of the 20th century seen through the eyes of sparsity, a subject of central interest in compressive sensing. After talking to him, Mark eventually put an implementation of the Ann Arbor FFT (AAFFT) on Sourceforge and, according to the site, it got about 10,000 downloads. That is a large success in our community, but it definitely is not a blockbuster outside of it. The idea is to perform an FFT on a signal known to be sparse (irrespective of where it is sparse) at a computational cost lower than the standard FFT's operation count. A week ago, a different algorithm from MIT appeared on the interwebs; it improves on the AAFFT and is the subject of the TechCrunch story (a toy sketch of the bucketing trick behind this family of algorithms follows the abstract below). Without further ado, here is: Nearly Optimal Sparse Fourier Transform by Haitham Hassanieh, Piotr Indyk, Dina Katabi and Eric Price. The abstract reads:
We consider the problem of computing the k-sparse approximation to the discrete Fourier transform of an n-dimensional signal. We show:
* An O(k log n)-time algorithm for the case where the input signal has at most k non-zero Fourier coefficients, and
* An O(k log n log(n/k))-time algorithm for general input signals.
Both algorithms achieve o(n log n) time, and thus improve over the Fast Fourier Transform, for any k = o(n). Further, they are the first known algorithms that satisfy this property. Also, if one assumes that the Fast Fourier Transform is optimal, the algorithm for the exactly k-sparse case is optimal for any k = n^{Ω(1)}. We complement our algorithmic results by showing that any algorithm for computing the sparse Fourier transform of a general signal must use at least Ω(k log(n/k) / log log n) signal samples, even if it is allowed to perform adaptive sampling.
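
For the curious, here is a back-of-the-envelope numpy sketch of the bucketing trick this family of algorithms builds on. To be clear, this is my toy illustration, not the MIT code: the signal length, sparsity and bucket count below are made up, and the real sparse FFTs wrap the trick in random spectral permutations and filters to detect and resolve collisions. The point is simply that subsampling a length-n signal in time aliases its spectrum into B buckets, so a cheap size-B FFT sees each of the k spikes in isolation as long as k << B:

```python
# Toy sketch (not the MIT sFFT) of the aliasing/bucketing idea behind
# sparse Fourier transforms; n, k and B are made-up illustration values.
import numpy as np

n, k, B = 1024, 4, 64             # signal length, sparsity, bucket count
assert n % B == 0

rng = np.random.default_rng(0)
freqs = rng.choice(n, size=k, replace=False)        # spectral support
while len(set(freqs % B)) < k:                      # re-draw on a bucket collision;
    freqs = rng.choice(n, size=k, replace=False)    # the real algorithms use random
                                                    # permutations/filters instead
coeffs = rng.standard_normal(k) + 1j * rng.standard_normal(k)

# Build a signal whose spectrum is exactly k-sparse.
x_hat = np.zeros(n, dtype=complex)
x_hat[freqs] = coeffs
x = np.fft.ifft(x_hat)

# Subsample in time by stride n/B: the size-B DFT of the subsampled signal
# aliases Fourier coefficient f into bucket f mod B, at O(B log B) cost
# instead of O(n log n).
y = x[:: n // B]
y_hat = np.fft.fft(y) * (n / B)   # rescale to match x_hat's normalization

for f, c in zip(freqs, coeffs):
    print(f"freq {int(f):4d} -> bucket {int(f) % B:2d}: "
          f"recovered {y_hat[int(f) % B]:.3f}, true {c:.3f}")
```

Each spike lands alone in its bucket and is recovered exactly; the clever part of the MIT result is doing the remaining bookkeeping (locating the frequencies, handling collisions and noise) within the stated O(k log n) budget.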


Folks, you need a name for this algorithm. I know you did the hard work, but please think of the marketers, they are people too! Since we already have the Fastest Fourier Transform in the West (FFTW), I wonder if FFTE, the Faster Fourier Transform in the East, would work since MIT is on the East Coast... you know, the usual East Coast - West Coast thingy... oh well...

Call me a snob, but what's interesting to me is that, while this new algorithm is certainly a game changer, I am much more interested in this paper that Piotr Indyk also co-wrote, which in my mind is certainly more of a 21st-century problem: Approximate Nearest Neighbor: Towards Removing the Curse of Dimensionality by Sariel Har-Peled, Piotr Indyk and Rajeev Motwani. The abstract reads:

We present two algorithms for the approximate nearest neighbor problem in high dimensional spaces. For data sets of size n living in R^d, the algorithms require space that is only polynomial in n and d, while achieving query times that are sub-linear in n and polynomial in d. We also show applications to other high-dimensional geometric problems, such as the approximate minimum spanning tree.
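
Again purely for illustration, here is a toy locality-sensitive hashing index in numpy. It is not the paper's construction (the hash family and parameters below, random hyperplanes with 16 sign bits, are my own made-up choices), but it conveys where the sub-linear query time comes from: at query time you scan only the points that share the query's hash bucket, not all n points:

```python
# Toy locality-sensitive hashing (LSH) index -- an illustration of the idea
# behind approximate nearest neighbor search, not the paper's construction.
# The sizes and the random-hyperplane hash family are made-up choices.
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
n, d, n_bits = 10_000, 64, 16

data = rng.standard_normal((n, d))
planes = rng.standard_normal((n_bits, d))   # one random hyperplane per hash bit

def lsh_key(v):
    """Signature of v: which side of each hyperplane it falls on."""
    return tuple((planes @ v > 0).astype(int).tolist())

# Indexing: bucket every point by its 16-bit signature.
table = defaultdict(list)
for i, v in enumerate(data):
    table[lsh_key(v)].append(i)

# Query with a slightly perturbed copy of point 0: nearby points agree on
# most hyperplane signs, so with good probability the query lands in the
# same bucket and only that bucket is scanned.
q = data[0] + 0.01 * rng.standard_normal(d)
candidates = table[lsh_key(q)]
best = min(candidates, key=lambda i: np.linalg.norm(data[i] - q), default=None)
print(f"scanned {len(candidates)} of {n} points; best candidate: {best}")
# Real schemes use several independent hash tables to boost the hit probability.
```

The single table above misses the true neighbor whenever one of the 16 sign bits flips, which is why practical schemes query several independent tables and union the candidate lists.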

 Credit Photo: NASA, Mars from Opportunity's Camera, Sol 2820


Liked this entry? Subscribe to Nuit Blanche's feed, there's more where that came from. You can also subscribe to Nuit Blanche by Email, explore the Big Picture in Compressive Sensing or the Matrix Factorization Jungle and join the conversations on compressive sensing, advanced matrix factorization and calibration issues on LinkedIn.

1 comment:

Dick Gordon said...

One man’s curse is another species’ blessing. See:

Gordon, R. (1994). Evolution escapes rugged fitness landscapes by gene or genome doubling: the blessing of higher dimensionality. Computers & Chemistry 18(3), 325-332.
http://www.sciencedirect.com/science/article/pii/0097848594850267

Abstract

Evolution on rugged landscapes ceases when a local maximum is attained. This poses the problem of how evolution could approach or attain a global maximum, especially for large genomes for which quasispecies are ineffective. I show how increasing the dimensionality of the landscape, which occurs every time there is a gene or higher order duplication (up to polyploidy), may solve this problem. Epistasis or complementarity between the duplicated genes provides an all uphill pathway towards the global maximum. The evolution of hemoglobin and other dimeric and tetrameric proteins provides a testable case, since fitness is readily defined.
