Wednesday, November 23, 2011

Feedback on Adaptivity in Compressive Sensing, a Post Peer Review of SL0, and Pre-Peer Review Publishing


It's really quite simple: let the "error" of an algorithm be the squared ratio of noise in the output to noise in the input.
  • Non-adaptively, the error is at least (k log (n/k)) / m and this is achievable. (standard)
  • Adaptively, the error can be as low as (k log log (n/k))/m. (I believe Jarvis gets this for some range of parameters; our algorithm gets it everywhere.)
  • Adaptively, the error can't be lower than k/m. (this paper.) 
 There's no contention in the results, only in whether an exponential improvement in the dependence on n "doesn't help" or is "fundamentally better."
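To put rough numbers on these three bounds, here is a minimal numerical sketch in Python (mine, not Eric's); the values of k, n, and m below are purely illustrative assumptions:

    import math

    # Illustrative values (my assumptions, not from Eric's comment):
    # sparsity k, ambient dimension n, number of measurements m.
    k, n, m = 10, 10**6, 1000

    non_adaptive = k * math.log(n / k) / m            # (k log(n/k))/m, achievable
    adaptive_ach = k * math.log(math.log(n / k)) / m  # (k log log(n/k))/m
    adaptive_low = k / m                              # k/m, the adaptive lower bound

    print("non-adaptive (k log(n/k))/m :", round(non_adaptive, 4))  # ~0.1151
    print("adaptive (k log log(n/k))/m :", round(adaptive_ach, 4))  # ~0.0244
    print("adaptive lower bound k/m    :", round(adaptive_low, 4))  # 0.01

The gap between the first two numbers is the exponential improvement in the dependence on n; the gap between the last two is what the adaptive lower bound still leaves open.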

The reference Eric is talking about is On the Power of Adaptivity in Sparse Recovery by Piotr Indyk, Eric Price, and David Woodruff (featured here). Mark Davenport replied with:

Eric,
I agree there is no contention between the results you mention and the claims in our paper, but we're not just arguing semantics here.
It's a little more than that -- if the SNR is just a little bit higher than the "worst-case" level, then the error can be as low as k^2/(nm). That's a vast improvement over k/m (assuming that k << n). So there is seemingly the potential for a vast improvement, way beyond just removing the log factor.

None of the results you mention are aiming at this level of improvement, because it is indeed impossible to obtain in general (i.e., for all input signals).
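As a quick back-of-the-envelope check (mine, not Mark's) of just how vast that improvement would be, divide the adaptive lower bound k/m by the high-SNR error k^2/(nm):

    \[
      \frac{k/m}{k^2/(nm)} \;=\; \frac{k}{m}\cdot\frac{nm}{k^2} \;=\; \frac{n}{k},
    \]

so for k << n the potential gain is a factor of n/k, far beyond the log(n/k) factor separating the adaptive and non-adaptive bounds above.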
In the current review process, this very interesting discussion and insight would be hidden from us. How is Science served? Let us pile on this a little further: in A Post Peer Review of SL0, an anonymous commenter made this thoughtful comment:
SL0 is too sensitive to noise. This limits its applications in many fields, like gene expression, network mining, sparse representation of natural signals (biosignals, etc), DOA estimation, ....
We should not over-praise algorithms that behave better in noiseless scenarios. After all, sparsity-based signal processing/machine learning covers many fields, and over-praising such an algorithm may mislead your readers with various backgrounds/applications.
I emphasized the first sentence because there is something perverse at play, and it literally makes my blood boil. After reading the only comment on that thread:

Regarding the sensitivity of SL0 to noise, what about Robust SL0? It is less sensitive to noise compared to the original SL0.
I wrote to Massoud Babaie-Zadeh, one of the authors of SL0, and he tells me they even had a better implementation of this Robust SL0, one dealing with the projection a little differently, but it got rejected in the peer review process and is therefore not implemented in SL0. So yes, Anonymous is right: SL0 does not work well with noise because the peer review process does not let it. How have "gene expression, network mining, sparse representation of natural signals (biosignals, etc), DOA estimation, ...." been held back since 2009 because of peer review? How is Science served?

Which leads us to these interesting and thoughtful comments on the subject of pre-publication peer review. I have received many comments on How To Serve Science in just a few hours; this is certainly an issue that is currently not addressed effectively. Here are the comments:

Serguey Ten started with:

This arXiv thing could be even bigger than peer review: the article would have a "tail" of questions about difficult places from novices in the area, explanations from the author/experts, and reports on third-party implementations, all blurring the edges between a paper, a book on the subject, and a textbook. Would be great!
Then Laurent Duval added:

For ONLY one additional reading on the topic, I suggest Opinion 101: The Newly launched Journal Rejecta Mathematica IS a JOKE (but so are all math journals!) by Doron Zeilberger: "Let me conclude with a revolutionary proposal to save paper, and disk-space, by making all math journals virtual. Only keep the arxiv, but for each paper, just mention what journal it got accepted to."

I also add the June 2011 issue of the IT Society newsletter where A. Ephremides (The Historian's Column, page 7) recalls that, half a century ago, "The papers were FIRST published in the Transactions and THEN they were presented at the Symposium!! That is a total reversal of what we do today. The distinct advantage of this arrangement was that the audience had the benefit of studying the papers carefully ahead of time, which enabled an in-depth discussion after the presentation. Not a bad idea!" This would be a "revolution" indeed, in its radical sense: going back to the same place, although a bit later.

Don't you think a conference (why not video/virtual?) would be a great post-peer review place?

Then Thomas Arildsen added:
ArXiv is already great, IMO, because it's a great place to monitor new research as it appears, as opposed to trying to keep track of all relevant journals in the field. Plus, the latter has quite a delay. This extra layer would be a huge leap forward. I can't help thinking: why doesn't "somebody" do that?
Then Petros Boufounos provided more to the idea:

This is, in some ways, very similar to the concept of a "working paper" in economics and other fields. From what I understand (and I can't say I understand it completely), economists publish a "working paper" in the community, which is circulated, discussed, cited, presented at conferences, etc. The paper is in some sense a living document and keeps changing as new comments/suggestions pile in. This is in some sense what Igor calls "post-publication peer review." Once the paper is in a final good state and accepted by the community, a journal publishes it, which is the official validation of the paper. I am not sure about two things: a) whether the journal actively contacts the author and picks up the paper or the author submits to the journal, and b) how intensive the blind peer review by the journal's editorial process is once the validation in the community exists for the working paper.
This is a model that I think works quite well, especially for a slow-moving field such as economics. In our fields, arXiv can definitely serve this purpose. A meta-service might be necessary for posting comments/feedback, although this could in principle be embedded in arXiv. The field of economics shows that this is not even necessary, since e-mail, conferences, blogs, etc. can play that role. Still, a more formal comment/review/discussion process would be nice. (BTW, integrating Google Scholar citations with Google+ and maybe Blogger would be a great platform for this... One Google to rule us all!)
What is necessary, however, is that the community understands and accepts this process. This might be harder than it seems. We have had arXiv for so long now, posting papers and citing them, and still people don't accept it. For example, a recent paper I submitted to a conference relied on and cited results in my Universal Scalar Quantization paper on arXiv, before it had been accepted by Trans. IT. One of the comments of one reviewer was "The paper depends heavily on an unreviewed open-source published document [...] I cannot trust the cited work is correct so it must be reviewed before it can be used for a serious paper [...]" and proceeded to trash the remainder of the submission. Thankfully, the remaining reviewers saw the value of the paper and accepted it. However, this incident demonstrates the mentality of some in the community and the importance of understanding the process. Still, I am positive that we will find a way to work within this context.
This will have a couple of side-effects and possibly some unintended consequences:
a) The bar for pre-publishing will be lower than the bar for community acceptance of a paper. This means a higher volume of papers will appear, and some personal way to sort through them will be necessary (already the volume of CS-related papers is overwhelming me... the "to-read" pile keeps growing). This might mean more reliance on personal connections and trust, making the community more closed instead of bringing the openness desired.
b) The paper will actually take longer to be validated in a proper journal, although it will be cited in the meantime. In a fast-moving community this might be an issue. It might also give more power to the "difficult" conferences, such as NIPS, which basically follow the old review process. If you want your paper to appear faster, just submit to NIPS. Given the lack of a true response/second-review process at these conferences, I find them to have more quirky and biased reviews than journals.
These are unintended consequences we will need to combat and take into account as we move to a system based on a pre-publication/open-comments process. However, I still find the advantages worth it. A wide volume of comments and pre-publication suggestions will pressure an editor to reject the comment of an obnoxious reviewer (or one who has a personal grudge against you) instead of rejecting the paper.
Anyway... sorry for the long rant/comment :-) I hope it contributes something.




Image Credit: NASA/JPL/Space Science Institute. N00178189.jpg was taken on November 21, 2011 and received on Earth on November 22, 2011. The camera was pointing toward SUTTUNGR, and the image was taken using the CL1 and CL2 filters. This image has not been validated or calibrated.

2 comments:

Anonymous said...

Hi
I'm surprised no one has mentioned f1000.com and academia.edu!

Igor said...

F1000 sounds very interesting in terms of its business model. I am not sure how it would work in science and engineering outside of the biomedical literature.

Right now I am trying to think of who should pay for the system to be self-sustaining; I don't have a good answer yet.

Igor.
