Wednesday, January 31, 2007

Thermodynamics of Muscle


In the unpublished work of E. T. Jaynes, I came across an interesting statement on muscle and thermodynamics, namely:

  • Jaynes, E. T., 1983, `The Muscle As An Engine,' an unpublished manuscript
  • Jaynes, E. T., 1989, `Clearing up Mysteries - The Original Goal,' in Maximum-Entropy and Bayesian Methods, J. Skilling (ed.), Kluwer, Dordrecht, p. 1.


  • whereby Jaynes shows that, in effect, the muscle does not violate the Second Law of thermodynamics because the work in a muscle is performed on a very small scale, using only one (or at most a few) degree of freedom of a large molecule. Jaynes passed away in 1998, but he predicted that his views on biology and thermodynamics would not be acknowledged until 20 years after he wrote his paper. That would be 2009.

    Fuel-powered muscles have already made the headlines (here), but they do not use large molecules to produce work. It is also interesting to note that some shape memory alloys (Ti-Ni) are being considered for devising artificial muscles. However, since they are conductors, they are not, according to Jaynes' statement, optimal for providing the highest efficiency (heat spreads through them easily). All this should point to a MEMS-based solution.

    Sharp panoramic view of the tiny


    It looks like making panoramas is not just for the big stuff. I just came across an interesting paper on image stitching and microscopy. They do not seem to have evaluated stitching algorithms based on SIFT and RANSAC, such as AutoStitch or Autopano Pro. In the paper, Thevenaz and Unser mention that they obtain some type of superresolution. In this field, one is interested in removing unfocused objects, whereas in other fields, some people are interested in surprising occurrences (like a jet flying by) or in inserting new artifacts.
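
    For anyone curious what such a SIFT + RANSAC stitching pipeline looks like, here is a minimal sketch using OpenCV. The tile file names are placeholders and the parameters are generic defaults; this illustrates the general AutoStitch-style approach, not the method of the paper.

```python
import cv2
import numpy as np

# Placeholder tile names: substitute two overlapping microscope images.
img1 = cv2.imread("tile_left.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("tile_right.png", cv2.IMREAD_GRAYSCALE)

# 1. Detect SIFT keypoints and descriptors in each tile.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# 2. Match descriptors and keep unambiguous matches (Lowe's ratio test).
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# 3. Estimate the tile-to-tile homography robustly with RANSAC.
src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 4. Warp the first tile into the second tile's frame (blending omitted).
h, w = img2.shape
mosaic = cv2.warpPerspective(img1, H, (2 * w, h))
mosaic[:h, :w] = np.maximum(mosaic[:h, :w], img2)
cv2.imwrite("mosaic.png", mosaic)
```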

    Thursday, January 25, 2007

    Another link between cognitive deficits and diabetes: The case of Autism.

    In a previous entry, I mentioned the recent breakthrough making the connection between diabetes and the central nervous system, where it was shown that the nervous system was stopping insulin from being used efficiently, leading to type 2 diabetes. It looks as though the connection is not one-way. According to this paper, a drug that modulates insulin sensitivity has some effect on subjects with autism. From the paper:

    The exact causes for autism are largely unknown, but it has been speculated that immune and inflammatory responses, particularly those of Th2 type, may be involved. Thiazolidinediones (TZDs) are agonists of the peroxisome proliferator activated receptor gamma (PPARgamma), a nuclear hormone receptor which modulates insulin sensitivity, and have been shown to induce apoptosis in activated T-lymphocytes and exert anti-inflammatory effects in glial cells. The TZD pioglitazone (Actos) is an FDA-approved PPARgamma agonist used to treat type 2 diabetes, with a good safety profile, currently being tested in clinical trials of other neurological diseases including AD and MS.
    ...
    There were no adverse effects noted and behavioral measurements revealed a significant decrease in 4 out of 5 subcategories (irritability, lethargy, stereotypy, and hyperactivity). Improved behaviors were inversely correlated with patient age, indicating stronger effects on the younger patients. CONCLUSIONS: Pioglitazone should be considered for further testing of therapeutic potential in autistic patients.


    This is very surprising, because it has been shown that another medication, Lovastatin, could substantially improve the cognitive abilities of people with NF1. That drug is generally used to reduce the amount of cholesterol and certain fatty substances in the blood.

    What's the link between neurofibromatosis and autism? Genetically, NF1 and some forms of autism have been linked through the breakdown of nervous system signaling in the brain. Matthew Belmonte and Thomas Bourgeron explain:

    It is often tacitly assumed that the relation between gene expression and cellular phenotype, or the relation between individual neuronal properties and emergent neural phenotype, is monotonic and independent. That is to say, we assume that (i) an abnormal loss of function in a gene or in a cellular process ought to produce a phenotype opposite to that found in the case of an abnormal gain of function, and (ii) this relation between dosage and phenotype is the same regardless of the individual's genetic, environmental or developmental context. We make these assumptions of monotonicity and independence for the same practical reason that a physicist posits a frictionless surface, a statistician contrives a stationary process, or a novelist invents thematic characters and plots: they simplify complex relationships for which we have no exact models, and they are often close enough to reality to make useful predictions about real-world processes. They are, however, fictions.

    For counterexamples to such assumptions, we can look to pharmacology, where the classic dose-response curve is the strongly non-monotonic 'inverted-U' surrounding an optimal dosage, and where a drug's kinetics and therapeutic effect can depend strongly on competitive or synergistic factors arising from other drugs or from individual variation. Careful characterization of neurodevelopmental disorders suggests similar dose-response relations between genes and developmental processes. Such relations are especially likely to exist, and to evoke profound effects, in the case of genes that regulate the activity of large networks of genes or proteins. Several instances of such genes are relevant to autism.

    The tumor suppressors TSC1/TSC2 and NF1 are GTPase-activating proteins with widespread effects on cell survival, cell structure and cell function, whose disruption causes tuberous sclerosis and type-1 neurofibromatosis, both of which are comorbid with autism. Knocking out Kras in Nf1-deficient mice (an Nf1+/- and Kras+/- double-knockout) restores the wild phenotype, illustrating the importance of interactions at the network level. Both NF1 and the TSC complex negatively regulate the phosphoinositide-3 kinase pathway, as does the tumor suppressor PTEN. Mutations in PTEN, a regulator of cell size and number, have been identified in people with autism and macrocephaly, and PTEN knockouts produce anxiety behaviors, deficits in social behaviors and increased spine density reminiscent of the FXS phenotype. These cases illustrate the crucial nature of appropriate gene dosage in establishing optimal numbers of neurons and synapses during development.


    The fact that pioglitazone works better in younger kids would fit with the idea that the plasticity of brain/central nervous system signaling can be nudged back toward normal early on, and that this becomes more difficult as time passes.

    References:
    [1] Boris M, Kaiser C, Goldblatt A, Elice MW, Edelson SM, Adams JB, Feinstein DL. Effect of pioglitazone treatment on behavioral symptoms in autistic children. J Neuroinflammation. 2007 Jan 5;4(1):3.

    [2] Matthew K. Belmonte, Thomas Bourgeron. Fragile X syndrome and autism at the intersection of genetic and neural networks. Nature Neuroscience 9, 1221-1225 (2006).

    Wednesday, January 03, 2007

    Nothing short of a revolution, part 3: It does not need to be nonlinear


    And then something happened in 1999: Emmanuel Candès and David Donoho noticed that with curvelets, one did not need an adaptive or a nonlinear scheme to converge on images with edges [1]. This is important because, as stated before, the only way people were using wavelets and related functions (ridgelets, ...) was through a nonlinear scheme, i.e., compute all the wavelet or Fourier coefficients first and then threshold them in order to keep only the largest ones. While in the wavelet case, i.e., in 1-D, this is not really a big deal, it becomes a nightmare when you try to use this type of technique in multidimensional spaces. If you have to compute all the moments of an integral equation and then remove the ones that are too small (because wavelets sparsify certain integral kernels), you have barely made progress over the state of the art, such as fast multipole methods. So, for the first time since the arrival of wavelets, there is hope for a linear process by which one can produce a sparse representation of a function or set of data. OK, so now we have a way to get a truly sparse approximation of a function using basis pursuit, we have tons of nice families of functions to do this, and we know that in order to find a sparse representation of a function we don't need a nonlinear scheme. Why is this related to compressed sensing?...
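
    To make the "nonlinear scheme" concrete, here is a minimal 1-D sketch using PyWavelets: compute all the wavelet coefficients of a test signal, then keep only the k largest ones (the signal, the wavelet, and k are arbitrary illustrative choices). This is the thresholding step that the Candès-Donoho result shows is unnecessary for curvelets on images with edges.

```python
import numpy as np
import pywt  # PyWavelets

# A piecewise-smooth test signal with a jump (arbitrary illustrative choice).
t = np.linspace(0, 1, 1024)
signal = np.sin(2 * np.pi * 5 * t) + (t > 0.5)

# Nonlinear scheme: compute ALL wavelet coefficients first...
coeffs = pywt.wavedec(signal, 'db4', level=6)
arr, slices = pywt.coeffs_to_array(coeffs)

# ...then threshold, keeping only the k largest-magnitude coefficients.
k = 50
threshold = np.sort(np.abs(arr))[-k]
arr_sparse = np.where(np.abs(arr) >= threshold, arr, 0.0)

# Reconstruct from the k kept coefficients and report the approximation error.
approx = pywt.waverec(pywt.array_to_coeffs(arr_sparse, slices, output_format='wavedec'), 'db4')
rel_err = np.linalg.norm(approx[:signal.size] - signal) / np.linalg.norm(signal)
print(f"kept {k} of {arr.size} coefficients, relative error {rel_err:.3e}")
```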

    References

    [1] E. Candès and D. Donoho, Curvelets - a surprisingly effective nonadaptive representation for objects with edges, in Curves and Surfaces, L. L. Schumaker et al. (eds), Vanderbilt University Press, Nashville, TN.

    Tuesday, January 02, 2007

    Nothing short of a revolution. Part deux: Pursuing a dream


    With the wavelet bang came another realization, and then something odd happened. In 1994, Donoho and Chen published a paper that summarized the state of affairs in approximation:

    - The realization that now, with wavelets of all kinds being added daily on top of the old approximating polynomials, there are too many functions to choose from when decomposing a signal, and there is no good way of figuring out which one is best to use. The situation is so bad that you could write an entire dictionary of different wavelet names.

    - Some signals are a composition of various functions and distributions (Diracs, ...), so it does not make sense to decompose them with respect to only one family of functions. If you use only one type of decomposing function, you will not get a sparse decomposition. Take, for instance, the Gibbs phenomenon, where one tries to approximate a Dirac with a bunch of sines and cosines.

    - Most decomposition algorithms are based on a least-squares approach. In other words, the criterion with which one decides whether a series of approximations has converged is based on the $L_2$ distance, i.e., the Euclidean distance. The definition of a scalar product is therefore essential, but no one knows why some scalar products work better than others. In the end, many formulations of a variety of engineering problems rely on so-called weak formulations, such as the finite element method, but sometimes no one really knows why these methods should be used. Case in point, albeit a non-traditional one: neutron and radiation transport. There are many weak formulations of the linear transport equation, even though we know that it has distributions as eigenfunctions. Yet neither the scalar product of the weak formulation ($L_2$) nor the one induced by the eigendistributions can constrain the solution to be positive (a basic physical requirement) or give it a direct physical meaning.

    - If the scalar product is induced by the decomposing family of functions, then how can one decompose a function simultaneously along different sets of functions?

    - The $L_2$ distance criterion never converges toward a local, sparse approximation. The approximation will have many coefficients that are very close to each other in magnitude. There is no way to apply a blind threshold to these coefficients, because each and every one of them counts in the minimization of the criterion (see the sketch after this list).

    - The problem you have always wanted to solve is an $L_0$ problem, not an $L_2$ one. The $L_0$ criterion is really about the sparsity of the approximation.

    - The $L_0$ problem is NP-hard, so there is, on average, no way you can compute a solution in your lifetime.
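
    As a small numerical illustration of the $L_2$ objection above (synthetic random data, not anything from the Chen-Donoho paper): the minimum-$L_2$-norm solution of an underdetermined system spreads its energy over nearly all coefficients, so no blind threshold can recover the few that actually matter.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 30, 100, 4                      # measurements, unknowns, true sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Minimum-L2-norm solution of the underdetermined system A x = b.
x_l2, *_ = np.linalg.lstsq(A, b, rcond=None)

print("nonzeros in the true sparse signal      :", np.count_nonzero(x_true))
print("coefficients above 1e-6 in the L2 answer:", np.count_nonzero(np.abs(x_l2) > 1e-6))
```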


    Donoho and Chen showed that if you solve an $L_1$ problem instead of an $L_0$ one, the solution is likely to be very close to that of the $L_0$ problem. Solving an $L_1$ problem amounts to linear programming, i.e., an optimization. Because of its over-reliance on the least-squares approach, the whole engineering field could be shattered as a result. Yet the methods by which one could obtain a solution were so time-consuming at the time that they were not likely to make a dent in the weak-formulation business anytime soon. The lingering thought is still that this approach will eventually prevail and change our views on how to solve engineering problems in the future. At some point, even Google was using a code word related to the $L_1$ approach to find future employees :-). There need to be other stepping stones to reach the final climax of this story, though.
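
    To make the linear-programming connection concrete, here is a minimal basis pursuit sketch on the same kind of synthetic random data as above, using SciPy's generic LP solver rather than any specialized code of that era: splitting x into its positive and negative parts turns min ||x||_1 subject to Ax = b into a standard linear program.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n, k = 30, 100, 4                      # measurements, unknowns, true sparsity
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit: min ||x||_1 s.t. Ax = b.  With x = u - v and u, v >= 0 this is
# the linear program: min sum(u) + sum(v) s.t. A u - A v = b.
c = np.ones(2 * n)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=b, bounds=[(0, None)] * (2 * n))
x_l1 = res.x[:n] - res.x[n:]

# With Gaussian A and k small enough, the L1 solution typically matches the sparse truth.
print("max |x_l1 - x_true| =", np.max(np.abs(x_l1 - x_true)))
```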
