Over at the Chesney Research website, I posted an introduction to compressive sensing. There’s already lots of good information out there on what compressive sensing is. This introduction is a little different: it gives an overview of compressive sensing without getting too far into the math and stays focused on why you would want to use it.
About a month ago, the National Science Foundation announced new funding opportunities. Proposals for these grants are due in the fall of this year. The program has three tiers of funding: small, medium, and large.
Tier     Submission Window   Amount ($ M)   Duration (years)
Small    12/1 – 12/17        up to 0.5      up to 3
Medium   9/1 – 9/15          0.5 – 1.2      up to 4
Large    11/1 – 11/28        1.2 – 3        up to 5
There is no limit on the number of principal investigators a funded project can have, but a PI may submit at most two proposals.
A few excerpts from the solicitation give a sense of the program’s scope:

“address data of unprecedented scale, complexity, and rate of acquisition.”
“[Projects may support] processing needs for data from disparate and uncoordinated sources.”
“demonstrate effectiveness in dimensions such as quality, scalability, accountability, interactivity, social responsibility, or resiliency to incomplete, uncertain, inconsistent, contaminated, or manipulated data.”
“should advance contemporary applications of societal importance.”
So say you want to use compressive sensing to lower the sampling rate of your measurement system. The signal you’re measuring needs to be sparse in some basis; if it is, you can sample it randomly, far below the Nyquist rate, and save power in the process. But what if the signal you are trying to measure is sparse in time? Then you’re in a bit of a pickle, because you can’t sample it randomly in time without potentially missing the few samples that actually carry the information.
What I learned last week is that to compressively sense signals that are sparse in time, you first need to transform them into a domain where they are not sparse, and then sample randomly. One way to do this is to use the Hadamard transform, which spreads a time-sparse signal across the entire transform domain without any loss of information. The Hadamard transform is similar to the DFT, except its basis functions are square waves that take only the values +1 and −1, which makes it cheap to compute in hardware.
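To make the idea concrete, here is a minimal sketch (my own, assuming NumPy and SciPy) of how a Hadamard transform spreads a time-sparse signal so that any random subset of samples captures some of its energy:

```python
import numpy as np
from scipy.linalg import hadamard

n = 64
x = np.zeros(n)
x[17] = 1.0                       # a signal that is sparse in time: one spike

H = hadamard(n) / np.sqrt(n)      # orthonormal Hadamard matrix, so no information is lost
y = H @ x                         # the spike is now spread across all n coefficients

# Randomly sampling in time will probably miss the spike entirely;
# randomly sampling in the Hadamard domain always captures a fixed fraction of its energy.
idx = np.random.default_rng(0).choice(n, size=8, replace=False)
print("energy captured, time domain:    ", np.sum(x[idx] ** 2) / np.sum(x ** 2))
print("energy captured, Hadamard domain:", np.sum(y[idx] ** 2) / np.sum(y ** 2))
```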
However, if your signal is sparse in time, you probably still care exactly when it did the thing that interests you, whatever it was that you couldn’t afford to miss by sampling randomly. The latency, or group delay, of compressively sensing a sparse-in-time signal is, ostensibly, going to be a pretty important piece of information. Unfortunately, I cannot find a single paper that addresses latency or group delay in compressive sensing. To be sure, this has to be incredibly difficult, if not impossible, to predict. The iterative nature of the recovery algorithm alone (even though the order of its computational complexity can be assessed) must make the delay very difficult to quantify in terms of time or samples.
Don’t worry, it’s just the fundamentals I don’t get, and it’s not for lack of traffic on the subject. There are some great resources on the internet about compressive sensing, a groundbreaking technique for reconstructing seemingly lost signals. Igor Carron runs Nuit Blanche, a great up-to-the-minute blog on the latest compressive sensing news, and Rich Baraniuk has a treasure trove of papers on the subject at http://dsp.rice.edu/cs. NPR did a story that provides a good illustration of compressive sensing in practice: an undersampled piece of music reconstructed using compressive sensing techniques.
Human ears usually pick up frequencies under 16 kHz, some as high as 20 kHz. Most speech is easily recognizable from only the frequency components at or below 4 kHz, which is why telephone speech is usually sampled at 8 kHz, the Nyquist rate for a 4 kHz bandwidth. CDs are sampled at 44.1 kHz. In the NPR demonstration, a musical track is sampled at a rate on the order of hundreds of Hz.
[Audio: the undersampled version, sampled at roughly hundreds of Hz.]
Using some fancy math, Jordan Ellenberg of the University of Wisconsin is able to faithfully reconstruct the original recording.
[Audio: the reconstructed sample.]
Compressive sensing has a wide range of possible applications. In general, there are three cases where compressive sensing is appropriate in a sampled system:

- When the data will be compressed soon after sampling
- When the data will be encoded for transmission soon after sampling
- When the data is sparse
As it turns out, compressive sensing relies on some fairly complicated matrix algebra. Below are some of the topics I’ve encountered while reading about compressive sensing and trying to figure out how it works.
Sparsity
A matrix is considered sparse if it can be represented by a much smaller set of values than its full size without much loss of information. For instance, take the vector [1.0, 0.01, 0.01, 0.01, 0.01] and envision a scenario where it is treated, for all intents and purposes, as [1.0, 0.0, 0.0, 0.0, 0.0]. If that representation is good enough for whatever calculation is performed on it, then the vector is sparse. Say you wanted to know roughly the average value of its entries. You could say it’s about 0.2 (as opposed to 0.208, which would be exact). The advantage is a far faster calculation: 1.0 / 5 instead of (1.0 + 0.01 + 0.01 + 0.01 + 0.01) / 5.
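As a toy illustration of trading a little accuracy for a lot less work, here is the same average computed both ways (a sketch assuming NumPy; the 0.05 threshold is arbitrary):

```python
import numpy as np

v = np.array([1.0, 0.01, 0.01, 0.01, 0.01])

exact_mean = v.mean()                          # (1.0 + 4 * 0.01) / 5 = 0.208
sparse_v = np.where(np.abs(v) < 0.05, 0.0, v)  # treat near-zero entries as zero
approx_mean = sparse_v.sum() / v.size          # 1.0 / 5 = 0.2

print(exact_mean, approx_mean)
```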
L1 Norm Minimization
L1 norm minimization is the technique the receiver employs to reconstruct the compressively sensed signal. The Wired article on compressive sensing gives a great overview of L1 minimization. In the example in the article, the L1 minimization algorithm creates a high-resolution image from a low-resolution one. It does this by filling in shapes of color based on nearby known pixels, iterating with smaller and smaller shapes until the picture is complete.
The L1 norm measures the distance between two points in a Cartesian coordinate system as the sum of the absolute differences of their coordinates. This is also known as taxicab geometry: it assumes the only way to travel from one point to the other is in a gridded, rectilinear fashion, like the streets of a major metropolitan area.
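In code, the taxicab and straight-line distances differ by one line each (a sketch assuming NumPy):

```python
import numpy as np

a = np.array([0.0, 0.0])
b = np.array([3.0, 4.0])

l1 = np.sum(np.abs(a - b))          # taxicab (L1) distance: 3 + 4 = 7 blocks
l2 = np.sqrt(np.sum((a - b) ** 2))  # straight-line (L2) distance: 5

print(l1, l2)
```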
But how does L1 minimization perform this magic? I don’t know. This feat, I gather, is usually executed on a host CPU, because when performance metrics are given for a new approach to L1 norm minimization, they come qualified by what kind of machine was used. What’s more, this particular bit of matrix math, the part that actually recovers the desired signal, is executed iteratively. That means there’s really no telling how long it will take, which seems to make compressive sensing inappropriate for real-time applications that rely on low latency, such as feedback control loops.
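To show what “executed iteratively” looks like in practice, here is a minimal sketch (my own, not any particular paper’s algorithm) of one simple iterative L1-flavored solver, iterative soft thresholding (ISTA), recovering a sparse vector from random measurements; NumPy assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                          # signal length, measurements, sparsity

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
b = A @ x_true                                # the compressive measurements

lam = 0.01                                    # sparsity weight, chosen by hand
t = 1.0 / np.linalg.norm(A, 2) ** 2           # step size from the spectral norm
x = np.zeros(n)
for _ in range(500):                          # fixed iteration count, purely ad hoc
    g = x - t * (A.T @ (A @ x - b))           # gradient step on the least-squares term
    x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold toward sparsity

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

Note the fixed iteration count: a real system would loop until some convergence test passes, so the runtime depends on the data, which is exactly the latency problem described above.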
Restricted Isometry Property
An isometry is a distance-preserving map between metric spaces. The restricted isometry property “characterizes matrices which are nearly orthonormal, at least when operating on sparse vectors. … There are no known large matrices with bounded restricted isometry constants, but many random matrices have been shown to remain bounded. The current smallest upper bounds for any large rectangular matrices are for those of Gaussian matrices,”
and it goes on to define the RIP as follows:
Let A be an m × p matrix and let s < p be an integer. Suppose that there exists a constant δs such that, for every m × s submatrix As of A and for every vector y,

$$(1 - \delta_s)\,\|y\|_2^2 \;\le\; \|A_s y\|_2^2 \;\le\; (1 + \delta_s)\,\|y\|_2^2.$$

Then the matrix A is said to satisfy the s-restricted isometry property with restricted isometry constant δs.
This really just completely confuses me. I guess the restricted isometry constant bounds how much the matrix can stretch or shrink the length of a sparse vector, so a small δs means the matrix acts almost like a perfect isometry on sparse vectors? I have no idea; I guess I need to learn some more matrix math…
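For what it’s worth, the inequality can at least be poked at numerically. Here is a rough sketch (mine, with arbitrary sizes; NumPy assumed) that measures how close a random Gaussian matrix comes to an isometry on sparse vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, s = 64, 256, 4
A = rng.standard_normal((m, p)) / np.sqrt(m)  # scaled so E[||A y||^2] = ||y||^2

ratios = []
for _ in range(1000):
    y = np.zeros(p)
    y[rng.choice(p, s, replace=False)] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(A @ y) ** 2 / np.linalg.norm(y) ** 2)

# If A satisfied the s-RIP with constant delta_s, every ratio would lie in
# [1 - delta_s, 1 + delta_s]; this only samples the spread, it doesn't prove a bound.
print(min(ratios), max(ratios))
```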