Suppose you have 3 sensors: A, B, C. Each has a set of readings that goes into a column of a matrix, **b**:

A sensing matrix **A** is constructed such that **b**_{A}, **b**_{B}, and **b**_{C} represent compressed measurements in some domain. **A** is Gaussian, since this is a universal sensing matrix for compressive sensing (CS), owing to its poor correlation with anything other than itself. [1]
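A minimal sketch of this measurement setup, assuming all three sensors share a single Gaussian sensing matrix; the dimensions and the event magnitude of 10 are illustrative:

```python
import numpy as np

# Illustrative dimensions (not from the original post): n-sample signals,
# m compressed measurements per sensor, N = 3 sensors.
rng = np.random.default_rng(42)
n, m, N = 64, 24, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)  # Gaussian sensing matrix
x = np.zeros((n, N))
x[10, :] = 10.0                               # a shared sparse event of magnitude 10
b = A @ x                                     # column b[:, i] holds sensor i's measurements
print(b.shape)
```

Each column of **b** is then what the corresponding recovery problem must invert.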

The solution for each **x**_{i} (i ∈ A,B,C) represents the data recovered from these compressive measurements and can be found with min ||**x**_{i}||_{1} subject to **Ax**_{i} = **b**_{i}, or with some other sparsity-promoting technique.

I am interested in the condition when all sensors (A, B, C) agree. **Consensus**, agreement among multiple sensors, is important for many applications. One example is safing functions. Safing is checking for agreement among (sometimes redundant) sensors before engaging a drastic action, such as deploying the airbags in the event of an automobile crash. When all sensors agree (or agree closely enough), then the dangerous function of deploying an airbag toward the driver and passengers can be performed.

Instead of promoting sparsity alone (min ||**x**_{i}||_{1}), we seek to diversify the number of sensors that have witnessed a significant event, that is, that have found a nonzero sparse solution. So the algorithm is a sparsity-based linear program wrapped inside a diversity-based linear program. When all sensors agree, the peak-to-sum ratio (PSR) is minimized, provided the sensitivity and units of each sensor are normalized. Since we are nesting two linear programs together, it is important that the innermost program, recovering the compressively sensed (CS) data, seeks a sparse solution, since this can be computationally efficient.

The l1-norm minimization routine is used as an example of a CS recovery routine and exists inside the maximum diversity routine.

Here, **x**(:,i) indicates the i^{th} column of **x**. The objective function value of the l1-norm minimization of each column of **x** is stored in a vector, **u**. This is the innermost routine. The outermost routine seeks to maximize the diversity of **u** by minimizing its peak-to-sum ratio.
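A minimal sketch of the innermost routine, assuming basis pursuit (equality-constrained l1 minimization) posed as a linear program via SciPy; the function name `l1_recover` is illustrative, and the outer diversity routine is not shown:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Basis pursuit: min ||x||_1 subject to A x = b, posed as an LP.

    Variables are stacked as [x, t] with -t <= x <= t, so that
    minimizing sum(t) yields sum(t) = ||x||_1 at the optimum.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])  # cost only on t
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])           # x - t <= 0 and -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])        # A x = b, t unconstrained here
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n], res.fun                      # recovered column, objective ||x||_1
```

Running `l1_recover(A, b[:, i])` for each sensor i and collecting the objective values gives the vector **u** that the outer routine then operates on.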

Let’s look at some example solutions. We’ll just look at types of solutions that we can reach with this algorithm, independent of the **b** and **A** that actually generated them.

In the first example, above, **all sensors seem to be in agreement** that something happened (of magnitude 10). This is indicated by the low PSR, which has a lower bound of 1/N (= 1/3, in this case). There must be a little bit of noise in the readings, since the agreement isn’t perfect, but this could be considered close enough to declare consensus. Notice, also, that the sensors don’t get exactly the same reading; the readings are basically shifted by one index. If the columns of **x** represent samples in time, then each sensor is phase-shifted with respect to the others. In this algorithm, we don’t have to explicitly time-align the data, which can be a computationally intensive step. We just accept the data as is, even with small differences between sensors.

In the second example, above, 2 of the 3 sensors seem to agree. One sensor appears to be broken; however, this has not completely destroyed our PSR. We could still **limp along** with 2 functional sensors for a little while, until we are in a position to address the potentially defective sensor. The algorithm has mitigated the risk of one sensor dying by using a diversity-promoting strategy.

In the third example, above, **something is clearly busted**. One sensor has detected a significant event, but the majority have not. A simple majority voting scheme would not be suspicious of this data set, but our nested diversity/sparsity linear program is. Notice that the PSR is getting closer to its upper bound, which is 1. At this level of PSR, our algorithm diagnoses our system of sensors as broken and can take action accordingly, instead of making a bad decision.
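The three example outcomes can be reproduced numerically, assuming PSR is defined as the peak of **u** divided by its sum:

```python
import numpy as np

def psr(u):
    """Peak-to-sum ratio: bounded below by 1/N (consensus), above by 1 (lone peak)."""
    u = np.abs(np.asarray(u, dtype=float))
    return u.max() / u.sum()

print(psr([10.0, 10.0, 10.0]))  # all sensors agree: 1/3, the lower bound for N = 3
print(psr([10.0, 10.0, 0.0]))   # one dead sensor: 0.5, degraded but usable
print(psr([10.0, 0.0, 0.0]))    # lone detection: 1.0, the upper bound
```

Thresholding `psr(u)` against values between these bounds is one simple way to decide between declaring consensus, limping along, and flagging the system as broken.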

The strategy of reaching a consensus among sensors by using both sparsity- and diversity-promoting techniques has made this system more robust. However, the way the actual computation is performed hasn’t been made clear yet. Sparsity-based recovery techniques have been well covered, and recently I posted about how to solve a diversity-based integer program. Next I’ll look at how to nest these linear programs.

**References**

[1] Baraniuk, Richard. “Compressive Sensing [Lecture Notes],” *IEEE Signal Processing Magazine*, vol. 24, no. 4, July 2007.