August 12th, 2012
Suppose you have 3 sensors: A, B, C. Each has a set of readings that goes into a column of a matrix, b:
A Gaussian sensing matrix A is constructed such that b_A, b_B, b_C represent compressed measurements in some domain. A Gaussian matrix is a universal sensing matrix for compressive sensing (CS) because it is incoherent with (poorly correlated to) almost any other basis.
The solution for each x_i (i ∈ {A, B, C}) represents the data recovered from these compressive measurements and can be found with min ||x_i||_1, or some other sparsity-promoting technique.
I am interested in the condition when all sensors (A, B, C) agree. Consensus, agreement among multiple sensors, is important for many applications. One example is safing functions. Safing is checking for agreement among (sometimes redundant) sensors before engaging a drastic action, for example, deploying the airbags in the event of an automobile crash. When all sensors agree (or agree enough), then the dangerous function of deploying an airbag towards the driver and passengers can be performed.
In this example, I am interested in the case when each sensor finds some nonzero sparse solution. Previously, I developed the notion of the diverse solution to a set of linear equations that require a positive integer solution. The most diverse solution is the one whose entries are all of roughly similar magnitude. In another post, I showed that the diverse solution can be used to mitigate risk.
For the safing function, we want to mitigate the risk that our sensor data is in error, leading us to erroneously employ our drastic action. We use multiple redundant sensors to mitigate this risk. The agreement among sensors is maximized when diversity is maximized across all sensors. Even though we are seeking a sparse solution for each individual sensor (for example, min ||x_i||_1), we seek to diversify the number of sensors that have witnessed a significant event, that is, have found a nonzero sparse solution. So the algorithm is a sparsity-based linear program wrapped inside a diversity-based linear program. When all sensors agree, the peak-to-sum ratio (PSR) is minimized, provided the sensitivity and units of each sensor are normalized. Since we are nesting two linear programs together, it is important that the innermost program, recovering the compressively-sensed (CS) data, seeks a sparse solution, since this can be computationally efficient.
The l1-norm minimization routine is used as an example of a CS recovery routine and sits inside the maximum diversity routine. For each sensor i, the inner routine solves

u(i) = min ||x(:,i)||_1 subject to A x(:,i) = b(:,i),

where x(:,i) indicates the ith column of x. The objective function value of the l1-norm minimization of each column of x is stored in a vector, u. This is the innermost routine. The outermost routine seeks to maximize the diversity of u by minimizing its peak-to-sum ratio, PSR(u) = max(u) / sum(u).
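As a sketch of this nesting, the inner l1 recovery for one sensor can be posed as a linear program and its objective value fed into the PSR computation. This is an illustrative Python/SciPy implementation, not the original code; the sensing matrix, signal sizes, and event magnitudes below are made up.

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(A, b):
    """Recover a sparse x from compressive measurements b = A x by
    l1-norm minimization, posed as an LP: split x = xp - xn with
    xp, xn >= 0 and minimize sum(xp + xn)."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_eq = np.hstack([A, -A])
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
    x = res.x[:n] - res.x[n:]
    return x, res.fun          # recovered signal and its l1 objective value

def psr(u):
    """Peak-to-sum ratio: 1/N at perfect agreement, 1 when one sensor dominates."""
    u = np.abs(np.asarray(u, dtype=float))
    return u.max() / u.sum()

# Toy example: one Gaussian sensing matrix shared by 3 sensors.
rng = np.random.default_rng(0)
n, m = 20, 12
A = rng.standard_normal((m, n))

u = []
for mag in (10.0, 9.8, 10.2):  # each sensor sees nearly the same event
    x_true = np.zeros(n)
    x_true[5] = mag            # 1-sparse event
    b = A @ x_true
    x_hat, obj = l1_recover(A, b)
    u.append(obj)

print(psr(u))                  # close to the 1/3 lower bound: consensus
```

The outer diversity routine would then search over candidate solutions to drive this PSR toward its 1/N lower bound.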
Let’s look at some example solutions. We’ll just look at types of solutions that we can reach with this algorithm, independent of the b and A that actually generated them.
In the first example, above, all sensors seem to be in agreement that something happened (of magnitude 10). This is indicated by the low PSR, which has a lower bound of 1/N (= 1/3, in this case). There must be a little noise in the readings, since the agreement isn't perfect, but this could be considered close enough to declare consensus. Notice, also, that the sensors don't get exactly the same reading; the readings are basically shifted by one index. If the columns of x represent samples in time, then each sensor is phase-shifted with respect to the others. In this algorithm, we don't have to explicitly time-align the data, which can be a computationally intensive step. We just accept the data as is, even with small differences between sensors.
In the second example, above, 2 of the 3 sensors seem to agree. One sensor appears to be broken; however, this has not completely destroyed our PSR. We could still limp along with 2 functional sensors for a little while, until we are in a position to address the potentially defective sensor. The algorithm has mitigated the risk of one sensor dying by using a diversity-promoting strategy.
In the third example, above, something is clearly busted. One sensor has detected a significant event, but the majority have not. A simple majority voting scheme would not be suspicious of this data set, but our nested diversity/sparsity linear program is. Notice that the PSR is getting closer to its upper bound, which is 1. At this level of PSR, our algorithm diagnoses our system of sensors as broken and can take action accordingly, instead of making a bad decision.
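These three regimes can be summarized by thresholding the PSR. The objective-value vectors and the two cutoffs below are hypothetical stand-ins, chosen only to illustrate the idea:

```python
import numpy as np

def psr(u):
    """Peak-to-sum ratio: 1/N at perfect agreement, 1 when one sensor dominates."""
    u = np.abs(np.asarray(u, dtype=float))
    return u.max() / u.sum()

def diagnose(u, consensus=0.40, degraded=0.55):
    """Classify agreement by PSR; the thresholds are illustrative, not from the post."""
    r = psr(u)
    if r <= consensus:
        return "consensus"     # near the 1/N lower bound: safe to act
    if r <= degraded:
        return "degraded"      # e.g. one dead sensor: limp along, schedule repair
    return "broken"            # near the upper bound of 1: withhold drastic action

print(diagnose([10, 9.5, 10.2]))   # all three sensors agree
print(diagnose([10, 9.8, 0.0]))    # one sensor dead, two agree
print(diagnose([10, 0.0, 0.0]))    # lone detection, PSR = 1
```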
The strategy of reaching a consensus among sensors by using both sparsity- and diversity-promoting techniques has made this system more robust. However, the way the actual computation is performed hasn't been made clear yet. Sparsity-based recovery techniques have been well covered, and I recently posted about how to solve a diversity-based integer program. Next, I'll look at how to nest these linear programs.
Baraniuk, Richard. "Compressive Sensing," IEEE Signal Processing Magazine, July 2007.
Tags: agree, consensus, decision, diversity, integer program, safing, sensors, sparsity, strategy
July 7th, 2012
Minimize risk by maximizing diversity. Whether you’re a trader deciding which stocks to invest in, an investor deciding which companies to invest in, or a manager trying to decide which projects to support, you can minimize your risk by investing in a portfolio of products. This approach fits very nicely with the notion of diversity developed earlier.
Suppose you have 5 companies seeking investment. An investor can purchase up to 30% of a company. Each company must project its value at the end of the investment period and the investor has this information, along with the price per share for each company. Different companies have different prices per share and different projected valuations at the end of the investment period.
The investor cashes out and earns a share of the value of the company when the investment period is complete, proportional to the amount of the company the investor owns. For example, if the investor owns 3000 of the company’s 10000 shares, 30% of the valuation of the company at the end of the investment period goes to the investor.
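That payout rule is a one-liner. The $500,000 end-of-period valuation below is made up for illustration:

```python
def payout(shares_owned, total_shares, final_valuation):
    """Investor's proportional share of the company's end-of-period value."""
    return final_valuation * shares_owned / total_shares

# The example above: 3000 of 10000 shares, with a hypothetical
# final valuation of $500,000 -> the investor receives 30% of it.
print(payout(3000, 10000, 500_000))
```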
The investor can only purchase whole shares of the company (no fractional shares, this is like a minimum investment). The investor may not ’short’ a company, that is, sell shares of a company that the investor does not own. So the number of shares an investor chooses to purchase must either be 0 or some positive integer less than or equal to the number of shares for sale.
Companies may or may not meet their valuation projections. Companies may go under, resulting in a valuation of 0 at the end of the investment period. The investor does not know how likely a company is to meet its goal or fold altogether. The goal is to avoid modeling this unknown and to develop a robust strategy based on the only two things the investor knows for sure: price per share and the projected valuation. The investor seeks a 10% return and has $100,000 to invest.
Let’s write some equations to figure out what the investor should do. We’ll use x to represent the amount of money invested in each company.
The restriction on how much of each company the investor can own amounts to a restriction on the amount of money the investor can invest in each company. Let's say that for company 1, the maximum investment is 50000; for company 2, it's 40000; and for company 3, it's 60000.
Each company expects to turn the investment they receive into a return to the investor. We'll use p to represent the percentage return on investment each company claims they will return to the investor at the end of the investment period. If a company claims a 10% return on their investment, then their value for p is 1.1, since they will return the initial investment, plus another 10% on top of that. The investor has a goal of a 10% return on the total investment, so the sum of p weighted by the amount invested must be at least 110000.
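As a quick check of that weighted-sum constraint, here are hypothetical values for p and an even split of the $100,000 (neither comes from the post):

```python
import numpy as np

p = np.array([1.05, 1.08, 1.10, 1.18, 1.12])       # hypothetical per-company returns
x = np.array([20000, 20000, 20000, 20000, 20000])  # even $100,000 split

# The investor's goal requires the weighted sum p @ x >= 110000.
print(p @ x)
```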
That last equation can be rewritten in less-than-or-equal form by negating both sides: -p·x <= -110000.
So now we have our equations in the form Ax <= b, where A stacks the per-company investment caps (an identity block), a row of ones for the total budget, and the negated return row -p; b collects the corresponding bounds: the caps, 100000, and -110000.
To find the most diverse solution, the optimization problem is to minimize the peak-to-sum ratio: min PSR(x) = max(x)/sum(x) subject to Ax <= b, with x a vector of nonnegative integers.
As opposed to the most common way of solving this problem, which is to focus on maximizing return. One way of maximizing return is to ignore the constraint of getting at least 10% on the investment and just get as much as possible. That linear program is max p·x subject to the investment caps and the total budget, that is, Ax <= b with the minimum-return row dropped.
Another way of maximizing the return is to leave the constraint as a minimum bound on the return. That just tells the solver of the linear program to quit once 10% return is reached. That kind of violates the spirit of maximizing your return and is not very illustrative when comparing it to maximizing diversity. We’ll treat max return as truly getting as much return as possible, without regard for how difficult the problem is for the solver, and drop the minimum return requirement for max return.
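To make the comparison concrete, here is a sketch that builds A and b and solves the max-return problem with SciPy, relaxing the integer restriction to a continuous LP for simplicity. The caps for companies 1-3 come from the text; the caps for companies 4 and 5 and the vector p are made up, since the post's actual values aren't reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Per-company investment caps: 50000/40000/60000 are from the text,
# the last two are invented for illustration.
caps = np.array([50000, 40000, 60000, 30000, 45000])
p = np.array([1.05, 1.08, 1.10, 1.18, 1.12])  # hypothetical returns
budget = 100_000

# Ax <= b: identity rows for the caps, plus a row of ones for the budget.
A_ub = np.vstack([np.eye(5), np.ones(5)])
b_ub = np.append(caps, budget)

# Max return: maximize p @ x  <=>  minimize -p @ x (return goal dropped).
res = linprog(-p, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 5)
x_ret = res.x

print(x_ret)            # investment is concentrated in the top earners
print(p @ x_ret)        # projected payout of the max-return strategy
print(x_ret.max() / x_ret.sum())  # PSR of this solution: far from 1/N
```

With these numbers the solver fills the highest-return company to its cap, then the next, and so on until the budget is spent, which is exactly the concentration behavior described below.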
Let’s look at the two strategies: max diversity and max return. Let’s say that as a company offers more return, its price per share is higher and, even accounting for differences in the number of shares available, this translates into a higher maximum investment. So the companies with a higher maximum investment are promising a bigger return. Let’s say p is:
The max return solution is:
This solution has a projected return of 14.1%. This solution is 3-sparse for N = 5: only 3 of the 5 companies receive any investment. It concentrates as much investment as possible into the highest-returning company. Once that investment is maxed out, then the rest goes into the second-highest-returning company until the total investment limit is reached.
The most diverse solution is:
It has a PSR of 0.20834 and a return of 10.00016%.
What happens if one of the companies fails? If it's the high-return company, 4, the max return solution, since it is sparse, returns only 33.6% of the investor's money ($33,600). However, the diverse solution is much more robust and still returns over 86% of the investor's money ($86,041) if company 4 fails and the others meet their targets. For a 4% gain on return, the risk that you would lose over 50% of your money is clearly not worth it.
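The failure arithmetic can be checked directly: zero out the failed company's term and re-sum. The solution vectors and p below are hypothetical (the post's actual vectors aren't reproduced here), but they show the same pattern of a sparse solution losing far more than a diverse one:

```python
import numpy as np

p = np.array([1.05, 1.08, 1.10, 1.18, 1.12])              # hypothetical returns
x_sparse = np.array([0, 0, 25000, 30000, 45000])          # concentrated, max-return style
x_diverse = np.array([20000, 20000, 20000, 20000, 20000])  # spread out

def realized(p, x, failed):
    """Realized payout if the companies in `failed` go to zero
    and the rest meet their targets."""
    survive = np.ones_like(p)
    survive[list(failed)] = 0.0
    return (p * survive) @ x

print(realized(p, x_sparse, failed=[3]))   # company 4 (index 3) folds
print(realized(p, x_diverse, failed=[3]))  # the diverse portfolio keeps most of its value
```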
Maximizing return could give a sparse solution, even when you don't specifically seek it. The sparse solution is not robust to the risk of total failure, in this case, zero return. The minimum PSR solution sacrifices return to deliver a diverse solution which is more robust to failure, that is, it is much more likely to have an acceptable return. Notice that I am comparing max return vs. the diverse solution as strategies. Allowing the sparse solution is what makes max return less robust. When considering how to invest, the sparse solution is focusing your efforts, which could promise more return but is less robust. The diverse solution is hedging your bets. Once the minimum return goal is met, the diverse solution mitigates risk by spreading it out across a portfolio of investments.
Tags: applied math, diversity, integer program, investing, linear program, portfolio, risk, strategy