9.5 Examples

Norm and Penalty Approximation

In the first example we solve the norm approximation problems

\begin{displaymath}
\begin{array}{ll}
\mbox{minimize} & \Vert Ax - b\Vert_\infty
\end{array}
\qquad \mbox{and} \qquad
\begin{array}{ll}
\mbox{minimize} & \Vert Ax - b\Vert_1,
\end{array}
\end{displaymath}

and the penalty approximation problem

\begin{displaymath}
\begin{array}{ll}
\mbox{minimize} & \sum_k \phi((Ax-b)_k),
\end{array}
\qquad \mbox{where} \quad
\phi(u) = \left\{ \begin{array}{ll}
0 & \vert u\vert \leq 3/4 \\
\vert u\vert - 3/4 & 3/4 \leq \vert u\vert \leq 3/2 \\
2\vert u\vert - 9/4 & \vert u\vert \geq 3/2.
\end{array}\right.
\end{displaymath}
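For reference, the penalty can be evaluated pointwise in plain Python. The helper below is our illustration of the piecewise definition (the name phi and the code are not part of the CVXOPT example); note that 0.75 and 2.25 are the same constants that appear in the modeling code further down.

def phi(u):
    # 0 for |u| <= 3/4, slope 1 for 3/4 <= |u| <= 3/2, slope 2 beyond
    a = abs(u)
    if a <= 0.75:
        return 0.0
    elif a <= 1.5:
        return a - 0.75
    else:
        return 2*a - 2.25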

We use randomly generated data.

The code below uses the Matplotlib package to plot histograms of the residual vectors for the three solutions. It generates the figure shown below.

from cvxopt import normal
from cvxopt.modeling import variable, op, max, sum
import pylab

m, n = 500, 100
A = normal(m,n)
b = normal(m)

# infinity-norm approximation: minimize the largest absolute residual
x1 = variable(n)
op(max(abs(A*x1-b))).solve()

# 1-norm approximation: minimize the sum of absolute residuals
x2 = variable(n)
op(sum(abs(A*x2-b))).solve()

# penalty approximation with the piecewise-linear penalty phi
x3 = variable(n)
op(sum(max(0, abs(A*x3-b)-0.75, 2*abs(A*x3-b)-2.25))).solve()

# convert residuals to lists for matplotlib; m//5 bins (integer division)
pylab.subplot(311)
pylab.hist(list(A*x1.value-b), m//5)
pylab.subplot(312)
pylab.hist(list(A*x2.value-b), m//5)
pylab.subplot(313)
pylab.hist(list(A*x3.value-b), m//5)
pylab.show()

\includegraphics[width=\linewidth]{figures/normappr.eps}

Equivalently, we can formulate and solve the problems as LPs.
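Written out (this display is our addition, implied by the code that follows), the reformulations of the first two problems are

\begin{displaymath}
\begin{array}{ll}
\mbox{minimize} & t \\
\mbox{subject to} & -t\,{\bf 1} \preceq Ax - b \preceq t\,{\bf 1}
\end{array}
\qquad \mbox{and} \qquad
\begin{array}{ll}
\mbox{minimize} & {\bf 1}^Tu \\
\mbox{subject to} & -u \preceq Ax - b \preceq u,
\end{array}
\end{displaymath}

while the penalty problem minimizes ${\bf 1}^Tv$ subject to $v \succeq 0$, $v \succeq \vert Ax-b\vert - (3/4){\bf 1}$ and $v \succeq 2\vert Ax-b\vert - (9/4){\bf 1}$, with the absolute values imposed through pairs of linear inequalities.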

# infinity-norm problem as an LP with a scalar bound t
t = variable()
x1 = variable(n)
op(t, [-t <= A*x1-b, A*x1-b <= t]).solve()

# 1-norm problem as an LP with elementwise bounds u
u = variable(m)
x2 = variable(n)
op(sum(u), [-u <= A*x2-b, A*x2-b <= u]).solve()

# penalty problem as an LP: v_k bounds the penalty of the k-th residual
v = variable(m)
x3 = variable(n)
op(sum(v), [v >= 0, v >= A*x3-b-0.75, v >= -(A*x3-b)-0.75, v >= 2*(A*x3-b)-2.25, v >= -2*(A*x3-b)-2.25]).solve()
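As a quick numerical check (our addition), at the optimum of the first LP the bound t is tight, so the largest absolute residual should equal t.value[0] up to solver tolerance:

# Built-in max is shadowed by cvxopt.modeling.max, so use sorted instead.
r = A*x1.value - b
print(t.value[0], sorted(abs(r[k]) for k in range(m))[-1])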

Robust Linear Programming

The robust LP

\begin{displaymath}
\begin{array}{ll}
\mbox{minimize} & c^T x \\
\mbox{subject to} & \sup_{\Vert v\Vert_\infty \leq 1} (a_i+v)^T x \leq b_i, \qquad i=1,\ldots,m
\end{array}
\end{displaymath}

is equivalent to the problem (since $\sup_{\Vert v\Vert_\infty \leq 1} v^Tx = \Vert x\Vert_1$)

\begin{displaymath}
\begin{array}{ll}
\mbox{minimize} & c^Tx \\
\mbox{subject to} & a_i^Tx + \Vert x\Vert_1 \leq b_i, \qquad i=1,\ldots,m.
\end{array}
\end{displaymath}

The following code computes the solution and the solution of the equivalent LP

\begin{displaymath}
\begin{array}{ll}
\mbox{minimize} & c^Tx \\
\mbox{subject to} & a_i^Tx + {\bf 1}^Ty \leq b_i, \qquad i=1,\ldots,m \\
& -y \preceq x \preceq y
\end{array}
\end{displaymath}

for randomly generated data.

from cvxopt import normal, uniform
from cvxopt.modeling import variable, dot, op, sum
from cvxopt.blas import nrm2

m, n = 500, 100
A = normal(m,n)
b = uniform(m)
c = normal(n)

# robust LP: the scalar sum(abs(x)) = ||x||_1 is added to every row of A*x
x = variable(n)
op(dot(c,x), A*x+sum(abs(x)) <= b).solve()

# equivalent LP with auxiliary variable y bounding |x| elementwise
x2 = variable(n)
y = variable(n)
op(dot(c,x2), [A*x2+sum(y) <= b, -y <= x2, x2 <= y]).solve()
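As a sanity check (our addition), the two solutions should agree up to solver tolerance; nrm2 from cvxopt.blas, imported above, returns the Euclidean norm of a dense matrix:

# Distance between the two computed solutions; should be near zero.
print(nrm2(x.value - x2.value))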

1-Norm Support Vector Classifier

The following problem arises in classification:

\begin{displaymath}
\begin{array}{ll}
\mbox{minimize} & \Vert x\Vert_1 + {\bf 1}^Tu \\
\mbox{subject to} & Ax \succeq {\bf 1}-u \\
& u \succeq 0.
\end{array}
\end{displaymath}
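Here A is assumed to be given: in the usual reading of this formulation, row k of A is the k-th training example scaled by its plus/minus one class label. For a self-contained run, data could be generated along these lines (an illustrative sketch, not part of the original example):

from cvxopt import normal, matrix
from cvxopt.modeling import variable, op, max, sum  # used by the solver code below

m, n = 200, 20
Z = normal(m, n)      # one training example per row (synthetic)
w0 = normal(n)        # hypothetical "true" separating direction
s = Z*w0              # scores used to assign the +/-1 labels
A = matrix(0.0, (m, n))
for k in range(m):
    label = 1.0 if s[k] > 0 else -1.0
    A[k,:] = label * Z[k,:]   # example scaled by its label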

It can be solved as follows.

x = variable(A.size[1],'x')
u = variable(A.size[0],'u')
op(sum(abs(x)) + sum(u), [A*x >= 1-u, u >= 0]).solve()
An equivalent unconstrained formulation, obtained by eliminating u (at the optimum, u = max(0, 1-A*x) elementwise), is

x = variable(A.size[1],'x')
op(sum(abs(x)) + sum(max(0,1-A*x))).solve()
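To confirm the equivalence numerically (our addition), keep references to the problem objects and compare their optimal values; op.objective.value() returns the objective evaluated at the computed solution:

# Solve both formulations and compare optimal objective values.
x = variable(A.size[1],'x')
u = variable(A.size[0],'u')
p1 = op(sum(abs(x)) + sum(u), [A*x >= 1-u, u >= 0])
p1.solve()

x = variable(A.size[1],'x')
p2 = op(sum(abs(x)) + sum(max(0,1-A*x)))
p2.solve()

print(p1.objective.value()[0], p2.objective.value()[0])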