Linear programming with scipy.optimize

The scipy.optimize library provides the function linprog, which solves linear programming problems.

Here is a link to the documentation of this function: https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html
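linprog expects the problem to be written in the following standard form (a minimization, with upper-bound inequality constraints, optional equality constraints, and bounds on the variables):

$$\min_{x} \; c^{T} x \quad \text{subject to} \quad A_{ub}\, x \leq b_{ub}, \quad A_{eq}\, x = b_{eq}, \quad l \leq x \leq u$$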

Importing the linprog function:

In [1]:
from scipy.optimize import linprog

Example 1

We want to maximize

$$f(x_1, x_2) = 200 x_1 + 100 x_2$$

Subject to:

\begin{align} x_1 + x_2 & \leq 100\\ 100 x_1 + 300 x_2 & \leq 27000\\ 4 x_1 + x_2 & \leq 280 \\ x_1 & \geq 0\\ x_2 & \geq 0 \end{align}

scipy.optimize.linprog finds a minimum, so we replace the objective function with its negative:

$$g(x_1, x_2) = -200 x_1 - 100 x_2$$

In matrix form:

$$g(x_1, x_2) = \begin{bmatrix} -200 & -100 \\ \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix} $$

Constraints: $$ \begin{bmatrix} 1 & 1 \\ 100 & 300 \\ 4 & 1 \\ \end{bmatrix} \cdot \begin{bmatrix} x_1 \\ x_2 \\ \end{bmatrix} \leq \begin{bmatrix} 100 \\ 27000 \\ 280 \end{bmatrix} $$

Using the linprog function:

In [2]:
res = linprog(c = [-200, -100], 
              A_ub = [[1, 1], [100, 300], [4, 1]],
              b_ub = [100, 27000, 280],
              bounds = [(0, None), (0, None)], 
             )
res
Out[2]:
        message: Optimization terminated successfully. (HiGHS Status 7: Optimal)
        success: True
         status: 0
            fun: -16000.0
              x: [ 6.000e+01  4.000e+01]
            nit: 2
          lower:  residual: [ 6.000e+01  4.000e+01]
                 marginals: [ 0.000e+00  0.000e+00]
          upper:  residual: [       inf        inf]
                 marginals: [ 0.000e+00  0.000e+00]
          eqlin:  residual: []
                 marginals: []
        ineqlin:  residual: [ 0.000e+00  9.000e+03  0.000e+00]
                 marginals: [-6.667e+01 -0.000e+00 -3.333e+01]
 mip_node_count: 0
 mip_dual_bound: 0.0
        mip_gap: 0.0

Return values:

  • x: array with coordinates that give the minimum of the objective function
  • fun: minimal value of the objective function
  • slack: values of slack variables (automatically created)
  • success: True if an optimal solution has been found
  • status: it is 0 if everything went fine, otherwise a code indicating what went wrong
  • nit: number of iterations performed

These values can be accessed individually:

In [3]:
res.x
Out[3]:
array([60., 40.])
In [4]:
res.slack
Out[4]:
array([   0., 9000.,    0.])
In [5]:
res.fun
Out[5]:
-16000.0
In [6]:
res.status
Out[6]:
0
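Since linprog minimized the negated objective $g = -f$, the maximum of the original objective function is the negative of res.fun. For example, using the res object computed above:

max_value = -res.fun   # maximum of the original (maximization) objective
print(max_value)       # 16000.0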

Example 2

Here is an example of an unbounded linear program: the objective function can be made arbitrarily small on the feasible set, so there is no minimum.

We want to minimize

$$f(x_1, x_2) = -x_1$$

Subject to:

\begin{align} x_1 - \frac{2}{3}x_2 & \leq 2\\ - x_1 - x_2 & \leq -1 \\ x_1 & \geq 0\\ x_2 & \geq 0 \end{align}
In [7]:
res = linprog(c = [-1, 0], 
              A_ub = [[1, -2/3], [-1, -1]],
              b_ub = [2, -1],
              bounds = [(0, None), (0, None)], 
             )
res
Out[7]:
       message: The problem is unbounded. (HiGHS Status 10: model_status is Unbounded; primal_status is At upper bound)
       success: False
        status: 3
           fun: None
             x: None
           nit: 2
         lower:  residual: None
                marginals: None
         upper:  residual: None
                marginals: None
         eqlin:  residual: None
                marginals: None
       ineqlin:  residual: None
                marginals: None
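Since linprog may fail to find a solution (as in this example), it is good practice to check res.success or res.status before using res.x and res.fun. A minimal sketch, using the status codes listed in the linprog documentation:

# status codes: 0 = optimal, 1 = iteration limit reached,
# 2 = problem infeasible, 3 = problem unbounded, 4 = numerical difficulties
if res.success:
    print("Optimal value:", res.fun, "at x =", res.x)
else:
    print("No optimal solution found:", res.message)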

Efficiency of the simplex method

Theoretically, it would be possible to solve any linear program without using the simplex method, as follows:

  • compute all basic feasible solutions
  • compute the value of the objective function for all these basic feasible solutions
  • pick the basic feasible solution with the largest value of the objective function.

The problem is that, for larger linear programs, the number of basic feasible solutions is far too large to enumerate in practice. The advantage of the simplex method is that it visits only a small subset of the basic feasible solutions. Most basic feasible solutions are never computed, since the method never moves to a basic feasible solution that gives a worse value of the objective function than the one already found. This greatly reduces the amount of computation.
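To illustrate the enumeration approach described above, here is a minimal brute-force sketch for the two-variable problem from Example 1: it computes all intersections of pairs of constraint lines (the candidate basic feasible solutions), keeps the feasible ones, and picks the one with the largest value of the objective function. The arrays A, b, c below simply restate the data of Example 1, with the nonnegativity conditions written as two extra rows.

import numpy as np
from itertools import combinations

# All constraints of Example 1 written as A x <= b
# (the last two rows encode x1 >= 0 and x2 >= 0)
A = np.array([[1, 1], [100, 300], [4, 1], [-1, 0], [0, -1]], dtype=float)
b = np.array([100, 27000, 280, 0, 0], dtype=float)
c = np.array([200, 100], dtype=float)   # original (maximization) objective

best_x, best_val = None, -np.inf
for i, j in combinations(range(len(b)), 2):
    try:
        x = np.linalg.solve(A[[i, j]], b[[i, j]])   # intersection of two constraint lines
    except np.linalg.LinAlgError:
        continue                                    # the two lines are parallel
    if np.all(A @ x <= b + 1e-9):                   # keep only feasible intersections
        val = c @ x
        if val > best_val:
            best_x, best_val = x, val

print(best_x, best_val)   # [60. 40.] 16000.0 -- agrees with the linprog result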

For example, the code below randomly creates a linear programming problem with 100 variables and 100 inequality constraints. Such a problem can have up to $10^{59}$ basic feasible solutions. This means that a computer that could check 1,000,000,000 such solutions per second would need about $10^{43}$ years to examine all of them. The code prints the number of pivot steps used by the simplex method and the time taken to complete these computations.

In [8]:
import numpy as np
from time import time

nconstr = 100   # number of inequality constraints
nvar = 100      # number of variables
rng = np.random.default_rng()
st = time()
res = linprog(c = rng.uniform(-10, 5, nvar),                # random objective coefficients
              A_ub = rng.uniform(0, 10, (nconstr, nvar)),  # random constraint matrix
              b_ub = rng.uniform(0, 10, nconstr)           # random right-hand sides
             )
et = time()
print(res.message)
print(f"Number of pivot steps: {res.nit}")
print(f"Computed in {et - st:.2f} seconds")
Optimization terminated successfully. (HiGHS Status 7: Optimal)
Number of pivot steps: 19
Computed in 0.01 seconds