Why You Should Learn Linear Programming

I attended university in pursuit of an undergraduate degree twice over a number of years.  At the school I first attended, I studied mathematics and computer science, and during that time I took a course in linear programming and convex optimization.  That course was highly illuminating.  I had already learned what linear programming and the simplex method were from a previous computer science course on algorithms.

A linear program is an optimization problem consisting of a set of linear inequalities, called constraints, and an expression to optimize, called an objective function.  Linear programming, and mathematical optimization in general, teach you to think about resource allocation problems in very general terms.  One could, for example, completely ignore the mechanisms of markets and trade and just measure outcomes.  Trade and markets could then simply be considered one algorithm for resource allocation among a population.  What is optimal, and how to coordinate an optimal outcome, are two very different problems.  In a pure theory sense, optimal allocations can be achieved from a wide variety of organizational or decision-making structures.

Linear programming is quite accessible.  It is helpful to have some familiarity with calculus, just to have a basic level of mathematical proficiency and concepts; however, calculus is not used directly in conventional linear programming solution techniques (i.e., the simplex method).

Linear programs can be constructed to represent a wide variety of optimization problems, albeit perhaps with significant simplification of the domain in question.  Common examples include maximizing profit with limited raw materials, certain simple scheduling or logistics problems, and much more.

Say, for example, you want to cook various recipes for a bake sale.  You are confident you can sell all of your baked goods at the going price.  But each baked good uses a different proportion of your available ingredients and yields a different amount of profit.

Consider the following setup, where each recipe uses a different amount of each ingredient.

  • 4 Recipes: brownies, cake, cookies, fudge
  • 4 Ingredients: chocolate, frosting, flour, sugar

We can make fractional batches in any proportion, we are limited only by the available ingredients, and we use the price of each batch to measure which combinations are better than others.

Linear programming uses the same technical ideas as economics, without the overhead of dealing with political allocation issues.  Understanding how to optimize without these political issues helps you understand the potential effects of such allocation rules much more objectively.  As is often the case, mathematics can be very value neutral, or better stated, the issue of convincing people to work together is a distinct matter from logistics and engineering.

I am not going to go over the simplex method, or interior point methods (which generalize to any convex optimization problem), in this post.  Rather, I just wish to convey what a linear program is, what solutions look like, and why this is important.

Why Linear Programs Are Important In Economics and Beyond

Learning linear programming, and the more general techniques of convex optimization, also helps highlight why convexity is such an important topic in economics.  This is not because all problems feature convexity, nor because convexity is always a reasonable or practical assumption, but rather because most general optimization techniques require assumptions such as convexity or analytic functions for the mathematics to work.

So you must learn convexity to build a good mathematical toolkit, and also to be able to recognize when problems have simple, globally optimal solutions.

Additionally, many problems may have structures that locally have features like convexity, monotonicity, and so on.  Many of these techniques can therefore be used even if the assumption does not apply to the whole problem space, but only locally.  You can then break your search into two steps: identify a sub-problem, then optimize within it.  This is like trying to find the tallest mountain in the world by visiting one region at a time and asking where that region's tallest mountains are.  Not every location is equally likely to contain the solution.

Finally, learning these optimization and search techniques under these assumptions is often a necessary first step to learning more general or advanced solving methods.


An Example Trial and Error Approach to Solving Linear Programs

It is often beneficial, or at least fun, when one encounters a new mathematical problem, to try to solve it on your own.  In this case, if we knew nothing about linear programming, one technique we might try is simple trial and error.

So here is a linear program.  I am not going to specify what the variables represent, although it could easily be applied to the bake sale example I just gave, as it has four primary variables (recipes), four inequalities (ingredients), and one objective function (profit).
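Since I am not reproducing specific numbers here, take the coefficients below as placeholders: each $c_j$ is the profit per batch of recipe $j$, each $a_{ij}$ is how much of ingredient $i$ one batch of recipe $j$ uses, and each $b_i$ is how much of ingredient $i$ is available.

$$
\begin{aligned}
\text{maximize}\quad & c_1 x_1 + c_2 x_2 + c_3 x_3 + c_4 x_4 \\
\text{subject to}\quad & a_{i1} x_1 + a_{i2} x_2 + a_{i3} x_3 + a_{i4} x_4 \le b_i, \qquad i = 1, \dots, 4 \\
& x_1,\ x_2,\ x_3,\ x_4 \ge 0
\end{aligned}
$$

Here $x_j$ is simply the number of batches of recipe $j$ that we decide to bake.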

 

Typically, we rewrite variables using the same letter with a subscripted index.  Or we can similarly write things in vector/linear algebra format.
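In that vector form, the same program is just (again, a sketch of the standard notation rather than the specific numbers):

$$
\max_{x \ge 0} \; c^\top x \quad \text{subject to} \quad A x \le b
$$

where $x$ collects the four batch quantities, $c$ the profits per batch, $A$ the $4 \times 4$ table of ingredient usage, and $b$ the available amount of each ingredient.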

We choose a bunch of random amounts of brownies, cake, cookies, and fudge, check whether there are sufficient ingredients, and, if there are, measure how much profit we would make from that particular combination.

We won't go over how the variables specifically relate to each of our recipes and ingredients.  If you want to understand this clearly, then you should probably go out and actually learn linear programming.  For now, it is sufficient to know that this is the math behind that specific problem.

The Python script is here: dart_optimum.py
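I won't paste the script verbatim here, but a minimal sketch of the same random-sampling idea looks like the following.  The ingredient usage, limits, and profits are made-up placeholder numbers, and the linked dart_optimum.py may be organized differently.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder data (not the real recipe numbers): rows are ingredients
    # (chocolate, frosting, flour, sugar), columns are recipes
    # (brownies, cake, cookies, fudge).
    A = np.array([[2.0, 3.0, 1.0, 4.0],
                  [1.0, 4.0, 0.5, 1.0],
                  [3.0, 4.0, 2.0, 1.0],
                  [2.0, 3.0, 2.0, 3.0]])
    b = np.array([40.0, 30.0, 50.0, 45.0])   # available amount of each ingredient
    c = np.array([4.0, 7.0, 3.0, 5.0])       # profit per batch of each recipe

    best_profit = 0.0
    best_x = np.zeros(4)

    # Throw "darts": random candidate batch counts, keep the best feasible one.
    # The sampling box [0, 30] is a rough hand-picked bound on any one recipe.
    for _ in range(100_000):
        x = rng.uniform(0.0, 30.0, size=4)   # random fractional batches
        if np.all(A @ x <= b):               # enough ingredients for this combination?
            profit = c @ x
            if profit > best_profit:
                best_profit, best_x = profit, x

    print("best profit found:", best_profit)
    print("batches:", best_x)

Most darts either miss (use more ingredients than we have) or land on mediocre combinations, but the best feasible throw found so far only ever improves.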

The first technique used by this script is very similar to repeatedly throwing darts at a dartboard to measure the area of a circle.  Such crude techniques can be very powerful, and yet incredibly simple.  The tradeoff is that you typically need a bunch of repeated computations.  Thankfully, computers and programming make this incredibly cheap for small problems.  Where such simple approaches break down is when the problem has sufficient complexity that trial and error becomes far too slow.  Combinatorics is the branch of mathematics that studies how one might generate and count patterns or sequences, and the number of combinatorial possibilities often grows exponentially with the number of steps or choices.
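For comparison, here is the dartboard trick itself in a few lines: throw uniform random points at a square, count how many land inside the inscribed quarter circle, and the hit fraction estimates the circle's share of the square's area (and hence pi).

    import random

    random.seed(0)
    throws = 1_000_000
    hits = 0

    # Throw darts uniformly at the unit square; the quarter circle of radius 1
    # covers pi/4 of it, so the hit fraction estimates pi/4.
    for _ in range(throws):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1

    print("estimated pi:", 4.0 * hits / throws)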

While there are many optimization problems that may require pure trial and error, such as cracking a password, mining bitcoin, or the class of problems known as NP-hard, other computational problems lend themselves to efficient algorithmic search.

Imagine you are trying to find a matching item in a sorted list of items.  One technique to do this is called binary search.  Basically, you split the list in half by checking the middle item.  If that item comes before the item we are looking for, in the ordering that was used to sort the list, then we continue our search with the second half of the list.  Otherwise, if the middle item comes after our search item, we look in the first half of the list.  By repeating this process, we are able to find the item in O(log2 N) time.  The specification of the logarithmic base is technically unnecessary, as big-O notation only considers running time to within some constant factor (as the search space grows arbitrarily large).  This is known as asymptotic runtime analysis.
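Sketched in Python (the standard library's bisect module provides a production version of the same idea), binary search looks like this:

    def binary_search(items, target):
        """Return the index of target in the sorted list items, or -1 if absent."""
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            elif items[mid] < target:   # target must be in the second half
                lo = mid + 1
            else:                       # target must be in the first half
                hi = mid - 1
        return -1

    print(binary_search([1, 3, 5, 7, 9, 11], 7))   # prints 3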

Well, with a linear program there is fortunately a specific algorithm for finding the optimal solution.  Because all the constraints are linear inequalities, and the objective function is linear as well, we know that we never need to "backtrack" from a better solution toward less optimal ones.

In a linear program, the linear inequalities that form the constraints carve out a multi-dimensional shape known as a convex polytope (the simplex method takes its name from a related geometric object, the simplex).  The number of dimensions is just the number of variables that can be adjusted.  Often, these inequalities are converted into equalities by adding "slack" variables, which simply represent how far below the limit of the inequality the expression evaluates to.
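To close the loop, here is how an off-the-shelf solver would handle the same placeholder bake sale numbers from the sketch above.  This uses scipy's linprog, which minimizes by convention, so the profits are negated; its default bounds already require the variables to be non-negative.

    import numpy as np
    from scipy.optimize import linprog

    # Same placeholder data as the random-sampling sketch above.
    A = np.array([[2.0, 3.0, 1.0, 4.0],
                  [1.0, 4.0, 0.5, 1.0],
                  [3.0, 4.0, 2.0, 1.0],
                  [2.0, 3.0, 2.0, 3.0]])
    b = np.array([40.0, 30.0, 50.0, 45.0])
    c = np.array([4.0, 7.0, 3.0, 5.0])

    # linprog minimizes c @ x subject to A_ub @ x <= b_ub and x >= 0 by default,
    # so maximizing profit means minimizing the negated profit vector.
    result = linprog(-c, A_ub=A, b_ub=b)

    print("optimal batches:", result.x)
    print("optimal profit:", -result.fun)

Unlike the dart-throwing approach, which only ever gets close, the solver lands exactly on an optimal corner of the feasible region.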



 

