Operations Research: A recursive art-and-science?

Disclaimer: The purpose of this post is to look at research we’re doing in a humorous way rather than to criticize recent research.

The question “is x an art or a science?” is a cliché (x being an element of a set here, not a vector of decision variables). My belief is that although we most often work on the scientific aspects of Operations Research, applying OR in practice requires quite a touch of art. In particular, any OR project requires solving a few unstructured decision problems, some of which may be recursive.

Say you have a scheduling problem to solve. Will you use constraint programming, integer programming, or a combinatorial optimization approach to model it? How will you solve it? These are all important choices, and they affect the performance of the approach. But then again, how do you define performance? How do you measure it? Science provides tools, but in real-life projects, some gut-feeling choices need to be made based on experience and judgement.

Two examples of the limitations of prescriptive science come to mind:

Selecting a method for solving a decision problem

Say you use two or more objectives to rank solutions to your scheduling problem. Deciding which multi-criteria decision making (MCDM) approach to use on a given problem is itself a multi-criteria decision making problem (Guitouni et al., 1991). Say you really believe in the relevance of MCDM methods. So you use an MCDM approach to select the best approach to select a solution for your decision problem. But then, selecting an appropriate method to select your method is complex, so you may want to either make an arbitrary choice based on gut feeling (your favourite OR teacher will tell you that this is bad) or use an MCDM approach… well, you get the idea of the algorithm:

Algorithm Select_MCDM_approach(P as problem)
1. Define problem P (alternatives, criteria, etc.)
2. Select_MCDM_approach(P)   // choosing the method is itself an MCDM problem
3. Apply the chosen technique to find a solution to P.
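For the fun of it, here is a toy Python version of that algorithm, with the one thing real software needs that the joke omits: a base case. The method names and the “gut feeling” fallback are my own illustrative inventions, not any real MCDM library.

```python
def select_mcdm_approach(problem, depth=0, max_depth=3):
    """Choose an MCDM method for `problem`.

    Choosing the method is itself an MCDM problem, so in principle we
    recurse forever; in practice we bottom out with a gut-feeling choice.
    """
    if depth >= max_depth:
        # The arbitrary choice your favourite OR teacher warned you about.
        return "gut feeling"
    # Frame "which MCDM method should I use?" as an MCDM problem...
    meta_problem = {
        "alternatives": ["AHP", "ELECTRE", "PROMETHEE", "TOPSIS"],
        "criteria": ["ease of use", "axiomatic soundness", "client buy-in"],
    }
    # ...and solve it the only way we know how: recursively.
    method = select_mcdm_approach(meta_problem, depth + 1, max_depth)
    # Pretend the meta-analysis picked the first alternative.
    return f"{meta_problem['alternatives'][0]} (chosen via {method})"

print(select_mcdm_approach({"alternatives": [], "criteria": []}))
```

The `max_depth` parameter is exactly the gut-feeling cutoff the post describes: at some point you stop meta-deciding and just pick.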

Selecting a solver to solve a class of mixed-integer programming problems

If I force myself to use a mono-objective model, it gets simpler, right? Not quite… let’s assume for a moment that your final user is a homo economicus who seeks to maximize his profit and cares for nothing else. Selecting a “solver” is far from obvious. Will you use a code you programmed yourself? A commercial solver? Say you limit your choices to commercial MIP solvers. Will you use the fastest? How do you define ‘fastest’? Fastest on average? On the average of what? Do you use a set of benchmark instances from the literature, or instances given to you by your client? (By now you sense a vague potential for MCDM here.)
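Even “fastest on average” hides a choice. A quick sketch with fabricated solve times (“Solver A” and “Solver B” are hypothetical, and the numbers are made up to illustrate the point): the arithmetic mean and the geometric mean can rank the same two solvers in opposite orders.

```python
import statistics

# Fabricated wall-clock times (seconds) on four benchmark instances.
times = {
    "Solver A": [1.0, 2.0, 2.0, 200.0],    # fast usually, one bad outlier
    "Solver B": [10.0, 10.0, 10.0, 10.0],  # consistently mediocre
}

for name, ts in times.items():
    print(
        name,
        "arithmetic:", round(statistics.mean(ts), 1),
        "geometric:", round(statistics.geometric_mean(ts), 1),
    )

# The arithmetic mean says Solver B is faster (10.0 vs 51.2);
# the geometric mean says Solver A is (5.3 vs 10.0).
```

Which average you trust depends on whether one pathological instance should dominate the comparison, which is, again, a judgement call.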

But wait, there’s more. You can tune the parameters of these solvers in the hope of finding the best tuning. As Paul A. Rubin pointed out during a talk at the INFORMS 2012 annual conference, tuning an optimization method is itself a complex optimization problem. Some methods have been proposed to tune metaheuristics, in particular. If someone wanted to look for the optimal solver-parameter pair, it would require quite a design of experiments (and some hours of computer time, too). It also rests on the hypothesis that the benchmark used is representative of the future instances to be solved. Then you would have to select that benchmark… and you can see where this is going.
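To see why the design of experiments gets expensive, here is a toy grid search over solver parameters. Everything here is a stand-in: the parameter names, the benchmark “instances”, and the `solve_time` function are fabricated for illustration; a real experiment would run an actual solver on real instances and record wall-clock times.

```python
import itertools

# Hypothetical solver parameters (not from any real solver's manual).
PARAM_GRID = {
    "cuts": ["off", "moderate", "aggressive"],
    "heuristic_freq": [0, 10, 100],
    "branching": ["pseudocost", "strong"],
}

benchmark = [5.0, 7.5, 12.0]  # fabricated instance "sizes"

def solve_time(instance, params):
    # Stand-in for "run the solver on this instance and time it".
    penalty = {"off": 3.0, "moderate": 1.0, "aggressive": 2.0}[params["cuts"]]
    return (instance + penalty + params["heuristic_freq"] * 0.01
            + (0.5 if params["branching"] == "strong" else 1.5))

def best_tuning(grid, instances):
    """Exhaustively try every parameter combination (a full factorial design)."""
    keys = list(grid)
    best = None
    for values in itertools.product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        total = sum(solve_time(i, params) for i in instances)
        if best is None or total < best[0]:
            best = (total, params)
    return best

total, params = best_tuning(PARAM_GRID, benchmark)
print(total, params)
```

Even this tiny grid means 3 × 3 × 2 = 18 solver runs per benchmark sweep; with realistic parameter counts and time limits, the full factorial design blows up fast, which is why tuning is itself treated as an optimization problem.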

Why it still matters

In journal papers, we can pretend that we know the ‘real deal’ answer. That’s fine. But the truth is, these “meta-problems” may very well fascinate me more than the classical problems I studied during my Ph.D. They are complex, and they have as much impact on the performance of OR-based decision support systems as the latest cutting-plane algorithm paper you just read. I learned a lot about theoretical OR by trying to find out how solvers work. It made me read papers and research that I would never have read otherwise. I won’t say that I’m a good OR practitioner, but it definitely made me a better one.
