Using commercial solvers in academic research

Over the last two years or so, I’ve run into a couple of discussions about the use of commercial solvers in academic OR projects. There is often a moderate sense of unease about using commercial MILP solvers in our research. The problem is not that the software is commercial – at least, not since major solver developers started offering free-of-charge academic licenses – but that several components of these solvers are black boxes. Some components are well known, with code or pseudo-code available (cutting plane generation, for instance), but other parts of the package are not (preprocessing or root node heuristics, for example).

For the most part, what this means for researchers is that it is very difficult to tell which part of a commercial solver is responsible for the performance on a given application. “This model is solvable in less than one hour using a commercial solver” isn’t much of a scientific explanation. What if a pharmacist said: “I don’t know what this blue pill is, but it makes my patients feel a lot better”? So why do we use commercial solvers in our projects when open-source and/or freeware solvers such as CBC and SCIP exist?

Because commercial solvers are fast. I mean, really fast.

Hans Mittelmann’s MIP benchmarks, as of May 25th, show a performance ratio (measured as the geometric mean of runtimes across a large set of MIP instances) between CBC and CPLEX ranging from 11 to 1 up to 15 to 1, depending on how many threads are used. In other words, CBC is 11 to 15 times slower than CPLEX. Gurobi and Xpress are relatively close to CPLEX in performance, but GLPK, CBC and LPSOLVE are not. So whenever you design a new algorithm that improves upon an open-source solver, you’re going to get some criticism that you still don’t beat the best solvers on the market.
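For concreteness, here is a minimal sketch of how such a ratio of geometric means can be computed. The instance names and runtimes below are invented for illustration – they are not taken from Mittelmann’s benchmark data, and his scripts may use refinements such as shifted geometric means.

```python
import math

# Hypothetical runtimes (seconds) on the same set of MIP instances.
runtimes_cplex = {"inst1": 12.0, "inst2": 300.0, "inst3": 45.0}
runtimes_cbc   = {"inst1": 150.0, "inst2": 3600.0, "inst3": 620.0}

def geometric_mean(values):
    """Geometric mean: exponential of the average of the logarithms."""
    values = list(values)
    return math.exp(sum(math.log(v) for v in values) / len(values))

gm_cplex = geometric_mean(runtimes_cplex.values())
gm_cbc = geometric_mean(runtimes_cbc.values())

# Ratio of geometric means: "CBC is roughly N times slower than CPLEX."
print(f"performance ratio (CBC / CPLEX): {gm_cbc / gm_cplex:.1f}")
```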

So, what should we do? Compete against a Ferrari with undisclosed components, or work only with tools that we can fully understand and describe? The question remains open.

Comments

  1. Mikael Vejdemo-Johansson says:

    For academic research that _uses_ LP solvers, instead of academic research on how to build LP solvers, the case is slightly more manageable.

    I am currently working on a project where LP only shows up because it has to; I’m trying to compute a similarity measure that turns out to be re-phrasable as an LP problem. The difference between solvers was stark: CVXOPT couldn’t even represent the problem, Matlab would have taken about 6 months to compute everything, and CPLEX did it in a week. (A toy sketch of what phrasing a problem as an LP looks like appears after these comments.)

    At that stage, and when all I really care about is the solution and that it is a solution to a linear program, I am willing to contemplate taking IBM’s word for the integrity of the black-box guts of their solvers.

  2. If the purpose of the research is to provide a practical solution (model and/or solution strategy) to a real-world problem, and if the “owners” of the problem can plausibly justify the cost of commercial solvers, then I think it is perfectly fine to benchmark against them. The black box nature of the solver would bother me only if the point of the research were to make assertions about the performance of component algorithms of the solver (for instance, a claim that strong branching is the best choice for multi-vehicle routing problems involving ox-drawn carts).
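Picking up on the first comment’s point about re-phrasing a problem as an LP: below is a minimal, self-contained sketch of what that looks like in practice. The objective and constraints are made up for illustration – this is not the commenter’s similarity measure – and the choice of SciPy’s `linprog` is arbitrary; any LP solver interface would do.

```python
from scipy.optimize import linprog

# Toy LP:  minimize  x1 + 2*x2
# subject to         x1 + x2 >= 1        (written as -x1 - x2 <= -1)
#                    x1 - 2*x2 <= 4
#                    x1, x2 >= 0
c = [1.0, 2.0]                       # objective coefficients
A_ub = [[-1.0, -1.0],                # -x1 - x2 <= -1
        [ 1.0, -2.0]]                #  x1 - 2*x2 <= 4
b_ub = [-1.0, 4.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, res.fun)                # optimal point and objective value
```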
