5 reasons why solver developers should not listen (too much) to academia [dual]

We academics are very eager to offer advice. Here are a few reasons why solver developers should sometimes refrain from listening too closely to what academia has to say. I don’t mean ignoring the scientific literature, but rather the “advice” and feature requests coming from graduate students and professors. You can find a set of arguments supporting the opposite opinion in the associated primal post.

Why you shouldn’t listen too much to academia:

  1. More often than not, academics don’t pay licence fees for solvers. That means we’re not your actual customers, no matter how loud we are.
  2. We’re not like your typical customers. We focus on different things, and we expect and want different things. If you’re not careful, we may distract you from what you do best.
  3. The big picture is not our traditional focus. The simple fact that we have to write 10-page research papers shapes how we do research: we make small, provable contributions without taking too many external interactions into account, because doing so would limit our ability to explain what happens.
  4. Our improvements may not carry over to your products. As @JFPuget once pointed out, improvements tested on open-source solvers such as CBC or GLPK may yield very different results when implemented in a commercial solver, and may cause issues such as increased memory usage or occasionally degraded performance.
  5. It’s almost impossible to remove all sources of bias in evaluation. This one is tricky. Efficient implementation is often as important as the quality of the idea itself, yet it’s quite difficult to separate the quality of an idea from the quality of its implementation just by looking at the results in a paper. It’s rare to see a researcher, or even a team of researchers, equally and highly proficient with all the methods compared in a paper. An excellent idea may yield mixed results if implemented badly. Another discussion of bias in scientific experiments can be found in this post by John D. Cook.

That being said, I still believe solver developers should keep an eye on what is done in the academic world. Evidence from the evolution of state-of-the-art solvers also seems to support this claim: techniques first proposed in research papers have been integrated into MIP solvers, resulting in significant performance improvements.


  1. A somewhat less concise version of this article would be interesting to read. In particular, I would be very interested in the details that academia mostly ignores (e.g. memory usage, consistency of solving times across different problems, etc.) from an industry perspective.
