A relevant #orms resolution for 2017: Update your solvers!

Making resolutions during the New Year has become something of a tradition. Some are followed, many are not. Here is one resolution I made years ago that has served me well, and I would encourage you to adopt it too:

Update your solvers whenever a new version is released.

Like, every time. It’s worth it. Two reasons:

Performance Improvements

Every major release of a mathematical programming solver comes with significant performance improvements. For instance, Gurobi reports a 22% average improvement on mixed-integer programming (MIP) models and 10% on pure linear programming models between versions 6.5 and 7. CPLEX is usually in the same ballpark of version-to-version improvements. Mosek reports a 40% mean improvement for conic quadratic problems between versions 7 and 8. These are pretty significant, especially when compounded over multiple versions. I give a personal example here.
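To see how quickly these gains compound, here is a quick back-of-the-envelope calculation. It simply reuses the 22% per-release MIP figure cited above as a stand-in; actual speedups vary a lot by model and release.

```python
# Back-of-the-envelope: compounding per-release speedups.
# Assumes the 22% mean MIP speedup cited above holds for each
# of three consecutive major releases you skipped.
per_release_speedup = 1.22
releases_skipped = 3

total_speedup = per_release_speedup ** releases_skipped
print(f"Total speedup: {total_speedup:.2f}x")  # roughly 1.82x
```

In other words, staying three major releases behind can mean leaving nearly half your potential performance on the table, for free.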

New features

Over the last few years, solver developers have added some pretty significant new functionality. In 2016 alone, CPLEX included automatic Benders decomposition in its 12.7 release, and Gurobi added native support for multiobjective optimization, among others. Before you implement your own customised algorithm, be sure to check what's new: you could discover that it is now included in a standard release!

Why take some time to write about it?

You’d be surprised how often I see students running models on older workstations or servers using outdated solvers, like CPLEX 12.4 or Gurobi 6.0.5. I sometimes referee papers comparing the authors’ custom algorithms against even older versions.

Here are links to where you can find information about new versions and performance improvements (in alphabetical order). These are usually presented at scientific conferences as well, the INFORMS Annual Meeting in November being the biggest rendez-vous if you want to hear about what's new.

  • CPLEX: you can generally hear about what's new and relevant on this blog.
  • FICO Xpress: it's a bit hard to find, but they sometimes release a webinar about their new features here.
  • Gurobi has a very convenient page about what's in the latest release.
  • Mosek: you can find links to copies of slides presented at conferences. They usually discuss their performance improvements there.


  1. Mark L. Stone says

    “I sometimes referee papers comparing the authors’ custom algorithms against even older versions.”

    How much of this is due to the author(s) not having access to new versions as opposed to trying to make their algorithm look better by comparing to outdated competition? What are the editorial outcomes when you come across such things as a referee?

    • That is a relevant question. Academic users can still get updated versions at no cost, so there is no reason for them to use older versions. It could however happen when someone leaves academia and is no longer in an environment where (s)he can get access to such licences. While that can happen, it was not the case in the papers I referred to in the post, as some of their authors were faculty.

      As for the editorial outcomes, this alone is not a reason for rejection. That being said, it weakens an empirical proof of performance. There was a case where a paper provided only a marginal improvement over the solver it was compared against, and that solver was badly outdated (5-6 years old).
