Is your research paper a star, a comet, or an asteroid?

One of the tasks I find challenging is determining which papers to recommend when introducing grad students to a particular field. How should one split the assignments between classics and current hot topics? To help students anticipate what kind of paper they are about to read, I came up with a metaphor that I find quite useful, and I thought I'd share it in case you find it useful too.


In terms of research papers, a star is a masterpiece that has a profound and lasting impact on a field. It is well written, has deep implications, provokes thought and will likely be worth reading again in several decades. It proposes either a sound technique that can be applied to many problems, or results so strong that they are likely to remain unbeaten for a long time. I also put in that category papers that thoroughly summarize a broad range of advances in a field and allow newcomers to understand many key concepts without having to read dozens of articles. Beethoven's 9th Symphony (op. 125) is a good musical example.

In my field, supply chain network design (SCND), there is a paper that perfectly fits that definition: A.M. Geoffrion and G.W. Graves' Multicommodity Distribution System Design by Benders Decomposition, published in Management Science in 1974. More on this in an upcoming blog post.


A comet is a bright, shiny object that generates a lot of attention. It sits at the center of a hot topic or trend in research. However, its relevance is tied to a particular time and context, and its intrinsic value diminishes quickly: either the method gets replaced by something more effective, or the discussion moves on.

As a researcher, one usually remembers the papers that were comets at the time one was introduced to a topic. It's important to know which comets are currently in the sky. What is worth reading in that category changes rather quickly, so it is a good idea to keep this list updated frequently if you don't want your students to waste their time on outdated approaches.

When I started working on SCND, international supply chains were a hot topic. Multinationals could use transfer prices to affect the taxes they paid in each country, which also made the models more challenging to solve. However, governments have since largely removed the freedom to set these prices in order to prevent tax evasion, thereby killing the practical relevance of these problems.


Asteroids form the vast majority of research papers. Most of them are not relevant to you (or your students) unless they are very close to the particular topic or approach you are currently working on. In many broad fields (like vehicle routing), it's impossible to read even 10% of the asteroids out there. Don't waste your time reading too many papers of that kind: only read what is very close to your research, unless you want to spend your life writing literature reviews.

In SCND, there are hundreds of papers proposing heuristics for a particular set of instances or a very specific formulation. Usually the algorithm is only slightly faster than the previous state of the art, and the article's value drops to near zero as soon as someone publishes a faster one.

What to read, then?

The more I think about this problem, the more I believe you can't skip the classics (a.k.a. stars), especially at the Ph.D. level. If a particular paper is too difficult to understand, it can be replaced by a book chapter or another, more accessible reading. In terms of research, people who haven't mastered the stars commonly mistake comets for stars and asteroids for comets.

How problematic are your Big M constraints? an experiment [primal]

This post investigates whether it is still relevant to be careful about the coefficients used in so-called Big M constraints when formulating mixed-integer programming (MIP) models. I ran experiments on two-echelon supply chain design models showing that using overly large values often makes the models harder to solve. The impact is much greater on weaker solvers than on commercial ones. In the associated dual post, I explain what Big M constraints are and why they tend to cause problems when solving MIPs.

A bit of context

When I started learning about integer programming in the early 2000s, I often got the following advice regarding modelling using Big M constraints:

  1. Use as few Big M constraints as possible;
  2. If you need to use them, make the coefficients (the Ms) as small as possible.
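To make the second piece of advice concrete, here is a minimal sketch (with hypothetical numbers, not taken from my experiments) of why an oversized M hurts: in a constraint like x ≤ M·y, once the binary y is relaxed to [0, 1], the LP relaxation can support a flow x with a tiny fractional y = x/M, so opening the facility looks almost free when M is huge.

```python
# Sketch: how an oversized M weakens the LP relaxation of  x <= M * y.
# Once y is relaxed to [0, 1], the cheapest y supporting a flow x is x / M:
# the larger M is, the closer to "free" opening the facility looks.

def min_fractional_y(x, M):
    """Smallest y in [0, 1] satisfying x <= M * y."""
    return x / M

flow = 50.0
print(min_fractional_y(flow, 100))        # tight M: y = 0.5
print(min_fractional_y(flow, 1_000_000))  # oversized M: y = 0.00005
```

With the tight M, the relaxation must charge half the fixed cost to support the flow; with the oversized M it charges almost nothing, which typically translates into a weaker bound and more branching.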

I have always followed this advice, but I never really tested the assumptions in practice. Solvers are now several orders of magnitude faster than they were when I learned modelling, so is this advice still relevant? I wanted to find out.

The experiment

I put together three supply chain design instances. More precisely, they are Two-Echelon (hierarchical) Uncapacitated Facility Location models. They are of size 30 x 30 x 50 and are identical in structure to the models shown here, except that I did not use the single-sourcing constraints. Each instance thus has 60 Big M constraints, one per facility, ensuring that a facility can process products only if it is built. I used three different levels of coefficients: (i) the smallest (tightest) possible value, (ii) a larger value of the same order of magnitude as the smallest one, and (iii) a huge value at least 100 times larger than the tightest one. The instances are identical except for these 60 coefficients. I ran the models through three solvers: CPLEX, CBC and GLPK, with a time limit of 2 hours, and computed the geometric mean of run times (in seconds).
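As an aside, the geometric mean I use to aggregate run times (less sensitive to a single slow run than the arithmetic mean) can be computed as below; the run times here are made up for illustration.

```python
import math

def geomean(times):
    """Geometric mean of a list of positive run times (seconds)."""
    return math.exp(sum(math.log(t) for t in times) / len(times))

print(geomean([2.0, 20.0, 200.0]))  # approximately 20.0; arithmetic mean would be 74.0
```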
Keep in mind that these are the same supply chains! CBC is strongly affected by the change, needing on average 6.5 times more time for option (ii) and a whopping 15.5 times more for option (iii). CPLEX is also affected, but to a much lesser extent: by a factor of about 2 and 4, respectively. GLPK struggles with these models even when they are tightly formulated; with larger coefficients, it simply can't solve them within the allotted time, finishing with a gap between 5% and 18% depending on the instance.

The commercial solver is not only faster, it is also less affected by unnecessarily large coefficients. I didn't post results for Gurobi, but its performance is quite comparable to CPLEX's. This also shows the canyon between free and commercial solvers in terms of performance.

If you found this post useful or insightful, please share it with your colleagues, friends or students!

Extra: Why choose this formulation?

For full disclosure, I mentioned in the dual post that the model I use is not the most efficient one for solving this problem. I used it anyway for two reasons:

  • The model is easy to understand, compared to more complex variants of supply chain design which have many types of binary variables.
  • The best (tightest) values of the Big M coefficients are very straightforward to obtain in this model.
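As a sketch of that second point (with made-up demand figures, not my actual instances): in the uncapacitated model, a facility can never ship more than the total customer demand, so the sum of demands is both a valid and the smallest safe choice of M.

```python
# Hypothetical sketch: deriving the tightest big-M for the constraint
#   sum_j flow[i][j] <= M_i * open[i]
# in an uncapacitated facility location model. A facility can never ship
# more than the total demand, so M_i = sum of demands is valid and as
# small as possible without cutting off any feasible solution.

demands = {"c1": 40, "c2": 25, "c3": 35}  # made-up customer demands

tight_M = sum(demands.values())  # smallest safe coefficient
huge_M = 1_000_000               # also valid, but weakens the LP relaxation

print(tight_M)  # 100
```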


What are Big M constraints? [dual]

This post presents a class of constraints that are used very often in mixed-integer programming (MIP) models. I explain what they are, why they are important and why using too large values for the big M is problematic. The associated primal post … [Continue reading]

A relevant #orms resolution for 2017: Update your solvers!

Making resolutions during the New Year has become something of a tradition. Some are followed, many are not. Here is one resolution I made years ago that has remained pretty relevant, and I encourage you to adopt it as well: update your solvers … [Continue reading]

The recommendation game [dual]

People who teach or supervise students have a tremendous influence on what solvers get adopted by the community. Once they finish their studies, many students will continue to use the tools they have learned in school; it is simply more efficient. In … [Continue reading]

The recommendation game [primal]

People who teach or supervise students have a tremendous influence on what solvers get adopted by the operations research / industrial engineering community. Once they finish their studies, many students will continue to use the tools they have … [Continue reading]

Facility location : not so difficult (with the proper tools)

In a previous post, I generated a few capacitated facility location instances (CFLP) and I ran these through MIP solvers. The instances were solved pretty quickly overall. In a comment, professor Matteo Fischetti suggested I compare my results with … [Continue reading]

Facility location : mistake, issue and results

About two weeks ago, I generated a few capacitated facility location instances (CFLP) for some students to play with. When I ran these through the CPLEX and Gurobi solvers, all of them were solving very quickly. Gurobi in fact seemed to find the … [Continue reading]

Facility location : presolved to optimality!

  ** IMPORTANT NOTICE ** This post has temporarily been suspended as some readers noticed potential problem with the model files. I will issue a corrected post shortly. I am sorry for any inconvenience. Marc-André … [Continue reading]

It's time to talk more about barrier in class

In today's post, I argue that optimization classes and textbooks should put a greater emphasis on interior point methods. During my many years of study, I have been exposed to quite a bit of simplex theory. I also had to perform many primal and … [Continue reading]