Using the CPLEX tuning tool on difficult models

The question of how a tuning tool can help solve difficult optimization models has intrigued me for a while. For those new to the concept, the tuning tool tries different parameter settings and seeks good parameter values for the solver on a given optimization model. Using the tool on easy models is rather straightforward; using it efficiently on harder models is less obvious. I talked about this with Dr. Ed Klotz from IBM CPLEX, and this post collects some of his advice on the topic.

Dr. Klotz’s first observation is that you typically reach for the tuning tool after you have just run the model with default settings. Since you are not interested in settings that take longer than the default, you can set the tuning time limit (a parameter distinct from the solver’s time limit) to the time needed to solve the model with default settings.

The second observation is that the tuning tool might run for a very long time. Another piece of advice is therefore to put a time limit on the full tuning process (through the solver’s time limit parameter) so you don’t have to kill CPLEX yourself. If you do force a termination, you lose the best parameter settings CPLEX has found so far. You can easily set this time limit so that CPLEX works overnight and the results are waiting for you in the morning.
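To make the two limits concrete, here is a minimal sketch using the CPLEX Python API. The model file name and the time values are placeholders, and the exact parameter names (for instance parameters.tuning.timelimit) can differ between CPLEX versions, so treat this as an illustration rather than a definitive recipe.

```python
# Hypothetical sketch of the workflow described above, using the CPLEX
# Python API; parameter names may differ slightly across CPLEX versions.
import cplex

c = cplex.Cplex("model.mps")          # placeholder model file

# Suppose the default run took about 1200 s: don't let tuning spend more
# time than that on any candidate setting (tuning time limit).
c.parameters.tuning.timelimit.set(1200)

# Overall budget for the whole tuning process, e.g. one night (8 hours),
# set through the solver's regular time limit parameter.
c.parameters.timelimit.set(8 * 3600)

status = c.parameters.tune_problem()
if status == c.parameters.tuning_status.completed:
    print("Tuning finished within the time budget.")
else:
    print("Tuning stopped at the time limit; best settings so far are kept.")

# Save the recommended settings for later runs.
c.parameters.write_file("tuned.prm")
```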

If the above tips fail to achieve interesting results, you can either try running your model on a faster machine or, as a last resort, run it on a cluster of machines using distributed concurrent MIP through the remote object. If these pieces of advice seem unsatisfactory, please remember that while free food may be abundant in seminars, free lunches in mixed-integer programming are indeed very rare.

OR-Enabled Collaborative Logistics

Today’s post is about collaborative logistics, a field of study I recently started working in. It is a tricky field for many reasons, and I will try to explain why, and how operations research is a key enabler of collaboration. Logistics is essentially the management of the activities of transporting and holding materials from origins to consumers, and it has its roots in military science.

In collaborative logistics, two companies coordinate their activities in order to lower costs or improve service. For example, two Canadian manufacturers that ship less-than-truckload (LTL) to the United States could pool their transportation needs so as to ship full truckloads and thus reduce total cost; in many cases, savings of 5% or more are possible through this kind of collaboration. Another key idea for reducing logistics costs is backhauling. If you ship a truckload from A to B, the truck might return empty from B to A. If you can find another company that needs to ship from B to A (or close to it), you can share the cost of the full trip and save serious money, as the toy calculation below suggests.
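Here is an idealized, made-up illustration of the backhaul idea; the cost figure is invented, and real savings would be smaller once detours, repositioning and coordination costs are accounted for.

```python
# Made-up numbers to illustrate backhaul savings; not real tariffs.
round_trip_cost = 2000.0   # truck goes A -> B loaded, returns B -> A empty

# Without collaboration: the A -> B shipper pays for a full round trip,
# and a second company pays for its own B -> A round trip.
cost_without_backhaul = round_trip_cost + round_trip_cost

# With a backhaul partner: one round trip carries both loads and the two
# companies split its cost (here, evenly).
cost_with_backhaul = round_trip_cost
cost_per_company = round_trip_cost / 2

print(f"Total paid without backhauling: {cost_without_backhaul:.0f}")
print(f"Total paid with backhauling:    {cost_with_backhaul:.0f}")
print(f"Savings: {100 * (1 - cost_with_backhaul / cost_without_backhaul):.0f}%")
```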

From my example, it may seem easy, but in fact there are some important hurdles along the way. First and foremost, collaboration requires companies to share sensitive information: where their customers are, what they sell them, and how much they used to pay for their transportation. You have to share all of that at the very beginning of the project, just to estimate how much you would save by working together. If one of the companies later decides it is not interested after all, it still holds all of the other company’s information. That is why having a neutral third party collect the data and compute the potential savings is a good idea.

How can OR help?

Getting a feasible collaborative logistics plan is not simple. In particular, the question of who pays how much for what is not obvious. Several conditions have to be checked, and those familiar with game theory should have a pretty good idea of which ones: the plan has to generate savings for all companies, and no individual company should be better off on its own than within the collaboration.

Although for some simple “games” – i.e., situations – there are analytical formulas to determine whether collaboration is viable, in many cases it is very difficult. There are also formulas to share the benefits of collaboration among participants, but sometimes the resulting solutions are impractical because they are grossly unfavorable to one of the participants. Fortunately, many of these “games” can be formulated as mathematical programming problems (often LPs). If the model is feasible, then a plan meeting all the requirements exists; if not, then the collaboration is unlikely to succeed, at least for this period, because some of the participants have no rational interest in taking part.
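As a small, hypothetical illustration of that last point (the company names and costs below are made up), the following sketch uses a tiny LP to check whether a cost split exists in which no company, alone or in a pair, pays more than it would on its own – in game-theoretic terms, whether the core of the cost-sharing game is non-empty.

```python
# Feasibility check for a cost-sharing "game" with three companies A, B, C.
# All cost figures are invented for illustration only.
from scipy.optimize import linprog

cost = {
    frozenset({"A"}): 100, frozenset({"B"}): 80, frozenset({"C"}): 90,
    frozenset({"A", "B"}): 160, frozenset({"A", "C"}): 175,
    frozenset({"B", "C"}): 150, frozenset({"A", "B", "C"}): 230,
}
players = ["A", "B", "C"]

# Variables: x_A, x_B, x_C = cost charged to each company.
# Constraints: x(S) <= c(S) for every proper coalition S (no group pays
# more than going alone), and x(A) + x(B) + x(C) = c(ABC) (budget balance).
A_ub, b_ub = [], []
for S, c_S in cost.items():
    if len(S) < len(players):
        A_ub.append([1.0 if p in S else 0.0 for p in players])
        b_ub.append(c_S)
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [cost[frozenset(players)]]

# Zero objective: we only care whether a feasible allocation exists.
res = linprog(c=[0, 0, 0], A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(None, None)] * 3)

if res.success:
    print("Core is non-empty; one acceptable split:",
          dict(zip(players, res.x.round(1))))
else:
    print("No allocation satisfies every coalition: collaboration is fragile.")
```

If the LP is infeasible, at least one company (or pair of companies) would be charged more than its stand-alone cost, and it has no rational reason to join.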

I’ll try to find a numerical example that puts these rather vaporous concepts into motion.

Seeds and parallel MIP solvers, extended edition

[Figure: SLSM-2000seeds-CPLEX126-4t]

In a previous post, I showed that changing the random seed on a given problem varies solution times by 20% to 40% on average, depending on the solver and the settings. I decided to push that experiment a little further and sample the … [Continue reading]

Dynamic pricing horror stories

The hotel industry has embraced the principles behind revenue management and dynamic pricing. Many of these dynamic pricing strategies are driven by OR- or analytics-based models, too. But too much of a good idea can turn into the opposite of the … [Continue reading]

ILS 2014, a buffet of logistics and OR

[Image: ESTELLE_MAERSK]

This post was originally written for OR Complete; I re-post it here in case you missed the original. ILS – short for the International Conference on Information Systems, Logistics and Supply Chain – is a biennial conference focusing … [Continue reading]

Uncertainty, Models, and the Chaos Out There

The future is dominated by uncertainty. In supply chain management – my main field of application – it comes in the form of variation in lead times or demand levels, or of severe events such as hurricanes, which can severely disrupt a … [Continue reading]

Local search: uses (good and bad)

Some time ago, @lambdadmitry asked me to elaborate a bit on a discussion about the relevance of local search, which originated from a tweet by @JeffLinderoth. I apologize that it took me so long to complete this post, but I found some … [Continue reading]

Seeds and Parallel MIP solvers, part I

[Figure: RandomSeeds-SLSM-Gurobi]

Over recent years, mixed-integer programming (MIP) solver developers have worked hard to provide parallel codes that are both fast and quite stable. A few years ago, using a parallel code resulted in huge variations in run times: successive runs of … [Continue reading]

Building the perfect MIPping machine [dual]

Yesterday’s post explained why I prefer running solvers on fast machines rather than big machines, and gave a short example of why. In today’s post, I provide some very personal guidelines on how to determine what kind of machine you might need to solve MIPs. … [Continue reading]

Do you need a bigger computer to solve this MIP? [primal]

When I have a difficult time solving a mixed-integer optimization problem (MIP), one of the most common reflexes is to run the problem on a bigger or faster machine. However, when solving MIPs, throwing more processor power at a problem will not … [Continue reading]