# Introduction to the Travelling Salesman Problem


TSP (Travelling Salesman Problem) is a famous problem described by the following question:

Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?

Despite how simple the problem is to describe (even a child would understand what we are looking for), there is still no efficient algorithm to find the optimal solution. It is classified as an NP-hard problem: no polynomial-time method has been found so far. As we will see in this article, an ordinary modern computer can find the optimal solution for up to about 22–24 points (cities).

I find it inspiring that today, despite our great technological advances, there are problems this easy to describe that remain unsolved. We are not talking about quantum physics or fluid mechanics! We just want to find the shortest route given a certain number of points to visit!

In this article, we’ll explain and implement algorithms that find the optimal solution, as well as some approximations. We are going to use C#, and the code will result in a plugin for AutoCAD, which provides a very friendly GUI.

You can find the Github repository related to this article here.

# Plugin interface

This form is the program’s interface, which allows the user to “play” with different TSP algorithms and quantities of nodes. It has 3 tabs, as can be seen in the next picture. Just in case, in this article I explain how to set up a plugin for AutoCAD.

Clicking on `insert sample nodes`, a certain quantity of nodes will be drawn in CAD’s model; the user can then choose an algorithm to solve the TSP for those nodes. The next picture shows an example with 20 nodes.

# TSP algorithms introduction

A naive approach to solving TSP would be the brute-force solution, which means evaluating all possible routes given a certain number of nodes. This is a very expensive way to solve it, with a time complexity of O(n!). To be exact, the brute-force time complexity is (n-1)!/2. Imagine you have n nodes: if we want to compute all possible paths, we pick one fixed start node, then we have (n-1) options for the next node, (n-2) for the next, etc. This gives us (n-1)! paths, but we should consider that the path (1 > 2 > 3 > 4 > 1) is the same as (1 > 4 > 3 > 2 > 1); that’s the reason we divide by 2.
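The article’s plugin is written in C#; purely as an illustrative sketch, the brute-force count above can be checked with a few lines of Python:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Try every ordering of nodes 1..n-1, fixing node 0 as the start.

    This enumerates (n-1)! tours; each distinct tour is generated twice
    (once per direction), matching the (n-1)!/2 count in the text.
    """
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Example: 4 cities with a symmetric distance matrix.
dist = [[0, 10, 15, 20],
        [10, 0, 35, 25],
        [15, 35, 0, 30],
        [20, 25, 30, 0]]
```

For this matrix the optimal tour costs 80, but already at around 12–13 nodes this enumeration becomes impractically slow.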

In this article, we’ll analyze 2 ways of computing the optimal solution, the Integer Linear Programming and Dynamic Programming approaches, which are slightly better than the brute-force method. Afterward, we’ll explore 2 approximation algorithms, which run much faster than the previous ones and are not so bad in precision: the 2T (Double-Tree) and Christofides approximations. In the worst case, the 2T solution is at most 2 times the optimal, and the Christofides solution at most 1.5 times.

Finally, we’ll talk about Google’s OR-Tools Routing library, which is free and provides powerful approximations to the TSP that run very fast and combine more than one algorithmic strategy.

# Optimal solution approaches

## Integer Linear Programming

Linear Programming (LP) is a powerful way to solve problems, and part of its beauty is its simplicity: we only need to formulate (express) our input in the required way, and LP will do the rest of the job, returning the output solution. This “formulation” consists of:

- A cost function to optimize (maximize or minimize):

- Variables that must be greater than or equal to 0:

- A list of constraints:

This can be expressed in matrix form as follows:

Integer Linear Programming adds one more constraint: our variables (x) must also be integers, meaning:
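The original article shows these formulas as images; reconstructed in standard matrix notation, the LP and the extra ILP condition read:

```latex
\begin{aligned}
\text{optimize}\quad & c^{\mathsf T} x \\
\text{subject to}\quad & A x \le b, \\
 & x \ge 0, \\
\text{and for ILP also}\quad & x \in \mathbb{Z}^{n}.
\end{aligned}
```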

The key part of using LP is finding the correct formulation for the problem. Sometimes there’s more than one possible formulation, and one can be more efficient than another. In fact, we’ll explore 2 possible ways to formulate the TSP and see how they differ in performance.

We can follow our intuition to think about the formulation of this problem: we need to define our `variables`, `constraints` and `objective function`. It’s easier to think about it if we work with an example:

**Variables**:

What we are looking for is a tour that passes through all nodes exactly once (with the minimum length). We can declare our variables as the edges of the complete graph formed by the nodes. If a variable (edge) is equal to 1, the edge forms part of the optimal tour; if it’s equal to 0, it does not.

**Objective function**:

We want to find the tour with the minimum distance, so it makes sense to write our objective function as follows:

**Constraints**:

Intuitively we can state:

- Each node is the start point of exactly one edge that belongs to the optimal tour:

- Each node is the end point of exactly one edge that belongs to the optimal tour:

So… Are we ready? That’s all?…

Unfortunately not! There’s something we are not taking into account…

There isn’t any constraint to eliminate possible **subtours**! The following picture shows 3 setups without the subtour-elimination constraints. Note that a subtour with only 2 nodes is also possible: one that leaves a node and comes straight back to it.

So… we need to add some constraints to eliminate these subtours. How can we do it? Next, we’ll discuss 2 possible ways:

**Method 1 to eliminate subtours**

We add constraints relating each **subset size** to its number of **activated edges**. An “active edge” is one whose variable equals 1, so it belongs to the optimal tour. For example, if we have the subset {0,1,2}, we can only activate 2 of its possible edges, not 3. We can state, for each possible subset:

Expressed more formally:

This method is easy to understand, but it adds a huge number of constraints, which makes the LP algorithm quite inefficient. The number of constraints added by this way of eliminating subtours is computed next:
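In standard notation (the article shows these formulas as images), these are the classic Dantzig–Fulkerson–Johnson subtour-elimination constraints: every proper subset *S* of nodes may contain at most |S| − 1 active edges:

```latex
\sum_{i \in S} \sum_{\substack{j \in S \\ j \neq i}} x_{ij} \;\le\; |S| - 1
\qquad \forall\, S \subset V,\;\; 2 \le |S| \le n - 1
```

There is one constraint per such subset, i.e. 2ⁿ − n − 2 of them, which is why this formulation grows so quickly with n.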

**Method 2 to eliminate subtours**

There is another way to eliminate subtours, less intuitive but very smart, that provides a more compact formulation. It was discovered by Miller, Tucker and Zemlin in 1960. This formulation introduces new `time variables`, which we call *ui*.

The idea is to find a relation between *xij*, *ui* and *uj*.

We can model this using the `big number technique`:

Where *M* is some large number; we can choose `M = n - 1` because *ui* ∈ [1, n-1]. We can sum up these time constraints as follows:

It’s important to note that only node 0 is not restricted by these constraints. With this formulation we have drastically reduced the number of constraints, from exponentially many (on the order of 2ⁿ) to polynomially many (on the order of n², one per edge).
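Written out (the formula images are not reproduced here), the Miller–Tucker–Zemlin time constraints with the big-number technique and M = n − 1 are:

```latex
u_j \;\ge\; u_i + 1 - M\,(1 - x_{ij})
\qquad \forall\, i \neq j,\;\; i, j \in \{1, \dots, n-1\},
\qquad u_i \in [1,\, n-1]
```

Whenever edge (i, j) is active (x_ij = 1), node j must be “visited later” than node i, so no cycle can form that avoids node 0.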

Here you can find the code implementation of the ILP formulations discussed above; we use the Linear Solver offered by Google’s OR-Tools library. Using this code, the ILP formulation with time variables runs faster than the subset-based one.

## Dynamic Programming

These are the steps to solve a Dynamic Programming problem:

- Identify the `recurrence relation` and solve the problem with a `top-down` approach
- Optimize the solution by adding `memoization`
- Optimize the solution using an iterative, `bottom-up` approach

**Identifying the Recurrence Relation**

Let’s compute one example manually to see if we can detect the recurrence relation. We are going to work with a 4-node graph with the following distance matrix. It’s not symmetric, but that’s perfectly fine; imagine it’s a road system where the route going from node A to B is shorter than the route back.

The next diagram shows all possible tours we can take starting from node 0, for example, 0 > 1 > 2 > 3 > 0, 0 > 1 > 3 > 2 > 0, etc.

If we look carefully, we can see that at every node we are doing the same thing. The recurrence relation is:

Or expressed in a more general way:

Where *g(i, S)* is the minimum cost of a path that starts at some `starting_node` (in our example, 0), visits every node of the subset *S*, and ends at node *i*; in other words, it is the optimal cost of visiting {i ∪ S}. Here you can find the code implementation that solves the problem using this recurrence relation with a `top-down` approach.

**Optimizing solution with memoization**

Now that we have solved the problem using the recurrence relation, the next step is to see whether we can avoid some recursive calls using memoization. We can create a 2D table to store the computed values of our recurrence function: each row corresponds to a certain subset {i ∪ S} and each column to the last index visited, *i*. Therefore, there will be 2ⁿ rows and *n* columns. The following table corresponds to the example we’re working on.

We can also use this *memoization* table to recover the optimal tour, meaning the order of node indexes, starting and ending at 0, that has the minimum cost. In our example it is “0 -> 1 -> 3 -> 2 -> 0”. Here is the code including this optimization.

**Bottom-up approach**

Here is the code that solves the TSP avoiding recursion, with a bottom-up approach: the memo table is filled from the leaves of the recursion tree up to the top.
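As an illustrative sketch only (the article’s implementation is a C# AutoCAD plugin), the bottom-up Held–Karp recurrence can be written in Python like this:

```python
from itertools import combinations

def held_karp(dist):
    """Bottom-up Held-Karp. dist[i][j] = cost of edge i -> j.

    g[(S, i)] = minimum cost of a path that starts at node 0, visits
    every node of the frozenset S exactly once, and ends at i (i in S).
    """
    n = len(dist)
    # Base cases: paths 0 -> i for every other node i.
    g = {(frozenset([i]), i): dist[0][i] for i in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for i in S:
                # Extend the best path ending at some j in S \ {i} by edge j -> i.
                g[(S, i)] = min(g[(S - {i}, j)] + dist[j][i]
                                for j in S - {i})
    full = frozenset(range(1, n))
    # Close the tour: return to the starting node 0.
    return min(g[(full, i)] + dist[i][0] for i in full)

# Asymmetric 4-node example (dist[0][1] != dist[1][0] is allowed):
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
```

For this matrix the minimum tour cost is 21, via the tour 0 → 2 → 3 → 1 → 0.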

The Dynamic Programming approach runs in O(n² · 2ⁿ) time, a great improvement compared with *brute force*. As can be seen in the following table, for n ≥ 10 the DP time complexity beats the brute-force time.

# Approximation solution approaches

As we have seen, the optimal-solution approaches run in exponential time, so we can’t use them for more than about 25 nodes. What can we do? The TSP appears many times in our daily lives; for example, companies need to schedule their delivery orders with the minimum possible cost.

For this reason, the TSP has approximation algorithms that run much faster than the optimal ones. We are going to analyze 2 of them: the 2T Double-Tree approximation and the Christofides algorithm. In the worst case, the 2T solution is at most 2 times the optimal solution, and Christofides at most 1.5 times.

## Minimum Spanning Tree (MST)

Both approximation algorithms (2T and Christofides) are based on the concept of a `minimum spanning tree` (MST). What is an MST?

A minimum spanning tree (MST) or minimum weight spanning tree is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight.

As you can see in the code, we use Kruskal’s algorithm with a Union-Find data structure to find the MST.
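The plugin’s version is in C#; a minimal Python sketch of Kruskal’s algorithm with a union-find structure looks like this:

```python
def kruskal_mst(n, edges):
    """Kruskal's MST. edges is a list of (weight, u, v) tuples.

    Returns (total_weight, list_of_mst_edges).
    """
    parent = list(range(n))

    def find(x):
        # Union-Find 'find' with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total, mst = 0, []
    # Greedily take the cheapest edges that don't close a cycle.
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:           # adding this edge creates no cycle
            parent[ru] = rv    # union the two components
            total += w
            mst.append((u, v))
    return total, mst

# Small example: 4 nodes, 5 weighted edges.
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 3), (5, 0, 2)]
```

Here the MST keeps the three cheapest edges (total weight 6) and discards the two that would close cycles.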

The cost of an MST is a lower bound on the cost of the optimal solution for the TSP problem.

**Why is Tɢ a lower bound of Hɢ*?**

This is easy to demonstrate: take the optimal TSP tour Hɢ* and remove one edge. What remains is a spanning tree, and its cost cannot be lower than the MST’s, so c(Tɢ) ≤ c(Hɢ*).

## 2T approximation TSP (Double-Tree)

Once we have understood the concept of the MST and checked that its cost is always a lower bound of the TSP solution, it’s easy to build an algorithm that returns an approximation of the TSP with a worst-case factor of 2T (meaning that, in the worst case, our approximation is twice the optimal solution).

These are the steps to build the 2T approximation:

- Find an MST, which we call *Tɢ*.
- Duplicate the edges of *Tɢ*.
- Find an Eulerian tour using Hierholzer’s algorithm.
- Shortcut the Eulerian tour (remove duplicate vertices).

If we follow these steps, in the worst case we traverse every edge of the tree twice (a DFS traversal); that’s the reason it’s called the 2T approximation. It’s worth noting that when we remove duplicates (step 4), the cost can only decrease, thanks to the triangle inequality. Here you can find the code implementation of this algorithm.
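Because shortcutting the doubled-MST Eulerian walk visits the vertices in the same order as a depth-first preorder traversal of the MST, the whole approximation can be sketched very compactly (illustrative Python, taking an already-computed MST as an edge list):

```python
def double_tree_tour(n, mst_edges):
    """2T approximation: shortcutting the doubled-MST Eulerian walk is
    equivalent to visiting the MST's vertices in DFS preorder."""
    adj = {i: [] for i in range(n)}
    for u, v in mst_edges:
        adj[u].append(v)
        adj[v].append(u)
    tour, seen, stack = [], set(), [0]
    while stack:
        v = stack.pop()
        if v in seen:
            continue           # the 'shortcut': skip repeated vertices
        seen.add(v)
        tour.append(v)
        stack.extend(adj[v])
    tour.append(0)             # return to the origin to close the tour
    return tour
```

For a path-shaped MST 0–1–2–3 this yields the tour 0 → 1 → 2 → 3 → 0; by the triangle inequality the shortcuts never increase the cost of the Eulerian walk.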

## 1.5T approximation TSP (Christofides algorithm)

We can improve on the Double-Tree approximation with the Christofides algorithm, which in the worst case is 3/2 times the optimal solution. First we’ll explain the steps, and afterward we’ll demonstrate why it is a 1.5T approximation.

- Find an MST, which we call *Tɢ*.
- Find the subset of vertices in *Tɢ* with odd degree, which we call *S* (there will always be an even number of odd-degree vertices; later we’ll explain why).
- Find a Minimum Perfect Matching *M* on *S*. As you can see in the code, we use linear programming to find *M*.
- Add the set of edges of *M* to *Tɢ*. As you can see in the image below, multi-edges are allowed (look at the edges between nodes *P* and *N*).
- Find an Eulerian Tour.
- Shortcut the Eulerian Tour (remove duplicate vertices).

Now that we’ve understood the steps of the Christofides algorithm, let’s try to understand the reasoning behind them.

**Why do we want to find the set S of odd-degree vertices?**

The main strategy of the Christofides algorithm is to find an Eulerian tour of the (augmented) MST and then “shortcut” it (removing the duplicate nodes). For a graph to have an Eulerian tour, every vertex must have even degree. We want to find the set of odd-degree vertices because we need to “somehow” turn them into even-degree ones.

**Why do we compute the Minimum Perfect Matching on S?**

The idea is to add one degree to every odd-degree vertex. We can achieve this by finding a perfect matching on S (the set of odd-degree nodes); if we do so, every vertex becomes even-degree and we can find an Eulerian tour. The Minimum Perfect Matching is the optimal way to add these edges (it adds the minimum possible cost).

**Why will there always be an even number of odd-degree vertices?**

We know by the handshaking lemma that the sum of all vertex degrees in a graph is twice the number of edges:

Where *V* is the set of all vertices in *G*. Let’s divide *V* into 2 sets of vertices:

- *V* = *R* ∪ *S*
- *R* = set of even-degree vertices in *G*
- *S* = set of odd-degree vertices in *G*

So we can express the handshaking lemma as follows:

The right side of the equation (2 |E|) is an even number, so the left side has to be even as well. By definition, the sum of the even-degree vertices’ degrees is even, which means the sum of the odd-degree vertices’ degrees must also be even to keep the whole left side even.

A sum of *p* odd numbers is even only when *p* is even, so the number of odd-degree vertices, *p* = |S|, must be even.
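The argument above, written out (the article shows the formulas as images; deg(v) is the degree of vertex v and p = |S|):

```latex
\underbrace{\sum_{v \in R} \deg(v)}_{\text{even}} \;+\; \sum_{v \in S} \deg(v) \;=\; 2\,|E|
\;\;\Longrightarrow\;\;
\sum_{v \in S} \deg(v) \ \text{is even}
\;\;\Longrightarrow\;\;
p = |S| \ \text{is even}.
```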

**Why is Christofides a 1.5 approximation of the TSP?**

The first step of Christofides is to find an MST (as in the 2T Double-Tree discussed before), and we already know its cost is a lower bound of the optimal solution on G. The question, then, is why adding the Minimum Perfect Matching edges adds, in the worst case, an extra cost of 0.5T. To understand this, consider the 2 perfect matchings shown in the following picture, which are built from the optimal TSP tour on the set S.

M₁ and M₂ are perfect matchings taken from Hꜱ*, but neither is necessarily the minimum perfect matching M, so we can state:

This implies that c(M) is less than or equal to the average of c(M₁) and c(M₂).

As said before, MST is a lower bound of the optimal solution TSP, meaning:

We also know:

- Set S has fewer vertices than G, so, by the triangle inequality:

- M₁ ∪ M₂ = Hꜱ*

Then we can conclude:

So finally:
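Chaining the inequalities above gives the 1.5 bound (reconstructed here in standard notation; c(·) denotes cost, Hɢ* the optimal tour on G, Hꜱ* the optimal tour on S):

```latex
\begin{aligned}
c(M) &\le \tfrac{1}{2}\bigl(c(M_1) + c(M_2)\bigr) = \tfrac{1}{2}\, c(H_S^{*}) \le \tfrac{1}{2}\, c(H_G^{*}),\\[4pt]
c(\text{tour}) &\le c(T_G) + c(M) \le c(H_G^{*}) + \tfrac{1}{2}\, c(H_G^{*}) = \tfrac{3}{2}\, c(H_G^{*}).
\end{aligned}
```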

Here you can find the code implementation of Christofides algorithm.

# Google OR-Tools library

Until now we have explored some optimal algorithm approaches (linear and dynamic programming) and some approximation algorithms (Double-Tree and Christofides). I think it’s worth understanding things from the ground up, and, in computer science, testing your knowledge by implementing the concepts in code yourself. As much as you can, avoid “black boxes”.

However, once we know what we are talking about, it’s also important to explore which tools out there are already implemented, optimized and maintained; maybe there is an open-source tool we can use to achieve our goal. This is also important because we can build from there instead of “reinventing the wheel”.

Google OR-Tools is an open-source library that can help us a lot with the TSP and related concepts (for example, linear and integer programming). “OR” stands for “Operations Research”.

OR-Tools is an open source software suite for optimization, tuned for tackling the world’s toughest problems in vehicle routing, flows, integer and linear programming, and constraint programming.

We are going to add the feature to solve the TSP using OR-Tools. OR-Tools also provides an approximation of the TSP: it applies a `first solution strategy` and afterward refines it with other algorithms.

First solution strategies are listed here; some of them are:

- CHRISTOFIDES: we know about it!
- PATH_CHEAPEST_ARC: starting from a route “start” node, connect it to the node which produces the cheapest route segment, then extend the route by iterating on the last node added to the route.
- GLOBAL_CHEAPEST_ARC: iteratively connect two nodes which produce the cheapest route segment.

The next image shows different TSP solutions obtained with the OR-Tools library for a 50-vertex graph, using different first-solution strategies.

As we have seen, the OR-Tools TSP implementation provides very good approximations, and the algorithms run quite fast even on graphs with many vertices. On my computer (which is not a supercomputer), it takes 3.19 s to produce a solution for a graph of 500 points, and 13.47 s for one of 1,000 points.

Another cool thing about OR-Tools is that it can also solve `vehicle routing` problems, which can be seen as an extension of the TSP. Imagine that you have a company that must deliver to 200 different points in the city, and you have 4 vehicles. What route would you give each vehicle to optimize the delivery time? Well… OR-Tools can help you with this!

Here you can find the code implementation of Google OR-Tools in our plugin.

You can find the full code in this Github Repository.

If you enjoyed this story, please click the 👏 button and share to help others find it! Feel free to leave a comment below. You can connect with me on LinkedIn, My blog, Twitter, Facebook.