MODIFIED MST ALGORITHMS
FOR TSP TOURS
By
XIAOZHEN WANG
Bachelor of Laws
East China Normal University
Shanghai, China
1982
Master of Laws
East China Normal University
Shanghai, China
1985
Submitted to the Faculty of the
Graduate College of the
Oklahoma State University
in partial fulfillment of
the requirements for
the Degree of
MASTER OF SCIENCE
May, 2000
MODIFIED MST ALGORITHMS
FOR TSP TOURS
Thesis Approved:
Dean of the Graduate College
ACKNOWLEDGMENTS
This thesis develops some modifications of the MST algorithm for generating a
TSP tour. The TSP (Traveling Salesman Problem) is a classical problem in combinatorial
optimization. In general, solving the TSP means generating the shortest cyclic tour
that visits every vertex in a complete graph G exactly once and starts and ends at the
same vertex. Since the TSP is NP-hard, we cannot solve it exactly in polynomial time.
In practice, there are many approximation algorithms that produce near-optimal TSP
tours in polynomial time. Of these algorithms, the MST (Minimum Spanning Tree)
algorithm is an early classic solution, but the quality of its result is often seen as not
good enough for practical purposes. On the other hand, there are many heuristic
methods, which can generate reasonably efficient results but do not guarantee feasible
solutions and are used without a theoretical guarantee.
The subject of this thesis is to combine the MST algorithm with some efficient
heuristic methods to create modified variants of the MST algorithm for the TSP.
The study focuses on the symmetric TSP satisfying the triangle inequality. Two
modified algorithms are developed. One combines the MST algorithm with a modified
variant of the edge-exchange algorithm. The other combines the MST with a random
insertion algorithm. Both modifications of the MST algorithm are expected to improve
the quality of the TSP tour while keeping the advantages of the MST algorithm.
I wish to express my sincere appreciation to Professors Jacques LaFrance, John P.
Chandler and Jing Peng, who have served as members of my master's committee.
Special thanks go to my major adviser, Dr. LaFrance, for his trust, concern and
valuable guidance. I am also thankful to Dr. Chandler and Dr. Peng for their advice and
encouragement.
My mother, my wife and my brothers deserve my deep appreciation for their
encouragement, patience and financial support.
TABLE OF CONTENTS
Chapter

I. INTRODUCTION
      TSP and MST
      Subject and Purpose
      The Frame of the Thesis

II. LITERATURE SURVEY
      History
      Algorithm Review
      Some Conclusions Regarding Previous Work

III. RESEARCH STATEMENT

IV. IMPLEMENTATION DESIGN
      Basic Assumptions
      New Algorithm 1
      New Algorithm 2

V. EXPERIMENT RESULTS

VI. SUMMARY AND CONCLUSIONS

REFERENCES

APPENDIXES
      Table 1: The Improvement Results to the MST Algorithm
      Table 2: The Ratio to the Lower-bound (MST Tree)
      Table 3: The Improvement of the New1 to the 3-opt and Insertion Algorithms
      Table 4: The Improvement of the New2 to the 3-opt and Insertion Algorithms
      Main Programming Codes of Implementation
LIST OF FIGURES

1. A TSP Tour Based on the MST Algorithm
2. The Average Length of the TSP Tour Generated by Different Algorithms
3. The Length Ratio of the Empirical Algorithms to the MST Tree
4. The Improvement Ratios of the New Algorithms to the MST Algorithm
5. Comparison of the New Algorithm New1 to the Selected Heuristics
6. Comparison of the New Algorithm New2 to the Selected Heuristics
7. A TSP Tour Generated by the MST Algorithm (Preorder Walking)
8. A TSP Tour Generated by the 3-opt Algorithm
9. A TSP Tour Generated by the New1 Algorithm
10. A TSP Tour Generated by the New2 Algorithm
11. A TSP Tour Generated by the MST Algorithm (Preorder Walking)
12. A TSP Tour Generated by the Random Insertion Algorithm
13. A TSP Tour Generated by the New Algorithm New2
14. A TSP Tour Generated by the New Algorithm New1
CHAPTER I
INTRODUCTION
TSP and MST
The Traveling Salesman Problem (TSP) is a classical problem in combinatorial
optimization. A TSP is based on the assumption that there is a traveling salesman who
wants to visit every city in a certain area exactly once, and end up at the city from
which he started. While doing so, he also tries to reduce cost and time by making the
shortest trip possible. So the major task for the salesman is to find the so-called
"shortest Hamiltonian tour" in a complete graph G = (V, E). Here V represents the
cities (vertices), and E represents the paths (edges) between different cities.
A Hamiltonian tour is a journey in a complete graph that visits every vertex
exactly once, traveling along the paths between vertices, and forms a closed cycle by
starting and ending the tour at the same vertex.
The TSP has many variations, and many formulations have been developed since
it was first discussed in the eighteenth century. [Noschang 1997] This thesis will only
focus on the symmetric TSP of G = (V, E) on a Euclidean plane which satisfies the
triangle inequality. The symmetric TSP is the basic TSP formulation: the weight of
the path between any two vertices is the same in both directions. On a Euclidean
plane, the weight of a path is determined by its length. There is also an asymmetric
TSP, in which each path is associated with a different weight in each direction. This
thesis focuses mainly on the algorithms used to generate an optimal Hamiltonian
tour of the symmetric TSP.
The TSP is an important computational model. It is the basis of many real-world
problems, and it forms a central component of many real-world applications.
[Noschang 1997] For example, wire distribution, network connection and the design of
printed circuit boards are common cases related to TSP theory and its associated
algorithms.
But the TSP is also an NP-hard problem. That means we cannot solve it exactly in
polynomial time. Since the TSP is so important, we must find efficient algorithms that
cope with its NP-hard nature. Generally, there are three ways of finding a solution.
First, if the size (the number of vertices) of the problem is relatively small, an
enumeration algorithm may be used to exhaust all possible tours and find the shortest
one. Second, we can design exponential-time algorithms that solve the problem
exactly. Last, if the size of the problem is relatively large, the enumeration algorithm
and the exponential algorithm become infeasible. In this case, the feasible way is to
find near-optimal solutions that can be computed in polynomial time. These solutions
belong to a class known as approximation algorithms. [Cormen, Leiserson, and Rivest
1995] Among approximation algorithms there is a subclass called heuristic methods.
Heuristic methods use simplifications or educated guesses that reduce or limit the
search for solutions.
Heuristics do not guarantee feasible solutions, and are often used with no theoretical
support. In fact, the TSP is one of the few major success cases for optimization
techniques using heuristic methods. [Aarts and Lenstra 1997] Many approximation
algorithms and heuristic methods have been developed to solve the TSP, and this is still
an active area. The possibility of finding a better algorithm, or of getting better-quality
results from existing algorithms, is still an open problem of major interest.
Of the algorithms concerning the TSP, the Minimum Spanning Tree (MST) tour
is an early and classic solution. The MST of a given graph G = (V, E), in which V is the
set of vertices and E is the set of edges connecting them, is the spanning tree of
minimum total edge length. Although there may be many paths (edges) among the
vertices, the MST links all nodes (vertices) in the graph with the fewest possible edges
(|V| - 1) while minimizing the total length of the chosen edges. Based on this property
of the MST, the MST algorithm is frequently used to demonstrate an approximately
optimal solution of the TSP. [Lawler, Lenstra, Kan and Shmoys 1985]
If the MST were a cyclic graph, it could be a direct optimal solution of the
symmetric TSP, since it already connects every node and the total length of its edges
is minimal. But a MST is not a cyclic graph. There is no way to generate a cyclic tour
that visits every node (vertex) only once by directly walking through all the edges of
the MST. To generate a Hamiltonian tour from a MST, we must add some
complementary methods or techniques to convert the MST into a cyclic tour. This
means that generating a TSP tour with the MST algorithm requires at least two basic
steps. First, generate a MST to get a set of the shortest connecting edges. Second,
based on the MST, use some technique to re-connect the edges of the MST and finally
construct a Hamiltonian tour.
Around the MST algorithm, several techniques have been devised to convert
a MST into a near-optimal cyclic tour for solving the TSP. First of all, there are two
similar algorithms. One uses the depth-first traversal of the MST to construct the TSP
tour [Lawler, Lenstra, Kan and Shmoys 1985], and the other uses a preorder walk of
the MST to construct the TSP tour. [Cormen, Leiserson, and Rivest 1995] The
complexity of both methods is O(n^2), while the quality of the resulting tour T is
T(n) <= 2 OPT(n).
The Christofides algorithm has different features from the above algorithms. This
method requires a MST first for getting the set of vertices. Then it converts the MST
into a new connected graph G'. G' is an Euler graph with an Euler tour. By traversing the
Euler tour, a TSP tour is generated. With this algorithm, constructing the MST takes
O(n^2), and constructing a TSP tour takes O(n^3). But the quality of the resulting tour
T(n) is improved to T(n) <= 3/2 OPT(n). [Cormen, Leiserson, and Rivest 1995] [Aarts and
Lenstra 1997] Generally, the algorithms based on the MST are seen as not practically
efficient for solving the TSP, since either the worst-case bound of the resulting
tour is 2, or the time cost is relatively high, up to O(n^3). [Cormen, Leiserson, and Rivest
1995]
Meanwhile, during the last several decades, relatively many heuristic
methods have been invented to solve TSP tours: for example, the edge-exchange method
(k-opt method) [Lin 1965] [Aarts and Lenstra 1997], the nearest neighbor method [Aarts and
Lenstra 1997] [Cormen, Leiserson, and Rivest 1995], the insertion method [Dakin 1997],
and the strip algorithm [Gu 1998]. Also, some general algorithms have been adopted as
strategies for solving the TSP, such as the dynamic programming algorithm [Smith 1987],
the greedy algorithm [Aarts and Lenstra 1997], and the branch-and-bound algorithm. Of
these algorithms, some efficiently obtain near-optimal results under certain conditions and
constraints, but they are generally time-consuming. There is also a great weakness: we
have no precise idea how far the solutions are from the optimal results. [Noschang
1997]
Building on the achievements of heuristic methods, the basic idea of this research is
that if we can combine some heuristic techniques with the MST algorithm, we may be
able to improve the result of the MST algorithm. This line of thinking has led the author
to look for efficient methods that locate better solutions for the TSP.
Subject and Purpose
Based on the recognized inefficiency of the MST algorithm when it
constructs the TSP tour, the main purpose of this research is to find more efficient
techniques that can improve the result of the MST algorithm. Here, improvement
means generating a shorter TSP tour, or receiving a more optimized solution, than the
original MST algorithm. Meanwhile, the improvement should not increase the total
computing time. To find these techniques, we need to turn our eyes to other
heuristics that solve the TSP better than the MST algorithm does. From a careful study
of some efficient heuristics came the main idea and the subject of this research. The
idea is to combine techniques from some heuristics with the classical MST
algorithm to improve the performance of the MST algorithm. Based on this idea, the
subject of this research is to modify the MST algorithm with some combinations of
techniques. Two new methods are devised by adding techniques to the classical
MST algorithm; these techniques are used to convert a MST into a TSP tour.
To carry out this purpose, there are two main tasks. The first is to design and
experiment with the two new algorithms. The second is to evaluate them by comparing
them with the original MST algorithm and some selected heuristics, to see if the
results satisfy our expectations. In addition, for studying the new algorithms visually, a
GUI (graphical user interface) will be designed and the graphical results of the
experiment will be shown on the screen.
The Frame of the Thesis
In Chapter 1, the basic concepts of the research are given. A concise literature
review follows in Chapter 2. Chapter 3 is a detailed statement of the research tasks.
Chapter 4 describes the design of the new algorithms. In Chapter 5, we discuss the
experimental results of the new algorithms based on various comparisons. Finally, we
present our conclusions in Chapter 6.
"
6
CHAPTER II
LITERATURE SURVEY
History
Mathematicians have been interested in the traveling salesman problem since the
18th century. In 1759, a "knight's tour" problem was discussed by both Euler and
Vandermonde. [Noschang 1997] In 1856, Hamilton developed his "Icosian Calculus"
for graph problems, marketed as the Icosian Game. [Noschang 1997] In 1930, Menger
mentioned the "messenger problem" of finding the shortest Hamiltonian path. [Noschang
1997] But the term "traveling salesman problem" may have been used for the first time
in 1931 or 1932. [Lawler, Lenstra, Kan and Shmoys 1985]
Probably the main reason the TSP is one of the most interesting problems for
mathematicians and computer scientists is its genre: combinatorial optimization.
[Lawler, Lenstra, Kan and Shmoys 1985] As a classical combinatorial optimization
problem, the TSP served as an initial proving ground for optimization algorithms and
heuristics, and became one of the major success stories for optimization. [Johnson and
McGeoch 1995] By now, the TSP has developed into a class of its own. Many variants
of the TSP have been researched: the asymmetric traveling salesman problem, the
multi-salesmen problem, the graphical traveling salesman problem, the bottleneck
traveling salesman problem, the prize collecting traveling salesman problem, the vehicle
routing traveling salesman problem, the Euclidean traveling salesman selection problem,
the circulant traveling salesman problem, the Chinese postman problem, the on-line
traveling salesman problem, the rural postman problem, the maximum or minimum
latency problem, the Eulerian tour problem, the Hamiltonian cycle traveling salesman
problem, the bipartite traveling salesman problem, the remote traveling salesman
problem, the geometric covering salesman problem, the tree and tour cover traveling
salesman problem, the k-template traveling salesman problem, and some others.
[Noschang 1997] [Lawler, Lenstra, Kan and Shmoys 1985] Among them is the symmetric
traveling salesman problem, which is the classical TSP. [Noschang 1997]
The main purpose of research on the classical TSP is to find an algorithm that
can determine an optimal tour (the shortest tour) for the salesman. But since there are
(n-1)! possible tours for a TSP with n cities, computing time becomes a very important
property of any algorithm. Up till now, the classical TSP is still considered to be an
NP-hard problem. [Garey and Johnson 1979] That means it is believed impossible to
find an exact optimal algorithm that solves the TSP in a time that increases
polynomially with the size n of the problem. [Noschang 1997] Under these
assumptions, researchers of the TSP have two general alternatives: either to find
heuristics that merely produce "near optimal" tours, or to develop optimization
algorithms that run in a practically acceptable time. [Johnson and McGeoch 1995]
Many ideas and algorithms related to combinatorial optimization have been developed
to solve the TSP over the last several decades.
Around 1954, Dantzig, Fulkerson and Johnson proposed solutions to the TSP
based on linear programming concepts. [Willard 1970]
In 1958 and the following years, Gomory invented several algorithms using the
cutting planes and facets method to generate optimal TSP tours. His algorithms
influenced the development of integer programming. [Lawler, Lenstra, Kan and
Shmoys 1985]
Then in 1960, Miller, Tucker and Zemlin provided their integer programming
solutions of the TSP. [Miller, Tucker and Zemlin 1960] [Willard 1970]
In 1962, Bellman, and also Held and Karp, introduced an enumerative optimization
method, dynamic programming, and applied it to solve the TSP. [Ghalib 1979]
[Lawler, Lenstra, Kan and Shmoys 1985]
One year later, the so-called "branch and bound" algorithm was used to solve the
TSP by Little, Murty, Sweeney and Karel. [Willard 1970] [Lawler, Lenstra, Kan and
Shmoys 1985]
In 1965, Lin presented his edge-exchange method. It was further refined and
developed in 1973 by Lin and Kernighan. [Lin 1965] [Lawler, Lenstra, Kan and
Shmoys 1985]
In the 1970s, following the broad discussion of the NP concept, the focus of TSP
research shifted to approximation algorithms and heuristic methods.
[Lawler, Lenstra, Kan and Shmoys 1985] In 1975, Papadimitriou and Steiglitz discussed
the complexity of local search heuristics for the TSP and gave a polynomial
predictable evaluation. [Ghalib 1979] [Papadimitriou and Steiglitz 1977] In 1976,
Christofides, building on the MST algorithm, presented his elegant heuristic, which
to this day gives the best worst-case bound for the TSP.
In the two most recent decades, research interest has generally focused on
heuristics. Many near-optimal methods have been introduced, such as neighborhood
search, simulated annealing, Tabu search, the genetic algorithm, the neural
network, clustering heuristics, ant systems, the memetic algorithm and the space-filling
heuristic. [Noschang 1997] [Lawler, Lenstra, Kan and Shmoys 1985]
Algorithm Review
Of all algorithms and heuristics related to the TSP, the MST algorithm is a
classical polynomial solution. Since every TSP tour is a spanning tree plus an extra
edge returning to the starting vertex, we can generate a TSP tour from a MST when the
triangle inequality holds. [Lawler, Lenstra, Kan and Shmoys 1985]
Around the MST, several techniques have been devised to convert a MST into a
near-optimal TSP tour. One method applies the depth-first traversal of the MST. Starting
at an arbitrary leaf of the minimum spanning tree, the depth-first traversal traverses
every edge of the tree exactly twice. Based on this doubled path of the MST, a TSP tour
can be generated by using "shortcuts" to replace some edges and avoid visiting a
vertex twice. [Lawler, Lenstra, Kan and Shmoys 1985] The other method uses a
preorder walk of the MST to generate a cyclic tour. [Cormen, Leiserson, and Rivest
1995] Both algorithms are based on a doubled path of the MST: by walking along the
doubled path, a cyclic TSP tour can be generated. These techniques are called the MST
algorithm, or the double MST algorithm. [Lawler, Lenstra, Kan and Shmoys 1985] The
MST algorithm immediately gives a lower bound on the optimal TSP tour, since the
MST is always shorter than the real optimal TSP tour. Furthermore, the algorithm
guarantees that the resulting TSP tour is not longer than the doubled path of the MST,
so the quality of the resulting tour T(n) is T(n) <= 2 OPT(n). Thus the MST algorithm
also gives us an upper bound on the TSP tour. Meanwhile, the complexity of the MST
algorithm is dominated by the complexity of the MST generation, which is O(n^2).
[Lawler, Lenstra, Kan and Shmoys 1985]
Based on the MST algorithm, a better heuristic was created: the Christofides
algorithm. [Aarts and Lenstra 1997] This algorithm first requires a MST in order to
obtain the degrees of the vertices. Then it computes a minimum-length matching M on
the vertices of odd degree in the MST, and joins M with the MST to get a new connected
graph G'. G' is an Euler graph that includes an Euler tour. Within the Euler tour, a TSP
tour can be easily found by traversing the Euler tour while using shortcuts to replace
some edges, so as to avoid visiting a vertex more than once. With this algorithm,
constructing the MST takes O(n^2) and constructing a TSP tour takes O(n^3). But the
quality of the resulting tour T(n) is improved to T(n) <= 3/2 OPT(n).
There are also some other methods that generate a TSP tour based on the MST.
For example, there is a parallel algorithm for computers with more than one processor.
In this method, the technique for getting the TSP tour is similar to the techniques in the
double MST algorithm, but its time cost is decreased to O(n log n).
[Dale and Walker 1996]
Beyond the MST algorithm, there are many other heuristic algorithms for the TSP.
Most of these heuristics improve the quality of the TSP tour more than the MST
algorithm does. These heuristics can be roughly divided into two strategies. One is
called tour construction; the other is called local search optimization. [Lawler,
Lenstra, Kan and Shmoys 1985] [Aarts and Lenstra 1997]
The tour construction strategy focuses on generating a TSP tour gradually.
Normally, heuristics under this strategy start computing at an arbitrary vertex of
graph G. The heuristic then selects an optimal vertex one at a time and connects it to
the prior vertex, or inserts it into the existing sub-tour. The well-known algorithms in
this group are: the nearest-neighbor algorithm, the insertion algorithm, the greedy
algorithm and the Clarke-Wright algorithm. [Gu 1998] In fact, the MST algorithms for
the TSP tour also belong to this group. Other algorithms adopt a different strategy for
tour construction. J. L. Bentley has suggested a bitonic tours method for the Euclidean
TSP: construct a TSP tour from the leftmost vertex on the Euclidean plane going
strictly left to right to the rightmost vertex, and then going strictly right to left back
to the starting vertex. [Cormen, Leiserson and Rivest 1995] P. Yager and J. P.
Chandler presented the strip algorithm. The algorithm divides all points of the graph G
into equal-width strips, then starts from the leftmost strip and visits the points in each
strip in "boustrophedonic (as the ox plows)" order, proceeding through the strips
one by one until all points are connected in a cyclic tour. [Gu 1998]
The local search optimization strategy focuses on repeatedly improving a starting
tour. The algorithms of this strategy are based on the dynamic programming
discipline. They start from an arbitrary Hamiltonian tour, no matter how it was
constructed. From this start tour, local search algorithms make improvements to
generate a new, better tour. This technique is called local optimization. The improved
tour then becomes the start tour for the next iteration of tour improvement. This group
includes the simple local search algorithm, the simulated annealing algorithm, the
Tabu search algorithm, the genetic algorithm and the neural network algorithm.
[Johnson and McGeoch 1995]
The simple local search algorithm is well known. It uses the edge-exchange
method to improve the start tour until no better result can be found. This
algorithm takes time on the order of O(n^k), where k is the number of edges selected to
be broken and exchanged. The simulated annealing algorithm and the Tabu search
algorithm adopt a technique called "uphill moving" that accepts some worse local search
results as the base for further searching. The uphill-move technique is expected to reach
better solutions with higher probability. The complexity of the Tabu search is
comparatively high, and the running time of simulated annealing is at least Θ(n^2).
The genetic algorithm is generally a global optimization algorithm. The TSP application
of the genetic algorithm was first presented by Brady in 1985. [Johnson and McGeoch
1995] It uses recombination techniques and a fitness measure to generate better
generations of some "population" until some limit on reproduction is reached. The last
selected population is considered approximately optimal. The neural network algorithm
can be divided into two classes. The first organizes neurons based on some integer
algorithm. The second views the neurons as points in space and uses a geometric
algorithm to approach a near-optimal TSP tour. The time consumption of the neural
network algorithm ranges upward from Θ(n^4). [Johnson and McGeoch 1995]
[Noschang 1997] [Lawler, Lenstra, Kan and Shmoys 1985]
Some Conclusions Regarding Previous Work
Many algorithms and heuristics have been created to solve the TSP, but there
are still possibilities to improve their computing quality. Since respectable, mature
algorithms have been invented for the TSP over the last several decades, many new
algorithms or heuristics are now created by modifying existing algorithms to improve
the quality of their results. Meanwhile, combining different algorithms is also a
significant trend in the TSP field for creating new algorithms. [Lawler, Lenstra, Kan and
Shmoys 1985] [Martin and Otto 1994]
However, as far as the MST algorithm goes, there has been hardly any effort to
modify it so as to improve its computing quality for the TSP tour. Considering the
MST algorithm's theoretical achievement (it provides proven upper and lower bounds),
it is worth trying to find some fast, easy and efficient way to modify the algorithm and
get better computing quality in practice.
CHAPTER III
RESEARCH STATEMENT
This research focuses on a complete Euclidean graph G. G is a set of
vertices V and symmetric edges E, and it satisfies the triangle inequality.
Through this research, two algorithms are developed. The purpose of the designed
algorithms is to modify the MST algorithm for a better approximate solution of the TSP
tour.
The basic idea of the new algorithms is to modify the MST algorithm by
combining it with some heuristic methods. Here, the research focuses on the
edge-exchange method and the insertion method. The idea is to adopt techniques from
these two methods to modify the second step of the MST algorithm, so as to convert a
MST into a TSP tour more efficiently. To implement the new algorithms, we also need
techniques of searching, sorting and inserting, as well as knowledge of the greedy
strategy and the dynamic programming strategy.
Based on the above ideas, several tasks are formulated and listed below.
1. Design of the two new algorithms. The first adopts the 4-opt technique to convert a
   MST into an approximately optimal TSP tour. The second adopts the insertion
   technique to convert a MST into a TSP tour. The edge-exchange technique (k-opt)
   and the insertion technique are relatively mature and efficient techniques invented to
   solve combinatorial problems including the TSP. Originally, both techniques start
   their computation from an arbitrary or random base, and they iterate greedily until
   no shorter tour can be found. In practice they obtain much better results than the
   MST algorithm, but they take much more computing time, especially in large-scale
   cases. Starting from a random distribution of the vertices, both algorithms must do
   much trial-and-error computing to find the shortest connections among vertices. For
   example, if the edge-exchange algorithm checks all k-pair combinations over all
   random trial tours, it takes O(N^k) time to complete the computation of the TSP
   tour, which is a long period of time.
   There are some variants of the edge-exchange algorithm that start computation
   under some preconditions. For example, some empirical implementations start the
   edge exchange from a pre-selected tour such as the nearest neighbor tour, the greedy
   tour or the Clarke-Wright tour. [Lawler, Lenstra, Kan and Shmoys 1985] [Aarts and
   Lenstra 1997] However, none of these empirical computations selects a MST tour as
   the start tour. As a new alternative of the edge-exchange technique, our first new
   algorithm is designed to select a MST tour as the start tour before using the
   edge-exchange method to generate a better TSP tour.
   Meanwhile, the insertion algorithm uses a dynamic programming philosophy to
   generate an optimal TSP tour. There are several variations of this algorithm. All of
   them start with a smallest trial tour visiting three arbitrary, randomly distributed
   vertices. The algorithm then inserts new vertices one after another into the tour,
   continuously applying branch-and-bound selection to obtain a new optimized tour
   until all vertices are included. This technique has a time cost of O(n^2). [Dakin
   1997] The difference between this technique and our second new algorithm is that
   our algorithm starts insertion from a minimum spanning tree, that is, a set of
   ordered vertices generated by the MST algorithm. The purpose of this design is to
   keep the advantage of the MST, and to expect a better result than the one from the
   original MST algorithm.
2. Implementing the coded applications of the new algorithms and performing
   experiments with them to see if the results satisfy the expectations. The expected
   results should shorten the tour length while not increasing the time cost. It is
   assumed that, in the worst case, the new algorithms will return the same results as
   the original MST algorithm; generally, they should get better results.
3. To check the results of the new algorithms, we will compare them with the results
   of the MST algorithm with the preorder walk method, a random 3-opt algorithm and
   a random insertion algorithm. First of all, experiments on these algorithms will be
   done with various pseudo-random data inputs. Then the statistical data of the output
   will be collected and comparisons will be analyzed based on these data. Our
   conclusion will be based on these empirical results.
4. Implementing a Visual Basic GUI for the experiment to make the computations and
   the results visible and more understandable.
The significance of the research will be discussed at the end of this section. Simply
speaking, the modification of the MST algorithm for the TSP tour lets us apply the
MST algorithm better and take advantage of it more efficiently. In
addition, since the MST algorithm for the TSP tour is a typical example in many
textbooks for demonstrating the TSP, our research can be applied as a teaching
aid to help students understand and think about algorithms deeply and actively.
The reasons why the new algorithms focus on the MST algorithm are:
1) There are algorithms that generate an MST relatively quickly (the least is
O(n)) [Oliver 1993], which keeps the total complexity efficient.
2) We can easily evaluate the experimental results of the new algorithms against
the MST. Since the total length of the minimum spanning tree (MST) is
surely less than the length of the optimal tour, we can use the length of the MST
as a lower bound on the real optimal tour, and compare and evaluate the results
of the new algorithms against it.
CHAPTER IV
IMPLEMENTATION DESIGN
Basic Assumptions
In this research, the basic experimental computing of the designed algorithms will
be carried out by a pseudo-random method. That is, the implemented programs of
the algorithms will create all vertices (their coordinates) via a pseudo-random method.
All the coordinates of the vertices will take values in the range (0, 1). In addition, the
programs will also accept input data from a given file or from manual drawing.
The experimental output of the research will be available for basic statistical
analysis. The main point is to find the average trend, or the general quality, of the results.
In this research, we will use C to develop the programs for all of the algorithms.
Since most TSP algorithms have been implemented in C, C++ and Java, using C makes it
possible to compare our research results with those from other algorithms.
In addition, we will use Visual Basic to construct a visual interface to run all of the
algorithms and present the results visually. Selecting Visual Basic to implement the GUI is
based on personal interest, and also on the fact that no one at OSU has used Visual Basic
for this purpose before.
All algorithms will be run on a single-processor computer with the Windows NT
platform, and with Visual Basic application support (VB5 or a newer version).
New Algorithm 1
From Figure 1, which represents a TSP tour generated by the MST algorithm, we can
see that some connections of vertices are apparently not optimal. These non-optimal
connections are generated by the preorder-walk method of the MST algorithm. If we
can rearrange these non-optimal connections, we can generate a shorter tour and so improve
the quality of the result of the MST algorithm. Based on this idea, the algorithm New1 is
designed.
Figure 1: A TSP Tour Based on the MST Algorithm
The design of New1 is based on the edge-exchange technique. First, we
generate an MST and use the preorder-walk method to get a normal TSP tour. In this
TSP tour, we assume that some non-optimal paths must exist and that they must be
among the longest paths in the tour. Next, we use the edge-exchange method to break
several of the longest paths in that starting tour, and try to rearrange and reconnect
the broken points. If we can use this technique to find a new, shorter TSP tour, then
we replace the former, longer tour with this new tour. This procedure is repeated,
in the manner of a branch-and-bound algorithm, until no further shorter tour can be
found. The complexity of this method should be less than O(k), where k is the number of
iterations. The worst case of New1 is the same as the original MST algorithm, since
the upper bound of the New1 tour is the tour resulting from the original MST algorithm.
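The preorder walk that produces the starting tour can be sketched as follows. The adjacency-matrix representation and the fixed capacity `MAXV` are assumptions of this sketch; the thesis implementation instead stores the tree as an ordered vertex array:

```c
#include <assert.h>

#define MAXV 64                 /* assumed fixed capacity for this sketch */

int adj[MAXV][MAXV];            /* adjacency matrix of the MST: 1 = tree edge */
int visited[MAXV];
int tour[MAXV];                 /* vertices in visiting order */
int tlen = 0;

/* depth-first preorder walk from v; recording each vertex the first time it is
   reached, and closing the cycle back to the start, yields a tour of length at
   most twice the tree length, hence at most twice the optimal tour */
void preorder(int v, int n)
{
    int w;
    visited[v] = 1;
    tour[tlen++] = v;
    for (w = 0; w < n; w++)
        if (adj[v][w] && !visited[w])
            preorder(w, n);
}

/* demo: a star tree centered at 0 with leaves 1, 2, 3 yields the tour 0-1-2-3 */
int demo_star(void)
{
    adj[0][1] = adj[1][0] = 1;
    adj[0][2] = adj[2][0] = 1;
    adj[0][3] = adj[3][0] = 1;
    preorder(0, 4);
    return tlen == 4 && tour[0] == 0 && tour[1] == 1
        && tour[2] == 2 && tour[3] == 3;
}
```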
Steps of Algorithm New1
1. Construct an MST S by Prim's algorithm and heap sort
2. Construct a Hamiltonian tour T of S using the preorder-walk method
3. Compute the length L of T as the value of the upper bound
4. Find the four longest edges of T
5. Break the four longest edges to get four sub-paths
6. Use the 4-opt method to generate a new tour T' shorter than the upper bound
7. If we can find T', let T <- T' and set the upper bound to the length L' of T'
8. Repeat steps 3 - 7
9. Return T as the result when we cannot find T'
New Algorithm 2
The design of the algorithm New2 is based on the insertion technique. The
difference between the algorithm New2 and the common insertion algorithms is obvious:
the latter normally use randomly selected vertices of the graph G for insertion, but our
algorithm New2 is based on an MST. The MST provides an ordered set of all vertices with
which to start the insertion operation and finally to complete a TSP tour. The complexity of the
insertion in the algorithm New2 should be O(n^2), and its result should be better than that
of the MST algorithm.
Steps of Algorithm New2
1. Construct an MST S by Prim's algorithm and heap sort
2. Select the first three nodes in S to construct the beginning sub-tour T
3. Insert the next node of S into T at the most suitable position, so as to
generate the new shortest tour T'
4. Let T <- T'
5. Repeat steps 3 - 4 until all nodes in S are inserted in T
6. Return T as the result
CHAPTER V
EXPERIMENT RESULTS
To compare the results of the new algorithms with the original MST
algorithm, we ran all of the algorithms on pseudo-random cases with sizes of 40, 80,
160, 320, 640 and 1280 vertices, iterating every case 100 times. Also, to evaluate
the quality of the results of the new algorithms, we ran two heuristics, the random 3-opt
algorithm and the random insertion algorithm, under the same conditions. However, in the
640- and 1280-vertex cases, the 3-opt algorithm was iterated only 5 times, since its running
time was too long. The experimental results are shown in Figure 2.
[Figure 2 is a bar chart: average tour length (0-60) versus case size (40, 80, 160, 320, 640, 1280) for the MST tree, the MST tour, the New1 tour, the New2 tour, 3-opt and insertion.]
Figure 2: The Average Length of the TSP tours Generated by Different Algorithms
From Figure 2, it can be observed that our algorithms New1 and New2 obtain
shorter tours in all designed cases than the tours generated by the MST algorithm. There are,
however, different improvement trends between New1 and New2.
For algorithm New1, the improvement depends on the size of the case. In small
cases (fewer than 40 vertices), it can improve the tour quality by about 12%, and in some
individual cases the improvement can be as high as 25%. But as the case size
increases, the improvement decreases, and when the case size exceeds one thousand
there is no significant improvement.
For algorithm New2, the achievement in small cases is not as good as that of New1.
But in mid-size cases (around 320 vertices), it can improve the
tour quality by about 15%, and in some individual cases the improvement can be as high
as 35%. On the whole, it demonstrates a good and stable improvement (around 9%) in all
large cases.
Figure 3 and Figure 4 (below) show the achievement of the new algorithms.
Figure 3 shows the ratios of the tour lengths of all empirical algorithms to the length of the MST tree.
In this figure, the line for the MST tree can be thought of as the theoretical lower
bound of the TSP tours, and the line for the MST algorithm can be thought of as the
theoretical upper bound of the TSP tours. This figure shows how far the new
algorithms are from the lower bound and from the upper bound.
Figure 4 shows the improvement ratios of the new algorithms and the two
selected heuristics relative to the result of the MST algorithm. This figure shows how much
improvement the new algorithms have made over the MST algorithm.
[Figure 3 is a line chart: length ratio (0.9-2.3) versus case size (40-1280) for the MST tour, New1, New2, 3-opt, insertion and the MST tree.]
Figure 3: The Length Ratio of the Empirical Algorithms to the MST Tree
[Figure 4 is a line chart: improvement ratio in percent (-40 to 30) versus case size (40-1280) for insertion, New2, New1 and 3-opt.]
Figure 4: The Improvement Ratios of the New Algorithms to the MST Algorithm
Meanwhile, we compared the new algorithms with the random 3-opt algorithm and
the random insertion algorithm. These two heuristic methods were implemented based on
Dr. Chandler's project assignment [Chandler 1998] and Robert Dakin's article [Dakin
1997].
Figure 5 and Figure 6 show the results of the comparison of our new algorithms to
the two selected heuristics.
[Figure 5 is a bar chart: tour length (0-50) versus case size (40-1280) for New1, 3-opt and insertion.]
Figure 5: Comparison of the New Algorithm New 1 to the Selected Heuristics
[Figure 6 is a bar chart: tour length (0-50) versus case size (40-1280) for New2, 3-opt and insertion.]
Figure 6: Comparison of the New Algorithm New2 to the Selected Heuristics
From these figures we see that our new algorithms are better than the random
3-opt algorithm, but the random insertion algorithm is better than both of the new
algorithms.
However, in some individual cases, our new algorithms can generate much better
results than both heuristics. Figures 7 to 10 demonstrate a case in which our new algorithms
generated the real optimal tour and the random 3-opt algorithm did not.
Figure 7: A TSP Tour Generated by the MST Algorithm (Preorder Walking)
Figure 8: A TSP tour Generated by the 3-opt Algorithm
Figure 9: A TSP tour Generated by the Newl Algorithm
Figure 10: A TSP tour Generated by the New2 Algorithm
Although the random insertion algorithm generally obtained the best TSP
tours in our experiment, in some individual cases the algorithm New2 can generate better
results than the random insertion algorithm does. Figures 11 to 14 show one of
these cases.
[Screen output: the number of vertices is 31; the total length of the tour is 5.099188.]
Figure 11: A TSP Tour generated by the MST Algorithm (Preorder Walking)
Figure 12: A TSP Tour Generated by the Random Insertion Algorithm
Figure 13: The TSP Tour Generated by the New Algorithm New2
[Screen output of algorithm New1: the number of vertices is 31; the number of no-better changes is 0.]
Figure 14: A TSP Tour Generated by the New Algorithm New1
Finally, in our experiment, the new algorithm New1 has exactly the same
complexity, O(n lg n), as the MST algorithm, and the complexity of the new algorithm
New2 is O(n^2); it takes only slightly longer to run than the MST algorithm.
Meanwhile, the complexity of the random insertion algorithm is O(n^2). The random 3-opt
algorithm has complexity O(kn), where k is the number of iterations of edge
breaking and exchanging; if k is chosen equal to n, the complexity becomes O(n^2).
In addition, the random 3-opt algorithm takes much more memory space than all the other
algorithms in our experiment.
CHAPTER VI
SUMMARY AND CONCLUSIONS
In this research, we implemented both of the new algorithms and performed
empirical runs of them on pseudo-random input data. We also implemented
a VB GUI to demonstrate the results of the experiment.
Through the whole experiment with the new algorithms, we found that the new
algorithms have truly improved on the quality of the MST algorithm. Explicitly speaking,
the new algorithm New1 can appreciably improve the quality of the original MST
algorithm in small cases, and the new algorithm New2 can appreciably improve the
quality of the original MST algorithm in mid- and large-size cases. Although the new
algorithms achieve better results than the original MST algorithm in practice, they
still cannot change the upper bound of the MST algorithm, since they are limited by the
properties of the MST tree.
In addition, we found in our experiment that the random insertion algorithm has
the best practical results, although its upper bound cannot be determined explicitly, and our
new algorithms are generally better than the random 3-opt algorithm.
Therefore, the results of the new algorithms satisfied the expectations of the
design, but the improvement over the MST algorithm is not as good as we had expected,
especially as far as the new algorithm New1 goes. Our experiment also demonstrated the
fact that any modification of the MST algorithm is always limited by the way the
vertices are combined in the MST tree, so it is difficult to break the upper bound of
the MST algorithm.
REFERENCES
[Aarts and Lenstra 1997] Emile Aarts and Jan Karel Lenstra, Local Search in
Combinatorial Optimization, John Wiley & Sons, Chichester, England, 1997
[Anderson 1996] E. J. Anderson, Mechanisms for Local Search, European Journal of
Operational Research, Vol. 88 (1996) pp. 139 - 151
[Atallah 1999] Mikhail J. Atallah, Algorithms and Theory of Computation Handbook,
CRC Press, 1999
[Cannon 1984] Kenneth Andrew Cannon, Least Squared Approach To The Traveling
Salesman Problem, Master's Report, Business Administration, Oklahoma State
University, 1984
[Chandler 1998] John P. Chandler, COMSC 5413 Program 3: Traveling Salesman
problem, Computer Science, Oklahoma State University, 1998
[Chatterjee, Carrera and Lynch 1996] Sangit Chatterjee, Cecilia Carrera, and Lucy A.
Lynch, Genetic Algorithms and traveling Salesman Problems, European Journal of
Operational Research, Vol. 93 (1996) pp. 490 - 510
[Cormen, Leiserson and Rivest 1995] Thomas H. Cormen, Charles E. Leiserson, and
Ronald L. Rivest, Introduction To Algorithms, The MIT Press and McGraw-Hill Book
Company, New York, 1995
[Dakin 1997] Robert Dakin, The Travelling Salesman Problem, 1996-1997
http://www.pccug.org.au/~dakin/tsp.html
[Dakin 1997] Robert Dakin, A Simple Insertion Algorithm, 1996-1997
http://www.pccug.org.au/~dakin/tspbb.html
[Dale and Walker 1996] Nell Dale and Henry M. Walker, Abstract Data Types-
Specifications, Implementations, and Applications, D. C. Heath and Company,
Massachusetts, 1996
[Fredman, Johnson, McGeoch and Ostheimer] M. L. Fredman, D. S. Johnson, L. A.
McGeoch and G. Ostheimer, Data Structures for Traveling Salesmen,
http://www.ing.unlp.edu.ar/cetad/mos/TSPBIB_home.html
[Garey and Johnson 1979] M. R. Garey and D. S. Johnson, Computers and Intractability:
A Guide to the Theory of NP-Completeness, W. H. Freeman, San Francisco, 1979
[Ghalib 1979] Hani M. Ghalib, The Traveling Salesman Problem: Literature Survey and
Comparison of Two Computational Methods, Master's Report, Computer Science,
Oklahoma State University, 1979
[Gu 1998] Yan Gu, Animation Of Approximation Algorithms For Traveling Salesman
Problem In Java, M. S. Thesis, Computer Science, Oklahoma State University, 1998
[Johnson and McGeoch 1995] David S. Johnson and Lyle A. McGeoch, The Traveling
Salesman Problem: A Case Study in Local Optimization, 1995
http://www.ing.unlp.edu.ar/cetad/mos/TSPBIB_home.html
[Jünger, Reinelt and Thienel 1994] Michael Jünger, Gerhard Reinelt, and Stefan Thienel,
Provably Good Solutions For the Traveling Salesman Problem, revised version of June
1993, 94_31ps.ps, Preprint IWR, Heidelberg, 1994
http://www.ing.unlp.edu.ar/cetad/mos/TSPBIB_home.html
[Kureichick, Miagkikh and Topchy] Victor M. Kureichick, Victor V. Miagkikh and
Alexander P. Topchy, Genetic Algorithm for Solution of the Traveling Salesman
Problem with New Features Against Premature Convergence,
http://www.ing.unlp.edu.ar/cetad/mos/TSPBIB_home.html
[Lawler, Lenstra, Kan and Shmoys 1985] E. L. Lawler, J. K. Lenstra, A. H. G. Rinnooy
Kan and D. B. Shmoys, The Traveling Salesman Problem - A Guided Tour of
Combinatorial Optimization, John Wiley & Sons, 1985
[Lin 1965] Shen Lin, Computer Solutions of the Traveling Salesman Problem, Bell
System Technical Journal, Vol. 44 (1965), pp. 2245 - 2269
[Manber 1989] Udi Manber, Introduction To Algorithms - A Creative Approach,
Addison-Wesley Publishing Company, 1989
[Martin and Otto 1994] Olivier C. Martin and Steve W. Otto, Combining Simulated
Annealing with Local Search Heuristics, 1994
http://www.ing.unlp.edu.ar/cetad/mos/TSPBIB_home.html
[Miller, Tucker and Zemlin 1960] C. E. Miller, A. W. Tucker and R. A. Zemlin, Integer
Programming and the Traveling Salesman Problem, Journal of the Association for
Computing Machinery, Vol. 7 (1960), pp. 326 - 329
[Moscato 1998] Pablo Moscato, TSPBIB Home Page, 1998
http://www.ing.unlp.edu.ar/cetad/mos/TSPBIB_home.html
[Noschang 1997] Mark H. Noschang, The Traveling Salesman Problem - a review of
theory and current research, 1997
http://www.ececs.uc.edu/~mnoschan/sale.html
[Oliver 1993] Ian Oliver, Programming Classics -Implementing the World's Best
Algorithms, Prentice Hall, New York, 1993
[Papadimitriou and Steiglitz 1977] C. Papadimitriou and K. Steiglitz, On the
Complexity of Local Search for the Traveling Salesman Problem, SIAM Journal on
Computing, Vol. 6 (1977), No. 1, pp. 76 - 83
[Smith 1987] Harry F. Smith, Data Structures - Form and Function, Harcourt Brace
Jovanovich, Publisher, 1987
[Voudouris and Tsang 1995] Chris Voudouris and Edward Tsang, Guided Local Search,
Technical Report CSM-247, 1995
http://www.ing.unlp.edu.ar/cetad/mos/TSPBIB_home.html
[Willard 1970] Edward Payson Willard, An Algorithm for Optimal Ship Routing For
Seismic Data Collection, Ph.D. Thesis, Industrial Engineering and Management,
Oklahoma State University, 1970
[Yagiura and Ibaraki 1996] Mutsunori Yagiura and Toshihide Ibaraki, The Use of
Dynamic Programming in Genetic Algorithms for Permutation Problems, European
Journal of Operational Research, Vol. 92 (1996) pp. 387 - 401
[Zweig 1995] Geoffrey Zweig, An Effective Tour Construction and Improvement
Procedure for the Traveling Salesman Problem, Operations Research, Vol. 43 (1995)
pp. 1049 - 1057
APPENDIX
Table 1: The Improvement Results to the MST Algorithm

Num. of Vertices        40         80         160        320        640        1280
MST               6.746908   9.269029  12.996129  18.164780  25.530018  35.975517
New 1             5.950680   8.457107  12.374555  17.659966  25.336664  35.894157
Improving %          11.80       8.76       4.78       2.78       0.76       0.23
New 2             6.084479   8.446555  11.834032  16.533146  23.169161  32.551655
Improving %           9.82       8.87       8.94      14.49       9.25       9.52
3-opt             5.953973   9.228397  12.685232  19.697220  29.944220  49.043861
Improving %          11.75       0.44       2.39      -8.44     -17.29     -36.33
Insertion         5.551500   7.667203  10.666630  14.942585  21.007906  29.444775
Improving %          17.72      17.28      17.92      17.74      17.71      18.15
Table 2: The Ratio to the Lower Bound (MST Tree)

Num. of Vertices        40         80         160        320        640        1280
MST Tree          4.338862   6.064266   8.489203  11.903287  16.721451  23.461361
MST               6.746908   9.269029  12.996129  18.164780  25.530018  35.975517
Ratio                1.555      1.528      1.531      1.526      1.527      1.533
New 1             5.950680   8.457107  12.374555  17.659966  25.336664  35.894157
Ratio                1.371      1.395      1.458      1.484      1.515      1.530
New 2             6.084479   8.446555  11.834032  16.533146  23.169161  32.551655
Ratio                1.402      1.393      1.394      1.389      1.384      1.387
3-opt             5.953973   9.228397  12.685232  19.697220  29.944220  49.043861
Ratio                1.372      1.522      1.494      1.655      1.791      2.082
Insertion         5.551500   7.667203  10.666630  14.942585  21.007906  29.444775
Ratio                1.279      1.264      1.257      1.255      1.256      1.255
Table 3: The Improvement of the New1 to the 3-opt and Insertion Algorithms

Num. of Vertices            40         80         160        320        640        1280
New 1                 5.950680   8.457107  12.374555  17.659966  25.336664  35.894157
3-opt                 5.953973   9.228397  12.685232  19.697220  29.944220  49.043861
Improving to 3-opt       0.06%      8.36%      2.45%     10.34%     15.39%     26.50%
Insertion             5.551500   7.667203  10.666630  14.942585  21.007906  29.444775
Improving to Insertion  -7.19%    -10.30%    -16.01%    -18.19%    -20.61%    -21.90%
Table 4: The Improvement of the New2 to the 3-opt and Insertion Algorithms

Num. of Vertices            40         80         160        320        640        1280
New 2                 6.084479   8.446555  11.834032  16.533146  23.169161  32.551655
3-opt                 5.953973   9.228397  12.685232  19.697220  29.944220  49.043861
Improving to 3-opt      -2.19%      8.47%      6.71%     16.06%     22.63%     33.35%
Insertion             5.551500   7.667203  10.666630  14.942585  21.007906  29.444775
Improving to Insertion  -9.60%    -10.17%    -10.94%    -10.64%    -10.29%    -10.55%
Main Programming Codes of Implementation
//===========================================================================
General Functions
void heapfy(vtx *pa, int i, int size)
{
	int l, r, hsize, large;
	vtx temp;
	l = left(i);
	r = right(i);
	hsize = size;
	large = i;
	if(l<hsize)
	{
		if((pa+l)->cost<(pa+large)->cost)
			large = l;
	}
	if(r<hsize)
	{
		if((pa+r)->cost<(pa+large)->cost)
			large = r;
	}
	if(large != i)
	{
		temp = pa[i];
		pa[i] = pa[large];
		pa[large] = temp;
		heapfy(pa, large, size);	//sift the swapped element further down
	}
}
//===========================================================================
void heapsort(vtx *pa, int size)
{
	int i;
	vtx temp;
	for(i=(size/2-1); i>=0; i--)
		heapfy(pa, i, size);	//get the shortest edge and put it in pa[0]
	temp = pa[0];	//move the shortest edge to the end of pa
	for(i=0; i<size-1; i++)
		pa[i] = pa[i+1];
	pa[i] = temp;	//put the shortest edge in pa[i]
}
//===========================================================================
float urand()
{
	int Q, R;
	static int Seed = 56789;
	int M = 2147483647;
	int A = 16807;
	float URand;
	Q = M / A;
	R = M % A;
	do
	{
		Seed = A * (Seed % Q) - R * (Seed / Q);	//Schrage's method avoids overflow
		if(Seed < 0)
			Seed = Seed + M;
		URand = (float)Seed / M;
	}
	while (!((URand > 0.0) && (URand < 1.0)));
	return URand;
}
//===========================================================================
Program for generating a minimum spanning tree
int main(int argc, char *argv[])
{
	int j, k, n, repeat, nochange, flag, size;
	float sumcos, aversum, longest, shortest;
	char title1[80] = "========== Running Results of MST Tree ==========";
	char title2[80] = "========= Paths of the Resulted MST Tree =========";
	char origcity[30] = "";
	char treetour[30] = "C:\\wxiaozh\\msttree";
	char treesum[30] = "C:\\wxiaozh\\treesum";
	char treedata[30] = "C:\\wxiaozh\\treedata";
	vtx *origi;
	vtx *que;
	vtx *mst;
	if (argc<3)
	{
		printf("Running command should be as in the following example: \n");
		printf("execution-file input-file number-of-vertex [number-of-iterate]\n");
		printf(" *.exe inputfile integer [integer] \n");
		exit (0);
	}
	j = atoi(argv[2]);	//store the number of vertices
	strcpy(origcity, argv[1]);	//get the input file
	n = 1;	//initialize the iteration number as 1
	if (argc>3) n = atoi(argv[3]);	//if there is a fourth argument, get the number of iterations
	repeat = n;
	nochange = 0;
	flag = 1;
	longest = 0.0;
	shortest = 999999.0;
	aversum = 0.0;
	origi = dynmem(j);	//set memory for computing
	que = dynmem(j);
	mst = dynmem(j);
	while(n>0)	//this loop is for the iteration count
	{
		if (n == 1)	//set the vertex array with the input data file
			setvertices(origcity, origi, j);
		else	//initialize the array of vertices with the internal random function
			setvertex(origi, j);
		setstart(que, origi, j);	//select the start vertex
		initmst(mst, que, j);	//set the first node and its value in the MST tree
		size = j-1;
		k = 0;
		//generating an MST tree in the following while loop
		while(size>0)
		{
			setvertexcost(mst, que, k, size);
			heapsort(que, size);	//sort and find the next shortest node
			k += 1;
			size -= 1;
			mst[k] = que[size];	//put the next shortest node in mst
		}	//the end of while(size>0)
		sumcos = treelength(mst, j);	//count the total length of the MST tree
		aversum = aversum + sumcos;
		if(sumcos>longest) longest = sumcos;
		if(sumcos<shortest) shortest = sumcos;
		n--;
	}	//the end of while(n>0)
	flag = -1;
	nochange = -1;
	aversum = aversum/repeat;
	writetree(treetour, mst, j);	//write a data file of all vertex coordinates
	printdata(treesum, title1, sumcos, aversum, longest, shortest, repeat, nochange, j);
	free(origi);
	free(que);
	free(mst);
}	//main's end
//===========================================================================
Program for implementing the MST algorithm
void setmst(vtx *sourse, vtx *destinate, int size)
{
	int start, reset = -1;
	int i = 0, k = 0, count = 0;
	do
	{
		if(sourse[i].idx > 0)	//the node is available
		{
			sourse[i].idx = 0;
			destinate[count] = sourse[i];	//put it in destinate
			k = count;
			count++;
		}
		if(count>=size) return;
		i++;
		if(i>=size) i = 1;
		if(sourse[i].idx>0 && destinate[k].id==sourse[i].pnt)
			continue;
		else
		{
			start = i;
			do
			{
				reset = search(sourse, start, size, destinate[k].id);
				k--;
				start = 1;
			} while(reset < 0 && k >= 0);
			if(reset>=0) i = reset;
		}
	} while(count < size);
}
//===========================================================================
int search(vtx *mst, int begin, int size, int id)
{
	int i;
	for(i=begin; i<size; i++)
	{
		if(mst[i].idx>0 && mst[i].pnt==id)	//here we need a new node
			return i;
	}
	return (-1);
}
//===========================================================================
Program for implementing the New1 algorithm
float exch(vtx *que, vtx *adt, vtx *mst, int size, float cos)	//cos is the upper bound
{
	int q1h, q1t, q2h, q2t, q3h, q3t, q4h, q4t;
	float seq1, seq2, seq3, seq4;
	vtx q1[2], q2[2], q3[2], q4[2];
	int *order;
	float sum, sub, upbound;
	int i, j, mode, flag;
	order = (int *) calloc (size, sizeof(int));
	if (order == NULL)
	{
		printf("out of space\n");
		exit (0);
	}
	flag = 4;	//initializing the number of exchanged edges
	upbound = cos;	//initializing the upper bound as the original length
	while(flag<=4)	//flag>4 means no improvement can be gotten, exit loop
	{
		//call setbreak() to break 3 or 4 longest edges and return break points
		setbreak(que, q1, q2, q3, q4, flag, size);
		//call symble() to set all break points' positions
		symble(q1,q2,q3,q4,&q1t,&q1h,&q2t,&q2h,&q3t,&q3h,&q4t,&q4h,flag);
		for (i=0; i<size; i++)
			adt[i] = que[i];	//copy the current order into adt
		seq1 = count(adt, q1h, q1t, size);	//count every sequence length
		if (seq1<0.00001)	//there is a single break node
		{
			seq1 = insertsingle(que, adt, q1h, size);
			if (seq1>0.00001 && seq1<upbound)	//insert the single node successfully
			{
				upbound = seq1;
				continue;
			}
		}
		seq2 = count(adt, q2h, q2t, size);
		if (seq2<0.00001)	//there is a single break node
		{
			seq2 = insertsingle(que, adt, q2h, size);
			if (seq2>0.00001 && seq2<upbound)	//insert the single node successfully
			{
				upbound = seq2;
				continue;
			}
		}
		seq3 = count(adt, q3h, q3t, size);
		if (seq3<0.00001)	//there is a single break node
		{
			seq3 = insertsingle(que, adt, q3h, size);
			if (seq3>0.00001 && seq3<upbound)	//insert the single node successfully
			{
				upbound = seq3;
				continue;
			}
		}
		if (flag==4)
		{
			seq4 = count(adt, q4h, q4t, size);
			if (seq4<0.00001)	//there is a single break node
			{
				seq4 = insertsingle(que, adt, q4h, size);
				if (seq4>0.00001 && seq4<upbound)	//insert the single node successfully
				{
					upbound = seq4;
					continue;
				}
			}
		}
		//if everything is ok, count all possible edge-switch costs
		mode = switchcost(q1, q2, q3, q4, &sub, flag);
		sum = seq1 + seq2 + seq3 + sub;
		if (flag==4)
			sum = sum + seq4;
		//floats cannot be compared exactly, so an error rate controls the loop
		if ((upbound-sum)<=0.00001 || mode==1)
		{
			flag++;
			continue;
		}
		if (flag==4)
			getorder4(q1t, q2h, q2t, q3h, q3t, q4h, q4t, q1h, size, mode, order);
		else
			getorder3(q1t, q2h, q2t, q3h, q3t, q1h, size, mode, order);
		for (i=0; i<size; i++)
		{
			j = order[i];
			que[i] = adt[j];	//record the new order in que
		}
		upbound = totalcost(que, size);	//critical step, change the upper bound
		//if (flag==4) flag--;	//back to 3-opt mode
	}	//the end of the main while loop (flag<=4)
	free(order);
	return upbound;
}
//===========================================================================
void getorder4(int q1t, int q2h, int q2t, int q3h, int q3t, int q4h, int q4t, int q1h, int size, int mode, int *order)
{
	switch (mode)
	{
	case 1:  setorder4(q1t, q2h, q2t, q3h, q3t, q4h, q4t, q1h, size, order); break;
	case 2:  setorder4(q1t, q2h, q2t, q3h, q3t, q4t, q4h, q1h, size, order); break;
	case 3:  setorder4(q1t, q2h, q2t, q3t, q3h, q4h, q4t, q1h, size, order); break;
	case 4:  setorder4(q1t, q2h, q2t, q3t, q3h, q4t, q4h, q1h, size, order); break;
	case 5:  setorder4(q1t, q2t, q2h, q3h, q3t, q4h, q4t, q1h, size, order); break;
	case 6:  setorder4(q1t, q2t, q2h, q3h, q3t, q4t, q4h, q1h, size, order); break;
	case 7:  setorder4(q1t, q2t, q2h, q3t, q3h, q4h, q4t, q1h, size, order); break;
	case 8:  setorder4(q1t, q2t, q2h, q3t, q3h, q4t, q4h, q1h, size, order); break;
	case 9:  setorder4(q1t, q4t, q4h, q3t, q3h, q2t, q2h, q1h, size, order); break;
	case 10: setorder4(q1t, q4h, q4t, q3t, q3h, q2t, q2h, q1h, size, order); break;
	case 11: setorder4(q1t, q4t, q4h, q3h, q3t, q2t, q2h, q1h, size, order); break;
	case 12: setorder4(q1t, q4h, q4t, q3h, q3t, q2t, q2h, q1h, size, order); break;
	case 13: setorder4(q1t, q4t, q4h, q3t, q3h, q2h, q2t, q1h, size, order); break;
	case 14: setorder4(q1t, q4h, q4t, q3t, q3h, q2h, q2t, q1h, size, order); break;
	case 15: setorder4(q1t, q4t, q4h, q3h, q3t, q2h, q2t, q1h, size, order); break;
	case 16: setorder4(q1t, q4h, q4t, q3h, q3t, q2h, q2t, q1h, size, order); break;
	case 17: setorder4(q1t, q2h, q2t, q4h, q4t, q3h, q3t, q1h, size, order); break;
	case 18: setorder4(q1t, q2h, q2t, q4h, q4t, q3t, q3h, q1h, size, order); break;
	case 19: setorder4(q1t, q2h, q2t, q4t, q4h, q3h, q3t, q1h, size, order); break;
	case 20: setorder4(q1t, q2h, q2t, q4t, q4h, q3t, q3h, q1h, size, order); break;
	case 21: setorder4(q1t, q2t, q2h, q4h, q4t, q3h, q3t, q1h, size, order); break;
	case 22: setorder4(q1t, q2t, q2h, q4h, q4t, q3t, q3h, q1h, size, order); break;
	case 23: setorder4(q1t, q2t, q2h, q4t, q4h, q3h, q3t, q1h, size, order); break;
	case 24: setorder4(q1t, q2t, q2h, q4t, q4h, q3t, q3h, q1h, size, order); break;
	case 25: setorder4(q1t, q3t, q3h, q4t, q4h, q2t, q2h, q1h, size, order); break;
	case 26: setorder4(q1t, q3h, q3t, q4t, q4h, q2t, q2h, q1h, size, order); break;
	case 27: setorder4(q1t, q3t, q3h, q4h, q4t, q2t, q2h, q1h, size, order); break;
	case 28: setorder4(q1t, q3h, q3t, q4h, q4t, q2t, q2h, q1h, size, order); break;
	case 29: setorder4(q1t, q3t, q3h, q4t, q4h, q2h, q2t, q1h, size, order); break;
	case 30: setorder4(q1t, q3h, q3t, q4t, q4h, q2h, q2t, q1h, size, order); break;
	case 31: setorder4(q1t, q3t, q3h, q4h, q4t, q2h, q2t, q1h, size, order); break;
	case 32: setorder4(q1t, q3h, q3t, q4h, q4t, q2h, q2t, q1h, size, order); break;
	case 33: setorder4(q1t, q3h, q3t, q2h, q2t, q4h, q4t, q1h, size, order); break;
	case 34: setorder4(q1t, q3h, q3t, q2h, q2t, q4t, q4h, q1h, size, order); break;
	case 35: setorder4(q1t, q3h, q3t, q2t, q2h, q4h, q4t, q1h, size, order); break;
	case 36: setorder4(q1t, q3h, q3t, q2t, q2h, q4t, q4h, q1h, size, order); break;
	case 37: setorder4(q1t, q3t, q3h, q2h, q2t, q4h, q4t, q1h, size, order); break;
	case 38: setorder4(q1t, q3t, q3h, q2h, q2t, q4t, q4h, q1h, size, order); break;
	case 39: setorder4(q1t, q3t, q3h, q2t, q2h, q4h, q4t, q1h, size, order); break;
	case 40: setorder4(q1t, q3t, q3h, q2t, q2h, q4t, q4h, q1h, size, order); break;
	case 41: setorder4(q1t, q4t, q4h, q2t, q2h, q3t, q3h, q1h, size, order); break;
	case 42: setorder4(q1t, q4h, q4t, q2t, q2h, q3t, q3h, q1h, size, order); break;
	case 43: setorder4(q1t, q4t, q4h, q2h, q2t, q3t, q3h, q1h, size, order); break;
	case 44: setorder4(q1t, q4h, q4t, q2h, q2t, q3t, q3h, q1h, size, order); break;
	case 45: setorder4(q1t, q4t, q4h, q2t, q2h, q3h, q3t, q1h, size, order); break;
	case 46: setorder4(q1t, q4h, q4t, q2t, q2h, q3h, q3t, q1h, size, order); break;
	case 47: setorder4(q1t, q4t, q4h, q2h, q2t, q3h, q3t, q1h, size, order); break;
	case 48: setorder4(q1t, q4h, q4t, q2h, q2t, q3h, q3t, q1h, size, order); break;
	default: ;
	}
}
//===========================================================================
void setorder4(int p1, int p2, int p3, int p4, int p5, int p6, int p7, int p8, int size, int *ary)
{
	int i;
	ary[0] = p1;
	for (i=1; i<size; i++)
	{
		if (p2>p3)
			ary[i] = p2--;
		else if (p2<p3)
			ary[i] = p2++;
		else
		{
			ary[i] = p2;
			break;
		}
	}
	for (++i; i<size; i++)
	{
		if (p4>p5)
			ary[i] = p4--;
		else if (p4<p5)
			ary[i] = p4++;
		else
		{
			ary[i] = p4;
			break;
		}
	}
	for (++i; i<size; i++)
	{
		if (p6>p7)
			ary[i] = p6--;
		else if (p6<p7)
			ary[i] = p6++;
		else
		{
			ary[i] = p6;
			break;
		}
	}
	for (++i; i<size; i++)
	{
		ary[i] = p8%size;
		p8++;
	}
}
//===========================================================================
void setbreak(vtx *que, vtx *q1, vtx *q2, vtx *q3, vtx *q4, int flag, int size)
{
	float larg1, larg2, larg3, larg4;
	int break1, break2, break3, break4, i;
	int brak[4] = {-1, -1, -1, -1};
	larg1 = larg2 = larg3 = larg4 = 0.0;
	break1 = break2 = break3 = break4 = 99999999;
	for(i=0; i<size; i++)
	{
		if(que[i].cost>larg1)
		{
			larg4 = larg3;
			brak[3] = brak[2];
			larg3 = larg2;
			brak[2] = brak[1];
			larg2 = larg1;
			brak[1] = brak[0];
			larg1 = que[i].cost;
			brak[0] = i;
			continue;
		}
		else if(que[i].cost>larg2)
		{
			larg4 = larg3;
			brak[3] = brak[2];
			larg3 = larg2;
			brak[2] = brak[1];
			larg2 = que[i].cost;
			brak[1] = i;
			continue;
		}
		else if(que[i].cost>larg3)
		{
			larg4 = larg3;
			brak[3] = brak[2];
			larg3 = que[i].cost;
			brak[2] = i;
			continue;
		}
		else if(que[i].cost>larg4)
		{
			larg4 = que[i].cost;
			brak[3] = i;
			continue;
		}
		else;
	}
	for(i=0; i<flag; i++)
	{
		if(brak[i]<break1)
		{
			break4 = break3;
			break3 = break2;
			break2 = break1;
			break1 = brak[i];
			continue;
		}
		else if(brak[i]<break2)
		{
			break4 = break3;
			break3 = break2;
			break2 = brak[i];
			continue;
		}
		else if(brak[i]<break3)
		{
			break4 = break3;
			break3 = brak[i];
			continue;
		}
		else if(brak[i]<break4)
		{
			break4 = brak[i];
			continue;
		}
		else;
	}
	q1[0] = que[break1];
	q1[0].idx = break1;
	q2[0] = que[break1+1];
	q2[0].idx = break1+1;
	q2[1] = que[break2];
	q2[1].idx = break2;
	q3[0] = que[break2+1];
	q3[0].idx = break2+1;
	q3[1] = que[break3];
	q3[1].idx = break3;
	if(flag==4)
	{
		q4[0] = que[break3+1];
		q4[0].idx = break3+1;
		q4[1] = que[break4];
		q4[1].idx = break4;
		q1[1] = que[(break4+1)%size];
		q1[1].idx = (break4+1)%size;
	}
	else
	{
		q1[1] = que[(break3+1)%size];
		q1[1].idx = (break3+1)%size;
	}
}
//===========================================================================
int switchcost(vtx *q1, vtx *q2, vtx *q3, vtx *q4, float *sub, int flag)
{
	int l, m, n, p, index, mode;
	float sum, sb;
	index = 1;		//initiating the value for mode
	sb = 9999999.0;		//initiating the value for recording sub length
	sum = 0.0;		//for counting the various combinations
	for(l=0; l<2; l++)	//figure out the shortest combination of edges
	{
	for(m=0; m<2; m++)
	{
	for(n=0; n<2; n++)
	{
		if(flag==4)
		{
			for(p=0; p<2; p++)
			{
				sum += cost(q1[l].x, q1[l].y, q2[m].x, q2[m].y);
				sum += cost(q2[m^1].x, q2[m^1].y, q3[n].x, q3[n].y);
				sum += cost(q3[n^1].x, q3[n^1].y, q4[p].x, q4[p].y);
				sum += cost(q4[p^1].x, q4[p^1].y, q1[l^1].x, q1[l^1].y);
				if (sum < sb)
				{
					sb = sum;
					mode = index;
				}
				sum = 0.0;
				sum += cost(q1[l].x, q1[l].y, q2[m].x, q2[m].y);
				sum += cost(q2[m^1].x, q2[m^1].y, q4[n].x, q4[n].y);
				sum += cost(q4[n^1].x, q4[n^1].y, q3[p].x, q3[p].y);
				sum += cost(q3[p^1].x, q3[p^1].y, q1[l^1].x, q1[l^1].y);
				if (sum < sb)
				{
					sb = sum;
					mode = index+16;
				}
				sum = 0.0;
				sum += cost(q1[l].x, q1[l].y, q3[m].x, q3[m].y);
				sum += cost(q3[m^1].x, q3[m^1].y, q2[n].x, q2[n].y);
				sum += cost(q2[n^1].x, q2[n^1].y, q4[p].x, q4[p].y);
				sum += cost(q4[p^1].x, q4[p^1].y, q1[l^1].x, q1[l^1].y);
				if (sum < sb)
				{
					sb = sum;
					mode = index+32;
				}
				sum = 0.0;
				index += 1;
			}
		}
		else
		{
			sum += cost(q1[l].x, q1[l].y, q2[m].x, q2[m].y);
			sum += cost(q2[m^1].x, q2[m^1].y, q3[n].x, q3[n].y);
			sum += cost(q3[n^1].x, q3[n^1].y, q1[l^1].x, q1[l^1].y);
			if (sum < sb)
			{
				sb = sum;
				mode = index;
			}
			sum = 0.0;
			index += 1;
		}
	}
	}
	}
	*sub = sb;
	return mode;
}
//===========================================================================
float insertsingle(vtx *que, vtx *origi, int start, int size)
{
	int i, j, k, position;
	float upbound, limit, sumcos, length;
	vtx temp;
	if (start==0)
	{	j = size-1;
		k = 1;
	}
	else if (start==(size-1))
	{	k = 0;
		j = start-1;
	}
	else
	{	j = start-1;
		k = start+1;
	}
	upbound = cost(origi[start].x, origi[start].y, origi[j].x, origi[j].y) +
		cost(origi[start].x, origi[start].y, origi[k].x, origi[k].y) -
		cost(origi[j].x, origi[j].y, origi[k].x, origi[k].y);
	limit = upbound;
	if (start==0)
	{	k = 2;
		j = 1;
	}
	else if (start==(size-1))
	{	k = 1;
		j = 0;
	}
	else
	{	j = start+1;
		k = j+1;
		if(k>=size) k=0;
	}
	for(i=0; i<size-2; i++)
	{
		sumcos = 0.0;
		if(j>=size) j=0;
		if(k>=size) k=0;
		length = cost(origi[start].x, origi[start].y, origi[j].x, origi[j].y);
		sumcos += length;
		length = cost(origi[start].x, origi[start].y, origi[k].x, origi[k].y);
		sumcos += length;
		length = cost(origi[j].x, origi[j].y, origi[k].x, origi[k].y);
		sumcos -= length;
		if ((upbound-sumcos)>0.00001)
		{
			upbound = sumcos;
			position = k;	//the biggest position will be equal to count
		}
		j++; k++;
	}
	if((limit-upbound)<0.00001)
		return 0.0;	//that means no improvement
	temp = origi[start];
	k = start;
	if (position>start)
	{	for(; k<position; k++)
			origi[k] = origi[k+1];
	}
	if (position<start)
	{	for(; k>position; k--)
			origi[k] = origi[k-1];
	}
	origi[k] = temp;	//insert the single node into the shortest position
	for (i=0; i<size; i++)
		que[i] = origi[i];	//record the new order in que
	upbound = totalcost(que, size);	//critical step, change upbound
	return upbound;
}
//===========================================================================
Program for implementing the New 2 algorithm
void insertion(vtx *que, vtx *origi, int size)
{
	int count, i, j, k, position;
	float upbound, sumcos, length;
	for(count=0; count<3; count++)
		que[count] = origi[count];
	while(count<size)	//that is an O(n^2) algorithm
	{
		upbound = 9999999.0;
		for(j=0; j<count; j++)
		{
			sumcos = 0.0;
			i = j+1;
			if(i==count) i=0;
			length = cost(origi[count].x, origi[count].y, que[j].x, que[j].y);
			sumcos += length;
			length = cost(origi[count].x, origi[count].y, que[i].x, que[i].y);
			sumcos += length;
			length = cost(que[j].x, que[j].y, que[i].x, que[i].y);
			sumcos -= length;
			if (sumcos<upbound)
			{
				upbound = sumcos;
				position = j+1;	//the biggest position will be equal to count
			}
		}
		k = count;
		while(k>position)	//insert the new node into the shortest position
		{
			que[k] = que[k-1];
			k--;
		}
		que[k] = origi[count];
		count++;
	}	//the end of while(count<size)
}
//===========================================================================
Program for implementing the random 3-opt algorithm
//implementation of the 3-opt algorithm for the TSP tour, used for the VB interface
//using a random start point to do edge exchanges
//revised count() and setorder(); do 0.45sqrt(n)*n switchings to implement the tour
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <string.h>
#include "common.h"
#define TOUR 0
#define RAND 1
void setstart(vtx *dester, vtx *source, int j);
float count(vtx *adt, int a, int b, int size);
float bell(vtx *que, vtx *adt, int sep1, int sep2, int start, int size, float cos);
void setorder(int p1, int p2, int p3, int p4, int p5, int p6, int size, int *ary);
//--------------------------------------------------------------
int main(int argc, char *argv[])
{
	int i, j, k, l, m, n, repeat, nochange, flag, size;
	float sumcos, aversum, longest, shortest;
	char title1[80] = "========= Running Results of 3-Opt Algorithm =========";
	char title2[80] = "========= Paths of the Resulted 3-Opt Tour ==========";
	char origcity[30] = "";
	char opttour[30] = "C:\\wxiaozh\\opttour";
	char optsum[30] = "C:\\wxiaozh\\optsum";
	char optdata[30] = "C:\\wxiaozh\\optdata";
	vtx *origi;
	vtx *adt;
	vtx *que;
	if (argc<3)
	{
		printf("Running command should be as in the following example: \n");
		printf("execution-file input-file number-of-vertex [number-of-iterate]\n");
		printf(" *.exe inputfile integer [integer] \n");
		exit (0);
	}
	j = atoi(argv[2]);
	strcpy(origcity, argv[1]);
	n = 1;
	if (argc>3) n = atoi(argv[3]);	//if a fourth argument, get the number of iterations
	repeat = n;
	nochange = 0;
	flag = 1;
	longest = 0.0;
	shortest = 999999.0;
	aversum = 0.0;
	origi = dynmem(j);	//set memory for computing
	que = dynmem(j);
	adt = dynmem(j);
	while(n>0)	//this loop is for the number of iterations to execute
	{
		if (n == 1)	//set vertex array with the designated input data file
			setvertices(origcity, origi, j);
		else	//get input data from the program's random function
			setvertex(origi, j);
		for(i=0; i<j; i++)
			que[i] = origi[i];
		//the optimal formula to get the value of k
		//k = (int)(sqrt(j)*4.5);
		k = j;	//running time will be j squared
		sumcos = 999999.0;
		while(k--)	//k is the number of times to change the start vertex of the graph so
		{		//that a new cut-swap can be used
			setstart(adt, que, j);
			size = (int)(sqrt(j)*4.5);
			//size = j;
			while(size--)	//size is the number of times to do cut-swap while the start vertex
			{		//is not changed. The total complexity is k squared.
				while(1)
				{
					m = (int)(urand()*1000)%j;	//get a random cutting point of the array
					l = (int)(urand()*1000)%j;	//get a random cutting point of the array
					if(abs(m-l)>2 && m>1 && l>1)
						break;
				}
				sumcos = bell(que, adt, m, l, 0, j, sumcos);	//critical function doing 3-opt
			}	//while(size--) end
		}	//while(k--) end
		aversum = aversum + sumcos;
		if(sumcos>longest) longest = sumcos;
		if(sumcos<shortest) shortest = sumcos;
		n--;
	}	//end of while(n>0)
	nochange = -1;
	aversum = aversum/repeat;
	writevertices(opttour, que, j);	//write a data file of all vertex coordinates
	printdata(optsum, title1, sumcos, aversum, longest, shortest, repeat, nochange, j);
#if TOUR
	printpath(optdata, title2, que, j);	//write the whole tour of the MST algorithm
#endif
	free(origi);
	free(que);
	free(adt);
}	//end of main
//===========================================================================
void setstart(vtx *dester, vtx *source, int j)
{
	int start, f, idx;
	start = (int)(urand()*1000)%j;	//get a random start point
#if RAND	//if random starting mode, change the first node
	f = 0;
	idx = 0;
#else		//if not random starting mode, change only from the second node
	while(start==0)
		start = (int)(urand()*1000)%j;	//here start should be != 0
	f = 1;
	idx = 1;
	dester[0] = source[0];	//always keep the first node unchanged
#endif
	for(; f<j; f++)
	{
		dester[f] = source[start++];
		if(start>=j)
			start = idx;
	}
}
float bell(vtx *que, vtx *adt, int sep1, int sep2, int start, int size, float cos)
{
	int q1h, q1t, q2h, q2t, q3h, q3t;
	float sum, seq1, seq2, seq3;
	vtx q1[2], q2[2], q3[2];
	float sub = 999999.0;	//initiating the value for recording sub length
	int index = 1;		//initiating the value for mode
	int i, j, mode, l, m, n;
	int *order;
	q1t = start;
	q2h = start + 1;
	q2t = (sep1<sep2)?sep1:sep2;
	q3h = q2t + 1;
	q3t = (sep1>sep2)?sep1:sep2;
	q1h = q3t + 1;
	q1[0] = adt[q1t];
	q1[1] = adt[q1h];
	q2[0] = adt[q2h];
	q2[1] = adt[q2t];
	q3[0] = adt[q3h];
	q3[1] = adt[q3t];
	order = (int *) calloc(size, sizeof(int));
	if (order == NULL)
	{
		printf("can not get space for order array.\n");
		exit (0);
	}
	seq1 = count(adt, q1h, q1t, size);	//count every sequence length
	seq2 = count(adt, q2h, q2t, size);
	seq3 = count(adt, q3h, q3t, size);
	sum = 0.0;
	for(l=0; l<2; l++)
	{
	for(m=0; m<2; m++)
	{
	for(n=0; n<2; n++)
	{
		sum += cost(q1[l].x, q1[l].y, q2[m].x, q2[m].y);
		sum += cost(q2[m^1].x, q2[m^1].y, q3[n].x, q3[n].y);
		sum += cost(q3[n^1].x, q3[n^1].y, q1[l^1].x, q1[l^1].y);
		if (sum < sub)
		{
			sub = sum;
			mode = index;
		}
		sum = 0.0;
		index += 1;
	}
	}
	}
	sum = seq1 + seq2 + seq3 + sub;
	if(sum>cos)
	{
		free(order);	//release the order array before the early return
		return cos;
	}
	switch (mode)
	{
	case 1:
		setorder(q1t, q2h, q2t, q3h, q3t, q1h, size, order); break;
	case 2:
		setorder(q1t, q2h, q2t, q3t, q3h, q1h, size, order); break;
	case 3:
		setorder(q1t, q2t, q2h, q3h, q3t, q1h, size, order); break;
	case 4:
		setorder(q1t, q2t, q2h, q3t, q3h, q1h, size, order); break;
	case 5:
		setorder(q1t, q3t, q3h, q2t, q2h, q1h, size, order); break;
	case 6:
		setorder(q1t, q3h, q3t, q2t, q2h, q1h, size, order); break;
	case 7:
		setorder(q1t, q3t, q3h, q2h, q2t, q1h, size, order); break;
	case 8:
		setorder(q1t, q3h, q3t, q2h, q2t, q1h, size, order); break;
	default:;
	}
	for (i=0; i<size; i++)
	{
		j = order[i];
		que[i] = adt[j];	//record the new order in que
	}
	for (i=0; i<size; i++)
		adt[i] = que[i];	//copy the new order into adt
	free(order);
	return sum;
}
//===========================================================================
Program for implementing the random insertion algorithm
void insertion(vtx *origi, vtx *que, int size)
{
	int count, i, j, k, position;
	float upbound, sumcos, length;
	for(count=0; count<3; count++)
		que[count] = origi[count];
	while(count<size)	//follows maybe not right! Check it.
	{
		upbound = 9999999.0;
		for(j=0; j<count; j++)
		{
			sumcos = 0.0;
			i = j+1;
			if(i==count) i=0;
			length = cost(origi[count].x, origi[count].y, que[j].x, que[j].y);
			sumcos += length;
			length = cost(origi[count].x, origi[count].y, que[i].x, que[i].y);
			sumcos += length;
			length = cost(que[j].x, que[j].y, que[i].x, que[i].y);
			sumcos -= length;
			if (sumcos<upbound)
			{
				upbound = sumcos;
				position = j+1;
			}
		}
		k = count;
		while(k>position)
		{
			que[k] = que[k-1];
			k--;
		}
		que[k] = origi[count];
		count++;
	}	//the end of while(count<size)
}
VITA
Xiaozhen Wang
Candidate for the Degree of
Master of Science
Thesis: MODIFIED MST ALGORITHMS FOR TSP TOURS
Major Field: Computer Science
Biographical:
Personal Data: Born in Shanghai, China, the son of Chongdao Wang and
Wenqin Zhang.
Education: Received a Bachelor of Laws degree in Political Science and a Master of
Laws in Sociology from East China Normal University, Shanghai, China, in
July 1982 and July 1985, respectively. Completed the requirements for the
Master of Science degree with a major in Computer Science at Oklahoma
State University in May 2000.