



CULTURAL PARTICLE SWARM OPTIMIZATION

MOAYED DANESHYARI

Bachelor of Science, Electrical Engineering
Sharif University of Technology
Tehran, Iran
1995

Master of Science, Biomedical Engineering
Iran University of Science and Technology
Tehran, Iran
1998

Master of Science, Physics
Oklahoma State University
Stillwater, Oklahoma
2007

Submitted to the Faculty of the Graduate College of Oklahoma State University in partial fulfillment of the requirements for the Degree of DOCTOR OF PHILOSOPHY, July 2010

Dissertation Approved:

Dr. Gary G. Yen, Dissertation Adviser
Dr. Carl D. Latino
Dr. Louis G. Johnson
Dr. R. Russell Rhinehart
Dr. A. Gordon Emslie, Dean of the Graduate College

ACKNOWLEDGEMENTS

I would like to first thank my academic advisor, Professor Gary G. Yen, for his guidance, support, and especially his patience with all the ups and downs during the years in which this dissertation gradually took shape. Were it not for his flexibility with my changing circumstances and the freedom he gave me to fully experience all aspects of academic research, especially in the last two years, this research could never have been completed. I would also like to extend my appreciation to the other committee members, whose guidance, comments, and review of the research were of great importance in improving the quality of this document. My thanks also go to all my former colleagues at the Intelligent Systems and Control Laboratory at Oklahoma State University, who accompanied my progress throughout part of this research by offering me new ideas. I am also thankful to my colleagues at Elizabeth City State University, where I currently serve as Assistant Professor, whose help and flexibility gave me more time to focus on my Ph.D. research.
Finally, I would like to express my gratitude to my parents, Farideh and Ahmad, and to my sister, Matin, who have always supported me throughout my years of study and provided understanding despite living far from me. Last, but not least, I would like to specially thank my family, my wife, Lily, and my little son, Ryan, for their understanding, help, and support, and for providing an appropriate environment for me to work on my research during my years of doctoral study. Were it not for her verbal and spiritual support and for his innocence and happiness encouraging me to work harder, this study could never have been accomplished.

Moayed Daneshyari

Table of Contents

CHAPTER I  INTRODUCTION .......... 1
CHAPTER II  LITERATURE REVIEW .......... 12
CHAPTER III  SOCIETY AND CIVILIZATION FOR OPTIMIZATION .......... 29
  3.1 Introduction .......... 29
  3.2 Social-based Algorithm for Optimization .......... 31
    3.2.1 Proposed Modifications .......... 39
  3.3 Simulation Results .......... 42
  3.4 Discussions .......... 44
CHAPTER IV  DIVERSITY-BASED INFORMATION EXCHANGE FOR PARTICLE SWARM OPTIMIZATION .......... 46
  4.1 Introduction .......... 46
  4.2 Review of Related Work .......... 48
  4.3 Diversity-based Information Exchange among Swarms in PSO .......... 54
  4.4 Simulation Results .......... 61
  4.5 Discussions .......... 71
CHAPTER V  CULTURAL-BASED MULTIOBJECTIVE PARTICLE SWARM OPTIMIZATION .......... 73
  5.1 Introduction .......... 73
  5.2 Review of Literature .......... 77
    5.2.1 Related Works in Multiobjective PSO .......... 77
    5.2.2 Related Work in Cultural Algorithm for Multiobjective Optimization .......... 79
  5.3 Cultural-based Multiobjective Particle Swarm Optimization .......... 80
    5.3.1 Acceptance Function .......... 81
    5.3.2 Belief Space .......... 82
      5.3.2.1 Situational Knowledge .......... 82
      5.3.2.2 Normative Knowledge .......... 84
      5.3.2.3 Topographical Knowledge .......... 86
    5.3.3 Influence Functions .......... 89
      5.3.3.1 Adapting Global Acceleration .......... 89
      5.3.3.2 Adapting Local Acceleration .......... 91
      5.3.3.3 Adapting Momentum .......... 93
      5.3.3.4 Selection .......... 94
      5.3.3.5 Selection .......... 95
    5.3.4 Global Archive .......... 96
    5.3.5 Time-decaying Mutation Operator .......... 98
  5.4 Comparative Study and Sensitivity Analysis .......... 99
    5.4.1 Comparison Experiment .......... 100
      5.4.1.1 Parameter Settings .......... 100
      5.4.1.2 Benchmark Test Functions .......... 100
      5.4.1.3 Qualitative Performance Comparisons .......... 102
      5.4.1.4 Quantitative Performance Evaluations .......... 103
    5.4.2 Sensitivity Analysis .......... 118
  5.5 Discussions .......... 136
CHAPTER VI  CONSTRAINED CULTURAL-BASED OPTIMIZATION USING MULTIPLE-SWARM PSO WITH INTER-SWARM COMMUNICATION .......... 139
  6.1 Introduction .......... 139
  6.2 Review of Literature .......... 142
    6.2.1 Related Work in Constrained PSO .......... 142
    6.2.2 Related Works in Cultural Algorithm for Constrained Optimization .......... 146
  6.3 Cultural Constrained Optimization Using Multiple-Swarm PSO .......... 147
    6.3.1 Multi-Swarm Population Space .......... 149
    6.3.2 Acceptance Function .......... 150
    6.3.3 Belief Space .......... 151
      6.3.3.1 Normative Knowledge .......... 152
      6.3.3.2 Spatial Knowledge .......... 153
      6.3.3.3 Situational Knowledge .......... 155
      6.3.3.4 Temporal Knowledge .......... 156
    6.3.4 Influence Functions .......... 158
      6.3.4.1 Selection .......... 158
      6.3.4.2 Selection .......... 158
      6.3.4.3 Selection .......... 159
      6.3.4.4 Inter-Swarm Communication Strategy .......... 159
  6.4 Comparative Study .......... 162
    6.4.1 Parameter Settings .......... 162
    6.4.2 Benchmark Test Functions .......... 163
    6.4.3 Simulation Results .......... 164
    6.4.4 Convergence Graphs .......... 172
    6.4.5 Algorithm Complexity .......... 178
    6.4.6 Performance Comparison .......... 178
    6.4.7 Sensitivity Analysis .......... 179
  6.5 Discussions .......... 183
CHAPTER VII  DYNAMIC OPTIMIZATION USING CULTURAL-BASED PARTICLE SWARM OPTIMIZATION .......... 186
  7.1 Introduction .......... 186
  7.2 Review of Literature .......... 191
    7.2.1 Related Work in Dynamic PSO .......... 191
    7.2.2 Related Works in Cultural Algorithm for Dynamic Optimization .......... 196
  7.3 Cultural Particle Swarm for Dynamic Optimization .......... 196
    7.3.1 Multi-Swarm Population Space .......... 198
    7.3.2 Acceptance Function .......... 201
    7.3.3 Belief Space .......... 202
      7.3.3.1 Situational Knowledge .......... 202
      7.3.3.2 Temporal Knowledge .......... 203
      7.3.3.3 Domain Knowledge .......... 204
      7.3.3.4 Normative Knowledge .......... 208
      7.3.3.5 Spatial Knowledge .......... 212
    7.3.4 Influence Functions .......... 215
      7.3.4.1 pbest Selection .......... 215
      7.3.4.2 sbest Selection .......... 216
      7.3.4.3 gbest Selection .......... 216
      7.3.4.4 Diversity-based Migration Driven by Change .......... 216
  7.4 Experimental Study .......... 218
    7.4.1 Benchmark Test Problems .......... 219
    7.4.2 Comparison Algorithms .......... 220
    7.4.3 Comparison Measure .......... 222
    7.4.4 Simulation Results .......... 222
  7.5 Discussions .......... 232
CHAPTER VIII  CONCLUSION .......... 235
BIBLIOGRAPHY .......... 241
APPENDIX A  BENCHMARK TEST FUNCTIONS FOR MULTIOBJECTIVE OPTIMIZATION PROBLEMS ..........
262
APPENDIX B  BENCHMARK TEST FUNCTIONS FOR CONSTRAINED OPTIMIZATION PROBLEMS .......... 265
APPENDIX C  BENCHMARK TEST FUNCTIONS FOR DYNAMIC OPTIMIZATION PROBLEMS .......... 289

List of Figures

3.1 Flowchart for social-based single-objective optimization 34
3.2 Flowchart for identifying leaders 35
3.3 Flowchart on how to migrate individuals 38
3.4 Pseudocode for individuality importance in intra-society migration 40
3.5 Schema for the Spring Design problem 43
3.6 Comparison of best objective function for the proposed modifications 44
4.1 Ring and random sequential migration 56
4.2 Main algorithm for diversity-based multiple PSO 56
4.3 Schema of swarm neighborhood 59
4.4 Main algorithm for diversity-based multiple PSO with neighborhood 60
4.5 Benchmark function F1 with five peaks and four valleys 65
4.6 Final best particles for F1 65
4.7 Benchmark function F2 with 10 peaks 66
4.8 Final best particles for F2 66
4.9 Benchmark function F3 with two peaks and one valley 67
4.10 Final best particles for F3 67
4.11 Benchmark function F4 with five peaks 68
4.12 Final best particles for F4 68
4.13 Benchmark function F5 with six peaks 69
4.14 Final best particles for F5 69
5.1 Schema of a particle's movement in MOPSO 76
5.2 Pseudocode of the cultural MOPSO 81
5.3 Schema of the adopted cultural framework 82
5.4 Representation of situational knowledge 83
5.5 Schematic view of choosing the ith element of situational knowledge 84
5.6 Representation of normative knowledge 85
5.7 Schema on how normative knowledge can be found and updated 88
5.8 Representation of knowledge in each cell 88
5.9 Example of cell representation 89
5.10 Schema of local grid for the personal archive 93
5.11 Method of selecting from topographical knowledge 96
5.12 Selection procedure from the personal archive 97
5.13 Pareto fronts comparison on test function ZDT1 105
5.14 Pareto fronts comparison on test function ZDT2 106
5.15 Pareto fronts comparison on test function ZDT3 107
5.16 Pareto fronts comparison on test function ZDT4 108
5.17 Pareto fronts comparison on test function DTLZ5 109
5.18 Pareto fronts comparison on test function DTLZ6 110
5.19 Box plot of hypervolume indicator for all test functions 111
5.20 Box plot for additive binary epsilon indicator on test function ZDT1 115
5.21 Box plot for additive binary epsilon indicator on test function ZDT2 115
5.22 Box plot for additive binary epsilon indicator on test function ZDT3 116
5.23 Box plot for additive binary epsilon indicator on test function ZDT4 116
5.24 Box plot for additive binary epsilon indicator on test function DTLZ5 117
5.25 Box plot for additive binary epsilon indicator on test function DTLZ6 117
5.26 Sensitivity analyses with respect to minimum personal acceleration 123
5.27 Sensitivity analyses with respect to maximum personal acceleration 124
5.28 Sensitivity analyses with respect to minimum global acceleration 125
5.29 Sensitivity analyses with respect to maximum global acceleration 126
5.30 Sensitivity analyses with respect to minimum momentum 127
5.31 Sensitivity analyses with respect to maximum momentum 128
5.32 Sensitivity analyses with respect to grid size 129
5.33 Sensitivity analyses with respect to population size 130
5.34 Sensitivity analyses with respect to mutation rate 131
6.1 Pseudocode of the cultural constrained particle swarm optimization 148
6.2 Schema of the cultural framework adopted 151
6.3 Representation for normative knowledge 152
6.4 Schema representing how the spatial knowledge is computed 154
6.5 Representation of spatial knowledge for each particle 155
6.6 Representation for situational knowledge 156
6.7 Representation for temporal knowledge 157
6.8 Convergence graphs for problems 174
6.9 Convergence graphs for problems 175
6.10 Convergence graphs for
problems 176
6.11 Convergence graphs for problems 177
7.1 Pseudocode of the cultural-based dynamic PSO 198
7.2 Schema of the cultural framework adopted here 201
7.3 Representation for situational knowledge 203
7.4 Representation for temporal knowledge 203
7.5 Representation for the domain knowledge 206
7.6 Representation of normative knowledge 208
7.7 Representation for spatial knowledge 212
7.8 Sigmoid function to compute repulsion factor in spatial knowledge 213
7.9 Comparison of OEV as a function of elapsed iterations on function MP1 225
7.10 Comparison of OEV as a function of peak numbers on function MP1 225
7.11 Comparison of OEV as a function of dimension on function MP1 226

List of Tables

3.1 Comparison of results for the Spring Design problem 44
4.1 Results for optimal found and mean best objective for F1, F2, F3 and F5 70
4.2 Mean best objectives for F6, F7, F8, and F9 71
5.1 Parameter settings for all MOPSOs 101
5.2 Testing of the distribution of IH values using the Mann-Whitney test 112
5.3 Testing of the distribution using the Mann-Whitney test 118
5.4 Parameter selection for sensitivity analysis 119
5.5 Statistical test to check sensitivity to minimum personal acceleration 132
5.6 Statistical test to check sensitivity to maximum personal acceleration 132
5.7 Statistical test to check sensitivity to minimum global acceleration 133
5.8 Statistical test to check sensitivity to maximum global acceleration 133
5.9 Statistical test to check sensitivity to minimum momentum 134
5.10 Statistical test to check sensitivity to maximum momentum 134
5.11 Statistical test to check sensitivity to grid size 135
5.12 Statistical test to check sensitivity to population size 135
5.13 Statistical test to check sensitivity to mutation rate 136
6.1 Parameter settings for cultural CPSO 162
6.2 Summary of 24 benchmark test functions 165
6.3 Error values for different FEs on problems 166
6.4 Error values for different FEs on problems 167
6.5 Error values for different FEs on problems 168
6.6 Error values for different FEs on problems 169
6.7 Number of function evaluations to achieve the fixed accuracy level, Success Rate, Feasibility Rate, and Success Performance 171
6.8 Summary of statistical results found by cultural CPSO 173
6.9 Computational complexity 178
6.10 Comparison of cultural CPSO with state-of-the-art constrained optimization methods in terms of feasible rate 180
6.11 Comparison of cultural CPSO with state-of-the-art constrained optimization methods in terms of success rate 181
6.12 Sensitivity analysis with respect to personal acceleration 182
6.13 Sensitivity analysis with respect to swarm acceleration 183
6.14 Sensitivity analysis with respect to global acceleration 184
6.15 Sensitivity analysis with respect to rate of information exchange 185
7.1 Parameter settings for different paradigms 221
7.2 OEV index after 500,000 FEs on test problem MP1 224
7.3 OEV index after 500,000 FEs on test problem DF2 227
7.4 OEV index after 500,000 FEs on test problem DF3 228
7.5 OEV index after 500,000 FEs on test problem DF4 228
7.6 OEV index after 500,000 FEs on test problem DF5 229
7.7 OEV index after 500,000 FEs on test problem DF6 231
7.8 P-values using the Mann-Whitney rank-sum test 231
7.9 OEV index after 50,000 FEs using default parameters 232
B.1 Data set for test problem 282
B.2 Data set for test problem 283

Nomenclature

Number of decision variables; dimension of decision variables
Number of particles; number of individuals; population size
Number of constraints
Number of objectives
Number of swarms; number of societies
Number of inequality constraints
Tolerance for equality constraints
Population of the ith swarm; number of individuals in the ith society
Inequality constraint
Equality constraint
Personal best particle in PSO
Global best particle in PSO
Neighborhood best particle in PSO
Swarm best particle in PSO
Inequality constraint
Equality constraint
Personal acceleration in PSO
Global acceleration in
PSO
Neighborhood acceleration in PSO
Swarm acceleration in PSO
Momentum in PSO

CHAPTER I

INTRODUCTION

Computational intelligence approaches based upon psychosocial studies of human or animal societies are the subject of the emerging research area known as swarm intelligence. Research in swarm intelligence has focused on optimization in the spirit of the particle swarm [1], the ant colony system [2], and cultural algorithms [3]. While the population-based heuristics adopted in swarm intelligence carry no mathematical guarantee of finding the global optimum of the search space, they perform remarkably well on many types of optimization problems. Particle swarm optimization (PSO) imitates the collaborative behavior of birds flying together and exchanging information, while the ant colony system is based on the way individual ants interact with each other through their pheromone trails. The cultural algorithm (CA) is a dual-inheritance system in which the collective behavior of the population of individuals constructs a belief space, which in turn is accessible to all individuals in the population space. Additionally, the multinational algorithm [4] solves difficult multimodal optimization problems by using heuristics that imitate political interactions among nations. In another heuristic, based on the relation between society and civilization [5], the inter-society and intra-society relationships among individuals form the basis of an optimization model. The whole population of individuals, called the civilization, is clustered into societies based on the Euclidean closeness of the individuals. The performance of the individuals determines which of them are the leaders of each society. The remaining individuals follow the leaders in order to improve themselves, which leads to migration (intra-society interaction).
From the civilization viewpoint, the leaders of the societies improve themselves by migrating toward the best-performing leaders, who are the civilization leaders (inter-society interaction). The weakness of this paradigm is that it fails to use the information available from all of the individuals. Particle swarm optimization updates the positions and velocities of the particles in a manner that optimizes a goal function. PSO has demonstrated promising performance on many optimization problems; yet its fast convergence often leads to premature convergence, in which a local optimum of the goal function is found instead of the global one. The tradeoff between fast convergence and being trapped in local optima is even more critical for multimodal functions. In order to escape from local optima and avoid premature convergence, the search for the global optimum should be diverse. Many researchers have improved the performance of PSO by making its search more diverse. Specifically, some have proposed using multiple swarms, each running PSO, and then exchanging information among them. The weakness of these algorithms is that they do not consider a diverse list of information to exchange, which again results in premature convergence. Exchanging information among clusters has also been adopted as an important design element in several computational methods. The distributed genetic algorithm [6] employs the GA mechanism to evolve several subpopulations in parallel. At regular intervals, migration among the subpopulations takes place: a proportion of each subpopulation is selected and sent to another subpopulation, and the migrant individuals replace others according to a replacement policy. Several population-based heuristics have been developed to solve multiobjective optimization problems (MOPs), among which the multiobjective evolutionary algorithm (MOEA) and multiobjective particle swarm optimization (MOPSO) are two popular paradigms.
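The canonical position-and-velocity update that the PSO variants discussed in this dissertation build on can be sketched as follows. This is a minimal single-objective version; the sphere objective, parameter values, and function names are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

def sphere(x):
    """Illustrative objective: minimize the sum of squares."""
    return float(np.sum(x ** 2))

def pso(obj, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal single-objective PSO; all parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                 # velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Canonical update: momentum + cognitive (pbest) + social (gbest) terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([obj(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(np.min(pbest_f))

best_x, best_f = pso(sphere)
```

With convergent parameter settings such as these, the swarm typically collapses quickly around the sphere minimum, which is exactly the fast-convergence behavior that becomes premature convergence on multimodal functions.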
Although much research on single-objective PSO has suggested dynamic weights for the local and global acceleration, most MOPSO researchers assume that all particles move with identical momentum and identical local and global acceleration. To the best of our knowledge, no study has considered a case in which particles fly with different "personalized" weights for the momentum and the local and global acceleration. Employing a personalized weight for each particle assigns a proper jump to that particle, contributing to the overall effectiveness of the algorithm. One computational difficulty is tuning proper values for the momentum and the personal and global acceleration in MOPSO in order to attain the best results on different test functions. From a biological point of view, work presented in [7] has also shown that societies that can handle more complex tasks contain polymorphic individuals. Polymorphism is a significant feature of social complexity that results in differentiated individuals. The more differentiated the society, the more easily it can handle complex tasks. Differentiation applies in principle to complex societies of prokaryotic cells and multicellular organisms, as well as to colonies of multicellular individuals such as ants, wasps, bees, and so forth. Colony performance improves if individuals differentiate in order to specialize in particular tasks. As a result of differentiation, individuals perform functions more efficiently. Their study showed that a colony's capacity for higher cooperative activity when tackling tasks is a direct consequence of differentiation, among other factors. A few studies in the MOPSO research area have tackled the issue of variable momentum for the particles, although in all of them the momentum is identical for all particles at a given iteration.
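The "personalized" weights idea can be illustrated with a minimal sketch in which each particle carries its own momentum and acceleration coefficients instead of sharing scalars. The sampling ranges and variable names here are assumptions for illustration, not the adaptation scheme proposed in Chapter V:

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 10, 3

# Hypothetical per-particle flight parameters: particle i carries its own
# momentum w[i] and accelerations c1[i], c2[i] (shared scalars in standard PSO).
w = rng.uniform(0.4, 0.9, n)    # personalized momentum
c1 = rng.uniform(1.0, 2.0, n)   # personalized cognitive acceleration
c2 = rng.uniform(1.0, 2.0, n)   # personalized social acceleration

x = rng.uniform(-1.0, 1.0, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
gbest = x[0].copy()

r1 = rng.random((n, dim))
r2 = rng.random((n, dim))
# Broadcasting w[:, None] applies each particle's own weight to its velocity row.
v = w[:, None] * v + c1[:, None] * r1 * (pbest - x) + c2[:, None] * r2 * (gbest - x)
x = x + v
```

The only structural change from standard PSO is that the coefficients become length-n vectors broadcast across dimensions, which is what allows each particle a different jump size.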
Some MOPSO paradigms have proposed simple strategies that adapt the momentum by decreasing it throughout swarming, while other MOPSO algorithms choose a random momentum at every iteration. To the best knowledge of the author, there is no noticeable study in MOPSO on adapting personalized dynamic momentum and acceleration based upon the particles' need for exploration or exploitation. Constrained optimization is another area that has been addressed with population-based paradigms during the last two decades. Swarm-based algorithms have recently been developed to handle constraints in this type of problem. Although there are a few research studies on PSO for constrained optimization problems, none of them adopts information from all particles to perform communication within PSO in order to share common interests and act synchronously. When particles share their information through communication with each other, they are able to handle the constraints and optimize the objective function efficiently. From a sociological point of view, studies have shown that human societies migrate from one place to another in order to cope with their own life constraints and limitations, as well as to reach a better economic, social, or political life [8]. People living in different societies migrate in spite of different value systems and cultural differences. Indeed, cultural belief is an important factor affecting the issues underlying migration phenomena [9]. On the other hand, finding the appropriate information for communication within a swarm can be computationally expensive. One computational difficulty is finding the appropriate information to communicate within PSO so that the algorithm can simultaneously handle the constraints and optimize the objective function. The optimum solution of many real-world optimization problems changes over time.
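Returning to the constraint-handling discussion above: one common, simple baseline (not the cultural communication strategy proposed in Chapter VI) is to fold constraint violations into the fitness as a static penalty, so that an otherwise unmodified PSO can rank particles. The function name `penalized_fitness`, the penalty weight `rho`, and the example problem are all illustrative assumptions:

```python
import numpy as np

def penalized_fitness(x, obj, ineq_constraints, eq_constraints, eps=1e-4, rho=1e3):
    """Static-penalty fitness: objective plus a weighted sum of violations.
    Inequality constraints are g(x) <= 0; equalities are relaxed to |h(x)| <= eps."""
    g_viol = sum(max(0.0, g(x)) for g in ineq_constraints)
    h_viol = sum(max(0.0, abs(h(x)) - eps) for h in eq_constraints)
    return obj(x) + rho * (g_viol + h_viol)

# Illustrative problem: minimize x0^2 + x1^2 subject to x0 + x1 >= 1,
# rewritten in the standard g(x) <= 0 form.
obj = lambda x: x[0] ** 2 + x[1] ** 2
g1 = lambda x: 1.0 - x[0] - x[1]

feasible = np.array([0.5, 0.5])     # satisfies the constraint
infeasible = np.array([0.0, 0.0])   # violates it, so it is penalized
```

A static penalty is easy to implement but sensitive to the choice of `rho`, which is one reason more informed constraint-handling mechanisms are of interest.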
In such cases, known as dynamic optimization problems (DOPs), the heuristic should track a change as soon as it happens and respond promptly. For example, in job scheduling problems, new jobs arrive or machines break down during operation, creating a need for dynamic job schedules that accommodate the changes over time [10]. In another example, the dynamic portfolio problem, the goal is to obtain an optimal allocation of assets that maximizes profit and minimizes investment risk [11]. There are four major categories of uncertainty that have been dealt with using population-based evolutionary approaches: noise in the fitness function, perturbations in the design variables, approximation in the fitness function, and dynamism in the optimal solutions [12]. While noise and approximation bring uncertainty into the objective function, perturbation introduces uncertainty into the decision space. The source of change can be a change in the objective function, the constraints, environmental parameters, or the problem representation during the optimization process. These changes may affect the height, width, or location of the optimum solution, or a combination of the three [13]. The application of PSO to dynamic optimization problems has been studied by various researchers. There are some issues with the PSO mechanism that need to be addressed. Maintaining outdated memory is one issue in dynamic optimization problems: when a problem changes, a previously good solution stored as a neighborhood or personal best may no longer be good and will mislead the swarm towards false optima. Diversity loss is another problem, in which the population collapses around the best solution. In dynamic optimization, once a change is detected, the partially converged population should quickly rediversify, find the new optimum, and reconverge [10]. A number of adaptations have been applied to PSO in order to overcome these difficulties.
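One generic adaptation of this kind, refreshing outdated memories and re-randomizing part of the swarm after a change is detected, can be sketched as follows. The function name, re-diversification fraction, and bounds are illustrative assumptions, and this is not the cultural migration-and-repulsion mechanism developed in Chapter VII:

```python
import numpy as np

def respond_to_change(x, pbest, obj, rng, rediversify_frac=0.5, bounds=(-5.0, 5.0)):
    """Generic response to a detected environment change:
    1) fix outdated memory by re-evaluating pbest under the new objective,
    2) restore diversity by re-randomizing a fraction of the particles."""
    n, dim = x.shape
    # 1) Keep whichever of (current position, stored pbest) is better now.
    keep_current = np.array([obj(xi) < obj(pi) for xi, pi in zip(x, pbest)])
    pbest = np.where(keep_current[:, None], x, pbest)
    # 2) Re-randomize a fraction of the swarm inside the search bounds.
    k = int(rediversify_frac * n)
    idx = rng.choice(n, size=k, replace=False)
    x = x.copy()
    x[idx] = rng.uniform(bounds[0], bounds[1], (k, dim))
    return x, pbest

rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, (10, 2))
pbest = x.copy()
new_obj = lambda p: float(np.sum((p - 3.0) ** 2))  # the optimum has moved
x2, pbest2 = respond_to_change(x, pbest, new_obj, rng)
```

Step 1 addresses the outdated-memory issue; step 2 addresses diversity loss, at the cost of discarding some converged search effort.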
In general, a good evolutionary heuristic for solving DOPs should reuse as much information as possible from previous iterations to speed up the optimization search. Among the studies on dynamic PSO, none exploits information from all particles to perform rediversification through migration and repulsion. When particles share their information through the migration process, they can quickly rediversify and move efficiently towards the new optimum, reconverging around it. In order to construct the environment required for this rediversification and reconvergence, we need groundwork that helps us utilize this information. The major groundwork is the belief space of the cultural algorithm, which assists the particles, in an organized informational manner, in locating the necessary information. As discussed in psychosocial texts, attitudinal similarity is a leading factor in attraction among individuals, while dissimilarity leads to repulsion in interpersonal relationships [14]; as a result, people often diverge from members of other social groups by selecting different cultural attitudes or behaviors [15]. Indeed, different cultural beliefs lead to repulsion, increase the possibility of divergence in ideas, and in turn open the door to new opportunities. One challenge is finding the appropriate information to rely on for quick rediversification when a change happens in the environment. Using many concepts from the cultural algorithm, such as spatial, temporal, domain, normative, and situational knowledge, the information is organized so that it can be adopted in several steps of the PSO updating mechanism, in addition to the rediversification and repulsion among swarms.
The rediversification needed to deal with a change in the problem dynamics is an important task that can be handled more efficiently when we have access, throughout the search process, to the knowledge maintained by the cultural algorithm as the computational framework. The remainder of this dissertation is organized as follows. In Chapter II, a comprehensive literature survey is performed on related computational intelligence paradigms to prepare for the following chapters. Chapter III first elaborates on a paradigm based upon intra-society and inter-society interaction in order to formulate an algorithm for solving single objective optimization problems. Next, the proposed modifications to this social-based heuristic are introduced. This proposal has two aspects: the first is based upon the idea of adopting information from all individuals in the society (i.e., not only the best performing individuals). The second is based on the fact that different societies have different collective behavior. Politically speaking, the collective behavior of a society has been quantified into a measure called the liberty rate. In a real sociological context, individuals in a democratic society have more flexibility and freedom to choose a better environment to live in. In contrast, individuals in a dictatorship society will suppress politically driven environmental change. While individuals in a liberal society can freely move closer to the leaders, individuals in a less liberal society are restricted in moving near the leaders. Hence, the higher the liberty rate of a society, the more flexibly an individual in that society can move. At the end of the chapter, simulation results for a real-world mechanical problem are used to test the performance of the two proposed modifications. In Chapter IV, a heuristic is proposed to diversify the search space using a novel three-level particle swarm optimization in a multiple-swarm population space.
The PSO mechanism is customized to incorporate three levels of searching. At the lowest level, particles follow the best behaving particle in their own swarm; at the next level, particles follow the best performing particle in the neighboring swarms; and finally, at the highest level, particles track the whole population's best behaving particle. A novel algorithm is proposed to define the neighboring swarms based upon the closeness between representatives of each pair of swarms. After a specified number of iterations, the swarms communicate with each other. Each swarm assembles two lists, a sending list and a replacement list. To prepare these two sets of particles, a diversity measure is considered the primary goal instead of the performance of the particles alone. When particles are approaching a local optimum, several of them will carry similar positional information. This redundant information is replaced by particles from other swarms to diversify the search space. At the end of the chapter, the proposed heuristic is tested on benchmark multimodal optimization problems, demonstrating its efficiency and its potential to solve difficult optimization problems. Chapter V proposes an innovative algorithm adopting the cultural information that exists in the belief space to adjust the flight parameters of multiobjective particle swarm optimization (MOPSO), such as personal acceleration, global acceleration, and momentum. A belief space containing three sections of knowledge has been constructed as the groundwork to perform MOPSO and adapt the parameters. Every particle in MOPSO uses its own adapted momentum and accelerations (local and global) at every iteration to approach the Pareto front. The cultural algorithm provides the required groundwork, enabling us to employ the information stored in the different sections of the belief space efficiently and effectively.
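The three-level flight described above can be sketched as a velocity update with three social attraction terms. This is only an illustrative Python sketch; the coefficient names (c1, c2, c3), their values, and the equal weighting of the three levels are assumptions for demonstration, not the dissertation's exact formulation.

```python
import random

def three_level_velocity(v, x, swarm_best, neighbor_best, global_best,
                         w=0.7, c1=1.4, c2=1.4, c3=1.4):
    """Sketch: one velocity update that tracks (1) the best particle of the
    particle's own swarm, (2) the best particle of the neighboring swarms,
    and (3) the best particle of the whole population."""
    new_v = []
    for d in range(len(x)):
        r1, r2, r3 = random.random(), random.random(), random.random()
        new_v.append(w * v[d]
                     + c1 * r1 * (swarm_best[d] - x[d])
                     + c2 * r2 * (neighbor_best[d] - x[d])
                     + c3 * r3 * (global_best[d] - x[d]))
    return new_v
```

With all three attractors at the origin and a stationary particle at (1, 1), every component of the resulting velocity is non-positive, pulling the particle toward the leaders.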
The proposed cultural MOPSO is then evaluated against state-of-the-art MOPSO models, showing very competitive performance. Finally, a comprehensive sensitivity analysis is performed for the cultural MOPSO with respect to its tuning parameters. In Chapter VI, a novel heuristic is proposed, based upon information extracted from the belief space, to facilitate inter-swarm communication among multiple swarms in particle swarm optimization for solving constrained optimization problems. The cultural computational framework is used to find the leading particles at the personal level, swarm level, and global level. Every particle moves using a three-level flight mechanism; the particles then divide into several swarms, and inter-swarm communication takes place to share the information. The performance of the proposed cultural constrained particle swarm optimization (CPSO) has been compared against ten state-of-the-art constrained optimization paradigms on 24 benchmark test problems. The comprehensive simulation results demonstrate cultural CPSO to be very effective and efficient. Chapter VII proposes an innovative computational framework, based on the cultural algorithm, to solve dynamic optimization problems using knowledge stored in the belief space in order to rediversify and repel the population right after a change takes place in the dynamics of the problem. The algorithm can thus readily compute the repulsion factor for each particle and locate the leading particles at the personal level, swarm level, and global level. Each particle in the proposed cultural-based dynamic PSO flies through a three-level flight mechanism incorporating a repulsion factor. After a change takes place, particles regroup into several swarms, and a diversity-based migration among swarms, along with a repulsive mechanism implemented in the repulsion factor, takes place to increase the diversity as quickly as possible.
Finally, Chapter VIII presents concluding remarks on how swarm, culture, and society help in solving single objective, multiobjective, constrained, and dynamic optimization problems. Suggestions for future work are also proposed in this chapter. CHAPTER II LITERATURE REVIEW In this chapter, we briefly review the related work that will assist in understanding the background concepts required for this dissertation. Population-based computational intelligence heuristics have evolved extensively from naturally evolution-inspired Genetic Algorithms (GA) [16-17] over decades of research. Computational intelligence approaches based upon psychosocial behavior, inspired by either human or animal society, have been the subject of emerging research for a decade. Some concepts borrowed from sociology have shown great improvements in the performance of computational methods. Migration of individuals between concurrently evolving populations has shown its potential to improve the genetic algorithm mechanism [18]. In distributed GA [6], the sociologically inspired concept of communication shows great improvement in the performance of GA. The population is divided into several subpopulations, each evolving an independent GA, while at regular time intervals these GAs communicate with each other. Sociological researchers have constructed models to mimic the behavior of human and animal societies. Heppner and Grenander studied synchronization in groups of small birds like pigeons, developing a flocking heuristic based upon social interactions such as attraction to a roost, attraction to flockmates, and preserving the velocity [19].
Deneubourg and Goss have shown, by introducing a mathematical model [20], that the interaction between individuals and their environment produces different collective patterns in the decision making process, which is naturally observed to be essential in schools of fish, flocks of birds, groups of mammals, and many other social aggregates. Millonas proposed a model of the collective behavior of a large number of locally acting organisms [21], in which organisms move probabilistically between local cells in space, but with different weights. The evolution and flow of the organisms construct the collective behavior of the group. This model could successfully analyze the movements of ants as swarming organisms. Reynolds developed a computer animation of simulated birds based upon local perception of the dynamic environment, the laws of simulated physics ruling their motion, and a set of simulated behaviors [22]. Akhtar et al. propose a socio-behavioral simulated model [23] based upon the concept that the behavior of an individual changes and improves due to social interaction with the society leaders, who are identified using a Pareto ranking scheme. On the other hand, the leaders of all societies improve their own behavior, which leads to a better civilization. Ursem introduced the multinational evolutionary algorithm, based on the relationship between different nations and their political interaction, in order to optimize a profit function [4]. Ray and Liew adopted the inter-society and intra-society relationships among the individuals and the leaders to solve single objective optimization problems [5]. The whole population, clustered into several groups, evolves in two stages: individuals within a group follow the group's best performing individual, and in the whole population, the very best performing individual leads all groups' leaders.
Ursem elaborates the idea of sharing among agents in a social entity as a means of maintaining multiple peaks in multimodal optimization problems [24]. Deneubourg and coauthors proposed a probabilistic model to explain the behavior of ants as social agents [25], which was then followed by Goss et al., showing how sharing information among ants, done by laying a pheromone trail and following it, could help solve the foraging problem in their societies [26]. Inspired by their research, Dorigo et al. introduced a new computational paradigm, the Ant Colony Optimization (ACO) model, which could be adopted to solve engineering optimization problems. ACO's main characteristics were a positive feedback for rapid discovery of good solutions, a distributed computation to avoid premature convergence, and a greedy heuristic to find acceptable solutions in the early stages of the search process [2, 27]. The ACO model has been successfully applied to the symmetric and asymmetric Travelling Salesman Problem (TSP) as a classically difficult combinatorial optimization problem [28-29], the quadratic assignment problem [30], adaptive routing [31], and the job-scheduling problem [2]. Sahin et al. reported applying the ant-based swarm algorithm to forming different patterns through interaction among mobile robots [32]. Kennedy and Eberhart introduced particle swarm optimization (PSO), an algorithm based on imitating the behavior of flocking birds. It mimics the grouping of birds as particles, their random movement, and their regrouping to generate a model that can solve engineering optimization problems [1, 33]. Particles are characterized by their positions and velocities, which are updated using

v_i(t+1) = w v_i(t) + c1 r1 (pbest_i - x_i(t)) + c2 r2 (gbest - x_i(t)),   (2.1)
x_i(t+1) = x_i(t) + v_i(t+1),

where v_i is the velocity of particle i, x_i is its position, pbest_i is the best position particle i has ever experienced, and gbest is the best position among all particles. r1 and r2 are random numbers uniformly generated in the range [0, 1].
c1, c2, and w are the personal, social, and momentum coefficients [34], which are predefined constant values. The movement of the particles has been analyzed to understand the mechanism underlying PSO and its relation to other population-based heuristics [35]. The analysis of the particles' trajectories while moving [36] has led to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies. Studies on the effects of various population structures and topologies on the performance of the particle swarm algorithm have shown that the von Neumann configuration consistently outperforms other topological configurations of the particles' neighborhood [37-39]. Several versions of PSO have been developed. A discrete PSO was introduced [40] operating on discrete binary variables, whose trajectories are defined as changes in the probability that a coordinate will take on a zero or one value. Compared with GA on some multimodal optimization problems, discrete PSO showed competitive results [41-42]. A modified PSO using a constriction factor [43] performed well compared with the original PSO. Particle swarms have also been developed to track and optimize dynamic landscapes [44]. Particle swarm optimization has also been modified to perform permutation optimization problems such as the N-queens problem [45] by defining particles as permutations of a group of unique values and updating velocity based upon the similarity of two particles. The permutations of the particles change at a random rate defined by their velocities. Clustering the population into several swarms has been extensively studied. Stereotyping of the particles has been investigated [46], in which substituting cluster centers for personal best positions improves the performance of PSO, suggesting that PSO is more effective when individuals are attracted toward the centers of their own clusters.
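As an illustration of Equation (2.1), a minimal PSO loop can be written as follows. This is a hedged sketch: the sphere test objective, the parameter values (w = 0.7, c1 = c2 = 1.5), the search bounds, and the function names are assumptions chosen for demonstration, not details of the cited works.

```python
import random

def sphere(x):
    # a simple unimodal test objective (an assumption for illustration)
    return sum(xi * xi for xi in x)

def pso(objective, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO implementing the velocity/position update of Eq. (2.1)."""
    pos = [[random.uniform(-5.0, 5.0) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Eq. (2.1): momentum + personal (cognitive) + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = objective(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

random.seed(1)
best, best_f = pso(sphere)
```

On the sphere function the swarm contracts around the origin, illustrating both the convergence behavior and the diversity loss that the later chapters address.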
Al-Kazemi and Mohan divided the population into two sets at any given time, one set moving toward gbest while the other moves in the opposite direction, by selecting appropriate fixed values for the acceleration coefficients in each set [47]. After some iterations, if gbest does not improve, the particles switch their group. Baskar and Suganthan introduced a concurrent PSO consisting of two swarms that search concurrently for a solution, with frequent passing of information, namely the gbest of the two swarms [48]. After each exchange, the two swarms track the better gbest found. One of the swarms uses regular PSO while the other uses the Fitness-to-Distance Ratio PSO [49]. Their approach improved performance over both methods in solving single objective optimization problems. El-Abd and Kamel added a two-way flow of information between two swarms, improving its performance [50]. In their algorithm, when the best particle is exchanged between two swarms, this particle is used to replace the worst particle in the other swarm. The two swarms perform a fixed number of iterations, and then the best particles inside each swarm replace the worst particles in the other swarm only if they have a better fitness. This makes it possible for both swarms to gain new information from the other swarm's experience. Krohling et al. proposed a coevolutionary PSO in which two populations of PSO are involved [51]. One PSO runs for a specified number of iterations while the other remains static and serves as its environment. At the end of such a period, the values obtained in previous cycles have to be reevaluated according to the new environment before evolution resumes. Particle swarm optimization has been widely applied to multiobjective optimization problems (MOPs), where multiobjective particle swarm optimization (MOPSO) aims to find a diverse set of potential solutions, known as the Pareto front. Several algorithms have extended PSO to handle the diversity issue in MOPs. Parsopoulos et al.
[52] introduced the vector evaluated particle swarm optimizer (VEPSO) to solve multiobjective problems. VEPSO is a multi-swarm variant of PSO in which each swarm is evaluated using only one of the objective functions of the problem under consideration, and the information it possesses about this objective function is communicated to the other swarms through the exchange of their best experience. In VEPSO, the velocity of the particles in each swarm is updated using the best previous position, gbest, of another selected swarm. Selection of this swarm in the migration scheme can be either random or in sequential order. Ray and Liew [53] used Pareto dominance and combined concepts from evolutionary techniques with the particle swarm; this algorithm uses crowding distance to preserve diversity. Hu and Eberhart [54], in their dynamic neighborhood PSO, proposed an algorithm that optimizes only one objective at a time; the algorithm may be sensitive to the order in which the objective functions are optimized. Fieldsend and Singh [55] proposed an approach using an unconstrained elite archive to store the non-dominated individuals found along the search process; the archive interacts with the primary population in order to define local guides. Mostaghim and Teich [56] introduced a sigma method in which the best local guides for each particle are adopted to improve the convergence and diversity of PSO. Li [57] adopted the main ideas from NSGA-II into the PSO algorithm. Coello Coello et al. [58], on the other hand, proposed an algorithm using a repository for the non-dominated particles along with an adaptive grid to select the global best of PSO. The algorithms proposed to solve MOPs using PSO are based upon promoting the non-dominated particles at any given time, not exploiting the information of all particles in the population. Many MOPSO paradigms focus on methods of selecting the global best [53, 55-56, 58-64] or the personal best [65].
Most MOPSOs adopt constant values for the momentum and accelerations; however, some MOPSOs use simple dynamics to change the parameters. Indeed, one of the difficulties of PSO and/or MOPSO is tuning the right values for the momentum and the personal and global accelerations in order to get the best results on different test functions. Hu and Eberhart [54], in their dynamic neighborhood MOPSO model, and Hu et al. [66], in the MOPSO with extended memory, adopted a random number in the range (0.5, 1) as a varying momentum; however, both the personal and global accelerations are constant values. Sierra and Coello Coello [62], in their crowding and dominance based MOPSO, used random values in the range (0.1, 0.5) for the momentum and random values in the range (1.5, 2.0) for the personal and global accelerations. They adopted this scheme to bypass the difficulty of fine-tuning these parameters for each test function. Zhang et al. [64] introduced an intelligent MOPSO based upon the Agent-Environment-Rules model of artificial life. In their model, along with adopting an immunity clonal operator, the momentum was decreased linearly from 0.6 to 0.2, but the personal and global accelerations remained constant. Li [67] proposed an MOPSO based upon a max-min fitness function. In his model, while the personal and global accelerations were set constant, the momentum was gradually decreased from 1.0 to 0.4. Zhang et al. [68] adopted a linearly decreasing momentum from 0.8 to 0.4 for their MOPSO algorithm; however, the personal and global accelerations were kept fixed. Mahfouf et al. [69] introduced an adaptive weighted MOPSO that includes adaptive momentum and acceleration. Through a comparative study with other well-performing algorithms, they demonstrated that the MOPSO search capability is enhanced by adding this adaptation. Ho et al. [63] noted the possible problem of selecting the personal and global accelerations independently and randomly.
They mentioned that, because of their stochastic nature, both may be too large or too small. In the former case, both personal and global experiences are overused, and as a result the particle is driven too far away from the optimum. In the latter case, both personal and global experiences are not fully used, and as a result the convergence speed of the algorithm is reduced. They used socio-biological activities such as hunting to ensure that individuals balance the weight of their own knowledge against the group's collective knowledge. In other words, they observed that the personal and global accelerations are related to each other: when one acceleration is large, the other should be small, and vice versa. Using this concept, they modified the main equation of PSO, Equation (2.1), to include dependent acceleration and momentum [63]. Particle swarm optimization algorithms have also been successfully developed to solve constrained optimization problems. Hu and Eberhart generated particles in PSO until the algorithm could find at least one particle in the feasible region and then adopted it to find the best personal and global particles [70]. Parsopoulos and Vrahatis used a dynamic multistage penalty function to handle the constraints [71]. The penalty function consisted of a weighted sum of all constraint violations, with each constraint having a dynamic exponent and a multistage dynamic coefficient. A comparison of the feasible solution preserving method [70] and the dynamic penalty function [71] demonstrated that the convergence rate of the dynamic penalty function algorithm was faster than that of the feasible solution method [72]. Hu et al. modified the PSO mechanism to solve constrained optimization problems: PSO starts with a group of feasible solutions, and a feasibility function is used to check whether newly explored solutions satisfy all the constraints. Only feasible solutions are kept in the memory [73].
Linearly constrained optimization problems are the basis for a modified version of PSO in which the movement of the particles in the vector space is mathematically guaranteed, by the velocity and position update mechanism, to always find at least a local optimum [74]. In the constrained PSO, particles that satisfy the constraints move to optimize the objective function, while particles that violate the constraints move in order to satisfy them [75]. Krohling and Coelho adopted a Gaussian distribution, instead of a uniform one, for the random weights of the personal and global terms of the PSO mechanism to solve constrained optimization problems formulated as min-max problems. They used two populations of PSO simultaneously: the first PSO focuses on evolving the variable vector while the vector of Lagrangian multipliers is kept frozen, and the second PSO concentrates on evolving the Lagrangian multipliers while the first population is maintained frozen. The use of a normal distribution for the stochastic parameters of PSO seems to provide a good compromise between the probability of having a large number of small-amplitude moves around the current points, i.e., fine-tuning, and a small probability of large-amplitude moves that may cause the particles to move away from the current points and escape from local optima [76]. In master-slave PSO [77], the master swarm optimizes the objective function while the slave swarm focuses on constraint feasibility. Particles in the master swarm only fly toward the current better particles in the feasible region; they do not fly toward current better particles in the infeasible region. The slave swarm is responsible for finding feasible particles by flying through the infeasible region. Particles in the slave swarm only fly toward current better particles in the infeasible region; they do not fly toward current better particles in the feasible region.
The feasible/infeasible leaders from each swarm are then communicated to lead the other swarm. By exchanging flight information between swarms, the algorithm can explore a wider solution space. Zheng et al. adopted an approach that congregates neighboring particles in PSO to form multiple swarms in order to explore isolated, long, and narrow feasible spaces [78]. They also applied a mutation operator with a dynamic mutation rate to drive particles into the feasible region more frequently. For constraint handling, a penalty function is adopted based on how far the infeasible particle is located from the feasible region. Saber et al. [79] introduced a version of PSO for constrained optimization problems in which the velocity update mechanism uses a sufficient number of promising vectors to reduce randomness for better convergence. The velocity coefficient in the position update equation is a dynamic rate depending on the error and the iteration. They also reinitialize idle particles, i.e., particles that have not improved for some iterations. Li et al. [80] proposed a dual PSO with stochastic ranking to handle the constraints. One regular PSO evolves simultaneously with a genetic PSO, a discrete version of PSO including a reproduction operator; the better of the two positions generated by these two PSOs is then selected as the updated position. Flores-Mendoza and Mezura-Montes [81] used the Pareto dominance concept as a constraint handling technique on a bi-objective space, with one objective being the sum of the inequality constraint violations and the second being the sum of the equality constraint violations, in order to promote a better approach to the feasible region. They also adopted decaying parameter control for the constriction factor and global acceleration of PSO to prevent premature convergence and to advance the exploration of the search space. Ting et al.
[82] introduced a hybrid heuristic consisting of PSO and a genetic algorithm to tackle the constrained optimization problem of load flow. They adopted two-point crossover, mutation, and roulette-wheel selection from genetic algorithms, along with regular PSO, to generate the new population. Liu et al. [83] incorporated a discrete genetic PSO with differential evolution (DE) to enhance the search process, in which both the genetic PSO and DE update the position of each individual at every generation; the better position is then selected. Particle swarm optimization algorithms have been effectively developed to solve dynamic optimization problems (DOPs) as well. Carlisle and Dozier [84] adjusted the PSO mechanism to prevent making position/velocity decisions according to outdated memory by periodic resetting: particles periodically replace their pbest vector with their current position, forgetting their past experiences. Eberhart and Shi [44] proposed that for a small perturbation the swarm can be initialized from the old population, while a large perturbation needs reinitialization. In the detection and response paradigm [85], gbest and the second global best are evaluated to detect changes; then the positions of all particles are re-randomized to respond to the change. The charged swarm avoids collisions among particles based upon the force between electric charges, which is inversely proportional to the square of the distance [86]. Atomic PSO [87] and quantum PSO [88] follow the structure of the chemical atom, including a cloud of electrons randomly orbiting with a specific radius around the nucleus. An anti-convergence operator [89] assists interaction among swarms, and an exclusion operation defines a radius around the best solution of each swarm. Close swarms compete with each other in order to promote diversity: the winner, the swarm with the best function value at its swarm attractor, remains, while the loser is reinitialized in the search space [89].
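The inverse-square repulsion underlying the charged swarm can be illustrated as follows. This is a simplified sketch: the charge value q, the distance clamp r_min, and the function name are assumptions for demonstration; [86] defines its own charge model and force bounds.

```python
def repulsion(x_i, x_j, q=1.0, r_min=1e-6):
    """Repulsive acceleration on particle i due to particle j:
    magnitude ~ q^2 / r^2, directed from j toward i."""
    diff = [a - b for a, b in zip(x_i, x_j)]
    r2 = max(sum(d * d for d in diff), r_min)  # clamp to avoid division by zero
    r = r2 ** 0.5
    mag = q * q / r2
    return [mag * d / r for d in diff]
```

Doubling the distance between two particles quarters the repulsion magnitude, which is why charged particles keep orbiting the converged core rather than collapsing onto it.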
Swarm birth and death [90] were proposed by allowing multiple swarms to regulate their number, bringing new swarms into existence or removing redundant swarms. This dynamic swarm count can be an alternative to the anti-convergence and exclusion operators in the PSO mechanism. In partitioned hierarchical PSO for dynamic optimization problems [91], the population is partitioned into tree-form sub-hierarchies for a limited number of iterations after a change is detected. These sub-hierarchies continue to search for the optimum independently, resulting in a wider spread of the search process after the change has occurred. The topmost level of the tree-form hierarchy, which contains the current best particle, does not change, but all lower sub-hierarchies (sub-swarms) reinitialize their positions and velocities and reset their personal best positions. These sub-hierarchies are rejoined after a predefined number of iterations. By adopting a dynamic macromutation operator [92], PSO is able to maintain diversity throughout the search process in order to solve DOPs. Every coordinate of each particle undergoes an independent mutation with a dynamic probability, which takes its highest value when a change occurs in the dynamic landscape and gradually decreases until the next change takes place. The unified PSO, in which the exploration and exploitation terms of the PSO mechanism are combined through a unification factor, has also been adopted for solving DOPs [93]. Zhang et al. [94] proposed a direct relation between the inertia weight of the particle and the change: in their model, the new gbest and pbest of each particle affect the particle's inertia weight whenever a change in gbest or pbest occurs. Pan et al. [95] modified the PSO paradigm using probability-based movement of particles, based upon the concept of the energy change probability in Simulated Annealing (SA).
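The dynamic macromutation idea can be sketched as below. This is only an illustration: the linear decay schedule, the probability bounds, and the names are assumptions, not the schedule specified in [92].

```python
import random

def mutation_probability(iters_since_change, decay_window, p_max=0.9, p_min=0.01):
    """Mutation probability: highest right after a landscape change, then
    decaying until the next change (a linear schedule is assumed here)."""
    frac = min(iters_since_change / decay_window, 1.0)
    return p_max - (p_max - p_min) * frac

def macromutate(position, bounds, p):
    """Each coordinate is independently re-randomized with probability p."""
    lo, hi = bounds
    return [random.uniform(lo, hi) if random.random() < p else xd
            for xd in position]
```

Right after a change the swarm mutates aggressively (p near p_max) to rediversify; as the environment stabilizes, p decays toward p_min and the search returns to exploitation.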
A particle moves to the next position computed by the traditional PSO heuristic only with a specific probability that depends exponentially on the difference between the objective values of the current and next iterations. In species-based PSO [96], the population is divided into several swarms, each surrounding a dominating particle called a seed, identified from the objective function values of the entire population. A new seed must not fall within the predefined radius of any previously found seed, in order to promote diversity. The seeds are then selected as the neighborhood bests for the different swarms. In multi-strategy ensemble PSO [97], particles are divided into two parts: part I uses a Gaussian local search to quickly seek the global optimum in the current environment, while part II uses differential mutation to explore the search space. The positions of particles in part II do not follow the traditional PSO mechanism; instead, each particle in part II is determined by a particle in part I through a mutation strategy. Liu et al. [98] introduced a modified PSO to solve DOPs in which many compound particles exist. Each compound particle includes three single particles, equilaterally distanced from each other in a triangular shape. A special reflection scheme is proposed to explore the search space more comprehensively, in which the position of the worst particle among the three in the compound is replaced with its reflection. In each compound particle, after reflection is performed, a representative among the three particles is probabilistically chosen based upon the objective function values and the distances from the other two member particles. The representative particles then participate in the PSO update mechanism. The two non-representative particles also move the same distance/direction as the representative particle in order to preserve the valuable information.
Recently, a computational framework known as the cultural algorithm (CA) has been developed by Reynolds, based upon a dual inheritance system where information exists at two different levels: the population level and the belief level [3]. Culture is defined as a store of information which does not depend on the individuals who generated it and can potentially be accessed by all society members [3]. CA is an adaptive evolutionary computation method driven by cultural evolution and learning in agent-based societies [3, 99]. CA consists of evolving agents whose experiences are gathered into a belief space consisting of various forms of symbolic knowledge. CA has shown its ability to solve different types of problems [3, 99-107], among which CAEP (cultural algorithm combined with evolutionary programming) has shown successful results in solving MOPs [107]. Researchers have identified five basic sections of knowledge stored in the belief space based upon the literature in cognitive science and semiotics: situational knowledge, normative knowledge, topographical knowledge [105], domain knowledge, and history knowledge [106]. Situational knowledge is a set of exemplary individuals useful for the experiences of all individuals; it guides all individuals to move toward the exemplar individuals. Normative knowledge consists of a set of promising ranges; it provides standard guiding principles within which individual adjustments can be made, and individuals jump into the good ranges using it. Topographical or spatial knowledge keeps track of the best individuals found so far in the promising region; it leads all individuals toward the best performing cells in the search space [105]. Domain knowledge adopts information about the problem domain, such as the landscape contour and its related parameters, to guide the search process.
Historical or temporal knowledge keeps track of the history of the search process and records key events in the search, which might be either a considerable move in the search space or the discovery of a landscape change. Individuals use the history knowledge for guidance in selecting a move direction. Domain knowledge and history knowledge are useful in dynamic landscape problems [106]. Knowledge can swarm between different sections of the belief space [108-110], which in turn affects the swarming of the population. Becerra and Coello Coello [104] proposed cultured differential evolution for constrained optimization. The population space in their study was differential evolution (DE), while the belief space consisted of situational, topographical, normative, and history knowledge. The variation operator of DE was influenced by the knowledge sources of the belief space. Yuan et al. [111] introduced a chaotic hybrid cultural algorithm for constrained optimization in which the population space is DE and the belief space includes normative and situational knowledge. They incorporated a logistic map, using its chaotic sequence for better convergence of DE. Tang and Li [112] proposed a cultured genetic algorithm for constrained optimization problems by introducing a triple-space cultural algorithm. The triple space includes the belief space and the population space together with an anticulture population consisting of individuals that disobey the guidance of the belief space and move away from the individuals it guides. The disobeying behavior, enhanced by some mutation operations, makes the algorithm faster and less prone to premature convergence by rewarding the most successful individuals and punishing the unsuccessful population.
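As a rough illustration only (the container structure and field names below are mine, not taken from the cited works), the five knowledge sources of a CA belief space might be organized as:

```python
from dataclasses import dataclass, field

# Hypothetical container for the five knowledge sources of a CA belief space.
@dataclass
class BeliefSpace:
    situational: list = field(default_factory=list)   # exemplar individuals
    normative: list = field(default_factory=list)     # promising [low, high] ranges per dimension
    topographical: dict = field(default_factory=dict) # best individual found per search-space cell
    domain: dict = field(default_factory=dict)        # problem-specific landscape parameters
    history: list = field(default_factory=list)       # key events (big moves, landscape changes)

    def accept(self, exemplar, keep=5):
        """Update situational knowledge with a new exemplar, keeping the best few."""
        self.situational.append(exemplar)
        self.situational.sort(key=lambda ind: ind["fitness"])  # minimization assumed
        del self.situational[keep:]
```

In an actual CA, each knowledge source would also carry an influence function that biases the variation operators of the population space.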
CHAPTER III

SOCIETY AND CIVILIZATION FOR OPTIMIZATION

3.1 Introduction

Computational intelligence approaches based upon the psychosocial behavior of either human or animal societies have been the subject of emerging research for less than a decade. Some research in this area has focused on optimization in the spirit of particle swarm intelligence [1] or the ant colony system [2]. Particle swarm optimization imitates the collaborative behavior of birds flying together and exchanging information, while the ant colony system is based on the fact that individual ants interact with each other through their pheromone trails. Additionally, Ursem [4] introduced another idea based on the relationships between nations and the interactions between countries aimed at optimizing a profit function. More recently, in an attempt to mimic the interactional behavior between societies and within a civilization, the social algorithm has been proposed [5, 113]. The social algorithm adopts the intersociety and intrasociety relationships among individuals and leaders to solve single objective optimization problems. The whole population of individuals, called the civilization, is clustered into different societies based on the Euclidean closeness of the individuals. The performance of the individuals decides which of them are the leaders of each society. The remaining individuals follow the leaders in a way that improves themselves, which leads to migration (intrasociety interaction). From the civilization viewpoint, the leaders of the societies improve themselves by migrating toward the best performing leaders, the civilization leaders (intersociety interaction) [114-115]. Ray and Liew have successfully demonstrated the performance of their model on single objective optimization problems [5]. Their model appears to be a competitive alternative paradigm to particle swarm heuristics.
Their model mostly throws away the information of the nonleader individuals and replaces it with that of the corresponding leaders. The proposal in this chapter involves two aspects. First, each individual's talent is computed from its own information, equipping each individual with a different ability to invoke intrasociety or intersociety interaction. Second, different societies may have different measures of collective behavior, called the liberty rate. In real sociological relationships, a democratic society has more flexibility and freedom to choose a better environment to live in, whereas a dictatorship discourages individuals from changing their environment to reach the leaders. While individuals in a liberal society can migrate easily to get closer to the leaders, individuals in a less liberal society have difficulty moving near them. Hence, the higher the liberty rate of a society, the more freely its individuals can move.

The rest of the chapter is organized as follows. Section 3.2 elaborates the basics of the social algorithm, including its motivation, how to build the societies in a civilization, how to identify the leaders of these societies, and how to migrate intrasocially or intersocially; it also proposes a novel modification based on the idea of using more information from the middle-class individuals. In Section 3.3 the proposed algorithm is applied to single objective optimization problems to test its efficiency. Section 3.4 presents concluding remarks on applying the social algorithm to solve optimization problems.

3.2 Social-Based Algorithm for Optimization

In this section, the details of the social-based algorithm for solving single objective optimization problems are reviewed, and the proposed methods for improving this heuristic are then introduced.
The general single objective optimization problem has the following form:

minimize f(x), (3.1)

subject to g_j(x) ≥ 0, j = 1, …, q, (3.2)

h_j(x) = 0, j = q + 1, …, r, (3.3)

where q is the number of inequality constraints, r is the total number of inequality and equality constraints, and x is the n-dimensional decision space variable. Because of the limited accuracy of computer simulations and of the variables considered, it is much easier to check the validity of an inequality than that of an equality. As has been suggested by research on constraint handling in population-based heuristics, each equality constraint h_j(x) = 0 is transformed into a pair of simultaneous inequalities, h_j(x) + ε ≥ 0 and ε − h_j(x) ≥ 0, where ε is an infinitesimal positive constant representing the accuracy of the algorithm. In other words, given a chosen ε, the algorithm should proceed so that |h_j(x)| ≤ ε is satisfied, which substitutes for h_j(x) = 0 for the sake of accuracy. Each equality constraint therefore becomes two inequality constraints, giving a total of s = 2r − q inequality constraints:

g_j(x) ≥ 0, j = 1, …, q, (3.4)

h_j(x) + ε ≥ 0 and ε − h_j(x) ≥ 0, j = q + 1, …, r. (3.5)

Now assume there are N individuals in the population as potential solutions for the constrained optimization problem. A constraint satisfaction factor, c_ij, is defined to quantify how much the jth constraint (j = 1, …, s) is dissatisfied by the ith individual, x_i (i = 1, …, N), and is formulated as:

c_ij = min(0, g_j(x_i)), i = 1, …, N, j = 1, …, s, (3.6)

where g_j now ranges over all s inequality constraints after the transformation above. Based on this definition, when a constraint is satisfied by an individual, the assigned value of the constraint violation factor c_ij is zero. If the jth constraint is not met (g_j(x_i) < 0) by the ith individual, a negative-valued c_ij is assigned to show how much the constraint is violated. A ranking scheme is then performed for each constraint to assign a rank of one to the individuals who satisfy that constraint the most.
Therefore, for the jth constraint, individuals with the highest c_ij (closest to zero, considering the sign) are assigned a rank of one, individuals with the second highest c_ij are assigned a rank of two, and so forth. After performing this nondominated ranking scheme for all constraints, an N × s rank matrix is formed in which a rank of one means that those individuals are nondominated for a specific constraint [5]. It can be seen that if there are one or more feasible individuals for a specific constraint, they will be the rank-one individuals. Figure 3.1 demonstrates the main flowchart of the social-based algorithm. The civilization is formed of N individuals initialized as uniform random numbers. Each individual in the population of the civilization is then evaluated using the objective function value. The individuals are categorized into societies, according to their closeness to each other, using the unsupervised classification algorithm proposed by Ray and Liew [5]. Notice that the number of societies may vary over time. The leaders of each society are then identified using the leader identification scheme discussed in Figure 3.2. Next, the individuals within the societies move towards the nearest leader in their society using the migration scheme discussed in Figure 3.3.

Figure 3.1 Flowchart for social-based single objective optimization

At the global level, the leaders of the civilization are then identified through the same leader identification scheme, and the leaders of the societies move towards the global leaders of the civilization using the same migration scheme. This process continues until the termination criterion is met, i.e., the current iteration reaches a predefined maximum number of iterations. In Figure 3.2, a flowchart explains the leader identification scheme.
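The constraint satisfaction factors and the per-constraint ranking can be sketched as follows; this is a minimal illustration (function names and the NumPy representation are mine), assuming constraints are written in the g(x) ≥ 0 form so that zero means satisfied:

```python
import numpy as np

def violation_matrix(pop, constraints):
    """Constraint satisfaction factors c[i, j] = min(0, g_j(x_i)) (Eq. 3.6).

    `constraints` is a list of functions g_j in the g_j(x) >= 0 form;
    zero means satisfied, a negative value measures the violation.
    """
    return np.array([[min(0.0, g(x)) for g in constraints] for x in pop])

def rank_matrix(c):
    """Rank individuals per constraint: rank 1 for the highest (least
    negative) c[i, j], rank 2 for the next level, and so on; ties share a rank."""
    n, s = c.shape
    ranks = np.empty_like(c, dtype=int)
    for j in range(s):
        # distinct violation levels, best (closest to zero) first
        levels = sorted(set(c[:, j]), reverse=True)
        level_rank = {v: r + 1 for r, v in enumerate(levels)}
        ranks[:, j] = [level_rank[v] for v in c[:, j]]
    return ranks
```

Any feasible individual gets c = 0 for that constraint and hence rank one, matching the property noted above.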
Figure 3.2 Flowchart for identifying leaders

As shown in this flowchart, a set of individuals is given in order to find its leaders. The leaders should be the best behaving individuals considering both objective values and constraints. The objective function value and the constraint violation factors for each individual are computed using Equations (3.1) and (3.6), respectively. Through the nondominated ranking scheme, the rank matrix is constructed. Leaders are identified among rank-one individuals whose objective function values are less than the average objective function value of all individuals in the given set. This means that if there are any feasible individuals, the best ones are selected according to their objective function values. There might be a situation in which no rank-one individual has an objective value less than the average of all; in that case, all rank-one individuals are simply assigned as leaders. The leader identification scheme is used at both the society and the civilization level. Figure 3.3 shows the details of the migration scheme used at both the intrasociety and the intersociety level. Assume that an n-dimensional individual x is given along with a set of leaders, {L_1, …, L_K}. Before applying the migration scheme, each dimension of the individual must be normalized as:

x̄_d = (x_d − x_d^min) / (x_d^max − x_d^min), d = 1, …, n, (3.7)

where x_d is the dth dimension of the n-dimensional decision variable (individual), with lower limit x_d^min and upper limit x_d^max, respectively. The normalized individual, x̄, lies in the range [0, 1]. Next, the Euclidean distance between the normalized individual, x̄, and the kth member of the leader set, L̄_k, is computed as:

dist_k = ‖x̄ − L̄_k‖. (3.8)

The closest leader to the individual is then selected as the one with the lowest distance:
d_min = min_k dist_k. (3.9)

Then each dimension of the normalized individual is migrated, using the lowest distance d_min computed above, through a normally distributed random value:

x̄'_d = L̄_d + N(0, σ) · d_min, d = 1, …, n, (3.10)

where L̄ is the closest leader, N(0, σ) is a random number drawn from a normal distribution with mean zero and a fixed standard deviation σ, and x̄'_d is the new location of the dth dimension of the individual. As the final step, the migrated position is rescaled back to the original scale:

x'_d = x_d^min + x̄'_d (x_d^max − x_d^min), d = 1, …, n. (3.11)

Figure 3.3 Flowchart on how to migrate individuals

It should be noted that the migration scheme explained here is adopted at both the intrasociety level, migrating individuals towards their society leaders, and the intersociety level, migrating society leaders towards the civilization leaders. The performance of individuals therefore improves within each society by migration towards the closest society leader and, from a global view, the performance of society leaders also improves by migration towards the best behaving leader in the whole civilization.

3.2.1 Proposed Modifications

In this subsection, two modifications are proposed. The social-based algorithm has shown its promise on some single objective optimization problems [5]. However, this model mostly throws away the information of the nonleader individuals and replaces it with that of the corresponding leaders, as shown in Equation (3.10). In real life it happens differently: individuals keep their own characteristics while imitating good examples. In a real society, average individuals do not completely discard their past behavior but change it continuously, keeping the history of their behavior. Keeping the history of the individuals in the local search (intrasociety interaction) helps the individuals retain information that might be useful later. In the global search, the algorithm remains leader-centric, preventing chaotic divergence.
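The migration scheme (normalize, find the nearest leader, take a Gaussian step, rescale) can be sketched as below. This is a minimal sketch under my reading of the text: the leader-plus-Gaussian-step form of Equation (3.10), the clipping to bounds, and all names are mine:

```python
import numpy as np

rng = np.random.default_rng(0)

def migrate(x, leaders, lo, hi, sigma=0.1):
    """Move individual x toward its nearest leader (sketch of Eqs. 3.7-3.11).

    x: (n,) individual; leaders: (K, n) leader positions;
    lo, hi: (n,) variable limits; sigma: std. dev. of the Gaussian step.
    """
    xb = (x - lo) / (hi - lo)                    # Eq. 3.7: normalize to [0, 1]
    lb = (leaders - lo) / (hi - lo)
    dists = np.linalg.norm(lb - xb, axis=1)      # Eq. 3.8: distance to each leader
    k = int(np.argmin(dists))                    # Eq. 3.9: closest leader
    d_min = dists[k]
    # Eq. 3.10: replace the individual by the closest leader plus a Gaussian
    # step scaled by d_min (the nonleader's own position is discarded)
    xb_new = lb[k] + rng.normal(0.0, sigma, size=x.shape) * d_min
    xb_new = np.clip(xb_new, 0.0, 1.0)           # keep within normalized bounds (my addition)
    return lo + xb_new * (hi - lo)               # Eq. 3.11: rescale
```

The same function would serve both levels: called with society leaders for intrasociety migration, or with civilization leaders for intersociety migration.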
Therefore, the exploitation in the intrasociety migration is based on the importance of the individual's previous location, while in the intersociety migration the rule remains leader-centric. Figure 3.4 shows the pseudocode for individuality importance in intrasociety migration. An individual and a set of leaders are given. The individual is normalized using Equation (3.7), the Euclidean distance between the normalized individual and each member of the leader set is computed using Equation (3.8), and the lowest distance is computed as in Equation (3.9). The normalized individual is then migrated considering individuality importance as:

x̄'_d = x̄_d + N(0, σ) · d_min, d = 1, …, n. (3.12)

Figure 3.4 Pseudocode for individuality importance in intrasociety migration:
An individual and a set of leaders are given.
Normalize the individual in each of its dimensions using its maximum and minimum limits, using Equation (3.7).
Compute the Euclidean distance between the normalized individual and each member of the leader set.
Migrate the normalized individual considering individuality importance, using Equation (3.12).
Rescale each dimension of the migrated individual back into its original scale, using Equation (3.11).

Finally, each dimension of the migrated individual is rescaled back into its original scale using Equation (3.11). In the second modification scheme, the liberty rate is proposed. A democratic society has more flexibility and freedom to choose a better situation to live in; in contrast, a dictatorship restricts changing the situation and reaching the leaders. While individuals in a liberal society can migrate easily to get closer to the leaders, individuals in a less liberal society experience difficulty moving near them. Giving different societies different preferences for approaching the leaders therefore improves convergence to the optimized solution. Different societies have different measures of collective behavior, called the liberty rate.
The higher the liberty rate of a society, the more flexibly individuals in that society can move. The liberty rate of the kth society is proposed as the relative ratio of the average objective value of the society to the average objective value of the civilization:

LR_k = η · Φ_k / Φ, (3.13)

where η is a predefined normalization constant and Φ_k is the measure of the collective behavior of the kth society, defined as the average of the objective values of the individuals belonging to the kth society, S_k:

Φ_k = (1 / N_k) Σ_{i ∈ S_k} f(x_i), (3.14)

where N_k is the number of individuals in the kth society. Φ, the measure of the civilization's collective behavior, is defined as:

Φ = (1 / N) Σ_{i=1}^{N} f(x_i). (3.15)

Then, to migrate each individual in the kth society, a liberty-based migration is performed (with d_min and σ as in Equation (3.10)):

x̄'_d = x̄_d + LR_k · N(0, σ) · d_min, d = 1, …, n. (3.16)

3.3 Simulation Results

Spring design is a mechanical design problem [116] to minimize the weight of a tension/compression spring, as shown in Figure 3.5. There are nonlinear inequality constraints on minimum deflection, shear stress, and surge frequency, and limits on the outside diameter and on the design variables. The design variables are the mean coil diameter, x_1, the wire diameter, x_2, and the number of active coils, x_3, along with four inequality constraints. The mathematical formulation of the problem is:

minimize f(x) = (x_3 + 2) x_1 x_2^2, (3.17)

subject to (3.18):
g_1(x) = x_1^3 x_3 / (71785 x_2^4) − 1 ≥ 0,
g_2(x) = 1 − (4 x_1^2 − x_1 x_2) / (12566 (x_1 x_2^3 − x_2^4)) − 1 / (5108 x_2^2) ≥ 0,
g_3(x) = 140.45 x_2 / (x_1^2 x_3) − 1 ≥ 0,
g_4(x) = 1 − (x_1 + x_2) / 1.5 ≥ 0,

with the following limits on the variables: 0.25 ≤ x_1 ≤ 1.3, 0.05 ≤ x_2 ≤ 2.0, 2 ≤ x_3 ≤ 15.

Figure 3.5 Schema for the spring design problem [117]

Figure 3.6 demonstrates the simulation results for the spring design problem using the proposed modifications compared with the original algorithm. The population size is 30, i.e., 10 times the number of decision variables, as suggested in [5]. The results shown are over 50 independent runs for all three algorithms. The effect of defining the liberty rate can be seen by comparing the two modifications.
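The spring design objective and constraints can be evaluated as below; this is a sketch of the standard benchmark formulation written in the g(x) ≥ 0 convention, with my own function names, not code from this study:

```python
def spring_objective(x):
    """Spring weight f = (x3 + 2) * x1 * x2^2 (Eq. 3.17), with x1 the mean coil
    diameter, x2 the wire diameter, and x3 the number of active coils."""
    x1, x2, x3 = x
    return (x3 + 2.0) * x1 * x2 ** 2

def spring_constraints(x):
    """Four inequality constraints in g(x) >= 0 form (standard benchmark)."""
    x1, x2, x3 = x
    return [
        x1 ** 3 * x3 / (71785.0 * x2 ** 4) - 1.0,                        # minimum deflection
        1.0 - (4 * x1 ** 2 - x1 * x2) / (12566.0 * (x1 * x2 ** 3 - x2 ** 4))
            - 1.0 / (5108.0 * x2 ** 2),                                   # shear stress
        140.45 * x2 / (x1 ** 2 * x3) - 1.0,                               # surge frequency
        1.0 - (x1 + x2) / 1.5,                                            # outside diameter
    ]
```

With these two functions, each candidate x yields one objective value and four satisfaction factors c_j = min(0, g_j(x)) per Equation (3.6).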
The convergence time and the best objective value when both modifications are applied are improved compared to the original method. The comparison results are also shown in Table 3.1. It is noticeable that, although the two modifications give better results for the best objective function, the algorithm is not robust: the results for the worst and mean objective values are not improved. For the original version, the standard deviation σ of the algorithm, discussed in Equation (3.10), has been set to 1.

Figure 3.6 Comparison of the best objective function for the two proposed modifications: original model (blue), modified by individuality importance (green), and modified by liberty rate and individuality importance (red)

Table 3.1 Comparison of results for the spring design problem

Algorithm                                  | Best (kg·m) | Mean (kg·m) | Worst (kg·m)
Original method                            | 0.0464      | 0.0464      | 0.0464
Individuality importance                   | 0.0379      | 6.2388      | 32.2617
Liberty rate and individuality importance  | 0.0331      | 6.1516      | 31.6833

3.4 Discussions

In this chapter, two modifications have been suggested for the social-based algorithm. These modifications have been tested on a real-world benchmark problem, the spring design problem. The simulation results demonstrate that the two modifications improve the performance of the original algorithm, resulting in better best objective values. The first modified algorithm, the individuality-based social algorithm, outperforms the original social algorithm, while the liberty/individuality-based social algorithm outperforms both the original and the individuality-based social algorithm in finding the best objective values. Both modified algorithms have a better migration policy than the original social algorithm.
The original algorithm is basically biased around the best performing individual, which may result in settling into a local optimum, while both modified versions also draw on each individual's previous position. The modified versions of the social algorithm rest upon two hypotheses: one is that information from all individuals should be collected and exploited in the migration toward the best leader; the other is that the rate of convergence of different societies is not necessarily the same, but depends on the collective behavior of the individuals in a society relative to the civilization. Indeed, the result is improved because more weight is given to diversity in the search over the individual space. If all nonleader individuals are simply thrown away, a great deal of information that might be critical to the search process is lost; however, using information from the nonleader individuals may add to the convergence time. The best objective values obtained by both modified versions are better than those of the original social algorithm; however, the worst and mean values are not, since diversity is maintained while evolving. This also implies room for further improvements in future research.

CHAPTER IV

DIVERSITY-BASED INFORMATION EXCHANGE FOR PARTICLE SWARM OPTIMIZATION

4.1 Introduction

Particle swarm optimization (PSO) is based on changing the positions and velocities of particles in a manner that optimizes a goal function. PSO has demonstrated promising performance on many problems, yet its fast convergence often leads to premature convergence to local optima. The tradeoff between fast convergence and being trapped in local optima becomes even more critical for multimodal functions having many local optima very close to each other. In order to escape from local optima and avoid premature convergence, the search for the global optimum should be diverse.
Many researchers have improved the performance of PSO by enhancing its ability to conduct a more diverse search. Specifically, some have proposed to use multiple swarms, each running an independent PSO, and then exchange information among them. Exchanging information among clusters has also been adopted as an important design element in several computational methods. The distributed genetic algorithm [6] employs the GA mechanism to evolve several subpopulations in parallel. During frequent migrations among subpopulations, some individuals from each subpopulation are sent to another subpopulation to replace other individuals based upon a replacement policy. In another algorithm, known as the society and civilization model [5], individuals from multiple societies cooperate with each other in order to enhance their performance. Migration in this model occurs at two levels: first, individuals inside each society migrate toward the society leaders (intrasociety level), and second, society leaders migrate toward the civilization leaders (intersociety level). In this study, a method borrowed from the distributed genetic algorithm is employed to exchange information among multiple swarms in PSO. At regular intervals, each swarm prepares two sets of particles: one set of particles to be sent to another swarm, and one set of particles to be replaced by individuals from other swarms. To prepare these two sets of particles, a diversity measure is considered as the primary goal, instead of only the performance of the particles. When particles are approaching a local optimum, several of them will have similar positional information; this similar, redundant information is replaced by particles from other swarms. The algorithm also proposes a new paradigm to find each swarm's neighbors: the neighborhood between swarms is defined using the Hamming distance between the representatives of each pair of swarms.
The particles' movement in the space is based on a variation of PSO with three basic terms, leading each particle toward the best particle in its own swarm, in its swarm's neighborhood, and in the whole population.

The structure of this chapter is organized as follows. Section 4.2 reviews the related studies in this field. In Section 4.3, the proposed algorithm and its main ideas are explained in detail. In Section 4.4, the proposed algorithm is simulated on a set of hard benchmark problems. Section 4.5 summarizes the benefits of the proposed paradigm for PSO and outlines future work on multiobjective optimization problems, motivated by the nature of the proposed diversity promotion.

4.2 Review of Related Work

Kennedy and Eberhart [1] introduced particle swarm optimization, an algorithm based on imitating the behavior of flocking birds. It mimics the grouping of birds as particles, their random movement, and their regrouping to generate a model that can solve engineering optimization problems. Particles are characterized by their positions and velocities, which are updated using:

v_i ← w v_i + c_1 r_1 (pbest_i − x_i) + c_2 r_2 (gbest − x_i), (4.1)
x_i ← x_i + v_i,

where v_i is the velocity of the particle, x_i is the position of the particle, pbest_i is the best position ever experienced by each particle, and gbest is the best position ever attained among all particles. r_1 and r_2 are random numbers uniformly generated in the range [0, 1]; c_1, c_2, and w are the predefined personal, social, and momentum coefficients, respectively. The main problem of PSO is its fast convergence to local optima. Later, Kennedy [46] introduced stereotyping of the particles, in which substituting cluster centers for pbest showed appreciable improvement of PSO performance. His research suggested that PSO is more effective when individuals are attracted toward the centers of their own clusters. In multimodal problems, the search effort needs to be diverse in order to find the global optimum among a set of many local optima.
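The standard update in Equation (4.1) can be sketched as follows for a whole swarm at once (the coefficient values are illustrative defaults, not taken from this study):

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO update (Eq. 4.1) applied to all particles.

    x, v, pbest: (N, n) arrays; gbest: (n,) array, broadcast over the swarm.
    Returns the updated positions and velocities.
    """
    r1 = rng.random(x.shape)  # fresh uniform [0, 1] draws each step
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

Drawing r_1 and r_2 per dimension (rather than one scalar per particle) is a common convention; the text does not specify which variant is used here.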
The fast converging behavior of PSO makes this issue critical for multimodal problems. To achieve a more diverse search, Al-Kazemi and Mohan [47] divided the population into two sets at any given time, one set moving toward the gbest while the other moves in the opposite direction, by selecting appropriate fixed coefficient values for each set. After some iterations, if the gbest is not improved, the particles switch their groups. Baskar and Suganthan [48], in their concurrent PSO, used two swarms searching concurrently for a solution, with frequent passing of information, namely the gbest of the two cooperating swarms. After each exchange, the two swarms track the better gbest found. One of the swarms used regular PSO, and the other the fitness-to-distance-ratio PSO [49]. Their approach improved the performance over both methods in solving single objective optimization problems. El-Abd and Kamel [50] further improved this algorithm by adding a two-way flow of information between the two swarms. In their algorithm, when the best particle is exchanged between two swarms, it replaces the worst particle in the other swarm. The two swarms perform a fixed number of iterations, and then the best particles inside each swarm replace the worst particles in the other swarm only if they have better fitness. This makes it possible for both swarms to exchange new information from the other swarm's experience. Krohling et al. [51] proposed a coevolutionary PSO in which two populations of PSO are involved: one PSO runs for a specified number of iterations while the other remains static and serves as its environment. At the end of each such period, values obtained in previous cycles are reevaluated according to the new environment before evolution restarts. Although these algorithms use information exchange among swarms, none of them adopts a specific paradigm based on promoting diversity when selecting and sending particles from one swarm to another.
On the other hand, one of the main concerns in multiobjective optimization problems (MOPs) is also the search for a diverse set of potential solutions, known as the Pareto front. Several algorithms have extended PSO to handle the diversity issue in MOPs. Parsopoulos et al. [52] introduced the vector evaluated particle swarm optimizer (VEPSO) to solve multiobjective problems. VEPSO is a multiswarm variant of PSO in which each swarm is evaluated using only one of the objective functions of the problem under consideration, and the information it possesses about this objective function is communicated to the other swarms through the exchange of their best experience. In VEPSO, the velocity of the particles in each swarm is updated using the best previous position of another selected swarm; this swarm can be selected in the migration scheme either randomly or in sequential order. Ray and Liew [53] used Pareto dominance and combined concepts of evolutionary techniques with the particle swarm; their algorithm uses crowding distance to preserve diversity. Hu and Eberhart [54], in their dynamic neighborhood PSO, proposed an algorithm that optimizes only one objective at a time; the algorithm may be sensitive to the order in which the objective functions are optimized. Fieldsend and Singh [55] proposed an approach using an unconstrained elite archive to store the nondominated individuals found along the search process; the archive interacts with the primary population in order to define local guides. Mostaghim and Teich [56, 60] introduced a sigma method in which the best local guides for each particle are adopted to improve the convergence and diversity of PSO. Li [57] adapted the main ideas of NSGA-II to the PSO algorithm. Coello Coello et al. [58], on the other hand, proposed an algorithm using a repository of nondominated particles along with an adaptive grid to select the global best of PSO.
The algorithms proposed to solve MOPs using PSO are based upon promoting the nondominated particles at any given time, without exploiting the information of all particles in the population. Information exchange through migration, used to increase the search ability of an algorithm, has appeared in other innovative paradigms as well. Ray and Liew [5] introduced their society and civilization model for optimization as a simulation of social behavior. Individuals in a society interact with each other in order to improve; such improvement is achieved by acquiring information from the better performing individuals, or leaders, of that society. This intrasociety interaction improves the individuals' performance, but cannot improve the leaders' performance. The leaders instead communicate externally with the leaders of other societies to improve. This intersociety communication leads to migration of leaders to developed societies, which in turn moves the overall poor-performing societies toward better performing ones. First, the population is clustered into several mutually exclusive societies based on distance in the parametric space. Then the objective functions, along with any constraints, define a ranking system to choose the leaders in each cluster, and migration takes place at the two levels. The society and civilization model showed competitive results on single objective constrained optimization problems with respect to GAs. The concept of having multiple sets was originally introduced and used in the distributed genetic algorithm (DGA) [6]. In DGA, the population is divided into several subpopulations, each running its own GA independently. At regular time intervals, interprocessor communication takes place: during this migration stage, a proportion of each subpopulation is selected and sent to another subpopulation, and the migrant individuals replace others according to a replacement policy.
In another kind of distributed evolutionary algorithm, Ursem [4] adopted a spatially separated model in his multinational evolutionary algorithm. He applied a fitness-topology function, instead of clustering, to decide on the relationship between a point and a cluster. The algorithm was designed to find all peaks of a multimodal function in unconstrained optimization problems. In DGA, there are different policies for the selection of migrants and the replacement of individuals within each subpopulation. Cantú-Paz [118] showed that sending the fittest individuals of the population and replacing individuals with low fitness produces the best results. Denzinger and Kidney [18] used a diversity measure to select individuals for migration. Power et al. [119] used a selection method based on a diverse set of individuals rather than the highly fit ones. The reason is to avoid sending similar information to another subpopulation. The majority of individuals can sometimes be located very close to each other, especially in the last steps of convergence; by selecting the fittest individuals, similar individuals from a small area would be sent to the next subpopulation. If the algorithm is likely to be trapped in a local optimum, this similar information is useless for diversifying the search and escaping the local optimum. Instead, the idea is to choose a diversified list of individuals to send to the other GAs. The sending list is filled with the following individuals, in this order: (1) an average individual of the subpopulation, as its representative; (2) m individuals close to this representative whose fitness is better than the representative's; (3) m individuals close to this representative whose fitness is poorer than the representative's; and finally (4) the fittest individual in the subpopulation.
There is also a replacement list, filled in the following order: (1) individuals having similar genetic information, in order of fitness, with the least fit ones replaced before the better fit ones, and (2) individuals with the lowest fitness values. Their method was applied to single-objective multimodal optimization and showed significantly better results when compared to a standard DGA with a send-best-replace-worst strategy.

4.3 Diversity-based Information Exchange among Swarms in PSO

The underlying principle of the proposed algorithm is the idea of exploiting the information of all particles in the population. The population is divided into P swarms, and each swarm performs a PSO paradigm. After a predefined number of iterations, the swarms exchange information based on a diversified list of particles. Each swarm prepares a list of sending particles to be sent to the next swarm, and also prepares a list of replacement particles to be replaced by particles coming from other swarms. Each swarm chooses the leaders of the next generation from the updated swarm after the exchange of particles. To select the list of particles to send, the algorithm uses a strategy based on the locations of the particles in the swarm together with their objective values, instead of their objective values alone. The list is prepared in the following order. Priority S1: The highest priority in the selection of particles is given to the particle that has the least average Hamming distance from the others. This particle is considered the representative of the swarm. The average Hamming distance between each pair of particles in the swarm is calculated, and the least among them is found. Priority S2: The m particles closest to the representative whose objective value is greater than that of the representative are chosen; m is a value that depends on the rate of information exchange among swarms, r (a predefined value between 0 and 1), and on the population of each swarm, as given in Equation (4.2).
Priority S3: The m particles closest to the representative whose objective value is less than that of the representative are chosen. Priority S4: The best performing particle in the swarm is chosen. The sending list is filled in each swarm up to its predefined allowable size. There is also a replacement list that each swarm prepares, based on similar positional information of particles in the swarm. When swarms approach local optima, many particle locations become similar to each other. Each swarm then removes this excess information through its replacement list. The replacement list in each swarm is prepared in the following order. Priority R1: Particles with identical parametric-space information, in the order of their objective values, with those of least objective value replaced first. Priority R2: Particles with the lowest objective values are replaced once all particles of the previous priority have been placed in the replacement list. This information exchange among swarms can happen in a ring sequential order or in a random order between each pair of swarms, as shown in Figure 4.1. Each swarm accepts the sending list from another swarm and replaces its own replacement list with it. After the information exchange completes, the leaders of each swarm are selected. This algorithm is shown in Figure 4.2.

Figure 4.1 Ring and random sequential migration: Migration can be (a) in ring sequential order, between swarms 1 and 2, then between swarms 2 and 3, etc., or (b) in a random order between swarms, where i, k, s, t, j are random numbers between 1 and n.
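Priorities S1-S4 and R1-R2 can be sketched as below for a maximization problem. The function names are illustrative; the Hamming distance is used as stated in the text (for real-valued positions a Euclidean distance could substitute), and the best copy of each duplicated position is kept:

```python
import numpy as np

def hamming(a, b):
    """Average Hamming distance: fraction of mismatching components."""
    return float(np.mean(a != b))

def dmpso_lists(swarm, obj, m, n_replace):
    """Sketch of the sending priorities S1-S4 and replacement priorities R1-R2."""
    n = len(swarm)
    # S1: representative = particle with the least average Hamming distance to the rest
    avg_d = [np.mean([hamming(swarm[i], swarm[j]) for j in range(n) if j != i])
             for i in range(n)]
    rep = int(np.argmin(avg_d))
    near = sorted((i for i in range(n) if i != rep),
                  key=lambda i: hamming(swarm[rep], swarm[i]))
    s2 = [i for i in near if obj[i] > obj[rep]][:m]   # S2: nearby, better than the representative
    s3 = [i for i in near if obj[i] < obj[rep]][:m]   # S3: nearby, worse than the representative
    s4 = int(np.argmax(obj))                          # S4: best performing particle
    sending = list(dict.fromkeys([rep] + s2 + s3 + [s4]))
    # R1: duplicated positions; the best copy is kept, least objective replaced first
    seen, dup = set(), []
    for i in sorted(range(n), key=lambda i: -obj[i]):
        key = tuple(swarm[i])
        if key in seen:
            dup.append(i)
        else:
            seen.add(key)
    dup.sort(key=lambda i: obj[i])
    # R2: lowest objective values, once all duplicates are listed
    r2 = [int(i) for i in np.argsort(obj) if i not in dup]
    return sending, (dup + r2)[:n_replace]
```

For a small binary swarm with one duplicated position, the duplicate's worse copy heads the replacement list, followed by the lowest-objective particles.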
Figure 4.2 Main algorithm for diversity-based multiple PSO (DMPSO):

  Initialize the population at time t = 1.
  Cluster the population into P swarms using k-means clustering.
  If it is time for migration (t = tMigration), then:
    a. Prepare the sending list and replacement list for each swarm;
    b. Exchange particles between pairs of swarms, using the sending and replacement lists of each swarm;
    c. Perform the PSO on the new swarms using Equation (4.1).
  Else:
    Perform the PSO on each swarm using Equation (4.1).
  Repeat the above steps until the stopping criteria are met (t = tmax).

To further overcome the premature convergence problem, especially in multimodal objective optimization, and to increase the ability of particles to communicate information of common interest, a concept of neighborhood is proposed to promote the particles in a neighborhood to utilize and share information among themselves. For this PSO schema, a three-level mechanism is adopted. At the personal level, a particle in a swarm follows the leader of the swarm, that is, the best behaving particle in that swarm. At the neighborhood level, the particle simultaneously follows the best behaving particle in its neighborhood, to achieve synchronized behavior in the neighborhood and to share information. Finally, at the global level, particles of each swarm follow the best behaving particle in the whole population, seeking a global goal. This paradigm of PSO is formulated as

  v_i(t+1) = w v_i(t) + c1 r1 (cbest_i - x_i(t)) + c2 r2 (gbest - x_i(t)) + c3 r3 (nbest_i - x_i(t)),  (4.3)
  x_i(t+1) = x_i(t) + v_i(t+1),

where v_i is the velocity of the particle, x_i is the position of the particle, cbest_i is the best position in the cluster, gbest is the best position among all particles, and nbest_i is the best position among the particle's neighborhood. r1, r2, and r3 are random numbers uniformly generated in the range of [0, 1]. Thus particles always move statistically towards the directions of cbest, gbest, and nbest in order to use past experience in the search process.
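A single flight step under Equation (4.3) can be sketched as below. The default coefficients mirror the settings reported later in this chapter (c1 = c2 = c3 = 1.4, w = 0.8); the helper name is illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def flight(x, v, cbest, gbest, nbest, w=0.8, c1=1.4, c2=1.4, c3=1.4):
    """One application of Equation (4.3): momentum plus attraction toward the
    cluster best, the global best, and the neighborhood best."""
    r1, r2, r3 = rng.random(3)              # uniform random numbers in [0, 1)
    v_new = (w * v
             + c1 * r1 * (cbest - x)        # personal (cluster leader) term
             + c2 * r2 * (gbest - x)        # global term
             + c3 * r3 * (nbest - x))       # neighborhood term
    return x + v_new, v_new
```

A particle already sitting at all three attractors with zero velocity stays put, as the equation requires.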
c1, c2, and c3 are constant values representing the weights of the personal, global, and neighborhood terms, respectively, and w is the momentum for the previous velocity. It should be noted that the unified PSO [120] integrates the local-best and global-best PSOs into a single equation that updates the velocity of particles based on the global best particle, the neighboring best particle, and the particle's own best position, while in the proposed paradigm the velocity is updated using the best particle in the cluster of particles, the best particle in the neighboring swarms, and the best global particle, with no restriction on the weights. To find the neighborhood among particles in PSO, different strategies have been used by researchers [37, 121]. Some have applied the ring neighborhood, the von Neumann neighborhood, or other topological neighborhoods. The proposed definition of neighborhood identifies neighboring swarms through an average point serving as the representative of each swarm. In the ith swarm with particles x_1, ..., x_Ni, the representative, R_i, is defined by the centroid of all particles:

  R_i = (1/N_i) sum_{k=1..N_i} x_k.  (4.4)

The interswarm distance between swarms i and j, R_ij, is defined through the inner product of the difference of the two representative vectors:

  R_ij = sqrt( sum_d (r_i^d - r_j^d)^2 ),  (4.5)

where r_i^d is the dth element of the representative R_i. Swarms are defined to be in a neighborhood if and only if all interswarm distances among them are less than the average interswarm distance, where the average is taken over all P(P-1)/2 pairs of swarms and P is the number of swarms. For example, two swarms i and j are in a neighborhood if and only if R_ij is less than the average interswarm distance.

Figure 4.3 Schema of swarm neighborhood: Swarms 1, 2 and 3 are in a neighborhood, since R12, R13 and R23 are all less than the average interswarm distance, but swarm 4 does not belong to this neighborhood. Swarms 4, 5 and 6 form another neighborhood, because R45, R46 and R56 are all less than the average, while swarm 3 does not belong to it. (Solid circles denote the representative points of each swarm.)
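Equations (4.4)-(4.5) and the averaging rule can be sketched as follows, with swarms given as arrays of particle positions (the function name is illustrative):

```python
import numpy as np

def neighbor_sets(swarms):
    """Compute each swarm's neighbor set: representatives are centroids
    (Eq. 4.4); pairwise representative distances R_ij (Eq. 4.5) are compared
    against the average over all P(P-1)/2 pairs."""
    reps = np.array([s.mean(axis=0) for s in swarms])                # Eq. (4.4)
    P = len(reps)
    R = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=2)  # Eq. (4.5)
    r_avg = R[np.triu_indices(P, k=1)].mean()                        # average interswarm distance
    return [{j for j in range(P) if j != i and R[i, j] < r_avg} for i in range(P)]
```

With six swarms arranged as two well-separated triples, as in Figure 4.3, each triple comes out as one neighborhood.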
If a swarm's distance to one member of a neighborhood is below the average but its distance to another member is not, then that swarm does not belong to the neighborhood. In Figure 4.3, an example with six swarms is shown. Swarms 1, 2 and 3 are in a neighborhood because R12, R13 and R23 are all less than the average interswarm distance, but swarm 4 does not belong to this neighborhood: although one of its distances to the members falls below the average, the others do not. Swarms 4, 5 and 6 form another neighborhood because R45, R46 and R56 are all less than the average; swarm 3 does not belong to this neighborhood for the same reason. A brief outline of the proposed algorithm is shown in Figure 4.4. The population is initialized and then clustered into P swarms using the k-means clustering method. The neighbor sets of each swarm are then found using Equations (4.4) and (4.5) and the rule mentioned above, as illustrated in Figure 4.3.

Figure 4.4 Main algorithm for diversity-based multiple PSO with neighborhood (NDMPSO):

  Initialize the population at time t = 1.
  Cluster the population into P swarms.
  If it is time for migration (t = tMigration), then:
    a. Prepare the sending list and replacement list for each swarm;
    b. Exchange particles between pairs of swarms, using the sending and replacement lists of each swarm;
    c. Find the neighbor sets of each swarm (N_i, i = 1, 2, ..., P);
    d. Perform the PSO on each new swarm:
       - find the cbest, nbest, and gbest for each new swarm,
       - apply the modified version of PSO, Equation (4.3).
  Else:
    a. Find the neighbor sets of each swarm (N_i, i = 1, 2, ..., P);
    b. Perform the PSO on each swarm:
       - find the cbest, nbest, and gbest for each swarm,
       - apply the modified version of PSO, Equation (4.3).
  Repeat the above steps until the stopping criteria are met (t = tmax).

To perform the PSO according to Equation (4.3), we have to find the best performing particle in each swarm, namely the cbest, the best performing particle in the neighbor sets of that swarm, namely the nbest, and the best performing particle in the whole population, the gbest. This process is iterated until the time for migration is reached.
At regular fixed intervals, each swarm prepares a list of particles to send to the next swarm and a list of particles to be replaced by particles coming from other swarms; the exchange of particles between each pair of swarms then happens according to Figure 4.1. This algorithm, including clustering, information exchange, and the flight of particles, continues until the stopping criteria are met.

4.4 Simulation Results

The proposed algorithm was tested on benchmark problems that are often used to examine GAs solving multimodal problems [4, 122]. These problems, adopted from [119], vary in difficulty and dimension. The performance of the proposed algorithm has been compared with two distributed genetic algorithms [118, 119]. One of them is a DGA with a standard migration policy (SDGA), best-sent-worst-replaced [118]. The other is a DGA with a diversity-based migration policy (DDGA) [119]. In order to draw a fair comparison, the same rate of information exchange as their migration rate has been adopted. The main population for the proposed algorithms, DMPSO and NDMPSO, was set at 50 particles. The k-means clustering method was used with P = 6 swarms. The coefficients c1, c2, c3, and w were selected as 1.4, 1.4, 1.4, and 0.8, respectively. The rate of information exchange, r, is varied over the values 0.05, 0.2, and 0.4. At fixed time intervals, particles are exchanged among swarms. The first problem used to test the proposed algorithm is F1 [119], with five peaks and four valleys between neighboring peaks. This function is depicted in Figure 4.5: Figure 4.5(a) shows a 3D landscape, while Figure 4.5(b) displays the contour map of F1. The results of applying both proposed algorithms are shown in Table 4.1. The percentage of optimal solutions found is calculated over 30 independent runs for each algorithm. A solution is considered optimal when the optimal objective value of 2.5 is reached.
The best objective values of the final solutions are averaged over the 30 runs to obtain the mean best objective reported in the table. Each algorithm is run for three values of the rate of information exchange: 0.05, 0.2, and 0.4. The best performing algorithm in each case is shown in bold face. A graphical view of the locations of the best particles of the final solutions is depicted in Figure 4.6. Figures 4.6(a) and (b) are for DMPSO with rates of information exchange of 0.05 and 0.4, and (c) and (d) are for NDMPSO with rates of 0.05 and 0.4, respectively. Figure 4.6 shows that some of the particles in DMPSO are trapped in the local maxima at (-0.897, 0) and (0.897, 0). In NDMPSO, most particles approach (0, 0), the global maximum. The results in Table 4.1 show that both proposed algorithms perform better, with NDMPSO outperforming all the others. The second benchmark problem is F2 [119], with 10 peaks, shown in Figure 4.7. The results of the algorithms are also shown in Table 4.1. A graphical view of the best particles for both algorithms is given in Figure 4.8. This figure shows that some particles in DMPSO are trapped in a local maximum, while in NDMPSO most particles reach the global maximum. The results reported in Table 4.1 also show the better performance of NDMPSO. The next benchmark function is F3 [119], shown in Figure 4.9, with two close peaks and a valley between them. The results of the algorithms are summarized in Table 4.1 as well, and a graphical view of the best particles is depicted in Figure 4.10. Figure 4.10 shows that in DMPSO some of the particles are trapped in the local maximum at (1.444, 0), while in NDMPSO most of the particles reach the global maximum at (1.697, 0). Table 4.1 also illustrates that NDMPSO outperforms the other algorithms. The next benchmark function is F4 [119], with a total of five peaks, one global maximum and four local maxima in its neighborhood, shown in Figure 4.11.
The results and the graphical presentation of the best particles in Table 4.1 and Figure 4.12 show once again that NDMPSO has fewer particles trapped in the four local maxima located at the corners of the variable space. The results in Table 4.1 confirm a higher number of optimal solutions found. The benchmark function F5 [119] has six peaks, two of which are global maxima, as shown in Figure 4.13. The results of the algorithms are shown in Figure 4.14 and Table 4.1. DDGA outperforms the proposed algorithms on this problem when the rate of information exchange is 0.05 or 0.2. On the other hand, with a higher rate the proposed algorithms perform better; i.e., when more particles are exchanged, PSO shows more superiority. The benchmark function F6 [119] has a variable dimension; three different dimensions of 25, 40, and 50 have been used here. The rate of information exchange for this case and the remaining benchmark functions has been fixed at 0.1. The best objective values of the final solutions are averaged over 30 runs and shown in Table 4.2. NDMPSO outperforms the other three algorithms at dimensions 25 and 40, but at dimension 50 DDGA performs better. The benchmark function F7 [119] also has a variable dimension, and the three dimensions of 25, 40, and 50 have been adopted here. The results in Table 4.2 show that NDMPSO outperforms the others at dimensions 25 and 40, but again at dimension 50 DDGA outperforms the others. The next benchmark function, F8 [119], has 10 variables; NDMPSO also outperforms the other algorithms in this case. Finally, the last benchmark function, F9 [119], has 40 variables; NDMPSO performs better than the other algorithms here as well. In general, NDMPSO outperformed the other algorithms on most of the benchmark functions. It was outperformed in some cases, especially on problems with very high dimension, by DDGA.
This might be due to the nature of the GA, whose recombination may perform better in high dimensions; however, this needs further testing.

Figure 4.5 Benchmark function F1 with five peaks and four valleys: (a) 3D landscape, (b) contour map.
Figure 4.6 Final best particles for F1: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.
Figure 4.7 Benchmark function F2 with 10 peaks: (a) 3D landscape, (b) contour map.
Figure 4.8 Final best particles for F2: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.
Figure 4.9 Benchmark function F3 with two peaks and one valley: (a) 3D landscape, (b) contour map.
Figure 4.10 Final best particles for F3: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.
Figure 4.11 Benchmark function F4 with five peaks: (a) 3D landscape, (b) contour map.
Figure 4.12 Final best particles for F4: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.
Figure 4.13 Benchmark function F5 with six peaks, two of which are global maxima: (a) 3D landscape, (b) contour map.
Figure 4.14 Final best particles for F5: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.

Table 4.1 Results for optimal found (%) and mean best objective for F1-F5

                              r      SDGA      DDGA      DMPSO     NDMPSO
F1  Optimal found (%)         0.05   0         0.8       10.0      13.3
                              0.2    0         13.3      13.3      20.0
                              0.4    0         15.8      16.7      23.3
F1  Mean best objective       0.05   1.98898   2.48217   2.4801    2.4863
                              0.2    1.97855   2.4605    2.4745    2.4793
                              0.4    2.02455   2.48811   2.4891    2.4905
F2  Optimal found (%)         0.05   0         22.5      33.3      53.3
                              0.2    0.8       5.8       13.3      26.6
                              0.4    0         17.5      23.3      33.3
F2  Mean best objective       0.05   6.73371   8.58322   8.6739    8.6783
                              0.2    6.70137   8.63548   8.6532    8.6621
                              0.4    7.31735   8.68075   8.6923    8.6953
F3  Optimal found (%)         0.05   0         3.3       16.7      20.0
                              0.2    0         20        23.3      33.3
                              0.4    0         23.3      43.3      53.3
F3  Mean best objective       0.05   4.67853   4.812     4.8121    4.8127
                              0.2    4.7159    4.810     4.8117    4.8136
                              0.4    4.73849   4.81496   4.8151    4.8155
F4  Optimal found (%)         0.05   3.3       4.2       16.7      26.6
                              0.2    0         42.5      43.3      53.3
                              0.4    0         35        36.7      43.3
F4  Mean best objective       0.05   1.34999   1.48242   1.49016   1.49127
                              0.2    1.33163   1.49341   1.49281   1.49332
                              0.4    1.29936   1.49123   1.49178   1.49341
F5  Optimal found (%)         0.05   11.7      67.5      43.3      53.3
                              0.2    14.2      69.2      33.3      36.7
                              0.4    0.8       22.5      36.6      43.3
F5  Mean best objective       0.05   0.970634  1.03      1.0283    1.0297
                              0.2    0.975198  1.03006   1.0281    1.0288
                              0.4    0.941727  1.02874   1.0293    1.0297

Table 4.2 Mean best objectives for F6, F7, F8, and F9

         Dimension   SDGA       DDGA       DMPSO      NDMPSO
Max F6   25          8456.46    8935.65    8745.8     8947.3
         40          12578      13131.1    13002.4    13135.1
         50          15004.6    15554.9    15387.5    15423.6
Max F7   25          8682.31    9093.61    9026.5     9098.4
         40          12796.1    13324.3    13304.5    13331.3
         50          15124      15810.5    15723      15799
Max F8   10          1.9513     1.97217    1.9673     1.97221
Max F9   10          605.201    627.921    616.436    628.142

4.5 Discussions

A paradigm for particle swarm optimization is presented in order to increase its ability to search widely and to overcome its premature convergence problem. The proposed algorithm uses multiple swarms and exchanges particles among them at regular intervals. The exchanged particles are selected according to the locations of the particles, based on a diversity-promoting strategy, together with their corresponding objective values. Furthermore, the PSO was modified with a new neighborhood term that helps neighboring swarms share information of common interest. The neighborhood of each swarm is found with an unsupervised procedure based on the interswarm distances between the representatives of each pair of swarms. The proposed algorithm, NDMPSO, showed strong performance compared to DMPSO and to two versions of the distributed genetic algorithm that share a similar conceptual basis with the proposed algorithm. DMPSO showed competitive results compared to the DGAs. NDMPSO showed better performance than DMPSO, indicating that sharing information in the neighborhood of swarms helps them escape local optima and locate the global optimum. As a drawback, both proposed algorithms show a dependence of their performance on the rate of information exchange. A range of rates from 0.05 to 0.4 has been examined, which reveals no conclusion on which rate is better for a specific application.
Further work is needed to find an optimum exchange rate. Because the diversity promotion at the core of the proposed algorithm works well for multimodal problems, it is a promising basis for solving multiobjective optimization.

CHAPTER V

CULTURAL-BASED MULTIOBJECTIVE PARTICLE SWARM OPTIMIZATION

5.1 Introduction

Population-based heuristics for solving multiobjective optimization problems (MOPs) have gained much attention. The multiobjective evolutionary algorithm (MOEA) and multiobjective particle swarm optimization (MOPSO) are two popular population-based paradigms introduced within the last decade. MOPSO adopts the particle swarm optimization (PSO) paradigm [1], which in turn mimics the behavior of flocking birds. Although there exist many studies on single-objective PSO suggesting dynamic weights for the local and global acceleration [123], most MOPSO researchers assume that all particles should move with constant momentum, local, and global acceleration. There have not been many studies considering the possibility that particles fly with different "personalized" weights for the momentum, local, and global acceleration. Some may argue that there is no need for a personalized weight for each particle. However, if an algorithm applies the same weight to all particles, particles requiring a smaller weight will unnecessarily jump far away from the optimum, while particles that need a greater weight will move unsatisfactorily slowly; both situations result in an inefficient design. On the other hand, employing a personalized weight for each particle assigns an appropriate amount of jump and contributes to the effectiveness of the algorithm. From a biological point of view, one study [7] has shown that societies that can handle more complex tasks contain polymorphic individuals. Polymorphism is a significant feature of social complexity that results in differentiated individuals.
The more differentiated the society, the more easily it can handle complex tasks. Differentiation applies in principle to complex societies of prokaryotic cells and multicellular organisms, as well as to colonies of multicellular individuals such as ants, wasps, and bees. Colony performance is improved if individuals differentiate in order to specialize in particular tasks; as a result of differentiation, individuals perform functions more efficiently. The study showed that the colony's capacity for higher cooperative activity when tackling tasks is a direct consequence of differentiation, among other factors. A few studies in MOPSO have tackled the issue of variable momentum for the particles, although in all of them the momentum is identical for all particles at a specific iteration. Some MOPSO paradigms have proposed simple strategies that adapt the momentum by decreasing it throughout swarming [57, 64, 67-68, 124], while other MOPSOs choose a random value for the momentum [54, 62-63, 66, 69] at every iteration. MOPSO, like PSO, is based upon a simple flight of the particle:

  v_i^d(t+1) = w v_i^d(t) + c1 r1 (p_i^d(t) - x_i^d(t)) + c2 r2 (g^d(t) - x_i^d(t)),  (5.1)
  x_i^d(t+1) = x_i^d(t) + v_i^d(t+1),  (5.2)

where x_i^d(t) is the dth dimension of the position of the ith particle at time t (i = 1, ..., N and d = 1, ..., D), v_i^d(t) is the dth dimension of the velocity of the ith particle at time t, p_i^d(t) is the dth dimension of the personal best position of the ith particle at time t, and g^d(t) is the dth dimension of the global best position at time t. c1 and c2 are constant values, called the personal and global acceleration, which give different importance to the personal and global terms of Equation (5.1). r1 and r2 are uniform random numbers from [0, 1] that give stochastic characteristics to the flight of particles. w is the velocity momentum of the particles. Figure 5.1 shows how the three vectors that affect the flight of a particle depend on the momentum and on the global and local acceleration.
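The contrast the chapter draws — one (w, c1, c2) triple for all particles versus a personalized triple per particle — can be made concrete by vectorizing Equations (5.1)-(5.2) with per-particle parameter arrays. How those arrays would be adapted is the subject of the cultural framework in Section 5.3; here they are simply inputs, and all names are illustrative:

```python
import numpy as np

def personalized_flight(X, V, Pbest, gbest, w, c1, c2, rng):
    """Equations (5.1)-(5.2) with per-particle momentum and accelerations:
    w, c1, c2 are length-N arrays, so each of the N particles flies with
    its own weights rather than one shared setting."""
    n, d = X.shape
    r1, r2 = rng.random((2, n, d))                 # uniform random numbers in [0, 1)
    V_new = (w[:, None] * V
             + c1[:, None] * r1 * (Pbest - X)     # personal term of Eq. (5.1)
             + c2[:, None] * r2 * (gbest - X))    # global term of Eq. (5.1)
    return X + V_new, V_new                       # Eq. (5.2)
```

Passing constant arrays recovers the standard uniform-weight flight, so the personalized form strictly generalizes Equations (5.1)-(5.2).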
When particles need to act as exploiters or explorers, the emphasis on each term in Equation (5.1) should be different. Therefore, not all particles should have the same values for the momentum, local, and global acceleration. To the best knowledge of the author, there is no appreciable work in MOPSO on adapting personalized momentum and acceleration based upon the particles' need for exploration or exploitation. Adapting these important factors in the flight of particles is a task that cannot be solved unless we have access to knowledge gathered throughout the search process. In this study, a computational framework is proposed based on the cultural algorithm (CA) [3, 125], adopting knowledge stored in a belief space in order to adapt the flight parameters of MOPSO. Cultural algorithms have frequently been used to vary the parameters of individual solutions in optimization problems [126-127]. The proposed paradigm resorts to different types of knowledge in the belief space to personalize the parameters of MOPSO for each particle. Every particle in MOPSO uses its own adapted momentum and acceleration (local and global) at every iteration to approach the Pareto front. The cultural algorithm provides the groundwork needed to employ the information stored in the different sections of the belief space efficiently and effectively. By incorporating CA into the optimization process, we categorize the information of the belief space and adopt it in a systematic manner. The information in the belief space provides the parameters needed by the optimization process whenever they are needed. As a result, the optimization process is more competent and successful.

Figure 5.1 Schema of a particle's movement in MOPSO: vectors affecting how the particle moves due to gbest, pbest, and its velocity.

The remaining sections complete the presentation of this chapter. In Section 5.2, the principles of the cultural algorithm and related work in MOPSO are briefly reviewed.
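The interplay described here — an acceptance function feeding the belief space, and the belief space in turn influencing variation — follows the generic cultural-algorithm loop. The skeleton below is that generic loop with user-supplied callables; it is a sketch of the general CA structure, not the specific five-section belief space of this dissertation:

```python
def cultural_step(population, belief, evaluate, accept, update_belief, influence, variate):
    """One generation of a generic cultural algorithm: score the population,
    let accepted individuals update the belief space, then let the belief
    space influence how each individual is varied."""
    scores = [evaluate(ind) for ind in population]
    belief = update_belief(belief, accept(population, scores))
    new_population = [variate(ind, influence(belief, ind)) for ind in population]
    return new_population, belief

# Toy 1-D example: the belief space stores the best accepted individual,
# and variation moves each individual halfway toward it.
evaluate = lambda x: -(x - 3.0) ** 2
accept = lambda pop, sc: [pop[sc.index(max(sc))]]
update_belief = lambda b, acc: {"best": acc[0]}
influence = lambda b, ind: b["best"]
variate = lambda ind, target: 0.5 * (ind + target)
```

Iterating this step drags the population toward the region the belief space currently considers promising, which is exactly the role the belief space plays for the flight parameters in the chapters that follow.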
In Section 5.3, the proposed cultural MOPSO is elaborated. In Section 5.4, simulation results on the benchmark test problems are evaluated in comparison with state-of-the-art MOPSO models; this section also includes a sensitivity analysis of the proposed cultural MOPSO. Finally, Section 5.5 summarizes the concluding remarks and future work of this study.

5.2 Review of Literature

5.2.1 Related Works in Multiobjective PSO

Hu and Eberhart [54], in their dynamic neighborhood MOPSO model, and Hu et al. [66], in the MOPSO with extended memory, adopted a random number in the range (0.5, 1) as the varying momentum; however, both the personal and global acceleration are constant values. Sierra and Coello Coello [62], in their crowding- and dominance-based MOPSO, used random values in the range (0.1, 0.5) for the momentum and random values in the range (1.5, 2.0) for the personal and global acceleration. They adopted this scheme to bypass the difficulty of fine-tuning these parameters for each test function. Zhang et al. [64] introduced an intelligent MOPSO based upon the Agent-Environment-Rules model of artificial life. In their model, along with adopting an immunity clonal operator, the momentum was decreased linearly from 0.6 to 0.2, but the personal and global acceleration remained constant. Li [67] proposed an MOPSO based upon a max-min fitness function. In his model, while the personal and global accelerations were set constant, the momentum was gradually decreased from 1.0 to 0.4. Zhang et al. [68] adopted a linearl
Title: Cultural Particle Swarm Optimization
Date: 2010-07-01
Author: Daneshyari, Moayed
Keywords: constrained optimization, cultural algorithm, dynamic optimization, multiobjective optimization, particle swarm optimization, swarm intelligence
Department: Electrical Engineering
Document Type:
Full Text Type: Open Access
Abstract: In this dissertation, a computational paradigm is proposed that adopts particle swarm optimization within a cultural framework. The cultural algorithm adopted here consists of two spaces: the population space and the belief space, the latter comprising five sections — situational knowledge, normative knowledge, topographical knowledge, history knowledge, and domain knowledge. Several innovative paradigms are proposed, including diversity-based particle swarm optimization, cultural-based multiobjective particle swarm optimization, cultural-based constrained particle swarm optimization, and cultural-based dynamic particle swarm optimization. Comparative studies between the proposed algorithms and the state-of-the-art algorithms in the corresponding fields demonstrate the potential of cultural-based particle swarm optimization, with competitive results verifying the effectiveness and efficiency of the proposed cultural particle swarm optimization in solving different types of optimization problems: single objective, multiobjective, constrained, and dynamic.
Note: Dissertation
Rights: © Oklahoma Agricultural and Mechanical Board of Regents
Transcript  CULTURAL PARTICLE SWARM OPTIMIZATION MOAYED DANESHYARI Bachelor of Science Electrical Engineering Sharif University of Technology Tehran, Iran 1995 Master of Science Biomedical Engineering Iran University of Science and Technology Tehran, Iran 1998 Master of Science Physics Oklahoma State University Stillwater, Oklahoma 2007 Submitted to the Faculty of the Graduate College of the Oklahoma State University in partial fulfillment of the requirements for the Degree of DOCTOR OF PHILOSOPHY July 2010 ii CULTURAL PARTICLE SWARM OPTIMIZATION Dissertation Approved: Dean of the Graduate College Dr. Gary G. Yen Dissertation Adviser Dr. Carl D. Latino Dr. Louis G. Johnson Dr. R. Russell Rhinehart Dr. A. Gordon Emslie iii ACKNOWLEDGEMENTS I would like to first thank my academic advisor, Professor Gary G. Yen, for his guidance, support and especially patience with all ups and downs during the years of studies that this dissertation was gradually constructed. If it were not with his flexibility with my different situations and his providing me the freedom to fully experience all aspects of academic research especially in the last two years, this academic research could never be completed. I would also like to extend my appreciation to the other committee members whose guidance, comments and review of the research work were of great importance for improving the quality of this document. My thanks also go to all my previous colleagues at the Intelligent Systems and Control Laboratory at Oklahoma State University that accompanied my progress throughout part of my research by offering me new ideas. I should also mention my thankfulness to my colleagues in my current profession as Assistant Professor at Elizabeth City State University whose help and flexibility to give me more free time to focus on my Ph.D. research work was a great help. 
Finally, I would like to express my gratitude for my parents, Farideh and Ahmad and my sister Matin who have always supported me throughout my years of studies and provided the understanding only possible although living far from me. iv Last, but not least, I would like to specially thank my family, my wife, Lily and my little son, Ryan, for their understanding, help, support and providing appropriate environment for me to work on my research during years of studying for doctorate degree. If it were not her verbal and spiritual support and his innocence and happiness to encourage me in working more, this study could never be accomplished. Moayed Daneshyari v Table of Contents Chapter Page CHAPTER I INTRODUCTION .......................................................................................................... 1 CHAPTER II LITERATURE REVIEW ............................................................................................. 12 CHAPTER III SOCIETTY AND CIVILIZAION FOR OPTIMIZATION .......................................... 29 3.1 Introduction ......................................................................................................... 29 3.2 Socialbased Algorithm for Optimization ........................................................... 31 3.2.1 Proposed Modifications ............................................................................... 39 3.3 Simulation Results .............................................................................................. 42 3.4 Discussions ......................................................................................................... 44 CHAPTER IV DIVERSITYBASED INFORMATION EXCHANGE FOR PARTICLE SWARM OPTIMIZATION .......................................................................................................... 46 4.1 Introduction ......................................................................................................... 
4.2 Review of Related Work
4.3 Diversity-based Information Exchange among Swarms in PSO
4.4 Simulation Results
4.5 Discussions

CHAPTER V CULTURAL-BASED MULTIOBJECTIVE PARTICLE SWARM OPTIMIZATION
5.1 Introduction
5.2 Review of Literature
5.2.1 Related Works in Multiobjective PSO
5.2.2 Related Work in Cultural Algorithm for Multiobjective Optimization
5.3 Cultural-based Multiobjective Particle Swarm Optimization
5.3.1 Acceptance Function
5.3.2 Belief Space
5.3.2.1 Situational Knowledge
5.3.2.2 Normative Knowledge
5.3.2.3 Topographical Knowledge
5.3.3 Influence Functions
5.3.3.1 Adapting Global Acceleration
5.3.3.2 Adapting Local Acceleration
5.3.3.3 Adapting Momentum
5.3.3.4 Selection
5.3.3.5 Selection
5.3.4 Global Archive
5.3.5 Time-decaying Mutation Operator
5.4 Comparative Study and Sensitivity Analysis
5.4.1 Comparison Experiment
5.4.1.1 Parameter Settings
5.4.1.2 Benchmark Test Functions
5.4.1.3 Qualitative Performance Comparisons
5.4.1.4 Quantitative Performance Evaluations
5.4.2 Sensitivity Analysis
5.5 Discussions

CHAPTER VI CONSTRAINED CULTURAL-BASED OPTIMIZATION USING MULTIPLE SWARM PSO WITH INTER-SWARM COMMUNICATION
6.1 Introduction
6.2 Review of Literature
6.2.1 Related Work in Constrained PSO
6.2.2 Related Works in Cultural Algorithm for Constrained Optimization
6.3 Cultural Constrained Optimization Using Multiple-Swarm PSO
6.3.1 Multi-Swarm Population Space
6.3.2 Acceptance Function
6.3.3 Belief Space
6.3.3.1 Normative Knowledge
6.3.3.2 Spatial Knowledge
6.3.3.3 Situational Knowledge
6.3.3.4 Temporal Knowledge
6.3.4 Influence Functions
6.3.4.1 Selection
6.3.4.2 Selection
6.3.4.3 Selection
6.3.4.4 Inter-Swarm Communication Strategy
6.4 Comparative Study
6.4.1 Parameter Settings
6.4.2 Benchmark Test Functions
6.4.3 Simulation Results
6.4.4 Convergence Graphs
6.4.5 Algorithm Complexity
6.4.6 Performance Comparison
6.4.7 Sensitivity Analysis
6.5 Discussions

CHAPTER VII DYNAMIC OPTIMIZATION USING CULTURAL-BASED PARTICLE SWARM OPTIMIZATION
7.1 Introduction
7.2 Review of Literature
7.2.1 Related Work in Dynamic PSO
7.2.2 Related Works in Cultural Algorithm for Dynamic Optimization
7.3 Cultural Particle Swarm for Dynamic Optimization
7.3.1 Multi-Swarm Population Space
7.3.2 Acceptance Function
7.3.3 Belief Space
7.3.3.1 Situational Knowledge
7.3.3.2 Temporal Knowledge
7.3.3.3 Domain Knowledge
7.3.3.4 Normative Knowledge
7.3.3.5 Spatial Knowledge
7.3.4 Influence Functions
7.3.4.1 pbest Selection
7.3.4.2 sbest Selection
7.3.4.3 gbest Selection
7.3.4.4 Diversity-based Migration Driven by Change
7.4 Experimental Study
7.4.1 Benchmark Test Problems
7.4.2 Comparison Algorithms
7.4.3 Comparison Measure
7.4.4 Simulation Results
7.5 Discussions

CHAPTER VIII CONCLUSION

BIBLIOGRAPHY

APPENDIX A BENCHMARK TEST FUNCTIONS FOR MULTIOBJECTIVE OPTIMIZATION PROBLEMS
APPENDIX B BENCHMARK TEST FUNCTIONS FOR CONSTRAINED OPTIMIZATION PROBLEMS

APPENDIX C BENCHMARK TEST FUNCTIONS FOR DYNAMIC OPTIMIZATION PROBLEMS

List of Figures
3.1 Flowchart for social-based single objective optimization
3.2 Flowchart for identifying leaders
3.3 Flowchart on how to migrate individuals
3.4 Pseudocode for individuality importance in intra-society migration
3.5 Schema for the Spring Design problem
3.6 Comparison of the best objective function for the proposed modifications
4.1 Ring and random sequential migration
4.2 Main algorithm for diversity-based multiple PSO
4.3 Schema of swarm neighborhood
4.4 Main algorithm for diversity-based multiple PSO with neighborhood
4.5 Benchmark function F1 with five peaks and four valleys
4.6 Final best particles for F1
4.7 Benchmark function F2 with 10 peaks
4.8 Final best particles for F2
4.9 Benchmark function F3 with two peaks and one valley
4.10 Final best particles for F3
4.11 Benchmark function F4 with five peaks
4.12 Final best particles for F4
4.13 Benchmark function F5 with six peaks
4.14 Final best particles for F5
5.1 Schema of a particle's movement in MOPSO
5.2 Pseudocode of the cultural MOPSO
5.3 Schema of the adopted cultural framework
5.4 Representation of situational knowledge
5.5 Schematic view of choosing the ith element of situational knowledge
5.6 Representation of normative knowledge
5.7 Schema of how normative knowledge is found and updated
5.8 Representation of knowledge in each cell
5.9 Example of cell representation
5.10 Schema of the local grid for the personal archive
5.11 Method of selecting from topographical knowledge
5.12 Selection procedure from the personal archive
5.13 Pareto fronts comparison on test function ZDT1
5.14 Pareto fronts comparison on test function ZDT2
5.15 Pareto fronts comparison on test function ZDT3
5.16 Pareto fronts comparison on test function ZDT4
5.17 Pareto fronts comparison on test function DTLZ5
5.18 Pareto fronts comparison on test function DTLZ6
5.19 Box plot of the hypervolume indicator for all test functions
5.20 Box plot for the additive binary epsilon indicator on test function ZDT1
5.21 Box plot for the additive binary epsilon indicator on test function ZDT2
5.22 Box plot for the additive binary epsilon indicator on test function ZDT3
5.23 Box plot for the additive binary epsilon indicator on test function ZDT4
5.24 Box plot for the additive binary epsilon indicator on test function DTLZ5
5.25 Box plot for the additive binary epsilon indicator on test function DTLZ6
5.26 Sensitivity analyses with respect to minimum personal acceleration
5.27 Sensitivity analyses with respect to maximum personal acceleration
5.28 Sensitivity analyses with respect to minimum global acceleration
5.29 Sensitivity analyses with respect to maximum global acceleration
5.30 Sensitivity analyses with respect to minimum momentum
5.31 Sensitivity analyses with respect to maximum momentum
5.32 Sensitivity analyses with respect to grid size
5.33 Sensitivity analyses with respect to population size
5.34 Sensitivity analyses with respect to mutation rate
6.1 Pseudocode of the cultural constrained particle swarm optimization
6.2 Schema of the cultural framework adopted
6.3 Representation of normative knowledge
6.4 Schema of how the spatial knowledge is computed
6.5 Representation of spatial knowledge for each particle
6.6 Representation of situational knowledge
6.7 Representation of temporal knowledge
6.8 Convergence graphs for problems
6.9 Convergence graphs for problems
6.10 Convergence graphs for problems
6.11 Convergence graphs for problems
7.1 Pseudocode of the cultural-based dynamic PSO
7.2 Schema of the cultural framework adopted here
7.3 Representation of situational knowledge
7.4 Representation of temporal knowledge
7.5 Representation of domain knowledge
7.6 Representation of normative knowledge
7.7 Representation of spatial knowledge
7.8 Sigmoid function to compute the repulsion factor in spatial knowledge
7.9 Comparison of OEV as a function of elapsed iterations on function MP1
7.10 Comparison of OEV as a function of peak numbers on function MP1
7.11 Comparison of OEV as a function of dimension on function MP1

List of Tables
3.1 Comparison of results for the Spring Design problem
4.1 Results for the optimum found and mean best objective for F1, F2, F3 and F5
4.2 Mean best objectives for F6, F7, F8, and F9
5.1 Parameter settings for all MOPSOs
5.2 Testing of the distribution of IH values using the Mann-Whitney test
5.3 Testing of the distribution using the Mann-Whitney test
5.4 Parameter selection for sensitivity analysis
5.5 Statistical test to check sensitivity to minimum personal acceleration
5.6 Statistical test to check sensitivity to maximum personal acceleration
5.7 Statistical test to check sensitivity to minimum global acceleration
5.8 Statistical test to check sensitivity to maximum global acceleration
5.9 Statistical test to check sensitivity to minimum momentum
5.10 Statistical test to check sensitivity to maximum momentum
5.11 Statistical test to check sensitivity to grid size
5.12 Statistical test to check sensitivity to population size
5.13 Statistical test to check sensitivity to mutation rate
6.1 Parameter settings for cultural CPSO
6.2 Summary of 24 benchmark test functions
6.3 Error values for different FEs on problems
6.4 Error values for different FEs on problems
6.5 Error values for different FEs on problems
6.6 Error values for different FEs on problems
6.7 Number of function evaluations to achieve the fixed accuracy level, success rate, feasibility rate, and success performance
6.8 Summary of statistical results found by cultural CPSO
6.9 Computational complexity
6.10 Comparison of cultural CPSO with state-of-the-art constrained optimization methods in terms of feasible rate
6.11 Comparison of cultural CPSO with state-of-the-art constrained optimization methods in terms of success rate
6.12 Sensitivity analysis with respect to personal acceleration
6.13 Sensitivity analysis with respect to swarm acceleration
6.14 Sensitivity analysis with respect to global acceleration
6.15 Sensitivity analysis with respect to rate of information exchange
7.1 Parameter settings for different paradigms
7.2 OEV index after 500,000 FEs on test problem MP1
7.3 OEV index after 500,000 FEs on test problem DF2
7.4 OEV index after 500,000 FEs on test problem DF3
7.5 OEV index after 500,000 FEs on test problem DF4
7.6 OEV index after 500,000 FEs on test problem DF5
7.7 OEV index after 500,000 FEs on test problem DF6
7.8 P-values using the Mann-Whitney rank-sum test
7.9 OEV index after 50,000 FEs using default parameters
B.1 Data set for test problem
B.2 Data set for test problem

Nomenclature
Number of decision variables; dimension of decision variables
Number of particles; number of individuals; population size
Number of constraints
Number of objectives
Number of swarms; number of societies
Number of inequality constraints
Tolerance for equality constraints
Population of the ith swarm; number of individuals in the ith society
Inequality constraint
Equality constraint
Personal best particle in PSO
Global best particle in PSO
Neighborhood best particle in PSO
Swarm best particle in PSO
Inequality constraint
Equality constraint
Personal acceleration in PSO
Global acceleration in PSO
Neighborhood acceleration in PSO
Swarm acceleration in PSO
Momentum in PSO

CHAPTER I
INTRODUCTION

Computational intelligence approaches based upon psychosocial studies of human or animal societies have been the subject of the emerging research field known as swarm intelligence. Research in swarm intelligence has focused on optimization in the spirit of the particle swarm [1], the ant colony system [2], and cultural algorithms [3]. While the population-based heuristics adopted in swarm intelligence are not mathematically guaranteed to find the global optimum of the search space, they perform remarkably well on many types of optimization problems. Particle swarm optimization (PSO) imitates the collaborative behavior of birds flying together and exchanging information, while the ant colony system is based on the fact that individual ants interact with each other through their pheromone trails. The cultural algorithm (CA) is a dual inheritance system in which the collective behavior of the population of individuals constructs a belief space, which in turn becomes accessible to all individuals in the population space. Additionally, the multinational algorithm [4] solves difficult multimodal optimization problems using heuristics that imitate political interactions among nations. In another heuristic, based on the relation between society and civilization [5], the inter-society versus intra-society relationships among individuals facilitate building an optimization model. The whole population of individuals, called the civilization, is clustered into societies based on the Euclidean closeness of the individuals. The performance of individuals decides which individuals are the leaders of a society. The remaining individuals follow the leaders in order to improve themselves, which leads to migration (intra-society interaction).
From the civilization viewpoint, the leaders of the societies improve themselves by migrating toward the best-performing leaders, who are the civilization leaders (inter-society interaction). The weakness of this paradigm is that it does not use the information available from all of the individuals. Particle swarm optimization changes the positions and velocities of the particles in a manner that optimizes a goal function. PSO has demonstrated promising performance on many optimization problems; yet its fast convergence often leads to premature convergence, in which a local optimum of the goal function is found instead of the global one. The tradeoff between fast convergence and being trapped in local optima is even more critical for multimodal functions. In order to escape from local optima and avoid premature convergence, the search for the global optimum should be diverse. Many researchers have improved the performance of PSO by enhancing its ability to search more diversely. Specifically, some have proposed using multiple swarms, each running PSO, and then exchanging information among them. The weakness of these algorithms is that they do not consider a diverse enough set of information to exchange, which again leads to premature convergence. Exchanging information among clusters has also been adopted as an important design element in several computational methods. The distributed genetic algorithm [6] employs the GA mechanism to evolve several subpopulations in parallel. At regular intervals, migration among subpopulations takes place: a proportion of each subpopulation is selected and sent to another subpopulation, and the migrant individuals replace others based on a replacement policy. Several population-based heuristics have been developed to solve multiobjective optimization problems (MOPs), among which the multiobjective evolutionary algorithm (MOEA) and multiobjective particle swarm optimization (MOPSO) are two popular paradigms.
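The position-and-velocity mechanism described above follows the canonical PSO update. The sketch below shows the generic textbook formulation, not the specific variant developed in this dissertation; the function name and the parameter values (w, c1, c2) are illustrative defaults.

```python
import random

def pso_step(position, velocity, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5):
    """One velocity-and-position update for a single particle.

    w  : momentum (inertia) weight
    c1 : personal (cognitive) acceleration
    c2 : global (social) acceleration
    """
    new_velocity = []
    new_position = []
    for d in range(len(position)):
        r1, r2 = random.random(), random.random()
        # velocity pulled toward the personal best and the global best
        v = (w * velocity[d]
             + c1 * r1 * (pbest[d] - position[d])
             + c2 * r2 * (gbest[d] - position[d]))
        new_velocity.append(v)
        new_position.append(position[d] + v)
    return new_position, new_velocity
```

With a large c2 relative to c1, particles rush toward the global best, which illustrates the fast-convergence/premature-convergence tradeoff discussed above.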
Although much research on single-objective PSO suggests dynamic weights for the local and global acceleration, most MOPSO researchers assume that all particles should move with identical momentum, local acceleration, and global acceleration. To the best of our knowledge, there have been no studies considering the case in which particles fly with different "personalized" weights for the momentum, local, and global acceleration. Employing a personalized weight for each particle assigns it a proper jump size, contributing to the effectiveness of the overall algorithm. One computational difficulty is tuning proper values for the momentum, personal, and global acceleration in MOPSO in order to attain the best results on different test functions. From a biological point of view, work presented in [7] has shown that societies that can handle more complex tasks contain polymorphic individuals. Polymorphism is a significant feature of social complexity that results in differentiated individuals: the more differentiated the society, the more easily it can handle complex tasks. Differentiation applies in principle to complex societies of prokaryotic cells and multicellular organisms, as well as to colonies of multicellular individuals such as ants, wasps, and bees. Colony performance improves if individuals differentiate in order to specialize in particular tasks; as a result of differentiation, individuals perform functions more efficiently. Their study showed that a colony's capacity for higher cooperative activity when tackling tasks is a direct consequence of differentiation, among other factors. A few studies in the MOPSO research area have tackled the issue of variable momentum for the particles, although in all of them the momentum is identical for all particles at any given iteration.
Some MOPSO paradigms adapt the momentum by simply decreasing it throughout swarming, while other MOPSO algorithms choose a random value for the momentum at every iteration. To the best knowledge of the author, there is no noticeable study in MOPSO on adapting personalized dynamic momentum and acceleration based upon the particles' need for exploration or exploitation. Constrained optimization is another area that has been addressed with population-based paradigms during the last two decades. Swarm-based algorithms have recently been developed to handle the constraints in this type of problem. Although there are a few research studies on PSO for constrained optimization problems, none of them adopts the information from all particles to perform communication within PSO in order to share common interests and act synchronously. When particles share their information through communication with each other, they can efficiently handle the constraints and optimize the objective function. From a sociological point of view, studies have shown that human societies migrate from one place to another in order to cope with their life constraints and limitations, as well as to reach a better economic, social, or political life [8]. People living in different societies migrate in spite of different value systems and cultural differences; indeed, cultural belief is an important factor affecting the issues underlying the migration phenomenon [9]. On the other hand, finding the appropriate information for communication within a swarm can be computationally expensive. One computational difficulty is finding the appropriate information to communicate within PSO so that the algorithm can simultaneously handle the constraints and optimize the objective function. The optimum solution of many real-world optimization problems changes over time.
In such cases, known as dynamic optimization problems, the heuristic should track a change as soon as it happens and respond promptly. For example, in job scheduling problems, new jobs arrive or machines may break down during operation, resulting in a need for dynamic job schedules that accommodate changes over time [10]. In another example, the dynamic portfolio problem, the goal is to obtain an optimal allocation of assets that maximizes profit and minimizes investment risk [11]. There are four major categories of uncertainty that have been dealt with using population-based evolutionary approaches: noise in the fitness function, perturbations in the design variables, approximation in the fitness function, and dynamism in the optimal solutions [12]. While noise and approximation bring uncertainty into the objective function, perturbation introduces uncertainty into the decision space. The source of change can be a change in the objective function, constraints, environmental parameters, or problem representation during the optimization process. These changes may affect the height, width, or location of the optimum solution, or a combination of these three [13]. The application of PSO to dynamic optimization problems has been studied by various researchers, and there are some issues with the PSO mechanism that need to be addressed. Maintaining outdated memory is one issue in dynamic optimization problems: when a problem changes, a previously good solution stored as a neighborhood or personal best may no longer be good and will mislead the swarm toward false optima. Diversity loss is another problem, in which the population normally collapses around the best solution. In dynamic optimization, the partially converged population should, after a change is detected, quickly rediversify, find the new optimum, and reconverge [10]. A number of adaptations have been applied to PSO in order to overcome these difficulties.
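One common remedy for the outdated-memory problem described above is to re-evaluate every stored personal best once a change is detected, so the swarm stops chasing a false optimum. This is a minimal sketch of that general idea, not the dissertation's own mechanism; the data layout and function name are hypothetical, and minimization is assumed.

```python
def refresh_memory(swarm, objective):
    """Re-score stored personal bests after an environment change.

    swarm is a list of dicts with keys 'position', 'pbest', 'pbest_val'.
    Each memory is re-evaluated under the new objective; if the particle's
    current position now beats its re-scored memory, the memory is replaced.
    """
    for particle in swarm:
        particle['pbest_val'] = objective(particle['pbest'])  # re-score memory
        current_val = objective(particle['position'])
        if current_val < particle['pbest_val']:  # minimization assumed
            particle['pbest'] = list(particle['position'])
            particle['pbest_val'] = current_val
```

Re-evaluation alone does not restore diversity; it only prevents stale memories from misleading the swarm, which is why rediversification mechanisms are still needed.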
In general, a good evolutionary heuristic for solving DOPs should reuse as much information as possible from previous iterations to speed up the optimization search. Among the studies on dynamic PSO, none exploits information from all particles to perform rediversification through migration and repulsion. When particles share their information through a migration process, they can quickly rediversify and move efficiently toward the new optimum by reconverging around it. In order to construct the environment required for this rediversification and reconvergence, we need groundwork that helps us utilize this information. The major groundwork is the belief space of the cultural algorithm, which assists the particles in locating the necessary information in an organized manner. As discussed in psychosocial texts, attitudinal similarity is a leading factor in attraction among individuals, while dissimilarity leads to repulsion in interpersonal relationships [14]; as a result, people often diverge from members of other social groups by selecting different cultural attitudes or behaviors [15]. Indeed, different cultural beliefs lead to repulsion and increase the possibility of divergence in ideas, which in turn opens the door to new opportunities. One challenge is the difficulty of finding appropriate information that can be relied on for a quick rediversification when a change happens in the environment. Using many concepts from the cultural algorithm, such as spatial knowledge, temporal knowledge, domain knowledge, normative knowledge, and situational knowledge, the information can be organized competently and adopted in several steps of the PSO's updating mechanism, in addition to driving rediversification and repulsion among swarms.
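The five knowledge sources named above can be pictured, for illustration only, as a simple container updated through an acceptance function. The field names follow the chapter headings, but the data types and the update rule shown (tracking the best exemplar in situational knowledge under minimization) are hypothetical simplifications, not the dissertation's actual representations.

```python
class BeliefSpace:
    """Minimal illustrative belief space with the five knowledge sources."""

    def __init__(self):
        self.situational = None  # best exemplar found so far
        self.normative = {}      # per-dimension promising intervals
        self.domain = {}         # problem-specific facts (e.g., peak movement)
        self.temporal = []       # history of detected environment changes
        self.spatial = {}        # regional occupancy / repulsion information

    def accept(self, accepted, objective):
        """Acceptance function: update situational knowledge from the
        top-performing positions handed over by the population space."""
        for position in accepted:
            if (self.situational is None
                    or objective(position) < objective(self.situational)):
                self.situational = list(position)
```

In a full cultural algorithm, each knowledge source would also feed back into the population through influence functions, as the chapters outlined in the table of contents describe.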
The special rediversification needed to deal with a change in the problem dynamics is an important task that can be solved more efficiently when we have access, throughout the search process, to the knowledge maintained by the cultural algorithm as the computational framework. The remainder of this dissertation is organized as follows. In Chapter II, a comprehensive literature survey of related computational intelligence paradigms prepares the ground for the following chapters. Chapter III first elaborates on a paradigm based upon intra-society and inter-society interaction in order to build an algorithm for solving single-objective optimization problems. Next, the proposed modifications to this social-based heuristic are introduced. The proposal has two aspects: the first adopts information from all individuals in the society (i.e., not only the best-performing individuals). The second is based on the fact that different societies exhibit different collective behavior. Politically speaking, the collective behavior of a society is quantified into a measure called the liberty rate. In a real sociological context, individuals in a democratic society have more flexibility and freedom to choose a better environment to live in, whereas a dictatorship suppresses politically driven environmental change. While individuals in a liberal society can freely move closer to the leaders, individuals in a less liberal society are restricted from moving near the leaders. Hence, the higher the liberty rate of a society, the more flexibly an individual in that society can move. At the end of the chapter, simulation results for a real-world mechanical design problem are used to test the performance of the two proposed modifications. In Chapter IV, a heuristic is proposed to diversify the search using a novel three-level particle swarm optimization in a multiple-swarm population space.
The PSO mechanism is customized to incorporate three levels of search. At the lowest level, particles follow the best-behaving particle in their own swarm; at the next level, particles follow the best-performing particle in the neighboring swarms; and at the highest level, particles track the whole population's best-behaving particle. A novel algorithm defines the neighboring swarms based upon the closeness between representatives of each pair of swarms. After a specified number of iterations, the swarms communicate with each other. Each swarm assembles two lists, a sending list and a replacement list. To prepare these two sets of particles, a diversity measure is taken as the primary goal instead of the performance of the particles alone. When particles approach a local optimum, several of them carry similar positional information; this redundant information is replaced by particles from other swarms to diversify the search. At the end of the chapter, the proposed heuristic is tested on benchmark multimodal optimization problems, demonstrating its efficiency and its potential to solve difficult optimization problems. Chapter V proposes an innovative algorithm that adopts the cultural information stored in the belief space to adjust the flight parameters of multiobjective particle swarm optimization (MOPSO), such as the personal acceleration, global acceleration, and momentum. A belief space containing three sections of knowledge is constructed as the groundwork to perform MOPSO and adapt the parameters. Every particle in MOPSO uses its own adapted momentum and acceleration (local and global) at every iteration to approach the Pareto front. The cultural algorithm provides the groundwork enabling us to employ the information stored in the different belief space sections efficiently and effectively.
The proposed cultural MOPSO is then evaluated against state-of-the-art MOPSO models, showing very competitive and well performing outcomes. Finally, a comprehensive sensitivity analysis is performed for the cultural MOPSO with respect to its tuning parameters. In Chapter VI, a novel heuristic is proposed based upon the information extracted from the belief space to facilitate inter-swarm communication among multiple swarms in particle swarm optimization for solving constrained optimization problems. The cultural computational framework is used to find the leading particles at the personal level, swarm level, and global level. Every particle moves using a three-level flight mechanism; the particles then divide into several swarms, and inter-swarm communication takes place to share the information. The performance of the proposed cultural constrained particle swarm optimization (CPSO) is compared against ten state-of-the-art constrained optimization paradigms on 24 benchmark test problems. The comprehensive simulation results demonstrate cultural CPSO to be very effective and efficient. Chapter VII proposes an innovative computational framework according to the cultural algorithm to solve dynamic optimization problems, using knowledge stored in the belief space to re-diversify and repel the population right after a change takes place in the dynamics of the problem. Thus the algorithm can comfortably compute the repulsion factor for each particle and locate the leading particles at the personal level, swarm level, and global level. Each particle in the proposed cultural-based dynamic PSO flies through a three-level flight mechanism incorporated with a repulsion factor. After a change takes place, particles regroup into several swarms, and a diversity-based migration among swarms, along with the repulsive mechanism implemented in the repulsion factor, takes place to increase the diversity as quickly as possible.
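The three-level flight mentioned above can be sketched as a velocity rule pulled simultaneously by a personal leader, a swarm-level leader, and a global leader. This is only a minimal sketch; the particular coefficients and their equal weighting are illustrative assumptions, not the dissertation's tuned values.

```python
import random

def three_level_velocity(x, v, p_personal, p_swarm, p_global,
                         w=0.6, c1=1.4, c2=1.4, c3=1.4):
    """One velocity update with three attraction terms: the particle's own
    best (p_personal), its swarm's leader (p_swarm), and the population's
    leader (p_global). Coefficients w, c1, c2, c3 are illustrative."""
    out = []
    for d in range(len(x)):
        r1, r2, r3 = (random.random() for _ in range(3))
        out.append(w * v[d]
                   + c1 * r1 * (p_personal[d] - x[d])   # personal level
                   + c2 * r2 * (p_swarm[d] - x[d])      # swarm level
                   + c3 * r3 * (p_global[d] - x[d]))    # global level
    return out
```

When all three leaders coincide with the particle's own position, only the momentum term w * v remains, so the particle coasts; otherwise each level contributes its own stochastic pull.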
Finally, Chapter VIII discusses the concluding remarks on how swarm, culture, and society help in solving single objective, multiobjective, constrained, and dynamic optimization problems. Suggestions for future work are also proposed in this chapter. CHAPTER II LITERATURE REVIEW In this chapter, we briefly review the related work that will assist in understanding the background concepts required for this dissertation. Population-based computational intelligence heuristics have evolved extensively from natural evolution-based Genetic Algorithms (GA) [16-17] over decades of research work. Computational intelligence approaches based upon the psychosocial behavior of either human or animal societies have been the subject of emerging research for a decade. Some concepts borrowed from sociology have shown great improvements in the performance of computational methods. Migration of individuals between concurrently evolving populations has shown its potential to improve the genetic algorithm mechanism [18]. In distributed GA [6], the sociologically inspired concept of communication shows great improvement in the performance of GA. The population is divided into several subpopulations, each evolving an independent GA, while at regular time intervals these GAs communicate with each other. Sociological researchers have constructed models to mimic the behavior of human and animal societies. Heppner and Grenander studied synchronization in groups of small birds like pigeons, developing a flocking heuristic based upon social interactions such as attraction to a roost, attraction to flockmates, and preserving the velocity [19].
Deneubourg and Goss introduced a mathematical model [20] showing that the interaction between individuals and their environment produces different collective patterns in the decision making process, which is naturally observed in schools of fish, flocks of birds, groups of mammals, and many other social aggregates. Millonas proposed a model of the collective behavior of a large number of locally acting organisms [21], in which organisms move probabilistically between local cells in space, but with different weights. The evolution and flow of the organisms construct the collective behavior of the group. This model could successfully analyze the movements of ants as swarming organisms. Reynolds developed a computer animation of a simulated bird based upon local perception of the dynamic environment, the laws of simulated physics ruling its motion, and a set of simulated behaviors [22]. Akhtar et al. proposed a socio-behavioral simulated model [23] based upon the concept that the behavior of an individual changes and improves through social interaction with the society leaders, who are identified using a Pareto ranking scheme. The leaders of all societies, in turn, improve their own behavior, which leads to a better civilization. Ursem introduced the multinational evolutionary algorithm, based on the relationship between different nations and their political interactions, in order to optimize a profit function [4]. Ray and Liew adopted the inter-society and intra-society relationships among the individuals and the leaders to solve single objective optimization problems [5]. The whole population, clustered into several groups, evolves in two stages: individuals within a group follow the group's best performing individual, while at the level of the whole population the very best performing individual leads all the groups' leaders.
Ursem elaborated on the idea of sharing among agents in a social entity as a means of maintaining multiple peaks in multimodal optimization problems [24]. Deneubourg and coauthors proposed a probabilistic model to explain the behavior of ants as social agents [25], followed by Goss et al., who showed how sharing information among ants, done by laying a trail and following it, could help solve the foraging problem in their societies [26]. Inspired by their research, Dorigo et al. introduced a new computational paradigm, the Ant Colony Optimization (ACO) model, that could be adopted to solve engineering optimization problems. ACO's main characteristics were a positive feedback for rapid discovery of good solutions, a distributed computation to avoid premature convergence, and a greedy heuristic to find acceptable solutions in the early stages of the search process [2, 27]. The ACO model has been successfully applied to the symmetric and asymmetric Travelling Salesman Problem (TSP), a classical difficult combinatorial optimization problem [28-29], the quadratic assignment problem [30], adaptive routing [31], and the job-scheduling problem [2]. Sahin et al. reported applying the ant-based swarm algorithm to forming different patterns through interaction among mobile robots [32]. Kennedy and Eberhart introduced particle swarm optimization (PSO), an algorithm based on imitating the behavior of flocking birds. It mimics the grouping of birds as particles, their random movement, and their regrouping to generate a model that can solve engineering optimization problems [1, 33]. Particles are characterized by their positions and velocities, which are updated using:

v_i(t+1) = w v_i(t) + c1 r1 (p_i - x_i(t)) + c2 r2 (p_g - x_i(t)), (2.1)
x_i(t+1) = x_i(t) + v_i(t+1),

where v_i is the velocity of the ith particle, x_i is its position, p_i is the best position each particle has ever experienced, and p_g is the best position among all particles. r1 and r2 are random numbers uniformly generated in the range [0, 1].
c1, c2, and w are the personal, social, and momentum coefficients [34], which are predefined constant values. The movement of the particles has been analyzed to understand the mechanism underlying PSO and its relation to other population-based heuristics [35]. The analysis of the particles' trajectories while moving [36] has led to a generalized model of the algorithm, containing a set of coefficients to control the system's convergence tendencies. Studies of the effects of various population structures and topologies on the performance of the particle swarm algorithm have shown that the von Neumann configuration consistently outperforms other topological configurations of the particles' neighborhood [37-39]. Several versions of PSO have been developed. A discrete PSO was introduced [40] operating on discrete binary variables, whose trajectories are defined as changes in the probability that a coordinate will take on a zero or one value. Compared with GA on some multimodal optimization problems, discrete PSO showed competitive results [41-42]. A modified PSO using a constriction factor [43] performed well compared with the original PSO. Particle swarms have also been developed to track and optimize dynamic landscapes [44]. Particle swarm optimization has also been modified to perform permutation optimization, such as the N-queens problem [45], by defining particles as permutations of a group of unique values and updating velocities based upon the similarity of two particles. The permutations of the particles change at a random rate defined by their velocities. Clustering the population into several swarms has been extensively studied. Stereotyping of the particles was investigated [46], in which substituting cluster centers for the personal best positions showed better performance, suggesting that PSO is more effective when individuals are attracted toward the centers of their own clusters.
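The update rule of Equation (2.1) can be expressed directly in code. The sketch below is a minimal single-particle illustration (the function name and the toy starting values are illustrative, not from the original papers):

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update per Equation (2.1) for a single particle.
    w is the momentum coefficient, c1 the personal (cognitive) coefficient,
    and c2 the social coefficient."""
    new_v, new_x = [], []
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()   # uniform in [0, 1]
        vd = (w * v[d]
              + c1 * r1 * (pbest[d] - x[d])          # pull toward own best
              + c2 * r2 * (gbest[d] - x[d]))         # pull toward swarm best
        new_v.append(vd)
        new_x.append(x[d] + vd)
    return new_x, new_v
```

In a full optimizer this step runs once per particle per iteration, with pbest and gbest refreshed from the objective values after every move.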
Al-Kazemi and Mohan divided the population into two sets at any given time, one set moving toward the global best while the other moves in the opposite direction, by selecting appropriate fixed coefficient values for each set [47]. After some iterations, if the global best does not improve, the particles switch groups. Baskar and Suganthan introduced a concurrent PSO consisting of two swarms searching concurrently for a solution, with frequent passing of information, namely the global bests of the two swarms [48]. After each exchange, the two swarms track the better global best found. One of the swarms used regular PSO while the other used the Fitness-to-Distance Ratio PSO [49]. Their approach improved the performance over both methods in solving single objective optimization problems. El-Abd and Kamel added a two-way flow of information between two swarms, improving its performance [50]. In their algorithm, when exchanging the best particle between two swarms, this particle is used to replace the worst particle in the other swarm. The two swarms perform a fixed number of iterations, and then the best particle inside each swarm replaces the worst particle in the other swarm only if it has a better fitness. This makes it possible for both swarms to acquire new information from the other swarm's experience. Krohling et al. proposed a coevolutionary PSO in which two populations of PSO are involved [51]. One PSO runs for a specified number of iterations while the other remains static and serves as its environment. At the end of such a period, values obtained in previous cycles have to be reevaluated according to the new environment before evolution resumes. Particle swarm optimization has been widely applied to multiobjective optimization problems (MOPs), in variants called multiobjective particle swarm optimization (MOPSO), to find a diverse set of potential solutions, known as the Pareto front. Several algorithms have extended PSO to handle the diversity issue in MOPs. Parsopoulos et al.
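The two-way best-for-worst exchange described for [50] can be sketched as below; the particle encoding as (fitness, position) tuples and the function name are illustrative assumptions, with minimization assumed.

```python
def exchange_best_for_worst(swarm_a, swarm_b):
    """Two-way information exchange: each swarm's best particle overwrites
    the other swarm's worst particle, but only if it has better (lower)
    fitness. Swarms are lists of (fitness, position) tuples; both bests are
    captured before either swap so the exchange is simultaneous."""
    best_a, best_b = min(swarm_a), min(swarm_b)
    for best, dst in ((best_a, swarm_b), (best_b, swarm_a)):
        worst_i = dst.index(max(dst))       # index of the worst particle
        if best[0] < dst[worst_i][0]:       # only replace with a fitter one
            dst[worst_i] = best
```

Taking both bests before modifying either swarm matters: otherwise a migrant could immediately bounce back and duplicate itself in its swarm of origin.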
[52] introduced the vector evaluated particle swarm optimizer (VEPSO) to solve multiobjective problems. VEPSO is a multi-swarm variant of PSO in which each swarm is evaluated using only one of the objective functions of the problem under consideration, and the information it possesses about this objective function is communicated to the other swarms through the exchange of their best experiences. In VEPSO, the velocity of the particles in each swarm is updated using the best previous position (the global best) of another selected swarm. Selection of this swarm in the migration scheme can be either random or in sequential order. Ray and Liew [53] used Pareto dominance and combined concepts of evolutionary techniques with the particle swarm. This algorithm uses crowding distance to preserve diversity. Hu and Eberhart [54], in their dynamic neighborhood PSO, proposed an algorithm that optimizes only one objective at a time; the algorithm may be sensitive to the order in which the objective functions are optimized. Fieldsend and Singh [55] proposed an approach in which an unconstrained elite archive stores the nondominated individuals found during the search process. The archive interacts with the primary population in order to define local guides. Mostaghim and Teich [56] introduced a sigma method in which the best local guides for each particle are adopted to improve the convergence and diversity of PSO. Li [57] adopted the main ideas of NSGA-II into the PSO algorithm. Coello Coello et al. [58], on the other hand, proposed an algorithm using a repository for the nondominated particles along with an adaptive grid to select the global best of PSO. The algorithms proposed to solve MOPs using PSO are based upon promoting the nondominated particles at any given time, not exploiting the information of all particles in the population. Many MOPSO paradigms focus on methods of selecting the global best [53, 55-56, 58-64] or the personal best [65].
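The sequential variant of the VEPSO migration scheme amounts to each swarm steering by the global best of the next swarm in a ring; a minimal sketch (the ring ordering and function name are assumptions for illustration):

```python
def vepso_guides(swarm_gbests):
    """For each swarm i, return the guide used in its velocity update:
    the global best of the next swarm in a ring (sequential migration).
    A random scheme would instead pick the donor swarm at random."""
    n = len(swarm_gbests)
    return [swarm_gbests[(i + 1) % n] for i in range(n)]
```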
Most MOPSOs adopt constant values for the momentum and accelerations; however, some MOPSOs use simple dynamics to change the parameters. Indeed, one of the difficulties of PSO and/or MOPSO is tuning the right values for the momentum and the personal and global accelerations in order to get the best results on different test functions. Hu and Eberhart [54] in their dynamic neighborhood MOPSO model, and also Hu et al. [66] in the MOPSO with extended memory, adopted a random number in the range (0.5, 1) as a varying momentum; however, both the personal and global accelerations are constant values. Sierra and Coello Coello [62], in their crowding- and dominance-based MOPSO, used a random value in the range (0.1, 0.5) for the momentum and random values in the range (1.5, 2.0) for the personal and global accelerations. They adopted this scheme to bypass the difficulty of fine-tuning these parameters for each test function. Zhang et al. [64] introduced an intelligent MOPSO based upon the Agent-Environment-Rules model of artificial life. In their model, along with adopting an immunity clonal operator, the momentum was decreased linearly from 0.6 to 0.2, but the personal and global accelerations remained constant. Li [67] proposed an MOPSO based upon a max-min fitness function. In his model, while the personal and global accelerations were set constant, the momentum was gradually decreased from 1.0 to 0.4. Zhang et al. [68] adopted a linearly decreasing momentum from 0.8 to 0.4 for their MOPSO algorithm; however, the personal and global accelerations were kept fixed. Mahfouf et al. [69] introduced an adaptive weighted MOPSO which included adaptive momentum and acceleration. Through a comparison study with other well-behaved algorithms, they demonstrated that the MOPSO search capability is enhanced by adding this adaptation. Ho et al. [63] noted the possible problem of selecting the personal and global accelerations independently and randomly.
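The linearly decreasing momentum schedules cited above (e.g., 0.8 down to 0.4 over the run, as in [68]) all share one formula; a generic sketch, with the endpoint values as defaults for illustration:

```python
def linear_momentum(t, t_max, w_start=0.8, w_end=0.4):
    """Momentum (inertia weight) decreased linearly from w_start at
    iteration 0 to w_end at iteration t_max."""
    return w_start - (w_start - w_end) * t / t_max
```

Early iterations then favor exploration (large momentum), later iterations favor exploitation (small momentum).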
They mentioned that, because of its stochastic nature, both may be too large or too small. In the former case, both personal and global experiences are overused, and as a result the particle is driven too far away from the optimum. In the latter case, both personal and global experiences are not fully used, and as a result the convergence speed of the algorithm is reduced. They used a sociobiological activity, hunting, to ensure that individuals balance the weight of their own knowledge against the group's collective knowledge. In other words, the personal and global accelerations are related to each other: when one acceleration is large, the other should be small, and vice versa. Using this concept, they modified the main PSO equation, Equation (2.1), to include a dependent acceleration and momentum [63]. Particle swarm optimization algorithms have been successfully developed to solve constrained optimization problems. Hu and Eberhart generated particles in PSO until the algorithm could find at least one particle in the feasible region and then used it to find the best personal and global particles [70]. Parsopoulos and Vrahatis used a dynamic multi-stage penalty function to handle the constraints [71]. The penalty function consisted of a weighted sum of all constraint violations, with each constraint having a dynamic exponent and a multi-stage dynamic coefficient. A comparison of the feasible-solution-preserving method [70] and the dynamic penalty function [71] demonstrated that the convergence rate of the dynamic penalty function algorithm was faster than that of the feasible solution method [72]. Hu et al. modified the PSO mechanism to solve constrained optimization problems: PSO starts with a group of feasible solutions, and a feasibility function is used to check whether newly explored solutions satisfy all the constraints. Only feasible solutions are kept in the memory [73].
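A dynamic penalty in the spirit of [71] adds an iteration-dependent weight to the summed constraint violations. The sketch below is a simplified illustration: the growth law h(k) = k*sqrt(k) and the fixed exponent gamma are assumed parameter choices, not the exact multi-stage schedule of the original paper.

```python
import math

def penalized(f_val, violations, k, gamma=2.0):
    """Objective value plus an iteration-dependent penalty: h(k) times the
    sum of constraint violations raised to gamma. Negative entries in
    `violations` mean the constraint is satisfied and contribute nothing."""
    h_k = k * math.sqrt(k)                       # penalty weight grows with k
    penalty = sum(max(0.0, q) ** gamma for q in violations)
    return f_val + h_k * penalty
```

Because h(k) grows with the iteration counter k, infeasible solutions are tolerated early (exploration) and punished ever more harshly as the run progresses.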
Linearly constrained optimization problems are the basis for a modified version of PSO in which the movement of the particles in the vector space is mathematically guaranteed, by the velocity and position update mechanism, to always find at least a local optimum [74]. In the constrained PSO of [75], particles that satisfy the constraints move to optimize the objective function, while particles that violate the constraints move in order to satisfy them. Krohling and Coelho adopted a Gaussian distribution instead of a uniform distribution for the random weights of the personal and global terms of the PSO mechanism to solve constrained optimization problems formulated as min-max problems. They used two PSO populations simultaneously: the first PSO focuses on evolving the variable vector while the vector of Lagrangian multipliers is kept frozen, and the second PSO concentrates on evolving the Lagrangian multipliers while the first population is kept frozen. The use of the normal distribution for the stochastic parameters of PSO seems to provide a good compromise between a high probability of small amplitudes around the current points, i.e., fine-tuning, and a small probability of large amplitudes, which may cause the particles to move away from the current points and escape from local optima [76]. In master-slave PSO [77], the master swarm optimizes the objective function while the slave swarm focuses on constraint feasibility. Particles in the master swarm only fly toward the current better particles in the feasible region; they do not fly toward current better particles in the infeasible region. The slave swarm is responsible for searching for feasible particles by flying through the infeasible region. Particles in the slave swarm only fly toward current better particles in the infeasible region; they do not fly toward current better particles in the feasible region.
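Swapping the uniform random weights of Equation (2.1) for Gaussian ones, as in [76], is a one-line change to the velocity update. The sketch below uses the absolute value of a standard normal as an assumed form of the Gaussian weight; the function name and coefficient values are illustrative.

```python
import random

def gaussian_pso_velocity(x, v, pbest, gbest, w=0.7):
    """Velocity update with |N(0, 1)| random weights on the personal and
    social terms instead of the usual uniform weights (sketch of the idea
    in [76]): mostly small steps, occasionally a large escaping step."""
    out = []
    for d in range(len(x)):
        g1 = abs(random.gauss(0.0, 1.0))
        g2 = abs(random.gauss(0.0, 1.0))
        out.append(w * v[d] + g1 * (pbest[d] - x[d]) + g2 * (gbest[d] - x[d]))
    return out
```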
The feasible/infeasible leaders from each swarm are then communicated to lead the other swarm. By exchanging flight information between swarms, the algorithm can explore a wider solution space. Zheng et al. adopted an approach that congregates neighboring particles in PSO to form multiple swarms in order to explore isolated, long, and narrow feasible spaces [78]. They also applied a dynamic mutation operator with a dynamic mutation rate to drive the particles into the feasible region more frequently. For constraint handling, a penalty function was adopted that reflects how far an infeasible particle is located from the feasible region. Saber et al. [79] introduced a version of PSO for constrained optimization problems in which the velocity update mechanism uses a sufficient number of promising vectors to reduce randomness for better convergence. The velocity coefficient in the position update equation is a dynamic rate depending on the error and the iteration. They also reinitialized idle particles, i.e., particles that had not improved for some iterations. Li et al. [80] proposed a dual PSO with stochastic ranking to handle the constraints. One regular PSO evolves simultaneously with a genetic PSO, a discrete version of PSO including a reproduction operator. The better of the two positions generated by these two PSOs is then selected as the updated position. Flores-Mendoza and Mezura-Montes [81] used the Pareto dominance concept as a constraint handling technique in a bi-objective space, with one objective being the sum of the inequality constraint violations and the other being the sum of the equality constraint violations, in order to promote a better approach to the feasible region. They also adopted a decaying parameter control for the constriction factor and global acceleration of PSO to prevent premature convergence and to advance the exploration of the search space. Ting et al.
[82] introduced a hybrid heuristic consisting of PSO and a genetic algorithm to tackle the constrained optimization problem of load flow. They adopted two-point crossover, mutation, and roulette-wheel selection from genetic algorithms along with regular PSO to generate the new population space. Liu et al. [83] incorporated a discrete genetic PSO with differential evolution (DE) to enhance the search process, in which both the genetic PSO and DE update the position of each individual at every generation; the better position is then selected. Particle swarm optimization algorithms have also been effectively developed to solve dynamic optimization problems (DOPs). Carlisle and Dozier [84] adjusted the PSO mechanism to prevent making position/velocity decisions according to outdated memory, via periodic resetting: particles periodically replace their pbest vector with their current position, forgetting their past experiences. Eberhart and Shi [44] proposed that for a small perturbation the swarm can be initialized from the old population, while a large perturbation needs reinitialization. In the detection and response paradigm [85], the gbest and the second global best are reevaluated to detect changes; the positions of all particles are then re-randomized to respond to the change. The charged swarm avoids collisions among particles based upon the force between electric charges, which is inversely proportional to the distance squared [86]. Atomic PSO [87] and quantum PSO [88] follow the structure of the chemical atom, including a cloud of electrons randomly orbiting at a specific radius around the nucleus. An anti-convergence operator [89] assists interaction among swarms. In addition, an exclusion operation defines a radius around the best solution of each swarm; swarms that come close compete with each other in order to promote diversity. The winner, the swarm with the best function value at its swarm attractor, remains, while the loser is reinitialized in the search space [89].
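The detection half of the detection-and-response paradigm reduces to re-evaluating remembered solutions: if a stored fitness no longer matches a fresh evaluation, the landscape has moved. A minimal sketch (function name and tolerance are illustrative assumptions):

```python
def environment_changed(f, gbest, gbest_val, second_best, second_val,
                        tol=1e-9):
    """Detect a landscape change by re-evaluating the stored gbest and the
    second-best solution against their remembered objective values; a
    change is flagged when either fresh evaluation disagrees."""
    return (abs(f(gbest) - gbest_val) > tol or
            abs(f(second_best) - second_val) > tol)
```

Once a change is flagged, the response step (re-randomizing positions, resetting memories) can be triggered.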
Swarm birth and death [90] was proposed by allowing multiple swarms to regulate their size, bringing new swarms into existence or diminishing redundant ones. This dynamic swarm size can be an alternative to the anti-convergence and exclusion operators in the PSO mechanism. In partitioned hierarchical PSO for dynamic optimization problems [91], the population is partitioned into tree-form sub-hierarchies for a limited number of iterations after a change is detected. These sub-hierarchies continue to search independently for the optimum, resulting in a wider spread of the search process after the change has occurred. The topmost level of the tree-form hierarchy, which contains the current best particle, does not change, but all lower sub-hierarchies (sub-swarms) reinitialize their positions and velocities and reset their personal best positions. The sub-hierarchies are rejoined after a predefined number of iterations. By adopting a dynamic macro-mutation operator [92], PSO is able to maintain diversity throughout the search process in order to solve DOPs. Every coordinate of each particle undergoes an independent mutation with a dynamic probability, which takes its highest value when a change occurs in the dynamic landscape and gradually decreases until the next change takes place. The unified PSO, in which the exploration and exploitation terms of the PSO mechanism are unified through a unification factor, has also been adopted for solving DOPs [93]. Zhang et al. [94] proposed a direct relation between the inertia weight of a particle and the change: in their model, the new gbest and pbest of each particle affect the inertia weight of the particle whenever a change in gbest or pbest occurs. Pan et al. [95] modified the PSO paradigm using a probability-based movement of particles, built upon the concept of energy change probability in Simulated Annealing (SA).
A particle moves to the next position computed through the traditional PSO heuristic only with a specific probability that depends exponentially on the difference between the objective values of the current and next iterations. In species-based PSO [96], the population is divided into several swarms, each surrounding a dominating particle called a seed, identified from the objective function values of the entire population. A new seed must not fall within the predefined radius of any previously found seed, in order to promote diversity. The seeds are then selected as the neighborhood bests for the different swarms. In multi-strategy ensemble PSO [97], particles are divided into two parts: part I uses a Gaussian local search to quickly seek the global optimum in the current environment, while part II uses differential mutation to explore the search space. The positions of particles in part II do not follow the traditional PSO mechanism; instead, each particle in part II is determined by a particle in part I through a mutation strategy. Liu et al. [98] introduced a modified PSO to solve DOPs in which many compound particles exist. Each compound particle consists of three single particles equilaterally distanced from each other in a triangular shape. A special reflection scheme is proposed to explore the search space more comprehensively, in which the position of the worst of the three particles in the compound is replaced with its reflection. In each compound particle, after the reflection is performed, a representative among the three particles is probabilistically chosen based upon the objective function values and the distances from the other two member particles. The representative member particles then participate in the PSO update mechanism. The two non-representative particles also move the same distance and direction as the representative particle in order to preserve the valuable information.
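The seed selection of species-based PSO [96] described above can be sketched directly: scan particles from best to worst, and let a particle found a new species only if it lies outside the radius of every seed found so far. The encoding as (fitness, position) tuples and minimization are assumptions for illustration.

```python
def find_species_seeds(particles, radius):
    """Species seed selection: particles are (fitness, position) tuples
    (lower fitness is better). Each particle, taken in fitness order,
    becomes a new seed only if it is farther than `radius` from every
    previously accepted seed."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    seeds = []
    for fit, pos in sorted(particles):
        if all(dist(pos, s[1]) > radius for s in seeds):
            seeds.append((fit, pos))
    return seeds
```

Each returned seed then serves as the neighborhood best for the swarm that forms around it.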
Recently, a computational framework known as the cultural algorithm (CA) has been developed by Reynolds, based upon a dual inheritance system in which information exists at two different levels: the population level and the belief level [3]. Culture is defined as a store of information that does not depend on the individuals who generated it and can potentially be accessed by all society members [3]. CA is an adaptive evolutionary computation method driven by cultural evolution and learning in agent-based societies [3, 99]. CA consists of evolving agents whose experiences are gathered into a belief space consisting of various forms of symbolic knowledge. CA has shown its ability to solve different types of problems [3, 99-107], among which CAEP (cultural algorithm combined with evolutionary programming) has shown successful results in solving MOPs [107]. Researchers have identified five basic sections of knowledge stored in the belief space, based upon the literature in cognitive science and semiotics: situational knowledge, normative knowledge, topographical knowledge [105], domain knowledge, and history knowledge [106]. Situational knowledge is a set of exemplary individuals useful for the experiences of all individuals; it guides all individuals to move toward the exemplars. Normative knowledge consists of a set of promising ranges; it provides standard guiding principles within which individual adjustments can be made, and individuals jump into the good ranges using it. Topographical or spatial knowledge keeps track of the best individuals found so far in the promising region; it leads all individuals toward the best performing cells in the search space [105]. Domain knowledge adopts information about the problem domain to lead the search; knowledge about the landscape contour and its related parameters guides the search process.
Historical or temporal knowledge keeps track of the history of the search process and records key events in the search, which might be either a considerable move in the search space or the discovery of a landscape change. Individuals use the history knowledge for guidance in selecting a move direction. Domain knowledge and history knowledge are useful for dynamic landscape problems [106]. The knowledge can swarm between different sections of the belief space [108-110], which in turn affects the swarming of the population. Becerra and Coello Coello [104] proposed the cultured differential evolution for constrained optimization. The population space in their study was differential evolution (DE), while the belief space consisted of situational, topographical, normative, and history knowledge. The variation operator of DE was influenced by the knowledge sources of the belief space. Yuan et al. [111] introduced a chaotic hybrid cultural algorithm for constrained optimization in which the population space was DE and the belief space included normative and situational knowledge. They incorporated a logistic map function, using its chaotic sequence for better convergence of DE. Tang and Li [112] proposed a cultured genetic algorithm for constrained optimization problems by introducing a triple-space cultural algorithm. The triple space includes the belief space and the population space, along with an anti-culture population consisting of individuals that disobey the guidance of the belief space and move away from the individuals guided by it. The effect of this disobeying behavior, enhanced by some mutation operations, makes the algorithm faster and less prone to premature convergence, by rewarding the most successful individuals and punishing the unsuccessful population.
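The acceptance/influence loop of a cultural algorithm can be pictured with just two of the knowledge sections above. The class below is a deliberately minimal sketch, not any published CA implementation: the range width of 1.0, the 0.5 pull toward the exemplar, and all names are illustrative assumptions.

```python
class BeliefSpace:
    """Minimal belief space holding situational knowledge (one exemplar)
    and normative knowledge (a promising range per dimension)."""
    def __init__(self, dim):
        self.exemplar = None                      # situational knowledge
        self.lo = [float("-inf")] * dim           # normative knowledge:
        self.hi = [float("inf")] * dim            #   promising ranges

    def accept(self, best_x, best_fit):
        """Acceptance step: absorb the population's best into the beliefs."""
        if self.exemplar is None or best_fit < self.exemplar[0]:
            self.exemplar = (best_fit, list(best_x))
        for d, xd in enumerate(best_x):           # re-center the promising
            self.lo[d] = xd - 1.0                 # range (width illustrative)
            self.hi[d] = xd + 1.0

    def influence(self, x):
        """Influence step: clamp an individual into the promising ranges
        (normative), then pull it halfway toward the exemplar (situational)."""
        out = [min(max(xd, self.lo[d]), self.hi[d]) for d, xd in enumerate(x)]
        if self.exemplar:
            out = [o + 0.5 * (e - o) for o, e in zip(out, self.exemplar[1])]
        return out
```

A CA generation then alternates: evolve the population, accept its best experiences into the belief space, and let the belief space influence the next variation step.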
CHAPTER III SOCIETY AND CIVILIZATION FOR OPTIMIZATION 3.1 Introduction Computational intelligence approaches based upon the psychosocial behavior of either human or animal societies have been the subject of emerging research for less than a decade. Some research in this area has focused on optimization in the spirit of particle swarm intelligence [1] or the ant colony system [2]. Particle swarm optimization imitates the collaborative behavior of birds flying together and exchanging information, while the ant colony system is based on the fact that individual ants interact with each other through their pheromone trails. Additionally, Ursem [4] introduced another idea based on the relationship between different nations and how countries interact in order to optimize a profit function. More recently, in an attempt to mimic the interactional behavior between societies and within a civilization, the social algorithm was proposed [5, 113]. The social algorithm adopts the inter-society and intra-society relationships among the individuals and the leaders to solve single objective optimization problems. The whole population of individuals, called the civilization, is clustered into different societies based on the Euclidean closeness of the individuals. The performance of the individuals is the measure used to decide which individuals are the leaders of a society. The rest of the individuals follow them in a way that improves themselves, which leads to migration (intra-society interaction). From the civilization viewpoint, the leaders of the societies improve themselves by migrating toward the best-performing leaders, who are the civilization leaders (inter-society interaction) [114-115]. Ray and Liew have successfully demonstrated the performance of their model on single objective optimization problems [5]. Their model appears to be a competitive alternative paradigm to the particle swarm heuristic.
Their model mostly throws the information of the non-leader individuals away and replaces it with that of the corresponding leaders. What is proposed in this chapter involves two aspects. First, using the information of the individual, the individual's talent is computed, which equips each individual with a different ability to invoke intra- or inter-society interaction. Second, different societies might have different measures of collective behavior, called the liberty rate. In real sociological relationships, a democratic society has more flexibility and freedom to choose a better environment to live in. In contrast, a dictatorial society discourages individuals from changing their environment in order to reach the leaders. While individuals in a liberal society can migrate easily to be closer to the leaders, individuals in a less liberal society have difficulty moving near the leaders. Hence, the higher the liberty rate of a society, the more flexibly an individual in that society can move. The chapter continues with Section 3.2 elaborating the basics of social algorithms, including the motivation, how to build the societies in a civilization, how to identify the leaders of such societies, and how to migrate intra- or inter-socially. It also proposes a novel modification based on the idea of using more information from the middle-class individuals. In Section 3.3 the proposed algorithm is applied to single objective optimization problems to test its efficiency. In Section 3.4, concluding remarks on applying the social algorithm to solve optimization problems are discussed.

3.2 Social-based Algorithm for Optimization

In this section, the details of the social-based algorithm for solving single objective optimization problems are reviewed, and then the proposed methods for improving this heuristic are introduced.
The general single objective optimization problem has the following form:

Minimize f(x), (3.1)

subject to g_j(x) ≥ 0, j = 1, ..., q, (3.2)

h_j(x) = 0, j = q+1, ..., m, (3.3)

where q is the number of inequality constraints, m is the total number of inequality and equality constraints, and x is the n-dimensional decision space variable. Because of the limitations of computer simulations and the finite accuracy of the variables, it is much easier to check the validity of an inequality than that of an equality. As suggested by research in population-based heuristics dealing with constraint handling, each equality constraint h_j(x) = 0 is transformed into a set of two simultaneous inequalities, h_j(x) + ε ≥ 0 and ε − h_j(x) ≥ 0, where ε is an infinitesimal positive constant representing the accuracy of the algorithm. For a given ε, the algorithm should proceed in a way that the condition |h_j(x)| ≤ ε is satisfied, which substitutes for h_j(x) = 0 for the sake of accuracy. Therefore each equality constraint transforms into two inequality constraints, resulting in a total number of inequality constraints s = q + 2(m − q):

g_j(x) ≥ 0, j = 1, ..., q, (3.4)

h_j(x) + ε ≥ 0 and ε − h_j(x) ≥ 0, j = q+1, ..., m. (3.5)

Now assume there are N individuals in the population as potential solutions for the constrained optimization problem. A constraint satisfaction factor, c_ij, is defined to quantify how much the jth constraint (j = 1, ..., s) is dissatisfied by the ith individual, x_i (i = 1, ..., N), and is formulated as follows:

c_ij = 0 if g_j(x_i) ≥ 0; c_ij = g_j(x_i) otherwise, i = 1, ..., N, j = 1, ..., s. (3.6)

Based on this definition, when a constraint is satisfied by an individual, the assigned value of the constraint violation factor, c_ij, is zero. If the jth constraint is not met (g_j(x_i) < 0) by the ith individual, the negative value g_j(x_i) is assigned as the constraint violation factor c_ij to show how much the constraint is violated. A ranking scheme is then performed for each constraint so as to assign the rank of one to the individuals who satisfy that constraint the most.
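As a concrete illustration, the equality relaxation of Equation (3.5) and the satisfaction factor of Equation (3.6) can be sketched in a few lines of Python. The helper names below are illustrative only; the dissertation does not prescribe an implementation.

```python
# Sketch of the constraint handling described above (hypothetical helper names).
# Each equality constraint h(x) = 0 is relaxed into two inequalities
# h(x) + eps >= 0 and eps - h(x) >= 0; every inequality g(x) >= 0 then yields
# a satisfaction factor: 0 when satisfied, the (negative) value g(x) otherwise.

def relax_equalities(inequalities, equalities, eps=1e-4):
    """Return the combined list of s = q + 2(m - q) inequality constraints."""
    relaxed = list(inequalities)
    for h in equalities:
        relaxed.append(lambda x, h=h: h(x) + eps)   # h(x) >= -eps
        relaxed.append(lambda x, h=h: eps - h(x))   # h(x) <= eps
    return relaxed

def satisfaction_factor(g, x):
    """c_ij of Equation (3.6): zero when satisfied, negative violation otherwise."""
    value = g(x)
    return 0.0 if value >= 0 else value
```

With this convention, a feasible individual contributes a factor of exactly zero for every constraint, so feasibility can be checked by testing whether all factors vanish.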
Therefore, for the jth constraint, individuals with the highest c_ij (considering its sign) will be assigned a rank of one, individuals with the second highest c_ij (again considering the sign) will be assigned a rank of two, and so forth. After performing this non-dominated ranking scheme for all constraints, an N × s rank matrix is formed, in which rank one means that those individuals are non-dominated for a specific constraint [5]. It can be seen that if there are one or more feasible individuals for a specific constraint, those will be considered rank-one individuals. Figure 3.1 demonstrates the main flowchart of the social-based algorithm. The civilization is formed with individuals that are initialized as uniform random numbers. Each individual in the population of the civilization is then evaluated using its objective function value. The individuals are categorized into societies using an unsupervised classification algorithm proposed by Ray and Liew [5] according to their closeness to each other. Notice that the number of societies may vary over time. Then the leaders of each society are identified using a leader identification scheme, which will be discussed in Figure 3.2. Next, the individuals within the societies move toward the nearest leader in their society using a migration scheme that will be discussed in Figure 3.3.

Figure 3.1 Flowchart for social-based single objective optimization

At the global level, the leaders of the civilization are then identified through the same leader identification scheme, and the leaders of the societies move toward the global leaders of the civilization using the same migration scheme. This process continues until the termination criteria are met, i.e., the current iteration reaches a predefined maximum iteration. In Figure 3.2, a flowchart is depicted to explain the leader identification scheme.
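The per-constraint ranking just described can be sketched as follows. This is a minimal Python sketch, not the dissertation's implementation; note that all individuals sharing the highest satisfaction factor for a constraint (in particular, all feasible ones with factor zero) receive rank one.

```python
def rank_matrix(c):
    """Rank individuals per constraint from satisfaction factors c[i][j].

    c is an N x s table (0 = satisfied, negative = violated): individuals
    sharing the highest value for a constraint get rank 1, the next highest
    distinct value gets rank 2, and so on.
    """
    n, s = len(c), len(c[0])
    ranks = [[0] * s for _ in range(n)]
    for j in range(s):
        # distinct factor values for constraint j, best (highest) first
        levels = sorted({c[i][j] for i in range(n)}, reverse=True)
        for i in range(n):
            ranks[i][j] = levels.index(c[i][j]) + 1
    return ranks
```

For example, two individuals that both satisfy a constraint share rank one for it, while a third with a negative factor is ranked two for that constraint.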
Figure 3.2 Flowchart for identifying leaders

As shown in this flowchart, a set of individuals is given to find their leaders. The leaders should be the best-behaving individuals considering both objective values and constraints. The objective function value and the constraint violation factors for each individual are computed using Equations (3.1) and (3.6), respectively. Through the non-dominated ranking scheme, the rank matrix is constructed. Leaders are identified among rank-one individuals whose objective function value is less than the average objective function value of all individuals in the given set. This means that if there are any feasible individuals, the best ones shall be selected according to their objective function values. There might be a situation in which there is no rank-one individual whose objective value is less than the average of all; in such a case, all rank-one individuals are simply assigned as leaders. The leader identification scheme is used at both the society and civilization levels. Figure 3.3 shows the details of the migration scheme used at both the intra-society and inter-society levels. Assume that an n-dimensional individual y is given along with a set of L leaders. Before applying the migration scheme, each dimension of the individual must be normalized as follows:

ŷ_k = (y_k − y_k^min) / (y_k^max − y_k^min), k = 1, ..., n, (3.7)

where y_k is the kth dimension of the n-dimensional decision variable (individual), which has the lower limit y_k^min and the upper limit y_k^max, respectively. The normalized individual, ŷ, will be in the range of [0, 1]. Next, the Euclidean distance between the normalized individual, ŷ, and the lth (normalized) member of the leader set, l̂_l, is computed as:

d_l = ( Σ_{k=1}^{n} (ŷ_k − l̂_{l,k})² )^{1/2}. (3.8)

Next, the closest leader to the individual, l̂*, is selected as the one whose distance is:

d_min = min_{l=1,...,L} d_l.
(3.9)

Then, each dimension of the normalized individual is migrated toward the closest leader, using the lowest distance computed above and a normally distributed random value, as follows:

ŷ'_k = l̂*_k + N(0, σ) d_min, k = 1, ..., n, (3.10)

where N(0, σ) is a random number with a normal distribution with mean zero and a fixed standard deviation, σ, and ŷ'_k is the new location of the kth dimension of the individual. As the final step, the migrated position is rescaled back to the original scale as follows:

Figure 3.3 Flowchart on how to migrate individuals

y'_k = y_k^min + ŷ'_k (y_k^max − y_k^min), k = 1, ..., n. (3.11)

It should be noted that the migration scheme explained here is adopted at both the intra-society level, migrating the individuals toward their society leaders, and the inter-society level, migrating society leaders toward the civilization leaders. Therefore, the performance of individuals improves within each society by migrating toward the closest society leader, and, in a global view, the performance of society leaders also improves by migrating toward the best-behaving leader in the whole civilization.

3.2.1 Proposed Modifications

In this subsection, two proposed modifications are presented. The social-based algorithm has shown its promise on some single objective optimization problems [5]. This model mostly throws the information of the non-leader individuals away and replaces it with that of the corresponding leaders, as shown in Equation (3.10). However, in real life it occurs differently: individuals keep their characteristics while imitating some good examples. In a real society, average individuals do not completely throw their past behavior away, but change it continuously, keeping the history of their behavior. Keeping the history of the individuals in the local search (intra-society interaction) helps the individuals retain information that might be useful later. In the global search, the algorithm remains leader-centric, preventing the search from diverging chaotically.
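The full migration step (normalize, find the closest leader, jump with distance-scaled Gaussian noise, rescale) can be sketched in Python as below. This is a sketch under assumptions: the helper name `migrate` is illustrative, the additive noise form of Equation (3.10) is a reconstruction, and out-of-range results are clipped to the box limits, a detail the text does not specify.

```python
import math
import random

def migrate(y, leaders, lo, hi, sigma=1.0, rng=random):
    """One leader-centric migration step in the spirit of Equations (3.7)-(3.11).

    y: individual; leaders: list of leader vectors; lo/hi: per-dimension limits.
    The noise form used for Equation (3.10) is an assumption of this sketch.
    """
    n = len(y)
    norm = lambda v: [(v[k] - lo[k]) / (hi[k] - lo[k]) for k in range(n)]
    y_hat = norm(y)                                   # Equation (3.7)
    leaders_hat = [norm(l) for l in leaders]
    dists = [math.dist(y_hat, l_hat) for l_hat in leaders_hat]  # Eq. (3.8)
    d_min = min(dists)                                # Equation (3.9)
    closest = leaders_hat[dists.index(d_min)]
    # Equation (3.10): replace with the closest leader plus scaled Gaussian noise
    migrated = [closest[k] + rng.gauss(0.0, sigma) * d_min for k in range(n)]
    # Equation (3.11): rescale back to the original range, clipped to the box
    return [min(hi[k], max(lo[k], lo[k] + migrated[k] * (hi[k] - lo[k])))
            for k in range(n)]
```

Note that the noise amplitude shrinks with d_min, so an individual already sitting on its nearest leader is left in place.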
Therefore, the exploitation in the intra-society migration is based on the importance of the previous location of the individual, while in the inter-society migration the rule remains leader-centric. Figure 3.4 shows the pseudocode for the individuality importance in intra-society migration. An individual and a set of leaders are given. The individual is normalized using Equation (3.7), the Euclidean distance between the normalized individual and each member of the leader set is computed using Equation (3.8), and the lowest distance is computed as in Equation (3.9). Then the normalized individual is migrated considering individuality importance as follows:

ŷ'_k = ŷ_k + N(0, σ) d_min, k = 1, ..., n. (3.12)

Figure 3.4 Pseudocode for individuality importance in intra-society migration:
1. An individual and a set of leaders are given.
2. Normalize the individual in each of its dimensions using its maximum and minimum limits, using Equation (3.7).
3. Compute the Euclidean distance between the normalized individual and each member of the leader set.
4. Migrate the normalized individual considering individuality importance, using Equation (3.12).
5. Rescale each dimension of the migrated individual back into its original scale, using Equation (3.11).

Finally, each dimension of the migrated individual should be rescaled back into its original scale using Equation (3.11). In another modification scheme, the liberty rate is proposed. A democratic society has more flexibility and freedom to choose a better situation to live in. In contrast, a dictatorial society restricts changing the situation and reaching the leaders. While individuals in a liberal society can migrate easily to be closer to the leaders, individuals in a less liberal society experience difficulty moving near the leaders. So giving different societies different preferences for approaching the leaders improves convergence to the optimized solution. Different societies therefore have different measures of collective behavior, called the liberty rate.
The higher the liberty rate of a society, the more flexibly individuals in that society can move. The liberty rate of a society is proposed as the relative ratio of the average objective value of the society over the average objective value of the civilization, formulated as follows:

LR_i = κ F_i / F_c, (3.13)

where κ is a predefined normalization constant and F_i is the measure of the collective behavior of the ith society, defined as the average of the objective values of the individuals who belong to the ith society:

F_i = (1/N_i) Σ_{x_j ∈ society i} f(x_j), (3.14)

where N_i is the number of individuals in the ith society. F_c, the measure of the civilization's collective behavior, is also defined as:

F_c = (1/N) Σ_{j=1}^{N} f(x_j). (3.15)

Then, to migrate each individual in the ith society, a liberty-based migration is performed as follows:

ŷ'_k = ŷ_k + LR_i N(0, σ) d_min, k = 1, ..., n. (3.16)

3.3 Simulation Results

Spring Design is a mechanical design problem [116] to minimize the weight of a tension/compression spring, as shown in Figure 3.5. There are nonlinear inequality constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and on the design variables. The design variables are the mean coil diameter, x_1, the wire diameter, x_2, and the number of active coils, x_3, along with four inequality constraints. The mathematical formulation of the problem is as follows:

Minimize f(x) = (x_3 + 2) x_1 x_2², (3.17)

subject to (3.18)

g_1(x) = x_1³ x_3 / (71785 x_2⁴) − 1 ≥ 0,
g_2(x) = 1 − (4 x_1² − x_1 x_2) / (12566 (x_1 x_2³ − x_2⁴)) − 1 / (5108 x_2²) ≥ 0,
g_3(x) = 140.45 x_2 / (x_1² x_3) − 1 ≥ 0,
g_4(x) = 1 − (x_1 + x_2) / 1.5 ≥ 0,

with the following limits on the variables: 0.25 ≤ x_1 ≤ 1.3, 0.05 ≤ x_2 ≤ 2.0, 2 ≤ x_3 ≤ 15.

Figure 3.5 Schema for Spring Design problem [117]

Figure 3.6 demonstrates the simulation results for the Spring Design problem using the proposed modifications compared with the original algorithm. The population size is 30, which is 10 times the number of decision variables, as suggested in [5]. These results are obtained after 50 independent runs performed for all three algorithms. The effect of defining the liberty rate can be seen by comparing the two modifications.
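The liberty-rate mechanism of Equations (3.13)-(3.15) amounts to comparing each society's average objective value against the civilization's average. A small Python sketch, with an illustrative function name and κ as a free parameter, follows.

```python
def liberty_rates(society_fitness, kappa=1.0):
    """Liberty rate of Equation (3.13) for each society.

    society_fitness: one list of objective values per society.
    Returns kappa * (society average) / (civilization average), i.e.
    LR_i = kappa * F_i / F_c from Equations (3.13)-(3.15).
    """
    averages = [sum(f) / len(f) for f in society_fitness]           # F_i
    total = sum(sum(f) for f in society_fitness)
    count = sum(len(f) for f in society_fitness)
    civilization_avg = total / count                                # F_c
    return [kappa * a / civilization_avg for a in averages]
```

Each rate then multiplies the Gaussian step of the liberty-based migration in Equation (3.16), so societies with different collective performance take steps of different sizes.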
The convergence time and the best objective function value for the case in which both modifications are applied are improved compared with the original method. The comparison results are also shown in Table 3.1. It is noticeable that although the two modifications give better results for the best objective function, the algorithm is not robust, and the results for the worst and mean objective functions are not improved. For the original version, the standard deviation σ in Equation (3.10) was set to 1.

Figure 3.6 Comparison of the best objective function for the two proposed modifications: original model (blue), modified by Individuality Importance (green), and modified by Liberty Rate and Individuality Importance (red)

Table 3.1 Comparison of results for the Spring Design problem

Objective Value (kg·m)   Original Method   Individuality Importance   Liberty Rate and Individuality Importance
Best                     0.0464            0.0379                     0.0331
Mean                     0.0464            6.2388                     6.1516
Worst                    0.0464            32.2617                    31.6833

3.4 Discussions

In this chapter, two modifications have been suggested for the social-based algorithm. These modifications have been tested on a real-world benchmark problem: the Spring Design problem. The simulation results demonstrate that adding the two modifications improves the performance of the original algorithm, resulting in better best objective values. The first modified algorithm, the individuality-based social algorithm, outperforms the original social algorithm, while the liberty/individuality-based social algorithm outperforms both the original social algorithm and the individuality-based social algorithm in finding the best objective values. Both modified algorithms have a better migration policy than the original social algorithm.
The original algorithm is basically biased around the best-performing individual, which may result in settling into a local optimum, while both modified versions are based upon the individual's previous performance. The results of the modified versions of the social algorithm rest upon two hypotheses: one is that information from all individuals must be collected and exploited to migrate toward the best leader, while the other is that the rate of convergence in different societies is not necessarily the same and depends on the relative collective behavior of the individuals in the society with respect to the civilization. Indeed, the result in this case is improved because more weight is given to diversity in the search of the individual space. If we simply throw away all the non-leader individuals, we lose a lot of information that might be critical in the search process; however, gathering information from the non-leader individuals might add to the convergence time. The best objective values obtained by both modified versions are better than those of the original social algorithm; however, the worst and mean values are not better than the original algorithm, since we are keeping the diversity while evolving. This also implies room for further improvements in future research.

CHAPTER IV

DIVERSITY-BASED INFORMATION EXCHANGE FOR PARTICLE SWARM OPTIMIZATION

4.1 Introduction

Particle swarm optimization (PSO) is based on changing the positions and velocities of the particles in a manner that optimizes a goal function. PSO has demonstrated promising performance on many problems; yet its fast convergence often leads to premature convergence to local optima. The tradeoff between fast convergence and being trapped in local optima is even more critical in multimodal functions having many local optima very close to each other. In order to escape from the local optima and avoid premature convergence, the search for the global optimum should be diverse.
Many researchers have improved the performance of PSO by enhancing its ability with a more diverse search. Specifically, some have proposed to use multiple swarms, each running an independent PSO, and then exchange information among them. Exchanging information among clusters has also been adopted as an important design element in several computational methods. The distributed genetic algorithm [6] employs the GA mechanism to evolve several subpopulations in parallel. During frequent migration among subpopulations, some individuals from each subpopulation are sent to another subpopulation to replace other individuals based upon a replacement policy. In another algorithm, known as the society and civilization model [5], individuals from multiple societies cooperate with each other in order to enhance their performance. The migration in this model occurs at two levels: first, the migration of individuals inside each society toward the society leaders (intra-society level), and second, the migration of society leaders toward the civilization leaders (inter-society level). In this study, a method borrowed from the distributed genetic algorithm is employed to exchange information among multiple swarms in PSO. At regular intervals, each swarm prepares two sets of particles: one set of particles that must be sent to another swarm, and another set of particles that must be replaced by individuals from other swarms. To prepare these two sets of particles, a diversity measure is considered the primary criterion, instead of the performance of the particles alone. When particles are approaching a local optimum, several of them will have similar positional information; this redundant information is replaced by particles from other swarms. This algorithm also proposes a new paradigm to find each swarm's neighbors. The neighborhood between swarms is defined by the use of the Hamming distance between representatives of each pair of swarms.
The particles' movement in the space is based on one variation of PSO with three basic terms, each one leading the particles toward the best particle in their own swarm, in their swarm's neighborhood, and in the whole population. The structure of this chapter is organized as follows. Section 4.2 reviews the related studies in this field. In Section 4.3, the proposed algorithm is explained in detail and its main ideas are shown. In Section 4.4, the simulation of the proposed algorithm is performed on a set of hard benchmark problems. Section 4.5 summarizes the benefits of the proposed paradigm for PSO and outlines future work on multiobjective optimization problems, motivated by the nature of the diversity promotion proposed.

4.2 Review of Related Work

Kennedy and Eberhart [1] introduced particle swarm optimization, an algorithm based on imitating the behavior of flocking birds. It mimics the grouping of birds as particles, their random movement, and their regrouping to generate a model that can solve engineering optimization problems. Particles are characterized by their positions and velocities, which are updated using:

v_i(t+1) = w v_i(t) + c_1 r_1 (p_i − x_i(t)) + c_2 r_2 (p_g − x_i(t)), (4.1)
x_i(t+1) = x_i(t) + v_i(t+1),

where v_i is the velocity of the particle, x_i is the position of the particle, p_i is the best position ever experienced by each particle, and p_g is the best position ever attained among all particles. r_1 and r_2 are random numbers uniformly generated in the range of [0, 1]. c_1, c_2, and w are the predefined personal, social, and momentum coefficients, respectively. The main problem of PSO is its fast convergence to local optima. Later, Kennedy [46] introduced the stereotyping of particles, in which the substitution of cluster centers for the personal best positions showed appreciable improvement in PSO performance. His research suggested that PSO is more effective when individuals are attracted toward the centers of their own clusters. In multimodal problems, the search effort needs to be diverse in order to find the global optimum among a set of many local optima.
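The update of Equation (4.1) can be sketched for a single particle as follows. This is a minimal Python sketch; the coefficient defaults are typical textbook values, not the settings used in the dissertation's experiments.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=random):
    """One velocity/position update per Equation (4.1) for a single particle.

    x, v: current position and velocity; pbest, gbest: personal and global
    best positions; w, c1, c2: momentum, personal, and social coefficients.
    """
    new_v = [w * v[k]
             + c1 * rng.random() * (pbest[k] - x[k])   # pull toward p_i
             + c2 * rng.random() * (gbest[k] - x[k])   # pull toward p_g
             for k in range(len(x))]
    new_x = [x[k] + new_v[k] for k in range(len(x))]
    return new_x, new_v
```

In a full run this step is applied to every particle each iteration, with pbest and gbest refreshed from the evaluated objective values.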
The fast-converging behavior of PSO makes this issue critical for multimodal problems. To achieve a more diverse search, Al-Kazemi and Mohan [47] divided the population into two sets at any given time, one set moving toward the global best while the other moves in the opposite direction, by selecting appropriate fixed coefficient values for each set. After some iterations, if the global best is not improved, the particles switch their groups. Baskar and Suganthan [48], in their concurrent PSO, used two swarms to search concurrently for a solution along with frequent passing of information, namely the best position found by the two cooperating swarms. After each exchange, the two swarms had to track the better global best found. One of the swarms used regular PSO, and the other used the Fitness-to-Distance ratio PSO [49]. Their approach improved the performance over both methods in solving single objective optimization problems. El-Abd and Kamel [50] further improved the previous algorithm by adding a two-way flow of information between the two swarms. In their algorithm, when exchanging the best particle between two swarms, this particle is used to replace the worst particle in the other swarm. The two swarms perform a fixed number of iterations, and then the best particles inside each swarm replace the worst particles in the other swarm only if they have a better fitness. This makes it possible for both swarms to exchange new information from the other swarm's experience. Krohling et al. [51] proposed a coevolutionary PSO in which two populations of PSO are involved. One PSO runs for a specified number of iterations while the other remains static and serves as its environment. At the end of such a period, fitness values obtained in previous cycles have to be reevaluated according to the new environment before evolution restarts. Although these algorithms used information exchange among swarms, none of them adopted a specific paradigm based on promoting diversity when selecting and sending particles from one swarm to another.
On the other hand, one of the main concerns in multiobjective optimization problems (MOPs) is also to search for a diverse set of potential solutions, known as the Pareto front. There have been several algorithms extending PSO to handle the diversity issue in MOPs. Parsopoulos et al. [52] introduced the vector evaluated particle swarm optimizer (VEPSO) to solve multiobjective problems. VEPSO is a multi-swarm variant of PSO in which each swarm is evaluated using only one of the objective functions of the problem under consideration, and the information it possesses for this objective function is communicated to the other swarms through the exchange of their best experience. In VEPSO, the velocity of the particles in each swarm is updated using the best previous position of another selected swarm. The selection of this swarm in the migration scheme can be either random or in a sequential order. Ray and Liew [53] used Pareto dominance and combined concepts of evolutionary techniques with the particle swarm. This algorithm uses crowding distance to preserve diversity. Hu and Eberhart [54], in their dynamic neighborhood PSO, proposed an algorithm to optimize only one objective at a time; the algorithm may be sensitive to the order in which the objective functions are optimized. Fieldsend and Singh [55] proposed an approach in which an unconstrained elite archive stores the non-dominated individuals found along the search process. The archive interacts with the primary population in order to define local guides. Mostaghim and Teich [56, 60] introduced a sigma method in which the best local guides for each particle are adopted to improve the convergence and diversity of PSO. Li [57] adopted the main idea of NSGA-II into the PSO algorithm. Coello Coello et al. [58], on the other hand, proposed an algorithm using a repository for the non-dominated particles along with an adaptive grid to select the global best of PSO.
The algorithms proposed to solve MOPs using PSO are based upon promoting the non-dominated particles at any given time, not upon exploiting the information of all particles in the population. Information exchange through migration, used to increase the search ability of an algorithm, has appeared in some other innovative paradigms. Ray and Liew [5] introduced their society and civilization model for optimization in accordance with a simulation of social behavior. Individuals in a society interact with each other in order to improve. Such improvement is achieved through information acquisition from the better-performing individuals, or leaders, in that society. This intra-society interaction improves the individuals' performance, but cannot improve the leaders' performance. The leaders communicate externally with the leaders of other societies to improve. This inter-society communication leads to the migration of the leaders toward developed societies, which in turn moves the overall poor-performing societies toward better-performing ones. At first, the population is clustered into several mutually exclusive societies based on distances in the parametric space. Then the objective functions, along with the constraints (if any), lay down a ranking system to choose the leaders in each cluster, after which migration at the two levels takes place. The society and civilization model showed competitive results on single objective constrained optimization problems with respect to GAs. The concept of having multiple sets had been originally introduced and used in the distributed genetic algorithm (DGA) [6]. In DGA, the population is divided into several subpopulations, each running its own GA independently. At regular time intervals, inter-processor communication happens. During this migration stage, a proportion of each subpopulation is selected and sent to another subpopulation. The migrant individuals replace others based on a replacement policy.
In another kind of distributed evolutionary algorithm, Ursem [4] adopted his multinational evolutionary algorithm using a spatially separated model. He applied a fitness-topology function, instead of clustering, to decide on the relationship between a point and a cluster. The algorithm was designed to find all peaks of a multimodal function in unconstrained optimization problems. In DGA, there are different policies for the selection of migrants and the replacement of individuals within each of the subpopulations. Cantú-Paz [118] showed that sending the fittest individuals of the population and replacing individuals with low fitness produces the best results. Denzinger and Kidney [18] used a diversity measure to select individuals for migration. Power et al. [119] used a selection method based on a diverse set of individuals rather than the highly fit ones. The reason is to avoid sending similar information to another subpopulation. Sometimes the majority of individuals are located very close to each other, especially in the last steps of convergence. Therefore, by selecting the fittest individuals, similar individuals from a small area would be sent to the next subpopulation. In case the algorithm is likely to be trapped in a local optimum, this similar information is useless for diversifying the search and getting away from the local optimum. Instead, the basis is to choose a diversified list of individuals to send to the other GAs. The sending list is filled by the following individuals, in this order: (1) an average individual of the subpopulation as the representative of the population, (2) m individuals close to this representative whose fitness is better than the representative's, (3) m individuals close to this representative whose fitness is poorer than the representative's, and finally (4) the fittest individual in the subpopulation.
There is also a replacement list that is filled in the following order: (1) individuals having similar genetic information, by order of fitness, with the least fit ones being replaced before the better fit ones, and (2) individuals with the lowest fitness values. Their method was applied to single objective multimodal optimization and showed significantly better results when compared to the standard DGA with a send-best-replace-worst strategy.

4.3 Diversity-based Information Exchange among Swarms in PSO

The underlying principle of the proposed algorithm is the idea of exploiting the information of all particles in the population. The population is divided into P swarms, and each swarm performs a PSO paradigm. After some predefined number of iterations, the swarms exchange information based on a diversified list of particles. Each swarm prepares a list of sending particles to be sent to the next swarm, and also prepares a list of replacement particles to be replaced by particles coming from other swarms. Each swarm chooses the leaders of the next generation from the updated swarm after the exchange of particles. To select the list of particles to send, the algorithm uses a strategy according to the locations of the particles in the swarm and their objective values, instead of their objective values alone. A list is prepared in the following order.

Priority S1: The highest priority in the selection of particles is given to the particle that has the least average Hamming distance from the others. This particle is considered the representative of the swarm. The average Hamming distance between each pair of particles in the swarm is calculated, and the least among them is found.

Priority S2: The m closest particles to the representative particle whose objective value is greater than that of the representative will be chosen. m is a value that depends on the rate of information exchange, r (a predefined value between 0 and 1), among swarms and on the population of each swarm, N:

m = (r N − 2) / 2.
(4.2)

Priority S3: The m closest particles to the representative particle whose objective value is less than that of the representative will be chosen.

Priority S4: The best-performing particle in the swarm will be chosen.

Depending on the predefined fixed value for the allowable size of the sending list, the sending list is filled in each swarm. There is also a replacement list that each swarm prepares, based on similar positional information of particles in the swarm. When swarms are approaching local optima, many particle locations are similar to each other. Each swarm then removes this excess information through its replacement list. The replacement list in each swarm is prepared in the following order.

Priority R1: Particles with identical parametric space information, in the order of their objective values, with those having the least objective values replaced first.

Priority R2: Particles with the lowest objective values are replaced once all particles of the previous priority are already in the replacement list.

This information exchange among swarms can happen in a ring sequential or a random order between each pair of swarms, as shown in Figure 4.1. Each swarm accepts the sending list from another swarm and replaces it with its own replacement list. After the information exchange completes, the personal and global best positions are selected. This algorithm is shown in Figure 4.2.

Figure 4.1 Ring and random sequential migration: Migration can be (a) in ring sequential order between swarms 1 and 2, then between swarms 2 and 3, etc., or (b) in a random order between swarms. i, k, s, t, j are random numbers between 1 and n.
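The sending-list priorities S1-S4 can be sketched in Python as below. This is a sketch under assumptions: a maximization reading of "best-performing," index-order tie-breaking (the text leaves ties unspecified), and illustrative helper names.

```python
def hamming(a, b):
    """Number of positions at which two equal-length vectors differ."""
    return sum(1 for u, v in zip(a, b) if u != v)

def sending_list(positions, objective, m):
    """Indices of the diversified sending list, per priorities S1-S4.

    positions: particle position vectors; objective: their objective values
    (higher assumed better here); m: particles taken on each side of the
    representative.
    """
    n = len(positions)
    # S1: representative = least average Hamming distance to the others
    avg_dist = [sum(hamming(positions[i], positions[j]) for j in range(n)) / (n - 1)
                for i in range(n)]
    rep = min(range(n), key=lambda i: avg_dist[i])
    chosen = [rep]
    by_closeness = sorted((i for i in range(n) if i != rep),
                          key=lambda i: hamming(positions[i], positions[rep]))
    # S2: m closest particles with objective greater than the representative's
    chosen += [i for i in by_closeness if objective[i] > objective[rep]][:m]
    # S3: m closest particles with objective less than the representative's
    chosen += [i for i in by_closeness if objective[i] < objective[rep]][:m]
    # S4: the best-performing particle in the swarm, if not already chosen
    best = max(range(n), key=lambda i: objective[i])
    if best not in chosen:
        chosen.append(best)
    return chosen
```

The resulting list deliberately mixes the swarm's "center," particles on both sides of it, and its best member, so the receiving swarm gets diverse rather than redundant information.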
Figure 4.2 Main algorithm for diversity-based multiple PSO (DMPSO):
Initialize population at time t = 1.
1. Cluster the population into P swarms using k-means clustering.
2. If t = tMigration, then:
   a. Prepare the sending list and replacement list for each swarm;
   b. Exchange particles between pairs of swarms, using the sending and replacement lists of each swarm;
   c. Perform the PSO on the new swarms using Equation (4.1).
   Else: perform PSO on each swarm using Equation (4.1).
3. Repeat the above steps until the stopping criteria are met (t = tmax).

To further overcome the premature convergence problem, especially in multimodal objective optimization, and to increase the ability of particles to communicate information of common interest, a concept of neighborhood is proposed to promote the particles in a neighborhood to utilize and share information among themselves. For the PSO schema, a three-level mechanism is adopted. At the personal level, a particle in a swarm follows the leader of the swarm, which is the best behaving particle in that swarm. At the neighborhood level, the particle simultaneously follows the best behaving particle in its neighborhood to achieve a synchronized behavior in the neighborhood and to share information. Finally, at the global level, particles of each swarm follow the best behaving particle in the whole population, seeking a global goal. This paradigm of PSO is formulated as:

v_i(t+1) = w v_i(t) + c1 r1 (pbest - x_i(t)) + c2 r2 (gbest - x_i(t)) + c3 r3 (nbest - x_i(t)),
x_i(t+1) = x_i(t) + v_i(t+1),    (4.3)

where v_i is the velocity of the particle, x_i is the position of the particle, pbest is the best position in the cluster, gbest is the best position among all particles, and nbest is the best position among the particles' neighborhood. r1, r2, and r3 are random numbers uniformly generated in the range [0, 1]. Thus particles always move statistically towards the directions of pbest, gbest, and nbest in order to use past experience in the search process.
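Assuming the three-level update takes the standard linear-combination form described in the text (the printed equation is not fully legible), one flight step might be sketched as follows; the parameter defaults are illustrative only:

```python
import numpy as np

def three_level_velocity(v, x, pbest, gbest, nbest, w=0.8,
                         c1=1.4, c2=1.4, c3=1.4,
                         rng=np.random.default_rng()):
    """Sketch of the three-level update of Eq. (4.3): a particle is pulled
    toward its swarm leader (pbest), the global leader (gbest), and the
    neighborhood leader (nbest). All arrays share the particle's dimension."""
    r1, r2, r3 = rng.random(3)            # uniform in [0, 1]
    v_new = (w * v
             + c1 * r1 * (pbest - x)      # personal (swarm-leader) term
             + c2 * r2 * (gbest - x)      # global term
             + c3 * r3 * (nbest - x))     # neighborhood term
    return v_new, x + v_new
```

When all three leaders coincide with the particle's position, only the momentum term remains, so the particle coasts with velocity w * v.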
, , and are constant values representing the weight of each of the terms of personal, global, and neighborhood behavior and is the momentum for previous velocity. It should be noted that the unified PSO [120] integrates the local best and global best PSOs into a single equation to update the velocity of particles based on the global 58 best particle, the neighboring best particle, and the particle’s own best position, while in the proposed paradigm, velocity updates using the best particle in the cluster of particles, the best particle in the neighboring swarms, and the best global particle with no restriction on the weights. To find the neighborhood among particles in PSO, there have been different strategies used by researchers [37, 121]. Some have applied ring neighborhood, the von Neumann neighborhood, or some other topological neighborhoods. The proposed definition of neighborhood is to define neighboring swarms according to the average as representative of each swarm to decide whether the swarms are in neighborhood of each other. In the ith swarm with the particles of , the representative, , is defined by centroid of all particles: (4.4) The interswarm distance between swarms i and j, , is defined by the inner products of two vectors: , (4.5) where is the kth element of the representative . 59 Swarms are defined to be in a neighborhood if and only if all interswarm distances among them are less than the average interswarm distance: , , … and , where , where P is the number of swarms. Figure 4.3 Schema of swarm neighborhood: Swarms 1, 2 and 3 are in a neighborhood, since , and but swarm 4 does not belong to this neighborhood. Notice that even but . Swarms 4, 5 and 6 form another neighborhood, because , and . Swarm 3 does not belong to this neighborhood because even but . (Solid circles denote the representative points of each swarm) For example, for two of them, swarms i and j are in a neighborhood if and only if . 
If R_ik < R̄ but R_jk > R̄, then swarm k does not belong to this neighborhood. In Figure 4.3, an example with six swarms is shown. Swarms 1, 2 and 3 are in a neighborhood because R12, R13 and R23 are all less than R̄, but swarm 4 does not belong to this neighborhood: although one of its inter-swarm distances is below R̄, another exceeds it. Swarms 4, 5 and 6 form another neighborhood because R45, R46 and R56 are all less than R̄; swarm 3 does not belong to this neighborhood for the same reason. A brief explanation of the proposed algorithm is shown in Figure 4.4. The population is initialized and then clustered into P swarms using the k-means clustering method. Then the neighbor sets of each swarm are found using Equations (4.4) and (4.5) and the rule mentioned above, as illustrated in Figure 4.3.

Figure 4.4 Main algorithm for diversity-based multiple PSO with neighborhood (NDMPSO):
Initialize population at time t = 1.
1. Cluster the population into P swarms.
2. If t = tMigration, then:
   a. Prepare the sending list and replacement list for each swarm;
   b. Exchange particles between pairs of swarms, using the sending and replacement lists of each swarm;
   c. Find the neighbor sets of each swarm (N_i, i = 1, 2, ..., P);
   d. Perform the PSO on each new swarm:
      - find the pbest, nbest, and gbest for each new swarm,
      - apply the modified version of PSO, Equation (4.3).
   Else:
   a. Find the neighbor sets of each swarm (N_i, i = 1, 2, ..., P);
   b. Perform the PSO on each swarm:
      - find the pbest, nbest, and gbest for each swarm,
      - apply the modified version of PSO, Equation (4.3).
3. Repeat the above steps until the stopping criteria are met (t = tmax).

To perform the PSO according to Equation (4.3), we have to find the best performing particle in each swarm, namely the pbest, the best performing particle in the all-neighbor sets of that swarm, namely the nbest, and the best performing particle in the whole population, the gbest. This process is iterated until the time for migration is reached.
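The neighborhood rule of Equations (4.4) and (4.5) can be sketched as follows, assuming the inner-product distance reduces to the Euclidean distance between centroids (a reading of the garbled equation, not a certainty):

```python
import numpy as np

def representative(swarm):
    """Centroid of a swarm's particles, Eq. (4.4)."""
    return swarm.mean(axis=0)

def neighbor_sets(reps):
    """Sketch of the neighborhood rule: swarms i and j are neighbors when
    their inter-swarm distance R_ij (Eq. 4.5) is below the average
    inter-swarm distance taken over all pairs of swarms.

    reps: (P, D) array of swarm representatives.
    """
    P = len(reps)
    # pairwise Euclidean distances between representatives
    R = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=2)
    R_avg = R[np.triu_indices(P, k=1)].mean()   # average over all pairs
    return [{j for j in range(P) if j != i and R[i, j] < R_avg}
            for i in range(P)]
```

With two well-separated clusters of representatives, each swarm's neighbor set contains only the swarms in its own cluster, matching the grouping shown in Figure 4.3.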
At regular fixed intervals, each swarm prepares a list of particles to send to the next swarm and a list of particles to be replaced by particles coming from other swarms; the exchange of particles between each pair of swarms then happens according to Figure 4.1. This algorithm, including clustering, information exchange, and the flight of particles, continues until the stopping criteria are met.

4.4 Simulation Results

The proposed algorithm was tested using benchmark problems that are often used to examine GAs solving multimodal problems [4, 122]. These problems, adopted from [119], vary in difficulty and dimension. The performance of the proposed algorithm has been compared with two distributed genetic algorithms [118, 119]: a DGA with a standard migration policy (SDGA), best-sent-worst-replaced [118], and a DGA with a diversity-based migration policy (DDGA) [119]. To draw a fair comparison, the same rate of information exchange as their migration rate has been adopted. The main population for the proposed algorithms DMPSO and NDMPSO was set to 50 particles. The k-means clustering method was used with m = 6 swarms. The coefficients c1, c2, c3, and w are selected as 1.4, 1.4, 1.4, and 0.8, respectively. The rate of information exchange is varied over the values 0.05, 0.2, and 0.4. At the migration time interval, particles are exchanged among swarms.

The first problem used to test the proposed algorithm is F1 [119], with five peaks and four valleys between each pair of neighboring peaks. This function is depicted in Figure 4.5: Figure 4.5(a) shows a 3D landscape, while Figure 4.5(b) displays the contour map of the function F1. The results of applying both proposed algorithms are shown in Table 4.1. The percentage of optimal solutions found is calculated out of 30 independent runs for each algorithm. A solution is considered optimal when the optimal objective value of 2.5 is reached.
The best objective values for the final solutions are averaged over 30 runs to obtain the mean best objective reported in the table. Each algorithm is run for three values of the rate of information exchange: 0.05, 0.2, and 0.4. The best performing algorithm in each case is shown in bold face. The graphical view of the locations of the best particles of the final solution is depicted in Figure 4.6. Figures 4.6(a) and (b) are for DMPSO with rates of information exchange equal to 0.05 and 0.4, and (c) and (d) are for NDMPSO with rates of information exchange equal to 0.05 and 0.4, respectively. Figure 4.6 shows that some of the particles in DMPSO are trapped in the local maxima at (−0.897, 0) and (0.897, 0). In NDMPSO, most particles approached (0, 0), the global maximum. The results in Table 4.1 show that both proposed algorithms perform better, and NDMPSO outperforms all of them.

The second benchmark problem is F2 [119], with 10 peaks, shown in Figure 4.7. The results of the algorithms are also shown in Table 4.1. The graphical view of the best particles for both algorithms is given in Figure 4.8. This figure shows that some particles in DMPSO are trapped in a local maximum, while in NDMPSO most particles reach the global maximum. Results reported in Table 4.1 also show the better performance of NDMPSO.

The next benchmark function is F3 [119], shown in Figure 4.9, with two close peaks and a valley between them. The results of the algorithms are summarized in Table 4.1 as well, and the graphical view of the best particles is depicted in Figure 4.10. Figure 4.10 shows that in DMPSO some of the particles are trapped in the local maximum at (1.444, 0), while in NDMPSO most of the particles reached the global maximum at (1.697, 0). Table 4.1 also illustrates that NDMPSO outperforms the other algorithms.

The next benchmark function is F4 [119], with a total of five peaks: one global maximum and four local maxima in its neighborhood, shown in Figure 4.11.
The results and the graphical presentation of the best particles, in Table 4.1 and Figure 4.12, show once again that NDMPSO has fewer particles trapped in the four local maxima located at the corners of the variable space. The results in Table 4.1 confirm a higher number of optimal solutions found. The benchmark function F5 [119] has six peaks, two of which are global maxima, as shown in Figure 4.13. The results of the algorithms are shown in Figure 4.14 and Table 4.1. The DDGA outperforms the proposed algorithms on this problem when the rate of information exchange is 0.05 or 0.2. On the other hand, with a higher rate the proposed algorithms perform better, i.e., when more particles are exchanged, PSO shows more superiority.

The benchmark function F6 [119] has a variable dimension; three different dimensions of 25, 40, and 50 have been used here. The rate of information exchange for this case and the remaining benchmark functions has been fixed at 0.1. The best objective values for the final solutions are averaged over 30 runs and shown in Table 4.2. NDMPSO outperforms the other three algorithms at dimensions 25 and 40, but at dimension 50 DDGA performs better. The benchmark function F7 [119] also has a variable dimension, and dimensions of 25, 40, and 50 have been adopted here. Results in Table 4.2 show that NDMPSO outperforms the others at dimensions 25 and 40 but, again, at dimension 50 DDGA outperforms the others. The next benchmark function, F8 [119], has 10 variables; NDMPSO also outperforms the other algorithms in this case. Finally, the last benchmark function, F9 [119], has 40 variables; NDMPSO performs better than the other algorithms as well.

In general, NDMPSO outperformed the other algorithms on several benchmark functions. It was outperformed in some cases, especially on problems with very high dimension, by DDGA.
This might be due to the nature of GA, whose recombination demonstrates a better performance in high dimensions; however, it needs to be tested further.

Figure 4.5 Benchmark function F1 with five peaks and four valleys: (a) 3D landscape, (b) contour map.

Figure 4.6 Final best particles for F1: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.

Figure 4.7 Benchmark function F2 with 10 peaks: (a) 3D landscape, (b) contour map.

Figure 4.8 Final best particles for F2: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.

Figure 4.9 Benchmark function F3 with two peaks and one valley: (a) 3D landscape, (b) contour map.

Figure 4.10 Final best particles for F3: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.

Figure 4.11 Benchmark function F4 with five peaks: (a) 3D landscape, (b) contour map.

Figure 4.12 Final best particles for F4: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.
Figure 4.13 Benchmark function F5 with six peaks, two of which are global maxima: (a) 3D landscape, (b) contour map.

Figure 4.14 Final best particles for F5: (a) DMPSO with r = 0.05, (b) DMPSO with r = 0.4, (c) NDMPSO with r = 0.05, (d) NDMPSO with r = 0.4.

Table 4.1 Results for optimal found (%) and mean best objective for F1, F2, F3, F4 and F5

                                   SDGA       DDGA       DMPSO      NDMPSO
Max F1
  Optimal found (%)   r = 0.05     0%         0.8%       10.0%      13.3%
                      r = 0.2      0%         13.3%      13.3%      20.0%
                      r = 0.4      0%         15.8%      16.7%      23.3%
  Mean best objective r = 0.05     1.98898    2.48217    2.4801     2.4863
                      r = 0.2      1.97855    2.4605     2.4745     2.4793
                      r = 0.4      2.02455    2.48811    2.4891     2.4905
Max F2
  Optimal found (%)   r = 0.05     0%         22.5%      33.3%      53.3%
                      r = 0.2      0.8%       5.8%       13.3%      26.6%
                      r = 0.4      0%         17.5%      23.3%      33.3%
  Mean best objective r = 0.05     6.73371    8.58322    8.6739     8.6783
                      r = 0.2      6.70137    8.63548    8.6532     8.6621
                      r = 0.4      7.31735    8.68075    8.6923     8.6953
Max F3
  Optimal found (%)   r = 0.05     0%         3.3%       16.7%      20.0%
                      r = 0.2      0%         20%        23.3%      33.3%
                      r = 0.4      0%         23.3%      43.3%      53.3%
  Mean best objective r = 0.05     4.67853    4.812      4.8121     4.8127
                      r = 0.2      4.7159     4.810      4.8117     4.8136
                      r = 0.4      4.73849    4.81496    4.8151     4.8155
Max F4
  Optimal found (%)   r = 0.05     3.3%       4.2%       16.7%      26.6%
                      r = 0.2      0%         42.5%      43.3%      53.3%
                      r = 0.4      0%         35%        36.7%      43.3%
  Mean best objective r = 0.05     1.34999    1.48242    1.49016    1.49127
                      r = 0.2      1.33163    1.49341    1.49281    1.49332
                      r = 0.4      1.29936    1.49123    1.49178    1.49341
Max F5
  Optimal found (%)   r = 0.05     11.7%      67.5%      43.3%      53.3%
                      r = 0.2      14.2%      69.2%      33.3%      36.7%
                      r = 0.4      0.8%       22.5%      36.6%      43.3%
  Mean best objective r = 0.05     0.970634   1.03       1.0283     1.0297
                      r = 0.2      0.975198   1.03006    1.0281     1.0288
                      r = 0.4      0.941727   1.02874    1.0293     1.0297

Table 4.2 Mean best objectives for F6, F7, F8, and F9

          Dimension   SDGA       DDGA       DMPSO      NDMPSO
Max F6    25          8456.46    8935.65    8745.8     8947.3
          40          12578      13131.1    13002.4    13135.1
          50          15004.6    15554.9    15387.5    15423.6
Max F7    25          8682.31    9093.61    9026.5     9098.4
          40          12796.1    13324.3    13304.5    13331.3
          50          15124      15810.5    15723      15799
Max F8    10          1.9513     1.97217    1.9673     1.97221
Max F9    10          605.201    627.921    616.436    628.142

4.5 Discussions

A paradigm for particle swarm optimization is presented in order to increase its ability to search widely and to overcome its premature convergence problem. The proposed algorithm uses multiple swarms and exchanges particles among them at regular intervals. The exchanged particles are selected according to the locations of the particles, based on a diversity-promoting strategy, and their corresponding objective values. Furthermore, the PSO was modified using a new neighborhood term that helps the neighboring swarms share information of common interest. The neighborhood for each swarm is found using an unsupervised algorithm according to the inter-swarm distances between the representatives of each pair of swarms. The proposed algorithm, NDMPSO, showed a great performance compared to DMPSO and to two versions of the distributed genetic algorithm that have a conceptual basis similar to the proposed algorithm. DMPSO showed competitive results compared to the DGAs. NDMPSO showed better performance compared to DMPSO, indicating that sharing information in the neighborhood of swarms helps them escape from local optima and locate the global optimum. A drawback of both proposed algorithms is that their performance depends on the rate of information exchange. Rates ranging from 0.05 to 0.4 have been tested, which yields no conclusion on which rate is better for a specific application.
Further work is needed to find an optimal exchange rate. Given the diversity-promoting nature of the proposed algorithm, which works well for multimodal problems, it is a promising candidate as a basis for solving multiobjective optimization.

CHAPTER V

CULTURAL-BASED MULTIOBJECTIVE PARTICLE SWARM OPTIMIZATION

5.1 Introduction

Population-based heuristics for solving multiobjective optimization problems (MOPs) have gained much attention. The multiobjective evolutionary algorithm (MOEA) and multiobjective particle swarm optimization (MOPSO) are two popular population-based paradigms introduced within the last decade. MOPSO adopts the particle swarm optimization (PSO) paradigm [1], which in turn mimics the behavior of flocking birds. Although much research on single-objective PSO suggests dynamic weights for the local and global acceleration [123], most MOPSO researchers assume that all particles should move with constant momentum, local, and global acceleration. However, there have not been many studies considering the possibility that particles fly with different "personalized" weights for the momentum, local, and global acceleration. Some may argue that there is no need for a personalized weight for each particle. Yet if an algorithm applies the same weight to all particles, particles requiring a smaller weight will unnecessarily jump far away from the optimum, while particles needing a greater weight will move unsatisfactorily slowly; both situations result in an inefficient design. On the other hand, employing a personalized weight for each particle assigns an appropriate amount of jump and contributes to the effectiveness of the algorithm. From a biological point of view, a study [7] has shown that societies that can handle more complex tasks contain polymorphic individuals. Polymorphism is a significant feature of social complexity that results in differentiated individuals.
The more differentiated the society, the more easily it can handle complex tasks. Differentiation applies in principle to complex societies of prokaryotic cells and multicellular organisms, as well as to colonies of multicellular individuals such as ants, wasps, and bees. Colony performance is improved if individuals differentiate in order to specialize on particular tasks. As a result of differentiation, individuals perform functions more efficiently. The study has shown that the colony's ability to achieve higher cooperative activity when tackling tasks is a direct consequence of differentiation, among other factors.

There are few studies in MOPSO that have tackled the issue of variable momentum for the particles, and in all of them the momentum is identical for all particles at a specific iteration. Some MOPSO paradigms have proposed simple strategies to adapt the momentum by decreasing it throughout swarming [57, 64, 67-68, 124], while other MOPSOs choose a random value for the momentum [54, 62-63, 66, 69] at every iteration.

The MOPSO, similar to PSO, is based upon a simple flight of the particle:

v_id(t+1) = w v_id(t) + c1 r1 (p_id(t) − x_id(t)) + c2 r2 (g_d(t) − x_id(t)),    (5.1)
x_id(t+1) = x_id(t) + v_id(t+1),    (5.2)

where x_id(t) is the dth dimension of the position of the ith particle at time t (i = 1, ..., N and d = 1, ..., D). v_id(t) is the dth dimension of the velocity of the ith particle at time t. p_id(t) is the dth dimension of the personal best position of the ith particle at time t, and g_d(t) is the dth dimension of the global best position at time t. c1 and c2 are constant values, called the personal and global acceleration, which give different importance to the personal and global terms of Equation (5.1). r1 and r2 are uniform random numbers from [0, 1] that give stochastic characteristics to the flight of particles. w is the velocity momentum of the particles. Figure 5.1 shows how the three vectors that affect the flight of a particle depend on the momentum and the global and local acceleration.
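The flight of Equations (5.1) and (5.2) can be sketched in vectorized form as follows; the parameter values are illustrative, not the dissertation's settings:

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.4, c2=1.4,
             rng=np.random.default_rng()):
    """One flight step per Eqs. (5.1)-(5.2): velocity update, then position.

    x, v, pbest: (N, D) arrays; gbest: (D,) array broadcast over particles.
    """
    N, D = x.shape
    r1, r2 = rng.random((N, D)), rng.random((N, D))   # uniform in [0, 1]
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (5.1)
    x_new = x + v_new                                              # Eq. (5.2)
    return x_new, v_new
```

A personalized-weight variant, as motivated above, would simply replace the scalars w, c1, and c2 with per-particle arrays of shape (N, 1).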
When particles are to serve as exploiters or explorers, the emphasis on each term in Equation (5.1) should differ; therefore, not all particles should have the same values of momentum, local, and global acceleration. To the best of the author's knowledge, there is no appreciable work in MOPSO on adapting personalized momentum and acceleration based on the particles' need for exploration or exploitation. Adapting these important factors in the flight of particles is a task that cannot be accomplished unless we have access to knowledge throughout the search process. In this study, a computational framework is proposed based on the cultural algorithm (CA) [3, 125], adopting knowledge stored in the belief space in order to adapt the flight parameters of MOPSO. Cultural algorithms have frequently been used to vary the parameters of individual solutions in optimization problems [126, 127]. The proposed paradigm resorts to different types of knowledge in the belief space to personalize the parameters of the MOPSO for each particle. Every particle in MOPSO will use its own adapted momentum and acceleration (local and global) at every iteration to approach the Pareto front. The cultural algorithm provides the groundwork needed to employ the information stored in different sections of the belief space efficiently and effectively. By incorporating CA into the optimization process, we categorize the information of the belief space and adopt it in a systematic manner. The information in the belief space provides the parameters required by the optimization process whenever they are needed. As a result, the optimization process will be more competent and successful.

Figure 5.1 Schema of particle's movement in MOPSO: vectors affecting how a particle moves in MOPSO due to gbest, pbest, and its velocity.

The remaining sections complete the presentation of this chapter. In Section 5.2, the principles of the cultural algorithm and related works in MOPSO are briefly reviewed.
In Section 5.3, the proposed cultural MOPSO is elaborated. In Section 5.4, simulation results are evaluated on the benchmark test problems in comparison with state-of-the-art MOPSO models; this section also includes a sensitivity analysis for the proposed cultural MOPSO. Finally, Section 5.5 summarizes the concluding remarks and future work of this study.

5.2 Review of Literature

5.2.1 Related Works in Multiobjective PSO

Hu and Eberhart [54], in their dynamic-neighborhood MOPSO model, and Hu et al. [66], in the MOPSO with extended memory, adopted a random number in the range (0.5, 1) as the varying momentum; however, both the personal and global acceleration are constant values. Sierra and Coello Coello [62], in their crowding- and dominance-based MOPSO, used random values in the range (0.1, 0.5) for the momentum and random values in the range (1.5, 2.0) for the personal and global acceleration. They adopted this scheme to bypass the difficulty of fine-tuning these parameters for each test function. Zhang et al. [64] introduced an intelligent MOPSO based upon the Agent-Environment-Rules model of artificial life. In their model, along with adopting an immunity clonal operator, the momentum was decreased linearly from 0.6 to 0.2, but the personal and global acceleration remained constant. Li [67] proposed an MOPSO based upon a max-min fitness function. In his model, while the personal and global accelerations were set constant, the momentum was gradually decreased from 1.0 to 0.4. Zhang et al. [68] adopted a linearl


