
Multi-objective particle swarm optimization by fusing multiple strategies

2022-09-19

XU Zhenxing, ZHU Shuiran

(1. School of Management Science and Engineering, Anhui University of Technology, Ma’anshan 243032, China; 2. School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China)

Abstract: To improve the convergence and distributivity of multi-objective particle swarm optimization, we propose a multi-objective particle swarm optimization algorithm that fuses multiple strategies (MOPSO-MS), which includes three strategies. Firstly, a spatially averaged crowding distance method is proposed, which takes into account the influence of all individuals on the crowding distance while reducing the algorithm’s time complexity and computational cost, ensuring efficient external archive maintenance and improving the algorithm’s distributivity. Secondly, the algorithm uses the difference between particles to guide an adaptive inertia weight: the degree of disparity between a particle’s historical optimum and the population’s global optimum determines the value of w, and w is adjusted nonlinearly according to this disparity, improving the algorithm’s convergence. Finally, the algorithm controls the search direction by hierarchically selecting the global optimum, which avoids a single search direction and reduces the blindness of a purely random search direction, making the selection of the global optimal position more objective and comprehensive and further improving the convergence of the algorithm. MOPSO-MS is tested against eight other algorithms on the ZDT and DTLZ test functions, and the results show that MOPSO-MS has significant advantages in terms of convergence and distributivity.

Key words: multi-objective particle swarm optimization (MOPSO); spatially averaged crowding distance; difference-guided inertia weight; hierarchical selection of global optimum

0 Introduction

A large number of engineering problems require multi-objective optimization to solve complex problems[1-4]. The main algorithms for solving multi-objective optimization problems include the non-dominated sorting genetic algorithm (NSGA-II)[5], the strength Pareto evolutionary algorithm (SPEA2)[6], the multi-objective evolutionary algorithm based on decomposition and preselection (MOEA/D)[7], dynamic multi-objective evolutionary algorithms (DMOEA)[8], etc. Among them, meta-heuristic algorithms are among the most effective methods for solving multi-objective optimization problems[9-14].

Particle swarm optimization is an intelligent optimization algorithm with a simple structure, few adjustable parameters, sufficient information interaction, easy implementation, and good robustness, and it has been extended to multi-objective optimization owing to its excellent performance on single-objective optimization problems. Multi-objective particle swarm optimization (MOPSO)[15] is not only used to solve complex multi-objective problems, but also serves as a search mechanism in combination with various advanced constraint handling techniques for single-objective and multi-objective constrained problems. The efficient search performance and excellent collaboration benefits demonstrated in the optimization process have attracted increasing interest from researchers and made it a research hotspot. Although MOPSO has achieved widespread application, several key issues still await resolution. For multi-objective optimization problems, it is necessary to address universal difficulties such as the unbalanced relationship between diversity and convergence and the tendency of algorithms to fall into local optima, as well as to consider proprietary issues such as optimal particle selection strategies, stored solution set maintenance methods, and diversity preservation mechanisms.

In a multi-objective particle swarm algorithm, the distributivity of the population and the convergence of the algorithm are two key factors affecting the performance of the algorithm.

Huang et al.[16] proposed an adaptive MOPSO with a multi-strategy based on energy conversion and explosive mutation, in which the velocity update and result generation of the particle swarm are reflected in terms of energy, and a mutation strategy is incorporated. Hu et al.[17] introduced a parallel grid coordinate system into the algorithm by dynamically tuning the information provided by the cell coordinate system globally. Yu et al.[18] combined the idea of adaptive fuzzy domination with an MOPSO to finely control the non-dominated solution by adaptively varying the threshold of fuzzy domination. Yang et al.[19] proposed an MOPSO based on fitness ranking, which can better retain the elite solution set by ranking the individuals in the population through a penalized boundary-crossing method.

Cheng et al.[20] proposed a multi-objective particle swarm teaching-and-learning optimization that embeds the idea of teaching and learning optimization into the population iteration. Balling[21] proposed a reward and punishment strategy that “rewards” dispersed solution sets and “punishes” aggregated solution sets, ensuring that the solution sets are distributed. Xiao et al.[22] used the harmonic mean distance to estimate population density, accurately estimate the degree of individual crowding, and maintain solution set distributivity.

Based on the above analysis, many scholars have made achievements in the research of MOPSO. However, owing to its fast convergence speed, MOPSO cannot balance global and local optimization and thus fails to achieve the desired optimization effect. As a result, improving the distribution of the solution set and the convergence of the algorithm is still an open research topic. The goal of our study is to propose an MOPSO that fuses multiple strategies, called MOPSO-MS. The major contributions are as follows:

1) A new crowding distance measure is proposed. A spatially averaged crowding distance estimation strategy is used to accurately estimate individual crowding distances while reducing the computational time complexity and better maintaining the distributivity of the solution set.

2) A new adaptively changing inertia weight w is proposed. Typically, the inertia weight of MOPSO decreases with each iteration of the algorithm. However, this method ignores particle characteristics, and the value of w lacks guidance. In the proposed method, the difference between the historical optimum of a particle and the global optimum of the population during the iterative process is used as a direction to guide the update of w, which effectively improves the algorithm’s convergence.

3) A new global optimal position selection strategy is proposed. In general MOPSO, the global optimum position is chosen by randomly selecting one member of the external archive. In our study, the Pareto solutions in the external archive are sorted according to the new crowding density estimation method, stratified sampling is used to select candidate global optima, and selection probabilities that change adaptively with the iterative process are assigned, which improves the overall performance of the algorithm.

1 Background

1.1 Multi-objective optimization problem definition

Without loss of generality, a multi-objective optimization problem with n decision variables and m objective functions, stated in terms of minimization, can be formulated as

$$\begin{aligned} \min\ & \boldsymbol{y}=F(\boldsymbol{x})=\left(f_1(\boldsymbol{x}),f_2(\boldsymbol{x}),\ldots,f_m(\boldsymbol{x})\right)\\ \mathrm{s.t.}\ & \boldsymbol{x}=(x_1,x_2,\ldots,x_n)\in X,\quad \boldsymbol{y}=(y_1,y_2,\ldots,y_m)\in Y, \end{aligned} \tag{1}$$

where x is the decision vector and X is the n-dimensional decision space; y is the objective vector and Y is the m-dimensional objective space. The objective function F(x) defines the mapping relation and the m objectives that need to be optimized simultaneously.
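Pareto dominance, which underlies the non-dominated solution sets used throughout this paper, follows directly from the minimization form of Eq.(1). The minimal Python sketch below shows the dominance check for two objective vectors; the example objective function is purely illustrative and not from the paper.

```python
import numpy as np

def dominates(fa, fb):
    """True if objective vector fa Pareto-dominates fb under minimization:
    fa is no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa, dtype=float), np.asarray(fb, dtype=float)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

# Illustrative bi-objective mapping F(x) = (f1(x), f2(x)) in the form of Eq.(1)
F = lambda x: (x[0] ** 2, (x[0] - 2) ** 2)
print(dominates(F([1.0]), F([3.0])))   # True: (1, 1) dominates (9, 1)
```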

1.2 Standard PSO

Kennedy et al.[23] proposed the PSO algorithm in 1995. Supposing a population of N particles searches in a D-dimensional space, each particle must consider its own historical best position and the best position found by the whole population when determining its next move. The position Si and velocity Vi of the ith particle are represented as

$$\begin{aligned} S_i&=(s_{i1},s_{i2},\ldots,s_{iD}),\\ V_i&=(v_{i1},v_{i2},\ldots,v_{iD}),\qquad i=1,2,\ldots,N, \end{aligned} \tag{2}$$

where N is the size of the population, and D is the dimension of the solution space.

The local optimum (historical best position) of the ith particle is

$$p_i=(p_{i1},p_{i2},\ldots,p_{iD}), \tag{3}$$

where p_{i1}, p_{i2}, …, p_{iD} are the coordinates of the historical optimal point found by the current particle. The global optimum of the particle positions is

$$p_g=(p_{g1},p_{g2},\ldots,p_{gD}), \tag{4}$$

where p_{g1}, p_{g2}, …, p_{gD} are the coordinates of the historical optimal point found by all particles. According to the above definitions, the velocity and position of the ith particle are updated as

$$\begin{aligned} v_{id}^{k+1}&=w v_{id}^{k}+c_1\xi\left(p_{id}^{k}-s_{id}^{k}\right)+c_2\eta\left(p_{gd}^{k}-s_{id}^{k}\right),\\ s_{id}^{k+1}&=s_{id}^{k}+v_{id}^{k+1}, \end{aligned} \tag{5}$$

where w is the inertia coefficient, c1 and c2 are the learning factors, k is the number of iterative steps, d is the dimension index, and ξ and η are two pseudo-random numbers uniformly distributed in (0, 1).
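As a concrete reference for Eq.(5), the following NumPy sketch performs one velocity and position update for the whole swarm; the function name and default parameter values are illustrative choices, not taken from the paper.

```python
import numpy as np

def pso_step(s, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One PSO update following Eq.(5).

    s, v   : (N, D) arrays of current positions and velocities
    pbest  : (N, D) array of each particle's historical best position
    gbest  : (D,) array, the global best position
    w      : inertia weight, scalar or (N, 1) array for per-particle weights
    """
    rng = np.random.default_rng() if rng is None else rng
    xi, eta = rng.random(s.shape), rng.random(s.shape)   # uniform in (0, 1)
    v_new = w * v + c1 * xi * (pbest - s) + c2 * eta * (gbest - s)
    return s + v_new, v_new
```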

2 Improved MOPSO by fused multiple strategies

2.1 Spatially averaged crowding distance method

One of the objectives of solving high-dimensional multi-objective optimization problems is to maintain the diversity of the evolving population so that the approximate Pareto optimal solution set has good distribution properties in the objective space and is as widely distributed as possible. This is reflected in the distribution of individuals in the objective space. The structure of and linkages between individuals are mainly expressed in terms of their sparsity and proximity in the objective space, but there is as yet no generally accepted method for evaluating the distribution of individuals in the objective space. Early distributivity assessment methods mostly used the niche (i.e., area) approach, while more recently developed multi-objective evolutionary algorithms (MOEAs) usually introduce a density component. The higher the density value is, the less well distributed the individual is. The distribution properties of the optimal solution set are ensured by retaining individuals with lower neighborhood densities in the evolving population (or the set of external populations).

Existing density assessment strategies include the crowding distance method used by NSGA-II, the grid method used by the Pareto archived evolution strategy (PAES), and the clustering method used by SPEA. The MOEAs define the distance between two individuals as the Euclidean distance between the corresponding objective vectors. Compared with the grid and clustering methods, the crowding distance method is more widely applicable because it does not require user-defined parameters, is less complex to compute, and is less time-consuming. However, the existing crowding distance methods are all local density estimation methods based on Euclidean distance, which only estimate the crowding level among two or k (k < N) neighboring individuals and therefore cannot accurately reflect the distribution of the whole population.

The goal of solving a multi-objective optimization problem is to obtain a uniformly distributed set of Pareto solutions in the objective space. There are currently two types of MOPSO methods for estimating population density.

The first category uses subgroups or clusters of the population to reduce the computational time complexity of estimating crowding distances. This density estimation method can reduce the algorithm’s time complexity, but it cannot account for all of the individuals in the population, so the estimated population density may not be accurate enough, affecting the distribution of the solution set.

The second category estimates population density using harmonic-like mean distances[22], which takes into account the effect of all individuals in the population on population density. However, when the number of individuals in the population is large, the computational time complexity is high, as described below.

For an individual Yi in the population, supposing the Euclidean distances from the N individuals in the objective space to it, ordered from the nearest to the farthest, are d_{i1}, d_{i2}, d_{i3}, …, d_{iN}, the harmonic mean distance H_{d1}(Yi) of individual Yi is calculated by

$$H_{d1}(Y_i)=\frac{N}{\dfrac{1}{d_{i1}}+\dfrac{1}{d_{i2}}+\cdots+\dfrac{1}{d_{iN}}}. \tag{6}$$

When harmonic-like mean distances are used for crowding distance estimation, the effect of all individuals in the population on the crowding distance can be taken into account. However, estimating the crowding distance of each individual requires the Euclidean distances from all other individuals to it. When the population grows large, if the harmonic-like mean distance is still used to estimate the crowding distance in order to maintain the distributivity of the solution set, the computational time complexity increases and the algorithm’s processing efficiency decreases. Therefore, this study uses the spatially averaged crowding distance to estimate the crowding distance, which reduces the time complexity of the algorithm while ensuring the accuracy of the estimated crowding distance, as described by Eqs.(7)-(9).

$$\bar{f}_j=\frac{1}{N}\sum_{i=1}^{N}f_j(Y_i),\quad j=1,2,\ldots,m, \tag{7}$$

$$p_d=\left(\bar{f}_1,\bar{f}_2,\ldots,\bar{f}_m\right), \tag{8}$$

$$H_{d2}(Y_i)=\left\|F(Y_i)-p_d\right\|=\sqrt{\sum_{j=1}^{m}\left(f_j(Y_i)-\bar{f}_j\right)^{2}}, \tag{9}$$

where f_j(Y_i) is the jth objective value of individual Y_i, p_d is the spatially averaged position of the population in the objective space, and H_{d2}(Y_i) is the spatially averaged crowding distance of individual Y_i.

The population’s average spatial location contains the spatial location information of all individuals, ensuring that the influence of individual location information on population density is not lost. In this way, it can effectively avoid the impact of a few “remote solutions” and “extreme-value solutions” on individual crowding degrees, improve the accuracy of the crowding distance to a degree, and thus improve the accuracy of measuring sparsity and proximity in the objective space as well as the distributivity of the solution set.

At the same time, compared with the harmonic-like mean distance, the spatially averaged crowding distance is computationally cheaper. Let the population size be n and the number of objectives (the dimension of the objective space) be m. Because the spatially averaged crowding distance is the distance between an individual and the population average, whereas the harmonic-like crowding distance involves the distances between an individual and all the remaining individuals, the former has a time complexity of O(mn), which is less than the O(mn log2 n) of the harmonic-like mean distance. Compared with the harmonic-like estimate of crowding distances, the simplified spatially averaged crowding distance eliminates the need to calculate individual-to-individual distances many times; it only requires calculating the spatially averaged position once, which greatly reduces the algorithm’s computational complexity.
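To make the complexity comparison concrete, here is a small NumPy sketch of both estimators as reconstructed in Eqs.(6)-(9); the function names are illustrative, and the harmonic-mean variant ignores each individual's zero distance to itself.

```python
import numpy as np

def harmonic_mean_distance(F):
    """Harmonic-mean crowding estimate in the spirit of Eq.(6).
    F is an (N, m) array of objective values; all N*(N-1) pairwise
    distances are needed, hence the higher computational cost."""
    N = F.shape[0]
    dist = np.sqrt(((F[:, None, :] - F[None, :, :]) ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)            # ignore self-distances
    return (N - 1) / (1.0 / dist).sum(axis=1)

def spatially_averaged_crowding(F):
    """Spatially averaged crowding distance (Eqs.(7)-(9) as reconstructed):
    the distance from each individual to the population's average position,
    which is computed only once."""
    p_d = F.mean(axis=0)                      # average spatial location
    return np.sqrt(((F - p_d) ** 2).sum(axis=1))
```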

To explain the difference between this congestion distance strategy and the traditional congestion distance, assuming that the number of objectives is 2 and that the external archive contains 10 members, the topologies of the two ways of computing the congestion distance are shown in Fig.1.

Fig.1 Topology comparison diagram between conventional and spatially averaged congestion distance

In Fig.1, p1, p2, …, p10 are the spatial locations of the non-dominated optimal solutions in the external archive in the 2-dimensional objective space, and pd is the average spatial location of these non-dominated optimal solutions. To calculate the crowding distance of p1 according to the traditional crowding distance, the distances between p1 and all the other individuals in the external archive must be calculated, whereas the spatially averaged crowding distance only needs the distance between p1 and the spatially averaged position pd. This new crowding distance method considerably reduces the computation time in the subsequent examples, and the final results are still excellent.

The spatially averaged position of the population is generated by combining all individuals, and the crowding distance of an individual is determined by its distance from this spatially averaged position. The spatially averaged position typically lies near dense groups of individuals, so being closer to it indicates a higher degree of crowding and a higher density near the individual. Similarly, the presence of a small number of distant individuals in the population has little effect on the location of the spatially averaged position, whereas if the distances between individuals were used directly as the indicator for estimating crowding distances, the effect of such distant individuals would be large and could not be eliminated. It can be seen that the proposed spatially averaged congestion distance method can maintain the distributivity of the solution set both globally and locally.

2.2 Adaptive inertia weight guided by particle difference

The value of the inertia weight w has a significant impact on the convergence of the PSO algorithm. In most methods, the value of w decreases linearly or nonlinearly as the algorithm iterates. This traditional adaptive change method does not take into account the properties of the particles themselves during the iterative process, and the values of w are unguided.

The difference between a particle’s historical optimal solution and the population’s global optimal solution reflects the degree of difference between the particle and the population’s optimal particle. When the difference is large, the particle should have a larger value of w to enhance its global search ability. When the difference is small, the gap between the particle and the optimal particle in the population is small, and the value of w should be reduced to improve the particle’s local search ability. In our study, the degree of disparity between the historical optimum of a particle and the optimal particle of the population is used to guide the value of w. The size of w is adjusted nonlinearly with the degree of disparity, and the nonlinear curve is chosen as the sigmoid activation function of the back propagation (BP) neural network, which maps the difference to the (0, 1) interval, as shown in Fig.2. The difference between the historical optimum of the ith particle at iteration k and the global optimum of the population can be calculated by

$$\Delta_i^k=\left\|p_i^k-p_g^k\right\|=\sqrt{\sum_{d=1}^{D}\left(p_{id}^{k}-p_{gd}^{k}\right)^{2}}, \tag{10}$$

$$\bar{\Delta}^k=\frac{1}{N}\sum_{i=1}^{N}\Delta_i^k, \tag{11}$$

$$\sigma(x)=\frac{1}{1+\mathrm{e}^{-x}}, \tag{12}$$

$$w_i^k=\sigma\!\left(\Delta_i^k-\bar{\Delta}^k\right)=\frac{1}{1+\mathrm{e}^{-\left(\Delta_i^k-\bar{\Delta}^k\right)}}, \tag{13}$$

where Δ_i^k is the difference of the ith particle at iteration k, \bar{Δ}^k is the average difference of the population, σ(·) is the sigmoid function, and w_i^k is the resulting inertia weight of the ith particle, so that a particle whose historical optimum is far from the global optimum receives a larger w and a particle close to it receives a smaller w.

Fig.2 Inertia weight curve

Fig.3 shows the distribution of the adaptive inertia weights over the six ZDT test functions[24]. The horizontal coordinate is the number of iterations and the vertical coordinate is the inertia weight. It can be seen that the adaptive inertia weights differ from traditional inertia weights that follow a given curve; instead, the particles find appropriate inertia weights during the iterative process. Such adaptive inertia weights improve the distributivity and convergence of multi-objective optimization in the following examples.

Fig.3 Distribution of adaptive inertia weights over 6 test functions.

In our study, the MOPSO with adaptive inertia weights abandons the artificially set adaptation curve for w used by traditional MOPSO. Instead, the particles choose an appropriate w according to their own characteristics. When the difference between a particle’s historical optimum and the global optimum is large, the value of w is larger, increasing the particle’s global search ability and avoiding falling into a local optimum. When the difference is small, the value of w is smaller, increasing the particle’s local search ability and avoiding premature convergence.
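A minimal sketch of the difference-guided inertia weight, under the reconstruction of Eqs.(10)-(13); centering the difference on the population mean before applying the sigmoid is an assumption made here so that the weights spread over (0, 1).

```python
import numpy as np

def adaptive_inertia_weight(pbest, gbest):
    """Difference-guided adaptive inertia weight (sketch of Eqs.(10)-(13)).

    pbest : (N, D) historical best positions of the particles
    gbest : (D,)  global best position of the swarm
    Returns an (N,) array of weights in (0, 1): particles whose historical
    optimum is far from the global optimum get a larger w (global search),
    particles close to it get a smaller w (local search).
    """
    diff = np.linalg.norm(pbest - gbest, axis=1)   # Eq.(10): disparity
    centered = diff - diff.mean()                  # Eq.(11): compare with the population average
    return 1.0 / (1.0 + np.exp(-centered))         # Eqs.(12)-(13): sigmoid mapping
```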

2.3 Hierarchical selection of global optimum position

In MOPSO, the global optimum position plays a key role in guiding the particle swarm toward the Pareto front. Whereas single-objective optimization problems determine the global optimum directly and precisely from the fitness value, multi-objective optimization problems produce a set of non-inferior solutions, and the global optimum position of the particles is therefore generated from an elite set of solutions in an external archive. MOPSO[16] selects the globally optimal position by picking a random member of the external archive. The selection of the global position can greatly affect the performance of the algorithm. In this study, the stability and convergence of the algorithm’s search direction are taken into account to propose a hierarchical global-optimum selection strategy.

To address the random or incomplete search direction that results when the global optimum is selected at random, the external archive is sorted by congestion distance and divided into several layers by stratified sampling, and the member with the largest congestion distance in each layer becomes a candidate global optimum. Hierarchical selection of global optimum positions reduces the randomness of the search direction while maintaining the comprehensiveness of the search direction. To avoid particles being repeatedly selected, which leads to particle aggregation, each candidate global optimum is assigned a probability and the global optimum position is selected according to these probabilities. If a candidate global optimum is selected as the global optimum position, the probability of the candidate in that layer decreases, and if the candidate global optimum of that layer changes, the probability reverts to its initial value. The probability change of a candidate global optimum is expressed as

$$P_{kn}=\begin{cases}\lambda P_{(k-1)n}, & \text{the candidate global optimum of layer } n \text{ is selected at iteration } k,\\ P_0, & \text{the candidate global optimum of layer } n \text{ changes},\end{cases} \tag{14}$$

where P_{kn} denotes the probability that the candidate global optimum of the nth layer of the external archive is selected at iteration k, λ∈(0,1) is a decay coefficient, and P_0 is the initial selection probability. In this study, the population size is 100, the external archive size is 100, the archive is divided into 10 layers, and P_{kn} is initially 1/10.
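The following sketch illustrates one way to implement the hierarchical selection described above; the decay factor, the handling of the probability reset, and all names are assumptions made for illustration rather than the paper's exact procedure.

```python
import numpy as np

def select_gbest(archive_F, archive_X, probs, n_layers=10, p0=0.1,
                 decay=0.5, rng=None):
    """Stratified selection of the global best position (sketch of Section 2.3).

    archive_F : (A, m) objective values of the external archive
    archive_X : (A, D) decision vectors of the external archive
    probs     : dict mapping layer index -> current selection probability
    The archive is sorted by the spatially averaged crowding distance, split
    into n_layers strata, and the least crowded member of each stratum becomes
    a candidate; one candidate is drawn according to the layer probabilities.
    Resetting a layer's probability when its candidate changes is omitted here.
    """
    rng = np.random.default_rng() if rng is None else rng
    crowd = np.sqrt(((archive_F - archive_F.mean(axis=0)) ** 2).sum(axis=1))
    order = np.argsort(-crowd)                          # least crowded first
    layers = [lay for lay in np.array_split(order, n_layers) if len(lay)]
    candidates = [lay[0] for lay in layers]             # largest crowding distance per layer
    p = np.array([probs.get(i, p0) for i in range(len(candidates))])
    p = p / p.sum()
    chosen = rng.choice(len(candidates), p=p)
    probs[chosen] = probs.get(chosen, p0) * decay       # a selected layer becomes less likely
    return archive_X[candidates[chosen]]
```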

Fig.4 shows the distribution of the global optima of MOPSO-MS and MOPSO on the ZDT6 test function; it can be seen that MOPSO-MS has a more uniform distribution of global optima than MOPSO. In multi-objective optimization, the global optimum corresponds to the search direction of the algorithm, and only by ensuring a good distribution of global optima can a good distribution of the algorithm’s solutions be obtained. We are still searching for a better way of selecting the global optimum.

Fig.4 Global optimum distribution of MOPSO-MS and MOPSO on ZDT6 test functions

2.4 MOPSO-MS algorithm process

With the various strategies described above, the process of the proposed MOPSO-MS algorithm is shown in Table 1.

Table 1 MOPSO-MS algorithm process
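Table 1 is not reproduced here. As a rough guide to how the three strategies fit together, the outline below sketches one possible iteration loop, reusing the helper functions sketched earlier in this paper (pso_step, adaptive_inertia_weight, spatially_averaged_crowding, select_gbest); parameter values, the personal-best update rule, and the archive truncation details are assumptions, not the authors' exact procedure.

```python
import numpy as np

def non_dominated(X, F):
    """Keep only non-dominated individuals (minimization)."""
    keep = [i for i in range(len(F))
            if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                       for j in range(len(F)) if j != i)]
    return X[keep], F[keep]

def mopso_ms(objective, bounds, n_particles=100, archive_size=100,
             max_iter=900, c1=1.5, c2=1.5, seed=0):
    """Outline of a MOPSO-MS-style loop (a sketch, not the authors' code)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    S = rng.uniform(lo, hi, (n_particles, len(bounds)))     # positions
    V = np.zeros_like(S)                                    # velocities
    F = np.array([objective(x) for x in S])
    pbest, pbest_F = S.copy(), F.copy()
    arch_X, arch_F = non_dominated(S, F)                    # external archive
    probs = {}                                              # layer probabilities
    for _ in range(max_iter):
        gbest = select_gbest(arch_F, arch_X, probs)         # Section 2.3
        w = adaptive_inertia_weight(pbest, gbest)           # Section 2.2
        S, V = pso_step(S, V, pbest, gbest, w=w[:, None], c1=c1, c2=c2, rng=rng)
        S = np.clip(S, lo, hi)
        F = np.array([objective(x) for x in S])
        better = np.all(F <= pbest_F, axis=1) & np.any(F < pbest_F, axis=1)
        pbest[better], pbest_F[better] = S[better], F[better]
        arch_X, arch_F = non_dominated(np.vstack([arch_X, S]),
                                       np.vstack([arch_F, F]))
        if len(arch_F) > archive_size:                      # Section 2.1
            keep = np.argsort(-spatially_averaged_crowding(arch_F))[:archive_size]
            arch_X, arch_F = arch_X[keep], arch_F[keep]
    return arch_X, arch_F
```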

3 Simulation and result analysis

3.1 Simulation environment

To fully test the performance of MOPSO-MS, the simulation uses the ZDT[24] and DTLZ[25] test suites, where ZDT1-ZDT6 are dual-objective problems and DTLZ1-DTLZ7 are triple-objective problems, and the results are compared with those of other representative MOPSO algorithms, including the congestion distance-based multi-objective particle swarm algorithm (cdMOPSO)[26], the adaptive multi-objective particle swarm algorithm based on parallel coordinate systems (pccsAMOPSO)[27], the multi-objective particle swarm optimizer with fast convergence based on a competitive mechanism (CMOPSO)[28], the clustering-based adaptive evolutionary algorithm for multi-objective optimization with irregular Pareto fronts (CA-MOEA)[29], the multi-objective particle swarm algorithm based on multiple adaptive methods (AMOPSO)[30], and the novel multi-objective particle swarm optimization with multiple search strategies (MMOPSO)[31]. Two evolutionary algorithms, NSGA-II[5] and SPEA2[6], are also included for a more comprehensive comparison. The parameters common to all algorithms were the total population size (N) and the number of non-dominated solutions (T), both set to 100, and the maximum number of iterations (Tmax) was 900. The specific parameters of the comparison algorithms were set as in the original literature, and all experiments were programmed in Matlab 2018b, running on a computer with a 1.8 GHz CPU and 8 GB of RAM under the Windows 10 operating system.

3.2 Performance indicators

3.2.1 Inverted generational distance

The inverted generational distance (IGD), the inverse mapping of the generational distance, is expressed as the average distance from solution points on the Pareto optimal front to the non-dominated solution set found by the algorithm. The true Pareto solution set of the test functions used in this study is known, and the IGD value of the algorithm can be calculated by uniformly picking points on it, as described in Ref.[32].

Assuming that P denotes the approximate solution set to be evaluated, R denotes a set of reference points uniformly sampled on the true Pareto front, and ‖p-r‖ denotes the Euclidean distance from a solution point p in the approximate solution set to a reference point r, the value of IGD is calculated as

$$\mathrm{IGD}(P,R)=\frac{1}{|R|}\sum_{r\in R}\min_{p\in P}\left\|p-r\right\|. \tag{15}$$

The IGD metric measures the average distance from each reference point in R to the nearest member of the approximate solution set P; the smaller the IGD value, the higher the similarity between P and R. Since R is obtained by sampling uniformly from the Pareto optimal front, a smaller IGD value indicates better convergence and distribution of the approximate solution set P. To ensure the fairness of the experiment, the number of sampling points for all test functions is 800, and all sampling points follow the average sampling method in Ref.[31].
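A direct NumPy implementation of Eq.(15) is straightforward; the function name is illustrative.

```python
import numpy as np

def igd(P, R):
    """Inverted generational distance of Eq.(15).

    P : (|P|, m) objective vectors of the approximate solution set
    R : (|R|, m) reference points sampled uniformly on the true Pareto front
    Returns the mean, over all reference points, of the distance to the
    nearest member of P; smaller is better.
    """
    d = np.sqrt(((R[:, None, :] - P[None, :, :]) ** 2).sum(axis=-1))  # (|R|, |P|)
    return d.min(axis=1).mean()
```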

3.2.2 Hypervolume

The hypervolume (HV)[33] has become a popular performance metric because of its compatibility with the Pareto dominance relation. The HV evaluates the performance of multi-objective optimization algorithms by calculating the hypervolume of the space enclosed by the non-dominated solution set and a reference point. The formula for HV is

$$\mathrm{HV}=\lambda\left(\bigcup_{i=1}^{|S|}v_i\right), \tag{16}$$

where λ represents the Lebesgue measure, v_i represents the hypervolume formed by the reference point and the non-dominated solution point i, and S represents the set of non-dominated solutions.
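For the bi-objective case, the Lebesgue measure in Eq.(16) reduces to an area and can be computed by slicing the dominated region into rectangles, as in the sketch below; general m-objective HV needs dedicated algorithms and is not shown here.

```python
import numpy as np

def hv_2d(F, ref):
    """Hypervolume of Eq.(16) for a bi-objective minimization problem.

    F   : (n, 2) array of mutually non-dominated objective vectors
    ref : (2,) reference point dominated by every member of F
    """
    F = F[np.argsort(F[:, 0])]          # sort by the first objective
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in F:                    # accumulate the union of rectangles
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```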

3.3 Analysis of simulation results

3.3.1 Pareto frontier comparison

Simulation result plots for MOPSO-MS and the other algorithms on the ZDT (ZDT3, ZDT6) and DTLZ (DTLZ2) problems are shown in Figs.5-7. The distribution of the resulting non-dominated solution sets reflects the convergence and diversity of the algorithms, and it can be seen from Figs.5-7 that the other algorithms lack convergence and diversity in these tests. For the typical bi-objective problem ZDT3, the approximate Pareto front distributions of the chosen algorithms are shown in Fig.5. The other algorithms do not converge well to the true Pareto front; although some have good convergence, they still suffer from a lack of diversity, and their non-dominated solutions are not evenly distributed. The results for the typical three-objective problem DTLZ2 show that cdMOPSO, MMOPSO, NSGA-II, and SPEA2 all converge significantly less well. Comparing MOPSO-MS with CA-MOEA, both obtain good convergence, but it is clear from Fig.7 that CA-MOEA is not as diverse as MOPSO-MS. The solution set obtained by the MOPSO-MS algorithm is consistent with the true Pareto front, which the other algorithms cannot achieve. Both the AMOPSO and CA-MOEA algorithms can converge to the Pareto front, but the distribution of their non-dominated solution sets is poor, mainly because the optimization of multi-objective problems requires a balance between global and local search. The adaptive inertia weight strategy proposed in this study ensures that MOPSO-MS achieves good convergence with a uniform solution distribution. In addition, the hierarchical global-optimum selection strategy ensures the diversity of the particle population, allowing the algorithm to search the space more comprehensively and still obtain good results when solving complex multi-objective problems.

Fig.5 Pareto fronts of ZDT3 under different multi-objective optimization algorithms: (1) MOPSO-MS; (2) cdMOPSO; (3) pccsAMOPSO; (4) CMOPSO; (5) AMOPSO; (6) NSGA-II; (7) SPEA2; (8) CA-MOEA; (9) MMOPSO

Fig.6 Pareto fronts of ZDT6 under different multi-objective optimization algorithms: (1) MOPSO-MS; (2) cdMOPSO; (3) pccsAMOPSO; (4) CMOPSO; (5) AMOPSO; (6) NSGA-II; (7) SPEA2; (8) CA-MOEA; (9) MMOPSO

Fig.7 Pareto fronts of DTLZ2 under different multi-objective optimization algorithms: (1) MOPSO-MS; (2) cdMOPSO; (3) pccsAMOPSO; (4) CMOPSO; (5) AMOPSO; (6) NSGA-II; (7) SPEA2; (8) CA-MOEA; (9) MMOPSO

3.3.2 Comparison of IGD values

The mean and standard deviation (std) of IGD for each algorithm on the ZDT and DTLZ test functions are recorded in Tables 2 and 3, respectively. The best mean IGD values obtained are shown in bold. Table 6 shows the ranking of IGD values for each algorithm. From the performance metrics in Tables 2, 3 and 6, it can be seen that the proposed MOPSO-MS performs much better than the comparison algorithms in terms of mean IGD. On the 12 ZDT and DTLZ instances, MOPSO-MS obtained 7 of the best mean IGD results, outperforming the other algorithms on 7 problems (ZDT1, ZDT2, ZDT3, DTLZ2, DTLZ3, DTLZ5, DTLZ6). The IGD results show that MOPSO-MS has good convergence and distributivity on both two- and three-objective problems.

Table 2 Results of IGD for the compared algorithms on ZDT (all results are averages over 30 independent runs)

Table 3 Results of IGD for the compared algorithms on DTLZ (all results are averages over 30 independent runs)

Table 4 Results of HV for the compared algorithms on ZDT (all results are averages over 30 independent runs)

Table 5 Results of HV for the compared algorithms on DTLZ (all results are averages over 30 independent runs)

The Wilcoxon rank-sum test is one of the most commonly used methods for comparing multi-objective algorithms[34]; its results indicate whether there is a significant difference between two groups of simulation results and provide the hypothesis test results and rankings of the corresponding IGD indicators, revealing the performance differences in more detail. The symbols “+”, “-” and “≈” indicate that the other algorithms are significantly better than, worse than, or similar to MOPSO-MS, respectively, where the significance level of the Wilcoxon rank-sum test is α=0.05. Table 2 shows the IGD ranking of each algorithm on the ZDT problems, with MOPSO-MS receiving three first-place and two second-place rankings. At the end of the table are the statistics “Rank sum”, “Final rank” and “Better/worse/similar”, with “Rank sum” recording the overall ranking of the algorithms on the ZDT problems. MOPSO-MS has a rank sum of 8 on the ZDT problems, placing first among all algorithms. Table 3 gives a comparison of the IGD performance on the DTLZ instances, where the algorithm also shows good performance on the three-objective problems, and MOPSO-MS outperforms the other algorithms on four problems (DTLZ2, DTLZ3, DTLZ5, DTLZ6). MOPSO-MS has a rank sum of 14 on the DTLZ problems, placing first among all algorithms.

3.3.3 Comparison of HV values

The HV indicator is also used to compare the performance of the algorithms. Each test instance was run 30 times independently, and the means and standard deviations are shown in Tables 4 and 5. Table 7 shows the ranking of HV values for each algorithm. Similarly, the best result obtained by an algorithm is shown in bold. The Wilcoxon rank-sum test is also used to compare the HV results of the algorithms at a significance level of 0.05, and the symbols “+”, “-” and “≈” mean that the result of MOPSO-MS is significantly better than, worse than, or similar to that of the compared algorithm, respectively.

As shown in Tables 4 and 7, MOPSO-MS outperforms the comparison algorithms on three of the five ZDT instances (ZDT1, ZDT2, ZDT3). The third-best result is obtained on the remaining two instances (AMOPSO achieves the best result on the dual-objective ZDT4 and CA-MOEA on the dual-objective ZDT6).

As shown in Tables 5 and 7, MOPSO-MS outperforms the comparison algorithms on three of the seven DTLZ instances (DTLZ1, DTLZ3, and DTLZ5) and achieves the second-best result on three of the remaining instances (CA-MOEA achieves the best result on the three-objective DTLZ2, CMOPSO on the three-objective DTLZ4, and AMOPSO on the three-objective DTLZ7).

From the results in Tables 2-5, it can be seen that the IGD and HV results are not always consistent. However, MOPSO-MS consistently performs best overall on these Pareto front problems regardless of which metric is used.

4 Conclusions

This paper presents an MOPSO algorithm that fuses multiple strategies. It adopts a new population crowding distance estimation method that comprehensively considers the influence of all individuals on the crowding distance, reduces the time complexity of the algorithm, lowers the computational cost, ensures efficient external archive maintenance, and enhances the diversity of the algorithm. In addition, the algorithm takes into account the characteristics of the particles themselves and adaptively changes the inertia weight w by combining the historical optimal position and the global optimal position, which improves the convergence of the algorithm. Finally, when selecting the global optimal position to control the search direction, the algorithm uses a hierarchical global-optimum selection strategy, which avoids a single search direction and reduces the blindness of a purely random search direction, making the selection of the global optimal position more objective and comprehensive. Pareto front comparisons, IGD value comparisons, and HV value comparisons between MOPSO-MS and the eight other algorithms on the ZDT and DTLZ problems show that the proposed algorithm has significant advantages in convergence and diversity. At present, the algorithm has only been tested on low-dimensional multi-objective problems, where it shows a clear advantage; it still needs to be tested and modified to deal with high-dimensional multi-objective problems.

At present, the research on MOPSO-MS is at an initial stage. In the future, MOPSO-MS will be applied to typical multi-objective optimization problems in practice, such as intelligent warehouse optimization, robot route optimization, and factory layout optimization, to test the performance of the algorithm.