
Integrating Variable Reduction Strategy With Evolutionary Algorithms for Solving Nonlinear Equations Systems


IEEE/CAA Journal of Automatica Sinica, 2022, Issue 1

Aijuan Song, Guohua Wu, Witold Pedrycz, and Ling Wang

Abstract—Nonlinear equations systems (NESs) are widely used in real-world problems, and they are difficult to solve due to their nonlinearity and multiple roots. Evolutionary algorithms (EAs) are one class of methods for solving NESs, given their global search capabilities and their ability to locate multiple roots of a NES simultaneously within one run. Currently, the majority of research on using EAs to solve NESs focuses on transformation techniques and on improving the performance of the EAs used. By contrast, problem domain knowledge of NESs is investigated in this study, where we propose the incorporation of a variable reduction strategy (VRS) into EAs to solve NESs. The VRS makes full use of the structure of the equation system expressing a NES and uses some variables (i.e., core variables) to represent other variables (i.e., reduced variables) through the variable relationships that exist in the equation system. It enables the reduction of some variables and equations and shrinks the decision space, thereby reducing the complexity of the problem and improving the search efficiency of the EAs. To test the effectiveness of the VRS in dealing with NESs, this paper mainly integrates the VRS into two existing state-of-the-art EA methods (i.e., MONES and DR-JADE), respectively, according to the integration framework of the VRS and EA. Experimental results show that, with the assistance of the VRS, the EA methods can produce better results than the original methods and other compared methods. Furthermore, extensive experiments regarding the influence of different reduction schemes and EAs substantiate that a better EA solving a NES with more reduced variables tends to provide better performance.

I. INTRODUCTION

NONLINEAR equations systems (NESs) have emerged in the fields of science, engineering, economics, etc. [1]. Numerous real-world problems can be modeled as NESs, arising in areas such as chemistry [2], robotics [3], electronics [4], signal processing [5], and physics [6]. Unlike linear equations systems, NESs contain many nonlinear operators such as a^x, ln x, x^n, and trigonometric functions. Furthermore, the overwhelming majority of NESs have more than one equally important root. These two characteristics make the problem challenging.

There is a class of methods for finding the numerical solutions of NESs. Such methods mainly include Newton methods [7], quasi-Newton methods [7], interval-based methods, e.g., interval-Newton [8], homotopy continuation (embedding) methods [8], [9], trust-region methods [9], secant methods [10], Halley methods [11], and branch and bound methods [11]. Nevertheless, many of these methods have the following weaknesses: 1) they aim at locating just a single optimal solution rather than multiple optimal solutions in a single run; 2) they have high requirements for prior knowledge, such as derivatives and good starting values; 3) the quality of their solutions is strongly problem-dependent and sensitive to the initial guess. Therefore, such methods are no longer sufficient. Over the past decade, researchers have also developed metaheuristics to solve NESs while continuously improving the methods that study the numerical solutions of NESs.

Metaheuristics possess many advantages, such as simple implementation, strong versatility, and strong global search capabilities [12]. In comparison with the methods that study numerical solutions of NESs, they enjoy unique benefits, especially for solving complex problems: they require less prior knowledge, need no derivatives, and depend less on problem characteristics and the initial solution. However, metaheuristics also face quite a few challenges when solving NESs, for instance, when dealing with large-scale and high-dimensional NESs and when finding multiple roots in a single run.

As a crucial branch of metaheuristics, the evolutionary algorithm (EA) is essentially a global stochastic search algorithm [12] and demonstrates the capacity for locating multiple solutions over a single run. EAs are prevalent and effective methods for solving NESs. A NES needs to be transformed into an optimization problem prior to the solving process by EAs, and the transformed problem is generally a multi-modal or multi-objective optimization problem. At present, EAs that have been applied to solve NESs comprise the evolutionary strategy (ES), the particle swarm optimization algorithm (PSO), the differential evolution algorithm (DE), the genetic algorithm (GA), etc. For example, Geng et al. [13] proposed a ranking method in ES for solving NESs. Ouyang et al. [14] developed a hybrid PSO, which solves NESs by combining PSO with the Nelder-Mead simplex method. Turgut et al. [15] designed a chaotic-behavior PSO to solve NESs, which improves the robustness and effectiveness of the algorithm through different chaotic maps. Gong et al. [16] argued that locating multiple roots by repulsion techniques was a promising approach and proposed a repulsion-based adaptive DE (RADE) for solving NESs. Ren et al. [17] developed an efficient GA with symmetric and harmonious individuals for solving NESs, and Joshi and Krishna [18] used an improved GA to solve NESs. Thus, obtaining multiple optimization solutions for the transformed optimization problem by EAs corresponds to obtaining multiple roots of the NES.

To solve NESs by EAs effectively, we can pay attention to the transformation technique, the algorithm, and the problem itself, i.e., designing more reliable transformation techniques, designing more efficient EAs, and reducing the complexity of the problem. Some research shows that effectively integrating an algorithm with problem domain knowledge can generally improve the performance of the algorithm [19]. At present, for efficiently solving NESs, the main body of the literature is dedicated to transformation techniques and the performance of the EAs but lacks work on reducing the complexity of the problems. Hence, when it comes to solving large-scale and high-dimensional NESs, especially complicated NESs in the real world, many EAs for solving NESs still face tremendous challenges. For instance, the performance of the method proposed in [20] is competitive compared with many state-of-the-art EAs but deteriorates when the number of variables of a NES increases to 20. Different from most previous studies, we focus on how to enhance optimization efficiency from the problems themselves when solving NESs, especially large-scale and complicated NESs.

The variable reduction strategy (VRS) [21] can make full use of problem domain knowledge to reduce problems and thus lower their complexity. Currently, the VRS has been applied to equality constrained optimization problems [21] and to unconstrained optimization problems with derivative information [22], which significantly improved the optimization efficiency. Therefore, it is of considerable significance, and also a challenge, to study how to apply the VRS to solve NESs effectively.

Based on the above considerations, we propose the integration of the VRS with EAs for solving NESs. The VRS represents some variables with other variables through the relationships among variables in the equations, thereby reducing the complexity of the problems and improving the search efficiency of the algorithms. The main contributions of this paper can be summarized as follows:

1) The core contribution is that we make the first attempt to utilize the VRS to reduce the complexity of NESs. We analyze and explain in detail how to apply the VRS to simplify a NES. With the assistance of the VRS, a NES entails a smaller decision space and lower complexity.

2) A general framework is proposed for integrating the VRS with an arbitrary EA for solving NESs. With this framework, the optimization efficiency of the EAs used can be significantly improved when solving NESs. To evaluate the integration method, we specifically integrate the VRS with two state-of-the-art algorithms, referred to as DR-JADE (dynamic repulsion-based adaptive DE with an optional external archive) [20] and MONES (a method that transforms a NES into a bi-objective optimization problem) [23], respectively.

3) Extensive experiments are conducted on two test suites, which respectively include 7 NESs (containing 2 real-world problems) [23] and 46 NESs (containing 5 real-world problems) [20]. Experimental results show that, with the employment of the VRS, the EAs obtain better performance than the original EAs, demonstrating the effectiveness of the VRS for solving NESs and its great potential for practical applications. Moreover, a series of comparative experiments with different reduction schemes and EAs reveal that an EA with better performance solving a NES with more reduced variables tends to provide better performance.

The paper is organized as follows. Section II describes NESs and briefly reviews two transformation techniques for NESs. In Section III, the core idea and the reduction process of the VRS are described; then, after presenting the integration framework of the VRS and EAs, we integrate the VRS with MONES and DR-JADE, respectively. Section IV applies the VRS to two test suites and designs experiments to study the superiority of the EAs with the VRS over other methods, as well as the influence of the reduction scheme and the EA on the integration method. Section V concludes this paper and briefly explores future research directions.

II. PROBLEM DESCRIPTION AND RELATED WORK

A. Nonlinear Equations Systems

A NES can be formulated as shown in (1),

where S ⊆ R^n is the decision space defined by the parametric constraints of the decision variables; it is a compact set that denotes the feasible region of the search space. The decision space can be described as in (2).

Solving the NES in (1) yields a series of optimization solutions in the decision space, where each optimization solution satisfies the relationships in (3) and (4).
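
In the notation used throughout this paper (f_j denotes the j-th equation, and the box S is defined by the lower and upper bounds of each variable x_k), (1)–(4) can be written in the following standard form, reconstructed to be consistent with how they are referenced later:

$$
\begin{cases}
f_1(\vec{x}) = 0\\
\quad\ \vdots\\
f_m(\vec{x}) = 0
\end{cases}
\qquad \vec{x} = (x_1, x_2, \ldots, x_n) \in S,
$$

$$
S = \big\{\vec{x} \in \mathbb{R}^{n} \ \big|\ \underline{x}_k \le x_k \le \overline{x}_k,\ k = 1, 2, \ldots, n\big\},
$$

$$
\vec{x}^{\,*} \in S, \qquad f_j(\vec{x}^{\,*}) = 0, \quad j = 1, 2, \ldots, m.
$$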

Most NESs have more than a single root. For instance, Fig. 1 depicts a NES with two nonlinear equations and two decision variables, whose expression is given in (5),

Fig. 1. An example of a NES problem with multiple roots.

where x_i ∈ [−5, 5], i = 1, 2.

As can be seen from Fig. 1, the NES shown in (5) has nine roots. Each root can be equally crucial, so one of the major tasks in solving NESs is locating multiple roots over a single run. Besides, improving the quality of the solutions is also crucial; the quality of the solutions refers to how close the solutions obtained by an algorithm are to the real solutions of a NES.

B. Transformation Techniques

NESs are generally transformed into optimization problems to find roots efficiently, which has several advantages, such as low dependence on problem characteristics and locating multiple solutions in a single run. Currently, popular transformation techniques can be roughly classified into three categories: 1) single-objective optimization-based transformation techniques [20], [24], [25]; 2) multiobjective optimization-based transformation techniques [23], [24], [26]; and 3) constrained optimization-based transformation techniques [27], [28]. Herein, to prepare for the following research, we briefly introduce two representative techniques, namely the dynamic repulsion-based EA (DREA) [20] and MONES [23], which belong to the single-objective and multiobjective optimization-based categories that are more commonly used than the constrained optimization-based one.

DREA [20] transforms a NES into a single-objective optimization problem and locates multiple roots by a dynamic repulsion technique. The repulsion function of DREA is given in (6),

where t is the current iteration counter and t_max is the maximal number of iterations. DREA can be classified as a multiplicative repulsion technique.
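
As a generic illustration of multiplicative repulsion (a sketch only; the exact definition of (6) is given in [20]), the transformed objective f(→x) ≥ 0 can be inflated around the k roots found so far, x*_1, ..., x*_k, by

$$
f_R(\vec{x}) = f(\vec{x}) \cdot \prod_{i=1}^{k} \coth\!\big(\delta(t)\,\lVert \vec{x} - \vec{x}^{\,*}_i \rVert\big),
$$

where the coefficient δ(t) is scheduled as a function of t/t_max; the coth factors grow rapidly as →x approaches an already-found root, pushing the minimization toward new roots.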

MONES [23] transforms a NES into a bi-objective optimization problem to locate multiple roots. The transformed bi-objective optimization problem consists of two parts: the location function and the system function. The location function is formulated in (14),

where x1 is the first decision variable of the decision vector. Equation (14) determines the Pareto front of the optimization problem. The system function of MONES is given in (15).

Equation (15) relates the two possible transformed optimization problem versions to the NES. By combining the location function with the two optimization problem versions in (15), we can derive the bi-objective optimization problem (16) representing the original NES.

Equation (16) makes the Pareto optimal solutions of the transformed bi-objective optimization problem correspond to the optimal solutions of the NES. Since the system functions of the NES optimization solutions are equal to 0, their images in the objective space are located on the line segment defined by y = 1 − x.
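
A sketch of this construction, consistent with the description above (see [23] for the exact definitions and the normalization of x1), is

$$
\min_{\vec{x} \in S}\ \big(f_1(\vec{x}),\, f_2(\vec{x})\big)
= \big(\alpha(x_1) + \beta(\vec{x}),\ \ 1 - \alpha(x_1) + \beta(\vec{x})\big),
$$

where α(x1) ∈ [0, 1] plays the role of the location function (14) and β(→x) ≥ 0 is a system function of the kind given in (15), e.g., the sum of absolute or squared equation residuals. A root of the NES has β = 0, so its image (α, 1 − α) lies exactly on the segment y = 1 − x, in line with the statement above.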

III. VARIABLE REDUCTION STRATEGY

A. Variable Reduction Strategy in Nonlinear Equation Systems

The main idea of the VRS is to first explore the relationships among variables by utilizing the equality optimality conditions of an optimization problem. An equality optimality condition refers to an equality condition that the optimization problem must satisfy at an optimal solution, expressed in the form of equations. For a NES, the equality optimality conditions are the equations of the NES themselves. Second, according to the types and relationships of the variables, we use some of the variables to represent and compute the other variables during the iterative search process of an EA. In this way, the variables represented and computed by other variables are reduced and are not directly optimized (i.e., they are not search dimensions) during the problem-solving process. As a result, some variables and spatial dimensions can be removed, which reduces the complexity of the problem and improves the search efficiency of the EA.

Take the NES shown in (1)–(4) as an example. Assume that A denotes the set of decision variables included in the NES, A = {x_k | k = 1, 2, ..., n}, and A_j is the collection of the decision variables involved in the j-th equation, A_j ⊆ A. For the equation f_j(→x) = 0, 1 ≤ j ≤ m, suppose we can obtain a relationship of the form (17), in which x_k is explicitly expressed by the other variables in A_j.

Then, in the process of locating the NES solutions, x_k can be calculated through the relationship R_{k,j} and the values of {x_l | l ∈ A_j, l ≠ k}. Thus, the decision variable x_k can be reduced via the j-th equation. Meanwhile, since the variable relationship (17) is deduced from f_j(→x) = 0, the equation f_j(→x) = 0 is always satisfied when computing the value of x_k; therefore, the equation f_j(→x) = 0 can be reduced as well. In addition, a constraint condition associated with the variable x_k is added, as shown in (18).
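
Written out in this notation (a reconstruction from the surrounding text), the relationship (17) and the induced constraint (18) read

$$
x_k = R_{k,j}\big(\{x_l \mid l \in A_j,\ l \neq k\}\big),
\qquad
\underline{x}_k \ \le\ R_{k,j}\big(\{x_l \mid l \in A_j,\ l \neq k\}\big) \ \le\ \overline{x}_k .
$$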

For the sake of a clear description, some key concepts are defined below:

1) Core Variable(s): The variable(s) used to represent other variables in the equations;

2) Reduced Variable(s): The variable(s) expressed and computed by the core variables;

3) Eliminated Equation(s): The equation(s) eliminated along with the reduction of variables because they are automatically satisfied by all solutions.

For example, in (17), x_k is a reduced variable, {x_l | l ∈ A_j, l ≠ k} is the collection of core variables, and f_j(→x) = 0 is the eliminated equation. Through the above variable reduction strategy, all variables in the NES can be divided into two categories: core variables and reduced variables. The collection of the q core variables is denoted as X_C, and the collection of reduced variables as X_R.

Hence, we have X_C ∪ X_R = A and X_C ∩ X_R = ∅. The reduced decision vector can be represented by the core variables, and the reduced decision space formed by the reduced decision vector is recorded as S*. The collection of the l eliminated equations can be denoted in the same manner.

Accordingly, we can obtain the reduced NES, shown in (22)–(24),

where p is the number of equations in the reduced NES, p = m − l, and R_{ri,sj} denotes the reduction relationship through which the reduced variable x_ri is computed from the eliminated equation f_sj.
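
Putting these pieces together, (19)–(24) have the following structure (a sketch reconstructed from the notation of (17) and the description above):

$$
X_C = \{x_{c_1}, \ldots, x_{c_q}\}, \qquad
X_R = \{x_{r_1}, \ldots, x_{r_l}\}, \qquad
\{f_{s_1}, \ldots, f_{s_l}\}\ \text{(eliminated equations)},
$$

$$
f_j(\vec{x}) = 0 \quad \text{for the remaining } p = m - l \text{ equations},
$$

$$
x_{r_i} = R_{r_i, s_i}\big(\{x_t \mid t \in A_{s_i},\ t \neq r_i\}\big), \qquad
\underline{x}_{r_i} \le R_{r_i, s_i}(\cdot) \le \overline{x}_{r_i}, \qquad i = 1, \ldots, l,
$$

with q + l = n, so that an EA searches only over the q core variables in the reduced decision space S*.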

The realization of the VRS is based on explicit variable relationships of the form (17). However, due to the complexity and nonlinearity of NESs, not all variables in an equation system admit such a relationship. Here, according to the characteristics of NESs and our previous research [21], we provide simple empirical guidance on what kinds of NESs are suitable to be reduced by the VRS (a small hypothetical example follows the list):

1) If an equation of a NES includes a variable of degree three or lower, this variable and the corresponding equation can be reduced.

2) All linear variables and their corresponding equations can typically be reduced in a NES.

3) Regarding nonlinear operations like sin x, cos x, ln x, and exp(x), if a variable is operated on by only one such operator and appears separately in an equation, the variable and the equation can be reduced.
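
As a simple hypothetical illustration of rules 1)–3) (this system is not one of the benchmark problems used later), consider

$$
\begin{cases}
x_1 + x_2^{2} - 1 = 0\\
\sin(x_1) - x_2 = 0
\end{cases}
\qquad x_1, x_2 \in [-5, 5].
$$

The first equation contains x1 linearly (rule 2) and x2 with degree two (rule 1), so either variable can be reduced through it. Choosing x1 gives the relationship x1 = 1 − x2², eliminates the first equation, adds the constraint −5 ≤ 1 − x2² ≤ 5, and leaves the single reduced equation sin(1 − x2²) − x2 = 0 in the core variable x2; alternatively, x2 could be reduced through the second equation as x2 = sin(x1), since it appears there linearly.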

Based on the above guidance, we can preliminarily judge whether a NES can be reduced or not. In addition, to show how the VRS works for a NES, consider the quintessential illustrative example (25) from [20],

where x1 ∈ [−5, 5], x2 ∈ [−1, 3], x3 ∈ [−5, 5]. A NES may have more than one reduction scheme; e.g., for (25), x3 in (25a), x1, x2, and x3 in (25b), and x1 and x3 in (25c) can be reduced according to the above guidance. If we choose x3 in (25b) as the reduced variable, the reduction scheme (26) can be obtained.

Consequently, in this case, x3 is the reduced variable, x1 and x2 are the core variables, and the eliminated equation is (25b). The obtained reduced NES is given in (27).

During the reduction process, the constraint condition(s) associated with the reduced variable(s) should be considered; for example, reducing (25) brings in the constraint conditions presented in (27). Furthermore, if we choose x3 in (25a) as the reduced variable, we get the reduction scheme (28).

As in the reduction scheme (28), we may choose a variable appearing in an absolute-value or quadratic term as a reduced variable, in which case the reduced variable may have more than one value computed from the variable relationship. Therefore, reduced-variable values that exceed the upper or lower bounds of the constraint conditions are treated by the following handling technique.

On the one hand, a reduced variable may have a single candidate value, as in (26). In this case, when the reduced variable value is higher than its upper bound, we set it equal to the upper bound; conversely, when it is lower than its lower bound, we set it equal to the lower bound.

On the other hand, a reduced variable may have more than one candidate value, as in (28); then we preferentially choose a value that does not violate the constraint condition. For example, when the constraint condition is 0 ≤ x1 = 1 ± x2 ≤ 1 and the core variable x2 = 1, x1 calculated from x2 is either 0 or 2. The value x1 = 2 violates the constraint condition and is not feasible, so we set x1 to 0. If all the candidate values violate the constraint condition, we set the reduced variable equal to its upper or lower bound value.
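
A minimal Python sketch of this handling rule (illustrative only, not the authors' implementation; which feasible candidate is preferred when several are feasible is our own simplification):

```python
# Handle the value(s) of one reduced variable computed from the core variables.
# `candidates` holds the candidate value(s); `lb` and `ub` are the variable's bounds.
def repair_reduced_value(candidates, lb, ub):
    feasible = [v for v in candidates if lb <= v <= ub]
    if feasible:
        return feasible[0]            # prefer a value that satisfies the constraint
    v = candidates[0]                 # otherwise clip the value to the violated bound
    return ub if v > ub else lb

# Example from the text: x1 = 1 +/- x2 with 0 <= x1 <= 1 and x2 = 1 gives
# candidates {2, 0}; 2 is infeasible, so x1 is set to 0.
print(repair_reduced_value([2.0, 0.0], 0.0, 1.0))   # -> 0.0
```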

B. Integration of the VRS and EA

1) A Framework for Integrating the VRS With an EA to Solve NESs: The framework for integrating the VRS with an EA is exhibited in Fig. 2. In this framework, a NES is first processed by the VRS; the variable reduction process can be seen as a pre-processing of the NES. Second, a transformation technique is used to transform the reduced NES into an optimization problem, and an EA is then used to solve the transformed optimization problem. This yields a series of optimization solutions of the transformed problem, which correspond to the roots of the reduced NES. At last, the relationships between the reduced variables and the core variables are used to compute the values of the reduced variables. By combining the values of the reduced variables and the core variables, a series of roots of the original NES is finally obtained.
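
The following self-contained sketch walks the hypothetical two-variable system introduced in Section III-A through this pipeline; a crude random search stands in for the EA and minimizing the absolute residual stands in for the transformation technique, so none of it is the authors' code.

```python
import math
import random

# Reduced NES of the hypothetical system x1 + x2^2 - 1 = 0, sin(x1) - x2 = 0:
# VRS uses the first equation to express x1 = 1 - x2^2, so the search runs
# over the core variable x2 only and the first equation is eliminated.
def reduced_objective(x2):
    x1 = 1.0 - x2 ** 2                  # reduction relationship for the reduced variable x1
    x1 = min(5.0, max(-5.0, x1))        # handle the constraint brought by the reduction
    return abs(math.sin(x1) - x2)       # residual of the only remaining equation

def random_search(objective, bounds, evals=20000):
    """A stand-in for the EA: sample the core variable and keep the best point."""
    best_x, best_f = None, float("inf")
    for _ in range(evals):
        x = random.uniform(*bounds)
        f = objective(x)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

x2_star, residual = random_search(reduced_objective, (-5.0, 5.0))
x1_star = 1.0 - x2_star ** 2            # recover the reduced variable from the core variable
print(f"root ~ ({x1_star:.4f}, {x2_star:.4f}), residual = {residual:.2e}")
```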

Fig. 2. A framework for the integration of VRS and EA.

In this framework, the VRS can theoretically be combined with any transformation technique and any EA. In this work, we mainly study the integration of the VRS with two state-of-the-art methods, i.e., MONES and DR-JADE.

2) The Integration of the VRS With DR-JADE or MONES: MONES [23] was proposed by Yong et al. in 2014. MONES transforms the NES described by (1)–(4) into the form (16), and then solves the transformed bi-objective optimization problem by NSGA-II (a fast and elitist multi-objective genetic algorithm) [29]. The method that integrates the VRS into MONES is abbreviated as VR-MONES.

DREA [20] was presented by Liao et al. in 2020. DREA transforms a NES into the form (6) and uses JADE (adaptive differential evolution with an optional external archive) [30] as the optimization engine. The combined method is abbreviated as DR-JADE. We integrate the VRS into DR-JADE and name the resultant method VR-DR-JADE for short.

As mentioned in Section III-A, a reduced variable may have more than one possible value, which may lead to different objective function values for an individual. In this paper, the objective function value refers to the bi-objective function value for VR-MONES or to the transformed single-objective function value for VR-DR-JADE. Taking the original NES (1)–(4) and the reduced NES (22)–(24) as an example, Fig. 3 intuitively displays the calculation process of the objective function value of an individual in a population.

Fig. 3. Illustration of computing the objective function value for the individual.

In Fig. 3, the bottom layer is the i-th individual in the population during the evolution, which consists of the q core variable values (x_{i,c1}, x_{i,c2}, ..., x_{i,cq}). First, we compute the reduced variable values from these core variable values and the variable relationships expressed in (23). The obtained reduced variable values are put into the set X_{i,r}, where X_{i,rj} denotes the set of the value(s) of the j-th reduced variable (a reduced variable may have more than one candidate value). Second, we handle the reduced variable values that violate the constraints by the technique introduced in Section III-A and thus obtain the feasible reduced-variable set. Then, we combine the values of the reduced variables in this set with the core variable values of the individual to form the new individual(s), denoted by X_i. After that, the values in X_i are substituted into the reduced NES (22) to compute the objective function value(s). Finally, we select the minimum of these values as the objective function value of the individual.
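
A minimal sketch of this evaluation (assumed argument conventions, not the authors' code): the individual's core values are expanded with every feasible combination of candidate reduced-variable values, and the smallest resulting objective value is kept.

```python
from itertools import product

def evaluate_individual(core, relations, bounds, objective):
    # core:      tuple of core variable values of the i-th individual
    # relations: one function per reduced variable, mapping core values to candidate value(s)
    # bounds:    (lower, upper) bound pair for each reduced variable
    # objective: objective of the reduced NES, e.g., the residual of the remaining equations (22)
    candidate_sets = []
    for relation, (lb, ub) in zip(relations, bounds):
        candidates = relation(core)                       # candidate value(s) of one reduced variable
        feasible = [v for v in candidates if lb <= v <= ub]
        # fall back to a clipped candidate if none is feasible (Section III-A)
        candidate_sets.append(feasible or [min(ub, max(lb, candidates[0]))])
    best = float("inf")
    for combo in product(*candidate_sets):                # each combination forms one candidate individual
        best = min(best, objective(list(core) + list(combo)))
    return best                                           # minimum objective value over the candidates
```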

Next, for VR-MONES, we compute the transformed bi-objective functions by (14)–(16), in which the location function uses the first decision variable of the decision vector after reduction. For VR-DR-JADE, the objective function value obtained in Fig. 3 is the function value of the individual, and the corresponding repulsion value can be computed by (6). Then, we use VR-MONES or VR-DR-JADE to iterate and obtain an evolved population pop. Finally, combining the core variable values in pop with the reduced variable values computed from the reduction relationships forms the final population.

Currently, the VRS has been used in equality constrained optimization problems, and the EAs with the VRS achieve great improvement compared with the original methods. The differences between applying the VRS to NESs and to equality constrained optimization problems are mainly reflected in two aspects. First, for equality constrained optimization problems, the VRS reduces the problem through the equality constraints; in contrast, for a NES the VRS reduces the problem through the equation system itself. Second, the reduced equality constrained optimization problem can be solved directly by the original EAs, whereas solving a reduced NES requires handling the inequality constraints introduced by the reduction process.

IV. EXPERIMENTAL STUDY

To demonstrate that the VRS can improve the performance of the original algorithms, this section mainly focuses on the comparison of VR-MONES and VR-DR-JADE with their corresponding original algorithms, i.e., MONES and DR-JADE, respectively. In Section IV-A, we use a benchmark suite with 7 NESs (two of which are real-world problems) [23] to test the efficiency of the VRS by comparing the experimental results of VR-MONES and MONES. Moreover, in Section IV-B, a large-scale test suite of 46 NESs (five of which are real-world problems) [20] is used to show the effectiveness of the VRS by comparing VR-DR-JADE with other popular and state-of-the-art methods. What is more, to study the effect of different reduction schemes and EAs, a series of experiments concerning reduction schemes with different numbers of reduced variables and different reduced variables, as well as the integrated methods with various EAs, are conducted in Section IV-C. At last, in Section IV-D, we briefly summarize the experimental results obtained in Sections IV-A, IV-B, and IV-C.

A. Experimental Study on VR-MONES

We use the VRS for the 7 NESs in reference [23], and compare the performance of VR-MONES and MONES in terms of two performance indicators, i.e., the inverted generational distance and the number of the optimal solutions found.

1) Test Problems: In this section, seven test problems (denoted as F1–F7) are used to investigate the effectiveness of VR-MONES. Among them, the optimal solutions of F1–F4 are known. F5–F7 have infinitely many optimal solutions, which are not completely known. F6 and F7 are real-world problems derived from neurophysiology application models and economics system models, respectively. The information on the seven test problems is summarized in Table I, including the number of decision variables (D), the decision space (S), the number of linear equations (LE), the number of nonlinear equations (NE), and the number of roots (NoR).

TABLE I CHARACTERIZATIONS OF THE TEST PROBLEMS F1–F7.

2) Performance Metrics: In this section, two performance indicators are introduced to evaluate the capability of VR-MONES and MONES to locate the roots of a NES.

a) The inverted generational distance (IGD) [31]: The IGD indicator is computed from the quantities defined below.

Here, IP is the set of the images of the individuals of a population in the objective space, IP* is the set of the images of all optimal solutions of a NES in the objective space, and d(v, IP) is the minimum Euclidean distance between a point v ∈ IP* and the points in IP. If a NES (such as F5, F6, or F7) has infinitely many roots, IP* is a set of uniformly distributed points in the objective space along the Pareto front; |IP*| is the number of optimal solutions in IP*, and we set |IP*| = 100 for F5, F6, and F7. In this section, the objective space is defined by x = x_r and y = 1 − x_r for MONES or VR-MONES, where x_r is the first decision variable of a NES for MONES or of a reduced NES for VR-MONES. The IGD can measure both the diversity and the convergence of IP.
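
In this notation, the IGD indicator takes the standard form

$$
\mathrm{IGD}(IP, IP^{*}) = \frac{1}{|IP^{*}|} \sum_{v \in IP^{*}} d(v, IP),
\qquad d(v, IP) = \min_{u \in IP} \lVert v - u \rVert .
$$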

b) Number of the optimal solutions found (NOF) [23]: The NOF indicator counts the roots of a NES that an algorithm has found to within a tolerance ε.

Here, ε is a user-defined threshold value. In this section, we set ε=0.01 for F5 and 0.02 for the other six problems according to the number of decision variables [23]. The larger the NOF-indicator value is, the more roots are found.

3) Variable Reduction Results: According to the reduction method given in Section III-A, a NES may have more than one reduction scheme. In this section, we show only one reduction scheme thought to be promising for each NES. Generally, the more variables are reduced, the smaller the decision space of a NES becomes and the better the results that are obtained. Hence, in this work a promising reduction scheme refers to a reduction scheme that maximizes the number of reduced variables. The expressions of the 7 NESs and the related reduction schemes are shown in Table II, and the subsequent experiments are based on these reduction schemes as well.

As can be seen from Table II, each problem in F1–F4 contains two decision variables and two equations, of which one variable and one equation can be reduced. Two variables and two equations can be reduced for NES F5. Three variables and three equations can be reduced for NES F6, and the reduced F6 contains three variables and three equations. In this section, we set c_k = 0, 1 ≤ k ≤ D − 1, for F7; one variable and one equation can be reduced for NES F7.

4) Experimental Results and Discussion on F1–F7: For a fair comparison, the parameter settings of VR-MONES are the same as those of the original MONES in [23]. To make the experimental results reliable, 30 independent runs are executed on each NES, and the maximum number of generations is set to 500 (i.e., the maximum number of function evaluations is 50 000) for each run. Table III presents the best, mean, worst, and standard deviation of the IGD-indicator and NOF-indicator values generated by VR-MONES and MONES.

TABLE II EXPRESSIONS AND VARIABLE REDUCTION OF TEST SUITE F1–F7.

From the results in Table III, with regard to the IGD indicator, the best, mean, worst, and standard deviation of the IGD-indicator values obtained by VR-MONES are significantly improved for all test problems. For example, for NES F3, the best, mean, worst, and standard deviation of the IGD-indicator values obtained by VR-MONES are improved by 92.61%, 95.34%, 98.84%, and 99.21%, respectively, compared with those obtained by MONES. This indicates that the solutions obtained by VR-MONES are closer to the actual known solutions than those obtained by MONES for all the test problems. We also performed the Wilcoxon test on the mean IGD-indicator values for all the test problems over 30 runs (the statistical tests reported in this paper are calculated by the KEEL 3.0 software [32]). Comparing VR-MONES with MONES, we obtain R+ = 28.0, R− = 0.0, and p = 1.56E−02 by the Wilcoxon test. Since VR-MONES provides a higher R+ value than R− value and the p value is less than 0.05, VR-MONES is significantly better than MONES on the seven test problems.

With respect to the NOF indicator, it is clear that the best, mean, worst, and standard deviation of the NOF-indicator values obtained by VR-MONES are better than or at least equal to those obtained by MONES for NESs F1–F7. The results reveal that VR-MONES can find more roots than MONES. For each NES in F1–F4 with known optimal solutions, VR-MONES successfully locates all roots over 30 runs. For each NES in F5–F7 with infinitely many roots (the default number of optimal solutions is 100 in this section), VR-MONES is able to maintain many more roots than MONES, especially for F5 and F7.

TABLE III STATUS OF IGD-INDICATOR AND NOF-INDICATOR IN VR-MONES AND MONES. THE BETTER OR EQUAL IGD-INDICATOR AND NOF-INDICATOR FOR EACH NES ARE HIGHLIGHTED IN BOLDFACE

To further show the performance of MONES and VR-MONES, Fig. 4 provides the convergence process of the mean IGD-indicator values provided by MONES and VR-MONES for all the test problems over 30 independent runs.

As depicted in Fig. 4, at the beginning of the evolution, the convergence curve of VR-MONES or MONES starts with a relatively large mean IGD-indicator value. As the optimization proceeds, the mean IGD-indicator values of VR-MONES and MONES both converge continuously toward a positive number close to 0. In particular, VR-MONES converges to a smaller IGD-indicator value for each NES in F1–F7. For example, during the evolution on NES F4, the mean IGD-indicator values of MONES and VR-MONES start at 0.2699 and 0.064 and eventually converge to 0.0094 and 0.0025, respectively. This reveals that VR-MONES can robustly obtain better solutions while maintaining the diversity of the population, and that the integration of the VRS noticeably improves the search efficiency of MONES.

Fig. 4. Mean of IGD-indicator values for VR-MONES and MONES during the evolution.

Fig. 5. Evolution of VR-DR-JADE over a typical run on E11.

Fig. 6. Evolution of DR-JADE over a typical run on E11.

B. Experimental Study on VR-DR-JADE

To further evaluate the effectiveness of the VRS, this section applies the VRS to another test suite with 46 NESs from [20]. In this case, the representative algorithm DR-JADE is used to solve the test problems with and without the VRS. The performance of VR-DR-JADE and other state-of-the-art methods is evaluated by the root ratio, the success rate, and other indicators obtained from the experiments.

1) Test Problems: In this section, we choose 46 NESs with diverse features to extensively evaluate the performance of an algorithm. Brief information on the 46 NESs (denoted as E1–E46) can be found in [20]. The optimal solutions of NESs E1–E42 are known, while the optimal solutions of NESs E43–E46 are unknown. NESs E13 and E43–E46 come from real-world applications. The NFEs_max values differ across problems owing to their different difficulties. In this section, to fairly compare the performance of VR-DR-JADE and DR-JADE, we use the same NFEs_max as in [20].

2) Performance Metrics: Owing to the difference between MONES and DR-JADE and the suitability of different evaluation merits, two other performance metrics inspired by multi-modal and multi-objective problems are adopted to comprehensively assess the solutions found by a method in this section. Moreover, one performance metric is employed to evaluate the quality of the roots found by a method.

a) Root ratio (RR) [33]: The RR computes the average ratio of roots found over multiple runs.

Here, N_r is the number of runs, N_f,i is the number of roots found in the i-th run, and NoR is the number of actual known roots of a NES. In this section, a solution is regarded as a root if its repulsion value is less than 1e−5 [20]. To make the experimental results general and reliable, each algorithm is executed over N_r = 30 independent runs for each NES.
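
In this notation, the RR indicator can be written as

$$
\mathrm{RR} = \frac{\sum_{i=1}^{N_r} N_{f,i}}{N_r \cdot NoR}.
$$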

b) Success rate (SR) [34]: The SR measures the ratio of successful runs, where a successful run refers to a run in which all the actual known roots of a NES are found.

Here, N_r,s denotes the number of successful runs.
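
In this notation, the SR is expressed as

$$
\mathrm{SR} = \frac{N_{r,s}}{N_r}.
$$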

The optimal solutions of NESs E43–E46 are unknown, so the RR and SR criteria cannot be used for their performance evaluation [34]. We discuss the performance on NESs E43–E46 later in this section.

c) Evaluating the quality of roots found (QR): The QR indicator is the mean of the objective function values of the roots found in the i-th run.

Here, m is the number of equations of a NES, and the objective function value is evaluated at each root found in the i-th run. The QR indicator is adapted from the IGD indicator and measures the quality of the roots obtained by an algorithm.
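
A form consistent with this description (an assumption on our part; [20] may use squared residuals instead of absolute values in the inner sum) is

$$
\mathrm{QR}_i = \frac{1}{N_{f,i}} \sum_{l=1}^{N_{f,i}} \sum_{j=1}^{m} \big| f_j(\vec{x}^{\,*}_{i,l}) \big|,
$$

where x*_{i,l} denotes the l-th root found in the i-th run.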

3) Variable Reduction Results: Owing to space limitations, the expressions of the 46 NESs and the selected reduction schemes are shown in the supplementary file (http://faculty.csu.edu.cn/guohuawu/zh_CN/zdylm/193832/list/index.htm).

Among the 46 NESs, all except E5, E12, and E21 can be reduced by the VRS. A NES may have more than one reduction scheme; we show one promising reduction scheme for each NES in the supplementary file. The NESs that cannot be reduced fall into two categories:

a) No variable in a NES can be explicitly expressed by other variables, such as E5 and E21.

b) There is a periodic function in a NES and no variable can be completely represented and calculated by other variables in the NES, such as E12.

4) Experimental Results and Discussion on E1–E46: Except for NESs E5, E12, and E21, which cannot be reduced, the experimental results of DR-JADE and VR-DR-JADE on the other 39 NESs concerning RR, SR, and QR over 30 independent runs are reported in Table S-2 of the supplementary file.

From the results in Table S-2, comparing VR-DR-JADE with DR-JADE, the mean and standard deviation of the QR-indicator values decrease for every test problem in E1–E42 (except E5, E12, and E21), other than a slight increase for E25. This indicates that most roots located by VR-DR-JADE are closer to the actual known roots than those located by DR-JADE. Additionally, to further compare the quality of the overall solutions of VR-DR-JADE and DR-JADE, the Wilcoxon test on the mean QR-indicator values in Table S-2 is performed. Comparing VR-DR-JADE with DR-JADE, we obtain R+ = 687.0, R− = 16.0, and p = 2.46E−09. Since VR-DR-JADE provides a higher R+ value than R− value and the p value is less than 0.05, VR-DR-JADE significantly outperforms DR-JADE in terms of the overall quality of solutions.

VR-DR-JADE obtains 11 better and 27 equal values in both RR and SR compared with DR-JADE for NESs E1–E42 (except E5, E12, and E21). It is worth noting that VR-DR-JADE locates all the roots of NES E24 in every run, whereas DR-JADE cannot locate any root of NES E24 over 30 runs. For NES E9, neither VR-DR-JADE nor DR-JADE can find any roots; however, when the number of decision variables of NES E9 is set to 10, VR-DR-JADE finds all the roots of NES E9 while DR-JADE still cannot find any.

For NES E16, VR-DR-JADE always misses one root in each run, which may be related to the search ability of DR-JADE itself.

To further study the roots obtained by VR-DR-JADE, we compare VR-DR-JADE with nine appealing methods, i.e., DR-JADE [20], an adaptive multiobjective DE with a weighted biobjective transformation technique (A-WeB) [35], MONES [23], an improved harmony search algorithm (I-HS) [36], a neighborhood-based crowding DE (NCDE) [37], a neighborhood-based speciation DE (NSDE) [37], a GA with sequential quadratic programming (GA-SQP) [38], a PSO with the Nelder-Mead method (PSO-NM) [39], and a niche cuckoo search algorithm (NCSA) [40] (except for VR-DR-JADE, DR-JADE, and MONES, the data of the other seven compared methods are taken from [20], and the detailed results of all methods are reported in the supplementary file). Note that if we map the individuals in a population to the objective space defined by MONES for NESs E1–E42, roots that have the same value of the first decision variable would be considered discovered even if only a few of them are located. To address this issue, we adopt another way to judge whether an individual in the final population is a root, i.e., when the minimum Euclidean distance between an individual in the final population and the individuals in the set of optimal solutions of a NES is less than 0.01, the individual is considered a root of the NES. Table IV shows the Friedman test of RR and SR for the 10 methods. Table V displays the Wilcoxon test results obtained by comparing VR-DR-JADE with the other 9 methods for both RR and SR.

TABLE IV AVERAGE RANKINGS OF VR-DR-JADE AND THE OTHER TEN STATE-OF-THE-ART ALGORITHMS OBTAINED BY THE FRIEDMAN TEST FOR BOTH RR AND SR

TABLE V RESULTS OF VR-DR-JADE COMPARED WITH THE OTHER TEN STATE-OF-THE-ART ALGORITHMS OBTAINED BY THE WILCOXON TEST FOR BOTH RR AND SR

From Table IV, VR-DR-JADE achieves the best Friedman test rankings for both SR and RR. These results reveal that the integration of the VRS enables the algorithms to locate more roots than the original algorithms overall. Meanwhile, Table V shows that VR-DR-JADE significantly outperforms the other 9 methods in terms of RR and SR according to the Wilcoxon test, since it provides higher R+ values than R− values in all cases and all p values are less than 0.05. Therefore, we can conclude that integrating the VRS is an effective way to improve the performance of DR-JADE.

To compare the convergence process of VR-DR-JADE and DR-JADE, we portray the roots located by VR-DR-JADE and DR-JADE at the 1st, 5th, 15th, and 30th iteration over a typical run. Figs. 5–6 and Figs. 7–8 respectively show the results for E11 and E19.

Comparing Fig. 5 with Fig. 6, at t = 1 both VR-DR-JADE and DR-JADE find one of the roots of E11. At t = 5, VR-DR-JADE locates five roots while DR-JADE locates only three. At t = 15, VR-DR-JADE locates 11 roots, but DR-JADE locates only nine. At t = 30, VR-DR-JADE has located all 15 roots of E11, while DR-JADE has located only 13.

Comparing Fig. 7 with Fig. 8, at t = 1, owing to the reduced variable with a quadratic term in E19, VR-DR-JADE already locates two roots. At t = 5, VR-DR-JADE has located all ten roots while DR-JADE has located only three. At t = 15 and t = 30, DR-JADE has located 5 and 8 roots, respectively, with two roots still not found at the end of the evolution. The above comparison demonstrates that applying the VRS improves the search efficiency of DR-JADE and allows VR-DR-JADE to locate more roots than DR-JADE after the same number of iterations.

Fig. 7. Evolution of VR-DR-JADE over a typical run on E19.

Fig. 8. Evolution of DR-JADE over a typical run on E19.

TABLE VI STATUS OF DR-JADE AND VR-DR-JADE FOR THE NUMBER OF OBTAINED ROOTS AND THE OBJECTIVE FUNCTION VALUES

In the previous sections, the performance of VR-DR-JADE is verified on the 42 NESs with known roots. For E43–E46, we evaluate the number of roots obtained by DR-JADE and VR-DR-JADE using the best, worst, mean, and standard deviation of the number of obtained roots over 30 runs. What is more, we evaluate the quality of the obtained roots for DR-JADE and VR-DR-JADE by the best, mean, and standard deviation of the objective function values of the obtained roots over a typical run, following the recommendation in [20]. The results are reported in Table VI (the data of DR-JADE come from [20]).

As shown in Table VI, for each NES in E43–E46, the number of roots obtained by VR-DR-JADE is better than or equal to that obtained by DR-JADE in terms of the best, worst, mean, and standard deviation values (except the standard deviation value of E46). Especially for E43 and E46, the mean numbers of roots obtained by VR-DR-JADE are 30 and 28.97 over 30 runs, respectively, which is much more than those obtained by DR-JADE. For E46, although the standard deviation of the number of roots obtained by VR-DR-JADE is larger than that of DR-JADE, the best, worst, and mean values show great improvement. Especially for E44 and E46, the roots found by VR-DR-JADE have significantly better quality than those found by DR-JADE. These observations reveal that VR-DR-JADE is capable of locating more and better roots than DR-JADE when a NES has an infinite number of solutions.

C. Influence of Different Reduction Schemes and EAs

In this subsection, to determine the reduction scheme that is best for solving NESs by EAs and the types of EAs that are more suitable for the integration method, we study the effect of different reduction schemes and EAs on our proposed method.

TABLE VII EXPERIMENTAL COMPARISON RESULTS OF THE NESS WITH DIFFERENT NUMBERS OF REDUCED VARIABLES

TABLE VIII EXPERIMENTAL COMPARISON RESULTS OF THE NESS WITH DIFFERENT REDUCED VARIABLES

1) Influence of Different Reduction Schemes: A NES may admit more than one reduction scheme under the VRS. To study the influence of different reduction schemes, the following experiments are conducted.

a) Influence of different numbers of reduced variables

We choose three NESs (i.e., E15, E22, and E28) from Section IV-B and set the number of reduced variables from 0 to the maximum. Then, we solve these NESs by VR-DR-JADE. Except for the number of reduced variables, all parameter settings are consistent with the experiment in Section IV-B. Table VII reports the results of the three NESs, including the mean and standard deviation of QR, RR, and SR over 30 runs. Besides, Table VII also provides the number of roots found for the NESs with different numbers of reduced variables at the 1st, 5th, 15th, and 30th iteration over a typical run.

From Table VII, we observe that as the number of reduced variables increases, the mean and standard deviation of the QR-indicator values decrease, and the RR-indicator and SR-indicator values increase or remain unchanged. It is worth noting that, with the assistance of the VRS, the three NESs shown gain better results for QR, RR, and SR. This reveals that, as the number of reduced variables increases, VR-DR-JADE tends to find more roots with higher quality. Moreover, according to the number of located roots at the different iterations in Table VII, the NESs with more reduced variables yield more located roots at the same iteration, which suggests that the EA converges faster when solving NESs with more reduced variables. Therefore, we conclude that, in general, an EA solving a NES with more reduced variables achieves better performance.

b) Influence of different reduced variables

To study the effect of selecting different reduced variables on the presented method, we change the reduced variables while keeping the number of reduced variables unchanged. Three NESs (E2, E10, and E31) and VR-DR-JADE are used in the experiment to analyze the influence of different reduced variables. The mean and standard deviation of QR, RR, and SR over 30 runs, and the number of roots found for the NESs with different reduced variables at the 1st, 5th, 15th, and 30th iteration over a typical run, are displayed in Table VIII.

As exhibited in Table VIII, all the NESs with the VRS achieve better results in terms of QR, RR, SR, and convergence than the original NESs. Moreover, the choice of reduced variables has a significant effect on the QR-indicator values but only a slight influence on the RR, SR, and convergence rate. The different results for reduction schemes with different reduced variables can be attributed to the linkages between the reduced variables and the other variables or the objective functions.

Based on the above experimental results, we can conclude that both the number of reduced variables and the choice of reduced variables impact the performance of the integration method of the VRS and EA, where the number of reduced variables has a more significant effect than the choice of reduced variables. An EA solving a NES with more reduced variables can obtain better performance. In the future, we plan to further study how to automatically obtain the maximum number of reduced variables for a NES and how to select the most suitable reduction scheme considering the linkages between reduced variables and other variables or objective functions.

2) Influence of Different EAs: To study the impact of different EAs on the performance of the integration method, and to determine whether the VRS or the EA influences the performance of the presented method, three different appealing EAs, i.e., DR-JADE [20], DR-CLPSO [20], and MONES [23], are selected, where DR-JADE, DR-CLPSO, and MONES adopt JADE, CLPSO (comprehensive learning PSO) [41], and NSGA-II as the search engines, respectively. NESs E1–E42 from Section IV-B are solved by the above three EAs with and without the VRS. Tables S-3 and S-4 in the supplementary file show the detailed results with regard to the RR-indicator and SR-indicator values over 30 independent runs. Besides, Tables IX and X report the Friedman test rankings and the Wilcoxon test at a significance level α = 0.05 of the above six methods for both RR and SR, respectively.

TABLE IX AVERAGE RANKINGS OF THE EAS WITH AND WITHOUT VRS OBTAINED BY THE FRIEDMAN TEST FOR BOTH RR AND SR

From the results in Tables IX and X, we notice that:

a) The EAs with the VRS gain higher ranks, provide higher R+ values than R− values, and yield p values less than 0.05 compared with the corresponding EAs without the VRS. The results indicate that the VRS has a significant influence on our integration method, and an EA with the VRS tends to obtain better results than its counterpart overall.

b) VR-DR-JADE and VR-DR-CLPSO rank 1st and 2nd in the Friedman test for RR and SR, respectively. DR-JADE ranks 3rd, followed by VR-MONES. Moreover, since the p value between VR-DR-JADE and VR-DR-CLPSO, and between VR-DR-CLPSO and VR-MONES, is greater than 0.05 according to the Wilcoxon test in Table X, the differences between these two pairs of methods are not obvious. However, VR-DR-JADE and VR-MONES show an obvious difference in terms of RR and SR. This reveals that the EAs also influence the integration methods.

Therefore, the VRS can significantly improve the performance of the original EAs, and the optimization capability of an EA itself is also crucial for solving NESs effectively. Generally, a better EA with the VRS can provide even better performance.

D. Experimental Conclusions

According to the experimental results and discussions above, we can safely draw some conclusions:

1) The integration of the VRS enables MONES and DR-JADE not only to locate more high-quality roots but also to significantly improve the search efficiency of the original EAs on the test NESs with various characteristics.

2) Both the number of reduced variables and the choice of reduced variables for a NES can impact the performance of the integration method, where the number of reduced variables generally has a more significant effect than the choice of reduced variables. The more variables are reduced, the better the performance of an EA will be.

3) Both the VRS and the optimization capability of the EA itself are crucial for solving NESs effectively. Generally, integrating a better EA with the VRS provides better performance.

Therefore, the VRS is an effective method for enhancing the performance of an EA when solving NESs. Integrating a reduction scheme with more reduced variables and an EA with better performance into our framework enables the integration method to achieve better performance. Furthermore, many complicated and high-dimensional NESs have emerged in the real world. Hence, from the perspective of real-world applications, before using EAs to solve a NES, it is valuable and effective to check whether the VRS is applicable.

V. CONCLUSIONS AND FUTURE WORK

This paper proposes to incorporate the VRS into EAs to solve NESs. The VRS reduces the number of variables and equations of a NES, accordingly shrinks the decision space, reduces the complexity of the NES, and thus improves the optimization efficiency of the original EA for solving NESs. The VRS is specifically integrated with two state-of-the-art methods (MONES and DR-JADE) in this work, respectively. The experimental results on two test suites with 7 NESs and 46 NESs, respectively, verify the effectiveness of the VRS in solving NESs. According to the framework of the combination of the VRS and EAs, the VRS can theoretically be integrated with any EA. The experimental results on different reduction schemes and EAs demonstrate that, in general, integrating a reduction scheme with more reduced variables and an EA with better performance into the framework of the VRS and EA enables better performance of the proposed method.

TABLE X RESULTS OF THE EAS WITH AND WITHOUT VRS OBTAINED BY THE WILCOXON TEST FOR BOTH RR AND SR

It is noted that there are still shortcomings in this work. On the one hand, the VRS cannot be applied to all NESs. On the other hand, for several NESs, EAs may even obtain worse results after integrating with the VRS. For NESs that cannot be explicitly reduced, an approximate variable reduction strategy may be useful to resolve this problem. Moreover, we can develop more efficient transformation techniques and EAs to combine with the VRS to solve NESs, e.g., ensemble algorithms [42] and the objective space partition strategy [43]. It is worth noting that the theory of the VRS deserves further investigation as well, such as how to realize maximum variable reduction, how to measure the relationships between reduced variables and other variables or objective functions, and what kinds of optimization problems can be reduced. In summary, extending the VRS and designing more efficient EAs and transformation techniques to integrate with the VRS deserve further investigation in the future.