
Nonlinear constrained optimization using the flexible tolerance method hybridized with different unconstrained methods

2017-05-28

Alice Medeiros Lima*, Antonio José Gonçalves Cruz, Wu Hong Kwong

1 Chemical Engineering Graduate Program, Federal University of São Carlos, Rodovia Washington Luiz, km 235 — SP 310, São Carlos, SP 13565-905, Brazil

2 Department of Chemical Engineering, Federal University of São Carlos, Rodovia Washington Luiz, km 235 — SP 310, São Carlos, SP 13565-905, Brazil

1. Introduction

Constrained nonlinear optimization problems can be solved using a wide range of optimization methods. These can be broadly divided into deterministic and stochastic methods. In deterministic methods, every step can be predicted from knowledge of the starting point, so the same answer is always obtained when beginning from the same starting point. In stochastic methods, by contrast, several choices are made based on random numbers drawn at the time of code execution. Since these numbers differ at every execution, a stochastic method does not perform the same sequence of operations in two successive runs. Starting from the same point, each code execution will follow its own path and possibly lead to a different final answer.

The search for global optimality in nonlinear optimization problems is a challenge for any optimization method. For linear problems, finding the global optimum can, in principle, be guaranteed. For nonlinear problems, however, there is no guarantee of finding the global optimum, whether deterministic or stochastic methods are used [1].

The flexible tolerance method (FTM), the main method used in this work, is a deterministic method that does not use derivative information to perform the search. Although the FTM was first proposed in the 1960s by Paviani and Himmelblau [2], the method has been little explored and there are few applications reported in the literature. However, the use of decreasing feasibility tolerances has been employed in many modern methods for constrained optimization, as reported by Martínez and Sobral [3].

Fenton et al. [4] proposed a modification of the FTM and applied the method to the solution of NLP optimization problems. In the modification, a random search technique complements the flexible tolerance method by generating a data file containing feasible solutions of the optimization problem in question. According to the authors, the proposed modification improved the efficiency and effectiveness of the flexible tolerance method.

Zhang and Ren [5] applied the FTM to multicomponent spectrophotometric analyses. The FTM was found to be an efficient procedure for the optimization of multidimensional problems, especially in the presence of interactions between variables and when the problem was ill-conditioned.

Constantinescu [6] used the FTM to determine smooth and time-optimal path-constrained trajectories for robotic manipulators. The method was chosen for two reasons: the derivatives of the constraints and the cost function were not available, and the solution sought was expected to lie on the boundary of the admissible region (in which case it is desirable to use information about points on both sides of the limiting surface in order to converge to it). The FTM proved efficient in solving the problem of trajectories with time optimality and smoothness.

Chen and Yin [7] used the FTM to solve optimization problems for determining hydrologic parameters in the root zone: water uptake rate, spatial root distribution, infiltration rate, and evaporation.

Shang et al. [8] used the FTM with an adaptive genetic algorithm (AGA) to solve nonlinear, multimodal and multi-constraint optimization problems. The FTM, serving as one of the AGA operators, used a flexible tolerance criterion for near-feasible points to minimize the constraint violation of an objective function. Complex functions were evaluated, and the authors concluded that the hybrid method was suitable for the resolution of real-world problems.

Omowunmi and Susu [9] used the FTM to estimate the kinetic parameters of an autocatalytic reaction involving the pyrolysis of n-eicosane. The method effectively minimized the sum of squared residuals between the experimental and predicted rates of reaction.

In previous work, Lima et al. [10] used the FTM in the synthesis and analysis of processes with mass integration. The FTM was compared with two indirect optimization methods (GRG and SQP), and good results were obtained in solving a classic case of mass integration. It was found that when the FTM was applied to engineering problems, the inner search became slow and difficult as the dimensions of the problem grew. Another difficulty concerned variations in the ranges of the variables, which could make it hard for the FTM to achieve convergence. In order to resolve these issues, the present work proposes the transformation of variables by scaling, as well as hybridization with other methods. Selection of the methods used for hybridization with the FTM was based on: (i) derivative-free operation, (ii) ease of implementation and (iii) good performance reported in the literature. The methods proposed in this work were based on the flexible tolerance method (FTM), and were (i) FTM with scaling of variables (FTMS), (ii) FTMS hybridized with BFGS (FTMS-BFGS), (iii) FTMS hybridized with modified Powell's method (FTMS-Powell) and (iv) FTMS hybridized with PSO (FTMS-PSO).

2. Methodology

The general formulation of a nonlinear programming problem can be stated by Eqs. (1)–(3) as follows:

minimize f(x), x ∈ ℜn (1)

subject to hi(x) = 0, i = 1, …, m (2)

gi(x) ≥ 0, i = m + 1, …, p (3)

Fig. 1. FTM algorithm flowchart for the outer search that minimizes the objective function f(x). All vectors x are assumed to represent x(k), unless noted otherwise. Adapted and modified from Naish [11]. The gray boxes indicate the inner search, which can be performed using different unconstrained minimization algorithms (in this work: Nelder–Mead, Powell and BFGS). The blue box indicates the initialization step, which is replaced by the PSO method in this work.

where f(x) is the objective function, hi(x) are the equality constraints and gi(x) the inequality constraints.

The flexible tolerance method (FTM) is a direct search method and was proposed by Paviani and Himmelblau [2]. The FTM algorithm improves the value of the objective function using the information provided by feasible points, as well as some non-feasible points, called near-feasible points. The feasibility requirement becomes more restrictive as the search moves towards the problem solution, until in the limit only feasible vectors x of Eqs. (1)–(3) are accepted. As a result of this basic strategy, the optimization problem of Eqs. (1)–(3) can be rewritten as indicated in Eqs. (4) and (5):

where Φ(k) is the flexible tolerance criterion for feasibility in the k-th stage of the search, and T(x) is a positive functional of all the equality and/or inequality constraints of the problem, used as a measure of the extent of constraint violation. The functional T(x) is described by Eq. (6), where U is the Heaviside operator according to Eq. (7).
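The functional T(x) can be sketched in Python (the language used in this work). The form below assumes the usual convention of Eqs. (1)–(3), hi(x) = 0 and gi(x) ≥ 0, with the Heaviside operator U switching an inequality term on only when the constraint is violated; it is an illustration, not the authors' exact code.

```python
import math

def constraint_violation(x, eqs=(), ineqs=()):
    """T(x): scalar measure of how far x is from feasibility.

    eqs:   callables h_i, with h_i(x) = 0 required
    ineqs: callables g_i, with g_i(x) >= 0 required
    """
    total = sum(h(x) ** 2 for h in eqs)
    for g in ineqs:
        v = g(x)
        if v < 0.0:          # U = 1 only for a violated inequality
            total += v ** 2
    return math.sqrt(total)
```

A feasible point yields T(x) = 0, so the flexible tolerance test Φ(k) ≥ T(x) accepts it at every stage of the search.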

The tolerance criterion Φ(k) behaves as a positive, decreasing function of the search stage k, and the vector x(k) is classified as follows:

The tolerance for near-feasible solutions is decreased until, in the limit, only feasible solutions are allowed, according to Eq. (9), where x* is the solution vector, within tolerance ε.

The FTM performs an outer search that minimizes the objective function f(x) and an inner search that minimizes the value of T(x). The outer search (Fig. 1) is a variation of the Nelder–Mead method: when a new vertex is found during the search, its feasibility is evaluated. If the vertex is near-feasible, the search continues. If the outer search selects a non-feasible vertex, an inner search (indicated by gray boxes in Fig. 1) is performed to convert this non-feasible vertex into a near-feasible or feasible vertex.

The inner search that minimizes the value of T(x) can be performed using any multi-variable search technique. In the original implementation [2], the inner search uses the Nelder–Mead method (or Flexible Polyhedron Method, FPM), Algorithm 1. The Nelder–Mead method minimizes the function T(x) of n independent variables using n + 1 vertices of a flexible polyhedron in ℜn. This standard formulation is designated FTM.
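The mechanism of the inner search can be illustrated by applying an off-the-shelf Nelder–Mead minimizer directly to a violation functional. The sketch below uses scipy.optimize.minimize on a toy T(x), the squared violation of a single hypothetical inequality g(x) = x0 + x1 − 1 ≥ 0; it only shows the idea, not the paper's own implementation.

```python
from scipy.optimize import minimize

# Toy violation functional: zero wherever g(x) = x0 + x1 - 1 >= 0 holds,
# positive (squared violation) otherwise
def T(x):
    v = x[0] + x[1] - 1.0
    return min(v, 0.0) ** 2

# Inner search: drive a non-feasible vertex back to (near-)feasibility
res = minimize(T, [-2.0, -2.0], method="Nelder-Mead",
               options={"xatol": 1e-9, "fatol": 1e-9})
```

Starting from the non-feasible point (−2, −2), the simplex migrates until T reaches (numerically) zero, i.e. the vertex has become feasible.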

In addition to the standard FTM method and the FTM method with scaling of variables (FTMS), additional variations used in this work employed other methods to perform the inner search. These methods were FTMS hybridized with the BFGS method (FTMS-BFGS), and FTMS hybridized with the Powell method (FTMS-Powell). FTMS was also hybridized with the PSO method (FTMS-PSO), where the PSO method was used to perform the initialization. These methods are described in the next section.

Algorithm 1. Pseudocode of the Nelder–Mead method.

The codes used in this work were implemented in Python™ and the optimization calculations were performed using the Eclipse© IDE, run under a Linux-like operating system installed on a 2 GHz Pentium® Dual-Core computer. The FTM parameters used were α = 1.0, β = 0.5, γ = 2.0 and δ = 0.5, as recommended previously [12]. The tolerance adopted was ε = 10−5, and the size of the initial polyhedron was t = 0.4.

The benchmark chosen for analysis of the performance of the optimization algorithms was the set of functions presented in CEC'06 [13], which covers a wide range of nonlinear constrained optimization problems, including real problems and some generated problems, with different constraint types (physical, time, geometric, etc.). Two problems of chemical engineering interest, namely heat exchanger design and chemical equilibrium, are detailed in the results.

2.1. Proposed methods

2.1.1. FTMS

The standard FTM using scaled variables is designated FTMS. The scaled variable (yj) is defined according to Eq. (10), as reported by Gill et al. [14], where Uj and Lj are the upper and lower bounds of the variable (xj). This transformation guarantees that −1.0 ≤ yj ≤ +1.0, ∀j, regardless of the value of xj within the interval [Lj, Uj].
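Eq. (10) itself is not reproduced here, but the transformation described is the standard affine scaling of Gill et al., yj = (2xj − (Uj + Lj))/(Uj − Lj). A minimal sketch, assuming that form:

```python
def scale(x, L, U):
    """Map each x_j in [L_j, U_j] onto y_j in [-1, 1]."""
    return [(2.0 * xj - (Uj + Lj)) / (Uj - Lj)
            for xj, Lj, Uj in zip(x, L, U)]

def unscale(y, L, U):
    """Inverse transform: recover x_j from the scaled variable y_j."""
    return [0.5 * ((Uj - Lj) * yj + (Uj + Lj))
            for yj, Lj, Uj in zip(y, L, U)]
```

The search then runs entirely in y-space, so all variables have the same order of magnitude regardless of their physical units, while T(x) and f(x) are evaluated on the unscaled vector.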

2.1.2. FTMS-BFGS

The BFGS (Broyden, Fletcher, Goldfarb and Shanno) method employs a quasi-Newton procedure, as reported by Nocedal and Wright [15]. This method can use the gradient of the objective function; however, since the aim in this work was to use a derivative-free method, the gradient was estimated using first differences. This method employed the SciPy library [16] (a Python-based ecosystem of open-source software for mathematics, science, and engineering), especially the scipy.optimize package, which provides several commonly used optimization algorithms.

The BFGS method from the SciPy library was implemented based on Nocedal and Wright [15], as shown in Algorithm 2. The Jacobian of the objective function of the FTMS inner search (T(x)) was calculated using forward finite differences, as indicated in Eq. (11), where τ is a small positive scalar and ei is the i-th unit vector, for i = 1, …, n.
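The forward-difference formula of Eq. (11), ∂f/∂xi ≈ [f(x + τei) − f(x)]/τ, can be sketched as follows (the step size τ = 10−7 is an illustrative choice, not necessarily the value used by the authors):

```python
import numpy as np

def forward_diff_grad(f, x, tau=1e-7):
    """Approximate the gradient of f at x by forward differences:
    one extra evaluation f(x + tau * e_i) per coordinate i."""
    x = np.asarray(x, dtype=float)
    f0 = f(x)
    grad = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = tau                      # step tau * e_i along coordinate i
        grad[i] = (f(x + e) - f0) / tau
    return grad
```

This costs n + 1 function evaluations per gradient, which is the price of keeping BFGS derivative-free.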

Algorithm 2. Pseudocode of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method.

The BFGS method was employed each time the FTMS called for minimization of the inner search (T(x)), indicated by the gray boxes in the algorithm flowchart of Fig. 1.

2.1.3. FTMS-Powell

The method employed was a modification of Powell's method [17], a conjugate direction method that is included in the scipy.optimize package of SciPy. It performs sequential one-dimensional minimizations along each vector of the direction set, which is updated at each iteration of the main minimization loop. The function need not be differentiable, and no derivatives are taken. The pseudocode is shown in Algorithm 3.
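Calling Powell's method through scipy.optimize is a one-liner; the sketch below applies it to a toy inner-search objective (the squared violation of a single hypothetical equality constraint), again as an illustration rather than the authors' code.

```python
from scipy.optimize import minimize

# Illustrative inner-search objective: squared violation of one
# hypothetical equality h(x) = x0**2 + x1 - 2 = 0
def T(x):
    return (x[0] ** 2 + x[1] - 2.0) ** 2

# Powell: sequential 1-D minimizations along an evolving direction set;
# no derivatives of T are ever evaluated
res = minimize(T, [3.0, 3.0], method="Powell")
```

Because each cycle only needs bracketing line searches, the method suits objectives like T(x), which may be non-smooth where the Heaviside terms switch on and off.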

Powell's method was employed each time the FTMS called for minimization of the inner search (T(x)), indicated by the gray boxes in the algorithm flowchart of Fig. 1.

2.1.4. FTMS-PSO

The PSO was hybridized with the FTMS method in a different way. In the codes described previously (FTMS-BFGS and FTMS-Powell), the unconstrained optimization methods were used to perform the inner search. In the case of FTMS-PSO, the PSO method was used during the initialization of the FTMS, in order to find a feasible starting point. The PSO code in its standard form is available in the DEAP (Distributed Evolutionary Algorithms in Python) library, which includes several evolutionary optimization algorithms in Python [18]. The PSO from the DEAP library was the basis for building the PSO algorithm used in this work.

The PSO method employed here used the inertia weight (ω) calculated as indicated in Eq. (12) [19]. After some initial tests, the random term used for the inertia weight calculation was set between 0.1 and 1.2.

Algorithm 3. Pseudocode of Powell's modified method.

The other parameters of the PSO algorithm were tuned, for all the problems solved, to a cognition learning rate (C1) of 2.0 and a social learning rate (C2) of 2.0. The number of generations (GEN) was set at 100, and the population size (N) was set at 20, as suggested previously by Hu and Eberhart [19]. The reason for the small population size and low number of generations was that this significantly decreased the computing time. Moreover, the objective of the PSO method was only to find a feasible region that minimized the constraint violations before starting the FTMS, which would then lead to the complete solution of the problem. The pseudocode is indicated in Algorithm 4.
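The initialization step described above can be sketched in plain Python (rather than through DEAP) as a standard PSO velocity-and-position update minimizing the constraint violation T(x). The randomized inertia weight in [0.1, 1.2] follows the text; Eq. (12) itself is not reproduced, and the function below is an assumption-laden illustration, not the authors' code.

```python
import random

def pso_feasible_start(T, bounds, n=20, gen=100, c1=2.0, c2=2.0, seed=0):
    """Minimize the violation T(x) to obtain a (near-)feasible start
    point for FTMS. bounds is a list of (lo, hi) pairs per variable."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]              # personal bests
    gbest = min(pbest, key=T)[:]             # global best
    for _ in range(gen):
        for i in range(n):
            w = rng.uniform(0.1, 1.2)        # randomized inertia weight
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                # move, then clamp back into the bound constraints
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            if T(pos[i]) < T(pbest[i]):
                pbest[i] = pos[i][:]
                if T(pbest[i]) < T(gbest):
                    gbest = pbest[i][:]
    return gbest
```

In the FTMS-PSO scheme, the loop would stop as soon as T(gbest) drops below the 10−3 threshold mentioned below; here it simply runs the full GEN generations.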

Algorithm 4. Pseudocode of the Particle Swarm Optimization (PSO) method.

For each particle generated, the penalty function P(x), Eq. (13), is evaluated, where Υ is the penalty parameter.

The initialization with the PSO method described previously was performed until the best particle was found, with the value of T(x) (the constraint violation) being less than an arbitrary value (set at 10−3). The penalty parameter acts as indicated in Algorithm 5.

Algorithm 5. Pseudocode of FTMS-PSO.

When the best point satisfied this criterion, the FTMS was started, as indicated previously, without any modification. This code was run 20 times for each problem. The objective of using PSO hybridized with FTMS was to allow the optimization to explore different routes through the solution space, which might help identify alternative solutions.

Fig. 2. Objective function (f(x)), equality constraint (h1(x)), inequality constraint (g1(x)) and optimum point (x*) for the test problem of Eq. (14).

3. Results

A comparison of the performance of the optimization methods proposed in this work was conducted using a simple problem (a convex programming problem) with two variables, one equality constraint and one inequality constraint, as described in Eq. (14) [12], with the purpose of allowing visualization of the search direction followed by each proposed method until the optimum is found. Fig. 2 shows the objective function, the equality and inequality constraints, and the optimum point.

Table 1 shows the number of objective function evaluations, the number of iterations and the processing time (using the machine described previously) for the five optimization codes implemented in this work. Fig. 3 shows the trajectory of each method from the initial point until convergence to the optimum point. The initial non-feasible point was x(0) = [2 2]T, with objective function value f(x(0)) = 1.0, for the FTM, FTMS, FTMS-BFGS and FTMS-Powell methods. The FTMS-PSO method generated the initial point randomly, and the FTMS started at the best particle found after 100 generations, that is, x(0) = [0.841 0.903]T, with objective function value f(x(0)) = 1.352. The optimum for this problem was x* = [0.823 0.911]T, where the objective function assumed the value f(x*) = 1.393.

Table 1 Results summary for the test problem described in Eq. (14)

As can be observed in Table 1, all the algorithms tested were able to find the optimum point for this simple problem. Comparing the number of objective function evaluations and iterations, the original FTM and FTMS-BFGS methods presented the worst performance, with the largest values. The best performance was presented by the FTMS and FTMS-PSO methods. The FTMS showed the shortest processing time and low numbers of iterations and objective function evaluations. For the FTMS-PSO method, the processing time was longer, due to the time consumed by the initialization step with the PSO method; however, this method presented lower numbers of iterations and objective function evaluations. The trajectories followed by each method (Fig. 3) indicate that FTMS and FTMS-PSO took a more direct path than the other methods. FTMS-BFGS showed a search path in which the objective function reached high values (distant from the optimum point, f(x) ≃ 7.165), hence requiring a greater number of iterations and objective function evaluations to reach the optimal point.

3.1. Benchmark problems

Tables 2 and 3 show the number of variables, the type of objective function, the number of equality and inequality constraints, the upper and lower boundary constraints, and brief descriptions of the benchmark problems used in this work. The results were divided into two categories: (i) problems with small dimensions (2 ≤ n ≤ 6), and (ii) problems with larger dimensions (n > 6). As can be observed in the problem descriptions, these benchmark problems cover a wide range of applications, including heat exchanger design (g10), chemical equilibrium (g14), maximization of the profit of a hypothetical wood-pulp plant (g16), and minimization of the costs of blending multicomponent mixtures (g20), among other engineering problems. Some generated problems are important to represent the mathematical formulation and numerical issues commonly found in real constrained optimization problems. Two problems of the benchmark, g10 and g14, were selected for detailed analysis, including their mathematical formulation, and are described in the next section.

Fig. 3. Trajectories from the non-feasible initial vector obtained using the different optimization methods for the test problem (Eq. (14)). The path followed by FTMS-PSO is shown separately due to the proximity of its starting point to the optimal point.

Table 2 Description of benchmark problems. EC: equality constraints; IC: inequality constraints; ULBC: upper and lower bound constraints

Table 3 Description of benchmark problems. EC: equality constraints; IC: inequality constraints; ULBC: upper and lower bound constraints

The results obtained after applying the optimization methods described previously (see Section 2) to the set of benchmark problems (from [13]) are presented in Tables 4, 5 and 6.

The success rates (defined in this work as the number of problems solved divided by the number of problems proposed for resolution, g01 to g20) were 80% for FTM, 100% for FTMS, 75% for FTMS-BFGS, 95% for FTMS-Powell and 85% for FTMS-PSO.

Tables 4 and 5 show the optimum values found by the different optimization methods. For the first category of problems (2 ≤ n ≤ 6), it can be seen that FTMS-PSO and FTM presented the worst performance among the methods, with some unsolved problems. The best method was FTMS, which solved all the problems in good agreement with the known optima. The other methods reached convergence for almost all the problems in this category, and were able to reach (or arrive close to) the known optima.

Agreement with the known result for problem g12 was somewhat poor. This problem presents multiple disjointed regions and represents a challenge for any optimization method. The optimum found by the methods proposed here for problem g12 stays at f(x*) = −0.85, while the known optimum is f(x*) = −1.00. Even using FTMS-PSO, which was run 20 times so that the search started from 20 different initial points, the result converged to (or near to) the same point reported here.

Problems g05 and g17 have trigonometric functions in their equality constraints. This type of function appeared to destabilize the swarm of particles, and the PSO was not able to reach feasible starting points for either problem.

For the second category of problems (n > 6), FTMS-BFGS showed the worst performance, with three unsolved problems, while FTMS and FTMS-PSO presented the best performance. The FTMS, FTMS-Powell and FTMS-PSO methods were able to reach convergence, with the solutions found by FTMS and FTMS-PSO providing better fits to the known optima.

Problem g19 has a nonlinear objective function with 15 independent variables and 5 cubic constraints; the 15 variables also have lower bounds. The FTM, FTMS, FTMS-BFGS and FTMS-Powell converged very slowly to a local optimal solution. FTMS-PSO showed good performance and converged more rapidly to the known global optimum, in fewer iterations and objective function evaluations than the other methods. This problem clearly presents a large number of local optima, given the various local optimal points found by the codes tested. The problem is a challenge for any direct optimization method (since such methods are more prone to become stuck in a local optimum), and the hybridization of FTMS with PSO proved efficient in overcoming this difficulty.

Problem g16 presents a complex objective function with a large number of nonlinear constraints. For this problem, only the FTMS was able to find the solution; the other methods did not converge to a feasible solution. This same problem is reported in the literature [12], where even an indirect method (GRG) failed to solve it, probably due to the difficulty of handling the first and second derivatives, which are liable to human error, despite the large time required for problem preparation. It is important to highlight that FTMS was able to solve this problem in few objective function evaluations and iterations, and moreover the preparation time was quite small.

The number of objective function evaluations and the number of iterations for each method are indicated in Table 6.

Analysis of the first group of problems showed that, for problems with the same number of variables, the number of evaluations could vary widely, probably due to the topology of the objective and constraint functions. Furthermore, even for a small number of variables, the number of objective function evaluations could reach high values, as in the case of problem g17 solved by FTMS. This problem (an electrical network optimization), which has a piecewise-defined nonlinear objective function and nonlinear equality constraints, was solved by FTMS after 10295 objective function evaluations and 71 iterations. For this same problem, the FTMS-Powell method reached the optimum after fewer objective function evaluations (431) but with a larger number of iterations (133).

The FTMS, FTMS-Powell and FTMS-PSO methods presented the best performance for the second group of problems, all of which were solved, with FTMS and FTMS-PSO providing better agreement with the known optima. The other methods (FTM and FTMS-BFGS) presented similar performance in terms of the numbers of iterations and objective function evaluations.

Table 4 Optimum known and found by the algorithms for the benchmark problems

Table 5 Optimum known and found by the algorithms for the benchmark problems

Table 6 Number of objective function evaluations and iterations (in [ ])

In the case of problem g20, which is complex and represents a challenge for any nonlinear programming method, the results found with the codes tested in this work were in agreement with those reported elsewhere [12].

FTMS-PSO showed good performance compared with the other methods. This code can be useful when the optimization problem has an unknown starting point, or when the known starting point leads to an unfeasible solution. Numerical experiments also indicated improvements in the capacity of a direct method (such as FTMS) to escape from local optima when hybridized with PSO, which generates different starting points for the deterministic search method.

3.2. Detailed case studies

3.2.1. g10 — heat exchanger design

This problem was first presented by Avriel and Williams [20], and has since been employed by many authors in optimization tests, such as Dembo [21], Teles et al. [22], and Andrei [23], among others. This case study deals with the minimization of the total area of a network of heat exchangers, with non-convexities in the form of bilinear terms from the energy balances, which bring great difficulty to optimization methods. As indicated in Fig. 4, a fluid with flow rate W and specific heat Cp (WCp = 105 Btu·h−1·°F−1; 1 Btu = 1.055 kJ; t/°C = 5(t/°F − 32)/9) needs to be heated from temperature T0 (100 °F) to T3 (500 °F) using three hot streams with different inlet temperatures (TH1,in = 300 °F, TH2,in = 400 °F and TH3,in = 600 °F); the overall heat transfer coefficients of the exchangers, U1 = 120, U2 = 80 and U3 = 40 Btu·h−1·ft−2·°F−1 (1 ft = 0.3048 m), are constants.

The mathematical formulation can be written as follows (Avriel and Williams [20], Liang et al. [13] and Andrei [23]):

The FTM and the other optimization methods proposed in this work (FTMS, FTMS-BFGS, FTMS-Powell and FTMS-PSO) were applied to obtain the solution of the problem. The initial (non-feasible) point x0 = [A1, A2, A3, T1, T2, TH1,out, TH2,out, TH3,out] = [5000.0, 5000.0, 5000.0, 200.0, 350.0, 150.0, 225.0, 425.0] was employed by FTM, FTMS, FTMS-BFGS and FTMS-Powell. This point was not used by FTMS-PSO, since its starting point was found by the stochastic PSO method, as described previously.

As indicated in Tables 5 and 6, the original FTM method was not able to solve the problem, nor was the scaling method hybridized with BFGS (FTMS-BFGS). The FTM method with scaled variables (FTMS) and its hybridizations with Powell (FTMS-Powell) and PSO (FTMS-PSO) were able to find an optimum. The solution, with the areas and temperatures found by each method, is shown in Fig. 5. The known optimum from the literature is also indicated.

From Fig. 5 and Table 5, it can be observed that FTMS and FTMS-PSO reached the best values, in agreement with the data reported in the literature. The total area of the heat exchangers reported in the literature [13] was 7049.331 ft2, while FTMS found 7050.925 ft2, FTMS-Powell 9623.854 ft2 and FTMS-PSO (best run) 7049.308 ft2. The best FTMS-PSO value was slightly better than the optimum reported by Liang et al. [13]. Although FTMS-Powell found an optimum, the value is much larger than the known optimum and is probably associated with a local optimum. The distribution of areas among the heat exchangers presented some small differences, as can be noted in Fig. 5.

Analyzing the computational effort (Table 6) of the best methods, FTMS used 734 objective function evaluations and 358 iterations in 26.14 s, while the best FTMS-PSO run employed 2035 objective function evaluations and 1203 iterations in 64.15 s. This same problem was solved by Teles et al. [22] using GAMS 23.2 with CPLEX 12.1 as the MILP solver. The authors employed a multi-parametric disaggregation technique for polynomial programming problems, reported an objective function value of 7049.248022 ft2 using 6 significant digits in the parametrization, and took 13 min to solve the problem.

The results found in this work employing simple optimization methods, FTMS and FTMS-PSO, demonstrate the efficiency of the proposed methods in dealing with difficult problems, since the original FTM method did not solve the problem. Moreover, the methods also showed good performance, with small processing times.

3.2.2. g14 — chemical equilibrium

Fig. 4. Three-stage heat exchanger system (t/°C = 5(t/°F − 32)/9). Adapted from Avriel and Williams [20].

Hydrazine and its derivatives have been used as rocket propellants [28,29], and knowledge of the thermodynamic equilibrium of the combustion products is an important issue, since a well-designed combustion chamber operates very nearly at thermodynamic equilibrium [30]. As reported by Lima et al. [31], chemical equilibrium in reactions is fundamental in reactor design; it indicates the most favorable conditions for the occurrence of a certain reaction and can predict the composition at the equilibrium state.

The problem formulation is the minimization of the total free energy of a mixture with 10 components (indicated in Table 7), subject to mass balance relationships for the elements. The mathematical formulation can be written as follows (White et al. [24], Bracken and McCormick [25], Liang et al. [13]):

Fig. 5. Optimization solution of the three-stage heat exchanger system with the different methods (1 ft = 0.3048 m, t/°C = 5(t/°F − 32)/9).

Table 7 Data on 1/2 N2O4 + 1/2 O2 at 3500 K and 750 psi (1 psi = 6894.76 Pa). Source: Bracken and McCormick [25].

where xi is the number of moles of the i-th of the 10 compounds present in the equilibrium mixture, ci is given by Eq. (17) [25] with the values indicated in Table 7, where (F0/RT) is the molal standard (Gibbs) free energy function for the i-th compound and P is the total pressure in atmospheres.

This problem has a nonlinear objective function involving a logarithmic function of ten independent variables, subject to three linear equality constraints; the independent variables have zero lower bounds. The optimization results for the mixture composition at equilibrium are indicated in Table 8 for all methods: the original FTM and the proposed FTMS, FTMS-BFGS, FTMS-Powell and FTMS-PSO.
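In the classical free-energy minimization formulation of White et al., each term of the objective has the form xi(ci + ln(xi/Σj xj)); a minimal sketch, assuming that form, with the Table 7 values of ci passed in rather than hard-coded:

```python
import math

def gibbs_free_energy(x, c):
    """Total free-energy objective (g14 style): sum over compounds of
    x_i * (c_i + ln(x_i / total moles)). x and c must have equal length,
    and all x_i must be strictly positive."""
    s = sum(x)                                   # total moles in the mixture
    return sum(xi * (ci + math.log(xi / s)) for xi, ci in zip(x, c))
```

Minimizing this function subject to the three element-balance equalities and the xi ≥ 0 bounds reproduces the constrained problem the methods above were applied to.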

The solution of this problem was achieved by all the methods, as indicated in Table 8, and the objective function value found by each method is very close to the known optimum. The method that presented the best performance was FTMS-PSO, comparing the objective function value and the equilibrium composition with the known values reported in the literature. This method also presented superior performance regarding the number of objective function evaluations (1328) and iterations (786) when compared with the other methods. Except for the compositions found by FTM and FTMS, the methods found a similar equilibrium composition, which corroborates the known global optimum.

4. Conclusions

This paper proposed the use of the flexible tolerance method (FTM) with scaling of variables, hybridized with different unconstrained methods (BFGS and modified Powell) to perform the inner search and with a stochastic method (PSO) to perform the initialization. The FTMS and FTMS-PSO methods proved to be the most effective when applied to the benchmark problems employed. Hybridization with the other unconstrained optimization methods (BFGS and Powell) did not result in any further enhancement of performance.

The flexible tolerance method with scaling (FTMS) was more efficient and computationally more economical than the original flexible tolerance method (FTM). Furthermore, the method required little parameter tuning, and the same parameter configuration was able to solve all the problems in good agreement with the known solutions, with a few exceptions (such as the problem with disjointed regions, g12). The FTMS employed the Nelder–Mead method to perform the inner search, as originally proposed by Paviani and Himmelblau [2], and used scaled variables as proposed in this work.

The flexible tolerance method with scaling (FTMS) hybridized with the stochastic particle swarm method (PSO), FTMS-PSO, presented some advantages when applied to the solution of nonlinear optimization problems. The relative advantages of both methods (FTMS and PSO) could be exploited: the stochastic PSO method provided good initial points for the deterministic FTMS method and helped it escape from local optima. This strategy was useful for corroborating the global optimum in nonlinear optimization problems, for which the global optimum cannot be guaranteed.

Typical chemical engineering problems were detailed, namely heat exchanger design (g10) and chemical equilibrium (g14), and the proposed optimization methods presented superior performance when compared with the original method (FTM) and other methods presented in the literature. For these problems (g10 and g14), FTMS and FTMS-PSO also showed better results than the other proposed methods, in accordance with the results found for the other benchmark problems analyzed.

Table 8 Optimization results for problem g14

Acknowledgments

The authors thank CAPES (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) and CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico, grant number 161464/2013-0) for the financial support.

[1] R. Smith, Chemical Process: Design and Integration, Wiley, 2005.

[2] D. Paviani, D. Himmelblau, Constrained nonlinear optimization by heuristic programming, Oper. Res. 17 (5) (1969) 872–882.

[3] J.M. Martínez, F.N.C. Sobral, Constrained derivative-free optimization on thin domains, J. Glob. Optim. 56 (3) (2013) 1217–1232.

[4] R. Fenton, W. Cleghorn, J. Fu, A modified flexible tolerance method for nonlinear programming, Eng. Optim. 15 (2) (1989) 141–152.

[5] P. Zhang, Y. Ren, Flexible tolerance simplex method and its application to multicomponent spectrophotometric determinations, Anal. Chim. Acta 222 (1) (1989) 323–333.

[6] D. Constantinescu, Smooth and time-optimal trajectory planning for industrial manipulators along specified paths, J. Robot. Syst. 17 (2000) 223–249.

[7] X. Chen, Y. Yin, The flexible tolerance method for estimating hydrologic parameters in the root zone, J. Am. Water Resour. Assoc. 42 (2) (2006) 495–512.

[8] W. Shang, S. Zhao, Y. Shen, A flexible tolerance genetic algorithm for optimal problems with nonlinear equality constraints, Adv. Eng. Inform. 23 (3) (2009) 253–264.

[9] S. Omowunmi, A. Susu, Application of Tikhonov regularization technique to the kinetic data of an autocatalytic reaction: Pyrolysis of n-eicosane, Engineering 3 (2011) 1161–1170.

[10] A. Lima, W. Kwong, A. Cruz, Comparative study and application of the flexible tolerance method in synthesis and analysis of processes with mass integration, J. Chem. Chem. Eng. 7 (3) (2013) 228–238.

[11] M. Naish, Sensing-system planning for the surveillance of moving objects, Ph.D. thesis, Department of Mechanical and Industrial Engineering, University of Toronto, Canada, 2004.

[12] D. Himmelblau, Applied Nonlinear Programming, McGraw-Hill, USA, 1972.

[13] J. Liang, T. Runarsson, E. Mezura-Montes, M. Clerc, P. Suganthan, C. Coello, K. Deb, Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization, Tech. rep., Nanyang Technological University, Singapore, 2006.

[14] P. Gill, W. Murray, M. Wright, Practical Optimization, Academic Press, USA, 1981.

[15] J. Nocedal, S. Wright, Numerical Optimization, Springer Series in Operations Research and Financial Engineering, Springer, Netherlands, 1999.

[16] E. Jones, T. Oliphant, P. Peterson, et al., SciPy: Open source scientific tools for Python, 2001.

[17] M.J.D. Powell, An efficient method for finding the minimum of a function of several variables without calculating derivatives, Comput. J. 7 (2) (1964) 155–162.

[18]F.-A.Fortin,F.-M.De Rainville,M.-A.Gardner,M.Parizeau,C.Gagné,DEAP:Evolutionary algorithms made easy,J.Mach.Learn.Res.13(2012)2171–2175.

[19]X.Hu,R.Eberhart,Solving constrained nonlinear optimization problems with particle swarm optimization,6th World Multiconference on Systemics,Cybernetics and Informatics(SCI 2002)UK 2002,pp.203–206.

[20]M.Avriel,A.C.Williams,An extension of geometric programming with applications in engineering optimization,J.Eng.Math.5(2)(1971)187–194.

[21]R.S.Dembo,A set of geometric programming test problems and their solutions,Math.Program.10(1)(1976)192–213.

[22]J.P.Teles,P.M.Castro,H.A.Matos,Multi-parametric disaggregation technique for global optimization of polynomial programming problems,J.Glob.Optim.55(2)(2013)227–251.

[23]N.Andrei,Applications in Chemical Engineering,Springer,US,Boston,MA,2013 161–222.

[24]W.B.White,S.M.Johnson,G.B.Dantzig,Chemical equilibrium in complex mixtures,J.Chem.Phys.28(5)(1958)751–755.

[25]J.Bracken,G.P.McCormick,Selected Applications of Nonlinear Programming,John Wiley&Sons,Inc.,New York,1968.

[26]J.Blecic,J.Harrington,M.O.Bowman,Tea:A code calculating thermochemical equilibrium abundances,Astrophys.J.Suppl.Ser.225(1)(2016)4.

[27]G.P.Rangaiah,Studies in constrained optimization of chemical process problems,Comput.Chem.Eng.9(4)(1985)395–404.

[28]H.Helvajian,Microengineering Aerospace Systems,Aerospace Press,USA,1999.

[29]S.M.Davis,N.Yilmaz,Advances in hypergolic propellants:Ignition,hydrazine,and hydrogen peroxide research,Adv.Aerosp.Eng.2014(1)(2014)9.

[30]P.Hill,C.Peterson,Mechanics and Thermodynamics of Propulsion,Addison-Wesley,USA,1992.

[31]A.M.Lima,A.J.Assis,C.E.Hori,M.H.E.Reis,A.E.H.Machado,Thermodynamic analysis of ethanol dehydration to ethylene through equilibrium constant method using classic thermodynamics and quantum chemistry,I.RE.CH.E.4(5)(2012)466–473.