An Improved Particle Swarm Optimization Algorithm Using Memory Retention


Abstract. The Particle Swarm Optimization (PSO) algorithm is a population-based stochastic optimization technique in which each particle moves through the search space in search of an optimal solution. During this movement, each particle updates its position and velocity using its best position found so far (pbest) together with the best position found by the swarm (gbest). Conventional PSO suffers from stagnation when particles are attracted towards a local best rather than the global best position. Several techniques restrict particles from falling into local optima (mainly in multimodal functions); however, achieving a high convergence rate while avoiding stagnation is challenging. In this paper, we present two new variants of PSO that can prevent premature convergence in the swarm. The first technique, PSO with Memory Retention (PSOMR), augments conventional PSO with memory by leveraging the concept of the Ebbinghaus forgetting curve; the second technique (MS-PSOMR) combines PSO with subswarms to avoid stagnation. We also show how historical memory, built by storing particles' historically promising pbest and fitness values, improves PSO by using the best values from the nth slot of the memory. Experiments are conducted on several well-known benchmark functions and compared with algorithms of the same category. The results show that both approaches performed better than comparable algorithms for the majority of the measured metrics and discourage premature convergence. We also observe that our approaches find an optimal solution more quickly than conventional PSO.

Keywords. PSO, memory retention, premature convergence, forgetting curve.

1  Introduction

The Particle Swarm Optimization (PSO) algorithm is a population-based stochastic search algorithm for complex non-linear optimization problems [1]. In PSO, each member of the group is called a particle, and the entire population is called the swarm. The algorithm is based on sharing knowledge among individual particles without any prior knowledge of their positions, a behavior inspired by the social behavior of animals. Each particle in the swarm has a velocity. An important characteristic is that all particles perform a collaborative search: each particle is attracted towards the best position found by the whole swarm, termed gbest, and towards its own best position, termed pbest. Particles of a swarm communicate with each other and adjust their positions and velocities dynamically so that each particle moves towards its best position in the swarm. The entire process executes in iterations, and in each iteration all particles move towards better and better positions by evaluating a fitness function f: R^n → R, where R^n is the space of n-dimensional real vectors and R is the set of real numbers. PSO is simple to implement and can exhibit good convergence properties compared to other evolutionary algorithms. Because of these advantages, PSO has become a widely used optimization technique and has been extended and applied in many areas such as function optimization, image processing, and power systems [].

Although PSO is considered a potential solution to many problems, it suffers from premature convergence because it can be trapped in local minima, mostly in the case of multi-dimensional and multimodal problems that contain a large number of local minima [9]. One potential reason for this behavior is that existing PSO algorithms use only the gbest and each particle's current pbest. They fail to make use of particles' historically promising information when a particle moves towards a local optimum. (Throughout this paper, we refer to a particle's pbest and fitness value as its information.) In this case, it may be impossible for a particle to come out of the local optimum area once its pbest falls into the region where the gbest is located. One important strategy to prevent premature convergence is to maintain a good balance between exploration and exploitation []. Exploration is the ability to search more points in the solution space, which improves the optimization process as the algorithm iterates. Exploitation is the ability to search near the current solution. Many PSO variants have been developed to maintain a balance between the two, for example by adjusting PSO parameters [] or designing neighborhood topologies []. Another important strategy to discourage premature convergence is to maintain high diversity among the particles using evolutionary operators such as selection, crossover, and mutation [21].

In this work, we propose two new strategies to overcome the problem of premature convergence and improve the overall optimization process. In both techniques, we maintain a historical memory of the particles, which discourages premature convergence and improves performance by storing promising solutions and reusing them later in the search process. In the first strategy, PSOMR, we use a memory inspired by the human brain: we apply the idea of the Ebbinghaus forgetting curve to the PSO algorithm, determine the information present in a particle at time t by choosing the pbest from the best memory slot (considering a minimization function), and thereby mitigate the premature convergence problem. In the second variant, we use the concept of subswarms and exchange information between them; we store historically promising fitness values in each particle's memory and reuse them to avoid premature convergence.

The rest of the paper is organized as follows. Section 2 presents the basic concepts used in the paper, Section 3 reviews related work, including PSO variants that use the idea of memory, Section 4 describes the proposed algorithms, Section 5 presents the experimental results, and Section 6 concludes the paper.

2 Background and Motivation

2.1  Particle Swarm Optimization

In conventional PSO, each particle in the swarm is considered a candidate solution to the optimization problem in an n-dimensional space. Every particle searches for an optimal point in the search space and possesses both a position and a velocity. At each iteration, the velocity and position of each particle are updated using the following equations:

$v_{ij}^{t+1} = v_{ij}^{t} + c_1 r_{1j}^{t}\,(pBest_{ij}^{t} - x_{ij}^{t}) + c_2 r_{2j}^{t}\,(gBest_{j}^{t} - x_{ij}^{t})$              (1)

$x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}$              (2)

where

$v_{ij}^{t}$ is the velocity of particle i in dimension j at time t,

$x_{ij}^{t}$ is the position of particle i in dimension j at time t,

$pBest_{ij}^{t}$ is the personal best position of particle i in dimension j found up to time t,

$gBest_{j}^{t}$ is the global best position found by the swarm in dimension j up to time t,

$c_1$ and $c_2$ are the acceleration coefficients, which determine the influence of a particle's personal and social experience, and

$r_{1j}^{t}$ and $r_{2j}^{t}$ are random numbers.

The three terms $v_{ij}^{t}$, $c_1 r_{1j}^{t}(pBest_{ij}^{t} - x_{ij}^{t})$, and $c_2 r_{2j}^{t}(gBest_{j}^{t} - x_{ij}^{t})$ represent the inertial, cognitive, and social components respectively, which play a major role in updating the particle's velocity.
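To make the update concrete, the following is a minimal Python sketch (ours, not the authors' code) of Eqs. (1) and (2); the inertia weight multiplying the previous velocity follows the parameter settings given later in Section 5.2, and the array shapes and function name are illustrative assumptions.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.729, c1=1.49445, c2=1.49445):
    """One conventional PSO update for all particles.

    x, v, pbest: arrays of shape (n_particles, n_dims); gbest: shape (n_dims,).
    """
    n, d = x.shape
    r1 = np.random.rand(n, d)   # r1, r2 drawn per particle and per dimension
    r2 = np.random.rand(n, d)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (1), with inertia weight
    x_new = x + v_new                                               # Eq. (2)
    return x_new, v_new
```

After each step, pbest and gbest would be refreshed by evaluating the fitness function on the new positions.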

Premature convergence in PSO

The swarm is said to exhibit premature convergence when the solutions converge to a local position rather than moving towards the global best position. At this stage, progress towards the global best stops, and the search continues to work only on a local minimizer; that is, the particles converge too early and the result is sub-optimal. In premature convergence, all particles come close to each other, so the global best and all personal bests are attracted to one region of the search space. Since particles are repeatedly attracted to this small region, the momentum contributed by their previous velocities drops to zero, which results in stagnation. One effective way to eliminate premature convergence is to maintain diversity in the population. Diversity in the swarm can be calculated as the proportion of distinct individuals based on their fitness values.
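As a concrete illustration of this diversity measure, the sketch below (ours, not from the paper) computes the proportion of distinct fitness values; the rounding tolerance used to decide distinctness is an assumption.

```python
import numpy as np

def swarm_diversity(fitness, decimals=6):
    """Proportion of distinct individuals in the swarm, judged by rounded fitness values."""
    distinct = np.unique(np.round(np.asarray(fitness, dtype=float), decimals))
    return distinct.size / len(fitness)

# Example: a nearly collapsed swarm has low diversity.
# swarm_diversity([1.0, 1.0, 1.0, 2.5])  ->  0.5
```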

2.2 Subswarms

Since conventional PSO does not guarantee diversity among the particles, dividing the swarm population into multiple groups, called subswarms, is one technique that can improve the overall search process [6] and the diversity among individuals []. This technique of dividing the swarm into multiple subswarms and performing the optimization process locally and independently within each subswarm is called multi-swarm optimization. In multi-swarm optimization, each subswarm focuses on a specific region and carries out its own optimization process.
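As a simple illustration (the specific grouping or regrouping scheme is not prescribed here), one straightforward way to partition a swarm into subswarms is sketched below; the helper name is ours.

```python
import numpy as np

def split_into_subswarms(n_particles, n_subswarms):
    """Partition particle indices into equally sized, contiguous subswarms."""
    return np.array_split(np.arange(n_particles), n_subswarms)

# Example matching Section 5.2: 200 particles -> 4 subswarms of 50 indices each.
# subswarms = split_into_subswarms(200, 4)
```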

2.3   Forgetting curve

The storage and utilization of information in the human brain is mainly determined by the length of time since the information was stored in memory. The efficiency of historical cognition decreases gradually with time. That is, the human brain retains information through frequent usage, but the information fades from memory when it is not in use. Ebbinghaus, in 1885, described the famous Ebbinghaus forgetting curve to confirm this point. The forgetting curve represents the probability that a person can recall information at time t since the previous recall. It also shows how information is lost from memory over time when there is no attempt to recall it. It is usually expressed as

$R = e^{-t/S}$                                                          (3)

where R is the memory retention, denoting the probability of recalling information at time t since the last recall, e is Euler's number (approximately 2.718), t is the time since the previous recall, and S (the relative strength of memory, or stability) is the approximate time for which the information is retained in memory since the last recall.

An essential aspect of long-term memory is that after a reproduced information recall at time t > 0, the time S for which the information is stored in memory changes. The change depends on the previous value of S and on the time of recall t. Usually, a reproduced recall multiplies S in the next period by a factor F which is always greater than 1, i.e., F > 1. Also, immediate recollection of information has no considerable effect on learning, while a recollection reproduced too late causes substantial forgetting. There is an optimal time opt(S) between these two extremes at which the recollection of information is most beneficial [3].

The new value of S, denoted Snew, depends on (a) the type of memory, (b) the time of repetitive information recall, and (c) the previous value of S. It can be calculated as Snew = ch(t, S, F, Sini) · S, where the function ch(t, S, F, Sini) computes the coefficient of change of S and Sini is the initial strength of memory.
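The following is a minimal sketch (ours) of the retention formula in Eq. (3) together with a simplified strength update; F = 1.2 and opt(S) = 20% of S are the values adopted later in Section 4, while the shape of ch(.) itself is an assumption, since the original function from [3] is not reproduced in this text.

```python
import math

def retention(t, S):
    """Ebbinghaus forgetting curve, Eq. (3): probability of recall at time t since the last recall."""
    return math.exp(-t / S)

def update_strength(t, S, F=1.2, opt_fraction=0.2):
    """Toy stand-in for ch(t, S, F, S_ini): a recall near opt(S), taken as 20% of S,
    multiplies the strength by F; otherwise S is left unchanged. The exact ch(.)
    of [3] is not reproduced here, so this rule is an assumption."""
    opt_t = opt_fraction * S
    return F * S if abs(t - opt_t) <= opt_t else S
```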

In this work, we combine the idea of the Ebbinghaus forgetting curve with the PSO algorithm. We assume that (a) particles in PSO have a memory that possesses qualities of the human brain, and (b) the memory retention factor of the particles changes according to the law of the Ebbinghaus forgetting curve.

2.4 Motivation

According to the Ebbinghaus forgetting curve, the efficiency of historical cognition mainly depends on the time since the information was stored in memory. In other words, recall is more accurate when information is recollected within a short span of the time it was stored than when it is recollected after a longer span. Following the same law, our idea is to pick the best pbest information from memory, similar to the way the human brain does, to help the particles find the global optimum and avoid stagnation. Since particles in PSO iteratively move through the solution space, storing a particle's information in historical memory helps particles return to promising regions if they fall into local optima. Intuitively, the historical memory of particles can help PSO overcome this drawback and improve performance, because implicit or explicit historical memory allows PSO to store promising solutions and reuse their information later to improve the search. On the one hand, the historical memory preserves information about the promising regions that particles have already searched, which enables particles to return to these regions if they fall into local optima. On the other hand, historical memory derived from different particles can also maintain the diversity of the population to some extent.

Also, in PSO all the particles initially start with divergent behavior, and the degree of divergence decreases gradually as they become trapped in local optima, resulting in premature convergence. Improving the population diversity can prevent premature convergence and improve the overall convergence rate. Storing a particle's previous fitness values in memory and calculating the relative change in fitness at each iteration of PSO can serve as an indicator of premature convergence. A negligible change in fitness value compared with the previous iteration suggests that the particle is losing its divergent behavior (since a particle changes its position based on its fitness value) and is heading towards premature convergence.

3   Related Work

In the literature, several notable works have improved the quality of conventional PSO. Van den Bergh and Engelbrecht [4] presented a cooperative particle swarm optimizer (CPSO) that uses multiple swarms to optimize different components of the solution vector cooperatively. Mendes et al. [5] introduced the fully informed particle swarm (FIPS), which uses all of a particle's neighbors to influence its flying velocity. Ratnaweera et al. [6] presented a self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients (HPSO-TVAC). Liang et al. [7] described a comprehensive learning particle swarm optimizer (CLPSO) that uses all particles' current pbests to update a particle's velocity. Chen et al. [8] presented a PSO with an aging leader and challengers (ALCPSO) that uses an aging mechanism. Lin et al. [9] presented a self-government PSO in which each particle's update depends on the local best information found in the last iteration as well as on pbest and gbest. All of the above algorithms can still become trapped in local optima.

Researchers have also focused on utilizing memory and multiple swarms to eliminate premature convergence. Below are some of the studies that used additional external or internal memory (a repository) to improve the performance of conventional PSO. Memory-based PSO methods in the literature have mainly concentrated on the ways in which the local and global best positions are selected and used. Coello et al. [10] determined the global best by choosing the best solutions from an external memory, with the local best updated with respect to Pareto dominance. Hu et al. [11] used dynamic neighborhoods and employed an external memory to store all potential optimal solutions. In their work, the global best is selected from the candidates in the external memory by defining a neighborhood objective and an optimization objective function, and it is found among the particle's neighbors subject to the defined neighborhood objective. Wang et al. [12] presented a triggered memory-based PSO that adjusts the parameters of PSO when the search space changes over time; a predefined number of globally best positions is maintained and reintroduced into the population when necessary. Acan and Gunay [13] stored a single globally best position together with a finite number of globally worst positions, and at the end of each iteration some randomly chosen particles are replaced by applying a crossover operator. In another work by Acan [13], each particle maintains a global and a local memory; a colony consisting of these positions is constructed, the velocity and position of particles are updated in each iteration, and the new positions are evaluated, with the best replacing the particle's current one.

Tang and Fang [14] described an approach that accumulates historical cognition using a deep extended memory and improves convergence speed by introducing a new learning method for cognition. Asta and Uyar [15] presented an extension to PSO that utilizes two separate external memories in which a set of globally found best and worst positions are stored along with their parameters; at each iteration, the distances of the current particle to the closest best and closest worst particles are calculated. Jie et al. [16] proposed a composite PSO algorithm, HMPSO, in which each particle has three candidate positions generated from the historical memory, the particle's current pbest, and the swarm's gbest. These approaches re-introduce the globally best position into the population during the search. The majority of memory-related works in the literature concentrate on using memory (either external or additional) to utilize information from the particles' historical cognition.

The multi-swarm technique has also attracted many researchers. Zhang and Ding [17] proposed MSCPSO, which uses four subswarms that exchange information among themselves to evaluate the fitness function using a master-slave model. Niu et al. [18] presented a new optimization algorithm with a central learning strategy, MPSOCL, which employs a scheme that updates the search direction by combining historical values from all subswarms. Suganthan et al. [19] proposed DMS-PSO, in which the whole population is divided into small subswarms that are frequently regrouped by exchanging information, thereby increasing the diversity of the swarm. However, frequent regrouping in DMS-PSO does not lead to better exploitation as the algorithm iterates, and several modifications have been made to DMS-PSO to achieve good performance. Xia et al. proposed DMS-PSO-CLS, which hybridizes DMS-PSO with a cooperative learning strategy to achieve better exploitation and exploration. Li and Xiao introduced Multi-Best PSO (MBPSO), in which the whole population is divided into subswarms; each subswarm has a separate gbest, the fitness value is calculated using multiple gbests, and in the case of premature convergence the multiple gbest values are used to escape the local position. Zhao et al. extended DMS-PSO by using subswarms with dynamic sizes to adopt the harmony search algorithm; new harmonies are generated according to the current best position, and the nearest personal best solution is replaced with a newly generated harmony that has better fitness.

4  Proposed Approach

In this section, the two proposed approaches to eliminate the premature convergence problem are presented.

  1. Approach 1 (PSOMR):

This method uses memory along with the concept of the forgetting curve to read information and update both the velocity and the position of a particle. The advantage of using memory is the preservation of a particle's historically promising information: the historical memory reflects the distribution of particles' historically promising pbests in the solution space. The advantage of incorporating the forgetting curve into PSO is that we can fetch the appropriate fitness value by calculating the memory retention of a particle, in line with the way the human brain recollects information at time t and reproduces it for use. The proposed approach has two phases. The first phase deals with initialization of the memory (Algorithm 1, see Figure 1), where each particle uses its memory to store the pbests it has personally explored so far along with their fitness values. Note that in conventional PSO there is a local memory for each particle and a global memory for the swarm, each capable of keeping only one position; unlike conventional PSO, in the proposed approach a particle can store a finite number of promising values. The second phase uses the memory from phase one, computes the memory retention, and selects the pbest from the relevant memory slot.

We assume that the memory is initially empty and that the memory size is the same for all particles. In the initialization process, the conventional PSO algorithm is run until the memory is filled with personally discovered promising positions. The fitness value of each particle i is compared with all the fitness values stored in its local memory Mi. The pbest and fitness value of a particle are inserted into Mi only if the local memory is empty or the current fitness value is better than the last inserted value; this ensures that the initial contents of the memory are sorted in ascending order of fitness. For every particle in the swarm and every dimension, we compute the memory retention at time t with equation (3). Next, for all dimensions, we calculate the percentage of memory retention using equation (5), and if it is not 0%, we select the pbest from the nth slot of the memory. We compute the velocity of particle i at time t+1 as

$v_{ij}^{t+1} = v_{ij}^{t} + \sum_{k=1}^{t} R_i\,(pBest_{ij}^{t} - x_{ij}^{t}) + \sum_{k=1}^{t} R_i\,(gBest_{j}^{t} - x_{ij}^{t})$       (4)

where

$R_i$ represents the memory retention of particle i at time t. We continue to update the velocity of the particle using equation (4) and then update its position, and this procedure is repeated for all particles and all dimensions. We compute the function ch(t, S, F, Sini) as in [3].

As the value of opt(S) varies between 10% and 30% of the time S (see Section 2.3), we consider opt(S) to be 20% of S and F = 1.2. Since we maintain multiple pbests and fitness values in a particle's memory, in the second phase we determine the slot from which the information is to be picked at time t. For this, we use the value of S (the approximate time since the last recall): if S is less than a predefined time n·T, where 0 < n <= m, we pick the pbest from the nth slot of the memory. Figure 2 shows the procedure for finding the memory slot (Algorithm 2), and Figure 3 shows the proposed PSO with memory (Algorithm 3). Note that step 6 of Algorithm 3 uses the nth slot computed by Algorithm 2. Our procedure differs from the conventional PSO algorithm in two ways:

  1. Use of historical memory (conventional PSO does not maintain historical memory).
  2. Use of the forgetting curve to compute the amount of information present in the memory at time t (both ideas are sketched in the code below).
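Since Algorithms 1-3 are referenced but not reproduced in this text, the following Python sketch illustrates the two ideas under our own assumptions: the memory insertion rule from the phase-one description, and a slot-selection rule based on S < n·T (T = 0.0001 s, as in Section 5.3.1). Class and method names are ours.

```python
import math
import numpy as np

# A minimal sketch (ours, not Algorithms 1-3 verbatim) of the two PSOMR ideas:
# a per-particle memory of promising (pbest, fitness) pairs, and slot selection
# driven by the Ebbinghaus retention of Eq. (3).
class ParticleMemory:
    def __init__(self, size=5):
        self.size = size
        self.slots = []                      # kept sorted by ascending fitness

    def insert(self, pbest, fitness):
        """Phase 1: insert only if the memory is empty or the value improves on the last insertion."""
        if not self.slots or fitness < self.slots[-1][1]:
            self.slots.append((np.copy(pbest), fitness))
            self.slots.sort(key=lambda s: s[1])
            del self.slots[self.size:]       # keep at most `size` entries

    def select_slot(self, S, T=1e-4):
        """Phase 2: pick the slot n with S < n*T (our reading of Algorithm 2; T from Section 5.3.1)."""
        for n in range(1, len(self.slots) + 1):
            if S < n * T:
                return n - 1                 # 0-based index of the nth slot
        return len(self.slots) - 1

def retention(t, S):
    """Memory retention of Eq. (3), used in the velocity update of Eq. (4)."""
    return math.exp(-t / S)
```

The pbest read from the selected slot would then take the place of pBest in Eq. (4), weighted by the retention value.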

 

  2. Approach 2 (MS-PSOMR):

PSO with a large population performs well on simple problems, and PSO needs a comparatively small population, compared to other evolutionary algorithms, to achieve good results on complex problems [20]. Hence, we divide the entire swarm into multiple subswarms to slow the convergence speed and increase diversity. Each member of a subswarm uses the solution space of its subswarm and continues to search for better regions. Similar to approach 1, we store the fitness values of particles in their memory and use them later for updating velocity and position. Each subswarm has its own local best and global best, termed lpbest and lgbest respectively. We calculate the relative change in fitness values from memory, and if the relative change is less than a predefined constant (µ), we suspect that the particle is undergoing premature convergence. The reasoning is that a particle's change in position depends on its fitness value (according to Eqs. 1 and 2), and under the conventional PSO algorithm a particle changes its position only if there is an improvement in the fitness function compared to the previous iteration. Therefore, if the relative change in the fitness value is less than µ, we assume that the particle is undergoing premature convergence. To avoid this, we take the gbest of the best particle in the subswarm and use it to update the particle's position and velocity using Eq. (1) and Eq. (2), and the process is repeated. Since the gbest is replaced, the particle changes its position and escapes premature convergence.

The following are the key parameters; the efficiency of the approach can be tuned by adjusting them:

  1. Size of the memory: the larger the memory, the more accurate the detection of premature convergence.
  2. Value of µ: the smaller the value, the more accurate the detection of premature convergence.

The pseudocode for MS-PSOMR is given below in Algorithm 4; a minimal illustrative sketch of the premature-convergence check follows.
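Since Algorithm 4 is not reproduced in this text, the sketch below (ours) shows one way to implement the check described above; the exact definition of the relative change, the small epsilon guard, and µ = 0.02 (the value used in Section 5.3.2) are assumptions.

```python
import numpy as np

def relative_change(prev_fitness, curr_fitness):
    """Relative change in a particle's fitness between consecutive iterations."""
    return abs(curr_fitness - prev_fitness) / (abs(prev_fitness) + 1e-12)

def check_and_escape(particle_gbest, prev_fitness, curr_fitness, subswarm_best, mu=0.02):
    """Return the gbest to use in the next velocity update (Eqs. 1-2).

    If the relative change falls below mu, premature convergence is suspected and
    the subswarm's best position replaces the particle's current gbest.
    """
    if relative_change(prev_fitness, curr_fitness) < mu:
        return np.copy(subswarm_best)   # swap in the subswarm's best to escape
    return particle_gbest
```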

5   Experimental Results

 

5.1 Benchmark functions:

 

To test the performance of both proposed algorithms, we use the six benchmark functions presented in Table 1. The functions are broadly classified into two groups. The first group (f1, f2) contains relatively simple unimodal functions. The second group (f3-f6) contains complex multimodal functions, of which f5 has a particularly large number of local optima.
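To make the objective definitions concrete, here is a short sketch of two of the Table 1 functions in their standard textbook forms (Sphere and Rastrigin); the remaining functions follow analogously.

```python
import numpy as np

# Two of the benchmark functions from Table 1 (standard forms; both have a
# global minimum of 0 at x = 0).
def sphere(x):
    return np.sum(x ** 2)

def rastrigin(x):
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

# Example: rastrigin(np.zeros(30)) == 0.0 for the 30-dimensional setting used here.
```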

5.2 Parameter settings:

We used variable parameter settings for both approaches in order to present the results in more detail. For PSOMR, the swarm population is taken as 30 and the memory size as M = 5. The number of dimensions D is set to 30, and the results are obtained from 30 independent runs. The maximum number of epochs, used as the termination criterion, is 4000. For MS-PSOMR, the total number of particles is 200, divided into four subswarms of 50 particles each, and the number of dimensions is 30. The values of µ and the memory size (m) vary with the function; they are presented in Table 7. For PSO, PSOMR, and MS-PSOMR we set the inertia weight (ω) to 0.729 and c1 and c2 to 1.49445 [17].
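For convenience, these settings can be collected in a small configuration block; the dictionary layout is ours, while the values are those stated above.

```python
# Parameter settings from Section 5.2 (values from the text; layout is illustrative).
PSOMR_SETTINGS = {
    "swarm_size": 30,
    "memory_size": 5,
    "dimensions": 30,
    "max_epochs": 4000,
    "independent_runs": 30,
    "inertia_weight": 0.729,
    "c1": 1.49445,
    "c2": 1.49445,
}

MS_PSOMR_SETTINGS = {
    "total_particles": 200,
    "subswarms": 4,
    "particles_per_subswarm": 50,
    "dimensions": 30,
    "inertia_weight": 0.729,
    "c1": 1.49445,
    "c2": 1.49445,
    # mu and memory size vary per benchmark function (see Table 7)
}
```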

5.3 Discussion:

The two proposed approaches are compared with PSO variants that exhibit similar functionality. PSOMR is compared with CLPSO, which uses all particles' current pbests to update a particle's velocity. MS-PSOMR is compared with DMS-PSO, in which the whole population is divided into many small subswarms that are frequently regrouped using various regrouping schedules. In the next subsection, we present the results of PSOMR.

 

 

5.3.1 Experimental results of PSOMR Approach:

To illustrate the problem of premature convergence in conventional PSO, the algorithm is set to optimize the Rastrigin function. Figure 4 shows the premature convergence behavior of conventional PSO for one particle in one dimension over 4000 epochs. During the optimization of the function, premature convergence is detected at epoch 1551, where the particle's velocity becomes zero. However, premature convergence can be avoided if the pbest is taken from the historical memory (see Figure 5).

Table 1. List of benchmark functions

Type of function   Name         fmin   Test function
Unimodal           Sphere       0      $f_1 = \sum_{i=1}^{Dim} x_i^2$
Unimodal           Rosenbrock   0      $f_2 = \sum_{i=1}^{Dim-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$
Multimodal         Schwefel     0      $f_3 = 418.9829 \cdot Dim - \sum_{i=1}^{Dim} x_i \sin(\sqrt{|x_i|})$
Multimodal         Griewank     0      $f_4 = \frac{1}{4000}\sum_{i=1}^{Dim} x_i^2 - \prod_{i=1}^{Dim} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$
Multimodal         Rastrigin    0      $f_5 = \sum_{i=1}^{Dim} \left( x_i^2 - 10\cos(2\pi x_i) + 10 \right)$
Multimodal         Ackley       0      $f_6 = -20\exp\!\left(-0.2\sqrt{\frac{1}{Dim}\sum_{i=1}^{Dim} x_i^2}\right) - \exp\!\left(\frac{1}{Dim}\sum_{i=1}^{Dim}\cos(2\pi x_i)\right) + 20 + e$

After applying the PSOMR approach, it is observed that the velocity does not become zero when the particle drops towards a local minimum, because the pbest value is taken from the historical memory. Simulations were carried out to compare the performance of PSOMR with conventional PSO. Table 4 shows the best, worst, and average values for PSOMR, conventional PSO, and CLPSO, with the best values shown in bold. It is observed that PSOMR performed better than conventional PSO for the majority of the statistical metrics, mainly for f2, f3, f4, f5, and f6. For both unimodal functions f1 and f2, optimal solutions were found in all runs.

Our algorithm completely dominated the multimodal functions Griewank (f4) and Rastrigin (f5) in all metrics, which shows the viability of our approach. The results also indicate that our algorithm is better suited to solving multimodal functions than unimodal functions. In addition, the optimal value is found in fewer epochs than with conventional PSO. Figure 7 shows the convergence curves for all benchmark functions listed in Table 1 for both PSOMR and conventional PSO. Figure 6 shows the memory retention curves, drawn after normalizing the retention values. The average execution time of the algorithm is found to be 3000 ms, and the value of T is taken as 0.0001 sec. Table 2 shows the average memory occupied by the PSOMR approach for various swarm sizes, along with the execution time for 300 epochs.

Table 2. Average memory taken by PSOMR for different swarm sizes

Swarm size   Memory taken (bytes)   Size (MB)   Execution time (secs)
50           55721984               55.72       1.28
100          111095808              111.09      2.34
200          183603200              170.27      2.98
300          170262528              183.60      3.23
400          292466688              292.46      4.52
Table 3. Average time cost (sec)

Function   PSO    PSOMR   CLPSO
f1         1.51   2.58    1.84
f2         1.83   2.21    2.01
f3         2.15   3.58    2.46
f4         2.48   3.2     2.45
f5         2.86   3.4     3.2
f6         2.53   3.5     2.83

There is an average increase of 24.12% in main memory per 100 particles in the swarm. As in conventional PSO, the majority of the process memory is occupied by information related to particle parameters, initialization (for all dimensions and epochs), and process start-up. The rest of the memory is occupied by initialization of the historical memory, checking for premature convergence, looking up the best pbest value in memory, and replacing the current pbest with the pbest from memory, all of which contribute to better overall performance. Further, the execution time does not have a direct impact on the increase in memory. Thus, the swarm's memory footprint does not directly affect or degrade the overall performance of the algorithm, apart from the increase in process memory, which is expected because of the additional steps.

Table 3 shows the time cost (in seconds) of PSO, PSOMR, and CLPSO. The algorithms were executed for 30 epochs with ten dimensions and ten particles. The average execution time for conventional PSO is 2.2266 secs. As a result of introducing historical memory and simulated human behavior, the time cost of PSOMR increased by 0.858 secs in exchange for better overall performance. We also observed that the performance and efficiency of PSOMR are sensitive to the memory slot and the memory retention of the particle. The process is executed by introducing a time delay in an incremental manner, and the value of memory retention is calculated using the functions in equation 5; the highest efficiency is observed for the first equation and the lowest for the last. Also, in the case of premature convergence, we observed that the function value is lowest when the pbest is taken from the first memory slot; memory retention is maximal at the start of execution and decreases as time progresses, as described by the Ebbinghaus forgetting curve. Since the velocity depends on the memory retention of the particle (see equation 4), the velocity of the particle never becomes zero and the particle does not fall into a local optimum.

Fig 6: Convergence characteristics for various functions for PSOMR (Sphere, Griewank, Rastrigin, Rosenbrock, Ackley, Schwefel)

Table 4. Optimization results for PSOMR

Function     Metric    PSOMR       PSO          CLPSO
Sphere       Best      9.10E-03    7.530E-02    7.01E-02
             Worst     3.00E+00    -6.210E+00   2.97E+00
             Average   1.74E+00    -1.072E+00   1.35E+00
Rosenbrock   Best      6.66E-02    9.200E-01    9.84E-02
             Worst     2.99E+00    3.000E+00    2.10E+01
             Average   1.53E+00    6.990E-01    2.42E+00
Schwefel     Best      -4.35E+00   1.27E-01     2.800E-02
             Worst     3.58E+00    2.250E+00    3.98E+00
             Average   2.000E-01   3.30E-01     2.05E+00
Griewank     Best      -2.00E-04   -9.321E-03   3.14E-02
             Worst     1.21E+00    2.654E+00    1.64E+00
             Average   -9.80E-03   1.322E+00    8.74E-01
Rastrigin    Best      1.26E-02    1.860E-01    1.83E-01
             Worst     3.19E+00    4.830E+00    3.63E+00
             Average   1.65E+00    2.349E+00    1.68E+00
Ackley       Best      7.14E-03    4.710E-03    0.00E+00
             Worst     2.95E+00    5.610E+00    1.99E+00
             Average   1.38E+00    2.875E+00    8.94E-01

 

5.3.2 Experimental Results of MS-PSOMR:

In this section, we first detect the premature convergence problem using conventional PSO and then address it using the MS-PSOMR approach. We used 200 particles divided into 4 subswarms of 50 particles each. The approach is set to optimize the Rastrigin function to detect premature convergence for subswarm 1, with conventional PSO set to iterate for 4000 epochs. Initial degradation in the particle's relative change in position is observed at iteration 1830, and by iteration 1839 it has dropped to 0.018 (see Figure 8). Since the relative change is less than µ = 0.02, the subswarm's local gbest is used to update the particle's position and velocity. Because the particle's gbest is changed, the particle changes its position and premature convergence is avoided (see Figure 9).

Table 5 shows the optimization results for MS-PSOMR in terms of best, worst, and average fitness values, with the best values highlighted in bold. Figure 10 shows the convergence characteristics for all the benchmark functions for the three algorithms, namely conventional PSO, MS-PSOMR, and DMS-PSO; the horizontal axis indicates the number of epochs and the vertical axis indicates the best fitness value obtained in that epoch. From the results, it can be observed that MS-PSOMR performed significantly better than the other two algorithms for the two unimodal functions. Both DMS-PSO and MS-PSOMR found the best value for the Sphere function, and MS-PSOMR has a better average value than the other two algorithms. For the Rosenbrock function, MS-PSOMR obtained better results for both the best and average metrics. In the case of the multimodal functions, MS-PSOMR found the best optimal values for all functions in almost all epochs compared to the other algorithms of its category, with DMS-PSO also finding the best values for f4 and f6. MS-PSOMR also has the best average value for f5 and f6, while conventional PSO has the best average value for f3 and f4, and DMS-PSO has the best worst values for f5 and f6. As MS-PSOMR found the best value for f5, which has an extensive search range, we conclude that it generates a population with high diversity and high exploration ability.

Table 5. Optimization results of MS-PSOMR

Function     Metric    MS-PSOMR    PSO          DMS-PSO
Sphere       Best      0.00E+00    7.53E-02     0.00E+00
             Worst     -3.97E+00   -6.21E+00    3.20E-01
             Average   -2.76E-01   -1.07E+00    6.43E-01
Rosenbrock   Best      9.36E-02    9.20E-01     1.09E+00
             Worst     8.68E+00    3.00E+00     1.14E+00
             Average   3.84E-01    6.99E-01     2.72E+00
Schwefel     Best      2.80E-02    1.73E-01     6.20E-02
             Worst     6.319286    2.25E+00     9.34E-01
             Average   4.11E-01    2.00E-01     1.62E+00
Griewank     Best      0.00E+00    -9.32E-03    0.00E+00
             Worst     6.346507    2.65E+00     3.35E+00
             Average   -1.50E+00   1.32E+00     1.98E+00
Rastrigin    Best      5.26E-02    1.86E-01     7.00E-02
             Worst     2.99E+00    4.83E+00     4.19E-01
             Average   -2.76E-01   2.35E+00     2.29E+00
Ackley       Best      0.00E+00    4.71E-03     0.00E+00
             Worst     5.09E+00    5.61E+00     2.98E+00
             Average   -1.00E-01   2.88E+00     5.47E-01

Table 6 shows the average computational time for the three PSO variants, calculated by averaging the values obtained from executing each variant 30 times. The minimum computational time is observed for conventional PSO and the maximum for MS-PSOMR, because of additional steps such as dividing the swarm into multiple subswarms and performing checks at predefined iterations to observe the degradation of the function value.

The experimental results suggest that dividing the swarm population into multiple subswarms, observing the change in the fitness function, and replacing the gbest value with that of the best particle helped MS-PSOMR to overcome premature convergence and perform better than other PSO variants with similar behavior. Based on the experimental results, it can also be concluded that MS-PSOMR exhibited better performance in optimizing most of the benchmark functions, even though it did not perform best for all of them.

 

Table 6. Average computational time for all PSO variants (sec)

Function   PSO    MS-PSOMR   DMS-PSO
f1         1.51   4.52       4.23
f2         1.83   4.05       3.28
f3         2.15   4.52       3.82
f4         2.48   5.2        4.75
f5         2.86   5.4        4.84
f6         2.53   5.3        4.8

Table 7. Average optimal threshold and size of memory

Function     Average optimal threshold value   Total size of memory
Sphere       10^-2                             5
Rosenbrock   10^-7                             5
Schwefel     10^-6                             2
Ackley       10^-3                             4
Griewank     10^-7                             3

Fig 10. Convergence characteristics for various functions for MS-PSOMR (Sphere, Griewank, Rastrigin, Rosenbrock, Ackley, Schwefel)

 

 

 


 

 

 

 

6 Discussion and Conclusion

As most existing PSO algorithms fail to make use of particles' historically promising pbests (other than their current pbests), it may be impossible for a particle to move out of a local optimum once its pbest falls into the area where the gbest is located. Moreover, PSO frequently becomes trapped in local optima, especially when applied to multi-dimensional and multimodal functions [16]. To overcome this drawback, in this work we store promising solutions in memory. We presented a new algorithm, PSOMR, in which memory is used to preserve historically promising pbests and reuse them later to improve the search process. As it is common for memory to lose information over time, we applied the idea of the Ebbinghaus forgetting curve to the PSO algorithm and picked the pbest from the appropriate memory slot at time t. Our approach is two-fold: (a) use of memory to maintain particles' historically promising pbests, and (b) prevention of premature convergence by picking pbests from the nth slot of the historical memory.

We performed experiments on six benchmark functions. The results indicate that PSOMR achieves the best results quickly on all the functions, mainly because it uses the information of particles' historically promising pbests, and it also performs well on most multimodal functions. The results further show a strong correlation between memory retention and the performance of our approach: the algorithm behaves best when the pbest value is taken from the first slot of the memory. Finally, PSOMR does not add considerable overhead to existing PSO algorithms.

References

  1. J Kennedy, R. Eberhart, “Particle swarm optimization,” In Proc. of IEEE Int. Conf. Neural Networks, pp. 1942–1948, Australia (1995).
  2. C Coello, M.  Lechuga, ”MOPSO: A proposal for multiple objective particle swarm optimization,” In IEEE Congress on Evol. Comp. (CEC), pp. 1051-1056, IEEE Computer Society Washington, DC, USA (2002).
  3. M Kudělka, Z Horák, V Snášel, P Krömer, J Platoš, A Abraham, “Social and swarm aspects of co-authorship network,” Logic Journal of the IGPL, 20, pp. 634-643 (2012).
  4. F van den Bergh, A.P Engelbrecht, “A cooperative approach to particle swarm optimization,” IEEE Trans. Evol. Comput., 8(3), pp. 225–239 (2004).
  5. R Mendes, J Kennedy, J Neves: “The fully informed particle swarm: Simpler, maybe better,” IEEE Trans. Evol. Comput., 8(3), pp. 204–210 (2004).
  6. A Ratnaweera, S Halgamuge., H.C Watson, “Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients,” IEEE Trans. Evol. Comput. 8(3), pp. 240–255 (2004).
  7. J Liang, A.K Qin, P.N  Suganthan, S, Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions.” IEEE Trans. Evol. Comput., 10(3), pp. 281–295 (2006).
  8. W.N Chen, J Zhang, Y Lin, N Chen, Z.H Zhan, H Chung, Y Li, Y.H Shi, “Particle swarm optimization with an aging leader and challengers,” IEEE Trans. Evol. Comput., 17(2), pp. 241–258 (2013).
  9. W Lin, X Gu, Z Lian, Y Xu, B Jiao, “A self-government particle swarm optimization algorithm and its application in Texaco gasification,” J. Softw., 8(2), pp. 472–479 (2013).
  10. C Coello, M  Lechuga, ”MOPSO: A proposal for multiple objective particle swarm optimization.” In: IEEE Congress on Evol. Comp. (CEC), pp. 1051-1056, IEEE Computer Society Washington, DC, USA (2002).
  11. X Hu, R C Eberhart, Y Shi. “Particle swarm with extended memory for multiobjective optimization.” In: Proceedings of the IEEE Swarm Intelligence Symposium (SIS), pp. 193-197, Indianapolis, IN, USA (2003).
  12. H Wang, D. Wang, S. Yang. “Triggered memory-based swarm optimization in dynamic environments,” In Proc. of Applications of Evolutionary Computing, pp. 637-646, Springer-Verlag Berlin, Heidelberg (2007).
  13. A. Acan, A Unveren “A memory-based colonization scheme for particle swarm optimization,” In IEEE Congress on Evolutionary Computation (CEC), pp. 1965-1972, Piscataway, NJ, (2009).
  14. R Tang, Y Fang,  “Modification of particle swarm optimization with human simulated property” in Neurocomputing, 153, pp. 319-331 (2015).
  15. S Asta, A. Uyar, “A novel particle swarm optimization algorithm,” 10th international conference on Artificial Evolution (2011).
  16. J Li, J Zhang, C Jiang, M Zhou, “Composite particle swarm optimizer with historical memory for function optimization,” IEEE Trans. on Cybernetics, 45(10), pp. 2168-2267 (2015).
  17. R Mendes, J Kennedy, J Neves, “The fully informed particle swarm: Simpler, maybe better.” IEEE Trans. Evol. Comput., 8(3), pp. 204–210 (2004).
  18. Z. Zhang, X.M. Ding, A multi-swarm self-adaptive and cooperative particle swarm optimization, Eng. Appl. Artif. Intell. 24 (6) (2011) 958–967.
  19. B. Niu, H.L. Huang, L.J. Tan, J.J. Liang, Multi-swarm particle swarm optimization with a center learning strategy, in Proceedings of the Advances in Swarm Intelligence, 2013, pp. 72–78.
  20. J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance, in Proceedings of the IEEE Congress on Evolutionary Computation, 1999, pp. 1931–1938.
  21. P. J. Angeline, “Using selection to improve particle Swarm Optimization,” in Proc. IEEE Congr. Evol. Comput., Anchorage, AK, 1998, pp.84–89.
