# Bio-Inspired Algorithms for Global Optimization

**Info:** 12185 words (49 pages) Dissertation

**Published:** 16th Dec 2019


# Table of Contents

Signature Page

Chapter 2: Ant Colony Optimization

Chapter 3: Artificial Bee Colony

4.2.1. Light Intensity and Attractive Nature

Chapter 5: Application and Comparison to Optimize Benchmark Problems

Chapter 6: Application to Engineering Problems

6.3.3. Design of Type I FIR Filters

6.3.5. Highpass Filter Results

Chapter 7: Summary and Conclusion

# List of Figures

Figure 2. 1 Solution archive used in ACOR [13]

Figure 3. 1 Steps of ABC algorithm

Figure 4. 1 Pseudo code for the Firefly Algorithm.

Figure 6. 1 Desired & Synthesized array patterns using GA

Figure 6. 2 Desired & Synthesized array patterns using ACO

Figure 6. 3 Desired & Synthesized array patterns using ABC

Figure 6. 4 Desired & Synthesized array patterns using FA

Figure 6. 5 Types of Filters based on Frequency Response

Figure 6. 6 Ideal Low pass filter

Figure 6. 7 Magnitude and Phase Response of Low Pass FIR filter using ACO

Figure 6. 8 Magnitude and Phase Response of Low Pass FIR filter using ABC

Figure 6. 9 Magnitude and Phase Response of Low Pass FIR filter using FA

Figure 6. 10 Lowpass FIR filter using FA and firpm

Figure 6. 11 Magnitude and Phase Response of High Pass FIR filter using ACO

Figure 6. 12 Magnitude and Phase Response of High Pass FIR filter using ABC

Figure 6. 13 Magnitude and Phase Response of High Pass FIR filter using FA

Figure 6. 14 Highpass FIR filter using ABC and firpm

Figure 6. 15 Magnitude and Phase Response of Bandpass FIR filter using ACO

Figure 6. 16 Magnitude and Phase Response of Bandpass FIR filter using ABC

Figure 6. 17 Magnitude and Phase Response of Bandpass FIR filter using FA

Figure 6. 18 Bandpass FIR filter using ACO and firpm

# List of Tables

Table 5. 1 Test functions and their respective Formula used to test the algorithms

Table 5. 2 Test functions ranges and their global minimum values

Table 5. 3 Control parameters of algorithms

Table 5. 4 Results for Branin function

Table 5. 5 Results for Goldstein-Price function

Table 5. 6 Results for Rosenbrock function

Table 5. 7 Results for Sphere function

Table 5. 8 Results for Ackley function

Table 6. 1 GA and ACO Table distribution

Table 6. 2 ABC and FA Table distribution

Table 6. 3 Average time to compute solution for each algorithm

Table 6. 4 Control parameters of algorithms

Table 6. 5 FIR linear phase filter requirements [32]

Table 6. 6 FIR Lowpass Filter Coefficients obtained by using ACO, ABC and FA

Table 6. 7 FIR Highpass Filter Coefficients obtained by using ACO, ABC and FA

Table 6. 8 FIR Bandpass Filter Coefficients obtained by using ACO, ABC and FA

# Abstract

Many algorithms have been developed that use evolutionary concepts and biotic components of nature. These algorithms use stochastic rather than deterministic methods to arrive at an optimal solution. In this thesis, three bio-inspired algorithms are studied. Ant Colony Optimization, inspired by the foraging behavior of ants, has been widely used in mathematical and engineering applications. Artificial Bee Colony is another algorithm based on foraging behavior, in this case that of honey bees. Firefly Algorithm is based on the flashing behavior of fireflies. These three algorithms are described in detail in the thesis and are used to solve benchmark mathematical optimization problems. The algorithms are also used to synthesize an antenna pattern for a radiometer application. Finally, they are used to design Finite Impulse Response digital filters.

# Chapter 1: Introduction

Global optimization algorithms are used to find the best possible solution of a mathematical or engineering problem. Broadly speaking, there are two ways to find the global optimum: deterministic methods and stochastic methods. A deterministic method guarantees the global optimal solution, but it depends on gradient information and is therefore best suited to unimodal functions. Several global optimization algorithms such as Simulated Annealing, Genetic Algorithm, and Particle Swarm Optimization use stochastic methods to obtain the optimal solution. Stochastic methods are useful for solving multimodal functions because they can escape from local minima/maxima to reach the global minimum/maximum [1]. Stochastic methods that combine randomness with local search are also referred to as metaheuristics [2].

Most of the optimization problems can be written in the generic form as follows:

$$\text{minimize } f(\mathbf{x}), \quad \mathbf{x} = (x_{1}, x_{2}, \ldots, x_{D}) \in \mathbb{R}^{D}, \qquad (1.1)$$

subject to

$$h_{i}(\mathbf{x}) = 0 \;\; (i = 1, 2, \ldots, M), \qquad g_{j}(\mathbf{x}) \le 0 \;\; (j = 1, 2, \ldots, N), \qquad (1.2)$$

where h_{i} and g_{j} are the equality constraints and inequality constraints, respectively [2]. Here, **x** is the design variable and it could be discrete, continuous, or a mixture of both. f(**x**) is called the objective function and this problem is formulated in a D-dimensional design space [2].
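In practice, the constraints in Equation 1.2 are often folded into the objective with a penalty term before a stochastic search is applied. The following Python sketch illustrates this on a made-up toy problem; the penalty weight `rho`, the crude random-search loop, and the sample problem are illustrative assumptions, not material from the thesis:

```python
import random

def penalized_objective(f, x, h_list, g_list, rho=1e3):
    """f(x) plus quadratic penalties for violating the constraints
    h_i(x) = 0 and g_j(x) <= 0 of Eqs. (1.1)-(1.2)."""
    p = sum(h(x) ** 2 for h in h_list)
    p += sum(max(0.0, g(x)) ** 2 for g in g_list)
    return f(x) + rho * p

# Toy problem (assumed for illustration): minimize x1^2 + x2^2
# subject to x1 + x2 - 1 = 0; the constrained optimum is (0.5, 0.5), f = 0.5.
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: x[0] + x[1] - 1.0

# Crude stochastic (random) search over the 2-D design space.
rng = random.Random(0)
best_x, best_val = None, float("inf")
for _ in range(20000):
    x = [rng.uniform(-2, 2), rng.uniform(-2, 2)]
    val = penalized_objective(f, x, [h], [])
    if val < best_val:
        best_x, best_val = x, val
```

Any of the algorithms discussed in the following chapters could replace the crude random search here; only the sampling strategy changes, not the penalized objective.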

Biology-inspired algorithms are population-based algorithms that are developed using nature as an inspiration [3]. Biology-inspired algorithms incorporate the naturally occurring phenomena to solve problems in different fields such as physics, engineering, mathematics, economics, and so on. There has been a significant amount of research in bio-inspired algorithms as they are found to be very effective in solving numerical optimization problems.

Genetic Algorithm, developed by John Holland in the 1960s, is one of the earliest bio-inspired algorithms that uses the Darwinian evolution of biological systems. It uses crossover, mutation, and natural selection as genetic operators for algorithm operations [4].

Several other algorithms such as Evolutionary Algorithms [5], Evolutionary Programming (EP) [6], and Evolution Strategies (ES) [7] were developed during the 1960s and 1970s, all inspired by genetic evolution.

There are other algorithms that are ecology based. Invasive Weed Optimization (IWO) [8] is one of the popular ecology-based algorithms; it mimics the way weeds find a suitable place to grow, reproduce, and colonize an area. Similarly, Biogeography-Based Optimization (BBO) uses the migration patterns of species to find the optimal solution [9].

Several other algorithms developed in the late 20th century are swarm based. Social algorithms are swarm-intelligence-based bio-inspired algorithms that use the characteristics of social insects [3]. In 1992, Marco Dorigo proposed an optimization algorithm based on ants using pheromones as a chemical messenger, which is widely known as Ant Colony Optimization (ACO) [10]. Another popular swarm-based algorithm is Particle Swarm Optimization (PSO), introduced by Kennedy and Eberhart in 1995. This algorithm uses the social behavior of bird flocking and fish schooling. In PSO, the particles swarm around the search space starting from some initial guess [2]. Each particle tracks its current best and shares the global best with the swarm to obtain the optimal solution. Similarly, Artificial Bee Colony (ABC) is another swarm-intelligence-based algorithm, developed by Dervis Karaboga in 2005 [11]. It is based on the swarming of honey bees and is also widely used for numerical optimization. Firefly Algorithm (FA) is another metaheuristic algorithm, inspired by the behavior of groups of fireflies that interact and communicate using bioluminescence produced in the insect's body [2].

In this thesis, ACO, ABC, and FA are studied. Chapters 2, 3, and 4 introduce and describe ACO, ABC, and FA in detail. In chapter 5, the three algorithms are used to solve optimization test functions. In chapter 6, they are applied to engineering problems. Finally, chapter 7 provides the summary and conclusion of the research work.

# Chapter 2: Ant Colony Optimization

## 2.1. Behavior of Ant Colony

The optimization algorithm inspired by the foraging behavior of ants was initially proposed to solve combinatorial optimization problems [10]. It was extended to the continuous domain by K. Socha and M. Dorigo in 2008, building on the original ACO metaheuristic proposed by Dorigo in 1992. Ants initially search for food around their nest in a random manner. When an ant finds a food source, it evaluates the source and carries some food back to the nest, depositing a pheromone trail on its way back. The amount of pheromone deposited depends on the quantity and quality of the food. Other ants follow the pheromone trails, preferring trails with larger amounts of pheromone. This is the main idea behind the ACO algorithm.

## 2.2. Algorithm

In each iteration of ACO algorithms, the solution components are selected using a probability rule defined as follows [12]:

$$p(c_{ij} \mid s^{p}, t) = \frac{[\tau_{ij}(t)]^{\alpha} \cdot [\eta(c_{ij})]^{\beta}}{\sum_{j=1}^{J} [\tau_{ij}(t)]^{\alpha} \cdot [\eta(c_{ij})]^{\beta}} \qquad (2.1)$$

where p(c_{ij}|s^{p},t) is the probability of choosing the solution component c_{ij} given the current partial solution s^{p} at iteration t; τ_{ij}(t) is the pheromone value associated with component c_{ij} at iteration t; η(·) assigns the heuristic value representing the cost of choosing the solution component c_{ij}; α and β are two parameters indicating the relative importance of the pheromone trail and the heuristic value; and i is the current construction step, with J component solutions in the allowable set [13].
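As a concrete reading of Equation 2.1, the selection probabilities for one construction step can be computed as below; the pheromone and heuristic values are made-up numbers for illustration:

```python
def component_probabilities(tau, eta, alpha=1.0, beta=2.0):
    """p(c_ij | s^p, t) from Eq. 2.1: normalized tau^alpha * eta^beta."""
    scores = [(t ** alpha) * (e ** beta) for t, e in zip(tau, eta)]
    total = sum(scores)
    return [s / total for s in scores]

# Hypothetical pheromone levels and heuristic values for 3 candidate components.
tau = [0.8, 0.2, 0.5]
eta = [1.0, 2.0, 1.5]
probs = component_probabilities(tau, eta)
```

With β > α, a component with a modest pheromone level but a strong heuristic value can still dominate the selection, which is exactly the trade-off the two exponents control.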

After the solution for all ants is calculated, the pheromone values are updated by increasing the pheromone levels associated with the chosen good solution, s_{ch}, and decreasing all the pheromone values through pheromone evaporation.

The algorithm described above was extended to continuous and mixed-variable optimization problems as ACO for real domains (ACO_{R}) [14]. The terms ACO and ACO_{R} are used interchangeably in this thesis. The main idea of ACO_{R} is to sample a continuous probability distribution instead of the discrete probability distribution of Equation 2.1. The authors suggested a Gaussian kernel probability density function (PDF), which provides a more flexible sampling shape than a single Gaussian function. The kernel G^{i}(x) is defined as the weighted sum of several one-dimensional Gaussian functions g_{l}^{i}(x) [14]:

$$G^{i}(x) = \sum_{l=1}^{k} w_{l}\, g_{l}^{i}(x) = \sum_{l=1}^{k} w_{l}\, \frac{1}{\sigma_{l}^{i}\sqrt{2\pi}}\; e^{-\frac{(x-\mu_{l}^{i})^{2}}{2(\sigma_{l}^{i})^{2}}} \qquad (2.2)$$

where k is the number of single PDFs included in the Gaussian kernel PDF at the ith construction step; w, µ^{i}, and σ^{i} are vectors of size k that define the weights, means, and standard deviations of the individual Gaussian functions at the ith construction step [13].
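Sampling from the Gaussian kernel of Equation 2.2 amounts to selecting one component l according to its weight and then drawing from that single Gaussian. A Python sketch, where the weights, means, and standard deviations are assumed values:

```python
import math
import random

def sample_gaussian_kernel(w, mu, sigma, rng):
    """Draw one value from G^i(x) = sum_l w_l * N(mu_l, sigma_l), Eq. 2.2."""
    l = rng.choices(range(len(w)), weights=w, k=1)[0]
    return rng.gauss(mu[l], sigma[l])

def kernel_pdf(x, w, mu, sigma):
    """Evaluate G^i(x) directly, normalizing the weights."""
    tot = sum(w)
    return sum(
        (wl / tot) / (s * math.sqrt(2 * math.pi))
        * math.exp(-((x - m) ** 2) / (2 * s * s))
        for wl, m, s in zip(w, mu, sigma)
    )

rng = random.Random(1)
w, mu, sigma = [0.5, 0.3, 0.2], [0.0, 2.0, 5.0], [1.0, 0.5, 0.8]
samples = [sample_gaussian_kernel(w, mu, sigma, rng) for _ in range(5000)]
```

The two-stage draw (pick a component, then sample it) is what gives the kernel its multimodal sampling shape compared to a single Gaussian.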

Figure 2. 1 Solution archive used in ACOR [13]

ACO_{R} stores k solutions in the solution archive T. For an n-dimensional problem, the archive keeps the n variables associated with each solution s_{l}. The structure of the solution archive in ACO_{R} is shown in Figure 2.1, where s^{i}_{l} is the ith component of the lth solution in the archive. The pheromone update is performed by adding a set of new and better solutions to the solution archive T and removing the same number of worse solutions, keeping the archive size constant. The vectors w, µ^{i}, and σ^{i} are determined from the solutions in the archive. At each step, for any Gaussian kernel PDF G^{i}, the vector µ^{i} is defined as:

$$\mu_{l}^{i} = s_{l}^{i} \qquad (2.3)$$

that is, the ith variable of each archived solution is used to determine µ^{i} [13].

After one cycle, the solutions in the archive, together with the newly constructed solutions, are ranked according to their fitness values. Only the top k solutions are kept in the archive and the remaining solutions are discarded. The weight w_{l} of the solution s_{l} is calculated by a Gaussian function of the rank l, with mean 1 and standard deviation qk [14]:

$$w_{l} = \frac{1}{qk\sqrt{2\pi}}\; e^{-\frac{(l-1)^{2}}{2q^{2}k^{2}}} \qquad (2.4)$$

In Equation 2.4, q is a tuning parameter. A small value of q could lead to premature convergence, as the ants prefer to search around highly ranked solutions. A higher value of q makes the probability more uniform, and the algorithm searches for solutions over a wider range, at the cost of slower convergence [13].
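The effect of q in Equation 2.4 can be checked numerically; in this sketch k = 10, and the two q values are illustrative choices:

```python
import math

def rank_weights(k, q):
    """w_l from Eq. 2.4: a Gaussian in the rank l with mean 1 and std dev q*k."""
    return [
        (1.0 / (q * k * math.sqrt(2 * math.pi)))
        * math.exp(-((l - 1) ** 2) / (2 * q * q * k * k))
        for l in range(1, k + 1)
    ]

w_small_q = rank_weights(10, 0.1)   # sharply favors the best-ranked solutions
w_large_q = rank_weights(10, 1.0)   # nearly uniform over all ranks
```

For q = 0.1 the top-ranked solution outweighs the worst by many orders of magnitude, while for q = 1.0 the ratio stays below 2, which illustrates the exploitation/exploration trade-off described above.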

Before an ant generates a solution, it selects one solution in the archive and then uses the Gaussian functions associated with the chosen solution for all n steps. This allows the exploitation of the correlation that may exist between the decision variables [14]. The choice of the ranked solution l is done probabilistically as:

$$p_{l} = \frac{w_{l}}{\sum_{j=1}^{k} w_{j}}, \qquad l = 1, \ldots, k \qquad (2.5)$$

where w_{l} is the weight of Gaussian function l as defined in equation 2.4.

At each construction step, a different Gaussian function, all with rank l, will be sampled. For step i, µ^{i}_{l} = s^{i}_{l}, and σ_{l}^{i} is calculated by the following equation [14]:

$$\sigma_{l}^{i} = \xi \sum_{e=1}^{k} \frac{|s_{e}^{i} - s_{l}^{i}|}{k-1} \qquad (2.6)$$

This means that at each step i, the parameter ξ multiplies the average distance from the chosen solution s_{l} to the other solutions in the archive. The parameter ξ plays a role analogous to the pheromone evaporation coefficient in the original ACO. A small value of ξ makes the solution converge faster, while a well-chosen value makes the algorithm less biased towards the already-explored solutions in the archive [13]. At the start of the algorithm, the kernel PDF is initialized using solutions drawn over the search domain. Each solution is evaluated at the end of the first iteration, and the best k solutions are transferred to the archive T [13]. Figure 2.2 provides the steps of the ACO_{R} algorithm.
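Putting Equations 2.3 through 2.6 together, the ACO_{R} loop can be sketched as follows; the archive size, number of ants, q, ξ, and the Sphere test function are assumed values for illustration:

```python
import math
import random

def acor(f, dim, bounds, k=10, n_ants=20, q=0.5, xi=1.0, iters=300, seed=0):
    """Minimal ACO_R sketch: archive of k solutions, Gaussian-kernel sampling."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize the archive with random solutions, ranked by fitness.
    archive = sorted(
        ([rng.uniform(lo, hi) for _ in range(dim)] for _ in range(k)),
        key=f,
    )
    # Rank weights (Eq. 2.4); the constant factor cancels when normalizing
    # into the selection probabilities of Eq. 2.5.
    w = [math.exp(-((l - 1) ** 2) / (2 * q * q * k * k)) for l in range(1, k + 1)]
    p = [wl / sum(w) for wl in w]
    for _ in range(iters):
        ants = []
        for _ in range(n_ants):
            l = rng.choices(range(k), weights=p, k=1)[0]   # Eq. 2.5
            x = []
            for i in range(dim):
                mu = archive[l][i]                          # Eq. 2.3
                sigma = xi * sum(                           # Eq. 2.6
                    abs(archive[e][i] - archive[l][i]) for e in range(k)
                ) / (k - 1)
                x.append(min(hi, max(lo, rng.gauss(mu, sigma))))
            ants.append(x)
        # Pheromone update: merge, re-rank, and keep the best k in archive T.
        archive = sorted(archive + ants, key=f)[:k]
    return archive[0]

sphere = lambda x: sum(v * v for v in x)
best = acor(sphere, dim=3, bounds=(-5.12, 5.12))
```

Each ant picks an archive solution by rank weight, then samples every variable from a Gaussian centered on it with a spread given by Equation 2.6; merging and re-ranking the archive plays the role of the pheromone update.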

# Chapter 3: Artificial Bee Colony

## 3.1. Honey Bees

Artificial Bee Colony (ABC) is an algorithm inspired by the foraging behavior of honey bees. Three components are essential to the emergence of the collective intelligence of a honey bee swarm: food sources, employed foragers, and unemployed foragers [11]. The value of a food source depends on its proximity to the hive, the richness of its nectar, and the ease of extracting it. Employed foragers carry information about the food source they are currently exploiting. Unemployed foragers continuously search for food sources. The exchange of information takes place on the dance floor in the hive. Based on the information collected from the employed foragers, the unemployed foragers can choose the best food location and reach it. This is the basis of the ABC algorithm introduced by Karaboga.

## 3.2 Algorithm

In Karaboga’s ABC algorithm, the colony of artificial bees consists of three groups of bees: employed bees, onlookers, and scouts [15]. The employed bees go to the food sources and remember the source information. The onlooker bees wait in the dancing area and decide which food source to choose. A scout performs a random search and provides the information to the other bees on the dance floor. Half of the hive in ABC consists of employed bees, and the other half are onlookers. Also, the number of employed bees equals the number of food sources around the hive. After the food source of an employed bee is exhausted, the bee becomes a scout. Figure 3.1 shows the main steps of the ABC algorithm.

Figure 3. 1 Steps of ABC algorithm

In each ABC cycle, the employed bees move to the food sources and evaluate their nectar amounts. The employed bees share this information with the onlooker bees, and the onlooker bees select food sources accordingly. The scout bees are then determined, and they explore possible new food sources. The food source positions are chosen randomly at the beginning. A food source rich in nectar has a high probability of being selected by onlooker bees. A new food source is chosen in the neighborhood of a previous food position based on a comparison of food source positions [15].

The position of a food source in the ABC algorithm represents a possible solution of the optimization problem. The quality, or fitness, of the solution is represented by the nectar amount of the food source. In the beginning, ABC generates a randomly distributed initial population P of SN solutions, where SN is the population size. Each food source, or solution, x_{i} is a vector of dimension D. After initialization, the employed, onlooker, and scout bees repeatedly search for better solutions over cycles C = 1, 2, …, C_{max} [15]. An onlooker bee probabilistically modifies the position of a source to find a new one and evaluates its nectar amount. If the new source is better than the current one, the bee keeps the new position and forgets the old one. The probability p_{i} that a food source is selected is calculated by [15]:

$$p_{i} = \frac{fit_{i}}{\sum_{n=1}^{SN} fit_{n}} \qquad (3.1)$$

where fit_{i} is the fitness value of solution i, evaluated by its employed bee [15]; fit_{i} depends on the nectar amount of the food source at position i. A new candidate solution is obtained by the following expression:

$$v_{ij} = x_{ij} + \varphi_{ij}\,(x_{ij} - x_{kj}) \qquad (3.2)$$

where k ∈ {1, 2, …, SN} and j ∈ {1, 2, …, D} are randomly chosen indexes [15]. The value of k is determined randomly, but it is chosen such that k ≠ i. The parameter φ_{ij} is a random number between -1 and 1; it controls the production of a neighbor food source position around x_{ij}, and the modification represents the bee's visual comparison of neighboring food positions [15].
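Equations 3.1 and 3.2 translate directly into code. In the sketch below, the food positions and the fitness mapping fit_i = 1/(1 + f_i) for a minimization problem are illustrative assumptions:

```python
import random

def selection_probabilities(fits):
    """p_i from Eq. 3.1: fitness-proportional selection for onlookers."""
    total = sum(fits)
    return [f / total for f in fits]

def neighbor(foods, i, rng):
    """v_ij from Eq. 3.2: perturb one dimension j using a random peer k != i."""
    D = len(foods[i])
    j = rng.randrange(D)
    k = rng.choice([m for m in range(len(foods)) if m != i])
    phi = rng.uniform(-1.0, 1.0)
    v = list(foods[i])
    v[j] = foods[i][j] + phi * (foods[i][j] - foods[k][j])
    return v

rng = random.Random(3)
foods = [[1.0, 2.0], [0.5, -1.0], [-2.0, 0.0]]       # assumed food positions
objective = lambda x: sum(v * v for v in x)          # minimization example
fits = [1.0 / (1.0 + objective(x)) for x in foods]   # common fitness mapping
probs = selection_probabilities(fits)
v = neighbor(foods, 0, rng)
```

Note that Equation 3.2 changes only a single randomly chosen dimension, which is what makes the search a local perturbation of the current food position.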

In the ABC algorithm, a food source is abandoned if it cannot be improved further within a predetermined number of cycles. The abandoned food source is replaced with a new, randomly generated position found by a scout. There are three control parameters in the ABC algorithm: the number of food sources (equal to the number of onlooker bees, SN), the value of the limit, and the maximum number of cycles [15].

ABC uses four different selection processes. The selection of promising regions by onlooker bees, as described by Equation 3.1, is a global selection process. A local selection process is carried out in a region by the employed bees and onlookers based on local information, as described by Equation 3.2. A greedy selection is carried out by all bees: the new food source is kept if it is better than the old one; otherwise, the old source is kept in memory. Finally, a random selection is performed by the scouts [15].
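The four selection processes combine into the full cycle sketched below; the colony size, limit, and cycle count are assumed values, and the Sphere function stands in for the objective:

```python
import random

def abc_minimize(f, dim, bounds, sn=10, limit=30, cycles=200, seed=0):
    """Minimal ABC sketch: employed, onlooker, and scout phases with a
    trial (limit) counter per food source."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(sn)]
    trials = [0] * sn                      # cycles without improvement

    def fit(x):
        return 1.0 / (1.0 + f(x))          # fitness mapping for minimization

    def try_neighbor(i):
        # Eq. 3.2: perturb one random dimension using a random peer k != i.
        j = rng.randrange(dim)
        k = rng.choice([m for m in range(sn) if m != i])
        v = list(foods[i])
        v[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        v[j] = min(hi, max(lo, v[j]))
        if fit(v) > fit(foods[i]):         # greedy selection
            foods[i] = v
            trials[i] = 0
        else:
            trials[i] += 1

    best = min(foods, key=f)
    for _ in range(cycles):
        for i in range(sn):                # employed-bee phase (local search)
            try_neighbor(i)
        for _ in range(sn):                # onlooker phase, Eq. 3.1 selection
            fits = [fit(x) for x in foods]
            i = rng.choices(range(sn), weights=fits, k=1)[0]
            try_neighbor(i)
        for i in range(sn):                # scout phase: abandon stale sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
        best = min([best] + foods, key=f)
    return best

best_abc = abc_minimize(lambda x: sum(v * v for v in x), dim=2,
                        bounds=(-5.12, 5.12))
```

The trial counters implement the abandonment rule: a source that fails to improve for more than `limit` cycles is handed over to a scout, which restarts it at a random position.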

# Chapter 4: Firefly Algorithm

## 4.1. Fireflies

Humans have been fascinated for hundreds of years by the beautiful lights of fireflies on summer nights. There are about 2000 species of fireflies all over the world [16]. The light produced by a firefly is usually intermittent, and each species has its own unique pattern of flashes. One of the main functions of the flashing is to communicate with other fireflies; the flashes can also attract potential prey and serve as a defense mechanism [16]. The strength and timing of the flashes usually determine the attractiveness of a firefly. Based on this, Xin-She Yang developed the firefly algorithm (FA) in 2008 [2]. The algorithm has been used extensively to solve various optimization problems [20][21][22][23].

## 4.2. Algorithm

The algorithm uses the following idealized rules [2]:

- Fireflies are unisex so that one firefly will be attracted to other fireflies regardless of their sex.
- The attractive nature of a firefly is proportional to the brightness, and they both decrease as the distance increases. For any two flashing fireflies, the less bright one will move towards the brighter one. A firefly will move randomly if there is no brighter firefly.
- The landscape of the objective function governs how bright a firefly is.

Figure 4.1 shows the pseudo code of FA.

Figure 4. 1 Pseudo code for the Firefly Algorithm.

### 4.2.1. Light Intensity and Attractive Nature

The variation of light intensity and the formulation of attractiveness are two important factors in the firefly algorithm. It is assumed that the attractiveness of a firefly is determined by its brightness, which is associated with the encoded objective function [2]. The brightness I of a firefly at a point x can be chosen as I(x), proportional to f(x), but the attractiveness β is relative and thus varies with the distance r_{ij} between firefly i and firefly j [2]. The light intensity I(r) also varies according to the inverse square law

$$I(r) = \frac{I_{s}}{r^{2}} \qquad (4.1)$$

where I_{s} is the intensity at the source. Also, the light intensity I varies in a medium with a fixed light absorption coefficient γ with the distance r by

$$I = I_{0}\, e^{-\gamma r} \qquad (4.2)$$

where I_{0} is the original light intensity. The combined effect of both the inverse square law and absorption can be approximated as the Gaussian form

$$I(r) = I_{0}\, e^{-\gamma r^{2}} \qquad (4.3)$$

The Gaussian form avoids the singularity at r = 0 that appears in the expression I_{s}/r^{2}. Since the attractiveness is proportional to the light intensity seen by adjacent fireflies, the attractiveness β of a firefly can be written as

$$\beta = \beta_{0}\, e^{-\gamma r^{2}} \qquad (4.4)$$

where β_{0} is the attractiveness at r = 0. In actual implementations, β(r) follows the generalized form [2]

$$\beta(r) = \beta_{0}\, e^{-\gamma r^{m}}, \quad m \ge 1 \qquad (4.5)$$

The distance between any two fireflies i and j, at positions x_{i} and x_{j}, is defined [2] as

$$r_{ij} = \lVert x_{i} - x_{j} \rVert = \sqrt{\sum_{k=1}^{d} (x_{i,k} - x_{j,k})^{2}} \qquad (4.6)$$

where x_{i,k} is the kth component of the spatial coordinate x_{i} of the ith firefly.

The main algorithmic equation for FA for the position x_{i} [2] is

$$x_{i} = x_{i} + \beta_{0}\, e^{-\gamma r_{ij}^{2}} (x_{j} - x_{i}) + \alpha\, \epsilon_{i} \qquad (4.7)$$

where the second term is due to the attraction. The third term is randomization, and ϵ_{i} is a vector of random numbers drawn from a Gaussian distribution. To speed up the overall convergence, the randomness should be gradually reduced as given by the following equation:

$$\alpha = \alpha_{0}\, \theta^{t} \qquad (4.8)$$

where α_{0} is the initial value and θ, the reduction factor, lies between 0 and 1 [24]. Yang recommends θ between 0.9 and 0.99, depending on the type of problem and the desired quality of solutions.

The value of γ is very important in determining the speed of convergence and the behavior of FA [2]. When γ = 0, the visibility between the fireflies is very high. When γ ≫ 1, the visibility is very short, so each firefly flies independently and randomly.
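Equations 4.4 through 4.8 combine into the following minimization sketch of FA. The population size, β_{0}, α_{0}, and θ are assumed values in the recommended ranges; γ is tied to the squared domain range here, a practical choice for this sketch rather than something prescribed above:

```python
import math
import random

def firefly_minimize(f, dim, bounds, n=15, beta0=1.0, alpha0=0.5,
                     theta=0.97, gamma=None, iters=300, seed=0):
    """Minimal FA sketch: pairwise attraction (Eq. 4.7) with alpha decaying
    per Eq. 4.8; gamma defaults to 1/(range^2) to keep visibility reasonable."""
    rng = random.Random(seed)
    lo, hi = bounds
    scale = hi - lo
    if gamma is None:
        gamma = 1.0 / scale ** 2
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for t in range(iters):
        alpha = alpha0 * theta ** t                        # Eq. 4.8
        intensity = [f(xi) for xi in x]                    # lower f = brighter
        for i in range(n):
            for j in range(n):
                if intensity[j] < intensity[i]:            # move i toward j
                    r2 = sum((a - b) ** 2
                             for a, b in zip(x[i], x[j]))  # r_ij^2, Eq. 4.6
                    beta = beta0 * math.exp(-gamma * r2)   # Eq. 4.4
                    x[i] = [min(hi, max(lo,                # Eq. 4.7
                                a + beta * (b - a)
                                + alpha * scale * rng.gauss(0, 1)))
                            for a, b in zip(x[i], x[j])]
                    intensity[i] = f(x[i])
    return min(x, key=f)

best_fa = firefly_minimize(lambda v: sum(c * c for c in v), dim=2,
                           bounds=(-5.12, 5.12))
```

The brightest firefly never moves while it remains brightest, so the population best is monotone; the decaying α gradually turns the random walk into a fine local search around the discovered modes.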

### 4.2.2. FA efficiency

The firefly algorithm can be used efficiently for clustering and can outperform other algorithms. It can be applied to discrete as well as continuous problems. FA has two major advantages over other algorithms: automatic subdivision and the ability to deal with multimodality [17]. The whole population can subdivide into subgroups, and each group can swarm around one mode or local optimum; the global optimum is then obtained among these local modes. The subdivision also allows the fireflies to find all optima simultaneously, provided the total number of fireflies is greater than the total number of modes. This makes FA suitable for highly nonlinear, multimodal optimization problems [17]. Finally, the parameters in FA can be tuned to reduce the randomness as the number of iterations grows, which can speed up convergence. Thus, FA is very efficient in solving continuous problems, clustering and classification problems, and combinatorial optimization problems [17].

# Chapter 5: Application and Comparison to Optimize Benchmark Problems

## 5.1. Introduction

The algorithms described in the previous chapters were used to optimize some common test functions. Three of the chosen functions are multimodal and scalable in dimension, while the remaining two are two-variable functions. The scalable functions were further tested at different dimensions and the results compared. All three algorithms were used to find the global minimum of each test function. Table 5.1 shows the names of the test functions and the corresponding formulas. Table 5.2 shows the variable ranges used in the tests and the actual minimum value of each function.

Table 5.3 shows the control parameters for each algorithm. Each algorithm was run 30 times, and the minimum, maximum, average, and standard deviation of the global minimum value were recorded. The number of iterations was chosen according to the dimension of the test function. Branin and Goldstein-Price are two-variable functions. After several trials, 800 iterations were found suitable for dimensions below 15, except in the case of the Rosenbrock function. For dimensions greater than 15, a maximum of 5000 iterations was used. All three algorithms were unable to converge to the minimum of the Rosenbrock function within 800 iterations, so the algorithms were run with 3000 iterations for 5 dimensions and 5000 iterations for 10 dimensions. Tables 5.4, 5.5, 5.6, 5.7, and 5.8 show the results of the algorithms for the Branin, Goldstein-Price, Rosenbrock, Sphere, and Ackley functions, respectively.
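The reporting protocol used in Tables 5.4 through 5.8 (30 independent runs; minimum, maximum, mean, and standard deviation of the best value found) can be reproduced with a small harness. The random-search `solver` below is only a stand-in for ACO, ABC, or FA:

```python
import random
import statistics

def run_statistics(solver, n_runs=30):
    """Run a stochastic solver n_runs times and summarize the best values."""
    results = [solver(seed=s) for s in range(n_runs)]
    return {
        "min": min(results),
        "max": max(results),
        "mean": statistics.mean(results),
        "std": statistics.stdev(results),
    }

def random_search_sphere(seed, dim=2, iters=2000):
    """Stand-in stochastic solver: pure random search on the Sphere function."""
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(iters):
        x = [rng.uniform(-5.12, 5.12) for _ in range(dim)]
        best = min(best, sum(v * v for v in x))
    return best

stats = run_statistics(random_search_sphere)
```

Seeding each run independently is what makes the spread between runs (the reported standard deviation) a meaningful measure of an algorithm's reliability.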

Table 5. 1 Test functions and their respective Formula used to test the algorithms

| Functions | Ranges | Minimum Value |
|---|---|---|
| Branin | x_{1} ∈ [-5, 10], x_{2} ∈ [0, 15] | f_{1}(**x**) = 0.397887 at (x_{1}, x_{2}) = (-π, 12.275), (π, 2.275), and (9.42478, 2.475) |
| Goldstein-Price | x_{1}, x_{2} ∈ [-2, 2] | f_{2}(**x**) = 3 at (x_{1}, x_{2}) = (0, -1) |
| Rosenbrock | x_{i} ∈ [-2.048, 2.048] | f_{3}(**1**) = 0 |
| Sphere | x_{i} ∈ [-5.12, 5.12] | f_{4}(**0**) = 0 |
| Ackley | x_{i} ∈ [-32.768, 32.768] | f_{5}(**0**) = 0 |

Table 5. 2 Test functions ranges and their global minimum values

| ACO | ABC | FA |
|---|---|---|
| Population size: 20 | Population size: 20 | Population size: 20 |
| Sample size: 50 | Limit: no. of onlooker bees × 15 | alpha: 0.8 |
| q: 0.5 | No. of onlookers: 50% of the swarm | betamin: 0.2 |
| zeta: 1 | No. of employed bees: 50% of the swarm | gamma: 1 |
| Max. iterations: 800, 3000, 5000 | No. of scouts: 1 | Max. iterations: 800, 3000, 5000 |
|  | Max. iterations: 800, 3000, 5000 |  |

Table 5. 3 Control parameters of algorithms

## 5.2. Results

| Algorithm | Max. iterations | D | Max value | Min value | Mean value | Standard Deviation | Avg. Time |
|---|---|---|---|---|---|---|---|
| ACO | 800 | 2 | 0.399895035 | 0.397887358 | 0.397984348 | 3.88E-04 | 2.8s |
| ABC | 800 | 2 | 0.397887358 | 0.397887358 | 0.397887358 | 3.24E-16 | 0.68s |
| FA | 800 | 2 | 0.397887371 | 0.397887358 | 0.397887362 | 3.90E-09 | 0.39s |

Table 5. 4 Results for Branin function

| Algorithm | Max. iterations | D | Max value | Min value | Mean value | Standard Deviation | Avg. Time |
|---|---|---|---|---|---|---|---|
| ACO | 800 | 2 | 3 | 3 | 3 | 1.27E-15 | 2.7s |
| ABC | 800 | 2 | 3.013546694 | 3 | 3.001340874 | 0.003339364 | 0.67s |
| FA | 800 | 2 | 3.00000004 | 3 | 3.000000018 | 1.28E-08 | 0.39s |

Table 5. 5 Results for Goldstein-Price function

| Algorithm | Max. iterations | D | Max value | Min value | Mean value | Standard Deviation | Avg. Time |
|---|---|---|---|---|---|---|---|
| ACO | 3000 | 5 | 1.30E-04 | 5.86E-05 | 9.59E-05 | 1.66E-05 | 14.53s |
| ACO | 5000 | 10 | 1.48E-04 | 6.32E-05 | 1.01E-04 | 2.07E-05 | 30.81s |
| ABC | 3000 | 5 | 5.68E-01 | 5.07E-04 | 1.46E-01 | 1.93E-01 | 2.62s |
| ABC | 5000 | 10 | 1.46E-01 | 2.92E-04 | 1.69E-02 | 2.96E-02 | 4.46s |
| FA | 3000 | 5 | 6.70E-02 | 4.27E-04 | 3.53E-02 | 1.49E-02 | 1.57s |
| FA | 5000 | 10 | 8.88E+00 | 1.79E+00 | 3.67E+00 | 1.60E+00 | 3.7s |

Table 5. 6 Results for Rosenbrock function

Table 5. 7 Results for Sphere function

| Algorithm | Max. iterations | D | Max value | Min value | Mean value | Standard Deviation | Avg. Time |
|---|---|---|---|---|---|---|---|
| ACO | 800 | 5 | 8.88E-16 | 8.88E-16 | 8.88E-16 | 0.00E+00 | 3.44s |
| ACO | 800 | 10 | 4.44E-15 | 4.44E-15 | 4.44E-15 | 0.00E+00 | 4.81s |
| ACO | 800 | 15 | 2.38E-10 | 2.27E-12 | 7.11E-11 | 6.77E-11 | 6.23s |
| ACO | 5000 | 30 | 4.53E-06 | 2.20E-07 | 1.31E-06 | 1.17E-06 | 61.56s |
| ABC | 800 | 5 | 7.99E-15 | 8.88E-16 | 4.56E-15 | 1.14E-15 | 0.67s |
| ABC | 800 | 10 | 2.93E-14 | 7.99E-15 | 1.62E-14 | 4.86E-15 | 0.66s |
| ABC | 800 | 15 | 6.42E-09 | 1.34E-10 | 1.20E-09 | 1.40E-09 | 0.67s |
| ABC | 5000 | 30 | 6.84E-14 | 4.00E-14 | 5.15E-14 | 6.90E-15 | 4.23s |
| ABC | 5000 | 200 | 8.49E-02 | 3.47E-03 | 2.13E-02 | 2.11E-02 | 4.92s |
| FA | 800 | 5 | 3.38E-03 | 9.45E-04 | 2.19E-03 | 5.98E-04 | 0.38s |
| FA | 800 | 10 | 5.98E-03 | 1.85E-03 | 4.22E-03 | 9.05E-04 | 0.39s |
| FA | 800 | 15 | 9.54E-03 | 4.75E-03 | 6.25E-03 | 9.43E-04 | 0.39s |
| FA | 5000 | 30 | 7.67E-03 | 4.50E-03 | 6.22E-03 | 6.72E-04 | 2.85s |
| FA | 5000 | 200 | 0.06771646 | 0.044467486 | 0.054754785 | 0.004836664 | 6.93s |

Table 5. 8 Results for Ackley function

From the results, it can be inferred that all three algorithms perform well on the two-variable Branin and Goldstein-Price functions. The computation times of FA and ABC are significantly lower than that of ACO for the lower-dimensional functions. ABC and FA were also tested at 200 dimensions for the Ackley and Sphere functions; ACO had a very long execution time there and the results obtained were not usable, so its parameters would need adjustment. Similarly, the results from FA for the 10-dimensional Rosenbrock function are poor: it gives worse results than the other algorithms, although its computation time is the lowest. Better results could be obtained by increasing the maximum number of iterations. Comparatively speaking, the quality of the solutions from FA is not as good as that of ABC and ACO for the same number of iterations and population size.

It should be noted that each algorithm performs differently on different types of problems. In practical applications, one may perform better than another, and it may not be obvious why. It should also be noted that the global minima of the test functions are known here; in real applications they are not, so a stopping criterion must be established. This can be done either by running the algorithm for a fixed number of iterations or by stopping after the same minimum value has been obtained repeatedly over several iterations.
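The second stopping rule mentioned above (stop once the best value has not improved for some number of iterations) can be wrapped around any of the three algorithms. A sketch, with the stall window and tolerance as assumed values:

```python
def stalled(history, window=50, tol=1e-9):
    """True when the best-so-far value has not improved by more than tol
    over the last `window` recorded iterations."""
    if len(history) <= window:
        return False
    return history[-window - 1] - history[-1] <= tol

# Example: a best-so-far trace that improves geometrically, then flattens.
trace = []
best = 10.0
for t in range(200):
    best = max(0.5, best * 0.9)        # stand-in for an optimizer's progress
    trace.append(best)
    if stalled(trace, window=50):
        break
```

The loop terminates well before the iteration budget once the trace has been flat for the full window, which is the behavior such a criterion is meant to provide.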

# Chapter 6: Application to Engineering Problems

## 6.1. Introduction

Optimization algorithms are an important tool for engineers. MATLAB, one of the most widely used software packages in engineering, has an optimization toolbox that includes Genetic Algorithm, Simulated Annealing, and several other optimization functions. ACO has been used in mobile and wireless sensor networks, network security, pervasive computing, data mining, image processing, and several other applications [25]. Optimization algorithms such as ACO, FA, and ABC can be, and have already been, used to design and optimize various types of antennas [26][27][28]. Here, a radiometer antenna array pattern synthesis was performed using the three algorithms and compared to the results obtained by a Genetic Algorithm. In addition, Finite Impulse Response (FIR) filters were designed using the algorithms.

### 6.2.1. Antenna Design

The radiometer antenna is a symmetric design. Each planar slot array consists of two sub-arrays, each employing 32 radiating longitudinal offset slots cut in the broad walls of eight stacked rectangular waveguides [18]. The aperture distribution is assumed to be real and is synthesized directly from the desired average power patterns. For the 64-element symmetric array, there are 15 unknowns, with the maximum at the center set to 1 [19].

### 6.2.2. Results

The antenna array pattern was obtained using GA with a population size of 10, a chromosome size of 150 bits, and 5000 generations. ACO with a population size of 20 and a maximum of 800 generations was used to solve the same problem. Figures 6.1 and 6.2 show the results obtained by GA and ACO, respectively, and Figures 6.3 and 6.4 show the array patterns generated by ABC and FA, respectively. In the figures, the blue plot represents the antenna pattern specification and the red plot represents the array pattern achieved by the algorithms.

| GA |  |  |  | ACO |  |  |  |
|---|---|---|---|---|---|---|---|
| 0.1661 | 0.127 | 0.0576 | 0 | 0.1215 | 0.0849 | 0.0323 | 0 |
| 0.4682 | 0.3607 | 0.1642 | 0.0078 | 0.4393 | 0.3325 | 0.1515 | 0.0011 |
| 0.8152 | 0.6168 | 0.3098 | 0.0508 | 0.7807 | 0.5943 | 0.2971 | 0.0383 |
| 1.00 | 0.7507 | 0.3734 | 0.0694 | 1.00 | 0.757 | 0.3886 | 0.0723 |

Table 6. 1 GA and ACO Table distribution

| ABC | | | | FA | | | |
|---|---|---|---|---|---|---|---|
| 0.1444 | 0.1031 | 0.0426 | 0 | 0.1252 | 0.0887 | 0.0357 | 0 |
| 0.4591 | 0.344 | 0.1539 | 0 | 0.4412 | 0.3325 | 0.1487 | 0 |
| 0.7986 | 0.6056 | 0.2973 | 0.0388 | 0.7885 | 0.5993 | 0.2954 | 0.0393 |
| 1.00 | 0.7505 | 0.3747 | 0.0672 | 1.00 | 0.7546 | 0.38 | 0.0698 |

Table 6. 2 Aperture distributions obtained by ABC and FA

Figure 6. 1 Desired & Synthesized array patterns using GA

Figure 6. 2 Desired & Synthesized array patterns using ACO

Figure 6. 3 Desired & Synthesized array patterns using ABC

Figure 6. 4 Desired & Synthesized array patterns using FA

All three algorithms converged to produce the desired power pattern. Table 6.3 compares the algorithms in terms of execution time: ACO was four times faster than GA and FA was 1.5 times faster than GA, while ABC was about 1.6 times slower than GA. Table 6.4 lists the control parameters used for each algorithm. As expected, the total execution time of an algorithm increases with population size. The control parameters were tuned, and the best results obtained are shown here. In several runs the solution converged to a worse value than those shown; this is expected, as stochastic methods do not guarantee the global minimum/maximum.

| | GA | ACO | ABC | FA |
|---|---|---|---|---|
| Average Time (sec) | 271.64 | 76.0835 | 437.36 | 178.5343 |

Table 6. 3 Average time to compute solution for each algorithm

| GA | ACO | ABC | FA |
|---|---|---|---|
| Population size: 10 | Population size: 20 | Population size: 20 | Population size: 20 |
| Chromosome size: 150 bits | Sample size: 50 | Limit: no. of onlooker bees × 15 | alpha: 0.8 |
| Max. iterations: 5000 | q: 0.5 | No. of onlookers: 50% of the swarm | betamin: 0.2 |
| | zeta: 1 | No. of employed bees: 50% of the swarm | gamma: 1 |
| | Max. iterations: 800 | No. of scouts: 1 | Max. iterations: 5000 |
| | | Max. iterations: 11000 | |

Table 6. 4 Control parameters of algorithms

### 6.3.1 FIR Filter Design

Digital filters are widely used in signal processing, aerospace, control systems, defense, telecommunications, and audio/video processing. The main uses of filters in signal processing are signal separation and signal restoration [29]. A signal needs to be separated when it contains undesired components such as noise and interference, and restored when it has been distorted. Compared to analog filter design, digital filter design does not require advanced mathematical knowledge, and digital filters are reliable, accurate, flexible, efficient, and simple to design and implement [30].

### 6.3.2. Types of Filters

Digital filters are mainly classified into lowpass, highpass, bandpass, and bandstop filters based on their frequency response. Figure 6.5 shows the frequency response characteristics of all four types. A lowpass filter passes signals below a certain cutoff frequency, while a highpass filter passes signals above its cutoff frequency. Similarly, a bandpass filter passes signals within a certain band, and a bandstop filter passes all frequencies except those within a certain band.

Figure 6. 5 Types of Filters based on Frequency Response

Based on the impulse response, there are two types of filters: Finite Impulse Response (FIR) and Infinite Impulse Response (IIR). The impulse response of an FIR filter has a finite number of nonzero samples, whereas that of an IIR filter has an infinite number of nonzero samples. A Linear Time-Invariant (LTI) system can be represented mathematically by the difference equation [31]:

$$y(n) = \sum_{k=0}^{N-1} a(k)\,x(n-k) \;-\; \sum_{k=1}^{M} b(k)\,y(n-k) \qquad (6.1)$$
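Equation 6.1 can be evaluated directly by looping over the output samples. The sketch below is illustrative only (the `lti_filter` name is hypothetical, not from the thesis); with no feedback taps the system reduces to an FIR filter:

```python
def lti_filter(a, b, x):
    """Evaluate y(n) = sum_{k=0}^{N-1} a(k) x(n-k) - sum_{k=1}^{M} b(k) y(n-k)
    (Equation 6.1) for an input sequence x.

    a: forward tap coefficients a(0)..a(N-1)
    b: feedback tap coefficients b(1)..b(M)
    """
    y = []
    for n in range(len(x)):
        acc = sum(a[k] * x[n - k] for k in range(len(a)) if n - k >= 0)
        acc -= sum(b[k - 1] * y[n - k] for k in range(1, len(b) + 1) if n - k >= 0)
        y.append(acc)
    return y

# With no feedback taps (b empty) the system is FIR: here, a 2-point moving average.
print(lti_filter([0.5, 0.5], [], [1, 1, 1, 1]))  # -> [0.5, 1.0, 1.0, 1.0]
```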

where a(k) and b(k) are the forward and feedback tap coefficients respectively. Taking the z-transform of Equation 6.1, the transfer function of the system is given by [31]:

$$H(z) = \frac{\displaystyle\sum_{k=0}^{N-1} a(k)\,z^{-k}}{1 + \displaystyle\sum_{k=1}^{M} b(k)\,z^{-k}} \qquad (6.2)$$

For FIR filters the denominator of H(z) is 1, so the equation simplifies to:

$$H(z) = \sum_{k=0}^{N-1} a(k)\,z^{-k} \qquad (6.3)$$

The transfer function of an FIR filter with impulse response **h** = [h_{0}, h_{1}, h_{2}, …, h_{N-1}] of length N is given by

$$H(z) = \sum_{n=0}^{N-1} h(n)\,z^{-n} \qquad (6.4)$$

The frequency response is obtained by substituting $z = e^{j\omega}$ in Equation 6.4:

$$H(e^{j\omega}) = \sum_{n=0}^{N-1} h(n)\,e^{-j\omega n} \qquad (6.5)$$
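Equation 6.5 can be evaluated by direct summation at any frequency of interest, which is a convenient way to check a candidate set of coefficients. A minimal sketch, with a hypothetical `freq_response` helper:

```python
import cmath
import math

def freq_response(h, w):
    """Evaluate H(e^{jw}) = sum_n h(n) e^{-jwn} (Equation 6.5) directly."""
    return sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))

# A two-tap averager has unity gain at DC and a null at w = pi.
h = [0.5, 0.5]
print(abs(freq_response(h, 0.0)))      # -> 1.0
print(abs(freq_response(h, math.pi)))  # close to 0
```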

The frequency response can also be written as follows:

$$H(e^{j\omega}) = H_a(\omega)\,e^{j\theta(\omega)} \qquad (6.6)$$

where H_a(ω) = |H(e^{jω})| is the magnitude response and θ(ω) = ∠H(e^{jω}) is the phase response. A linear-phase system is a discrete-time system whose phase response is a linear function of frequency, which corresponds to an even or odd symmetry in its impulse response. Only FIR systems can have a causal linear-phase impulse response [32]. Based on whether the symmetry is even or odd and whether the filter order is even or odd, there are four types of linear-phase FIR filter; Table 6.5 lists the requirements for each type. One advantage of a linear-phase filter is that the filter coefficients are symmetric, so the dimension of the design problem is cut in half.

| Type | h[n] | Order |
|---|---|---|
| I | even | even |
| II | even | odd |
| III | odd | even |
| IV | odd | odd |

Table 6. 5 FIR linear phase filter requirements [32]
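The linear-phase property of a Type I filter (even-symmetric impulse response, even order M) can be checked numerically: wherever the amplitude H_a(ω) is positive, the phase equals −ωM/2. A small sketch, using arbitrary illustrative coefficients (an assumption, not a filter from the thesis):

```python
import cmath

def phase(h, w):
    """Phase of H(e^{jw}) for impulse response h, via Equation 6.5."""
    H = sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h))
    return cmath.phase(H)

# Type I: even-symmetric h of even order M = 4, so theta(w) = -w*M/2 = -2w
# wherever the amplitude response is positive (true for these small w).
h = [1, 2, 3, 2, 1]                    # arbitrary symmetric coefficients
for w in (0.1, 0.3, 0.5):
    assert abs(phase(h, w) - (-2 * w)) < 1e-9
print("linear phase confirmed")
```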

In this thesis, Type I lowpass, highpass, and bandpass filters are designed using the three algorithms, and the results are compared.

### 6.3.3 Design of Type I FIR Filters

The main idea in designing FIR filters using optimization algorithms is to minimize the error between the amplitude responses of the ideal filter and the designed filter. Figure 6.6 shows the magnitude response of an ideal lowpass filter, which has a sharp cutoff at w_{c}, the cutoff frequency of the filter.

Figure 6. 6 Ideal Low pass filter

The following steps are carried out in the design:

Step I: Start with random filter coefficients h(n), which also form the impulse response, within certain limits.

Step II: Determine M samples of the amplitude response of the filter coefficients using the Fast Fourier Transform (FFT).

Step III: Calculate the error E between the amplitude responses of the ideal filter and the designed filter at the M sample points.

Step IV: Use ABC, ACO, or FA to minimize E.

The amplitude response of the ideal Lowpass filter is given by:

$$H_d(e^{j\omega}) = \begin{cases} 1, & \omega < \omega_c \\ 0, & \omega > \omega_c \end{cases} \qquad (6.7)$$

where w_{c} is the cutoff frequency. Similarly, the amplitude responses of the ideal highpass filter and the ideal bandpass filter are given by Equations 6.8 and 6.9.

$$H_d(e^{j\omega}) = \begin{cases} 0, & \omega < \omega_c \\ 1, & \omega > \omega_c \end{cases} \qquad (6.8)$$

$$H_d(e^{j\omega}) = \begin{cases} 1, & \omega_{c1} < \omega < \omega_{c2} \\ 0, & \omega < \omega_{c1} \ \text{or}\ \omega > \omega_{c2} \end{cases} \qquad (6.9)$$

The ideal amplitude response is sampled at M points between 0 and π. The amplitude response of the designed filter is sampled at the same M points and the error is calculated. Several error functions have been proposed in the literature [33][34][35]; the error function E shown in Equation 6.10, which the algorithms minimize, is used here [36].

$$E = \sum_{i=0}^{M} W(\omega_i)\left(\,|H(e^{j\omega_i})| - |H_d(e^{j\omega_i})|\,\right)^2 \qquad (6.10)$$

where W(w_{i}) is defined by Equation 6.11

$$W(\omega_i) = \begin{cases} 1, & \omega_i \in \text{passband} \\ 0, & \omega_i \in \text{stopband} \end{cases} \qquad (6.11)$$
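Steps I–IV together with Equations 6.7, 6.10, and 6.11 can be sketched as a single objective function for the algorithms to minimize. The `lowpass_error` helper below is a hypothetical illustration, not the thesis code; it samples the ideal lowpass response at M + 1 points in [0, π], evaluates the candidate filter's amplitude at the same points (by direct summation here, where the text's Step II uses the FFT), and applies the weight of Equation 6.11:

```python
import cmath
import math

def lowpass_error(h, wp, ws, M=200):
    """Weighted squared error E (Equation 6.10) of candidate coefficients h
    against the ideal lowpass response of Equation 6.7."""
    E = 0.0
    for i in range(M + 1):
        w = math.pi * i / M                  # sample points in [0, pi]
        if w <= wp:                          # passband
            W, Hd = 1.0, 1.0
        elif w >= ws:                        # stopband: weight 0 per Eq. 6.11
            W, Hd = 0.0, 0.0
        else:                                # transition band: unweighted
            W, Hd = 0.0, 0.0
        H = abs(sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h)))
        E += W * (H - Hd) ** 2
    return E

# A single unit tap is all-pass, so its weighted error is exactly zero,
# while an all-zero filter misses the whole passband.
print(lowpass_error([1.0], 0.3 * math.pi, 0.4 * math.pi))      # -> 0.0
print(lowpass_error([0.0], 0.3 * math.pi, 0.4 * math.pi) > 0)  # -> True
```

An optimizer such as ABC, ACO, or FA would then search the coefficient space for the h that minimizes this E.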

### 6.3.4 Low Pass Filter Results

The passband cutoff of the lowpass filter, w_{p}, was set to 0.3π and the stopband cutoff, w_{s}, to 0.4π. The number of samples compared, M, was chosen to be 200, and a filter of order 20 was designed. Figure 6.7 shows the amplitude and phase response of the filter designed using ACO; Figures 6.8 and 6.9 show the same for ABC and FA respectively.

Figure 6. 7 Magnitude and Phase Response of Low Pass FIR filter using ACO

Figure 6. 8 Magnitude and Phase Response of Low Pass FIR filter using ABC

Figure 6. 9 Magnitude and Phase Response of Low Pass FIR filter using FA

Table 6.6 shows the coefficients of the order-20 FIR lowpass filter obtained by each of the three algorithms.

| h(n) | ACO | ABC | FA |
|---|---|---|---|
| h(1) = h(21) | -0.01682 | -0.0169 | -0.00810 |
| h(2) = h(20) | -0.01044 | -0.0104 | -0.00830 |
| h(3) = h(19) | 0.015195 | 0.01507 | 0.00920 |
| h(4) = h(18) | 0.033651 | 0.0336 | 0.02740 |
| h(5) = h(17) | 0.013762 | 0.01385 | 0.01470 |
| h(6) = h(16) | -0.03855 | -0.0387 | -0.03280 |
| h(7) = h(15) | -0.06918 | -0.0687 | -0.06630 |
| h(8) = h(14) | -0.01605 | -0.016 | -0.01930 |
| h(9) = h(13) | 0.125685 | 0.12549 | 0.12170 |
| h(10) = h(12) | 0.282221 | 0.28366 | 0.28300 |
| h(11) | 0.350283 | 0.34973 | 0.3543 |

Table 6. 6 FIR Lowpass Filter Coefficients obtained by using ACO, ABC and FA
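As a quick sanity check, the symmetric half from the ACO column of Table 6.6 can be mirrored into the full 21-tap impulse response and its gain evaluated at the band extremes: a lowpass design should be near 1 at DC and near 0 at ω = π. A sketch (not the thesis code):

```python
import cmath
import math

# ACO half coefficients h(1)..h(11) from Table 6.6; by Type I symmetry,
# h(12)..h(21) mirror h(10)..h(1).
half = [-0.01682, -0.01044, 0.015195, 0.033651, 0.013762,
        -0.03855, -0.06918, -0.01605, 0.125685, 0.282221, 0.350283]
h = half + half[-2::-1]                 # full symmetric 21-tap response
assert len(h) == 21

def gain(h, w):
    """|H(e^{jw})| by direct summation (Equation 6.5)."""
    return abs(sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h)))

print(round(gain(h, 0.0), 3))       # DC gain, close to 1 -> 0.989
print(round(gain(h, math.pi), 3))   # gain at w = pi, close to 0 -> 0.014
```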

Table 6.6 shows that the results of all three algorithms are nearly identical. The FA results are also compared with those obtained by the Parks-McClellan (PM) algorithm, computed using the firpm command in MATLAB. Figure 6.10 shows the amplitude responses of the lowpass filter obtained using the PM algorithm and FA for comparison.

Figure 6. 10 Lowpass FIR filter using FA and firpm

The results obtained by the firpm command in MATLAB and by FA are very similar; FA provides better stopband attenuation than the firpm method.

### 6.3.5. Highpass Filter Results

The passband cutoff of the highpass filter, w_{p}, was set to 0.4π and the stopband cutoff, w_{s}, to 0.3π. The number of samples compared, M, was again chosen to be 200, and a filter of order 20 was designed. Figure 6.11 shows the amplitude and phase response of the filter designed using ACO; Figures 6.12 and 6.13 show the same for ABC and FA respectively.

Figure 6. 11 Magnitude and Phase Response of High Pass FIR filter using ACO

Figure 6. 12 Magnitude and Phase Response of High Pass FIR filter using ABC

Figure 6. 13 Magnitude and Phase Response of High Pass FIR filter using FA

Table 6.7 shows the coefficients of the order-20 FIR highpass filter obtained by each of the three algorithms.

| h(n) | ACO | ABC | FA |
|---|---|---|---|
| h(1) = h(21) | 0.016824 | 0.014247 | 0.007254 |
| h(2) = h(20) | 0.010437 | 0.013736 | -0.00366 |
| h(3) = h(19) | -0.0152 | -0.0114 | -0.0232 |
| h(4) = h(18) | -0.03365 | -0.03377 | -0.02886 |
| h(5) = h(17) | -0.01376 | -0.01759 | -0.00117 |
| h(6) = h(16) | 0.038553 | 0.035438 | 0.046264 |
| h(7) = h(15) | 0.069177 | 0.070254 | 0.063966 |
| h(8) = h(14) | 0.016054 | 0.020211 | 0.003264 |
| h(9) = h(13) | -0.12569 | -0.12314 | -0.13268 |
| h(10) = h(12) | -0.28222 | -0.28411 | -0.27586 |
| h(11) | 0.649717 | 0.645416 | 0.66291 |

Table 6. 7 FIR Highpass Filter Coefficients obtained by using ACO, ABC and FA
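The same sanity check applies to the highpass coefficients: mirroring the ACO column of Table 6.7 into the full 21-tap response, the gain should now be near 0 at DC and near 1 at ω = π. A sketch using the tabulated values (not the thesis code):

```python
import cmath
import math

# ACO half coefficients h(1)..h(11) from Table 6.7; by Type I symmetry,
# h(12)..h(21) mirror h(10)..h(1).
half = [0.016824, 0.010437, -0.0152, -0.03365, -0.01376,
        0.038553, 0.069177, 0.016054, -0.12569, -0.28222, 0.649717]
h = half + half[-2::-1]                 # full symmetric 21-tap response

def gain(h, w):
    """|H(e^{jw})| by direct summation (Equation 6.5)."""
    return abs(sum(hn * cmath.exp(-1j * w * n) for n, hn in enumerate(h)))

print(round(gain(h, 0.0), 3))       # DC gain, close to 0 -> 0.011
print(round(gain(h, math.pi), 3))   # gain at w = pi, close to 1 -> 1.014
```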

The results obtained by the ABC algorithm are compared to those from the firpm MATLAB function in Figure 6.14. As in the lowpass case, the filter obtained using ABC shows better stopband attenuation, and its passband ripple is also better than that of firpm.

Figure 6. 14 Highpass FIR filter using ABC and firpm

### 6.3.6. Bandpass Filter Results

The lower passband cutoff of the bandpass filter, w_{p1}, was set to 0.4π and the lower stopband cutoff, w_{s1}, to 0.3π; the upper passband cutoff, w_{p2}, was set to 0.7π and the upper stopband cutoff, w_{s2}, to 0.8π. The number of samples compared, M, was chosen to be 200, and a filter of order 20 was designed. Figure 6.15 shows the amplitude and phase response of the filter designed using ACO; Figures 6.16 and 6.17 show the same for ABC and FA respectively.

Figure 6. 15 Magnitude and Phase Response of Bandpass FIR filter using ACO

Figure 6. 16 Magnitude and Phase Response of Bandpass FIR filter using ABC

Figure 6. 17 Magnitude and Phase Response of Bandpass FIR filter using FA

Table 6.8 shows the coefficients of the order-20 FIR bandpass filter obtained by each of the three algorithms.

| h(n) | ACO | ABC | FA |
|---|---|---|---|
| h(1) = h(21) | -0.0004 | -0.00019 | -0.00041 |
| h(2) = h(20) | 0.033695 | 0.033181 | 0.033786 |
| h(3) = h(19) | -0.01644 | -0.017 | -0.01642 |
| h(4) = h(18) | -0.05182 | -0.05216 | -0.05186 |
| h(5) = h(17) | 0.022396 | 0.023444 | 0.022386 |
| h(6) = h(16) | -0.01042 | -0.00971 | -0.01045 |
| h(7) = h(15) | 0.078165 | 0.077457 | 0.078115 |
| h(8) = h(14) | 0.088646 | 0.088116 | 0.088738 |
| h(9) = h(13) | -0.27634 | -0.27666 | -0.27636 |
| h(10) = h(12) | -0.05606 | -0.05597 | -0.05611 |
| h(11) | 0.384174 | 0.38578 | 0.38415 |

Table 6. 8 FIR Bandpass Filter Coefficients obtained by using ACO, ABC and FA

The PM algorithm was also used to design a similar bandpass filter. Figure 6.18 shows the magnitude responses of the FIR bandpass filter designed using firpm and using ACO. As in the previous results, the filter designed using ACO produces better stopband attenuation, while the passband ripple is comparable.

Figure 6. 18 Bandpass FIR filter using ACO and firpm

# Chapter 7 Summary and Conclusion

In this thesis, three bio-inspired algorithms were discussed: Ant Colony Optimization, Artificial Bee Colony, and the Firefly Algorithm. Ant Colony Optimization is inspired by the way ants navigate from their nest to the closest food source using pheromones; Artificial Bee Colony is based on the foraging behavior of honey bees; and the Firefly Algorithm is built on the flashing patterns of fireflies. All three algorithms can be very useful tools for solving real-world problems.

The algorithms were used to minimize well-known mathematical test functions. The performance of all three algorithms degraded as the dimension of the test functions increased, and all three had difficulty solving higher-dimensional Rosenbrock functions. The algorithms were also tested on an array pattern synthesis problem for a radiometer application, where ACO was found to be four times faster than GA and FA was 1.5 times faster than GA, while ABC was about 1.6 times slower than GA.

ACO, ABC, and FA were also used to design lowpass, highpass, and bandpass digital FIR filters. The results from each algorithm were compared to those obtained using the PM algorithm. The filters obtained by the three algorithms produced better stopband attenuation than the PM filters, while the passband amplitude ripple of all three was very similar to that of the PM algorithm.

# References

[1] Yang, X.-S. (2010). Engineering Optimization: An Introduction with Metaheuristic Applications. John Wiley & Sons.

[2] Yang, X.-S. (2008). Nature-Inspired Metaheuristic Algorithms (2nd ed.). Luniver Press.

[3] Yang, X.-S. (2017). Social algorithms. In R. A. Meyers (Ed.), Encyclopedia of Complexity and Systems Science. Springer.

[4] Holland, J. H. (1975). Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence. University of Michigan Press.

[5] Fraser, A. S. (1957). Simulation of genetic systems by automatic digital computers. Australian Journal of Biological Science, 10(1), 484–491.

[6] Fogel, D. B. (1995). A comparison of evolutionary programming and genetic algorithms on selected constrained optimization problems. Simulation, 64(6), 397–404.

[7] Rechenberg, I. (1973). Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog, Stuttgart.

[8] Mehrabian, A. R., & Lucas, C. (2006). A novel numerical optimization algorithm inspired from weed colonization. Ecological Informatics, 1(4), 355–366.

[9] Simon, D. (2008). Biogeography-based optimization. IEEE Transactions on Evolutionary Computation, 12(6), 702–713.

[10] Dorigo, M. (1992). Optimization, Learning and Natural Algorithms. Ph.D. thesis, Dipartimento di Elettronica, Politecnico di Milano (in Italian).

[11] Karaboga, D. (2005). An idea based on honey bee swarm for numerical optimization. Technical Report TR06, Erciyes University, Engineering Faculty, Computer Engineering Department.

[12] Dorigo, M., Maniezzo, V., & Colorni, A. (1996). Ant system: optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 26(1), 29–41.

[13] Afshar, A., & Madadgar, S. (2008). Ant colony optimization for continuous domains: application to reservoir operation problems. 2008 Eighth International Conference on Hybrid Intelligent Systems, Barcelona, pp. 13–18.

[14] Socha, K., & Dorigo, M. (2008). Ant colony optimization for continuous domains. European Journal of Operational Research, 185(3), 1155–1173.

[15] Karaboga, D., & Basturk, B. (2007). A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimization, 39, 459–471.

[16] "Fireflies | National Geographic", Nationalgeographic.com, 2018. [Online]. Available: https://www.nationalgeographic.com/animals/invertebrates/group/fireflies/. [Accessed: 06-Oct-2018].

[17] Yang, X.-S., & He, X. (2013). Firefly algorithm: recent advances and applications. International Journal of Swarm Intelligence, 1(1), 36–50.

[18] Rengarajan, S. R., Zawadzki, M. S., & Hodges, R. E. (2007). Slot array antennas for the Juno radiometer application. IEEE International Antennas and Propagation Symposium, San Diego, CA.

[19] Rengarajan, S. R. (2009). Genetic algorithm applications in the Juno radiometer antenna arrays. 2009 IEEE Antennas and Propagation Society International Symposium, Charleston, SC, pp. 1–4.

[20] Apostolopoulos, T., & Vlachos, A. (2010). Application of the firefly algorithm for solving the economic emissions load dispatch problem. International Journal of Combinatorics, 2011.

[21] Coelho, L. D. S., & Mariani, V. C. (2012). Firefly algorithm approach based on chaotic Tinkerbell map applied to multivariable PID controller tuning. Computers & Mathematics with Applications, 64(8), 2371–2382.

[22] Marichelvam, M. K., Prabaharan, T., & Yang, X.-S. (2014). A discrete firefly algorithm for the multi-objective hybrid flowshop scheduling problems. IEEE Transactions on Evolutionary Computation, 18(2), 301–305.

[23] Olamaei, J., Moradi, M., & Kaboodi, T. (2013). A new adaptive modified firefly algorithm to solve optimal capacitor placement problem. 2013 18th Conference on Electrical Power Distribution Networks (EPDC), Kermanshah, pp. 1–6. IEEE.

[24] Yang, X.-S., & He, X.-S. (2018). Why the firefly algorithm works? In X.-S. Yang (Ed.), Nature-Inspired Algorithms and Applied Optimization (pp. 245–259). Springer.

[25] Geetha, R., & Umarani Srikanth, G. (2012). Ant colony optimization in diverse engineering applications: an overview.

[26] Primson, K. P. R. C., & Anita, R. (2014). Antenna design for WiMAX applications using Artificial Bee Colony algorithm. Journal of Theoretical and Applied Information Technology, 68, 493–503.

[27] Kaur, K., & Banga, V. K. (2013). Synthesis of linear antenna array using firefly algorithm. International Journal of Scientific & Engineering Research, 4, 601–606.

[28] Zare, A. (2015). Application of ant colony optimization algorithm to pattern synthesis of uniform circular antenna array. Applied Computational Electromagnetics Society Journal, 30, 810–818.

[29] Mondal, S., Chakraborty, D., Kar, R., Mandal, D., & Ghoshal, S. P. (2012). Novel particle swarm optimization for high pass FIR filter design. 2012 IEEE Symposium on Humanities, Science and Engineering Research, Kuala Lumpur, pp. 413–418.

[30] Kwan, H. K. (2016). Optimization Methods for Digital Filter Design, Edition 1.1. dfisp.org. ISBN 9780993670794.

[31] Rehan, M. K. (2016). Linear-phase FIR digital filter design with reduced hardware complexity using discrete differential evolution. Electronic Theses and Dissertations, 5763.

[32] Manolakis, D., & Ingle, V. (2011). Applied Digital Signal Processing: Theory and Practice. Cambridge University Press. doi:10.1017/CBO9780511835261

[33] Zhao, Z., Gao, H., & Liu, Y. (2011). Chaotic particle swarm optimization for FIR filter design. 2011 International Conference on Electrical and Control Engineering, Yichang, pp. 2058–2061. doi:10.1109/ICECENG.2011.6057672

[34] Kumar, M., & Sasamal, T. N. (2015). Design of FIR filter using PSO with CFA and inertia weight approach. International Conference on Computing, Communication & Automation, Noida, pp. 1331–1334. doi:10.1109/CCAA.2015.7148583

[35] Luitel, B., & Venayagamoorthy, G. K. (2008). Differential evolution particle swarm optimization for digital filter design. 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence), Hong Kong, pp. 3954–3961.

[36] Ali, G. (2017). Designs of digital filters and neural networks using firefly algorithm. Electronic Theses and Dissertations, 7344.
