Contributed by: Shreya Shetty

Have you ever wondered how a scientific theory can inspire an invention? Genetic Algorithms are one such case. Most of us have heard of Charles Darwin’s famous principle of “survival of the fittest” (the fittest individuals are selected for reproduction to produce the offspring of the next generation), which underpins his theory of evolution by natural selection.

Inspired by Darwin’s theory, the Genetic Algorithm belongs to the larger class of Evolutionary Algorithms. It generates high-quality solutions to optimization and search problems by relying on biologically inspired operators such as mutation, crossover, and selection.

I will elaborate on the conceptual part here and keep room for more exploration on the coding part.

As highlighted earlier, genetic algorithms are mainly used for two purposes:

1. Search
2. Optimisation

Genetic algorithms use an iterative process to arrive at the best solution: out of many good candidate solutions, they converge on the best of the best. The parallel with natural selection is direct, since in nature it is the fittest individuals that survive in comparison with the others.

Now let’s try to grab some pointers from the evolution side to clearly correlate with genetic algorithms.

• Evolution usually starts from a population of randomly generated individuals and proceeds iteratively; each iteration produces a new generation.
• In every iteration or generation, the fitness of each individual is evaluated, and the fittest are selected.
• The genomes of the selected (fittest) individuals are recombined and mutated to form a new generation, and the process continues until the best solution has been reached.

#### The process terminates under 2 scenarios-

1. The maximum number of generations has been created.
2. A sufficient fitness level has been reached.

Relating this to the Optimisation scenario, we first need to identify a Genetic Representation of our solution domain, i.e., of the business problem we need to solve, and an evaluation criterion, the Fitness Function, to decide the worth of a solution.

#### For example:

• We need to maximize the profit (fitness function) of a product by adjusting its sales strategy (genetic representation).
• We need to find the best model hyperparameters (genetic representation) for a classification algorithm, with prediction accuracy as the fitness function, i.e., fine-tuning to yield the best predictions.
• We need to select the optimum subset of features (genetic representation) for building a machine learning model, with model performance as the fitness function.

#### 1. Initialisation:

Randomly generate a population with multiple chromosomes. A gene is the smallest unit and can be thought of as one characteristic (variable) of the solution. Genes are joined to form a chromosome (solution), and the chromosome itself abstractly represents one candidate solution. The encoding of the chromosome is user-defined (for example, integers between 0 and 5, or binary digits only).
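As a minimal sketch of this step, the snippet below builds a random population. The population size, chromosome length, and the 0–5 gene range are illustrative assumptions matching the encoding example above:

```python
import random

GENES_PER_CHROMOSOME = 5   # assumed chromosome length
POPULATION_SIZE = 6        # assumed population size

def random_chromosome():
    # A chromosome is a list of genes; each gene is an integer in 0..5.
    return [random.randint(0, 5) for _ in range(GENES_PER_CHROMOSOME)]

def init_population():
    return [random_chromosome() for _ in range(POPULATION_SIZE)]

population = init_population()
```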

#### 2. Defining the fitness function:

Now we need to define the evaluation criterion for the best chromosome (solution). The fitness function assigns each chromosome a fitness score that represents the goodness of the solution. Let’s say the fitness function is the sum of all the genes; then the chromosome with the maximum sum is the fittest. In our case, the fittest chromosome has a sum of 12.
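With the sum-of-genes criterion, the fitness function is a one-liner; the genes `[3, 4, 1, 2, 2]` below are a hypothetical chromosome that happens to score 12:

```python
def fitness(chromosome):
    # Fitness score = sum of all genes; a higher sum means a fitter chromosome.
    return sum(chromosome)

print(fitness([3, 4, 1, 2, 2]))  # prints 12
```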

#### 3. Selection:

Select the top 2 fittest chromosomes for creating the next generation. These act as parents to generate offspring for the next generation, which will naturally inherit their strong features. Pairs of individuals (parents) are selected based on their fitness scores, and the other chromosomes are dropped. Here are some of the methods of parent selection:

1. Roulette Wheel Selection
2. Rank Selection
3. Tournament Selection
4. Elitism Selection
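The sketch below shows two of these ideas: roulette-wheel selection (selection probability proportional to fitness) and the top-2 parent selection described above. The sum-of-genes fitness is repeated here so the snippet is self-contained:

```python
import random

def fitness(chromosome):
    return sum(chromosome)

def roulette_wheel_select(population):
    # Each chromosome's chance of being picked is proportional to its fitness.
    total = sum(fitness(c) for c in population)
    pick = random.uniform(0, total)
    running = 0.0
    for chromosome in population:
        running += fitness(chromosome)
        if running >= pick:
            return chromosome
    return population[-1]

def select_parents(population):
    # The approach described in the text: keep the two fittest as parents.
    return sorted(population, key=fitness, reverse=True)[:2]
```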

#### 4. Crossover:

Crossover is the equivalent of two parents having a child. Each chromosome contributes a certain number of genes to the new individual: offspring are created by exchanging the parents’ genes around a randomly chosen crossover point. Common schemes include:

1. Single-point crossover
2. k-point crossover (k ≥ 1)
3. Uniform crossover
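Single-point crossover, the simplest scheme, can be sketched as follows: pick one cut point, then swap the tails of the two parents to produce two offspring.

```python
import random

def single_point_crossover(parent_a, parent_b):
    # Choose a random crossover point (not at either end), then swap tails.
    point = random.randint(1, len(parent_a) - 1)
    child_1 = parent_a[:point] + parent_b[point:]
    child_2 = parent_b[:point] + parent_a[point:]
    return child_1, child_2
```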

#### 5. Mutation:

Crossover alone tends to generate offspring similar to their parents. To avoid this duplication and to enhance diversity in the offspring, we perform mutation: the mutation operator changes the value of some genes in the offspring at random.
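A minimal mutation operator, assuming the 0–5 integer encoding used earlier and an illustrative mutation rate:

```python
import random

def mutate(chromosome, mutation_rate=0.1):
    # With probability mutation_rate, replace a gene with a fresh random value
    # (the 0..5 gene range matches the earlier encoding assumption).
    return [random.randint(0, 5) if random.random() < mutation_rate else gene
            for gene in chromosome]
```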

These steps are repeated until the termination criteria are met.
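Putting the five steps together, a compact end-to-end sketch might look like this. All sizes, rates, and the sum-of-genes fitness are illustrative assumptions; the loop stops on either termination criterion (sufficient fitness, or the maximum number of generations):

```python
import random

GENE_RANGE = (0, 5)          # assumed encoding: integer genes in 0..5
CHROMOSOME_LENGTH = 5
POPULATION_SIZE = 10
MAX_GENERATIONS = 200        # termination criterion 1
TARGET_FITNESS = 25          # termination criterion 2 (5 genes x max gene 5)

def fitness(chromosome):
    return sum(chromosome)

def run_ga():
    # 1. Initialisation: random population
    population = [[random.randint(*GENE_RANGE) for _ in range(CHROMOSOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(MAX_GENERATIONS):
        # 2-3. Evaluate fitness and select the two fittest as parents
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if fitness(best) >= TARGET_FITNESS:
            break  # sufficient fitness reached
        parents = population[:2]
        next_generation = [list(p) for p in parents]  # carry parents forward
        while len(next_generation) < POPULATION_SIZE:
            # 4. Single-point crossover
            point = random.randint(1, CHROMOSOME_LENGTH - 1)
            child = parents[0][:point] + parents[1][point:]
            # 5. Mutation
            child = [random.randint(*GENE_RANGE) if random.random() < 0.1 else g
                     for g in child]
            next_generation.append(child)
        population = next_generation
    return best, fitness(best)

best_chromosome, best_score = run_ga()
```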

#### When to apply Genetic Algorithms:

• There are multiple local optima.
• The objective function is not smooth (so derivative-based methods cannot be applied).
• The number of parameters is very large.
• The objective function is noisy or stochastic.

#### Advantages of Genetic Algorithms:

• The concept is easy to understand.
• Modular and separate from the application.
• The answer gets better with time.
• Inherently parallel; easily distributed.
• Genetic algorithms work on the chromosome, an encoded version of the potential solution’s parameters, rather than on the parameters themselves.
• Genetic algorithms use fitness scores, obtained from the objective function, without requiring derivatives or other auxiliary information.