The title of this article comprises the most searched keywords on Google today in the field of computer science. Every scholar and researcher has high hopes and plans to solve the problems on our planet using these technologies.
What makes artificial intelligence, intelligent?
The answer to that question lies in the application under consideration. Broadly, we can define machine intelligence as:
– Robustness
– Ability to learn
Add the freedom to explore, and we would be on the brink of creating life in the form that we know today. These qualities are very hard for traditional algorithms to attain, as the number of parameters under consideration is very high. Coming up with a mathematical model to solve the problem would be a tedious task, and the model would also be highly application-specific. For example, a model for recognizing apples and mangoes would not be useful in the domain of recognizing faces, would it? On the contrary, when the algorithm learns the features and recognizes them by itself, its intelligence and learning possibilities are enhanced significantly. It is no longer rote learning, but a quantitative understanding of the data through the features collected.
Why is this such a promising field?
– Traditional computer science algorithms were application-specific, and a lot of time was required to come up with an efficient algorithm for each application. Each application usually had different requirements, so there was no one-size-fits-all method.
– Artificial Intelligence, Machine Learning, and Deep Learning offer families of algorithms that are suitable for a variety of subtasks under a specific application domain. For example, an object classification model can be used to identify cats and dogs, and the same model can then be used to identify apples and mangoes. We will look into this concept a bit later.
How is artificial intelligence connected to machine learning?
– Machine Learning is a subset of Artificial Intelligence; it is the domain that deals with algorithms whose performance improves over time.
– Tom Mitchell's definition of Machine Learning is as follows: “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E.”
Now that we have a basic understanding of artificial intelligence and machine learning, what is deep learning? Before we dive into deep learning, we need to understand the evolution of neural networks.
How did the neural networks evolve?
– Neural Networks are partially inspired by the neuron interconnection system in our brains. The notion of trial-and-error learning is emulated via neural networks.
– Neural networks were initially used to solve basic linear problems, until activation functions were introduced. Modelling logical gates such as AND, OR, and NAND is a linear problem. Linearity refers to the property that a single line equation is able to separate the two classes.
– When one line isn't sufficient to separate the two classes, that is, when the problem is nonlinear, a nonlinear equation is required. This is provided by an activation function, such as the sigmoid, ReLU, or tanh.
– Activation functions introduce nonlinearities, and thus the range of problems that can be solved using neural networks increases.
– An application of the same would be modelling the XOR gate, which requires two lines to define the boundaries between the two classes.
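To make the XOR point concrete, here is a minimal sketch (not from the original article) of a two-layer network with a step activation. The weights are hand-picked for illustration rather than learned: the two hidden units each draw one of the two lines, and the output unit combines them.

```python
def step(z):
    # Step activation: the unit fires (1) when its weighted sum is positive.
    return 1 if z > 0 else 0

def xor(x1, x2):
    # Hidden layer: each unit is one line carving up the input plane.
    h1 = step(x1 + x2 - 0.5)    # behaves like OR
    h2 = step(-x1 - x2 + 1.5)   # behaves like NAND
    # Output unit: AND of the two hidden units gives XOR.
    return step(h1 + h2 - 1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor(a, b))
```

No single line separates XOR's classes, which is why the hidden layer (and its nonlinearity) is essential here.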
How do Neural Networks learn?
– The neural networks described above are one-shot learners: their weights are fixed once, and they only model the nonlinearities in the mapping.
– The introduction of the backpropagation algorithm enhanced the learning capabilities of neural networks by a huge factor.
– With the backpropagation algorithm and activation functions together, a neural network can both represent nonlinear functions and learn them from data.
What are some of the advantages of the Backpropagation Algorithm?
– Faster training
– Efficient training
– Quantitative and qualitative measures of learning are available
– Works in real-time environments
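As an illustrative sketch (not from the original article), here is the single-neuron case of backpropagation: a sigmoid unit trained by gradient descent to learn the AND gate. The chain rule step marked below is exactly what backpropagation repeats layer by layer in deeper networks; the learning rate and epoch count are arbitrary choices for this toy example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Labelled dataset: the AND gate (linearly separable, so one neuron suffices).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, b = 0.0, 0.0, 0.0   # start with zero weights
lr = 0.5                    # learning rate

for _ in range(10000):
    for (x1, x2), t in data:
        y = sigmoid(w1 * x1 + w2 * x2 + b)
        # Chain rule: derivative of squared error through the sigmoid.
        delta = (y - t) * y * (1 - y)
        w1 -= lr * delta * x1
        w2 -= lr * delta * x2
        b -= lr * delta

def predict(x1, x2):
    return round(sigmoid(w1 * x1 + w2 * x2 + b))
```

After training, `predict` reproduces the AND truth table; the error signal flowing backwards through the activation is the quantitative measure of learning mentioned above.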
For an in-depth discussion on neural networks, you can check this blog.
Where do neural networks fit into Deep Learning? Deep learning stems from the idea that complex problems with a large number of parameters can be solved by increasing the number of layers in an architecture. The first notable attempt to apply deep neural networks was the LeNet model. AlexNet, the ImageNet prodigy inspired by LeNet, won the ImageNet competition in 2012.
Given this overview of all three fields, we will go through some of the concepts and applications involved under each of these categories. This will sharpen the distinction between the three subfields.
How can Artificial Intelligence be classified?
AI can be classified broadly into three categories:
– Analytical: deals only with the intellect aspect of the brain
– Human-inspired: deals with intellect as well as emotional intelligence
– Humanized artificial intelligence: understands intellect, emotions, and the social structure
AI can be further classified based on the types of learning:
– Weak artificial intelligence: machine programs work according to a well-defined, highly efficient algorithm
– Strong artificial intelligence: the algorithms identify and learn patterns themselves, so very little human involvement is needed after the design of the algorithm. Machine learning and deep learning are subsets of this field of AI.
What are the applications of AI?
There are various applications of artificial intelligence systems. Let's list a few:
– Map services
– Recommendation engines (Spotify, Netflix, Amazon)
– Robotics (drones, Sophia the robot)
– Healthcare (medical diagnosis, prognosis, precision surgery)
– Autonomous systems (autopilot systems, self-driving cars)
– Drug discovery
– Stock market predictors
Having had a look at artificial intelligence and its applications, we need to consider the following question.
How is machine learning different from artificial intelligence?
– Machine learning is a subset of strong artificial intelligence, where the algorithm's performance improves over time. Many of the applications mentioned under artificial intelligence overlap here too.
– Machine learning algorithms are classified into three types:
  – Supervised learning
  – Unsupervised learning
  – Reinforcement learning
How are the various machine learning algorithms different?
The classification is based on the way the algorithm improves the performance measure P, given the dataset.
Let’s consider a classroom experience, where the teacher is passing on the information to the students in a variety of ways like verbal recitations, writing on the board, activities, etc. What is the teacher doing? How is it different from a student sitting on his own and learning?
The answer to this question leads us to the difference between supervised and unsupervised learning algorithms. The role of the teacher is to teach and also tell the students what exactly they are learning, for example, in a physics class, the teacher tells the students what each equation means.
On the contrary, if a student sits with the textbook which only has the equation in it, then, what the student makes out of the equation depends on her/his ability to comprehend mathematical equations. Alright, now let’s get technical.
Supervised learning algorithms work on datasets that are labelled, and thus they know what they are about to learn. In scenarios where we have an input variable x and an output variable G, the supervised learning algorithm learns the mapping between G and x as G = g(x), where g is the mapping function. Examples of supervised learning algorithms include all the classification algorithms and the regression problems, like image classification, sentiment analysis, and predicting the prices of used cars.
The main difference between classification and regression is that the output variable G is discrete in a classification problem, whereas it is continuous in a regression problem. Some major algorithms in this field of machine learning are random forests, support vector machines, and neural networks (both shallow and deep).
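The discrete/continuous distinction can be sketched with two toy models (the data, and the `classify`/`predict` names, are made up for illustration): a 1-nearest-neighbour classifier whose output is a label, and a least-squares line fit whose output is a number.

```python
# Classification: the output G is a discrete label.
labelled = [(1.0, "apple"), (1.2, "apple"), (5.0, "mango"), (5.3, "mango")]

def classify(x):
    # 1-nearest-neighbour: copy the label of the closest training point.
    return min(labelled, key=lambda p: abs(p[0] - x))[1]

# Regression: the output G is a continuous value.
xs, ys = [0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]   # points on G = 2x + 1
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def predict(x):
    # The fitted mapping g: continuous output for any input x.
    return slope * x + intercept
```

Here `classify(1.1)` returns a label from a finite set, while `predict(10.0)` can return any real number, which is exactly the classification/regression split described above.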
Unsupervised Machine Learning
– In this scenario, the data is unlabeled, and the main objective of the algorithm is to get as much relevant information as possible out of the given dataset. The algorithm models the dataset, or the distribution of the dataset, and predicts behavior for a given input.
– Applications of unsupervised learning algorithms include clustering and association, like k-means clustering, hierarchical clustering, ROCK clustering, and the famous association rule learning problems.
– Clustering essentially means grouping data points with similar properties together.
– Association learning is of major interest in data science, where the goal is to find which factors inhibit or boost, let's say, the sales of an iPhone.
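As a minimal sketch of clustering (with made-up one-dimensional data), here is Lloyd's algorithm for k-means with k = 2: repeatedly assign each point to its nearest centroid, then move each centroid to the mean of its cluster.

```python
points = [0.0, 0.5, 1.0, 9.0, 9.5, 10.0]   # two obvious groups
centroids = [min(points), max(points)]      # crude initialisation

for _ in range(10):   # Lloyd's algorithm: assign, then re-centre
    clusters = [[], []]
    for p in points:
        # Assign each point to its nearest centroid.
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Move each centroid to the mean of its assigned points.
    centroids = [sum(c) / len(c) for c in clusters]

print(sorted(centroids))   # → [0.5, 9.5]
```

No labels were used anywhere: the grouping emerges purely from the similarity (here, distance) between data points.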
Apart from these, there is something called semi-supervised learning. The arena where these two types of algorithms meet is semi-supervised learning, where unsupervised learning algorithms are used as better feature extractors; as the rule of thumb for supervised learning algorithms goes, “the better the features, the better the performance measure”.
Reinforcement Learning
Reinforcement learning works on the concept of action and reward. It models our lives in a way: the learner makes incremental changes towards the optimum by trying out various possible actions.
Consider the example of a drone learning to fly, much as a bird does. When a bird learns to fly, it starts flapping its wings and slowly gets better. Whenever the bird falls during a try, its reward is -1, and whenever it moves in the direction of the desired outcome, it gets a reward of +1. Hence, it slowly learns how to fly.
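The action-and-reward loop can be sketched with tabular Q-learning on a hypothetical five-state corridor (the corridor, rewards, and hyperparameters are invented for illustration, not from the article): reaching the rightmost state earns +1, every other step earns 0, and the agent gradually learns that moving right is the better action everywhere.

```python
import random
random.seed(0)

# A 5-state corridor; reaching state 4 (the goal) yields reward +1.
N, GOAL = 5, 4
actions = (-1, +1)                # move left or right
Q = {(s, a): 0.0 for s in range(N) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(2000):
    s = random.randrange(N - 1)   # start anywhere but the goal
    while s != GOAL:
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda b: Q[(s, b)])
        nxt = min(max(s + a, 0), N - 1)
        r = 1.0 if nxt == GOAL else 0.0
        # Reward plus discounted best future value drives the update.
        target = r + gamma * max(Q[(nxt, b)] for b in actions) * (nxt != GOAL)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = nxt
```

After training, the greedy action in every non-goal state is +1 (move towards the goal): the incremental reward-driven updates have produced the desired behaviour without any labelled examples.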
Applications of Machine Learning:
– Regression (prediction)
– Classification (smaller number of classes, with less data)
– Control systems (drones)
Deep Learning:
Deep learning is part of neural-network-based artificial intelligence. Its roots lie in the concepts of optimization theory and the mathematical modelling of systems. The deep learning approach that works today did not work 10 years ago. The reasons for this are:
– Enhanced, large, and efficient datasets
– A huge rise in computational power
There are various types of architectures involved in deep learning, and each architecture is made up of basic units. For example, machine translation systems have LSTM units, and classification models like ResNet have the residual block, which repeats itself. These building blocks have desirable characteristics, and when repeated, they yield good results.
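The residual block's defining feature is its skip connection. Here is a minimal sketch (the `residual_block` helper and its inputs are illustrative, not ResNet's actual implementation): the input is added back onto the transformed output, so the block only has to learn a residual correction rather than the full mapping.

```python
def relu(v):
    # Elementwise ReLU over a list of numbers.
    return [max(0.0, x) for x in v]

def residual_block(x, transform):
    # Skip connection: output = x + F(x). If F learns nothing (all zeros),
    # the block simply passes its input through unchanged.
    fx = transform(x)
    return [a + b for a, b in zip(x, fx)]

print(residual_block([1.0, -2.0], relu))   # → [2.0, -2.0]
```

This identity path is what lets such blocks be stacked many layers deep without the signal degrading, which is why repeating the unit keeps yielding good results.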
How do you come up with your own architecture?
The answer is practice and experience. Each layer, and each change in the structure of the basic unit, results in different outputs. To learn how each concept affects the results, one should understand the basics, which can be found in our previous blogs.
There are various applications of deep learning as well:
– Translation systems (Google Translate)
– 3D animation and effects (Avengers Endgame: Thanos)
– Navigation systems (Maps)
– Augmented Reality
– Virtual Reality
– Healthcare (disease prediction)
– Security (detection of security breaches)
Overall, artificial intelligence comprises machine learning and deep learning. Machine learning is the go-to set of algorithms when data is limited, computational power is constrained, and the number of variables in the problem is small. Deep learning deals with larger datasets and a larger number of variables, and also requires tremendous power to run the models. We need to think about the brain at this moment: a small machine that can do all these tasks on less than 25 W of power. The future of these algorithms lies in the energy on which they shall sustain themselves, and thus the use of renewable energy is the need of the hour.