Deep Learning, a simple explanation.

Siddharth Gupta
4 min read · Jul 4, 2022


Have you ever considered how Google Translate can quickly translate an entire paragraph from one language to another?

How do Netflix and YouTube determine our preferences for movies and videos to make relevant recommendations?

Or, for that matter, how self-driving cars like Tesla are even possible?

Deep Learning and Artificial Intelligence are to thank for all of this.

Let’s start with the concept of Deep Learning.

1. What is Deep Learning, exactly?

The term “artificial intelligence” refers to techniques that allow computers to mimic human behaviour. Machine Learning is a subset of Artificial Intelligence, and Deep Learning is in turn a subset of Machine Learning. Everything mentioned above is made possible by machine learning: collections of algorithms trained on data.

Deep Learning is a kind of machine learning inspired by the structure of the human brain. By repeatedly analysing data with a given logical structure, deep learning algorithms attempt to reach conclusions much as a human would. Deep Learning does this through neural networks, multi-layered structures of algorithms.

The design of a neural network’s structure is inspired by the human brain. Just as our brains identify patterns and classify information, neural networks perform the same tasks on the data they are given.

Individual layers of neural networks are like filters that operate from coarse to fine, increasing the chances of detecting and outputting a correct result.

Whenever we acquire new information, the brain compares it to things we already know. Deep neural networks operate on the same principle.

We may use neural networks to accomplish various tasks, including classification, regression, or clustering. We can use neural networks to group or sort unlabelled data based on similarities between the samples. Alternatively, in the classification instance, we can train the neural network on a labelled dataset to categorise the examples in the dataset.
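To make the classification case concrete, here is a minimal two-layer neural network written from scratch in NumPy (a toy sketch for this article, not production code), trained on the classic XOR problem: a tiny labelled dataset that no single linear decision boundary can separate, so the network must learn a non-linear pattern.

```python
import numpy as np

# Toy labelled dataset: XOR. No single straight line separates the two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0.0], [1.0], [1.0], [0.0]])

rng = np.random.default_rng(0)
W1 = rng.normal(scale=1.0, size=(2, 8)); b1 = np.zeros(8)   # input -> hidden
W2 = rng.normal(scale=1.0, size=(8, 1)); b2 = np.zeros(1)   # hidden -> output

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)                   # hidden layer activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))       # predicted probability of class 1
    d_out = (p - y) / len(X)                   # sigmoid + cross-entropy gradient
    d_h = (d_out @ W2.T) * (1 - h ** 2)        # backpropagate to the hidden layer
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

preds = (p > 0.5).astype(int).ravel()
print(preds)
```

After training, the network’s predictions match the XOR labels, something a purely linear model can never achieve on this data.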

Generally, neural networks can perform the same tasks as traditional machine learning techniques; the reverse, however, is not true.

Deep learning models can solve challenges that machine learning models can never handle because artificial neural networks have unique characteristics.

Deep learning is responsible for all the recent breakthroughs in artificial intelligence. Without Deep Learning, self-driving cars, chatbots, and personal assistants like Alexa and Siri would not exist. Netflix and YouTube would have no idea which movies or TV shows we like or dislike, and Google Translate would still be as rudimentary as it was ten years ago, before Google switched to neural networks for the service. Neural networks underpin all of these technologies.

We may even say that deep learning and artificial neural networks are currently leading a new industrial revolution.

To conclude, deep learning is the most effective and most visible path toward real machine intelligence we have so far.

2. Why is deep learning so popular right now?

Why are deep learning and artificial neural networks powerful and unique in today’s industry? Above all, what makes deep learning models superior to machine learning models? Allow me to elaborate.

The primary advantage of deep learning over machine learning is that it removes the need for so-called feature extraction.

Traditional machine learning approaches were in use long before deep learning. Some examples are SVMs, the Naive Bayes classifier, Decision Trees, and Logistic Regression.

These algorithms are also called flat algorithms, because they cannot be applied directly to raw data (such as .csv files, images, or text). A pre-processing step called Feature Extraction is required first.

The result of Feature Extraction is a representation of the raw data that these classic machine learning algorithms can use to perform a task, for example, classifying the data into several categories or classes.

Feature extraction is tricky and demands a thorough understanding of the problem domain. This pre-processing layer must be customised, tested, and optimised for the best results.
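To make this pipeline concrete, here is a small hypothetical sketch in NumPy: raw signals are first summarised by hand-crafted features (standard deviation and peak amplitude, statistics chosen using assumed domain knowledge about the problem), and only those features, never the raw data, are handed to a flat algorithm, in this case logistic regression.

```python
import numpy as np

rng = np.random.default_rng(1)

# Raw data: 200 signals of 50 samples each. Class 0 is quiet noise,
# class 1 is a stronger oscillation. The classifier never sees this directly.
n = 100
quiet = rng.normal(scale=0.2, size=(n, 50))
loud = (np.sin(np.linspace(0, 6 * np.pi, 50)) * rng.uniform(1, 2, size=(n, 1))
        + rng.normal(scale=0.2, size=(n, 50)))
X_raw = np.vstack([quiet, loud])
y = np.array([0.0] * n + [1.0] * n)

def extract_features(signals):
    """Hand-crafted Feature Extraction: summarise each raw signal
    with statistics chosen using knowledge of the problem domain."""
    return np.column_stack([
        signals.std(axis=1),          # overall energy of the signal
        np.abs(signals).max(axis=1),  # peak amplitude
    ])

X = extract_features(X_raw)

# A "flat" algorithm (logistic regression via gradient descent),
# fitted on the extracted features only.
w = np.zeros(X.shape[1]); b = 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))   # predicted probability of class 1
    grad = p - y                         # cross-entropy gradient
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

acc = float(((p > 0.5) == y).mean())
print(f"accuracy on the extracted features: {acc:.2f}")
```

Picking standard deviation and peak amplitude is exactly the manual, domain-specific step described above: choose the wrong statistics and the flat algorithm has nothing useful to learn from.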

The Feature Extraction step is unnecessary for Deep Learning or artificial neural networks.

The layers learn an implicit representation of the raw input directly and independently. Over successive layers of the artificial neural network, a more abstract and compressed representation of the original data is formed. This compressed representation of the input data is then used to produce the result, for instance, classifying the input data into different classes.

In other words, the feature extraction stage is pre-included in the artificial neural network process.

The neural network also optimises this step during training to obtain the best possible abstract representation of the input data. Deep learning models thus require little to no manual effort to perform and optimise feature extraction.

Consider the following scenario: if you wish to use a machine learning model to decide whether a given image depicts a car, you must first identify the distinctive traits of a car (shape, size, windows, wheels, and so on), extract those features, and feed them to the algorithm as input data.

The algorithm would then classify the images accordingly. In machine learning, a programmer must intervene directly for the model to reach a conclusion.

In the case of a deep learning model, the feature extraction step is entirely unnecessary. The model recognises a car’s distinctive traits on its own and makes accurate predictions.

The same holds for every other task you will ever tackle with neural networks: there is no need to extract features yourself. Feed the neural network the raw data, and the model will take care of the rest.
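To illustrate the contrast, here is a toy signal task sketched in NumPy (again a hypothetical example, not a definitive implementation): the raw 50-sample signals go straight into a small neural network, and the hidden layer learns its own internal representation, so no hand-crafted feature step appears anywhere.

```python
import numpy as np

rng = np.random.default_rng(2)

# Raw signals: class 0 is quiet noise, class 1 is a stronger oscillation.
# No features are extracted; the 50 raw samples per signal are the input.
n = 100
quiet = rng.normal(scale=0.2, size=(n, 50))
loud = (np.sin(np.linspace(0, 6 * np.pi, 50)) * rng.uniform(1, 2, size=(n, 1))
        + rng.normal(scale=0.2, size=(n, 50)))
X = np.vstack([quiet, loud])
y = np.array([0.0] * n + [1.0] * n)[:, None]

W1 = rng.normal(scale=0.1, size=(50, 8)); b1 = np.zeros(8)  # raw input -> hidden
W2 = rng.normal(scale=0.1, size=(8, 1));  b2 = np.zeros(1)  # hidden -> output

for _ in range(3000):
    h = np.tanh(X @ W1 + b1)                # hidden layer: the learned representation
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))    # predicted probability of class 1
    d_out = (p - y) / len(y)                # sigmoid + cross-entropy gradient
    d_h = (d_out @ W2.T) * (1 - h ** 2)     # backpropagate into the hidden layer
    W2 -= 1.0 * h.T @ d_out; b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h;   b1 -= 1.0 * d_h.sum(axis=0)

acc = float(((p > 0.5) == (y > 0.5)).mean())
print(f"accuracy on raw signals, no manual features: {acc:.2f}")
```

The hidden weights W1 end up playing the role that the hand-written feature extractor played before, except that here they are tuned automatically during training.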

The Big Data Era is Here

The second significant benefit of Deep Learning, and one of the main reasons for its popularity, is that it is powered by large volumes of data. The “Big Data Era” of technology opens up vast possibilities for advances in Deep Learning. Andrew Ng, the head scientist of Baidu, China’s most popular search engine, and one of the leaders of the Google Brain Project, put it with an analogy:

“The deep learning models are the rocket engine. The fuel is the massive amounts of data we can feed these algorithms.”

Deep learning models tend to keep improving in accuracy as they are given more training data, whereas traditional machine learning models, such as SVMs and Naive Bayes classifiers, stop improving once they reach a saturation point.
