
What is deep learning?

Deep learning is a type of machine learning and artificial intelligence that imitates how humans gain certain kinds of knowledge. According to TechTarget, deep learning is an important element of data science, which also includes statistics; it is highly beneficial to data scientists tasked with collecting, analyzing, and interpreting large amounts of data, because deep learning makes this process faster and easier.

At its simplest, deep learning can be thought of as a way to automate predictive analytics.   

How does deep learning work?

According to IBM, deep learning neural networks, or artificial neural networks, attempt to mimic the human brain through a combination of data inputs, weights, and bias. These elements work together to accurately recognize, classify, and describe objects within the data.

Deep neural networks consist of multiple layers of interconnected nodes, each building upon the previous layer to refine and optimize the prediction or categorization. This progression of computations through the network is called forward propagation. A deep neural network’s input and output layers are called visible layers. The input layer is where the deep learning model ingests the data for processing, and the output layer is where the final prediction or classification is made.
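To make forward propagation concrete, below is a minimal sketch in Python (using NumPy as an assumed tool, since the article names no library) of a batch of inputs flowing from the input layer through one hidden layer to the output layer. The layer sizes and random data are purely illustrative.

```python
# A minimal sketch of forward propagation through a tiny fully connected
# network; layer sizes and data are illustrative, not from the article.
import numpy as np

rng = np.random.default_rng(0)

# Input layer: a batch of 4 examples with 3 features each.
x = rng.normal(size=(4, 3))

# Weights and biases for the hidden layer (4 units) and output layer (2 units).
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 2)), np.zeros(2)

def relu(z):
    return np.maximum(z, 0.0)

# Forward propagation: each layer builds on the previous layer's output.
hidden = relu(x @ W1 + b1)   # hidden layer activations
output = hidden @ W2 + b2    # output layer: one prediction per example
print(output.shape)          # (4, 2)
```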

Another process called backpropagation uses algorithms, like gradient descent, to calculate errors in predictions and then adjusts the weights and biases of the function by moving backwards through the layers to train the model. Together, forward propagation and backpropagation allow a neural network to make predictions and correct for any errors accordingly. Over time, the algorithm becomes gradually more accurate.
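The backward pass can likewise be sketched in a few lines. The example below is an illustrative NumPy sketch rather than the exact procedure IBM describes: a single linear layer is trained with gradient descent, where the forward pass makes predictions, the error is measured, gradients are propagated back to the weights and bias, and the parameters are nudged in the direction that reduces the error.

```python
# A minimal sketch of backpropagation with gradient descent on one linear
# layer and a squared-error loss; data and learning rate are assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(32, 3))                  # inputs
y = x @ np.array([1.5, -2.0, 0.5]) + 0.3      # targets from a known rule

W, b = np.zeros(3), 0.0
lr = 0.1                                      # gradient descent step size

for step in range(200):
    pred = x @ W + b                          # forward propagation
    err = pred - y                            # prediction error
    loss = np.mean(err ** 2)
    # Backpropagation: gradients of the loss w.r.t. the weights and bias.
    grad_W = 2 * x.T @ err / len(x)
    grad_b = 2 * err.mean()
    # Adjust the parameters against the gradient to reduce the error.
    W -= lr * grad_W
    b -= lr * grad_b

print(round(loss, 6), W.round(2), round(b, 2))  # loss shrinks toward zero
```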

The above describes the simplest type of deep neural network in the simplest terms. However, deep learning algorithms are incredibly complex, and there are different types of neural networks to address specific problems or datasets.

Deep learning as scalable learning across domains

According to Machine Learning Mastery, deep learning excels in problem domains where the inputs (and even the outputs) are analog. This means they are not a few quantities in a tabular format, but instead are images of pixel data, documents of text data, or files of audio data.

Yann LeCun is the director of Facebook AI Research and the father of the network architecture that excels at object recognition in image data, the Convolutional Neural Network (CNN). This technique is successful because, like multilayer perceptron feedforward neural networks, it scales with data and model size and can be trained with backpropagation.

This biases his definition of deep learning toward the development of very large CNNs, which have had wild success at object recognition in photographs.
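As an illustration of what such an architecture looks like in code, here is a minimal CNN sketch written with PyTorch (an assumed framework; the article does not name one). The layer sizes and the 10-class output are illustrative choices, not taken from LeCun's work.

```python
# A minimal sketch of a convolutional neural network for image
# classification; framework (PyTorch) and layer sizes are assumptions.
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # Convolutional layers learn local image features such as edges.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # A fully connected layer maps the learned features to class scores.
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One forward pass on a batch of four 32x32 RGB images.
model = SmallCNN()
scores = model(torch.randn(4, 3, 32, 32))
print(scores.shape)  # torch.Size([4, 10])
```

Like the simpler networks above, this model is trained with backpropagation; only the layer types change, which is why the approach scales so well to image data.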

In a 2016 talk at Lawrence Livermore National Laboratory titled “Accelerating Understanding: Deep Learning, Intelligent Applications, and GPUs,” he described deep learning generally as learning hierarchical representations and defined it as a scalable approach to building object recognition systems.
