Python is a popular language for deep learning due to its versatility, simplicity, and broad support for libraries and frameworks. TensorFlow, Keras, scikit-learn and PyTorch are essential libraries used in deep learning, providing various tools and APIs for creating, training and deploying deep neural networks (DNNs).
Deep learning refers to the creation and training of deep neural networks, which are computational models inspired by the structure and function of the human brain. Python is a natural choice of programming language for exploring deep neural networks and activation functions such as the sigmoid.
But where to start? This article covers deep learning in Python, including deep neural networks and the sigmoid function, and walks through how a simple deep neural network is constructed.
Deep learning is a subfield of machine learning, which is a branch of AI (artificial intelligence) used to create algorithms that enable computers to learn independently and make decisions from data.
Deep learning focuses on building artificial neural networks that mimic how neurons in the human brain communicate and learn from data, so that a computer can learn from examples and solve complex problems automatically.
An artificial neural network consists of multiple interconnected layers of nodes that process and transform the input data to produce meaningful outputs.
Deep learning has gained immense popularity and success in recent years due to several key factors, including the availability of large datasets, powerful hardware such as GPUs, and improvements in network architectures and training algorithms.
The range of applications for deep learning is huge: it can handle massive datasets and complex tasks like image recognition, speech recognition, autonomous driving and many more. Its ability to automatically learn and extract relevant features from raw data has made it a game-changer and a valuable technology for solving complex problems across various industries.
Deep neural networks (DNNs), often called deep networks, are machine learning models consisting of multiple layers of interconnected nodes or neurons. Each layer processes and transforms the output of the previous layer, so the data is represented hierarchically.
This depth, the stacking of many layers of nodes or neurons, is what allows the model to learn and represent complex, abstract patterns within the given data.
The DNN nodes or neurons are organized into three types of layers:
Input layer: receives the raw input data.
Hidden layers: one or more intermediate layers that transform the data step by step.
Output layer: produces the final prediction or result.
Each node or neuron in one layer is connected to every node or neuron in the adjacent layers. This dense web of connections across many layers is what characterizes a deep neural network.
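To make the idea of layered transformations concrete, here is a minimal illustrative sketch (not from the original walkthrough) of data flowing through one hidden layer and one output layer using NumPy. The input values and random weights are arbitrary, and the sigmoid activation used here is explained later in this article.
import numpy as np

def sigmoid(z):
    # Maps any real number to the range (0, 1); discussed in detail below
    return 1 / (1 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])      # input layer: 3 features (arbitrary example values)
W1 = np.random.randn(3, 4)          # weights from the input to a hidden layer of 4 neurons
b1 = np.zeros(4)
hidden = sigmoid(x @ W1 + b1)       # hidden layer output
W2 = np.random.randn(4, 1)          # weights from the hidden layer to a single output neuron
b2 = np.zeros(1)
output = sigmoid(hidden @ W2 + b2)  # final output, a value between 0 and 1
print(output)
Each layer applies a weighted sum followed by an activation function, which is exactly the pattern a framework like Keras automates for us later in this article.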
Activation functions are an important component of deep neural networks: they introduce non-linearity into the computational model, allowing the network to learn complex patterns and relationships in the data.
Basically, an activation function decides whether a neuron should be activated (i.e., “fired”) or not, and if so, to what extent. Put more simply, it uses a mathematical operation to decide how important a neuron's contribution is to the network's prediction or decision.
There are a variety of activation functions available in a neural network, such as:
Sigmoid (logistic)
ReLU (rectified linear unit)
Tanh (hyperbolic tangent)
Softmax
Let’s talk about the sigmoid function in detail to understand the workings of activation functions.
The sigmoid function, often called the logistic sigmoid function, is a mathematical function commonly used in deep learning and deep neural networks. It maps any real-valued number to a value between 0 and 1, which makes it well suited to computational models where we must predict a probability as an output.
Graphed, the sigmoid function looks like an S-shaped curve, which makes it helpful for representing probabilities.
The larger (more positive) the input, the closer the output is to 1.0; the smaller (more negative) the input, the closer the output is to 0.0.
The sigmoid function is defined as:
σ(x) = 1 / (1 + e^(-x))
Where:
x is the input to the sigmoid function.
e is Euler's number (approximately 2.71828).
The sigmoid function is commonly used because it introduces non-linearity into the computational model. It is primarily used in the output layer, where a deep neural network is required to generate probabilities.
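As a quick illustration (not part of the original walkthrough), here is a minimal NumPy sketch of the sigmoid function applied to a few sample inputs:
import numpy as np

def sigmoid(x):
    # Logistic sigmoid: maps any real number to the range (0, 1)
    return 1 / (1 + np.exp(-x))

# Large negative inputs approach 0, large positive inputs approach 1
print(sigmoid(np.array([-5.0, 0.0, 5.0])))
# Roughly: [0.0067 0.5 0.9933]
This mirrors the S-shaped behavior described above: negative inputs are squashed toward 0 and positive inputs toward 1.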
Deep Neural Network Example
We are going to build a simple neural network to demonstrate its ability to learn and predict the logical XOR operation. We will go through the code step-by-step and then run it to predict the output from the given input.
First, we start by importing all the required libraries. In our case, we need NumPy, TensorFlow and the Keras module that ships with TensorFlow.
import numpy as np
import tensorflow as tf
from tensorflow import keras
Here, NumPy is used for numerical operations, TensorFlow is used for building neural networks, and the Keras module of TensorFlow is used for defining and training the model.
We generate synthetic data using NumPy to represent the logical XOR operation.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])
The X variable contains the input data (the four possible pairs of binary inputs).
The y variable contains the binary classification labels (the XOR of each pair).
Then, we define a simple neural network using keras.Sequential.
model = keras.Sequential([
    keras.layers.Dense(2, activation='sigmoid', input_shape=(2,)),  # first (hidden) layer with 2 neurons, sigmoid activation, 2 input features
    keras.layers.Dense(1, activation='sigmoid')                     # output layer with 1 neuron and sigmoid activation
])
The defined neural network consists of:
A first (hidden) layer with 2 neurons, a sigmoid activation, and an input shape of 2 features.
An output layer with a single neuron and a sigmoid activation, producing a value between 0 and 1.
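As an optional aside (not part of the original walkthrough), you can print a quick overview of the layers and their parameter counts using Keras's built-in summary method:
model.summary()  # prints each layer's output shape and number of trainable parameters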
We compile the model with the “adam” optimizer and the “binary_crossentropy” loss, and we specify that we want to track the model's accuracy during training.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
After building and compiling the model, we need to train it on the data. So, we fit the model on the input data X and the labels y for 5000 epochs. Additionally, we set the verbose argument to 0 to suppress the per-epoch training updates.
model.fit(X, y, epochs=5000, verbose=0)
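As a side note (not in the original article), model.fit returns a History object, so you could capture it to check the final recorded training accuracy without re-enabling the per-epoch output (the 'accuracy' key assumes a recent TensorFlow 2.x version):
history = model.fit(X, y, epochs=5000, verbose=0)  # same call as above, but keeping the returned History object
print("Final training accuracy:", history.history['accuracy'][-1])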
After completing the training part of the neural network, we go ahead and use the trained model to make predictions on the same synthetic input data X.
predictions = model.predict(X)
print("Predictions:")
print(predictions)
The predictions array contains the model's predicted output for each input sample.
Since this neural network uses the sigmoid activation and was trained on the logical XOR operation, each output is a value between 0 and 1, with values near 1 indicating an XOR result of 1 and values near 0 indicating a result of 0.
The output will look like this:
Predictions:
[[0.19620144]
[0.8595249 ]
[0.7904943 ]
[0.18041825]]
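If you want hard class labels rather than probabilities, a common follow-up step (not shown in the original walkthrough) is to threshold the predictions at 0.5:
labels = (predictions > 0.5).astype(int)  # convert each probability to a 0/1 class label
print(labels)
For the values shown above, this yields [[0], [1], [1], [0]], which matches the XOR truth table.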
In this article, we’ve covered concepts like deep learning, deep neural networks, activation functions, and the sigmoid function, and worked through a deep neural network example. We’ve also included a step-by-step guide to building your first deep neural network using the sigmoid function in Python.
The deep neural network we created used the Python NumPy and TensorFlow libraries. We developed it from scratch: defining the network with sigmoid activations, compiling and training the model, and finally predicting values with the trained model.
Are you looking for a Python web development company to develop exceptional websites with seamless experience? Look no further! We at Delphin Technologies provide services to bring your ideas to life in stunning visuals and performance.