Neural Networks

What is a Neural Network?

Neural networks are mathematical models that use learning algorithms inspired by the brain to store information. Since they are implemented in machines, they are collectively called 'artificial neural networks' (ANNs).


Nowadays, the term machine learning is often used in this field; it is the scientific discipline concerned with the design and development of algorithms that allow computers to learn from data, such as sensor data or databases.

A major focus of machine-learning research is to automatically learn to recognize complex patterns and make intelligent decisions based on data.
Neural networks are a popular framework to perform machine learning, but there are many other machine-learning methods, such as logistic regression and support vector machines.
Similar to the brain, neural networks are built up of many neurons with many connections between them. 


Neural networks have been used in many applications to model the unknown relations between various parameters based on large numbers of examples. 
Examples of successful applications of neural networks include the classification of handwritten digits, speech recognition, and the prediction of stock prices. Moreover, neural networks are increasingly used in medical applications.


Where did it come from?

The first step toward artificial neural networks came in 1943, when Warren McCulloch, a neurophysiologist, and a young mathematician, Walter Pitts, wrote a paper on how neurons might work. 
They modeled a simple neural network with electrical circuits. In the 1950s, Rosenblatt's work resulted in a two-layer network, the perceptron, which was capable of learning certain classifications by adjusting connection weights but also had some limitations. In the early 1980s, researchers showed renewed interest in neural networks.

The term encompasses a family of nonlinear computational methods that, at least in the early stages of their development, were inspired by the functioning of the human brain.
Indeed, the first ANNs were nothing more than integrated circuits devised to reproduce and understand the transmission of nerve stimuli and signals in the human central nervous system.
Since then, however, computational NNs have progressively emerged as tools able to perform tasks or solve problems that were considered difficult, or in some cases impossible, for traditional mathematical and statistical methods.

As a result, during the last few years, research on ANNs has proceeded along two main pathways:

1. One is neurophysiologically oriented: it aims at developing in silico models of the human brain that are as accurate as possible, to gain a deeper understanding of its mechanisms and behavior.

2. The other considers NNs simply as a computational tool for solving complex, usually heavily nonlinear, problems.
In this second framework, interest is focused not so much on obtaining the best possible model of the human brain as on identifying and capturing the biological features that let humans perform some tasks better and faster than any computer, and on implementing them in a rather schematic algorithmic form.
They provide accurate models and prediction in a wide range of fields, such as econometrics, weather forecasting, signal filtering, and pattern recognition.
Many different types of neural networks exist. Examples are the Hopfield network, the multilayer perceptron, the Boltzmann machine, and the Kohonen network. The most commonly used and successful neural network is the multilayer perceptron.

What is a multilayer perceptron?

The multilayer perceptron (MLP) is a class of feedforward neural network.
It consists of three types of layers: the input layer, the hidden layers, and the output layer.


 
As in any feedforward network, data in an MLP flows in the forward direction from the input layer to the output layer. The neurons in the MLP are trained with the backpropagation learning algorithm.

The connections between the layers are assigned weights. The weight of a connection specifies its importance. This concept is the backbone of an MLP’s learning process.

While the inputs take their values from the surroundings, the values of all the other neurons are calculated through a mathematical function involving the weights and values of the layer before it.
In a conventional MLP, random weights are assigned to all the connections. These random weights propagate values through the network to produce the actual output. Naturally, this output would differ from the expected output. The difference between the two values is called the error.
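This forward pass can be sketched in a few lines of Python. The 2-2-1 layer sizes, the sigmoid activation, and the target value here are illustrative assumptions, not part of the MLP definition:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    # a common activation function, squashing values into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, 0.8])        # input values taken from the surroundings
W1 = rng.normal(size=(2, 2))    # random input-to-hidden connection weights
W2 = rng.normal(size=(2, 1))    # random hidden-to-output connection weights

# each neuron's value is a function of the weights and the layer before it
hidden = sigmoid(x @ W1)
actual = sigmoid(hidden @ W2)   # actual output of the untrained network

expected = np.array([1.0])      # the expected output
error = expected - actual       # the difference between the two values
print(error)
```

Because the weights are random, this error is typically large; reducing it is exactly what training does.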

Backpropagation refers to the process of sending this error back through the network, readjusting the weights automatically so that eventually, the error between the actual and expected output is minimized.

In this way, the weights adjusted in the current iteration are carried into the next, affecting the next output. This is repeated until the error is acceptably small. The weights at the end of the process are the ones with which the neural network works correctly.
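The whole loop (forward pass, error, weight readjustment, repeat) can be sketched on the XOR problem, which is a classic example of a task that is not linearly separable. The hidden-layer size, learning rate, and iteration count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR: inputs and expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))    # random input-to-hidden weights
W2 = rng.normal(size=(4, 1))    # random hidden-to-output weights
lr = 0.5                        # learning rate (illustrative choice)

initial_error = np.mean((y - sigmoid(sigmoid(X @ W1) @ W2)) ** 2)

for _ in range(10000):
    # forward pass: propagate values through the network
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)
    # backpropagation: send the error back and readjust the weights
    d_out = (y - out) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 += lr * h.T @ d_out
    W1 += lr * X.T @ d_h

final_error = np.mean((y - out) ** 2)
print(final_error)
```

After training, the mean squared error is far below its initial value, and the network's outputs approach the expected XOR pattern.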

MLPs can approximate any continuous function and can solve problems that are not linearly separable. The major use cases of MLPs are pattern classification, recognition, prediction, and approximation.
The multilayer perceptron (MLP) is used for a variety of tasks, such as stock analysis, image identification, spam detection, and election voting predictions.
 

How does a NN work?

Neural networks are typically organized in layers. Layers are made up of a number of interconnected 'nodes' which contain an 'activation function'. Patterns are presented to the network via the 'input layer', which communicates to one or more 'hidden layers' where the actual processing is done via a system of weighted 'connections'. The hidden layers then link to an 'output layer'.

In simple terms, a neural network has many layers; each layer performs a specific function, and the more complex the network, the more layers it has.
The purest form of a neural network has three layers:
  1. The input layer - It picks up the input signals and transfers them to the next layer. It gathers the data from the outside world.
  2. The hidden layer - It performs all the back-end calculations. A simple perceptron can have zero hidden layers, but an MLP has at least one.
  3. The output layer - It transmits the final result of the hidden layer's calculations.
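A minimal sketch of this three-layer structure, with arbitrary layer sizes (3 input, 4 hidden, and 2 output nodes) and tanh as the nodes' activation function, both of which are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def activation(z):
    return np.tanh(z)               # each node applies an activation function

layer_sizes = [3, 4, 2]             # input, hidden, and output layers
# weighted connections between each pair of consecutive layers
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

values = rng.normal(size=3)         # the input layer gathers outside data
for W in weights:
    values = activation(values @ W) # each layer feeds the next
print(values.shape)                 # (2,): one value per output node
```

The same loop works for any number of hidden layers; only the `layer_sizes` list changes.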


Industrial use case of Neural Network :

Retail & Sales

Neural networks are excellent in the realm of sales forecasting, due to their ability to simultaneously consider multiple variables such as market demand for a product, a customer's disposable income, population size, product price, and the price of complementary products. Neural-network sales forecasting for supermarkets and wholesale suppliers has been shown to outperform traditional statistical techniques such as regression, as well as human experts.
Another important area where retail and sales can benefit from neural networks is in shopping cart analysis, such as gathering and inputting information relating to which products are often purchased together, or the expected time delay between sales of two products.
Retailers can use this information to make decisions about the layout of the store.

Banking & Finance

One of the main areas of banking and finance that has been affected by neural networks is trading and financial forecasting. Neural networks have been applied successfully to problems like derivative securities pricing and hedging, futures price forecasting, exchange rate forecasting and stock performance and selection prediction since the 1990s.
But there are many other areas of banking and finance that have been improved through the use of neural networks. For many years, banks have used credit scoring techniques to determine which loan applicants they should lend money to. Traditionally, statistical techniques have driven the software. These days, however, neural networks are the underlying technique driving the decision making. Credit scoring systems can learn to correctly identify good or poor credit risks. Neural networks have also been successful in learning to predict corporate bankruptcy.

Telecommunications

Machine learning offers telecommunications organizations a clear opportunity to ascertain a much more complete picture of their operations and their customers, as well as to further their innovation efforts. Some companies are using a series of neural networks to analyze customer and call data in order to predict if, when, and why a customer is likely to leave for another competitor. Many telecommunications organizations use machine learning to help predict the effects of forthcoming promotional strategies, as well as sift through and refine data to find the most profitable customers.

Credit Evaluation

The HNC company, founded by Robert Hecht-Nielsen, has developed several neural network applications. One of them is the Credit Scoring system, which increased the profitability of the existing model by up to 27%. The HNC neural systems were also applied to mortgage screening. A neural network automated mortgage insurance underwriting system was developed by the Nestor Company. This system was trained on 5048 applications, of which 2597 were certified; the data related to property and borrower qualifications. In a conservative mode, the system agreed with the underwriters on 97% of the cases; in the liberal mode, it agreed on 84% of the cases. The system ran on an Apollo DN3000, used 250K of memory, and processed a case file in approximately 1 second.

 

Hope you liked my blog :)

