ARTIFICIAL NEURAL NETWORKS
4.1 CONCEPT AND LAYOUT OF ANN
Neural networks are inspired by the human nervous system, which consists of neurons. A neuron has a stem-like structure called the axon, with protrusions called dendrites. Neurons are joined to one another through these dendrites, and the dendrites of different nerves meet to form synapses. Messages from various parts of the body to the brain pass through these synapses. A neuron receives an impulse (message) via its synapses and sends it down the axon only when the strength of the signal exceeds some threshold value. The axon then passes the impulse to other neurons via further synapses. Thus, in our body, a neuron receives a set of signals (from adjoining neurons via synapses) and produces an output that depends on the strength of the signals received and the nature of the synapses through which the input impulse has passed.
In a very similar fashion, a neural network consists of computational units and the connections between them, and is divided into layers. The output of each computational unit depends on the input received from the previous layer, the weights attached to the links from that layer, and the threshold value specified for the unit. The information is processed at each neuron according to the following formula:
Output = F(∑(input × weight) − threshold value)
The function F is a suitable non-linear function; the functions generally used in practice are the sigmoid function, the step function, and the ramp function. The sigmoid function is given by
F(x) = (1 + exp(−x))⁻¹
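The neuron formula and the sigmoid activation can be sketched in a few lines of code. The input, weight, and threshold values below are made up for illustration and are not taken from the experiments:

```python
import math

def sigmoid(x):
    """Sigmoid activation: F(x) = (1 + exp(-x))^-1."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, threshold):
    """Output = F(sum(input * weight) - threshold)."""
    net = sum(i * w for i, w in zip(inputs, weights)) - threshold
    return sigmoid(net)

# Illustrative values only (not from the paper)
print(neuron_output([0.5, 0.8], [0.4, 0.6], 0.3))  # ~0.594
```

Because the weighted sum here is 0.2 + 0.48 − 0.3 = 0.38, the neuron output is sigmoid(0.38), a value slightly above 0.5.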
A neural network is an adaptable system that can learn relationships through repeated presentation of data and is capable of generalizing to new, previously unseen data. If a network is to be of any use, there must be inputs (which carry the values of variables of interest in the outside world) and outputs (which form predictions or control signals). Inputs and outputs correspond to sensory and motor nerves, such as those coming from the eyes and leading to the hands. However, there may also be hidden neurons, which play an internal role in the network. The input, hidden, and output neurons need to be connected together.
The best-known example of a neural network training algorithm is back-propagation. In back-propagation, the gradient vector of the error surface is calculated. The negative of this vector points in the direction of steepest descent from the current point, so we know that if we move along it a short distance, we will decrease the error. A sequence of such moves (slowing as we near the bottom) will eventually find a minimum of some sort. Large steps may converge more quickly, but may also overstep the solution or (if the error surface is very eccentric) go off in the wrong direction. A classic example of this in neural network training is where the algorithm progresses very slowly along a steep, narrow valley, bouncing from one side across to the other. In contrast, very small steps may go in the correct direction, but they also require a large number of iterations.

The input variables used in the present investigation are: electrode radius (R, mm), travel speed (S, mm s-1), electrode speed (U, mm s-1), voltage (V, V), current (I, A), frequency (ν, s-1), and arc length (L, mm). The output layer, forming the variable to be predicted, consists of the depth of penetration (D, mm). The hidden and output layer neurons are each connected to all of the units in the preceding layer. When the network is executed (used), the input variable values are placed in the input units, and then the hidden and output layer units are progressively executed. Each of them calculates its activation value by taking the weighted sum of the outputs of the units in the preceding layer and subtracting the threshold. The activation value is passed through the activation function to produce the output of the neuron. When the entire network has been executed, the outputs of the output layer act as the outputs of the entire network.
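The step-size trade-off described above can be seen in a toy one-dimensional gradient descent. The quadratic below is a made-up stand-in for the error surface, not the welding model:

```python
def gradient_descent(grad, x0, lr, steps):
    """Repeatedly step against the gradient with learning rate lr."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum is at x = 3
grad = lambda x: 2 * (x - 3)

print(gradient_descent(grad, x0=0.0, lr=0.1, steps=100))   # converges very close to 3
print(gradient_descent(grad, x0=0.0, lr=0.01, steps=100))  # small steps: still well short of 3
```

With lr = 0.1 the iterate reaches the minimum to high precision in 100 steps, while with lr = 0.01 the same 100 steps leave it noticeably short, illustrating why very small steps require many more iterations.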
The experimental data used to train the proposed neural network are shown in Table 1. In this study, the structure of the neural network was 7-30-50-30-1 (7 neurons in the input layer, 30 neurons in the 1st hidden layer, 50 neurons in the 2nd hidden layer, 30 neurons in the 3rd hidden layer, and 1 neuron in the output layer). The network was trained for 300 iterations; further training did not improve its performance. The testing data are given in Table 2, and these sets of data were not used for training the network.
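As a sketch of how the 7-30-50-30-1 topology executes, the forward pass below uses randomly initialized weights and a placeholder input vector as stand-ins for the trained parameters and real welding conditions (illustrative only; the original work was implemented in Matlab):

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [7, 30, 50, 30, 1]  # input, three hidden layers, output

# Random weights and biases as stand-ins for the trained network parameters
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [rng.standard_normal(n) for n in layer_sizes[1:]]

def forward(x):
    """Propagate one 7-element input vector through the network."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        a = 1.0 / (1.0 + np.exp(-(a @ W + b)))  # sigmoid at each layer
    return a

x = rng.standard_normal(7)  # placeholder for one set of welding conditions
print(forward(x).shape)     # (1,)
```

Each layer takes the weighted sum of the previous layer's outputs, adds the bias, and applies the sigmoid, exactly as described for a single neuron above.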
Using the experimental data and testing data, program code was written in Matlab 6.1. The errors in the penetration estimates very rarely exceeded 20%, so the network was able to predict with significant accuracy. It can be concluded from this part of the work that neural networks constitute a workable model for predicting the depth of penetration under a given set of welding conditions.
Once the network weights and biases have been initialized, the network is ready for training. The network can be trained for function approximation (nonlinear regression), pattern association, or pattern classification. The training process requires a set of examples of proper network behavior: network inputs p and target outputs t. During training the weights and biases of the network are iteratively adjusted to minimize the network performance function. The default performance function for feedforward networks is the mean square error: the average squared error between the network outputs a and the target outputs t.
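The mean square error performance function can be written directly. Here it is evaluated on the three ANN-predicted and experimental depths of penetration from the comparison table in the results:

```python
def mse(outputs, targets):
    """Mean square error between network outputs a and target outputs t."""
    return sum((a - t) ** 2 for a, t in zip(outputs, targets)) / len(outputs)

# ANN-predicted vs experimental depths of penetration (mm)
print(mse([2.6, 2.1, 4.3], [2.3, 2.5, 3.9]))  # about 0.137
```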
RESULTS AND DISCUSSION:
Table 2: Testing data for the gas metal arc welding process
| R, mm | S, mm s-1 | U, mm s-1 | V, V | I, A | ν, s-1 | L, mm | d, mm |
|-------|-----------|-----------|------|------|--------|-------|-------|
| 0.572 | 7 | 134.3 | 19.8 | 196.5 | 53 | 2.9 | 2.3 |
| 0.445 | 7 | 169.2 | 23.5 | 143.0 | 64 | 3.7 | 2.5 |
| 0.572 | 7 | 134.3 | 28.7 | 244.9 | 280 | 7.2 | 3.9 |
Comparison of depth of penetration and error % under different conditions

| S.No | Experimental d, mm | Theoretical d, mm | Error % | ANN d, mm | Error % |
|------|--------------------|-------------------|---------|-----------|---------|
| 1 | 2.3 | 2.558 | 11.2 | 2.6 | 13.04 |
| 2 | 2.5 | 2.2484 | 10.02 | 2.1 | 16 |
| 3 | 3.9 | 4.4084 | 13.03 | 4.3 | 10.25 |
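The error % columns above are consistent with the usual percentage-error formula, assuming the error is taken relative to the experimental value (the paper does not state the formula explicitly). A quick check against two rows of the table:

```python
def error_pct(experimental, predicted):
    """Percentage error of a prediction relative to the experimental value."""
    return abs(experimental - predicted) / experimental * 100

# Row 1: experimental 2.3 mm, ANN 2.6 mm
print(round(error_pct(2.3, 2.6), 2))  # 13.04
# Row 2: experimental 2.5 mm, ANN 2.1 mm
print(round(error_pct(2.5, 2.1), 2))  # 16.0
```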
Figure: Variation of depth of penetration in different experiments with different welding conditions
Figure: Effect of arc length on depth of penetration in different methods
Figure: Variation of error in three experiments in the theoretical and ANN models