→ Input the patterns for the digits 0 to 9. Each pattern is a 5x7 pixel grid flattened to 35 values; the trailing transpose (restored here for consistency with the echoed 35x10 matrix and the 35-element column vectors passed to sim below) makes each digit one column:
>> pola = [0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0;
0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0;
0 0 1 1 0 0 1 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 0 0 0 1 1 1 0;
0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 0 0;
0 0 0 1 0 0 0 1 1 0 0 1 0 1 0 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0;
0 1 1 1 0 0 1 0 0 0 0 1 0 0 0 0 1 1 1 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 0;
0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 1 1 0 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0;
1 1 1 1 1 0 0 0 0 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 0;
0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 0 0 1 0 0;
0 0 1 0 0 0 1 0 1 0 0 1 0 1 0 0 1 1 1 0 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0]';
pola =
[MATLAB echoes the 35x10 pattern matrix: one row per pixel, one column per digit 0 to 9.]
→ Input the targets for the digit patterns 0 to 9 (the 4-bit binary code of each digit, transposed so each code is one column, matching pola):
>> target = [0 0 0 0;0 0 0 1;0 0 1 0;0 0 1 1;0 1 0 0;0 1 0 1;0 1 1 0;0 1 1 1;1 0 0 0;1 0 0 1]';
target =
     0     0     0     0     0     0     0     0     1     1
     0     0     0     0     1     1     1     1     0     0
     0     0     1     1     0     0     1     1     0     0
     0     1     0     1     0     1     0     1     0     1
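A short sketch (outside the MATLAB session) of how these target codes arise: each digit 0 to 9 is simply its own 4-bit binary representation, most significant bit first:

```python
def encode(digit):
    """4-bit binary code of a digit, MSB first, e.g. encode(9) -> [1, 0, 0, 1]."""
    return [(digit >> k) & 1 for k in (3, 2, 1, 0)]

# One code per digit; these are the ten columns of the target matrix.
codes = [encode(d) for d in range(10)]
print(codes[5])  # [0, 1, 0, 1]
```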
→ Build the network (10 hidden neurons, 4 output neurons, logsig activations in both layers):
>> net=newff(minmax(pola),[10,4],{'logsig','logsig'})
Warning: NEWFF used in an obsolete way.
> In nntobsu at 18
  In newff at 86
See help for NEWFF to update calls to the new argument list.
** Warning in INIT
** Network "input{1}.processedRange" has a row with equal min and max values.
** Constant inputs do not provide useful information.
net =

Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 2
       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

        numOutputs: 1 (read-only)
    numInputDelays: 0 (read-only)
    numLayerDelays: 0 (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {2x1 cell} of layers
           outputs: {1x2 cell} containing 1 output
            biases: {2x1 cell} containing 2 biases
      inputWeights: {2x1 cell} containing 1 input weight
      layerWeights: {2x2 cell} containing 1 layer weight

    functions:

          adaptFcn: 'trains'
         divideFcn: (none)
       gradientFcn: 'gdefaults'
           initFcn: 'initlay'
        performFcn: 'mse'
          plotFcns: {'plotperform','plottrainstate','plotregression'}
          trainFcn: 'trainlm'

    parameters:

        adaptParam: .passes
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: .show, .showWindow, .showCommandLine, .epochs,
                    .time, .goal, .max_fail, .mem_reduc,
                    .min_grad, .mu, .mu_dec, .mu_inc, .mu_max

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    other:

              name: ''
          userdata: (user information)
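The repeated INIT warning above says that minmax(pola) contains a row whose minimum equals its maximum: a pixel with the same value in all ten digit patterns (e.g. a corner pixel that is always 0) carries no information for the network. A hedged sketch of the same check in plain Python:

```python
def constant_rows(matrix):
    """Indices of rows whose min equals max, i.e. constant inputs."""
    return [i for i, row in enumerate(matrix) if min(row) == max(row)]

# Toy example: 3 pixels observed across 4 samples; pixels 0 and 1 never vary,
# so they would trigger the same "equal min and max values" complaint.
pixels = [[0, 0, 0, 0],
          [1, 1, 1, 1],
          [0, 1, 0, 1]]
print(constant_rows(pixels))  # [0, 1]
```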
>> net=init(net)
** Warning in INIT
** Network "input{1}.processedRange" has a row with equal min and max values.
** Constant inputs do not provide useful information.
net =

Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 2
       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

        numOutputs: 1 (read-only)
    numInputDelays: 0 (read-only)
    numLayerDelays: 0 (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {2x1 cell} of layers
           outputs: {1x2 cell} containing 1 output
            biases: {2x1 cell} containing 2 biases
      inputWeights: {2x1 cell} containing 1 input weight
      layerWeights: {2x2 cell} containing 1 layer weight

    functions:

          adaptFcn: 'trains'
         divideFcn: (none)
       gradientFcn: 'gdefaults'
           initFcn: 'initlay'
        performFcn: 'mse'
          plotFcns: {'plotperform','plottrainstate','plotregression'}
          trainFcn: 'trainlm'

    parameters:

        adaptParam: .passes
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: .show, .showWindow, .showCommandLine, .epochs,
                    .time, .goal, .max_fail, .mem_reduc,
                    .min_grad, .mu, .mu_dec, .mu_inc, .mu_max

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    other:

              name: ''
          userdata: (user information)
>> net=newff(minmax(pola),[10,4],{'logsig','logsig'});
Warning: NEWFF used in an obsolete way.
> In nntobsu at 18
  In newff at 86
See help for NEWFF to update calls to the new argument list.
** Warning in INIT
** Network "input{1}.processedRange" has a row with equal min and max values.
** Constant inputs do not provide useful information.
>> net=newff(minmax(pola),[10,4],{'logsig','logsig'})
Warning: NEWFF used in an obsolete way.
> In nntobsu at 18
  In newff at 86
See help for NEWFF to update calls to the new argument list.
** Warning in INIT
** Network "input{1}.processedRange" has a row with equal min and max values.
** Constant inputs do not provide useful information.
net =

Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 2
       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

        numOutputs: 1 (read-only)
    numInputDelays: 0 (read-only)
    numLayerDelays: 0 (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {2x1 cell} of layers
           outputs: {1x2 cell} containing 1 output
            biases: {2x1 cell} containing 2 biases
      inputWeights: {2x1 cell} containing 1 input weight
      layerWeights: {2x2 cell} containing 1 layer weight

    functions:

          adaptFcn: 'trains'
         divideFcn: (none)
       gradientFcn: 'gdefaults'
           initFcn: 'initlay'
        performFcn: 'mse'
          plotFcns: {'plotperform','plottrainstate','plotregression'}
          trainFcn: 'trainlm'

    parameters:

        adaptParam: .passes
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: .show, .showWindow, .showCommandLine, .epochs,
                    .time, .goal, .max_fail, .mem_reduc,
                    .min_grad, .mu, .mu_dec, .mu_inc, .mu_max

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    other:

              name: ''
          userdata: (user information)
>> net.trainParam.epochs
ans =
    1000
→ Train the network on pola and target; the epoch-by-epoch training progress appears in Figure 1.1:
>> net=train(net,pola,target)
net =

Neural Network object:

    architecture:

         numInputs: 1
         numLayers: 2
       biasConnect: [1; 1]
      inputConnect: [1; 0]
      layerConnect: [0 0; 1 0]
     outputConnect: [0 1]

        numOutputs: 1 (read-only)
    numInputDelays: 0 (read-only)
    numLayerDelays: 0 (read-only)

    subobject structures:

            inputs: {1x1 cell} of inputs
            layers: {2x1 cell} of layers
           outputs: {1x2 cell} containing 1 output
            biases: {2x1 cell} containing 2 biases
      inputWeights: {2x1 cell} containing 1 input weight
      layerWeights: {2x2 cell} containing 1 layer weight

    functions:

          adaptFcn: 'trains'
         divideFcn: (none)
       gradientFcn: 'gdefaults'
           initFcn: 'initlay'
        performFcn: 'mse'
          plotFcns: {'plotperform','plottrainstate','plotregression'}
          trainFcn: 'trainlm'

    parameters:

        adaptParam: .passes
       divideParam: (none)
     gradientParam: (none)
         initParam: (none)
      performParam: (none)
        trainParam: .show, .showWindow, .showCommandLine, .epochs,
                    .time, .goal, .max_fail, .mem_reduc,
                    .min_grad, .mu, .mu_dec, .mu_inc, .mu_max

    weight and bias values:

                IW: {2x1 cell} containing 1 input weight matrix
                LW: {2x2 cell} containing 1 layer weight matrix
                 b: {2x1 cell} containing 2 bias vectors

    other:

              name: ''
          userdata: (user information)
→ Network outputs for the digit patterns 0 to 9:
>> output=sim(net,pola)
output =
    0.0000    0.0000    0.0000    0.0000    0.0000    0.0000    0.0000    0.0000    1.0000    1.0000
    0.0000    0.0000    0.0000    0.0000    1.0000    1.0000    1.0000    1.0000    0.0000    0.0000
    0.0000    0.0000    1.0000    1.0000    0.0000    0.0000    1.0000    1.0000    0.0000    0.0000
    0.0000    1.0000    0.0000    1.0000    0.0000    1.0000    0.0000    1.0000    0.0000    1.0000
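Each column above is the 4-bit code of the corresponding digit (compare with target). A sketch of turning one output column back into a digit, assuming the four values are thresholded at 0.5 and read most significant bit first:

```python
def decode(outputs, threshold=0.5):
    """Turn a 4-element network output into a digit (MSB first)."""
    bits = [1 if o > threshold else 0 for o in outputs]
    return 8 * bits[0] + 4 * bits[1] + 2 * bits[2] + 1 * bits[3]

print(decode([0.0000, 1.0000, 0.0000, 1.0000]))  # 5
```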
→ Test with the digit-0 input pattern:
>> output=sim(net,[0;0;1;0;0;0;1;0;1;0;0;1;0;1;0;0;1;0;1;0;0;1;0;1;0;0;1;0;1;0;0;0;1;0;0])
output =
1.0e-006 *
0.0017
0.0400
0.0000
0.2389
→ Test with the digit-1 input pattern:
>> output=sim(net,[0;0;1;0;0;0;0;1;0;0;0;0;1;0;0;0;0;1;0;0;0;0;1;0;0;0;0;1;0;0;0;0;1;0;0])
output =
0.0000
0.0000
0.0000
1.0000
→ Test with the digit-2 input pattern:
>> output=sim(net,[0;0;1;1;0;0;1;0;1;0;0;0;0;1;0;0;0;1;0;0;0;1;0;0;0;0;1;0;0;0;0;1;1;1;0])
output =
0.0000
0.0000
1.0000
0.0000
→ Test with the digit-3 input pattern:
>> output=sim(net,[0;1;1;0;0;0;0;0;1;0;0;0;0;1;0;0;0;1;0;0;0;0;0;1;0;0;0;0;1;0;0;1;1;0;0])
output =
0.0000
0.0000
1.0000
1.0000
→ Test with the digit-4 input pattern:
>> output=sim(net,[0;0;0;1;0;0;0;1;1;0;0;1;0;1;0;1;1;1;1;0;0;0;0;1;0;0;0;0;1;0;0;0;0;1;0])
output =
0.0000
1.0000
0.0000
0.0000
→ Test with the digit-5 input pattern:
>> output=sim(net,[0;1;1;1;0;0;1;0;0;0;0;1;0;0;0;0;1;1;1;0;0;0;0;1;0;0;0;0;1;0;0;1;1;1;0])
output =
0.0000
1.0000
0.0000
1.0000
→ Test with the digit-6 input pattern:
>> output=sim(net,[0;0;0;1;0;0;0;1;0;0;0;1;0;0;0;0;1;1;1;0;0;1;0;1;0;0;1;0;1;0;0;0;1;0;0])
output =
0.0000
1.0000
1.0000
0.0000
→ Test with the digit-7 input pattern:
>> output=sim(net,[1;1;1;1;1;0;0;0;0;1;0;0;0;0;1;0;0;0;1;0;0;0;1;0;0;0;1;0;0;0;1;0;0;0;0])
output =
0.0000
1.0000
1.0000
1.0000
→ Test with the digit-8 input pattern:
>> output=sim(net,[0;0;1;0;0;0;1;0;1;0;0;1;0;1;0;0;0;1;0;0;0;1;0;1;0;0;1;0;1;0;0;0;1;0;0])
output =
1.0000
0.0000
0.0000
0.0000
→ Test with the digit-9 input pattern:
>> output=sim(net,[0;0;1;0;0;0;1;0;1;0;0;1;0;1;0;0;1;1;1;0;0;0;0;1;0;0;0;1;0;0;0;1;0;0;0])
output =
1.0000
0.0000
0.0000
1.0000