Dynamic Multilayer Perceptron

I have designed a completely dynamic Multilayer Perceptron, implemented entirely in MATLAB without using any Deep Learning Toolbox functions. You can give the MLP whatever shape you like, from a 1-hidden-layer model to a 1000-hidden-layer model to any depth in between, just by entering a few numbers at runtime. This frees you from adding fresh lines of code every time you want a single new layer, and it keeps the model easy and intuitive to use. It also occupies less space, since nothing layer-specific is allocated until the network architecture is decided.
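A minimal sketch of how such a runtime-shaped network can be represented, assuming the usual cell-array approach; the variable names, prompts, and layer sizes here are illustrative, not taken from this repository's code:

```matlab
% Illustrative sketch (not the repository's actual code): the whole
% architecture is a vector of layer sizes, so depth is just data.
nHidden = input('How many Hidden Layers should be there?\n');

sizes      = zeros(1, nHidden + 2);
sizes(1)   = 784;                 % input dimension (e.g. 28x28 images)
sizes(end) = 10;                  % number of output classes
for k = 1:nHidden
    sizes(k + 1) = input(sprintf( ...
        'How many neurons do you want in Hidden Layer %d? -> ', k));
end

% One weight matrix and bias vector per connection, held in cell
% arrays; no memory is allocated before the sizes are known.
L = numel(sizes) - 1;
W = cell(1, L);
b = cell(1, L);
for k = 1:L
    W{k} = 0.01 * randn(sizes(k + 1), sizes(k));   % small random init
    b{k} = zeros(sizes(k + 1), 1);
end
```

Because the forward and backward passes written against this structure just loop over numel(W), the same code works unchanged for any depth.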

Test for a single-hidden-layer MLP

100 neurons in the layer

Accuracy: 97.7%. Epochs: 49 (training ran until the decrement in loss, reported as "Development" in the log below, fell below 0.01).

Figure: training loss over iterations.

Process Flow (Runtime)

We will work with a Vanilla Neural Network with N Hidden Layers

How many Hidden Layers should be there?
1
ReLU Slope = .1
Learning Rate (usually start with .001) = .001
Regularization Parameter (b/w 0 and 1) = .9
How many Minimum No. of iterations would you like to have?
10
When would you like to stop the Training? When the Development becomes < .01 and Loss becomes < .15

Reading Training Data.....

Loading Training Data.....

Training.....

How many neurons do you want in 1st Hidden Layer? ->
100

Iteration No. 1 || Loss = 2.0742
Iteration No. 2 || Loss = 0.606292
Iteration No. 3 || Loss = 0.473446
Iteration No. 4 || Loss = 0.400623
Iteration No. 5 || Loss = 0.352264
Iteration No. 6 || Loss = 0.317295
Iteration No. 7 || Loss = 0.290461
Iteration No. 8 || Loss = 0.269446
Iteration No. 9 || Loss = 0.252274
Iteration No. 10 || Loss = 0.237645
Iteration No. 11 || Loss = 0.225146 || Development = 1.83655
Iteration No. 12 || Loss = 0.214243 || Development = 0.381147
Iteration No. 13 || Loss = 0.204752 || Development = 0.259202
Iteration No. 14 || Loss = 0.196311 || Development = 0.19587
Iteration No. 15 || Loss = 0.188877 || Development = 0.155953
Iteration No. 16 || Loss = 0.182174 || Development = 0.128417
Iteration No. 17 || Loss = 0.176341 || Development = 0.108286
Iteration No. 18 || Loss = 0.170984 || Development = 0.0931052
Iteration No. 19 || Loss = 0.165959 || Development = 0.0812894
Iteration No. 20 || Loss = 0.161423 || Development = 0.0716862
Iteration No. 21 || Loss = 0.157111 || Development = 0.0637226
Iteration No. 22 || Loss = 0.15324 || Development = 0.0571325
Iteration No. 23 || Loss = 0.149566 || Development = 0.0515129
Iteration No. 24 || Loss = 0.146343 || Development = 0.046745
Iteration No. 25 || Loss = 0.143349 || Development = 0.0425348
Iteration No. 26 || Loss = 0.140551 || Development = 0.0388256
Iteration No. 27 || Loss = 0.137952 || Development = 0.0357904
Iteration No. 28 || Loss = 0.135449 || Development = 0.0330318
Iteration No. 29 || Loss = 0.133279 || Development = 0.0305102
Iteration No. 30 || Loss = 0.131106 || Development = 0.0281441
Iteration No. 31 || Loss = 0.129083 || Development = 0.0260044
Iteration No. 32 || Loss = 0.1272 || Development = 0.0241562
Iteration No. 33 || Loss = 0.125373 || Development = 0.0223661
Iteration No. 34 || Loss = 0.123636 || Development = 0.0209696
Iteration No. 35 || Loss = 0.121968 || Development = 0.0197126
Iteration No. 36 || Loss = 0.120415 || Development = 0.0185829
Iteration No. 37 || Loss = 0.118886 || Development = 0.0175374
Iteration No. 38 || Loss = 0.117503 || Development = 0.0165626
Iteration No. 39 || Loss = 0.116145 || Development = 0.015776
Iteration No. 40 || Loss = 0.11486 || Development = 0.0149617
Iteration No. 41 || Loss = 0.11364 || Development = 0.0142232
Iteration No. 42 || Loss = 0.112481 || Development = 0.0135594
Iteration No. 43 || Loss = 0.111321 || Development = 0.0128923
Iteration No. 44 || Loss = 0.110215 || Development = 0.0123146
Iteration No. 45 || Loss = 0.109199 || Development = 0.0117525
Iteration No. 46 || Loss = 0.108202 || Development = 0.0112163
Iteration No. 47 || Loss = 0.107286 || Development = 0.0106847
Iteration No. 48 || Loss = 0.106363 || Development = 0.0102166
Iteration No. 49 || Loss = 0.105499 || Development = 0.00978167

Writing Training Data.....

Kindly check your Training Data files, the data has been written onto those files.

Reading Test Data.....
Loading Test Data.....

Predicting Test Values.....

ACCURACY = 97.7

Writing Test Data.....

Kindly check your Test Data files, the data has been written onto those files.

Pretty Good Model. Good Job :)
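Reading the log, the printed "Development" at iteration i matches the loss drop over the trailing ten iterations, loss(i-10) - loss(i-1): at iteration 11 above, 2.0742 - 0.237645 gives 1.83655, and at iteration 49, 0.116145 - 0.106363 gives 0.00978, which finally satisfies the < .01 criterion. A hedged sketch of a training loop with that stopping rule; trainOneEpoch is a hypothetical stand-in for the network's forward/backward pass, not a function from this repository:

```matlab
% Sketch of the run's hyperparameters and stopping rule as I read the
% transcript; trainOneEpoch is a hypothetical helper, not real code
% from this repository.
slope   = 0.1;     % leaky-ReLU slope for negative inputs
lr      = 0.001;   % learning rate
beta    = 0.9;     % regularization parameter (between 0 and 1)
minIter = 10;      % minimum iterations before the stop test
devTol  = 0.01;    % stop when Development < devTol ...
lossTol = 0.15;    % ... and Loss < lossTol

leakyRelu = @(z) max(z, slope * z);   % hidden-layer activation

maxIter  = 1000;
lossHist = zeros(1, maxIter);
for it = 1:maxIter
    lossHist(it) = trainOneEpoch(lr, beta, leakyRelu);  % hypothetical
    if it > minIter
        % Loss drop over the trailing 10 completed iterations.
        development = lossHist(it - 10) - lossHist(it - 1);
        fprintf('Iteration No. %d || Loss = %g || Development = %g\n', ...
                it, lossHist(it), development);
        if development < devTol && lossHist(it) < lossTol
            break;   % both criteria met: stop training
        end
    else
        fprintf('Iteration No. %d || Loss = %g\n', it, lossHist(it));
    end
end
```

With the thresholds above, this loop would stop at iteration 49 exactly as the transcript shows: Development first dips under .01 there while the loss is already under .15.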

300 neurons in the layer

Accuracy: 98.09%. Epochs: 49 (again, training ran until the decrement in loss fell below 0.01).

Figure: training loss over iterations (300-neuron run).

Process Flow (Runtime)

We will work with a Vanilla Neural Network with N Hidden Layers

How many Hidden Layers should be there?
1
ReLU Slope = .1
Learning Rate (usually start with .001) = .001
Regularization Parameter (b/w 0 and 1) = .9
How many Minimum No. of iterations would you like to have?
10
When would you like to stop the Training? When the Development becomes < .01 and Loss becomes < .1

Reading Training Data.....

Loading Training Data.....

Training.....

How many neurons do you want in 1st Hidden Layer? ->
300

Iteration No. 1 || Loss = 2.01147
Iteration No. 2 || Loss = 0.583974
Iteration No. 3 || Loss = 0.453826
Iteration No. 4 || Loss = 0.38161
Iteration No. 5 || Loss = 0.33311
Iteration No. 6 || Loss = 0.297333
Iteration No. 7 || Loss = 0.269427
Iteration No. 8 || Loss = 0.246675
Iteration No. 9 || Loss = 0.227657
Iteration No. 10 || Loss = 0.211785
Iteration No. 11 || Loss = 0.198199 || Development = 1.79968
Iteration No. 12 || Loss = 0.186537 || Development = 0.385776
Iteration No. 13 || Loss = 0.176348 || Development = 0.267289
Iteration No. 14 || Loss = 0.167286 || Development = 0.205262
Iteration No. 15 || Loss = 0.159315 || Development = 0.165824
Iteration No. 16 || Loss = 0.152105 || Development = 0.138018
Iteration No. 17 || Loss = 0.145782 || Development = 0.117322
Iteration No. 18 || Loss = 0.139952 || Development = 0.100893
Iteration No. 19 || Loss = 0.134713 || Development = 0.0877052
Iteration No. 20 || Loss = 0.130007 || Development = 0.0770719
Iteration No. 21 || Loss = 0.125662 || Development = 0.068192
Iteration No. 22 || Loss = 0.121718 || Development = 0.0608746
Iteration No. 23 || Loss = 0.118023 || Development = 0.0546295
Iteration No. 24 || Loss = 0.114634 || Development = 0.0492631
Iteration No. 25 || Loss = 0.111544 || Development = 0.0446812
Iteration No. 26 || Loss = 0.108625 || Development = 0.0405608
Iteration No. 27 || Loss = 0.10588 || Development = 0.0371575
Iteration No. 28 || Loss = 0.103323 || Development = 0.0340711
Iteration No. 29 || Loss = 0.100963 || Development = 0.0313903
Iteration No. 30 || Loss = 0.098703 || Development = 0.0290432
Iteration No. 31 || Loss = 0.0966293 || Development = 0.0269594
Iteration No. 32 || Loss = 0.0946448 || Development = 0.0250889
Iteration No. 33 || Loss = 0.0927668 || Development = 0.0233783
Iteration No. 34 || Loss = 0.0909763 || Development = 0.0218672
Iteration No. 35 || Loss = 0.0893457 || Development = 0.020568
Iteration No. 36 || Loss = 0.0877507 || Development = 0.0192788
Iteration No. 37 || Loss = 0.0862552 || Development = 0.0181297
Iteration No. 38 || Loss = 0.0848304 || Development = 0.017068
Iteration No. 39 || Loss = 0.0834302 || Development = 0.016133
Iteration No. 40 || Loss = 0.0820864 || Development = 0.0152728
Iteration No. 41 || Loss = 0.0808237 || Development = 0.0145429
Iteration No. 42 || Loss = 0.0796308 || Development = 0.0138211
Iteration No. 43 || Loss = 0.0785066 || Development = 0.013136
Iteration No. 44 || Loss = 0.0774269 || Development = 0.0124697
Iteration No. 45 || Loss = 0.0764383 || Development = 0.0119188
Iteration No. 46 || Loss = 0.0754907 || Development = 0.0113124
Iteration No. 47 || Loss = 0.0745546 || Development = 0.0107645
Iteration No. 48 || Loss = 0.0737181 || Development = 0.0102758
Iteration No. 49 || Loss = 0.0729069 || Development = 0.00971207

Writing Training Data.....

Kindly check your Training Data files, the data has been written onto those files.

Reading Test Data.....
Loading Test Data.....

Predicting Test Values.....

ACCURACY = 98.09

Writing Test Data.....

Kindly check your Test Data files, the data has been written onto those files.

Pretty Good Model. Good Job :)
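The ACCURACY figure is the percentage of test examples whose predicted class matches the label. A minimal sketch of that computation, assuming 10 x N matrices of class scores and one-hot labels; the names scores and Ytest are my assumptions, not the repository's:

```matlab
% Hypothetical evaluation sketch: scores and Ytest are assumed to be
% 10 x N matrices (class scores and one-hot labels per test example).
[~, predicted] = max(scores, [], 1);   % most probable class per column
[~, actual]    = max(Ytest,  [], 1);   % true class from the one-hot label
accuracy = 100 * mean(predicted == actual);
fprintf('ACCURACY = %g\n', accuracy);
```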

References

For more, check out my other repo.
