Multilayer Perceptron



Distribution 1.0
15 August 2015

Massih R. Amini

Université Grenoble Alpes
Laboratoire d'Informatique de Grenoble



Description


This program is an implementation of the Multi-Layer Perceptron.
Learning is performed by stochastic backpropagation [Rumelhart, 1986; Bottou, 1991], also described in [Amini, 2015, pp. 113-118].
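To make the update rule concrete, here is a minimal C sketch of one stochastic backpropagation step for a network with a single hidden layer, sigmoid units, and a squared-error loss. This is an illustration written for this note, not the source of MLP-Learn; the dimensions D, H and K, the function sgd_step, the 1-of-K target encoding, and the choice of activation and loss are all assumptions made here for clarity.

/* Illustrative sketch of one stochastic backpropagation step.
 * Single hidden layer, sigmoid units, squared-error loss.
 * NOT the code of MLP-Learn. Compile with: gcc sketch.c -lm   */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define D 16   /* input dimension (placeholder)  */
#define H 40   /* hidden units (placeholder)     */
#define K 26   /* output classes (placeholder)   */

static double sigmoid(double a) { return 1.0 / (1.0 + exp(-a)); }

/* One SGD step on example (x, y): forward pass, backward pass,
 * then a weight update with learning rate eta. Index D (resp. H)
 * of each weight row holds the bias.                            */
void sgd_step(double W1[H][D+1], double W2[K][H+1],
              const double x[D], const double y[K], double eta)
{
    double h[H+1], o[K], dh[H], dk[K];
    int i, j, k;

    /* forward: hidden layer */
    for (j = 0; j < H; j++) {
        double a = W1[j][D];                     /* bias */
        for (i = 0; i < D; i++) a += W1[j][i] * x[i];
        h[j] = sigmoid(a);
    }
    h[H] = 1.0;                                  /* bias input */
    /* forward: output layer */
    for (k = 0; k < K; k++) {
        double a = W2[k][H];                     /* bias */
        for (j = 0; j < H; j++) a += W2[k][j] * h[j];
        o[k] = sigmoid(a);
    }
    /* backward: output deltas for squared error */
    for (k = 0; k < K; k++)
        dk[k] = (o[k] - y[k]) * o[k] * (1.0 - o[k]);
    /* backward: hidden deltas (before W2 is modified) */
    for (j = 0; j < H; j++) {
        double s = 0.0;
        for (k = 0; k < K; k++) s += W2[k][j] * dk[k];
        dh[j] = s * h[j] * (1.0 - h[j]);
    }
    /* update: output weights (including bias), then hidden weights */
    for (k = 0; k < K; k++)
        for (j = 0; j <= H; j++)
            W2[k][j] -= eta * dk[k] * h[j];
    for (j = 0; j < H; j++) {
        for (i = 0; i < D; i++) W1[j][i] -= eta * dh[j] * x[i];
        W1[j][D] -= eta * dh[j];                 /* bias */
    }
}

int main(void)
{
    static double W1[H][D+1], W2[K][H+1];
    double x[D], y[K] = {0};
    int i, j, k;

    srand(0);
    /* small random initial weights */
    for (j = 0; j < H; j++)
        for (i = 0; i <= D; i++) W1[j][i] = 0.1 * rand() / RAND_MAX - 0.05;
    for (k = 0; k < K; k++)
        for (j = 0; j <= H; j++) W2[k][j] = 0.1 * rand() / RAND_MAX - 0.05;

    /* one dummy example: features in [0,1), target class 4 (1-of-K) */
    for (i = 0; i < D; i++) x[i] = (double)i / D;
    y[3] = 1.0;
    sgd_step(W1, W2, x, y, 0.1);
    printf("one stochastic update applied\n");
    return 0;
}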


Download and Installation


The program is free for scientific use. If you publish results based on this program, please acknowledge its use by citing:

Massih-Reza Amini. Apprentissage Machine: de la théorie à la pratique. Eyrolles, 2015.

The source code is developed on Linux with gcc and is available from:
http://ama.liglab.fr/~amini/MLP/MLP.tar.bz2

After downloading the file and unpacking it:

> bzip2 -cd MLP.tar.bz2 | tar xvf -

you need to compile the program in the new directory MLP/

> make

After compilation, two executables are created:

  • MLP-Learn (for training the model)
  • MLP-Test (for testing it)


Training and testing


Each example in the input files is represented by its class label in the set {1,...,K}, followed by its plain vector representation. The directory MLP/example/ contains two files (a training set and a test set) from the UCI repository.
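For illustration, with K = 26 classes and 16-dimensional examples (as in the LETTER data used below), a line of a training file could look like this (the values here are made up):

4 2 8 3 5 1 8 13 0 6 6 10 8 0 8 0 8

where the first token (4) is the class label and the remaining 16 tokens are the feature values.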


Train the model:
> MLP-Learn [options] input_file parameter_file

Options are:
-t   (int)   Maximum number of iterations (default 10000),
-a   (float) Learning rate (default 0.1),
-?   Help
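
For example, to train for at most 300000 iterations with a smaller learning rate, one could run the following (both flags as documented above; the value 0.05 is just an illustration):

> MLP-Learn -t 300000 -a 0.05 example/LETTER-Train Params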


Test the model:
> MLP-Test input_file parameter_file predictions_file


Example


> ./MLP-Learn -t 300000 example/LETTER-Train Params
Number of examples 16000, dimension: 16, number of classes: 26
Number of layers?
2
Number of neurons of the 1-th layer
40
Number of neurons of the 2-th layer
30
Training ....1500....3000....4500....6000....7500....9000....10500....12000....13500....15000....
[... progress counter printed every 1500 iterations ...]
294000....295500....297000....298500....300000

> MLP-Test example/LETTER-Test Params Predictions
Number of examples 4000, dimension: 16, number of classes: 26
Error=0.251250
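(Error here appears to be the test error rate, i.e., the fraction of misclassified examples: 0.25125 × 4000 = 1005 of the 4000 test examples.)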


Disclaimer

This program is publicly available for research use only. It must not be distributed for commercial use, and the author is not responsible for any (mis)use of this algorithm.


Bibliography



[Amini, 2015] Massih-Reza Amini. Apprentissage Machine: de la théorie à la pratique. Eyrolles, 2015.

[Bottou, 1991] Léon Bottou. Une approche théorique de l'apprentissage connexionniste : applications à la reconnaissance de la parole. PhD thesis, Université Paris XI, Orsay, France, 1991.

[Rumelhart, 1986] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning Internal Representations by Error Propagation. In David E. Rumelhart, James L. McClelland, and the PDP Research Group (editors), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations. MIT Press, 1986.