scilab-ann-0.4.2.4/ 0000755 0001750 0001750 00000000000 11441407762 014556 5 ustar sylvestre sylvestre scilab-ann-0.4.2.4/readme.txt 0000644 0001750 0001750 00000002024 11441407762 016552 0 ustar sylvestre sylvestre ANN Toolbox ver. 0.4.2.4 for Scilab 5.3
=======================================
This is a toolbox for artificial neural networks,
based on my developments described in the "Matrix ANN" book
(under development); if interested, send me an email at
r.hristev@phys.canterbury.ac.nz
Current features:
- Only layered feedforward networks are supported *directly* at the moment
(for others use the "hooks" provided)
- Unlimited number of layers
- Unlimited number of neurons per layer, set separately for each layer
- User defined activation function (defaults to logistic)
- User defined error function (defaults to SSE)
- Algorithms implemented so far:
* standard (vanilla) with or without bias, on-line or batch
* momentum with or without bias, on-line or batch
* SuperSAB with or without bias, on-line or batch
* Conjugate gradients
* Jacobian computation
 * Computation of the product between a "vector" and the Hessian
- Some helper functions provided
For full descriptions, start with the top-level "ANN" man page.
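
A minimal usage sketch (illustrative only -- it assumes a small 2-2-1 network,
the default logistic activation and SSE error; see the man pages for the exact
calling conventions):

    N  = [2, 2, 1];                    // neurons per layer
    W  = ann_FF_init(N);               // random weights (biases allocated)
    x  = [0 0 1 1; 0 1 0 1];           // input patterns, one per column
    t  = [0 1 1 0];                    // targets, one per column
    lp = [0.5, 0];                     // learning rate and error threshold
    W  = ann_FF_Std_online(x, t, N, W, lp, 500);   // train for 500 epochs
    y  = ann_FF_run(x, N, W)           // run the trained network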
scilab-ann-0.4.2.4/.gitignore 0000644 0001750 0001750 00000000255 11441407762 016550 0 ustar sylvestre sylvestre #
# Generated files (common)
#
jar/
*.bin
lib
names
loader.sce
cleaner.sce
#
# Generated files (Windows)
#
*.bak
master_help.xml
scilab_*_help
#
# Generated files (Linux)
#
scilab-ann-0.4.2.4/license.txt 0000644 0001750 0001750 00000001710 11441407762 016740 0 ustar sylvestre sylvestre -----------------------------------------------------------------------------
                              COPYRIGHT (C)
The programs and associated files in this toolkit
are released under GNU Public Licence version 2
Copyright 1998, 2001 (C) Ryurick M. Hristev
updated by Allan CORNET for Scilab 5.x (2008)
This toolbox is normal part of "Matrix ANN" book (covered by the same license)
but separate distribution is explicitly allowed.
-----------------------------------------------------------------------------
scilab-ann-0.4.2.4/changelog.txt 0000644 0001750 0001750 00000005275 11441407762 017257 0 ustar sylvestre sylvestre ChangeLog: ANN Toolbox --- for Scilab
==========================
=====================================================================
From 0.4.2.3 -> 0.4.2.4 :
- compatibility with Scilab 5.3.0
(Allan CORNET , DIGITEO , 2010)
=====================================================================
From 0.4.2.2 -> 0.4.2.3 :
- compatibility with Scilab 5.2.0
(Allan CORNET , DIGITEO , 2009)
=====================================================================
From 0.4.2.1 -> 0.4.2.2 :
- remove some hardcoded paths
(Allan CORNET , DIGITEO , 2008)
=====================================================================
From 0.4.2 -> 0.4.2.1 :
- Adjustments for Scilab 5.x
macros and help updated
uses standard architecture of toolboxes for scilab 5.0
(Allan CORNET , INRIA , 2008)
=====================================================================
From 0.4.1 -> 0.4.2
Minor adjustments for Scilab 2.6:
- install.sh script which will install/uninstall the toolbox
(system-wide or in selected dir)
- manual updates (macros change, contents unchanged)
From 0.4 -> 0.4.1
Minor adjustments for the new Scilab 2.5:
- new Makefile
- README updated
=====================================================================
From 0.3 -> 0.4
Function names have been rationalized to a more suitable nomenclature.
Discrete time loops are now performed inside training engines.
Some algorithms added.
WHAT YOU HAVE TO DO TO CONVERT YOUR OLD SCRIPTS:
- rename the functions according to the new nomenclature.
- some training engines now perform the discrete time loops inside,
so they require a new parameter T and external looping has to be
removed. Also note that the format of the optional ex parameter has also
changed in order to accommodate this.
=====================================================================
From 0.2x -> 0.3
This toolkit now uses hypermatrices, available only in Scilab 2.4
and upward. This makes it possible to easily add future algorithms.
Also: the patterns are now represented by column vectors and weight
matrices are as one would expect, i.e. as they are currently used
in main ANN literature.
For these reasons scripts written for previous version(s) of this toolkit
will not work on this one. I apologize for the inconvenience. The introduction
of hypermatrices in Scilab 2.4 will (PROBABLY) make any other major changes
in future versions of this toolkit unnecessary (from the user's scripts
point of view).
WHAT YOU HAVE TO DO TO CONVERT YOUR OLD SCRIPTS:
- transpose the inputs and targets, e.g. x=x'
and nothing else if you don't touch the weight matrix;
- if you manipulated the weight matrix then convert that part to the
new format of W, see the man pages for details.
scilab-ann-0.4.2.4/macros/ 0000755 0001750 0001750 00000000000 11441407762 016042 5 ustar sylvestre sylvestre scilab-ann-0.4.2.4/macros/ann_FF_grad_BP_nb.sci 0000644 0001750 0001750 00000004536 11441407762 021736 0 ustar sylvestre sylvestre function grad_E = ann_FF_grad_BP_nb(x, t, N, W, c, af, err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Calculate the error gradient considering all patterns
// through a backpropagation procedure
// this function is designed for networks without bias
// see ANN_FF (help)
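// Illustrative example (a sketch, not from the original docs; it assumes a
// bias-free 2-3-1 network and training patterns stored as columns):
//   N = [2, 3, 1];
//   W = ann_FF_init_nb(N);
//   x = [0 0 1 1; 0 1 0 1];   t = [0 1 1 0];
//   grad_E = ann_FF_grad_BP_nb(x, t, N, W);   // gradient over all patterns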
[lsh,rsh] = argn(0);
// define default parameters if necessary
if rsh < 5, c = 0, end;
if rsh < 6, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 7, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of layers
L = size(N, 'c');
// ... and patterns
P = size(x,'c');
// initialize "z" to avoid resizing
z = zeros(max(N), L);
// initialize grad_E, W is a hypermatrix, grad_E have same layout
grad_E = hypermat(size(W)');
// calculate grad_E
// go through all patterns
for p = 1 : P
// find all neuronal outputs (activation) for current input pattern
// first "z" column is exactly "x(:,p)"
z(1:N(1),1) = x(:,p);
for l = 2 : L
// first calculate total input (as column vector) ...
z(1:N(l),l) = W(1:N(l), 1:N(l-1),l-1) * z(1:N(l-1), l-1);
// ... then activation
execstr('z(1:N(l),l) = ' + af(1) + '(z(1:N(l),l))');
end;
// now for layer "L" (last), requiring special treatment on "err_dz"
// "err_dz" for output layer, don't propagate smaller than c
execstr('err_dz = clean(' + err_deriv_y + '(z(1:N(L),L),t(:,p)), c)');
// "deriv_af" for output layer
execstr('deriv_af = ' + af(2) + '(z(1:N(L),L))');
// "err_dz_deriv_af" product is used twice
err_dz_deriv_af = err_dz .* deriv_af;
// adding contribution of pattern p
// using the transposed of z vector here
grad_E(1:N(L), 1:N(L-1), L-1) = ...
grad_E(1:N(L), 1:N(L-1), L-1) + ...
err_dz_deriv_af * z(1:N(L-1), L-1)';
// backpropagate
for l = L-1 : -1 : 2
// new "err_dz" based on previous one
// transpose two vectors instead of W
err_dz = (err_dz_deriv_af' * W(1:N(l+1), 1:N(l), l))';
// new "deriv_af"
execstr('deriv_af = ' + af(2) + '(z(1:N(l),l))');
// same as for layer "L", "err_dz_deriv_af" also used on next loop above
err_dz_deriv_af = err_dz .* deriv_af;
grad_E(1:N(l), 1:N(l-1), l-1) = ...
grad_E(1:N(l), 1:N(l-1), l-1) + ...
err_dz_deriv_af * z(1:N(l-1), l-1)';
end;
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_grad.sci 0000644 0001750 0001750 00000003315 11441407762 020670 0 ustar sylvestre sylvestre function grad_E = ann_FF_grad(x,t,N,W,dW,af,ef)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Calculates the error gradient following a finite difference procedure,
// i.e. perturbing each weight in turn;
// used for --- testing --- purposes only, as it is much slower than the BP algorithm.
// The gradient is calculated over all patterns in "x" and "t"
// see ANN_FF (help)
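// Illustrative check (a sketch; dW is the finite-difference step, assumed
// small, and x, t, N, W are as for ann_FF_grad_BP):
//   g_fd = ann_FF_grad(x, t, N, W, 1e-5);   // finite differences
//   g_bp = ann_FF_grad_BP(x, t, N, W);      // backpropagation
//   // the two hypermatrices should agree to within O(dW^2)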
[lsh,rsh] = argn(0);
// define optional parameters if necessary
if rsh < 6, af = 'ann_log_activ', end;
if rsh < 7, ef = 'ann_sum_of_sqr', end;
// create the return matrix
grad_E = hypermat(size(W)');
// rl - run between layers, parameter for ann_FF_run function
rl = [2,size(N,'c')];
// for each pattern
for p = 1 : size(x,'c')
// for each layer
for l = 2 : size(N,'c')
// for each neuron in layer
for n = 1 : N(l)
// for each connection to previous layer
for i = 1 : N(l-1)+1
// hold the old value of W
temp = W(n,i,l-1);
// change W value
W(n,i,l-1) = temp - dW;
// run the net
y = ann_FF_run(x(:,p),N,W,rl,af);
// calculate new error, to the "left"
execstr('err_n = ' + ef + '(y,t(:,p))');
// change W value
W(n,i,l-1) = temp + dW;
// run the net
y = ann_FF_run(x(:,p),N,W,rl,af);
// calculate new error, to the "right"
execstr('err_p = ' + ef + '(y,t(:,p))');
// "2" because \Delta w = 2 * dW
grad_E(n,i,l-1) = ...
grad_E(n,i,l-1) + (err_p - err_n) / (2 * dW);
// restore W
W(n,i,l-1) = temp;
end;
end;
end;
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Hess.sci 0000644 0001750 0001750 00000006472 11441407762 020664 0 ustar sylvestre sylvestre function H = ann_FF_Hess(x, t, N, W, dW, dW2, af, ef)
// This file is part of:
// ANN Toolbox for Scilab
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public Licence version 2
// Calculates the Hessian
// using a finite difference procedure by perturbing two weights
// used for --- testing --- purposes only, as it is very slow
// The Hessian is calculated considering the whole set of patterns
// see ANN_FF (help)
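// Illustrative example (a sketch; dW and dW2 are the finite-difference
// steps, assumed small; x, t, N, W as for the other ann_FF_* functions):
//   H = ann_FF_Hess(x, t, N, W, 1e-4, 1e-2);
//   // H(k1,i1,l1-1,k2,i2,l2-1) approximates the second derivative of E
//   // with respect to W(k1,i1,l1-1) and W(k2,i2,l2-1)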
[lsh,rsh] = argn(0);
// define default parameters if necessary
if rsh < 7, af = 'ann_log_activ', end;
if rsh < 8, ef = 'ann_sum_of_sqr', end;
// no. of layers
L = size(N,'c');
// rl run between layers, parameter for ann_FF_run function
rl = [2, size(N,'c')];
// create the return hypermatrix,
// layout is of type W .*. W (NOT W .*. W')
H = hypermat([size(W)', size(W)']);
// first weights are W(k1,i1,l1-1), second ones are W(k2,i2,l2-1)
// for each layer
// WARNING: THIS WILL NOT CALCULATE CORRECTLY THE "DIAGONAL" ELEMENTS
for l1 = 2 : L, for l2 = 2 : L
// for each neuron in layer
for k1 = 1 : N(l1), for k2 = 1 : N(l2)
// for each connection from previous layer
for i1 = 1 : N(l1-1)+1, for i2 = 1 : N(l2-1)+1
// hold original weight values
temp1 = W(k1,i1,l1-1);
temp2 = W(k2,i2,l2-1);
// first change: ++
W(k1,i1,l1-1) = temp1 + dW;
W(k2,i2,l2-1) = temp2 + dW;
y = ann_FF_run(x,N,W,rl,af);
execstr('err1 = ' + ef + '(y,t)');
// second change -+
W(k1,i1,l1-1) = temp1 - dW;
W(k2,i2,l2-1) = temp2 + dW;
y = ann_FF_run(x,N,W,rl,af);
execstr('err2 = ' + ef + '(y,t)');
// third change +-
W(k1,i1,l1-1) = temp1 + dW;
W(k2,i2,l2-1) = temp2 - dW;
y = ann_FF_run(x,N,W,rl,af);
execstr('err3 = ' + ef + '(y,t)');
// fourth change --
W(k1,i1,l1-1) = temp1 - dW;
W(k2,i2,l2-1) = temp2 - dW;
y = ann_FF_run(x,N,W,rl,af);
execstr('err4 = ' + ef + '(y,t)');
// restore weights
W(k1,i1,l1-1) = temp1;
W(k2,i2,l2-1) = temp2;
// calculate hessian term
// "4" factor because (\Delta W)^2 = (2 dW)^2
H(k1,i1,l1-1,k2,i2,l2-1) = ...
(err1 - err2 - err3 + err4) / (4 * dW^2);
end, end;
end, end;
end, end;
// NOW THE DIAGONAL ELEMENTS
// (avoid "if"-s above, it's too slow)
for l = 2 : L
// for each neuron in layer
for k = 1 : N(l)
// for each connection from previous layer
for i = 1 : N(l-1)+1
// hold original weight values
temp = W(k,i,l-1);
// first change: +
W(k,i,l-1) = temp + (1 + dW2) * dW;
y = ann_FF_run(x,N,W,rl,af);
execstr('err1 = ' + ef + '(y,t)');
// second change +
W(k,i,l-1) = temp + (1 - dW2) * dW;
y = ann_FF_run(x,N,W,rl,af);
execstr('err2 = ' + ef + '(y,t)');
err_p = (err1 - err2) / (2 * dW2 * dW);
// first change: -
W(k,i,l-1) = temp - (1 - dW2) * dW;
y = ann_FF_run(x,N,W,rl,af);
execstr('err1 = ' + ef + '(y,t)');
// second change -
W(k,i,l-1) = temp - (1 + dW2) * dW;
y = ann_FF_run(x,N,W,rl,af);
execstr('err2 = ' + ef + '(y,t)');
err_n = (err1 - err2) / (2 * dW2 * dW);
// restore weight
W(k,i,l-1) = temp;
// calculate hessian term
H(k,i,l-1,k,i,l-1) = ...
(err_p - err_n) / (2 * dW);
end;
end;
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Std_online.sci 0000644 0001750 0001750 00000001760 11441407762 022053 0 ustar sylvestre sylvestre function W = ann_FF_Std_online(x, t, N, W, lp, T, af, ex, err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN, including biases
// based on backpropagation algorithm.
// see ANN_FF (help)
// "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// define "af", "ex" and "err_deriv_y" if necessary
if rsh < 7, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 8, ex = [" "," "], end;
if rsh < 9, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of patterns
P = size(x,'c');
// repeat T times
for time = 1 : T
// go through all patterns, one at a time
for p = 1 : P
// find gradient
grad_E = ann_FF_grad_BP(x(:,p), t(:,p), N, W, lp(2), af, err_deriv_y);
// update weights
W = W - lp(1) * grad_E;
// go trough "ex"
execstr(ex(1));
end;
execstr(ex(2));
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_run_nb.sci 0000644 0001750 0001750 00000002302 11441407762 021231 0 ustar sylvestre sylvestre function y = ann_FF_run_nb(x, N, W, l, af)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// runs the network,
// with input pattern(s) "x" injected al layer "l(1)"
// returning the activation at layer "l(2)"
// (defaults to whole network)
// this function is designed for networks without bias
// see ANN_FF (help)
// "l" and "af" are optional
[lsh, rsh] = argn(0);
// "l" defaults to whole network
if rsh < 4, l = [2, size(N,'c')], end;
// "af" defaults to logistic activation function
if rsh < 5, af = 'ann_log_activ', end;
// initialize "y"
y = zeros(N(l(2)), size(x,'c'));
// go through all patterns
for p = 1 : size(x,'c')
// first "input" layer uses "x(:,p)" and calculate total input ...
z = W(1:N(l(1)), 1:N(l(1)-1), l(1)-1) * x(:,p);
// ... then activation
execstr("z = " + af + "(z)");
// propagate, same as above but use "z"
for ll = l(1)+1 : l(2)
// use old "z" to find total input ...
z = W(1:N(ll), 1:N(ll-1), ll-1) * z;
// ... then activation
execstr("z = " + af + "(z)");
end;
// collect data
y(:,p) = z;
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Jacobian_BP.sci 0000644 0001750 0001750 00000002562 11441407762 022045 0 ustar sylvestre sylvestre function J = ann_FF_Jacobian_BP(x,N,W,af)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// calculate the Jacobian following a backpropagation procedure
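// Illustrative example (a sketch; x holds input patterns as columns, N and W
// describe a network with biases):
//   J = ann_FF_Jacobian_BP(x, N, W);
//   // J(:,:,p) is the Jacobian dy/dx for pattern p, of size N(L) x N(1)
//   // where L is the last layer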
[lsh,rsh] = argn(0);
// optional parameters
if rsh < 4, af = ["ann_log_activ", "ann_d_log_activ"], end;
// no. of layers
L = size(N,'c');
// ... and patterns
P = size(x,'c');
// create the hypermatrix to hold (grad_{a(\ell)} z^\T)^\T
grad_a_z = hypermat([N(L), max(N(2:L)), L-1]);
// the matrix containing the activities
d_f = zeros(max(N(2:L)), L-1);
// initialize J
J = hypermat([N(L),N(1),P]);
// for all patterns
for p = 1 : P
// forward propagation
// initial activation
z = x(:,p);
for l = 1 : L-1
// find next activation, use extended z, i.e. bias
execstr('z = ' + af(1) + '(W(1:N(l+1), 1:N(l)+1, l) * [1;z]);');
// and store its derivative
execstr('d_f(1:N(l+1),l) = ' + af(2) + '(z)');
end;
// backpropagation
// initial values
grad_a_z(:, 1:N(L), L-1) = diag(d_f(1:N(L),L-1));
for l = L-2 : -1 : 1
grad_a_z(:, 1:N(l+1), l) = ...
(grad_a_z(:, 1:N(l+2), l+1) * ...
W(1:N(l+2), 2:N(l+1)+1, l+1)) .* ...
(ones(N(L),1) * d_f(1:N(l+1),l)')
end;
J(:,:,p) = grad_a_z(:, 1:N(2),1) * W(1:N(2), 2:N(1)+1, 1);
end;
endfunction
scilab-ann-0.4.2.4/macros/buildmacros.sce 0000644 0001750 0001750 00000000426 11441407762 021044 0 ustar sylvestre sylvestre // ====================================================================
// Allan CORNET
// DIGITEO 2010
// INRIA 2008
// ANN Toolbox
// ====================================================================
tbx_build_macros(TOOLBOX_NAME,get_absolute_file_path("buildmacros.sce")); scilab-ann-0.4.2.4/macros/ann_FF_SSAB_batch.sci 0000644 0001750 0001750 00000003221 11441407762 021640 0 ustar sylvestre sylvestre function [W,Delta_W_old,Delta_W_oldold,mu]=ann_FF_SSAB_batch(x,t,N,W,lp,Delta_W_old,Delta_W_oldold,T,mu,af,ex,err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN, including biases
// based on backpropagation with SuperSAB algorithm (batch version).
// see ANN_FF (help)
// "mu", "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// size of W hypermatrix, required in several places
size_W = size(W)';
// define "mu", "af", "ex" and "err_deriv_y" if necessary
if rsh < 9, mu = lp(1) * hypermat(size_W,ones(prod(size_W),1)), end;
if rsh < 10, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 11, ex = " ", end;
if rsh < 12, err_deriv_y = 'ann_d_sum_of_sqr', end;
// repeat T times
for time = 1 : T
// error gradient
grad_E = ann_FF_grad_BP(x,t,N,W,lp(2),af,err_deriv_y);
// sign hypermatrix
M = sign(sign(Delta_W_old .* Delta_W_oldold) ...
+ hypermat(size_W, ones(prod(size_W),1)));
// mu hypermatrix update (former lp(1))
mu = ( (lp(4) - lp(5)) * M ...
+ lp(5) * hypermat(size_W, ones(prod(size_W),1)) ) .* mu;
// update weights
// (the new Delta_W_old ! ;) will become old after weight update,
// i.e on next loop or next call to this function)
// same for Delta_W_oldold
Delta_W_oldold = Delta_W_old;
Delta_W_old = ...
- mu .* grad_E ...
- (lp(3) * Delta_W_old) ...
.* (hypermat(size_W, ones(prod(size_W),1)) - M);
W = W + Delta_W_old;
// execute "ex"
execstr(ex);
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Std_online_nb.sci 0000644 0001750 0001750 00000002032 11441407762 022523 0 ustar sylvestre sylvestre function W = ann_FF_Std_online_nb(x, t, N, W, lp, T, af, ex, err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN,
// based on backpropagation algorithm.
// this function is designed for networks without bias
// see ANN_FF (help)
// "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// define "af", "ex" and "err_deriv_y" if necessary
if rsh < 7, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 8, ex = [" "," "], end;
if rsh < 9, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of patterns
P = size(x,'c');
// repeat T times
for time = 1 : T
// go through all patterns, one at a time
for p = 1 : P
// find gradient
grad_E = ann_FF_grad_BP_nb(x(:,p), t(:,p), N, W, lp(2), af, err_deriv_y);
// update weights
W = W - lp(1) * grad_E;
// execute "ex"
execstr(ex(1));
end;
execstr(ex(2));
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_d_log_activ.sci 0000644 0001750 0001750 00000000566 11441407762 021657 0 ustar sylvestre sylvestre function z = ann_d_log_activ(y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// calculates the derivative of logistic activation function,
// given the actual value of the function
// see ANN_GEN (help)
z = y .* (1 - y);
endfunction scilab-ann-0.4.2.4/macros/ann_FF_SSAB_online.sci 0000644 0001750 0001750 00000003450 11441407762 022047 0 ustar sylvestre sylvestre function [W,Delta_W_old,Delta_W_oldold,mu]=ann_FF_SSAB_online(x,t,N,W,lp,Delta_W_old,Delta_W_oldold,T,mu,af,ex,err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN, including biases
// based on backpropagation with SuperSAB algorithm.
// see ANN_FF (help)
// "mu", "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// size of W hypermatrix, required in several places
size_W = size(W)';
// define "mu", "af", "ex" and "err_deriv_y" if necessary
if rsh < 9, mu = lp(1) * hypermat(size_W,ones(prod(size_W),1)), end;
if rsh < 10, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 11, ex = [" "," "], end;
if rsh < 12, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of patterns
P = size(x,'c');
// repeat T times
for time = 1 : T
// go through all patterns
for p = 1 : P
// error gradient
grad_E = ann_FF_grad_BP(x(:,p),t(:,p),N,W,lp(2),af,err_deriv_y);
// sign hypermatrix
M = sign(sign(Delta_W_old .* Delta_W_oldold) ...
+ hypermat(size_W, ones(prod(size_W),1)));
// mu hypermatrix update (former lp(1))
mu = ( (lp(4) - lp(5)) * M ...
+ lp(5) * hypermat(size_W, ones(prod(size_W),1)) ) .* mu;
// update weights
// (the new Delta_W_old ! ;) will become old after weight update,
// i.e on next loop or next call to this function)
// same for Delta_W_oldold
Delta_W_oldold = Delta_W_old;
Delta_W_old = ...
- mu .* grad_E ...
- (lp(3) * Delta_W_old) ...
.* (hypermat(size_W, ones(prod(size_W),1)) - M);
W = W + Delta_W_old;
// execute "ex"
execstr(ex(1));
end;
execstr(ex(2));
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Mom_online.sci 0000644 0001750 0001750 00000006112 11441407762 022045 0 ustar sylvestre sylvestre function [W,Delta_W_old]=ann_FF_Mom_online(x,t,N,W,lp,T,Delta_W_old,af,ex,err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN, including biases
// based on backpropagation algorithm with momentum.
// see ANN_FF (help)
// "Delta_W_old", "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// define "Delta_W_old", "af", "ex" and "err_deriv_y" if necessary
if rsh < 7, Delta_W_old = hypermat(size(W)'), end;
if rsh < 8, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 9, ex = [" "," "], end;
if rsh < 10, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of layers
L = size(N, 'c');
// ... and patterns
P = size(x, 'c');
// initialize "z" to avoid resizing
z = zeros(max(N), L);
// grad_E_mod is a hypermatrix with the same layout as W
// because of the flat spot elimination grad_E is calculated here
grad_E_mod = hypermat(size(W)');
// repeat T times
for time = 1 : T
// go through all patterns
for p = 1 : P
// find all neuronal outputs (activation) for current input pattern
// first "z" column is exactly "x(:,p)"
z(1:N(1),1) = x(:,p);
for l = 2 : L
// adding "1" to "z(1:N(l-1),l-1)" to represent bias
// first calculate total input (as column vector) ...
z(1:N(l),l) = W(1:N(l), 1:N(l-1)+1, l-1) ...
* [1; z(1:N(l-1),l-1)];
// ... then activation
execstr('z(1:N(l),l) = ' + af(1) + '(z(1:N(l),l))');
end;
// now for layer "L" (last), requiring special treatment on "err_dz"
// "err_dz" for output layer, don't propagate smaller than lp(2)
execstr('err_dz = clean(' + err_deriv_y + '(z(1:N(L),L),t(:,p)), lp(2))');
// "deriv_af" for output layer, also add flat spot elimination
execstr('deriv_af = ' + af(2) + '(z(1:N(L),L))' + ...
' + lp(4) * ones(N(L),1)');
// "err_dz_deriv_af" product is used twice
err_dz_deriv_af = err_dz .* deriv_af;
// using the transposed of extended z vector here
grad_E_mod(1:N(L), 1:N(L-1)+1, L-1) = ...
err_dz_deriv_af * [1, z(1:N(L-1), L-1)'];
// backpropagate
for l = L-1 : -1 : 2
// new "err_dz" based on previous one
// transpose two vectors instead of W
err_dz = (err_dz_deriv_af' * W(1:N(l+1), 2:N(l)+1, l))';
// new "deriv_af"
execstr('deriv_af = ' + af(2) + '(z(1:N(l),l))' + ...
' + lp(4) * ones(N(l),1)');
// same as for layer "L", "err_dz_deriv_af" also used on next loop above
err_dz_deriv_af = err_dz .* deriv_af;
grad_E_mod(1:N(l), 1:N(l-1)+1, l-1) = ...
err_dz_deriv_af * [1, z(1:N(l-1), l-1)'];
end;
// update weights
// (the new Delta_W_old ! ;) will become old after weight update,
// i.e. on next loop or next call to this function)
Delta_W_old = -lp(1) * grad_E_mod + lp(3) * Delta_W_old;
W = W + Delta_W_old;
// go trough "ex"
execstr(ex(1));
end;
execstr(ex(2));
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Std_batch.sci 0000644 0001750 0001750 00000001560 11441407762 021646 0 ustar sylvestre sylvestre function W = ann_FF_Std_batch(x, t, N, W, lp, T, af, ex, err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN, including biases
// based on standard backpropagation algorithm (batch version).
// see ANN_FF (help)
// "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// define "af", "ex" and "err_deriv_y" if necessary
if rsh < 7, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 8, ex = " ", end;
if rsh < 9, err_deriv_y = 'ann_d_sum_of_sqr', end;
// repeat T times
for time = 1 : T
// find gradient
grad_E = ann_FF_grad_BP(x, t, N, W, lp(2), af, err_deriv_y);
// update weights
W = W - lp(1) * grad_E;
// go trough "ex"
execstr(ex);
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_sum_of_sqr.sci 0000644 0001750 0001750 00000000526 11441407762 021556 0 ustar sylvestre sylvestre function E = ann_sum_of_sqr(y,t)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// calculates sum-of-squares error between "y" and "t" patterns
// see ANN_GEN (help)
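// Illustrative example (a sketch):
//   ann_sum_of_sqr([1; 0], [0; 0])   // returns 0.5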
E = sum((y-t) .^ 2) / 2;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_run.sci 0000644 0001750 0001750 00000002400 11441407762 020551 0 ustar sylvestre sylvestre function y = ann_FF_run(x, N, W, l, af)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// runs the network, with biases,
// with input pattern(s) "x" injected al layer "l(1)"
// returning the activation at layer "l(2)"
// (defaults to whole network)
// see ANN_FF (help)
// "l" and "af" are optional
[lsh, rsh] = argn(0);
// "l" defaults to whole network
if rsh < 4, l = [2,size(N,'c')], end;
// "af" defaults to logistic activation function
if rsh < 5, af = 'ann_log_activ', end;
// no. of present patterns
P = size(x,'c');
// initialize "y"
y = zeros(N(l(2)), P);
// go through all patterns
for p = 1 : P
// first "input" layer uses "x(:,p)" and calculate total input ...
// (an "1" is added to the input vector to represent bias)
z = W(1:N(l(1)), 1:N(l(1)-1)+1, l(1)-1) * [1; x(:,p)];
// ... then activation
execstr("z = " + af + "(z)");
// propagate, same as above but use "z"
for ll = l(1)+1 : l(2)
// ... use old "z" to find total input
z = W(1:N(ll), 1:N(ll-1)+1, ll-1) * [1; z];
// ... then compute activation
execstr("z = " + af + "(z)");
end;
// collect data
y(:,p) = z;
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Jacobian.sci 0000644 0001750 0001750 00000001450 11441407762 021457 0 ustar sylvestre sylvestre function J = ann_FF_Jacobian(x,N,W,dx,af)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// calculates the Jacobian using a finite differences procedure
[lsh,rsh] = argn(0);
// optional parameters
if rsh < 5, af = "ann_log_activ", end;
// required for ann_FF_run
l = [2,size(N,'c')];
// no. of patterns
P = size(x,'c');
// initialize J
J = hypermat([N(size(N,'c')), N(1), P]);
// for each pattern
for p = 1 : P
// for each input
for i = 1 : N(1)
temp = x(i,p);
x(i,p) = temp + dx;
y_p = ann_FF_run(x(:,p), N, W, l, af);
x(i,p) = temp - dx;
y_n = ann_FF_run(x(:,p), N, W, l, af);
J(:,i,p) = (y_p - y_n) / (2 * dx);
end;
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_ConjugGrad.sci 0000644 0001750 0001750 00000004577 11441407762 022011 0 ustar sylvestre sylvestre function W = ann_FF_ConjugGrad(x, t, N, W, T, dW, ex, af, err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// trains the network for T epochs using the Conjugate gradient algorithm
// see ANN_FF (help)
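// Illustrative example (a sketch; dW is the finite-difference step used
// internally by ann_FF_VHess, assumed small):
//   W = ann_FF_ConjugGrad(x, t, N, W, 50, 1e-5);   // 50 epochs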
[lsh,rsh] = argn(0);
// deal with default parameters
if rsh < 7, ex = [" "], end;
if rsh < 8, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 9, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of layers
L = size(N,'c');
//-------------------------------------------------------------------
// first step for conjugate gradients is performed outside the loop,
// for proper initialization
// calculate first grad_E and initialize directly to grad_E_old
grad_E = ann_FF_grad_BP(x, t, N, W, 0, af, err_deriv_y);
// initialize direction to grad_E
D = - grad_E;
// iterate to T-1 (for T we need fewer calculations)
for time = 1 : T-1
// calculate D^\T \circ Hessian
D_circ_H = ann_FF_VHess(x, t, N, W, D, dW, af, err_deriv_y);
// calculate D^\T \circ grad_E and D^\T \circ Hessian \circ D ...
D_circ_grad_E = 0;
D_circ_H_circ_D = 0;
for l = 1 : L-1
// using "old" grad_E
D_circ_grad_E = D_circ_grad_E + sum(D(:,:,l) .* grad_E(:,:,l));
D_circ_H_circ_D = D_circ_H_circ_D + sum(D_circ_H(:,:,l) .* D(:,:,l));
end;
// ... and alpha
alpha = - D_circ_grad_E / D_circ_H_circ_D;
// new weights
W = W + alpha * D;
// execute ex if necessary
execstr(ex);
// new gradient
grad_E = ann_FF_grad_BP(x, t, N, W, 0, af, err_deriv_y);
// calculate D^\T \circ grad_E (new) ...
D_circ_grad_E = 0;
for l = 1 : L-1
D_circ_grad_E = D_circ_grad_E + sum(D(:,:,l) .* grad_E(:,:,l));
end;
// ... and beta
beta = - D_circ_grad_E / D_circ_H_circ_D;
// new direction
D = - grad_E + beta * D;
end;
// for T only
// calculate D^\T \circ Hessian
D_circ_H = ann_FF_VHess(x, t, N, W, D, dW, af, err_deriv_y);
// calculate D^\T \circ grad_E and D^\T \circ Hessian \circ D ...
D_circ_grad_E = 0;
D_circ_H_circ_D = 0;
for l = 1 : L-1
// using "old" grad_E
D_circ_grad_E = D_circ_grad_E + sum(D(:,:,l) .* grad_E(:,:,l));
D_circ_H_circ_D = D_circ_H_circ_D + sum(D_circ_H(:,:,l) .* D(:,:,l));
end;
// ... and alpha
alpha = - D_circ_grad_E / D_circ_H_circ_D;
// final weights
W = W + alpha * D;
// and final ex
execstr(ex);
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_SSAB_batch_nb.sci 0000644 0001750 0001750 00000003240 11441407762 022320 0 ustar sylvestre sylvestre function [W,Delta_W_old,Delta_W_oldold,mu]=ann_FF_SSAB_batch_nb(x,t,N,W,lp,Delta_W_old,Delta_W_oldold,T,mu,af,ex,err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN,
// based on backpropagation with SuperSAB algorithm (batch version).
// this function is to be used on networks without bias
// see ANN_FF (help)
// "mu", "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// size of W hypermatrix, required in several places
size_W = size(W)';
// define default parameters if necessary
if rsh < 9, mu = lp(1) * hypermat(size_W,ones(prod(size_W),1)), end;
if rsh < 10, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 11, ex = " ", end;
if rsh < 12, err_deriv_y = 'ann_d_sum_of_sqr', end;
// repeat T times
for time = 1 : T
// error gradient
grad_E = ann_FF_grad_BP_nb(x,t,N,W,lp(2),af,err_deriv_y);
// sign hypermatrix
M = sign(sign(Delta_W_old .* Delta_W_oldold) ...
+ hypermat(size_W,ones(prod(size_W),1)));
// mu hypermatrix update (former lp(1))
mu = ( (lp(4) - lp(5)) * M ...
+ lp(5) * hypermat(size_W,ones(prod(size_W),1)) ) .* mu;
// update weights
// (the new Delta_W_old ! ;) will become old after weight update,
// i.e on next loop or next call to this function)
// same for Delta_W_oldold
Delta_W_oldold = Delta_W_old;
Delta_W_old = ...
- mu .* grad_E ...
- (lp(3) * Delta_W_old) .* (hypermat(size_W,ones(prod(size_W),1)) - M);
W = W + Delta_W_old;
// execute "ex"
execstr(ex);
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Mom_batch_nb.sci 0000644 0001750 0001750 00000006151 11441407762 022324 0 ustar sylvestre sylvestre function [W,Delta_W_old] = ann_FF_Mom_batch_nb(x,t,N,W,lp,T,Delta_W_old,af,ex,err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN,
// based on backpropagation with momentum algorithm.
// this function is to be used on networks without bias
// see ANN_FF (help)
// "Delta_W_old", "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// define "Delta_w_old", "af", "ex" and "err_deriv_y" if necessary
if rsh < 7, Delta_W_old = hypermat(size(W)'), end;
if rsh < 8, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 9, ex = " ", end;
if rsh < 10, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of layers
L = size(N, 'c');
// ... and patterns
P = size(x,'c');
// initialize "z" to avoid resizing
z = zeros(max(N), L);
size_W = size(W)';
// repeat T times
for time = 1 : T
// grad_E_mod is a hypermatrix with the same layout as W
// because of flat spot elimination the modified grad_E is calculated here
// reinitialize at each loop
grad_E_mod = hypermat(size_W);
// go through all patterns
for p = 1 : P
// find all neuronal outputs (activation) for current input pattern
// first "z" column is exactly "x(:,p)"
z(1:N(1),1) = x(:,p);
for l = 2 : L
// first calculate total input (as column vector) ...
z(1:N(l),l) = W(1:N(l), 1:N(l-1), l-1) * z(1:N(l-1), l-1);
// ... then activation
execstr('z(1:N(l),l) = ' + af(1) + '(z(1:N(l),l))');
end;
// now for layer "L" (last), requiring special treatment on "err_dz"
// "err_dz" for output layer, don't propagate smaller than lp(2)
execstr('err_dz = clean(' + err_deriv_y + '(z(1:N(L),L),t(:,p)), lp(2))');
// "deriv_af" for output layer, also add flat spot elimination
execstr('deriv_af = ' + af(2) + '(z(1:N(L),L))' + ...
' + lp(4) * ones(N(L),1)');
// "err_dz_deriv_af" product is used twice
err_dz_deriv_af = err_dz .* deriv_af;
grad_E_mod(1:N(L), 1:N(L-1), L-1) = ...
grad_E_mod(1:N(L), 1:N(L-1), L-1) + ...
err_dz_deriv_af * z(1:N(L-1), L-1)';
// backpropagate
for l = L-1 : -1 : 2
// new "err_dz" based on previous one
// transpose two vectors instead of W
err_dz = (err_dz_deriv_af' * W(1:N(l+1), 1:N(l), l))';
// new "deriv_af", also add flat spot elimination
execstr('deriv_af = ' + af(2) + '(z(1:N(l),l))' + ...
' + lp(4) * ones(N(l),1)');
// same as for layer L, "err_dz_deriv_af" also used on next loop above
err_dz_deriv_af = err_dz .* deriv_af;
grad_E_mod(1:N(l), 1:N(l-1), l-1) = ...
grad_E_mod(1:N(l), 1:N(l-1), l-1) + ...
err_dz_deriv_af * z(1:N(l-1),l-1)';
end;
end;
// update weights
// (the new Delta_W_old ! ;) will become old after weight update,
// i.e on next loop or next call to this function)
Delta_W_old = - lp(1) * grad_E_mod + lp(3) * Delta_W_old;
W = W + Delta_W_old;
// execute "ex"
execstr(ex);
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_pat_shuffle.sci 0000644 0001750 0001750 00000001163 11441407762 021677 0 ustar sylvestre sylvestre function [x,t] = ann_pat_shuffle(x,t)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// shuffles the patterns from "x" and the corresponding "t"
// see ANN_GEN (help)
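// Illustrative example (a sketch; keeps the x(:,p) <-> t(:,p) pairing):
//   [x, t] = ann_pat_shuffle(x, t);   // typically called once per epoch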
// no. of patterns
P = size(x,'c');
my_rand = ceil(P * rand(P,1));
for p = 1 : P
// shuffle x
temp = x(:,my_rand(p));
x(:,my_rand(p)) = x(:,p);
x(:,p) = temp;
// shuffle t same way (keep x(:,p) <-> t(:,p) correspondence)
temp = t(:,my_rand(p));
t(:,my_rand(p)) = t(:,p);
t(:,p) = temp;
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_init.sci 0000644 0001750 0001750 00000002206 11441407762 020714 0 ustar sylvestre sylvestre function W = ann_FF_init(N, r, rb)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// generate the weight matrix for a feedforward ANN defined by N
// see ANN_FF (help)
// "r" and "rb" are optional arguments
[lsh, rsh] = argn(0);
// define "r" if necessary
if rsh < 2, r = [-1,1], end;
// "+1" -- to alow room for biases (first column in each W)
// don't create weight entries for input neurons (from input layer 1)
// i.e. no. of matrices W(:,:,*) is size(N,'c')-1
W = hypermat([max(N), max(N)+1, size(N,'c') - 1]);
// initialize weights with random numbers between "r(1)" and "r(2)"
// (only the required values, first column, i.e. bias, later)
for l = 2 : size(N,'c')
W(1:N(l), 2:N(l-1)+1, l-1) = ...
(r(2) - r(1)) * rand(N(l), N(l-1)) + r(1) * ones(N(l), N(l-1));
end;
// biases, if required, otherwise leave them 0
if rsh > 2 ...
then for l = 2 : size(N,'c')
W(1:N(l), 1, l-1) = ...
(rb(2) - rb(1)) * rand(N(l), 1) + rb(1) * ones(N(l), 1);
end;
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_VHess.sci 0000644 0001750 0001750 00000001434 11441407762 021003 0 ustar sylvestre sylvestre function VH = ann_FF_VHess(x, t, N, W, V, dW, af, err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// calculates the result of the multiplication between a vector and the Hessian
// through a finite differences procedure
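// Illustrative example (a sketch; V has the same hypermatrix layout as W and
// dW is a small finite-difference step):
//   VH = ann_FF_VHess(x, t, N, W, V, 1e-5);   // approximates V^T times the Hessian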
[lsh,rsh] = argn(0);
// define default parameters if necessary
if rsh < 7, af = ['ann_log_activ', 'ann_d_log_activ'], end;
if rsh < 8, err_deriv_y = 'ann_d_sum_of_sqr', end;
// calculate gradient to the +
grad_p = ann_FF_grad_BP(x, t, N, W + dW * V, 0, af, err_deriv_y);
// ... and to the -
grad_n = ann_FF_grad_BP(x, t, N, W - dW * V, 0, af, err_deriv_y);
// result, difference is 2 * dW
VH = (grad_p - grad_n) / (2 * dW);
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Mom_batch.sci 0000644 0001750 0001750 00000006265 11441407762 021653 0 ustar sylvestre sylvestre function [W,Delta_W_old]=ann_FF_Mom_batch(x,t,N,W,lp,T,Delta_W_old,af,ex,err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN, including biases
// based on backpropagation algorithm with momentum (batch version).
// see ANN_FF (help)
// "Delta_W_old", "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// define "Delta_W_old", "af", "ex" and "err_deriv_y" if necessary
if rsh < 7, Delta_W_old = hypermat(size(W)'), end;
if rsh < 8, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 9, ex = " ", end;
if rsh < 10, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of layers
L = size(N, 'c');
// ... and patterns
P = size(x, 'c');
// initialize "z" to avoid resizing
z = zeros(max(N), L);
size_W = size(W)';
// repeat T times
for time = 1 : T
// grad_E_mod is a hypermatrix with the same layout as W
// because of the flat spot elimination grad_E is calculated here
// reinitialize at each loop
grad_E_mod = hypermat(size_W);
// go through all patterns
for p = 1 : P
// find all neuronal outputs (activation) for current input pattern
// first "z" column is exactly "x(:,p)"
z(1:N(1),1) = x(:,p);
for l = 2 : L
// adding "1" to "z(1:N(l-1),l-1)" to represent bias
// first calculate total input (as column vector) ...
z(1:N(l),l) = W(1:N(l), 1:N(l-1)+1, l-1) ...
* [1; z(1:N(l-1),l-1)];
// ... then activation
execstr('z(1:N(l),l) = ' + af(1) + '(z(1:N(l),l))');
end;
// now for layer "L" (last), requiring special treatment on "err_dz"
// "err_dz" for output layer, don't propagate smaller than lp(2)
execstr('err_dz = clean(' + err_deriv_y + '(z(1:N(L),L),t(:,p)), lp(2))');
// "deriv_af" for output layer, also add flat spot elimination
execstr('deriv_af = ' + af(2) + '(z(1:N(L),L))' + ...
' + lp(4) * ones(N(L),1)');
// "err_dz_deriv_af" product is used twice
err_dz_deriv_af = err_dz .* deriv_af;
// using the transposed of extended z vector here
grad_E_mod(1:N(L), 1:N(L-1)+1, L-1) = ...
grad_E_mod(1:N(L), 1:N(L-1)+1, L-1) + ...
err_dz_deriv_af * [1, z(1:N(L-1), L-1)'];
// backpropagate
for l = L-1 : -1 : 2
// new "err_dz" based on previous one
// transpose two vectors instead of W
err_dz = (err_dz_deriv_af' * W(1:N(l+1), 2:N(l)+1, l))';
// new "deriv_af"
execstr('deriv_af = ' + af(2) + '(z(1:N(l),l))' + ...
' + lp(4) * ones(N(l),1)');
// same as for layer "L", "err_dz_deriv_af" also used on next loop above
err_dz_deriv_af = err_dz .* deriv_af;
grad_E_mod(1:N(l), 1:N(l-1)+1, l-1) = ...
grad_E_mod(1:N(l), 1:N(l-1)+1, l-1) + ...
err_dz_deriv_af * [1, z(1:N(l-1), l-1)'];
end;
end;
// update weights
// (the new Delta_W_old ! ;) will become old after weight update,
// i.e. on next loop or next call to this function)
Delta_W_old = -lp(1) * grad_E_mod + lp(3) * Delta_W_old;
W = W + Delta_W_old;
execstr(ex);
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_Mom_online_nb.sci 0000644 0001750 0001750 00000005765 11441407762 022541 0 ustar sylvestre sylvestre function [W,Delta_W_old] = ann_FF_Mom_online_nb(x,t,N,W,lp,T,Delta_W_old,af,ex,err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN,
// based on backpropagation with momentum algorithm.
// this function is to be used on networks without bias
// see ANN_FF (help)
// "Delta_W_old", "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// define "Delta_w_old", "af", "ex" and "err_deriv_y" if necessary
if rsh < 7, Delta_W_old = hypermat(size(W)'), end;
if rsh < 8, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 9, ex = [" "," "], end;
if rsh < 10, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of layers
L = size(N, 'c');
// ... and patterns
P = size(x,'c');
// initialize "z" to avoid resizing
z = zeros(max(N), L);
// grad_E_mod is a hypermatrix with the same layout as W
// because of flat spot elimination the modified grad_E is calculated here
grad_E_mod = hypermat(size(W)');
// repeat T times
for time = 1 : T
// go through all patterns
for p = 1 : P
// find all neuronal outputs (activation) for current input pattern
// first "z" column is exactly "x(:,p)"
z(1:N(1),1) = x(:,p);
for l = 2 : L
// first calculate total input (as column vector) ...
z(1:N(l),l) = W(1:N(l), 1:N(l-1), l-1) * z(1:N(l-1), l-1);
// ... then activation
execstr('z(1:N(l),l) = ' + af(1) + '(z(1:N(l),l))');
end;
// now for layer "L" (last), requiring special treatment on "err_dz"
// "err_dz" for output layer, don't propagate smaller than lp(2)
execstr('err_dz = clean(' + err_deriv_y + '(z(1:N(L),L),t(:,p)), lp(2))');
// "deriv_af" for output layer, also add flat spot elimination
execstr('deriv_af = ' + af(2) + '(z(1:N(L),L))' + ...
' + lp(4) * ones(N(L),1)');
// "err_dz_deriv_af" product is used twice
err_dz_deriv_af = err_dz .* deriv_af;
grad_E_mod(1:N(L), 1:N(L-1), L-1) ...
= err_dz_deriv_af * z(1:N(L-1), L-1)';
// backpropagate
for l = L-1 : -1 : 2
// new "err_dz" based on previous one
// transpose two vectors instead of W
err_dz = (err_dz_deriv_af' * W(1:N(l+1), 1:N(l), l))';
// new "deriv_af", also add flat spot elimination
execstr('deriv_af = ' + af(2) + '(z(1:N(l),l))' + ...
' + lp(4) * ones(N(l),1)');
// same as for layer L, "err_dz_deriv_af" also used on next loop above
err_dz_deriv_af = err_dz .* deriv_af;
grad_E_mod(1:N(l), 1:N(l-1), l-1) ...
= err_dz_deriv_af * z(1:N(l-1),l-1)';
end;
// update weights
// (the new Delta_W_old ! ;) will become old after weight update,
// i.e on next loop or next call to this function)
Delta_W_old = - lp(1) * grad_E_mod + lp(3) * Delta_W_old;
W = W + Delta_W_old;
// execute "ex"
execstr(ex(1));
end;
execstr(ex(2));
end;
endfunction
scilab-ann-0.4.2.4/macros/cleanmacros.sce 0000644 0001750 0001750 00000000756 11441407762 021035 0 ustar sylvestre sylvestre // ====================================================================
// Allan CORNET
// DIGITEO 2009
// This file is released into the public domain
// ====================================================================
libpath = get_absolute_file_path('cleanmacros.sce');
binfiles = ls(libpath+'/*.bin');
for i = 1:size(binfiles,'*')
mdelete(binfiles(i));
end
mdelete(libpath+'/names');
mdelete(libpath+'/lib');
// ====================================================================
scilab-ann-0.4.2.4/macros/ann_FF_Std_batch_nb.sci 0000644 0001750 0001750 00000001632 11441407762 022325 0 ustar sylvestre sylvestre function W = ann_FF_Std_batch_nb(x, t, N, W, lp, T, af, ex, err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN,
// based on standard backpropagation algorithm (batch version).
// this function is designed for networks without bias
// see ANN_FF (help)
// "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// define "af", "ex" and "err_deriv_y" if necessary
if rsh < 7, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 8, ex = " ", end;
if rsh < 9, err_deriv_y = 'ann_d_sum_of_sqr', end;
// repeat T times
for time = 1 : T
// find gradient
grad_E = ann_FF_grad_BP_nb(x, t, N, W, lp(2), af, err_deriv_y);
// update weights
W = W - lp(1) * grad_E;
// execute "ex"
execstr(ex);
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_d_sum_of_sqr.sci 0000644 0001750 0001750 00000000507 11441407762 022060 0 ustar sylvestre sylvestre function err_d = ann_d_sum_of_sqr(y,t)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// calculates the derivative of sum-of-squares error
// see ANN_GEN (help)
err_d = y - t;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_grad_BP.sci 0000644 0001750 0001750 00000004634 11441407762 021256 0 ustar sylvestre sylvestre function grad_E = ann_FF_grad_BP(x, t, N, W, c, af, err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Calculate the error gradient considering all patterns
// through a backpropagation procedure
// see ANN_FF (help)
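// Illustrative example (a sketch; c is the threshold below which output
// errors are not propagated, the default 0 propagates everything):
//   grad_E = ann_FF_grad_BP(x, t, N, W);   // defaults: c = 0, logistic, SSE
//   W = W - 0.25 * grad_E;                 // one plain gradient step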
[lsh,rsh] = argn(0);
// define default parameters if necessary
if rsh < 5, c = 0, end;
if rsh < 6, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 7, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of layers
L = size(N, 'c');
// ... and patterns
P = size(x,'c');
// initialize "z" to avoid resizing
z = zeros(max(N), L);
// initialize grad_E, W is a hypermatrix, grad_E have same layout
grad_E = hypermat(size(W)');
// calculate grad_E
// go through all patterns
for p = 1 : P
// find all neuronal outputs (activation) for current input pattern
// first "z" column is exactly "x(:,p)"
z(1:N(1),1) = x(:,p);
for l = 2 : L
// adding "1" to "z(1:N(l-1), l-1)" to represent bias
// first calculate total input (as column vector) ...
z(1:N(l),l) = W(1:N(l), 1:N(l-1)+1,l-1) ...
* [1; z(1:N(l-1), l-1)];
// ... then activation
execstr('z(1:N(l),l) = ' + af(1) + '(z(1:N(l),l))');
end;
// now for layer "L" (last), requiring special treatment on "err_dz"
// "err_dz" for output layer, don't propagate smaller than lp(2)
execstr('err_dz = clean(' + err_deriv_y + '(z(1:N(L),L),t(:,p)), c)');
// "deriv_af" for output layer
execstr('deriv_af = ' + af(2) + '(z(1:N(L),L))');
// "err_dz_deriv_af" product is used twice
err_dz_deriv_af = err_dz .* deriv_af;
// adding contribution of pattern p
// using the transposed of extended z vector here
grad_E(1:N(L), 1:N(L-1)+1, L-1) = ...
grad_E(1:N(L), 1:N(L-1)+1, L-1) + ...
err_dz_deriv_af * [1, z(1:N(L-1), L-1)'];
// backpropagate
for l = L-1 : -1 : 2
// new "err_dz" based on previous one
// transpose two vectors instead of W
err_dz = (err_dz_deriv_af' * W(1:N(l+1), 2:N(l)+1, l))';
// new "deriv_af"
execstr('deriv_af = ' + af(2) + '(z(1:N(l),l))');
// same as for layer "L", "err_dz_deriv_af" also used on next loop above
err_dz_deriv_af = err_dz .* deriv_af;
grad_E(1:N(l), 1:N(l-1)+1, l-1) = ...
grad_E(1:N(l), 1:N(l-1)+1, l-1) + ...
err_dz_deriv_af * [1, z(1:N(l-1), l-1)'];
end;
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_init_nb.sci 0000644 0001750 0001750 00000001547 11441407762 021402 0 ustar sylvestre sylvestre function W = ann_FF_init_nb(N, r)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// generate the weight matrix for a feedforward ANN defined by N
// this function is designed for networks without bias
// see ANN_FF (help)
// "r" is optional argument
[lsh, rsh] = argn(0);
// define "r" if necessary
if rsh < 2, r = [-1,1], end;
// don't create weight entries for input neurons (from layer 0),
// i.e. no. of matrices W(:,:,*) is size(N,'c')-1
W = hypermat([max(N), max(N), size(N,'c') - 1]);
// initialize (only the required values)
// with random numbers between "r(1)" and "r(2)"
for l = 2 : size(N,'c')
W(1:N(l), 1:N(l-1), l-1) = ...
(r(2) - r(1)) * rand(N(l), N(l-1)) + r(1) * ones(N(l), N(l-1));
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_SSAB_online_nb.sci 0000644 0001750 0001750 00000003503 11441407762 022525 0 ustar sylvestre sylvestre function [W,Delta_W_old,Delta_W_oldold,mu]=ann_FF_SSAB_online_nb(x,t,N,W,lp,Delta_W_old,Delta_W_oldold,T,mu,af,ex,err_deriv_y)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Updates weight matrix of an ANN,
// based on backpropagation with SuperSAB algorithm.
// this function is to be used on networks without bias
// see ANN_FF (help)
// "mu", "af", "ex" and "err_deriv_y" are optional arguments
[lsh, rsh] = argn(0);
// size of W hypermatrix, required in several places
size_W = size(W)';
// define default parameters if necessary
if rsh < 9, mu = lp(1) * hypermat(size_W,ones(prod(size_W),1)), end;
if rsh < 10, af = ['ann_log_activ','ann_d_log_activ'], end;
if rsh < 11, ex = [" "," "], end;
if rsh < 12, err_deriv_y = 'ann_d_sum_of_sqr', end;
// no. of patterns
P = size(x,'c');
// repeat T times
for time = 1 : T
// go through all patterns, one at a time
for p = 1 : P
// error gradient
grad_E = ann_FF_grad_BP_nb(x(:,p),t(:,p),N,W,lp(2),af,err_deriv_y);
// sign hypermatrix
M = sign(sign(Delta_W_old .* Delta_W_oldold) ...
+ hypermat(size_W,ones(prod(size_W),1)));
// mu hypermatrix update (former lp(1))
mu = ( (lp(4) - lp(5)) * M ...
+ lp(5) * hypermat(size_W,ones(prod(size_W),1)) ) .* mu;
// update weights
// (the new Delta_W_old ! ;) will become old after weight update,
// i.e on next loop or next call to this function)
// same for Delta_W_oldold
Delta_W_oldold = Delta_W_old;
Delta_W_old = ...
- mu .* grad_E ...
- (lp(3) * Delta_W_old) .* (hypermat(size_W,ones(prod(size_W),1)) - M);
W = W + Delta_W_old;
// execute "ex"
execstr(ex(1));
end;
execstr(ex(2));
end;
endfunction
scilab-ann-0.4.2.4/macros/ann_log_activ.sci 0000644 0001750 0001750 00000000525 11441407762 021347 0 ustar sylvestre sylvestre function y = ann_log_activ(x)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// calculates logistic activation function for each component of "x"
// see ANN_GEN (help)
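// Illustrative example (a sketch):
//   ann_log_activ(0)              // returns 0.5
//   ann_log_activ([-10, 0, 10])   // approximately [0, 0.5, 1]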
y = 1 ./ (1+%e^(-x));
endfunction
scilab-ann-0.4.2.4/macros/ann_FF_grad_nb.sci 0000644 0001750 0001750 00000003413 11441407762 021346 0 ustar sylvestre sylvestre function grad_E = ann_FF_grad_nb(x,t,N,W,dW,af,ef)
// This file is part of:
// ANN Toolbox for Scilab 5.x
// Copyright (C) Ryurick M. Hristev
// updated by Allan CORNET INRIA, May 2008
// released under GNU Public licence version 2
// Calculates the error gradient following a finite difference procedure,
// i.e. perturbing each weight in turn;
// used for --- testing --- purposes only, as it is much slower than the BP algorithm.
// this function is designed for networks without bias
// The gradient is calculated only for all patterns in "x" and "t"
// see ANN_FF (help)
[lsh,rsh] = argn(0);
// define optional parameters if necessary
if rsh < 6, af = 'ann_log_activ', end;
if rsh < 7, ef = 'ann_sum_of_sqr', end;
// create the return matrix
grad_E = hypermat(size(W)');
// rl - run between layers, parameter for ann_FF_run function
rl = [2,size(N,'c')];
// for each pattern
for p = 1 : size(x,'c')
// for each layer
for l = 2 : size(N,'c')
// for each neuron in layer
for n = 1 : N(l)
// for each connection to previous layer
for i = 1 : N(l-1)
// hold the old value of W
temp = W(n,i,l-1);
// change W value
W(n,i,l-1) = temp - dW;
// run the net
y = ann_FF_run_nb(x(:,p),N,W,rl,af);
// calculate new error, to the "left"
execstr('err_n = ' + ef + '(y,t(:,p))');
// change W value
W(n,i,l-1) = temp + dW;
// run the net
y = ann_FF_run_nb(x(:,p),N,W,rl,af);
// calculate new error, to the "right"
execstr('err_p = ' + ef + '(y,t(:,p))');
// "2" because \Delta w = 2 * dW
grad_E(n,i,l-1) = ...
grad_E(n,i,l-1) + (err_p - err_n) / (2 * dW);
// restore W
W(n,i,l-1) = temp;
end;
end;
end;
end;
endfunction
scilab-ann-0.4.2.4/ANN_toolbox.iss 0000644 0001750 0001750 00000010322 11441407762 017456 0 ustar sylvestre sylvestre ; ##############################################################################
; Inno Setup Install script for Toolbox_skeleton
; http://www.jrsoftware.org/isinfo.php
; Allan CORNET
; Copyright INRIA 2008
; ##############################################################################
; modify this path where is toolbox_skeleton directory
#define BinariesSourcePath "E:\ANN_Toolbox_0.4.2.2"
#define ANN_Toolbox_version "0.4.2.2"
#define CurrentYear "2008"
#define Toolbox_ANNDirFilename "ANN_Toolbox_0.4.2.2"
; ##############################################################################
[Setup]
; Start: basic data to fill in depending on the version
SourceDir={#BinariesSourcePath}
AppName=ANN Toolbox 0.4.2.2 for Scilab 5.x
AppVerName=ANN Toolbox 0.4.2.2 for Scilab 5.x
DefaultDirName={pf}\{#Toolbox_ANNDirFilename}
InfoAfterfile=readme.txt
LicenseFile=license.txt
WindowVisible=true
AppPublisher=Your Company
BackColorDirection=lefttoright
AppCopyright=Copyright © {#CurrentYear}
Compression=lzma/max
InternalCompressLevel=normal
SolidCompression=true
VersionInfoVersion={#ANN_Toolbox_version}
VersionInfoCompany=NONE
; ##############################################################################
[Files]
; Add here files that you want to add
Source: loader.sce; DestDir: {app}
Source: builder.sce; DestDir: {app}
Source: license.txt; DestDir: {app}
Source: etc\ANN_toolbox.quit; DestDir: {app}\etc
Source: etc\ANN_toolbox.start; DestDir: {app}\etc
Source: help\en_US\addchapter.sce; DestDir: {app}\help\en_US
Source: jar\scilab_en_US_help.jar; DestDir: {app}\jar
Source: macros\buildmacros.sce; DestDir: {app}\macros
Source: macros\lib; DestDir: {app}\macros
Source: macros\names; DestDir: {app}\macros
Source: macros\*.sci; DestDir: {app}\macros
Source: macros\*.bin; DestDir: {app}\macros
Source: demos\*.*; DestDir: {app}\demos; Flags: recursesubdirs
;
; ##############################################################################
;
scilab-ann-0.4.2.4/etc/ 0000755 0001750 0001750 00000000000 11441407762 015331 5 ustar sylvestre sylvestre scilab-ann-0.4.2.4/etc/ANN_toolbox.start 0000644 0001750 0001750 00000003056 11441407762 020576 0 ustar sylvestre sylvestre // =============================================================================
// Allan CORNET
// Copyright DIGITEO 2010
// Copyright INRIA 2008
// =============================================================================
mprintf("Start ANN Toolbox 0.4.2.4\n");
if isdef("ANN_toolboxlib") then
warning("ANN Toolbox 0.4.2.4 library is already loaded");
return;
end
etc_tlbx = get_absolute_file_path("ANN_toolbox.start");
etc_tlbx = getshortpathname(etc_tlbx);
root_tlbx = strncpy( etc_tlbx, length(etc_tlbx)-length("\etc\") );
//Load functions library
// =============================================================================
mprintf("\tLoad macros\n");
pathmacros = pathconvert( root_tlbx ) + "macros" + filesep();
ANN_toolboxlib = lib(pathmacros);
clear pathmacros;
// Load and add help chapter
// =============================================================================
if or(getscilabmode() == ["NW";"STD"]) then
mprintf("\tLoad help\n");
path_addchapter = pathconvert(root_tlbx+"/jar");
if isdir(path_addchapter) then
add_help_chapter("ANN Toolbox 0.4.2.4", path_addchapter, %F);
clear add_help_chapter;
end
clear path_addchapter;
end
// Load demos
// =============================================================================
if or(getscilabmode() == ["NW";"STD"]) then
mprintf("\tLoad demos\n");
pathdemos = pathconvert(root_tlbx+"/demos/ANN.dem.gateway.sce",%F,%T);
add_demo("ANN Toolbox 0.4.2.4", pathdemos);
clear pathdemos add_demo;
end
clear root_tlbx;
clear etc_tlbx;
scilab-ann-0.4.2.4/etc/ANN_toolbox.quit 0000644 0001750 0001750 00000000267 11441407762 020424 0 ustar sylvestre sylvestre // ====================================================================
// Allan CORNET
// Copyright INRIA 2008
// ==================================================================== scilab-ann-0.4.2.4/builder.sce 0000644 0001750 0001750 00000002460 11441407762 016702 0 ustar sylvestre sylvestre // =============================================================================
// Copyright INRIA 2008
// Copyright DIGITEO 2010
// Allan CORNET
// =============================================================================
mode(-1);
lines(0);
TOOLBOX_NAME = "ANN_toolbox";
TOOLBOX_TITLE = "ANN toolbox";
toolbox_dir = get_absolute_file_path("builder.sce");
// Check Scilab's version
// =============================================================================
try
v = getversion("scilab");
catch
error(gettext("Scilab 5.3 or more is required."));
end
if v(2) < 3 then
// new API in scilab 5.3
error(gettext('Scilab 5.3 or more is required.'));
end
clear v;
// Check modules_manager module availability
// =============================================================================
if ~isdef('tbx_build_loader') then
error(msprintf(gettext("%s module not installed."), "modules_manager"));
end
// Action
// =============================================================================
tbx_builder_macros(toolbox_dir);
tbx_builder_help(toolbox_dir);
tbx_build_loader(TOOLBOX_NAME, toolbox_dir);
tbx_build_cleaner(TOOLBOX_NAME, toolbox_dir);
// Clean variables
// =============================================================================
clear toolbox_dir TOOLBOX_NAME TOOLBOX_TITLE;
scilab-ann-0.4.2.4/demos/ 0000755 0001750 0001750 00000000000 11441407762 015665 5 ustar sylvestre sylvestre scilab-ann-0.4.2.4/demos/enc858_ssab_nb.sce 0000644 0001750 0001750 00000001050 11441407762 021056 0 ustar sylvestre sylvestre // ==================================================
// Loose 8-5-8 encoder
// on a backpropagation network without biases, with SuperSAB
// (Note that the tight 8-3-8 encoder will not work without biases)
// (The 8-4-8 encoder has proven very difficult to train with SuperSAB)
// ==================================================
FILENAMEDEM = "enc858_ssab_nb";
lines(0);
scepath = get_absolute_file_path(FILENAMEDEM+".sce");
exec(scepath+FILENAMEDEM+".sci",1);
clear scepath;
clear FILENAMEDEM;
// ==================================================
scilab-ann-0.4.2.4/demos/ANN.dem.gateway.sce 0000644 0001750 0001750 00000002736 11441407762 021211 0 ustar sylvestre sylvestre // ====================================================================
// Copyright INRIA 2008
// Allan CORNET
// ====================================================================
demopath = get_absolute_file_path("ANN.dem.gateway.sce");
subdemolist = [ "encoder 4-3-4 on ANN without biases", "encoder_nb.sce" ; ..
"tight encoder 4-2-4 on ANN with biases", "encoder.sce" ; ..
"encoder 4-3-4 on ANN without biases compare with encoder_nb.sce", "encoder_m_nb.sce" ; ..
"tight encoder 4-2-4 on ANN with biases compare with encoder.sce", "encoder_m.sce" ; ..
"encoder 8-4-8 on ANN without biases", "enc848_m_nb.sce" ; ..
"encoder 8-3-8 on ANN with biases", "enc838_m.sce" ; ..
"encoder 8-5-8 on ANN without biases", "enc858_ssab_nb.sce" ; ..
"encoder 8-4-8 on ANN with biases", "enc848_ssab.sce" ; ..
"tight encoder 4-2-4 on ANN with biases uses a mixed standard/conjugate gradients method", "encoder_cc.sce" ..
];
subdemolist(:,2) = demopath + subdemolist(:,2);
// ====================================================================
scilab-ann-0.4.2.4/demos/enc848_m_nb.sci 0000644 0001750 0001750 00000001611 11441407762 020370 0 ustar sylvestre sylvestre // Loose 8-4-8 encoder
// on a backpropagation network without biases, with momentum
// (Note that the tight 8-3-8 encoder will not work without biases)
rand('seed',0);
// network def.
// - neurons per layer, including input
N = [8,4,8];
// inputs
x = [1,0,0,0,0,0,0,0;
0,1,0,0,0,0,0,0;
0,0,1,0,0,0,0,0;
0,0,0,1,0,0,0,0;
0,0,0,0,1,0,0,0;
0,0,0,0,0,1,0,0;
0,0,0,0,0,0,1,0;
0,0,0,0,0,0,0,1]';
// targets; at the training stage the net acts as an identity network
t = x;
// learning parameter
lp = [2.5,0.1,0.9,0.25];
// initialize random weights in the range:
r = [-1,7];
W = ann_FF_init_nb(N,r);
Delta_W_old = hypermat(size(W)');
// 250 epochs are enough to illustrate
T = 250;
[W,Delta_W_old] = ann_FF_Mom_online_nb(x,t,N,W,lp,T,Delta_W_old);
// full run
ann_FF_run_nb(x,N,W)
// encoder
encoder = ann_FF_run_nb(x,N,W,[2,2])
// decoder
decoder = ann_FF_run_nb(encoder,N,W,[3,3])
scilab-ann-0.4.2.4/demos/encoder_m_nb.sci 0000644 0001750 0001750 00000001422 11441407762 020776 0 ustar sylvestre sylvestre // Loose 4-3-4 encoder
// on a backpropagation network without biases, with momentum
// (Note that the tight 4-2-4 encoder will not work without biases)
rand('seed',0);
// network def.
// - neurons per layer, including input
N = [4,3,4];
// inputs
x = [1,0,0,0;
0,1,0,0;
0,0,1,0;
0,0,0,1]';
// targets; at the training stage the net acts as an identity network
t = x;
// learning parameter
lp = [2.5,0.05,0.9,0.25];
// initialize random weights in the range:
r = [-10,15];
W = ann_FF_init_nb(N,r);
Delta_W_old = hypermat(size(W)');
// 50 epochs are enough to illustrate
T = 50;
[W,Delta_W_old] = ann_FF_Mom_online_nb(x,t,N,W,lp,T,Delta_W_old);
// full run
ann_FF_run_nb(x,N,W)
// encoder
encoder = ann_FF_run_nb(x,N,W,[2,2])
// decoder
decoder = ann_FF_run_nb(encoder,N,W,[3,3])
scilab-ann-0.4.2.4/demos/enc858_ssab_nb.sci 0000644 0001750 0001750 00000002146 11441407762 021071 0 ustar sylvestre sylvestre // Loose 8-5-8 encoder
// on a backpropagation network without biases, with SuperSAB
// (Note that the tight 8-3-8 encoder will not work without biases)
// (The 8-4-8 encoder has proven very difficult to train with SuperSAB)
rand('seed',0);
// network def.
// - neurons per layer, including input
N = [8,5,8];
// inputs
x = [1,0,0,0,0,0,0,0;
0,1,0,0,0,0,0,0;
0,0,1,0,0,0,0,0;
0,0,0,1,0,0,0,0;
0,0,0,0,1,0,0,0;
0,0,0,0,0,1,0,0;
0,0,0,0,0,0,1,0;
0,0,0,0,0,0,0,1]';
// targets; at the training stage the net acts as an identity network
t = x;
// learning parameter
lp = [0.4, 0, 0.85, 1.004, 0.9999];
// initialize random weights in the range:
r = [-1,2];
W = ann_FF_init_nb(N,r);
mu = lp(1) * hypermat(size(W)',ones(prod(size(W)'),1));
Delta_W_old = hypermat(size(W)');
Delta_W_oldold = hypermat(size(W)');
// 350 epochs are enough to illustrate
T = 350;
[W, Delta_W_old, Delta_W_oldold, mu] ...
= ann_FF_SSAB_online_nb(x,t,N,W,lp,Delta_W_old,Delta_W_oldold,T,mu);
// full run
ann_FF_run_nb(x,N,W)
// encoder
encoder = ann_FF_run_nb(x,N,W,[2,2])
// decoder
decoder = ann_FF_run_nb(encoder,N,W,[3,3])
scilab-ann-0.4.2.4/demos/encoder_nb.sce 0000644 0001750 0001750 00000000713 11441407762 020460 0 ustar sylvestre sylvestre // ==================================================
// Loose 4-3-4 encoder on a backpropagation network without biases
// (Note that the tight 4-2-4 encoder will not work without biases)
// ==================================================
FILENAMEDEM = "encoder_nb";
lines(0);
scepath = get_absolute_file_path(FILENAMEDEM+".sce");
exec(scepath+FILENAMEDEM+".sci",1);
clear scepath;
clear FILENAMEDEM;
// ==================================================
scilab-ann-0.4.2.4/demos/enc848_m_nb.sce 0000644 0001750 0001750 00000000736 11441407762 020373 0 ustar sylvestre sylvestre // ==================================================
// Loose 8-4-8 encoder
// on a backpropagation network without biases, with momentum
// (Note that the tight 8-3-8 encoder will not work without biases)
// ==================================================
FILENAMEDEM = "enc848_m_nb";
lines(0);
scepath = get_absolute_file_path(FILENAMEDEM+".sce");
exec(scepath+FILENAMEDEM+".sci",1);
clear scepath;
clear FILENAMEDEM;
// ==================================================
scilab-ann-0.4.2.4/demos/encoder_m.sci 0000644 0001750 0001750 00000001263 11441407762 020322 0 ustar sylvestre sylvestre // Tight 4-2-4 encoder
// on a backpropagation ANN with biases and momentum
rand('seed',0);
// network def.
// - neurons per layer, including input
N = [4,2,4];
// inputs
x = [1,0,0,0;
0,1,0,0;
0,0,1,0;
0,0,0,1]';
// targets; at the training stage the net acts as an identity network
t = x;
// learning parameter
lp = [2.5,0,0.9,0.25];
// initialize random weights in the range:
r = [-1,7];
W = ann_FF_init(N,r);
Delta_W_old = hypermat(size(W)');
// 200 epochs are enough to illustrate
T = 200;
[W,Delta_W_old] = ann_FF_Mom_online(x,t,N,W,lp,T,Delta_W_old);
// full run
ann_FF_run(x,N,W)
// encoder
encoder = ann_FF_run(x,N,W,[2,2])
// decoder
decoder = ann_FF_run(encoder,N,W,[3,3])
scilab-ann-0.4.2.4/demos/enc848_ssab.sci 0000644 0001750 0001750 00000002125 11441407762 020406 0 ustar sylvestre sylvestre // Loose 8-4-8 encoder
// on a backpropagation network with biases, with SuperSAB
// (The tight 8-3-8 encoder has proven very difficult to train with SuperSAB)
rand('seed',0);
// network def.
// - neurons per layer, including input
N = [8,4,8];
// inputs
x = [1,0,0,0,0,0,0,0;
0,1,0,0,0,0,0,0;
0,0,1,0,0,0,0,0;
0,0,0,1,0,0,0,0;
0,0,0,0,1,0,0,0;
0,0,0,0,0,1,0,0;
0,0,0,0,0,0,1,0;
0,0,0,0,0,0,0,1]';
// targets; at the training stage the net acts as an identity network
t = x;
// learning parameter
lp = [2, 0, 0.85, 1.003, 0.9999];
// initialize random weights in the range:
r = [-1,1];
W = ann_FF_init(N,r);
mu = lp(1) * hypermat(size(W)',ones(prod(size(W)'),1));
Delta_W_old = hypermat(size(W)');
Delta_W_oldold = hypermat(size(W)');
// 300 epochs are enough to illustrate
T = 300;
[W, Delta_W_old, Delta_W_oldold, mu] ...
= ann_FF_SSAB_online(x,t,N,W,lp,Delta_W_old,Delta_W_oldold,T,mu);
// full run
ann_FF_run(x,N,W)
// encoder
encoder = ann_FF_run(x,N,W,[2,2])
// decoder
decoder = ann_FF_run(encoder,N,W,[3,3])
scilab-ann-0.4.2.4/demos/encoder_cc.sci 0000644 0001750 0001750 00000001266 11441407762 020456 0 ustar sylvestre sylvestre // Tight 4-2-4 encoder using a mixed standard/conjugate gradients algorithm
rand('seed',0);
x = [1,0,0,0;
0,1,0,0;
0,0,1,0;
0,0,0,1]';
t = x;
N = [4,2,4];
W = ann_FF_init(N, [-1,1], [-1,1]);
// --- standard BP algorithm ---
// learning parameter for standard BP part
lp = [2.5,0];
printf("Standard BP ...");
// standard BP for first 20 steps
T = 20;
W = ann_FF_Std_online(x,t,N,W,lp,T);
// --- Conjugate Gradients algorithm ---
printf("Conjugate Gradients ...");
T = 20;
dW = 0.00001;
W = ann_FF_ConjugGrad(x, t, N, W, T, dW);
// --- test ---
// full run
ann_FF_run(x,N,W)
// encoder
encoder = ann_FF_run(x,N,W,[2,2])
// decoder
decoder = ann_FF_run(encoder,N,W,[3,3])
scilab-ann-0.4.2.4/demos/encoder_m.sce 0000644 0001750 0001750 00000000617 11441407762 020320 0 ustar sylvestre sylvestre // ==================================================
// Tight 4-2-4 encoder
// on a backpropagation ANN with biases and momentum
// ==================================================
FILENAMEDEM = "encoder_m";
lines(0);
scepath = get_absolute_file_path(FILENAMEDEM+".sce");
exec(scepath+FILENAMEDEM+".sci",1);
clear scepath;
clear FILENAMEDEM;
// ==================================================
scilab-ann-0.4.2.4/demos/enc838_m.sce 0000644 0001750 0001750 00000000616 11441407762 017710 0 ustar sylvestre sylvestre // ==================================================
// Tight 8-3-8 encoder
// on a backpropagation ANN with biases and momentum
// ==================================================
FILENAMEDEM = "enc838_m";
lines(0);
scepath = get_absolute_file_path(FILENAMEDEM+".sce");
exec(scepath+FILENAMEDEM+".sci",1);
clear scepath;
clear FILENAMEDEM;
// ==================================================
scilab-ann-0.4.2.4/demos/README.examples 0000644 0001750 0001750 00000001522 11441407762 020362 0 ustar sylvestre sylvestre Backpropagation
- Standard algorithm
encoder_nb.sce : 4-3-4 encoder on ANN without biases
encoder.sce : 4-2-4 tight encoder on ANN with biases
- Momentum
encoder_m_nb.sce : 4-3-4 encoder on ANN without biases compare with encoder_nb.sce
encoder_m.sce : 4-2-4 tight encoder on ANN with biases compare with encoder.sce
enc848_m_nb.sce : 8-4-8 encoder on ANN without biases
enc838_m.sce : 8-3-8 encoder on ANN with biases
- SuperSAB
enc858_ssab_nb.sce : 8-5-8 encoder on ANN without biases
enc848_ssab.sce : 8-4-8 encoder on ANN with biases
(Note that the tighter encoders are very difficult to train with this algorithm)
- Conjugate Gradients
encoder_cc.sce : 4-2-4 tight encoder on ANN with biases uses a mixed standard/conjugate gradients method
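To run one of these demos directly (a sketch, assuming the toolbox is loaded
and Scilab's current directory is this demos directory):
    exec('encoder.sce', -1);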
scilab-ann-0.4.2.4/demos/encoder_nb.sci 0000644 0001750 0001750 00000001336 11441407762 020466 0 ustar sylvestre sylvestre // Loose 4-3-4 encoder on a backpropagation network without biases
// (Note that the tight 4-2-4 encoder will not work without biases)
// ensure the same random starting point
rand('seed',0);
// network def.
// - neurons per layer, including input
N = [4,3,4];
// inputs
x = [1,0,0,0;
0,1,0,0;
0,0,1,0;
0,0,0,1]';
// targets; at the training stage the net acts as an identity network
t = x;
// learning parameter
lp = [8,0];
// initialize random weights in the range
r = [-1,1];
W = ann_FF_init_nb(N,r);
// 500 epochs are enough to illustrate
T = 500;
W = ann_FF_Std_online_nb(x,t,N,W,lp,T);
// full run
ann_FF_run_nb(x,N,W)
// encoder
encoder = ann_FF_run_nb(x,N,W,[2,2])
// decoder
decoder = ann_FF_run_nb(encoder,N,W,[3,3])
scilab-ann-0.4.2.4/demos/encoder_m_nb.sce 0000644 0001750 0001750 00000000737 11441407762 021002 0 ustar sylvestre sylvestre // ==================================================
// Loose 4-3-4 encoder
// on a backpropagation network without biases, with momentum
// (Note that the tight 4-2-4 encoder will not work without biases)
// ==================================================
FILENAMEDEM = "encoder_m_nb";
lines(0);
scepath = get_absolute_file_path(FILENAMEDEM+".sce");
exec(scepath+FILENAMEDEM+".sci",1);
clear scepath;
clear FILENAMEDEM;
// ==================================================
scilab-ann-0.4.2.4/demos/enc838_m.sci 0000644 0001750 0001750 00000001475 11441407762 017720 0 ustar sylvestre sylvestre // Tight 8-3-8 encoder
// on a backpropagation ANN with biases and momentum
rand('seed',0);
// network def.
// - neurons per layer, including input
N = [8,3,8];
// inputs
x = [1,0,0,0,0,0,0,0;
0,1,0,0,0,0,0,0;
0,0,1,0,0,0,0,0;
0,0,0,1,0,0,0,0;
0,0,0,0,1,0,0,0;
0,0,0,0,0,1,0,0;
0,0,0,0,0,0,1,0;
0,0,0,0,0,0,0,1]';
// targets; at the training stage the net acts as an identity network
t = x;
// learning parameter
lp = [1.5, 0.07, 0.8, 0.1];
// initialize random weights in the range:
r = [-10,15];
rb = r;
W = ann_FF_init(N,r,rb);
Delta_W_old = hypermat(size(W)');
// 500 epochs are enough to illustrate
T = 500;
[W,Delta_W_old] = ann_FF_Mom_online(x,t,N,W,lp,T,Delta_W_old);
// full run
ann_FF_run(x,N,W)
// encoder
encoder = ann_FF_run(x,N,W,[2,2])
// decoder
decoder = ann_FF_run(encoder,N,W,[3,3])
scilab-ann-0.4.2.4/demos/encoder_cc.sce 0000644 0001750 0001750 00000000620 11441407762 020443 0 ustar sylvestre sylvestre // ==================================================
// Tight 4-2-4 encoder using a mixed standard/conjugate gradients algorithm
// ==================================================
FILENAMEDEM = "encoder_cc";
lines(0);
scepath = get_absolute_file_path(FILENAMEDEM+".sce");
exec(scepath+FILENAMEDEM+".sci",1);
clear scepath;
clear FILENAMEDEM;
// ==================================================
scilab-ann-0.4.2.4/demos/encoder.sci 0000644 0001750 0001750 00000001114 11441407762 020001 0 ustar sylvestre sylvestre // Tight 4-2-4 encoder on a backpropagation ANN
// ensure the same starting point each time
rand('seed',0);
// network def.
// - neurons per layer, including input
N = [4,2,4];
// inputs
x = [1,0,0,0;
0,1,0,0;
0,0,1,0;
0,0,0,1]';
// targets; at the training stage the net acts as an identity network
t = x;
// learning parameter
lp = [2.5,0];
W = ann_FF_init(N);
// 400 epochs are enough to illustrate
T = 400;
W = ann_FF_Std_online(x,t,N,W,lp,T);
// full run
ann_FF_run(x,N,W)
// encoder
encoder = ann_FF_run(x,N,W,[2,2])
// decoder
decoder = ann_FF_run(encoder,N,W,[3,3])
scilab-ann-0.4.2.4/demos/enc848_ssab.sce 0000644 0001750 0001750 00000001045 11441407762 020402 0 ustar sylvestre sylvestre // ==================================================
// Loose 8-4-8 encoder
// on a backpropagation network with biases, with SuperSAB
// (The tight 8-3-8 encoder has proven very difficult to train with SuperSAB)
// ==================================================
FILENAMEDEM = "enc848_ssab";
lines(0);
scepath = get_absolute_file_path(FILENAMEDEM+".sce");
exec(scepath+FILENAMEDEM+".sci",1);
clear scepath;
clear FILENAMEDEM;
// ==================================================
scilab-ann-0.4.2.4/demos/encoder.sce 0000644 0001750 0001750 00000000561 11441407762 020002 0 ustar sylvestre sylvestre // ==================================================
// Tight 4-2-4 encoder on a backpropagation ANN
// ==================================================
FILENAMEDEM = "encoder";
lines(0);
scepath = get_absolute_file_path(FILENAMEDEM+".sce");
exec(scepath+FILENAMEDEM+".sci",1);
clear scepath;
clear FILENAMEDEM;
// ==================================================
scilab-ann-0.4.2.4/help/ 0000755 0001750 0001750 00000000000 11441407762 015506 5 ustar sylvestre sylvestre scilab-ann-0.4.2.4/help/builder_help.sce 0000644 0001750 0001750 00000000440 11441407762 020636 0 ustar sylvestre sylvestre // ====================================================================
// Copyright INRIA 2008
// Allan CORNET
// ====================================================================
tbx_builder_help_lang(["en_US"], ..
get_absolute_file_path("builder_help.sce"));
scilab-ann-0.4.2.4/help/en_US/ 0000755 0001750 0001750 00000000000 11441407762 016517 5 ustar sylvestre sylvestre scilab-ann-0.4.2.4/help/en_US/ann_FF_Jacobian_BP.xml 0000644 0001750 0001750 00000003651 11441407762 022544 0 ustar sylvestre sylvestre
$LastChangedDate: 2008-03-26 09:50:39 +0100 (mer., 26 mars 2008) $
ann_FF_Jacobian_BP
computes the Jacobian through backpropagation.
CALLING SEQUENCE
J = ann_FF_Jacobian_BP(x,N,W[,af])
PARAMETERS
J The Jacobian hypermatrix: each J(:,:,p) has the same structure as
z(:,p)*x(:,p)', where z(:,p) is the network output given input
x(:,p).
x Matrix of input patterns, one pattern per column.
N Row vector describing the number of neurons per layer. N(1) is the
size of input pattern vector, N(size(N,'c')) is the size of output
pattern vector (and also target).
W The weight hypermatrix.
af The activation function to be used. This parameter is optional,
default value "ann_log_activ", i.e. the logistic activation function.
Description
This function calculates the Jacobian through a backpropagation algorithm,
for all patterns presented in x.
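Example
A minimal sketch (not part of the original help page), assuming the toolbox
macros are loaded:
rand('seed',0);
N = [2,3,2];
W = ann_FF_init(N);
x = [0.2, 0.9; 0.7, 0.1]';
J = ann_FF_Jacobian_BP(x,N,W);
// J(:,:,p) is the Jacobian of the network output with respect to x(:,p)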
See Also
ANN
ANN_GEN
ANN_FF
scilab-ann-0.4.2.4/help/en_US/ann_FF_Mom_online.xml 0000644 0001750 0001750 00000012762 11441407762 022554 0 ustar sylvestre sylvestre
$LastChangedDate: 2008-03-26 09:50:39 +0100 (mer., 26 mars 2008) $
ann_FF_Mom_online
online backpropagation with momentum.
CALLING SEQUENCE
[W,Delta_W_old]=
ann_FF_Mom_online(x,t,N,W,lp,T[,Delta_W_old,af,ex,err_deriv_y])
PARAMETERS
x Matrix of input patterns, one pattern per column
t Matrix of targets, one pattern per column. Each column has a
corresponding column in x.
N Row vector describing the number of neurons per layer. N(1) is the
size of input pattern vector, N(size(N,'c')) is the size of output
pattern vector (and also target).
W The weight hypermatrix (initialized first through ann_FF_init).
lp Learning parameters [lp(1),lp(2),lp(3),lp(4)].
lp(1)
is the well-known learning parameter of the standard backpropagation
algorithm, W is changed according to the formula:
W(t+1) = W(t) - lp(1) * grad E + momentum term
where t is the (discrete) time and E is the error. Typical
values: 0.1 ... 1. Some networks train faster with even greater
learning parameter.
lp(2)
defines the threshold of error which is backpropagated: an error
smaller than lp(2) (at one neuronal output) is rounded towards
zero and thus not propagated. Typical values: 0 ... 0.1. E.g.
assume that neuron n has the actual output 0.91 and
the target (for that particular neuron, given the corresponding
input) is 1. If lp(2) = 0.1 then the error term associated to n
is rounded to 0 and thus not propagated.
lp(3)
is the momentum parameter. The momentum term added to W is:
momentum term = lp(3) * Delta_W_old
Typical values: 0 ... 0.9999... (smaller than 1).
lp(4)
is the flat spot elimination constant added to the derivative of
activation function, when computing the error gradient, to help
the network pass faster over areas where the error gradient is
small (flat error surface area). I.e. the derivative of
activation is replaced:
f'(total neuronal input) --> f'(total neuronal input) + lp(4)
when computing the gradient. Typical values: 0 ... 0.25.
T The number of epochs (training cycles through the whole pattern set).
Delta_W_old
The previous weight adjusting quantity. This parameter is optional,
default value is:
Delta_W_old = hypermat(size(W)')
NOTE: When calling ann_FF_Mom_online for the first time you should
either:
- not give any value to Delta_W_old
- initialize it to zero using "Delta_W_old=hypermat(size(W)')"
On subsequent calls to ann_FF_Mom_online you should give the value
of Delta_W_old returned by the previous call.
af Activation function and its derivative. Row vector of strings:
af(1)
name of activation function.
af(2)
name of derivative.
Warning: given the activation function y = f(x), the derivative
has to be expressed in terms of y, not x. This parameter is
optional, default value is "['ann_log_activ',
'ann_d_log_activ']", i.e. logistic activation function and its
derivative.
err_deriv_y
the name of error function derivative with respect to network outputs.
This parameter is optional, default value is "ann_d_sum_of_sqr", i.e.
the derivative of sum-of-squares.
ex two-dimensional row vector of strings representing valid Scilab
sequences. ex(1) is executed after the weight matrix has been
updated, after each pattern (not whole set), using execstr. ex(2),
same as ex(1), but is executed once after each epoch. This parameter
is optional, default value is [" "," "] (do nothing).
Description
Returns the updated weight hypermatrix of a feedforward ANN, after
training with a given set of patterns, T times. The algorithm used is
online backpropagation with momentum. Delta_W_old holds the previous W
update (useful for subsequent calls).
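Example
A minimal sketch (not part of the original help page), assuming the toolbox
macros are loaded; it trains the tight 4-2-4 encoder with momentum, with lp
laid out as described above:
rand('seed',0);
x = [1,0,0,0; 0,1,0,0; 0,0,1,0; 0,0,0,1]';
t = x;
N = [4,2,4];
W = ann_FF_init(N);
lp = [2.5, 0, 0.9, 0.25];
Delta_W_old = hypermat(size(W)');
T = 400;
[W, Delta_W_old] = ann_FF_Mom_online(x,t,N,W,lp,T,Delta_W_old);
ann_FF_run(x,N,W)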
See Also
ANN
ANN_GEN
ANN_FF
ann_FF_init
ann_FF_run
scilab-ann-0.4.2.4/help/en_US/build_help.sce 0000644 0001750 0001750 00000000432 11441407762 021321 0 ustar sylvestre sylvestre // ====================================================================
// Copyright INRIA 2008
// Copyright DIGITEO 2010
// Allan CORNET
// ====================================================================
tbx_build_help(TOOLBOX_TITLE,get_absolute_file_path("build_help.sce"));
scilab-ann-0.4.2.4/help/en_US/ann_FF_Jacobian.xml 0000644 0001750 0001750 00000003742 11441407762 022164 0 ustar sylvestre sylvestre
$LastChangedDate: 2008-03-26 09:50:39 +0100 (mer., 26 mars 2008) $
ann_FF_Jacobian
computes the Jacobian by finite differences.
CALLING SEQUENCE
J = ann_FF_Jacobian(x,N,W,dx[,af])
PARAMETERS
J The Jacobian hypermatrix: each J(:,:,p) has the same structure as
z(:,p)*x(:,p)', where z(:,p) is the network output given input
x(:,p).
x Matrix of input patterns, one pattern per column.
N Row vector describing the number of neurons per layer. N(1) is the
size of input pattern vector, N(size(N,'c')) is the size of output
pattern vector (and also target).
W The weight hypermatrix.
dx The quantity used to perturb each x(i,p) in turn.
af The activation function to be used. This parameter is optional,
default value "ann_log_activ", i.e. the logistic activation function.
Description
This function calculates the Jacobian through a finite-differences
procedure, for all patterns presented in x.
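Example
A minimal sketch (not part of the original help page), assuming the toolbox
macros are loaded; the result can be cross-checked against the
backpropagation version ann_FF_Jacobian_BP:
rand('seed',0);
N = [2,3,2];
W = ann_FF_init(N);
x = [0.2, 0.9; 0.7, 0.1]';
J_fd = ann_FF_Jacobian(x,N,W,1e-5);
J_bp = ann_FF_Jacobian_BP(x,N,W);
abs(J_fd(:,:,1) - J_bp(:,:,1))   // should be close to zero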
See Also
ANN
ANN_GEN
ANN_FF
scilab-ann-0.4.2.4/help/en_US/ann_log_activ.xml 0000644 0001750 0001750 00000003166 11441407762 022052 0 ustar sylvestre sylvestre
$LastChangedDate: 2008-03-26 09:50:39 +0100 (mer., 26 mars 2008) $
ann_log_activ
logistic activation function
CALLING SEQUENCE
y = ann_log_activ(x)
PARAMETERS
y Matrix containing the activation, one pattern per column. For each
pattern:
1
y(i) = --------------
1 + exp[-x(i)]
x Matrix containing the total neuronal input, one pattern per column.
For each pattern: each x(i) for the corresponding i-th neuron on the
current layer.
Description
This function is the default neuronal activation function. Any other,
user defined, function should have the same input and output format for
variables.
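Example
A short sketch (not part of the original help page): the function is applied
element by element, and ann_log_activ(0) returns 0.5.
y = ann_log_activ([-1; 0; 1])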
See Also
ANN
ANN_FF
scilab-ann-0.4.2.4/help/en_US/ann_FF_run_nb.xml 0000644 0001750 0001750 00000004311 11441407762 021732 0 ustar sylvestre sylvestre
$LastChangedDate: 2008-03-26 09:50:39 +0100 (mer., 26 mars 2008) $
ann_FF_run_nb
run patterns through a feedforward net (without biases).
CALLING SEQUENCE
y = ann_FF_run_nb(x,N,W[,l,af])
PARAMETERS
y Matrix of outputs, one pattern per column. Each column has a
corresponding column in x.
x Matrix of input patterns, one pattern per column
N Row vector describing the number of neurons per layer. N(1) is the
size of input pattern vector, N(size(N,'c')) is the size of output
pattern vector.
W The weight hypermatrix (initialized first through ann_FF_init_nb).
l Defines the "injection" layer and the output layer. x patterns are
injected into layer l(1) as coming from layer l(1) - 1. y outputs are
collected from the outputs of layer l(2). This parameter is
optional, default value is [2,size(N,'c')], i.e. the whole network.
Warning: l(1)=1 does not make sense.
af String containing the name of activation function.
This parameter is optional, default value "ann_log_activ", i.e.
logistic activation function.
Description
This function is used to run patterns through a feedforward network as
defined by N and W. This function is to be used on networks without
biases.
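Example
A minimal sketch (not part of the original help page), assuming the toolbox
macros are loaded: run two patterns through a small network without biases,
then collect only the hidden-layer outputs.
rand('seed',0);
N = [4,3,4];
W = ann_FF_init_nb(N,[-1,1]);
x = [1,0,0,0; 0,1,0,0]';
y_full = ann_FF_run_nb(x,N,W)
y_hidden = ann_FF_run_nb(x,N,W,[2,2])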
See Also
ANN
ANN_FF
scilab-ann-0.4.2.4/help/en_US/ann_sum_of_sqr.xml 0000644 0001750 0001750 00000003056 11441407762 022256 0 ustar sylvestre sylvestre
$LastChangedDate: 2008-03-26 09:50:39 +0100 (mer., 26 mars 2008) $
ann_sum_of_sqr
calculates sum-of-squares error
CALLING SEQUENCE
E = ann_sum_of_sqr(y,t)
PARAMETERS
E The sum-of-squares error.
y Matrix containing the actual network outputs, one pattern per
column; each column has a corresponding column in t.
t Matrix containing the targets, one pattern per column; each column
has a corresponding column in y.
Description
This function calculates the sum-of-squares error given y and t. Any
other, user defined, error function should have the same input and output
format for variables.
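Example
A short sketch (not part of the original help page): the error of two
2-dimensional output patterns against their targets.
y = [0.9, 0.1; 0.2, 0.8];
t = [1, 0; 0, 1];
E = ann_sum_of_sqr(y,t)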
See Also
ANN
ANN_FF
scilab-ann-0.4.2.4/help/en_US/ann_FF_grad.xml 0000644 0001750 0001750 00000005033 11441407762 021366 0 ustar sylvestre sylvestre
$LastChangedDate: 2008-03-26 09:50:39 +0100 (mer., 26 mars 2008)
$
ann_FF_grad
error gradient through finite differences.
CALLING SEQUENCE
grad_E = ann_FF_grad(x,t,N,W,dW[,af,ef])
PARAMETERS
grad_E The error gradient, same layout as W.
x Input patterns, one per column.
t Target patterns, one per column. Each column has a corresponding
column in x.
N Row vector describing the number of neurons per layer. N(1) is
the size of input pattern vector, N(size(N,'c')) is the size of output
pattern vector (and also target).
W The weight hypermatrix.
dW The quantity used to perturb each W parameter.
af The name of activation function to be used (string). This
parameter is optional, default value "ann_log_activ", i.e. the logistic
activation function.
ef The name of error function to be used (string). This parameter is
optional, default value "ann_sum_of_sqr", i.e. the sum-of-squares error
function.
Description
Calculates the error gradient through a (slow) finite-differences
procedure. Each element W(n,i,l) is changed to W(n,i,l)-dW, the error
is calculated, and the process is repeated for W(n,i,l)+dW.
From the values obtained the partial derivative of the
sum-of-squares error function, with respect to W(n,i,l), is calculated and
the value of gradient returned.
This process is very slow (compared to the backpropagation
algorithms) so it is to be used only for testing purposes.
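Example
A minimal sketch (not part of the original help page), assuming the toolbox
macros are loaded; the finite-difference gradient of a tight 4-2-4 encoder,
usable as a slow cross-check of the backpropagation training routines.
x = [1,0,0,0; 0,1,0,0; 0,0,1,0; 0,0,0,1]';
t = x;
N = [4,2,4];
W = ann_FF_init(N);
grad_E = ann_FF_grad(x,t,N,W,1e-5);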
See Also
ANN
ANN_FF