"$d_Qroor""email to Michael Jenkins - conversion of c to nial 20050926.txt" -
replace // with % (the rest of the line is a comment) [move the comment to the end of the line] ;
replace /* with #
replace */ with nothing (i.e. delete it)
replace = with := (as long as the equals sign doesn't have qualifiers, e.g. ==, !=, <=, >=)
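The substitutions above can be sketched as a small line-by-line filter. This is a naive illustration in Python (the function name `c_to_nial_line` is mine, not from the original notes): it does not understand string literals or comments spanning multiple lines, and it leaves ==, !=, <= and >= untouched when rewriting plain = as :=.

```python
import re

def c_to_nial_line(line: str) -> str:
    """Naively apply the C-to-Nial substitutions to one line of C source.
    Illustrative only: ignores string literals and multi-line comments."""
    line = line.replace("/*", "#")               # block comment opener -> #
    line = line.replace("*/", "")                # block comment closer -> nothing
    line = re.sub(r"//\s*(.*)$", r"% \1", line)  # // comment -> % comment
    # plain = becomes := ; qualified forms (==, !=, <=, >=) are left alone
    line = re.sub(r"(?<![=!<>])=(?!=)", ":=", line)
    return line
```

For example, `c_to_nial_line("a = b; // set a")` yields `"a := b; % set a"`.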
****************************
Michael Jenkins, President, Nial Systems, Kingston, Canada
You probably won't remember me, but I did a port of Nial to AmigaDOS (ca 1986-1991?) to the point of having "relatively few bugs" (grin - but it was quite functional!). Then someone stole my computer systems, and I realized that commercialization would take more money than I was comfortable with. (Subsequent to that I found other ways of losing money...)
It's almost as if I feel that I owe you money: I have just started work on a paper that I want to substantially finish by the end of November, and when looking for both QNial and C++ it turns out that you have just posted QNial as Open Source (I'm just using the executable). Anyway, it's been >10 years since I've programmed seriously (overtime etc), although I continue to attend the International Joint Conference on Neural Networks each year as a vacation. There have been a few changes to Nial in that time too (see below).
My current intent is to write some basic neural network code in Nial, and try out some concepts in "Computational Neurogenetic Modelling" (a mix of genetics and NNs). As I'm also in the middle of a new-job search, the timing is perhaps not guaranteed. There should be APL code around for NNs, but if you have seen NN code in Nial, please let me know who wrote it and where to find it. And if I get things to work out, I'm assuming that you are willing to post the source on your site.
Bill Howell
home 1-613-265-5696
work: Natural Resources Canada, Ottawa 1-613-992-1589
Changes that I've noticed so far:
flip -> pack; syntactic dot; plus != +;
great GUI interface;
'\' for line extension in .ndf files (not required?);
faults normally interrupt execution;
and others I'll encounter with time I suppose.
Glitches
Replace feature doesn't work properly in edit windows
<= comparison seems to work sometimes but not others?
Nice-to-haves (some suggestions)
C/C++-style comments ('/*' etc.), and the ability to have empty lines in source code for expressions
Much more complete set of string operations and transformers! (in theory, this should be easy, in practice it takes time)
I should make a note of any issues that I see.
*****************************
Timothy Masters' files - "Advanced Algorithms for Neural Networks: A C++ Sourcebook", John Wiley, 1995
ACT_FUNC - Compute the activation function - The default used here is f(x) = TanH ( 1.5 x )
actderiv - computes the derivative as a function of the activation level
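The default activation f(x) = tanh(1.5 x) has the convenient property that its derivative can be written in terms of the activation level itself, which is presumably why actderiv takes the activation rather than the input: d/dx tanh(1.5 x) = 1.5 (1 - f^2). A Python sketch (function names borrowed from the file descriptions above; the original C++ is not reproduced here):

```python
import math

def act_func(x):
    """Default activation from the sourcebook's notes: f(x) = tanh(1.5 x)."""
    return math.tanh(1.5 * x)

def actderiv(f):
    """Derivative expressed as a function of the activation level f:
    d/dx tanh(1.5 x) = 1.5 * (1 - tanh^2(1.5 x)) = 1.5 * (1 - f*f)."""
    return 1.5 * (1.0 - f * f)
```

This identity saves recomputing tanh during backpropagation, since the forward pass already produced f.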
ACTIVITY - Evaluate the activity of a single LayerNet neuron
partial derivatives of the activations with respect to real and imaginary parts
several versions, according to whether the inputs and outputs are real or complex
CLASSES - Headers for all classes
CONFUSE - All routines related to the CLASSIFY confusion matrix
reset_confusion, show_confusion, save_confusion, classify_from_file
conjgrad - Conjugate gradient learning
Normally this returns the scaled mean square error
CONST.H - System and program limitation constants
This also contains typedefs, structs, et cetera.
CONTROL - Routines related to processing user's control commands
direcmin - Minimize along a direction
Normally this returns the mean square error, which will be 0-1.
If the user interrupted, it returns the negative mean square error.
DOTPROD - Compute dot product of two vectors
DOTPRODC - Compute dot product of two complex vectors
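For reference, the two dot products can be sketched in a few lines of Python (illustrative only; whether DOTPRODC conjugates one argument is not stated in these notes, so the sketch does the plain unconjugated sum of products):

```python
def dotprod(x, y):
    """Real dot product: sum of elementwise products."""
    return sum(a * b for a, b in zip(x, y))

def dotprodc(x, y):
    """Complex dot product, here the plain (unconjugated) sum of
    products - an assumption, since the original convention isn't given."""
    return sum(a * b for a, b in zip(x, y))
```

E.g. `dotprod([1, 2, 3], [4, 5, 6])` gives 32.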
EXECUTE - All routines related to AUTO and MAPPING network execution
CLASSIF stuff is in CONFUSE.CPP
But note that EXECUTE is certainly valid in CLASSIF mode. It outputs
neuron activations, just like in the other modes.
FLRAND - Generate full-period 32 bit random numbers
This routine is nonportable in that it assumes 32-bit longs!
void sflrand ( long iseed ) - Set the random seed
long flrand () - Return a full 32 bit random integer
double unifrand () - Return uniform random in [0,1)
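The three-function interface (seed, 32-bit integer, uniform in [0,1)) can be mimicked portably. This Python sketch keeps the same names but does NOT reproduce Masters' generator; as a stand-in it uses a full-period 32-bit linear congruential generator with the well-known Numerical Recipes constants, which sidesteps the nonportability note above by masking to 32 bits explicitly:

```python
_seed = 1  # module-level state, as the C original presumably keeps a static long

def sflrand(iseed):
    """Set the random seed (interface mirrors the sourcebook's sflrand)."""
    global _seed
    _seed = iseed & 0xFFFFFFFF

def flrand():
    """Return a 32-bit random integer.  Stand-in algorithm: LCG with
    a = 1664525, c = 1013904223, m = 2^32 (full period for this modulus)."""
    global _seed
    _seed = (1664525 * _seed + 1013904223) & 0xFFFFFFFF
    return _seed

def unifrand():
    """Uniform random in [0, 1)."""
    return flrand() / 4294967296.0
```

Re-seeding with the same value reproduces the same stream, which is what makes annealing and perturbation runs repeatable.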
FUNCDEFS.H - Header file
gradient - Called by CONJGRAD to compute weight gradient. Also called by SSG.
LAYERNET - All principal routines for LayerNet processing
Constructor, Destructor
copy_weights - Copy the weights from one network to another
zero_weights - Zero all weights in a network
trial - Compute the output for a given input by evaluating network
trial_error - Compute the mean square error for the entire training set
learn
wt_print - Print weights as ASCII to file
wt_save - Save weights to disk (called from WT_SAVE.CPP)
wt_restore - Restore weights from disk (called from WT_SAVE.CPP)
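The trial / trial_error pair above can be illustrated with a minimal one-hidden-layer forward pass in Python. Everything here is an assumption for illustration: the weight layout (each row one neuron, trailing bias term), the linear output layer, and the convention of summing squared errors across outputs and averaging over training cases:

```python
import math

def trial(weights_hid, weights_out, x):
    """Hypothetical one-hidden-layer forward pass: each row of weights_*
    holds one neuron's input weights with a trailing bias term."""
    hid = [math.tanh(1.5 * (sum(w * v for w, v in zip(row[:-1], x)) + row[-1]))
           for row in weights_hid]
    return [sum(w * h for w, h in zip(row[:-1], hid)) + row[-1]
            for row in weights_out]

def trial_error(weights_hid, weights_out, tset):
    """Mean squared error over the whole training set: squared error is
    summed across outputs and averaged over cases (one plausible convention)."""
    total = 0.0
    for x, target in tset:
        out = trial(weights_hid, weights_out, x)
        total += sum((o - t) ** 2 for o, t in zip(out, target))
    return total / len(tset)
```

A learning routine then just searches weight space for a minimum of trial_error.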
LEV_MARQ - Do Levenberg-Marquardt direct descent learning
Normally this returns the scaled mean square error.
If the user interrupted, it returns the negative mean square error.
Also - local routines to add correction vector to weight vector, debugging
LIMIT - Limit a point (dead simple)
LM_CORE - Called by LEV_MARQ to compute error, alpha, and beta
Routines for real & complex, input or output
MEM - Supervised memory allocation
MESSAGES - All routines for issuing messages to user
MLFN - Main program for implementing all multiple-layer feedforward nets
Howell: this only looks like a user interface loop & parameter/state setting
PARSDUBL - ParseDouble routine to parse a double from a string
PERTURB - Called by annealing routines to perturb coefficients
RANDOM - Assorted non-uniform random number generators.
They all call an external uniform generator, unifrand().
normal () - Normal (mean zero, unit variance)
normal_pair ( double *x1 , double *x2 ) - Pair of standard normals
beta ( int v1 , int v2 ) - Beta with parameters v1 / 2 and v2 / 2
rand_sphere ( int nvars , double *x ) - Uniform on unit sphere surface
cauchy ( int n , double scale , double *x ) - Multivariate Cauchy
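All of these reduce to transforms of unifrand(). As one example, normal_pair can be sketched with the Box-Muller transform (whether Masters uses Box-Muller or the polar/rejection variant is an assumption; Python's stdlib random stands in for unifrand here):

```python
import math
import random

def normal_pair():
    """Pair of independent standard normals (mean 0, unit variance)
    via the Box-Muller transform."""
    u1 = 1.0 - random.random()   # in (0, 1]: avoids log(0)
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return (r * math.cos(2.0 * math.pi * u2),
            r * math.sin(2.0 * math.pi * u2))
```

normal() then just returns one element of the pair, possibly caching the other.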
REGRESS - Use regression to compute LayerNet output weights
CLASSes - TrainingSet, SingularValueDecomp, LayerNet
REGRS_DD - Learn by hybrid of regression plus direct descent
This method is valid only if there is no hidden layer.
If the output is linear and MSE error used, this is direct: call regress.
Otherwise we call regress to get the starting weights, then call conjgrad
to optimize. We may call anneal1 to break out of a local minimum, but
there is no point to an outermost loop as in AN1_CJ.
SHAKE - Randomly perturb a point
SSG - Use stochastic smoothing with gradients to learn LayerNet weights.
SVDCMP - SingularValueDecomp object routines for performing singular
value decomposition on a matrix, and using backsubstitution
to find least squares solutions to simultaneous equations.
The decomposition algorithm is yet another implementation of
the classic method of Golub and Reinsch (Wilkinson, J.H. and
Reinsch, C., 1971, 'Handbook for Automatic Computation' vol. 2)
Some tricks have been taken from later sources. See Press et al., 'Numerical Recipes in C', for a complete list of references.
TEST - All routines related to AUTO and GENERAL network testing
CLASSIF stuff is in CONFUSE.CPP
TRAIN - All routines related to training
Constructor, Destructor, train - Add members to a training set
VECLEN - Compute the SQUARED length of a vector
WT_SAVE - Save and restore learned weights to/from disk files
endfile