Note: For installation and introduction, please read
      readme.txt!
   
     ================================================

       |   |  |   |  |---  |     |   |   /\   /--- 
       |\  |  |\  |  |     |     |\ /|  |  |  | 
       | \ |  | \ |  |--   |     | v |  |  |  \--\ 
       |  \|  |  \|  |     |     |   |  |  |     | 
       |   |  |   |  |---  |---  |   |   \/   ---/

     Neuronale Netze lernen mit Osnabruecker Studenten
     ================================================
	         VERSION 1.12 April 1997
     ================================================

	       Version 1.10: December 1996
	       Version 1:    December 1995
	       Version 0:    July 1995 


     A note for German users:

     A detailed German manual can be found in the
     doc subdirectory (in ASCII format)!
     A PostScript version is available from our
     homepage.



		     Hello out there!

We'd like to present our newest version of NNelmos
(which is an acronym of 'Neuronale Netze lernen mit
Osnabruecker Studenten' = 'Learning neural nets with
Osnabrueck students').  It's still incomplete and most
probably buggy, but already a bit useful (we hope).

So we want you to test our work and to tell us how
you like it. A questionnaire is enclosed for that
purpose (see answer.nn); e-mail and postal addresses
are listed at the end of this document.

WE DON'T WANT TO MAKE MONETARY PROFIT FROM THIS PROGRAM.
FEEL FREE TO USE AND REDISTRIBUTE IT FREELY.
WE USED DJGPP (2.00 / gcc 2.7.2) BY DJ DELORIE AND THE
ALLEGRO GAME PROGRAMMING LIBRARY BY SHAWN HARGREAVES.
THEIR GREAT WORK DESERVES APPRECIATION; MAKE SURE
TO READ THE COPYRIGHT NOTICES (in the copying subdirectory).

 WHAT IS NNELMOS ? 
 -----------------

 NNELMOS TRIES TO VISUALIZE THE PROCESSES INSIDE A NEURAL
 NET (in this version, the PERCEPTRON, BACKPROPAGATION and
 SOM models are supported).

NNelmos was designed as a little aid in learning about
neural nets.  Theoretical background is important, but
sometimes it's much easier to get the right idea of an
algorithm if you SEE what happens.


 IS IT USEFUL FOR ME ?  
 ---------------------

 SOME BASIC KNOWLEDGE ABOUT NEURAL NETS IS REQUIRED, BUT
 IT'S ALSO SUITABLE FOR ACCOMPANYING LEARNING.

If you have some basic knowledge about neural nets but
think you just didn't get it totally clear, this should be
the right thing for you.  If you always wanted to learn
about them, if you even bought an expensive book but never
read it because it's dry-as-dust: combine it with NNelmos
and things will probably work better.


 HOW DO I START AND USE NNELMOS ?  
 --------------------------------

 Just start NNelmos.exe after you have unpacked it.


 SOM - self organizing maps
 --------------------------

 1. 2D-SOM

 Choose File/New for a 2D-SOM with randomly generated
 data inside a choosable topology. Choose File/New Tsp
 for a demonstration of a SOM solving a TSP (travelling
 salesman) problem. For a quick demonstration you can use
 the default values, but you can freely experiment with
 many parameters.

 Here's a short description of the parameters:

 Neighbours:    Each unit is linked to two or four others,
                giving a grid (4) or a line (2). A ring
                also has two neighbours; the ends of the
                line are linked.
 Num of units:  With four neighbours the root of the number
                of units should be even; more units aren't
                always nicer, and more than 10000 don't
                look any good.
 Radius beginning/Final radius: Correcting the weights of
                one unit affects others as well. The radius
                of this influence decreases exponentially.
                With these two parameters you set the first
                and last value of the radius. For lines (2
                neighbours), try bigger start values.
 Iterations per step: If many iterations are calculated, it
                isn't necessary to display each of them, so
                the graphics are updated only after the
                number of iterations you choose.
 Initial Eta:   The units are 'pulled' in the direction of
                the input values; the strength of this pull
                is Eta. You can set its start value here.
 Iterations:    Computing more iterations isn't always the
                best way to improve the results; the number
                of iterations must fit the other
                parameters. Just try.
 Input Space Topology: For demonstration purposes (as in
                our program) the input values are generated
                randomly. Shapes of the input space other
                than rectangular are possible: we
                implemented circles, triangles, crosses
                and rings.
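 The radius and Eta mechanics described above can be sketched in a
 few lines. This is an illustrative Python sketch, not NNelmos code
 (the program itself is written in C), and all names in it are our
 own:

```python
import math

def som_step(weights, x, eta, radius):
    """One SOM iteration on a 1D chain of units (2 neighbours).

    weights: list of (wx, wy) unit positions in the input space
    x:       randomly drawn input point (px, py)
    eta:     learning rate ('pull' strength towards the input)
    radius:  neighbourhood radius; units within it are pulled too
    """
    # Find the best-matching unit (closest to the input).
    best = min(range(len(weights)),
               key=lambda i: (weights[i][0] - x[0]) ** 2 +
                             (weights[i][1] - x[1]) ** 2)
    # Pull the winner and its neighbours towards the input; the
    # pull falls off with the grid distance to the winner.
    for i, (wx, wy) in enumerate(weights):
        d = abs(i - best)
        if d <= radius:
            h = math.exp(-(d * d) / (2.0 * max(radius, 1e-9) ** 2))
            weights[i] = (wx + eta * h * (x[0] - wx),
                          wy + eta * h * (x[1] - wy))
    return weights

def radius_at(step, steps, r_start, r_final):
    """Radius decreasing exponentially from r_start to r_final."""
    return r_start * (r_final / r_start) ** (step / max(steps - 1, 1))
```

 Here each unit's grid distance to the winning unit feeds a Gaussian
 falloff; NNelmos may weight the neighbourhood differently.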

 After changing parameters (or not) you see the initial
 state of the net (for better performance all start weights
 are located in a middle area). You can step manually
 (Mode/Step), restart or end the calculation, or let the
 net learn automatically (Mode/Run). Help can be obtained
 by clicking Help.

 2. File-Train

 Higher-dimensional self-organizing maps can be trained and
 visualized with NNelmos. Choose File/New to create a new net.
 The data is read from a datafile (e.g. data\som\animals.kpt);
 the parameters are similar to the 2D case except for "contextual
 map": datafiles generated from natural-language ASCII texts
 need this option.

 There are 3 visualisations available for multidimensional SOMs:
 - Options/Show Data: if the datafile contains pictures (8*12 pixels),
                      the data can be shown (try letter.kpt or
                      number.kpt)
 - Options/Show Minunits: the classical visualization
 - Options/Show Minunits with grayscale: the distance between the
                      units is additionally visualized by gray squares

 As a current research topic, contextual maps are included. With
 File/Generate context-SOM datafile, you can analyse natural-language
 texts so that similar words are mapped to similar locations. This
 module can handle any ASCII file; you can change the characters
 belonging to words (such as - /) by editing the file data\wordchar.val.
 Any word contained in the file data\elimword.val will not be taken into
 account for learning (stopwords).
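 The word extraction for contextual maps can be pictured roughly
 like this. The real formats of wordchar.val and elimword.val are
 not documented here, so this Python sketch only illustrates the
 idea; all names are hypothetical:

```python
def tokenize(text, word_chars, stop_words):
    """Split an ASCII text into words, sketching what the
    context-SOM preprocessing might do: extra characters (such
    as '-' or '/') can count as part of a word, and stopwords
    (elimword.val) are dropped before learning."""
    words, current = [], []
    for ch in text.lower():
        if ch.isalnum() or ch in word_chars:
            current.append(ch)
        elif current:                 # word boundary reached
            words.append("".join(current))
            current = []
    if current:
        words.append("".join(current))
    return [w for w in words if w not in stop_words]
```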

 PERCEPTRON 
 ----------

 We included a simple perceptron with two inputs, and you
 can watch it learning (or not learning) AND, OR, NAND and
 XOR.

 You'll see two interesting windows on the screen: one
 shows the input data space (with the four possible
 inputs coloured according to the chosen function to be
 learned); clicking on it makes the perceptron perform
 one learning step. The other shows the neuron with its
 input weights and the output; clicking on it tests the
 currently learned weights with the input samples.

 A white line divides the input space into two halves; one
 of them contains the positively classified values, the
 other one the negative ones. The perceptron has learned
 correctly if this line separates the positive and negative
 examples. You can see that a perceptron cannot learn XOR:
 no single line can divide the input space so that the
 positive and negative examples are separated.
 Everything else should be self-explanatory.
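 The learning step performed on each click follows the classic
 perceptron rule. A minimal Python sketch, not the NNelmos C code
 (the function name and the learning rate are our own choices):

```python
def perceptron_train(samples, eta=0.5, epochs=100):
    """Train a two-input perceptron with a bias weight.
    samples: list of ((x1, x2), target) with targets 0 or 1.
    Returns (w1, w2, bias); if the data isn't linearly
    separable (XOR), the last weights are returned."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), t in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = t - out
            if err:
                errors += 1
                # Move the separating line towards the
                # misclassified example.
                w1 += eta * err * x1
                w2 += eta * err * x2
                b += eta * err
        if errors == 0:   # AND/OR/NAND converge; XOR never does
            break
    return w1, w2, b
```

 AND converges after a few epochs; feeding it the XOR samples
 instead leaves misclassified points forever, as described above.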


 BACKPROPAGATION 
 ---------------

 Backpropagation is integrated into NNelmos in two
 different ways: There's a standard backpropagation
 algorithm built in that's capable of learning anything a
 backpropagation net can learn. It's fed with datafiles
 that are simple to build (see the Datafile section) and
 parameters that can be modified. The other backpropagation
 part is the NeuroCar, the simulation of a car that
 (hopefully) can keep an appropriate distance to a pace
 car. For more details see the NeuroCar section.

 The backpropagation user interface closely resembles the
 Kohonen one. First you have to specify a file to be
 loaded; some nice ones already come with NNelmos, just
 test them.
 You can change the following parameters:
 - # hidden units: How many units shall be in the hidden
 	layer? For XOR, 3 works fine, but also try 2!
 - Temp:        Temperature for the learning function
 - Eta: 	The learning rate
 - Error:       Error value at which to stop the calculation
 - Show net:    Don't display the weight changes in the
	 net; useful when calculating larger nets.
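 One learning step of such an algorithm can be sketched as plain
 gradient descent with sigmoid units. This Python sketch assumes a
 tiny 2-N-1 net with bias weights and omits the Temp parameter; it
 is an illustration, not the NNelmos implementation:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bp_step(w_hid, w_out, inputs, target, eta):
    """One backpropagation step on a 2-N-1 net.
    w_hid: one weight list per hidden unit: [w1, w2, bias]
    w_out: hidden-to-output weights plus a final bias entry.
    Returns the squared error before the update."""
    # Forward pass
    hid = [sigmoid(w[0] * inputs[0] + w[1] * inputs[1] + w[2])
           for w in w_hid]
    out = sigmoid(sum(w_out[i] * hid[i] for i in range(len(hid)))
                  + w_out[-1])
    err = target - out
    # Backward pass: output delta first, then hidden deltas
    d_out = err * out * (1 - out)
    d_hid = [d_out * w_out[i] * hid[i] * (1 - hid[i])
             for i in range(len(hid))]
    # Weight updates (Eta = learning rate, as in the dialog above)
    for i in range(len(hid)):
        w_out[i] += eta * d_out * hid[i]
    w_out[-1] += eta * d_out
    for i, w in enumerate(w_hid):
        w[0] += eta * d_hid[i] * inputs[0]
        w[1] += eta * d_hid[i] * inputs[1]
        w[2] += eta * d_hid[i]
    return err * err
```

 Repeating such steps over all patterns until the error drops below
 the chosen Error value is the whole training loop.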

 You can then watch the net learning, using Step or Run
 just the same way as with Kohonen nets. Help displays a
 little help text; Test shows you what the net has learned:
 the input units are coloured according to the current
 pattern's value (RED=0, GREEN=1), the output units are
 coloured according to their actual activation and
 surrounded by a circle coloured according to the expected
 activation. So if you see a red filled circle with a
 larger unfilled green circle around it, something wasn't
 learned correctly. End ends the calculation, Restart
 begins a new one.


 NeuroCar 
 --------

 The NeuroCar is trained with a standard backpropagation
 algorithm, using a net with 3 inputs and one output.

 Inputs: 
 1. Speed of the pace car (0...1 = 0...100 km/h) 
 2. Speed of the NeuroCar (0...1 = 0...100 km/h) 
 3. Distance between them (0...1 = 0...200 metres)

 Output: 
 Change of speed for NeuroCar (0..1 = -10...+9)
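 The scalings above can be written down directly. A trivial Python
 sketch; that NNelmos maps the ranges exactly linearly is our
 assumption:

```python
def encode_inputs(pace_kmh, neuro_kmh, distance_m):
    """Scale the NeuroCar quantities into the net's 0...1 range."""
    return (pace_kmh / 100.0,      # 0...100 km/h -> 0...1
            neuro_kmh / 100.0,     # 0...100 km/h -> 0...1
            distance_m / 200.0)    # 0...200 m    -> 0...1

def decode_output(out):
    """Map the net's 0...1 output back to a speed change of -10...+9."""
    return -10.0 + out * 19.0
```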

 There's a built-in, pretrained car (which was trained with
 the fabulous SNNS (Stuttgart Neural Network Simulator)
 and converted with snns2c; thanks to the authors of SNNS!)
 which keeps the distance quite well (try the 'linear'
 acceleration function...), and you can use datafiles to
 train it (some not-so-good ones are included; make better
 ones!).

 First choose the default car and leave the parameters
 unchanged.  You can see a street at the bottom of the
 screen with three cars on it.  The right lower one is the
 pace car, the left lower one the NeuroCar, the upper one
 is a car driven by a formula (which can be found in
 'auto.c'). At the left side of the screen you see a red
 bar marking the NeuroCar's speed (you can also see a black
 line which marks the Pace Car's speed). At the right side
 is a red bar for the pace car.  The orange box in the
 middle contains five graphs: the lower three for the speed
 of the three cars, the upper two for the distance between
 NeuroCar/formula car and pace car. The standard buttons at
 the bottom work as with Kohonen and Backpropagation.

 The parameters you can change: 
 - initial speed of pace car (0...100 km/h) 
 - initial speed of NeuroCar (0...100 km/h)
 - initial distance between them (0...200 metres) 
 - acceleration function: How shall the pace car change its
		speed?
 	random: speed changes randomly
 	sine:   speed change is calculated by a sine
 		function
	linear: the pace car begins accelerating by 1,
		then braking with -1, then accelerating
		by 2, braking by 2, .....

 If you created a good datafile for the NeuroCar, let us
 know!


 DATAFILES 
 ---------

 The NNelmos datafiles are very similar to SNNS datafiles;
 only the header is a bit different. Look at a sample
 datafile:

	# Comments are allowed at any place 
	# This is a XOR Pattern file 
	# 
	# The next line indicates that this is 
	# a datafile for Backpropagationtraining 
	#
	Datafile: bptraining0

	# How many Examples do we have (XOR has 4!) 
	#
	Examples: 4

	# How many input units are there 
	Inputs:  2

	# and how many output units 
	Outputs: 1

	# Now the examples: one line with inputs, then
	# one line with outputs.

	# The first one is (0 XOR 0) = 0 
	0 0 
	0

	# (0 XOR 1) = 1 
	0 1 
	1

	# (1 XOR 0) = 1 
	1 0 
	1

	# (1 XOR 1) = 0 
	1 1 
	0

 All datafiles are built like that; for a more complex one,
 have a look at auto.pat.
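 A reader for this format can be sketched in a few lines. The real
 loader is part of the C sources; this Python sketch is based only
 on the sample above:

```python
def parse_datafile(text):
    """Parse an NNelmos-style pattern file.
    Returns (header, examples), where examples is a list of
    (input_values, output_values) pairs."""
    header, numbers = {}, []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # comments anywhere
        if not line:
            continue
        if ":" in line:                       # e.g. "Inputs: 2"
            key, value = line.split(":", 1)
            header[key.strip()] = value.strip()
        else:                                 # pattern values
            numbers.extend(float(tok) for tok in line.split())
    n_in, n_out = int(header["Inputs"]), int(header["Outputs"])
    examples, i = [], 0
    while i < len(numbers):
        examples.append((numbers[i:i + n_in],
                         numbers[i + n_in:i + n_in + n_out]))
        i += n_in + n_out
    return header, examples
```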

 WHAT FURTHER WORK WILL BE DONE ?
 --------------------------------

 Better (english) documentation 
 Complete help-Files
 Translation of documentation??
 Port to C++ 
 ...


 HOW CAN I REACH YOU ?  
 ---------------------

 Email: nnadmin@cl-ki.uni-osnabrueck.de
 	msaure@rz.uni-osnabrueck.de
 	tthelen@rz.uni-osnabrueck.de

 Snail-Mail: 
 	Tobias Thelen and Michael Saure
 	Feldbreede 7
 	49078 Osnabrueck 
 	Germany

 NNelmos homepage:
  http://www.cl-ki.uni-osnabrueck.de/~nntthele/nnelmos/
  http://www-cl-ki.uni-osnabrueck.de/~nnadmin

 DON'T FORGET THE QUESTIONNAIRE!!!! (answer.nn) THANK YOU!
