
Commit 19bae5e: the first working version (initial commit, 0 parents)

32 files changed, +27064 -0 lines

.gitignore

Lines changed: 7 additions & 0 deletions
/lnn
testenv
*.local
*.o
*.swp
*.tmp
*.nosync

LICENSE

Lines changed: 30 additions & 0 deletions
BSD 3-Clause License

Copyright (c) DONG Yuxuan <https://www.dyx.name>

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.

3. Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Makefile

Lines changed: 36 additions & 0 deletions
.PHONY: test install clean

CC=cc
INSTALL=install
prefix=/usr/local
bindir=$(prefix)/bin

all: lnn

lnn: main.o utils.o matrix.o neunet.o diffable.o
	$(CC) -o $@ $^

main.o: main.c utils.h neunet.h diffable.h
	$(CC) -c -o $@ $<

utils.o: utils.c utils.h
	$(CC) -c -o $@ $<

matrix.o: matrix.c matrix.h
	$(CC) -c -o $@ $<

neunet.o: neunet.c neunet.h matrix.h utils.h
	$(CC) -c -o $@ $<

diffable.o: diffable.c diffable.h
	$(CC) -c -o $@ $<

test: lnn
	./runtest

install: lnn
	$(INSTALL) -d $(bindir)
	$(INSTALL) $< $(bindir)

clean:
	rm -rf lnn testenv *.o *.tmp

README.md

Lines changed: 103 additions & 0 deletions
LNN
===

LNN (Little Neural Network) is a command-line C program for running, training, and testing feedforward neural networks, with the following features:

- Lightweight: a single standalone executable;
- Serves as a Unix filter, so it is easy to combine with other programs;
- Plain-text formats for models, input, output, and samples;
- Compact notation;
- Different activation functions for different layers;
- L2 regularization;
- Mini-batch training.

**Table of Contents**

- [Installation](#installation)
- [Getting Started](#getting-started)
- [Further Documentation](#further-documentation)

Installation
------------

It is better to pick a version from the [release page](https://github.com/dongyx/lnn/releases) than to download the working code, unless you understand the status of the working code. The latest release is always recommended.

    $ make
    $ sudo make install

By default, LNN is installed to `/usr/local`. You can run `lnn --version` to check the installation.

Getting Started
---------------

The following call of LNN creates a network with a 10-dimensional input layer, a 5-dimensional hidden layer, and a 2-dimensional output layer.

    $ lnn train -C q10i5s2s samples.txt >model.nn

The `-C` option creates a new model with the structure specified by its argument. The argument here is `q10i5s2s`. The first character `q` sets the loss function to the quadratic error. The following three groups `10i`, `5s`, and `2s` indicate that there are 3 layers, including the input layer, with dimensions 10, 5, and 2, respectively. The character following each dimension selects the activation function for that layer. Here `i` and `s` denote the identity function and the sigmoid function, respectively ([Further Documentation](#further-documentation)).

The rest of this chapter assumes that the network maps $R^n$ to $R^m$. In other words, it has an $n$-dimensional input layer and an $m$-dimensional output layer.

LNN reads samples from the file operand, or, by default, the standard input. The trained model is printed to the standard output in a text format.

The sample file is a text file containing numbers separated by whitespace characters (space, tab, newline). Every $n+m$ numbers constitute a sample: the first $n$ numbers form the input vector, and the remaining $m$ numbers form the output vector.
66+
LNN supports many training arguments like learning rate, iteration count, and batch size ([Further Documentation](#further-documentation)).
67+
68+
LNN could train a network based on an existed model
69+
by replacing `-C` with `-m`.
70+
71+
$ lnn train -m model.nn samples.txt >model2.nn
72+
73+
This allows one to observe the behaviors of the model in different stages
74+
and provide different training arguments.
75+
76+
The `run` sub-command runs an existed model.
77+
78+
$ lnn run -m model.nn input.txt
79+
80+
LNN reads the input vectors from the file operand, or, by default, the standard input.
81+
The input shall contain numbers separated by white characters
82+
(space, tab, newline).
83+
Each $n$ numbers constitute an input vector.
84+
85+
The output vector of each input vector is printed to the standard output.
86+
Each line contains an output vector.
87+
Components of an output vector are separated by a space.
88+
89+
The `test` sub-command evaluates an existed model.
90+
91+
$ lnn test -m model.nn samples.txt
92+
93+
LNN reads samples from the file operand, or, by default, the standard input.
94+
The mean loss value of the samples is printed to the standard output.
95+
The format of the input file is the same as of the `train` sub-command.
96+
97+
Further Documentation
98+
---------------------
99+
100+
- The [technical report](https://www.dyx.name/notes/lnn.html) serves as an extension of this read-me file.
101+
It contains more details and examples for understanding the design and usage.
102+
103+
- Calling `lnn --help` prints a brief of the command-line options.

diffable.c

Lines changed: 123 additions & 0 deletions
#include <string.h>
#include <math.h>

/* Activations come in pairs: f(y, x, n) applies the function
 * elementwise to x, and df(d, y, n) evaluates the derivative from
 * the already-computed outputs y. */

void ident(double *y, double *x, int n)
{
	memcpy(y, x, n * sizeof *y);
}

void dident(double *d, double *y, int n)
{
	while (n-- > 0)
		*d++ = 1;
}

void sigm(double *y, double *x, int n)
{
	while (n-- > 0)
		*y++ = 1 / (1 + exp(-*x++));
}

void dsigm(double *d, double *y, int n)
{
	for (; n-- > 0; y++)
		*d++ = *y * (1 - *y);
}

void htan(double *y, double *x, int n)
{
	double h;

	while (n-- > 0) {
		h = exp(2 * *x++);
		*y++ = (h-1)/(h+1);
	}
}

void dhtan(double *d, double *y, int n)
{
	for (; n-- > 0; y++)
		*d++ = 1 - (*y)*(*y);
}

void relu(double *y, double *x, int n)
{
	for (; n-- > 0; x++)
		*y++ = *x > 0 ? *x : 0;
}

void drelu(double *d, double *y, int n)
{
	while (n-- > 0)
		*d++ = *y++ > 0;
}

/* Softmax couples all components, so its derivative is a full
 * Jacobian matrix rather than an elementwise vector. */
void smax(double *y, double *x, int n)
{
	double s;
	int i;

	for (s = i = 0; i < n; i++)
		s += (y[i] = exp(x[i]));
	while (n-- > 0)
		*y++ /= s;
}

void dsmax(double **d, double *y, int n)
{
	int i, j;

	for (i = 0; i < n; i++)
		for (j = 0; j < n; j++)
			if (i == j)
				d[i][j] = y[i]*(1-y[i]);
			else
				d[i][j] = -y[i]*y[j];
}

/* Losses: f(ov, tv, n) returns the loss of output ov against target
 * tv, and df(dv, ov, tv, n) fills dv with its gradient w.r.t. ov. */

double quade(double *ov, double *tv, int n)
{
	double s, d;

	for (s = 0; n-- > 0; ov++, tv++) {
		d = *ov - *tv;
		s += d*d / 2;
	}
	return s;
}

void dquade(double *dv, double *ov, double *tv, int n)
{
	while (n-- > 0)
		*dv++ = *ov++ - *tv++;
}

double binxe(double *ov, double *tv, int n)
{
	double s;

	for (s = 0; n-- > 0; ov++, tv++)
		s -= *tv*log(*ov) + (1-*tv)*log(1-*ov);
	return s;
}

void dbinxe(double *dv, double *ov, double *tv, int n)
{
	for (; n-- > 0; ov++, tv++)
		*dv++ = (*ov-*tv) / (*ov*(1-*ov));
}

double xentr(double *ov, double *tv, int n)
{
	double s;

	for (s = 0; n-- > 0;)	/* bug fix: s was used uninitialized */
		s -= *tv++ * log(*ov++);
	return s;
}

void dxentr(double *dv, double *ov, double *tv, int n)
{
	while (n-- > 0)
		*dv++ = -*tv++ / *ov++;
}

diffable.h

Lines changed: 19 additions & 0 deletions
/* differentiable functions and their derivatives */

extern void ident(double *y, double *x, int n);
extern void dident(double *d, double *y, int n);
extern void sigm(double *y, double *x, int n);
extern void dsigm(double *d, double *y, int n);
extern void htan(double *y, double *x, int n);
extern void dhtan(double *d, double *y, int n);
extern void relu(double *y, double *x, int n);
extern void drelu(double *d, double *y, int n);
extern void smax(double *y, double *x, int n);
extern void dsmax(double **d, double *y, int n);

extern double quade(double *ov, double *tv, int n);
extern void dquade(double *dv, double *ov, double *tv, int n);
extern double binxe(double *ov, double *tv, int n);
extern void dbinxe(double *dv, double *ov, double *tv, int n);
extern double xentr(double *ov, double *tv, int n);
extern void dxentr(double *dv, double *ov, double *tv, int n);
