GNU want (another) free AI package release? Yes. But we should train this puppy
Gneural Network out - now let's teach it some tricks...
The GNU free software project has launched version 0.0.1 of its Gneural Network package in response to the “outstanding and truly inspiring” results achieved of late in proprietary artificial intelligence.
The Free Software Foundation (FSF) describes Gneural Network as a GNU package for a programmable neural network, which as of 0.0.1 is “a very simple feed-forward network* which can learn very simple tasks such as curve fitting**” – although more advanced features will hopefully be delivered soon.
The FSF claimed that “the fact that only companies and labs have access to this technology can represent a threat” – hence author and maintainer Jean Michel Sellier's decision to release it under a free licence, the GPL.
Gneural Network is far from the first free offering of neural network software, however. Participants in a discussion on Hacker News, largely sympathetic to the FSF, griped that there was in fact no shortage of freely available software in the neural/deep-learning arena, including releases from IBM, Google, Yahoo! and others. Commenters concurred that what was actually lacking was training data, which is needed in enormous quantities.
Noting this demand, Yahoo! recently released 13.5TB of news interaction data for academic machine learning enthusiasts. As Cambridge's Dr Sean Holden explained to The Register, the size and quality of the data are pretty much vital for those working on a deep network: “The architectures tend to be very large, you have a lot of parameters to set,” said Holden, and so “you tend to need a lot of data to get them to do something useful. So for applications where you have that available then that's brilliant.”
Acknowledging this discrepancy, and perhaps aiming towards the creation of shared datasets for deep network training, the FSF stated: “As a matter of fact, data and trained neural networks are rarely shared, and one should always remember that in practice without the training data it is almost impossible to duplicate results (even when the source code is available).” ®
* The simplest type of neural network, the feed-forward model, only passes information in one direction: forward from the input nodes, then through hidden nodes if there are any, and on to the output nodes. This is in contrast to a recurrent neural network, where connections can form a cycle. Read more here (PDF)
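For the curious, here's a minimal sketch of a feed-forward pass in plain Python/NumPy – made-up layer sizes and random weights, nothing to do with Gneural Network's own code – just to show data flowing from input to hidden to output with no cycles:

```python
import numpy as np

def feed_forward(x, w_hidden, w_output):
    """One forward pass: input -> hidden -> output, no cycles."""
    hidden = np.tanh(x @ w_hidden)   # input layer to hidden layer (tanh activation)
    return hidden @ w_output         # hidden layer to output layer (linear)

# Made-up sizes and random weights, purely for illustration.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 3))          # one sample with three input nodes
w_hidden = rng.normal(size=(3, 4))   # weights from 3 inputs to 4 hidden nodes
w_output = rng.normal(size=(4, 1))   # weights from 4 hidden nodes to 1 output

print(feed_forward(x, w_hidden, w_output))
```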
** A type of model where you work out which curve (curved line) or maths function (say, a sine wave) best fits a given set of values/data points. For the purposes of maths stuff, by the way, any line that isn't perfectly straight counts as a “curve”. The simplest case is a linear model: draw your x/y axes, dot in your data points, then draw a straight line sitting roughly an equal distance from all of them – that's a linear fit, though obviously you want a bit more accuracy than eyeballing it. The standard way to do linear curve fitting properly, short of using software – hey, you can use the Gneural Network! – is the method of “least squares”. Here, you add up the square of each difference between a data point and the value the model predicts for it, then pick the curve's parameters that minimise that total of squared differences. If you liked trig and algebra at school, have a Google and you won't be disappointed.
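And for completeness, a minimal least-squares sketch in plain Python/NumPy – invented data points, not Gneural Network's API – fitting a straight line y = a*x + b by minimising the sum of squared differences:

```python
import numpy as np

# Invented data points, roughly following y = 2x + 1 plus a little noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Design matrix: one column of x values, one column of ones for the intercept.
A = np.column_stack([x, np.ones_like(x)])

# np.linalg.lstsq returns the (a, b) minimising ||A @ [a, b] - y||^2,
# i.e. the total of squared differences described above.
coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
a, b = coeffs

predictions = A @ coeffs
print(f"best fit: y = {a:.2f}*x + {b:.2f}")
print("sum of squared differences:", np.sum((predictions - y) ** 2))
```

Run it and the fit should come out close to y = 2x + 1, the line the noisy example data was made up around.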