Neural Networks - Appraisal and Overview

"My brain is a newwrall net-virk, a luurning computah ..." - so said a young Arnie in the Terminator movies, back in the day.

Neural networks were a "fad" that came in, went out, came back in, and seemingly went back out again.

To be honest, they are a useful technique that is often overlooked these days - when you have lots of data and the problem is not too "hard", they can work really well. Finding the best one for the job can be tricky, but so what? - there are no free lunches anywhere. It should be pointed out that there are lots of applications which use them, and companies which have made big money selling them, especially to the mil-ind complex - how do you think all those security applications work? The ones that do number plate recognition, facial scanning and so on - yup, it's neural networks, albeit heavily customised ones.

The trouble was, as with so many things, they were oversold - "all we need is a big neural network and computers will talk, see, tell jokes, understand our facial expressions, have emotions" - a minor technical breakthrough would take us straight into Metropolis or Blade Runner, and of course Skynet / Terminator.

Of course, it was all absolute bollocks. You have to cut the Yanks a bit of slack though - they are always making crass exaggerations, it's just the way they do things. Nothing is ever "quite good" with them; it always has to "TOTALLY ROCK, MAN!" or be "AWESOME, DUDE!"

One strand of the neural hype was of course - "predict the stock market!" - and lots of applications were created, often selling for many thousands of dollars until they got found out. To be fair, there are a few still in business, but their neural network techniques are always presented as being part of the "technical analysis" toolbox rather than as individual star performers - just another indicator like the RSI or Bollinger Bands.

One big problem with the "use a neural network" idea is - "which one?" Mostly, when you think about using a neural network it's because you lack a good model but have lots of data, and the lack of a model usually means you have no prior expectation of what the "right" neural network is. So you have a "solution", but it reduces to a huge optimisation problem over the neural network search space - architecture, size, training method, and other hyperparameters. Where do you start? One researcher has used genetic algorithms to grow neural networks - this is clever, but it is also the obvious thing to do when faced with a vast search problem and no idea where to start (a sketch of the idea follows below). There is also the "small" fact that he needed to create very specialised computer hardware using FPGAs; not only that, but I don't think you can buy this kit anywhere, especially since I think he went to work for DARPA or the Pentagon, who will no doubt want to stick these "learning computers" into the next generation of Predator drones.
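
To make the "grow a network" idea a little more concrete, here is a minimal sketch of a genetic search over just two hyperparameters, hidden-layer size and learning rate. It is my toy illustration, not the researcher's method, and the fitness function is a made-up stand-in; in a real run you would train a network with each candidate's settings and score it on held-out data (and no FPGAs are involved).

    import random

    def fitness(candidate):
        # Stand-in score: pretend networks near 64 hidden units and a
        # learning rate near 0.01 do best on some validation set.
        hidden, lr = candidate
        return -abs(hidden - 64) - 100 * abs(lr - 0.01)

    def mutate(candidate):
        # Nudge one hyperparameter at random.
        hidden, lr = candidate
        if random.random() < 0.5:
            hidden = max(1, hidden + random.choice([-8, 8]))
        else:
            lr = max(1e-4, lr * random.choice([0.5, 2.0]))
        return (hidden, lr)

    population = [(random.randint(4, 256), 10 ** random.uniform(-4, -1))
                  for _ in range(20)]

    for generation in range(30):
        # Keep the fittest half, refill by mutating random survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[:10]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(10)]

    best = max(population, key=fitness)
    print("best hidden units: %d, best learning rate: %.4f" % best)

Even this toy version makes the point: with no prior model you are left blindly evaluating candidates, and in real life every "evaluation" is a full training run.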

Neural networks still have an allure about them though - how many intrepid seekers out there have read a little about them, then gone hunting on the internet for code to download and stock price data from Yahoo, then spent weeks building their own models, only to find ... they suck, dude, they really do!

The problem with this particular problem is that what we want out of our network is typically a multi-step prediction - and this is very hard, because the errors compound, so a few steps ahead you are probably way off. There is also the problem that stock price data is very noisy, and the training algorithms will take a very long time to run - if they converge at all, that is. Also, the typical "stopped training" technique for deciding when to stop doesn't work very well. Here's another nasty surprise: the multi-step predictions you produce will vary enormously with the amount of training - stop training after 9999 passes and what you get is wildly different from the result after 10000 passes. Nor do neural networks give probabilistic information about their predictions - how believable is our prediction? Well, we don't know, at least not with typical networks.
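
To see why the errors compound, here is a toy illustration (not real market data, and not anyone's trading model): a "learned" one-step predictor whose coefficients are only slightly wrong, fed back on its own output to produce a multi-step forecast.

    def true_next(x):
        # The hypothetical underlying process we are trying to track.
        return 0.95 * x + 1.0

    def model_next(x):
        # An imperfect learned model: right shape, slightly wrong numbers.
        return 0.93 * x + 1.1

    x_true = x_pred = 10.0
    for step in range(1, 21):
        x_true = true_next(x_true)
        x_pred = model_next(x_pred)   # each prediction feeds on the last one
        if step % 5 == 0:
            print("step %2d  true %6.2f  predicted %6.2f  error %+6.2f"
                  % (step, x_true, x_pred, x_pred - x_true))

A per-step error you would barely notice in a one-step test grows steadily over the horizon; add real market noise on top, and you can see why all those downloaded models disappoint.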

It is worth noting that the most recent revival of neural networks has been driven by Google, where they call it "deep learning". The approach taken is interesting because it departs from most recent research, which tends to progress as follows: mess around with some small neural nets on a toy problem, solve it, apply it to something more substantial, fail badly, then concentrate on modifying the network architecture and the training algorithms to somehow incorporate your "domain knowledge" about the problem in order to "help out" the system. A case in point is visual processing: you cannot get your net to do what you want, so you break the problem down and use several networks - a typical approach might be to extract hotspots, lines, edges and corners, associate these into regions, identify shapes, colours and so on, until you can finally say - "it's a green apple being eaten by a labrador puppy". But here is the thing - the Google researchers have gone the other way entirely. They have decided to use general structures and to "go big", or rather "go massive", creating networks with a billion connections living across their huge distributed computing network, and using an unsupervised learning approach, which is cheaper when creating your training data. This is not something you can do yourself on your desktop - the cloud perhaps, but that might be too expensive (only Google has the resources to try this brute-force approach - and I for one -Welcome!- our google-land wonka-chocolate-factory android-skynet-overlords.)
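
The "unsupervised" part is worth a word, because it is what makes the data cheap: the network is trained only to reconstruct its own input, so nobody has to label anything. Here is a minimal sketch of that idea - a single-hidden-layer autoencoder in NumPy, obviously a toy and nothing like a billion-connection distributed system.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))    # unlabelled data: 200 samples, 8 features

    n_hidden = 3
    W1 = rng.normal(scale=0.1, size=(8, n_hidden))   # encoder weights
    W2 = rng.normal(scale=0.1, size=(n_hidden, 8))   # decoder weights
    lr = 0.05

    for epoch in range(1000):
        H = np.tanh(X @ W1)          # encode
        X_hat = H @ W2               # decode (linear output)
        err = X_hat - X              # reconstruction error is the only signal
        # Backpropagate the squared-error loss through both layers.
        grad_W2 = H.T @ err / len(X)
        grad_H = (err @ W2.T) * (1 - H ** 2)
        grad_W1 = X.T @ grad_H / len(X)
        W2 -= lr * grad_W2
        W1 -= lr * grad_W1

    print("mean squared reconstruction error:", np.mean(err ** 2))

The clever part at Google was not so much the algorithm as the scale: the same learn-from-unlabelled-data idea, but with enormous networks and enormous quantities of raw video frames.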

The "net" result of this study is that Google now has a neural network which can ... recognise cats in you-tube videos ... this may sound silly and pointless, and a bit disappointing for being at the technical cutting edge, but, in fact, "cat-recognition" is a good problem to try - it is just hard enough to tell you something, but not too hard you can't do it; the fruits of this research will, no doubt, soon be found in more practical guises, in their speech recognition engine and so on; for example face recognition whereby your android tablet can look at you and tell whether or not you are "enjoying" the video you are watching; this should be entirely possible - and that kind of unobtrusive feedback is worth a lot of money!