The first step in training a neural network to solve a problem is usually the selection of an architecture: a specification of the number of computational nodes in the network and the connections between them. Architectural decisions are generally based on historical precedent, intuition, and plenty of trial and error.
In a theoretical paper I presented last week at the 28th International Conference on Artificial Neural Networks in Munich, I show that arbitrarily selecting a neural architecture is unlikely to provide the best solution to a given machine learning problem, no matter which architecture is picked, which learning algorithm is used, or how training parameters such as batch size or learning rate are tuned.
Rather, my paper suggests, we should use computational methods to generate neural architectures tailored to specific problems. Only by considering a vast space of possibilities can we identify an architecture that comes with theoretical guarantees on the accuracy of its computations.
In fact, the paper is more general than that. Its results don’t just apply to neural networks. They apply to any computational model, provided that it’s Turing equivalent, meaning that it can compute any function that the standard computational model — the Turing machine — can.
To be more specific, we must introduce the function approximation problem. This is a common mathematical formulation of what machine learning actually does: given a function (i.e., your model) and a set of samples, you search through the parameters of the function so that it approximates the outputs of a target function (i.e., the distribution of your data).
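In symbols (the notation here is mine, not taken from the paper): given samples $x_1, \dots, x_N$, a target function $g$, and a parametric model $f_\theta$, training amounts to solving

$$\theta^\ast \;=\; \arg\min_{\theta}\; \frac{1}{N}\sum_{i=1}^{N} \ell\big(f_\theta(x_i),\, g(x_i)\big),$$

where $\ell$ is a loss function, such as squared error, that measures how far the model's outputs are from the target's.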
Sequential thinking
My paper rethinks the function approximation problem, with the aim of accounting for modern developments in machine learning, such as deep learning or — you guessed it — the automated design of neural architectures. I reformulate function approximation as the problem of finding a sequence of known functions that approximates the outputs of a target function. This has the advantage of allowing us to better model neural networks, by characterizing the architecture (a sequence of functions) according to its ability to approximate the target function.
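In the same informal notation — this is my paraphrase of the setup, not the paper's exact definition — the sequential version fixes a library $\mathcal{F}$ of known functions and a sequence length $n$, and asks for the composition that best matches the target:

$$(h_1, \dots, h_n) \;=\; \arg\min_{f_1, \dots, f_n \in \mathcal{F}}\; \frac{1}{N}\sum_{i=1}^{N} \ell\big((f_n \circ \cdots \circ f_1)(x_i),\, g(x_i)\big).$$

A feed-forward network fits this picture naturally: each layer (or block of layers) plays the role of one function in the sequence, so choosing an architecture is choosing the sequence.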
Within this framework, the paper establishes some theoretical bounds. As is typical in theoretical computer science, those bounds depend on the notion of tractability. A tractable computation is one that can be performed relatively efficiently; an intractable computation is one that’s so complex that, for all but the simplest cases, it couldn’t be completed by all the computers in the world in the lifetime of the universe.
Here are the paper’s main conclusions:
- No method (i.e., computer program) is one-size-fits-all, able to approximate every possible target function to zero error;
- Given a target function and a set of candidate functions that includes all the functions necessary to establish Turing equivalence, there is a procedure for determining the exact sequence of candidate functions with the minimum approximation error for a given sequence length (a brute-force version is sketched after this list);
- Unfortunately, that procedure is likely to be intractable;
- Genetic algorithms offer a tractable procedure that can find a sequence as good as, or nearly as good as, the optimal one.
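To make the second and third conclusions concrete, here is a deliberately naive sketch. The function library, the target, and the error measure are all invented for illustration, and exhaustive enumeration stands in for the paper's more general procedure. It checks every sequence of a given length over a small library and keeps the one with the lowest error; since there are $|\mathcal{F}|^n$ sequences, the cost blows up quickly, which is the sense in which an exact search becomes intractable.

```python
import itertools
import math

# A toy library of candidate functions (invented for illustration).
LIBRARY = {
    "double": lambda x: 2.0 * x,
    "inc":    lambda x: x + 1.0,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}

def compose(seq):
    """Return the function obtained by applying the named functions left to right."""
    def composed(x):
        for name in seq:
            x = LIBRARY[name](x)
        return x
    return composed

def approximation_error(f, target, samples):
    """Mean squared error between f and the target on the sample points."""
    return sum((f(x) - target(x)) ** 2 for x in samples) / len(samples)

def best_sequence(target, samples, length):
    """Exhaustively search all |LIBRARY|**length sequences: exact but exponential."""
    best, best_err = None, math.inf
    for seq in itertools.product(LIBRARY, repeat=length):
        err = approximation_error(compose(seq), target, samples)
        if err < best_err:
            best, best_err = seq, err
    return best, best_err

if __name__ == "__main__":
    target = lambda x: (2.0 * x + 1.0) ** 2      # the function we want to approximate
    samples = [i / 10.0 for i in range(-20, 21)]
    seq, err = best_sequence(target, samples, length=3)
    print(seq, err)   # expect ('double', 'inc', 'square') with error 0.0
```

With four candidate functions and a sequence of length three there are only 64 combinations to try; with a realistic library of network components and architectures dozens of layers deep, the same search is hopeless, which is where genetic algorithms come in.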
With respect to neural networks, the second conclusion may be the most important. Neural-architecture search, which uses automated procedures to design neural architectures for particular tasks, is a burgeoning area of research. It usually works through trial-and-error combinations of basic network components. These components are things like 3×3 convolutional layers or max-pooling layers, which define how the outputs of one layer of a neural network are processed before being fed to the next layer.
My results suggest that the components should be selected so that they provide all the functionality necessary to guarantee Turing equivalence. Otherwise, the search may be suboptimal. Devising an adequate component set shouldn’t be difficult: the first proof that neural networks are Turing equivalent appeared almost 30 years ago, and it proceeded by identifying network components that could execute the primitive operations from which Turing machines are constructed.
Maximizing fitness
The paper’s other immediately applicable result is the identification of genetic algorithms — and, more specifically, coevolutionary algorithms — as the most practical way to find an optimal (or nearly optimal) architecture.
A genetic algorithm begins by generating a population of candidate solutions to some problem. The best-performing candidates are recombined with one another and tested again, much the way animals' genetic information is recombined and retested with every new generation. Coevolutionary algorithms are genetic algorithms in which a candidate's performance is measured by its interactions with other candidates, rather than by its separate execution of the same task.
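As a rough illustration of the mechanics, here is a minimal sketch of a plain genetic algorithm (not the coevolutionary variant) evolving sequences over the same invented function library as above; the target, population size, and mutation rate are all arbitrary choices for the example, not values from the paper.

```python
import random

# Toy library of candidate functions (invented for illustration).
LIBRARY = {
    "double": lambda x: 2.0 * x,
    "inc":    lambda x: x + 1.0,
    "square": lambda x: x * x,
    "neg":    lambda x: -x,
}

TARGET = lambda x: (2.0 * x + 1.0) ** 2
SAMPLES = [i / 10.0 for i in range(-20, 21)]

def fitness(seq):
    """Negative mean squared error of the composed sequence against the target."""
    total = 0.0
    for x in SAMPLES:
        y = x
        for name in seq:
            y = LIBRARY[name](y)
        total += (y - TARGET(x)) ** 2
    return -total / len(SAMPLES)

def crossover(a, b):
    """Single-point crossover: splice a prefix of one parent onto a suffix of the other."""
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(seq, rate=0.2):
    """Replace each position with a random library function with probability `rate`."""
    return [random.choice(list(LIBRARY)) if random.random() < rate else name
            for name in seq]

def evolve(length=3, pop_size=30, generations=50):
    population = [[random.choice(list(LIBRARY)) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # keep the fittest half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))   # often recovers ['double', 'inc', 'square']
```

Instead of scoring every possible sequence, the algorithm only ever evaluates a small, evolving population, which is what keeps the search tractable at the cost of a guarantee of exact optimality.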
Based on experience, many researchers have come to the conclusion that coevolutionary algorithms provide the best way to build machine learning systems. But the function-approximation framework from my paper helps provide a more secure theoretical foundation for their intuition.