
How to construct the optimal neural architecture for your machine learning task


The first step in training a neural network to solve a problem is usually the selection of an architecture: a specification of the number of computational nodes in the network and the connections between them. Architectural decisions are generally based on historical precedent, intuition, and plenty of trial and error.

In a theoretical paper I presented last week at the 28th International Conference on Artificial Neural Networks in Munich, I show that arbitrarily selecting a neural architecture is unlikely to yield the best solution to a given machine learning problem, no matter which architecture is chosen, which learning algorithm is used, or how training parameters such as batch size or learning rate are tuned.

Rather, my paper suggests, we should use computational methods to generate neural architectures tailored to specific problems. Only by considering a vast space of possibilities can we identify an architecture that comes with theoretical guarantees on the accuracy of its computations.

In fact, the paper is more general than that. Its results don’t just apply to neural networks. They apply to any computational model, provided that it’s Turing equivalent, meaning that it can compute any function that the standard computational model — the Turing machine — can.

To be more specific, we must introduce the function approximation problem, a common mathematical formulation of what machine learning actually does: given a parameterized function (i.e., your model) and a set of samples, you search through the function's parameters so that its outputs approximate those of a target function (i.e., the distribution of your data).
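As a concrete, deliberately simplified illustration of that setup, the sketch below fits the parameters of a small polynomial model to samples of a target function. The choice of a polynomial model, a sine target, and least-squares fitting is mine, purely for illustration, and is not taken from the paper.

```python
import numpy as np

# Target function: unknown to the learner; we only ever see samples of it.
def target(x):
    return np.sin(x)

# Samples (x, y) drawn from the target.
xs = np.linspace(-3, 3, 50)
ys = target(xs)

# Model: a degree-5 polynomial whose coefficients are the parameters we search over.
# np.polyfit finds the coefficients that minimize the squared approximation error.
coeffs = np.polyfit(xs, ys, deg=5)
model = np.poly1d(coeffs)

# Approximation error of the fitted model on the samples.
error = np.mean((model(xs) - ys) ** 2)
print(f"mean squared error: {error:.6f}")
```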

A new theoretical analysis can help guide the process of neural-architecture search, or piecing a neural network together from a set of component parts (convolutional layers, concatenations, max pooling, and the like).

Sequential thinking

My paper rethinks the function approximation problem, with the aim of accounting for modern developments in machine learning, such as deep learning or — you guessed it — the automated design of neural architectures. I reformulate function approximation as the problem of finding a sequence of known functions that approximates the outputs of a target function. This has the advantage of allowing us to better model neural networks, by characterizing the architecture (a sequence of functions) according to its ability to approximate the target function.
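To make that reformulation concrete, here is a toy sketch of my own (not the paper's formalism): an "architecture" is an ordered sequence of functions drawn from a small library of known components, and its quality is the approximation error of the composed sequence on samples of the target.

```python
import numpy as np

# A small library of known component functions.
LIBRARY = {
    "double": lambda x: 2.0 * x,
    "square": lambda x: x ** 2,
    "shift":  lambda x: x + 1.0,
    "relu":   lambda x: np.maximum(x, 0.0),
}

def compose(sequence, x):
    """Apply a sequence of library functions left to right."""
    for name in sequence:
        x = LIBRARY[name](x)
    return x

def approximation_error(sequence, xs, ys):
    """Mean squared error of the composed sequence against target samples."""
    return float(np.mean((compose(sequence, xs) - ys) ** 2))

# Target we would like to approximate: f(x) = (2x + 1)^2.
xs = np.linspace(-2, 2, 100)
ys = (2 * xs + 1) ** 2

print(approximation_error(["double", "shift", "square"], xs, ys))  # exact match: 0.0
print(approximation_error(["square", "shift", "double"], xs, ys))  # same parts, worse ordering
```

The first ordering composes to (2x + 1)^2 and matches the target exactly, while reordering the same components gives a strictly worse approximation: in this framing, the quality of an architecture is a property of the sequence, not just of the component set.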

Within this framework, the paper establishes some theoretical bounds. As is typical in theoretical computer science, those bounds depend on the notion of tractability. A tractable computation is one that can be performed relatively efficiently; an intractable computation is one that’s so complex that, for all but the simplest cases, it couldn’t be completed by all the computers in the world in the lifetime of the universe.

Here are the paper’s main conclusions:

  1. No method (i.e., computer program) is one-size-fits-all, able to approximate every possible target function to zero error;
  2. Given a target function and a set of candidate functions that includes all the functions necessary to establish Turing equivalence, there is a procedure for determining the exact sequence of candidate functions with the minimum approximation error for a given sequence length;
  3. Unfortunately, that procedure is likely to be intractable;
  4. Genetic algorithms offer a tractable procedure that can find a sequence that is as good as, or nearly as good as, the optimal one.

With respect to neural networks, the second conclusion may be the most important. Neural-architecture search, which uses automated procedures to design neural architectures for particular tasks, is a burgeoning area of research. It usually works through trial-and-error combinations of basic network components. These components are things like 3×3 convolutional layers or max-pooling layers, which define how the outputs of one layer of a neural network are processed before being fed to the next layer.
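The flavor of that exact-but-expensive procedure can be seen in a toy brute-force search, again my own illustration, with made-up scalar "components" standing in for real layers: enumerating every sequence of a fixed length over the component library is guaranteed to find the minimum-error sequence, but the number of candidates grows exponentially with sequence length.

```python
import itertools
import numpy as np

# Toy component library standing in for NAS building blocks
# (real searches use layers such as 3x3 convolutions or max pooling).
LIBRARY = {
    "double": lambda x: 2.0 * x,
    "square": lambda x: x ** 2,
    "shift":  lambda x: x + 1.0,
    "negate": lambda x: -x,
}

def error(sequence, xs, ys):
    out = xs
    for name in sequence:
        out = LIBRARY[name](out)
    return float(np.mean((out - ys) ** 2))

xs = np.linspace(-2, 2, 100)
ys = (2 * xs + 1) ** 2          # target to approximate

# Exhaustive search over all sequences of a fixed length: guaranteed to find
# the best one, but the number of candidates grows as |LIBRARY| ** length,
# which is what makes the exact procedure intractable at realistic scales.
length = 3
candidates = itertools.product(LIBRARY, repeat=length)
best = min(candidates, key=lambda seq: error(seq, xs, ys))
print(best, error(best, xs, ys))
```

With four components and length-3 sequences there are only 64 candidates, but a realistic component library and network depth push the count far beyond what any computer could enumerate.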

My results suggest that the components should be selected so that they provide all the functionality necessary to guarantee Turing equivalence. Otherwise, the search may be suboptimal. Devising an adequate component set shouldn’t be difficult: the first proof that neural networks are Turing equivalent appeared almost 30 years ago, and it proceeded by identifying network components that could execute the primitive operations from which Turing machines are constructed.

Maximizing fitness

The paper’s other immediately applicable result is the identification of genetic algorithms — and, more specifically, coevolutionary algorithms — as the most practical way to find an optimal (or nearly optimal) architecture.

A genetic algorithm begins by generating candidate solutions to some problem. The best-performing candidates are combined with each other and tested again, much the way animals' genetic information is recombined and retested with every new generation. Coevolutionary algorithms are genetic algorithms whose performance metric depends on the candidates' interactions with each other, rather than on their separate execution of the same task.
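For illustration, here is a minimal sketch of a plain (non-coevolutionary) genetic algorithm searching over sequences of components; the fitness function, crossover, and mutation choices are my own simplifications, not the specific algorithms analyzed in the paper.

```python
import random
import numpy as np

# Toy component library; a candidate solution is a sequence of component names.
LIBRARY = {
    "double": lambda x: 2.0 * x,
    "square": lambda x: x ** 2,
    "shift":  lambda x: x + 1.0,
    "negate": lambda x: -x,
}
NAMES = list(LIBRARY)

xs = np.linspace(-2, 2, 100)
ys = (2 * xs + 1) ** 2                       # target to approximate

def fitness(seq):
    out = xs
    for name in seq:
        out = LIBRARY[name](out)
    return -float(np.mean((out - ys) ** 2))  # higher is better

def crossover(a, b):
    cut = random.randrange(1, len(a))        # splice two parent sequences
    return a[:cut] + b[cut:]

def mutate(seq, rate=0.2):
    return [random.choice(NAMES) if random.random() < rate else g for g in seq]

# Standard generational loop: evaluate, keep the fittest, recombine, mutate.
population = [[random.choice(NAMES) for _ in range(3)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(best, -fitness(best))
```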

Based on experience, many researchers have come to the conclusion that coevolutionary algorithms provide the best way to build machine learning systems. But the function-approximation framework from my paper helps provide a more secure theoretical foundation for their intuition.


