The Inception of Neural Networks and Finite State Machines
In the first post of this series, get a brief look at research about neural networks, finite state machines, models of the human brain, and abstract machines.
What is the relationship between the behavior of artificial neural networks and the models used in formal language theory and the theory of computation?
Two main research fields bear on this question: neural networks and finite state machines, and models of the human brain and abstract machines.
The aim of this article is to explain what these two fields have in common and how they are related. We will also review existing papers exploring this topic.
Neural networks have surged in popularity in recent years, while automata theory and finite state machines have been used intensively for decades. Using finite state machines for modeling in software development is not a new concept; automata-based techniques have long served as a general-purpose program development methodology.
Automata theory is a branch of computer science that established its roots during the 20th century. In fact, the first people to consider the concept of a finite state machine included biologists, psychologists, mathematicians, engineers, and some of the first computer scientists. They all shared a common interest: to model the human thought process, whether in the brain or in a computer.
The neurophysiologist Warren McCulloch and the logician Walter Pitts were the first to present a description of finite automata, in 1943. Their paper, "A Logical Calculus of the Ideas Immanent in Nervous Activity," is commonly regarded as the inception of two fields of research. One is the theory of finite state machines as a model of computation. The other is the field of artificial neural networks. The fields of neural networks and finite-state computation thus started simultaneously: McCulloch and Pitts mathematically formulated the behavior of ensembles of neurons (after a number of simplifying assumptions) and, in doing so, defined what we now know as a finite-state machine (FSM).
In 1967, Minsky published his book "Computation: Finite and Infinite Machines." In Minsky's own words:
Every finite-state machine is equivalent to, and can be simulated by, some neural net.
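Minsky's claim is easy to demonstrate on a small case. The sketch below (an illustration assuming modern notation, not code from Minsky's book) simulates a two-state parity automaton with McCulloch-Pitts threshold neurons: each "neuron" fires when its weighted input reaches a threshold, and the current state is fed back as an input at every time step. The specific weights and thresholds are illustrative choices.

```python
def fires(weighted_input, threshold):
    """McCulloch-Pitts neuron: outputs 1 iff its input reaches the threshold."""
    return 1 if weighted_input >= threshold else 0

def parity_fsm(bits):
    """Reference automaton: state is 1 iff an odd number of 1-bits seen so far."""
    state = 0
    for b in bits:
        state = state ^ b  # XOR transition
    return state

def parity_net(bits):
    """The same automaton realized as a tiny network of threshold neurons.
    XOR is not computable by a single threshold unit, so two hidden
    neurons are needed before the state neuron."""
    state = 0
    for b in bits:
        h_and = fires(state + b, 2)       # fires iff state AND input
        h_or  = fires(state + b, 1)       # fires iff state OR input
        state = fires(h_or - h_and, 1)    # OR and not AND = XOR -> next state
    return state

# The network and the automaton agree on every input string tested.
for bits in ([1, 0, 1], [1, 1, 0, 1], [0, 0], [1, 1, 1]):
    assert parity_net(bits) == parity_fsm(bits)
```

The feedback of the state neuron's output into the next time step is exactly what lets a feedforward threshold circuit act as a state machine, which is the heart of the correspondence Minsky describes.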
At the beginning of this century, the relation between neural nets and automata theory was further investigated and discussed. In 2002, Mikel Forcada published his draft paper "Neural Networks: Automata and Formal Models of Computation," an attempt to collect and analyze the work on this topic since the publication of "A Logical Calculus of the Ideas Immanent in Nervous Activity." The basic questions stated at the beginning of his paper are:
Can a neural network of architecture class A perform the same computation as an automaton of class M?
Can a neural network of architecture class A be a recognizer for languages of language class L?
Can a neural network of architecture class A be trained to perform the same computation as an automaton of class M from a set of examples?
Can a neural network of architecture class A be trained to recognize a language of class L from a set of examples?
Furthermore, Forcada discusses neural state machines and provides a mathematical definition for them. This and related topics, such as Turing-equivalent neural networks, will be investigated further in subsequent articles.