The distributed structure of artificial neural networks makes them natural models of parallel computation. Their very fine-grained parallelism involves numerous information exchanges, so that hardware implementations are particularly well suited to neural computations. However, the number of operators and the complex connection graphs of most usual neural models cannot be directly handled by digital hardware devices. A theoretical and practical framework has therefore been defined to reconcile simple hardware topologies with complex neural architectures. This framework has been designed mainly to meet the demands of configurable digital hardware. Field programmable neural arrays (FPNAs) are based on an original paradigm of neural computation, so that they compute complex neural functions despite their simplified architectures. This paper focuses on the parallel aspects of the FPNA computation paradigm, from its definition to its practical implementation on FPGAs. FPNAs show that a connectionist paradigm can serve as a genuinely practical model of parallel computing.
Special Issue Papers