NeuralNetTools: Visualization and Analysis Tools for Neural Networks

Marcus W. Beck

Abstract

Supervised neural networks have been applied as a machine learning technique to identify and predict emergent patterns among multiple variables. A common criticism of these methods is the inability to characterize relationships among variables from a fitted model. Although several techniques have been proposed to "illuminate the black box", they have not been made available in an open-source programming environment. This article describes the NeuralNetTools package that can be used for the interpretation of supervised neural network models created in R. Functions in the package can be used to visualize a model using a neural network interpretation diagram, evaluate variable importance by disaggregating the model weights, and perform a sensitivity analysis of the response variables to changes in the input variables. Methods are provided for objects from many of the common neural network packages in R, including caret, neuralnet, nnet, and RSNNS. The article provides a brief overview of the theoretical foundation of neural networks, a description of the package structure and functions, and an applied example to provide a context for model development with NeuralNetTools. Overall, the package provides a toolset for neural networks that complements existing quantitative techniques for data-intensive exploration.
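The following minimal sketch (not taken from the article itself) illustrates the workflow the abstract describes, assuming a model fit with nnet and the neuraldat example data set that ships with NeuralNetTools:

    # fit a single-hidden-layer network to the package's example data
    library(nnet)
    library(NeuralNetTools)
    mod <- nnet(Y1 ~ X1 + X2 + X3, data = neuraldat, size = 5,
                linout = TRUE, trace = FALSE)

    plotnet(mod)      # neural network interpretation diagram
    garson(mod)       # variable importance by disaggregating the weights
    olden(mod)        # signed connection-weights importance
    lekprofile(mod)   # sensitivity of the response to the input variables

Equivalent methods are provided for model objects from caret, neuralnet, and RSNNS, so the same calls apply to networks fit with those packages.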
