Neural Network

A computational model inspired by the human brain, composed of layers of interconnected nodes (neurons) that learn patterns from data by adjusting connection weights during training.

Neural networks are the foundational building block of modern AI. They consist of an input layer, one or more hidden layers, and an output layer. Each connection between neurons carries a weight that is adjusted during training to minimize prediction error. The universal approximation theorem shows that even a single hidden layer, given enough neurons, can approximate any continuous function on a bounded domain; in practice, though, deeper networks learn hierarchical representations far more efficiently.
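The layered structure above can be sketched in a few lines of NumPy. This is an illustrative toy, not any library's API: the `forward` function, the layer sizes, and the choice of ReLU activation are all assumptions made for the example.

```python
import numpy as np

def relu(x):
    # Nonlinearity applied at each hidden layer
    return np.maximum(0, x)

def forward(x, weights, biases):
    """One forward pass through a fully connected network.

    Each layer computes a weighted sum of its inputs plus a bias;
    hidden layers then apply ReLU, and the output layer is left linear.
    """
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = relu(a @ W + b)                   # hidden layer
    return a @ weights[-1] + biases[-1]       # output layer (linear)

# Tiny example: 2 inputs -> 3 hidden units -> 1 output
rng = np.random.default_rng(0)
weights = [rng.normal(size=(2, 3)), rng.normal(size=(3, 1))]
biases = [np.zeros(3), np.zeros(1)]
y = forward(np.array([1.0, -0.5]), weights, biases)
```

The weights start random here; everything the network "knows" comes from adjusting them during training, as described next.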

Training a neural network involves feeding it labeled examples, computing the error between predictions and ground truth with a loss function, and propagating that error backward through the network (backpropagation) to update the weights via gradient descent. This cycle repeats over many passes through the training data (epochs) until the network converges on a useful set of weights.

In practice, neural networks power everything from image recognition and speech synthesis to recommendation systems and language models. The architecture you choose depends on your data: convolutional neural networks for images, recurrent networks for sequences, and transformers for language. For growth engineers, neural networks underpin the ML models used for churn prediction, user segmentation, personalization, and content recommendation.
