Neuro‑AI is a bidirectional enterprise: metaphors from machine learning help sharpen our hypotheses about brain function, while biological constraints suggest how to build more efficient artificial systems. Here, we frame cortical computation through the lens of graph neural networks (GNNs): local, energy‑constrained message passing that preserves relational structure under wiring and metabolic budgets. This framing serves both as a neuroscientific hypothesis about why topographies, receptive fields, and distance‑dependent connectivity look the way they do, and as a compact design principle for AI models that generalize better, from less data and at lower energy cost.
The Graph Neural Network Hypothesis
GNNs are distinguished from classical artificial neural networks (ANNs) by their explicit encoding of relational data through structured message passing across interconnected nodes (Battaglia et al., 2018). Crucially, message passing occurs locally, guided by adjacency relations defined by edges. This structural inductive bias naturally aligns with biological principles governing neuronal connectivity and coding. Neurons in cortical and subcortical structures form densely interconnected graphs. Connectivity decays predictably with spatial distance, reflecting the principle of wiring minimization (Bullmore & Sporns, 2012). Such distance-dependent connectivity ensures that neurons primarily interact with spatially and functionally related neighbors, thus implicitly creating a relational message-passing system reminiscent of modern GNNs.
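The core mechanism can be made concrete with a minimal sketch. The toy graph below (four nodes on a line) and the equal mixing weights are hypothetical choices for illustration; the point is only that information propagates strictly along edges, one hop per round of message passing.

```python
import numpy as np

# Hypothetical toy graph: 4 nodes on a line (0-1-2-3), undirected.
# The adjacency matrix encodes "who may exchange messages" -- the
# relational inductive bias described in the text.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

X = np.array([[1.0], [0.0], [0.0], [0.0]])  # a signal on node 0 only

def message_passing_step(A, X):
    """One round of mean-aggregation message passing.

    Each node averages its neighbors' features and mixes the result
    with its own state -- a minimal GNN layer without learned weights.
    """
    deg = A.sum(axis=1, keepdims=True)
    neighbor_mean = (A @ X) / np.maximum(deg, 1)
    return 0.5 * X + 0.5 * neighbor_mean

H = X
for _ in range(3):
    H = message_passing_step(A, H)
# After 3 rounds the signal has reached exactly the nodes within
# 3 hops of node 0: propagation is confined to the graph's edges.
```

Because node 3 is three hops from node 0, its state becomes nonzero only on the third round; locality is enforced by the adjacency structure itself, not by any explicit coordinate system.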
Cortical Homunculi: Relational Structure in Sensory Coding
A striking illustration of the brain's relational computation emerges from sensory topographies like the cortical homunculus. Penfield and Boldrey (1937) first documented the distorted somatotopic organization of the human somatosensory cortex, where disproportionately large cortical areas represent body parts with higher receptor densities. Rather than a simple reflection of physical size, this cortical layout emphasizes relational proximity: adjacent cortical columns correspond to adjacent body regions, preserving sensory neighborhoods. The cortical homunculus thus exemplifies a graph embedding, where local message passing among neurons preserves sensory adjacency relationships rather than absolute spatial coordinates (Kaas, 1997). Just as GNNs embed input graphs into latent spaces that preserve relational proximity, cortical maps embed body surfaces onto cortical sheets, maintaining local relational integrity.
Receptive Fields and Graph-Based Neural Coding
Similarly, receptive fields in visual and auditory systems exhibit relational coding characteristics. Visual cortical areas (V1, V2, etc.) are organized retinotopically, preserving retinal adjacency relations. Auditory cortical tonotopic maps represent adjacent sound frequencies in neighboring cortical regions (Merzenich & Brugge, 1973). These topographic representations directly parallel GNN embedding procedures, encoding input relationships through adjacency-constrained message passing. Hubel and Wiesel (1962) demonstrated how neurons in primary visual cortex integrate inputs from spatially adjacent receptive fields, akin to local message aggregation in GNN layers. This spatially restricted integration ensures neurons capture relational information intrinsic to visual stimuli, essential for recognizing features like edges, orientations, and ultimately complex objects.
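The analogy between spatially restricted receptive fields and local aggregation can be sketched in a few lines. The one-dimensional "retina" and the three-input difference filter below are hypothetical stand-ins for a simple-cell-like receptive field; each output unit aggregates only three adjacent inputs.

```python
import numpy as np

# A 1-D "retina": luminance values with a step edge in the middle.
stimulus = np.array([0., 0., 0., 0., 1., 1., 1., 1.])

# Hypothetical local receptive field: a difference filter spanning
# 3 adjacent inputs, i.e. adjacency-constrained aggregation.
rf = np.array([-1., 0., 1.])

# np.convolve flips its kernel, so reverse rf to compute correlation.
responses = np.convolve(stimulus, rf[::-1], mode='valid')
# Responses are nonzero only where luminance changes: purely local
# aggregation suffices to detect the edge.
```

The same spatially restricted integration, stacked over layers, is what lets both V1-like circuits and convolutional or graph networks build oriented edges and, eventually, complex objects from local relations.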
Distance-Dependent Connectivity: A Fundamental Principle
Neuronal wiring adheres to a robust principle: connectivity probability diminishes exponentially with anatomical distance (Markov et al., 2013). This principle not only reduces metabolic and wiring costs but also naturally implements the relational inductive biases central to GNN computations. Short-range recurrent excitatory and inhibitory connections allow local information integration, reinforcing the relational structure within sensory maps (Douglas & Martin, 2004). Such spatially constrained connectivity mirrors the neighborhood aggregation performed by GNNs, with neurons acting as nodes exchanging signals primarily with topologically adjacent peers. The propagation of neural signals thus implicitly encodes relational information, allowing the brain to efficiently extract structured patterns and generalize across sensory modalities.
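A small simulation illustrates how an exponential distance rule yields short, cheap wiring. The sheet size, neuron count, and length constant below are arbitrary hypothetical parameters, not estimates from the anatomical literature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Place 50 "neurons" at random positions on a unit 2-D sheet.
n = 50
pos = rng.random((n, 2))

# Connection probability decays exponentially with Euclidean distance,
# p(d) = exp(-d / lam); lam is a hypothetical length constant.
lam = 0.15
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
p = np.exp(-d / lam)
A = (rng.random((n, n)) < p).astype(float)
np.fill_diagonal(A, 0)

# Short-range edges dominate: the mean length of realized edges is
# well below the mean pairwise distance on the sheet, so total wiring
# cost is implicitly minimized.
edge_len = d[A > 0].mean()
mean_dist = d[np.triu_indices(n, 1)].mean()
```

The resulting graph concentrates edges among near neighbors, which is exactly the regime in which neighborhood aggregation captures most of the available structure.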
Plasticity and Dynamics in Cortical Graphs
While classical GNNs typically operate on static adjacency matrices, cortical relational structures are inherently dynamic due to experience-dependent plasticity. The size and connectivity strength of cortical maps, such as the homunculus, are modulated by sensory experience, notably demonstrated by experiments involving sensory deprivation (Wiesel & Hubel, 1963). Incorporating plasticity mechanisms into GNN frameworks, where adjacency matrices dynamically evolve based on sensory inputs and experience, would thus increase their biological plausibility, providing a richer conceptual framework for cortical function.
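One way to picture a dynamic adjacency structure is a Hebbian update in which coactivity strengthens edges and row normalization enforces a fixed connectivity budget. This is a deliberately crude sketch, not a model of cortical plasticity: the learning rate, the normalization, and the "deprivation" protocol are all hypothetical.

```python
import numpy as np

# Hypothetical sketch: edge weights strengthened by coactivity (a
# Hebbian rule) and renormalized, so connectivity concentrates on
# frequently stimulated inputs -- a crude stand-in for
# experience-dependent remapping of cortical territory.
n = 6
W = np.full((n, n), 1.0 / n)  # uniform initial connectivity

def hebbian_step(W, activity, lr=0.1):
    """Strengthen edges between coactive nodes, then renormalize rows
    so each node's total outgoing weight stays fixed."""
    W = W + lr * np.outer(activity, activity)
    np.fill_diagonal(W, 0.0)
    return W / W.sum(axis=1, keepdims=True)

# "Sensory deprivation": nodes 0-2 are coactive, nodes 3-5 silent.
for _ in range(20):
    activity = np.array([1., 1., 1., 0., 0., 0.])
    W = hebbian_step(W, activity)

# Edges within the stimulated group now outweigh edges to deprived
# nodes, echoing the expansion of active representations.
```

Under this rule the adjacency matrix itself becomes a trained object, which is the kind of extension the text argues would bring GNNs closer to cortical function.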
Relational priors as an energy budget
If the cortex optimizes for relational preservation, wiring cost acts like an implicit prior. Local, sparse message passing with short‑range recurrence encodes neighborhood structure while minimizing metabolic load. The AI implication is that GNNs with locality constraints and learnable sparsity (e.g., top‑k edges or routing‑by‑need) should improve sample efficiency and energy efficiency on manifold‑structured data such as vision, audio, and sensor graphs.
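The top-k mechanism mentioned above can be sketched directly. In a real model the edge scores would be learned; here they are random placeholders, and the function only demonstrates the sparsification step that caps per-node communication cost.

```python
import numpy as np

def topk_sparsify(W, k):
    """Keep only each node's k strongest outgoing edges.

    A sketch of learnable sparsity: in practice W would hold learned
    edge scores, and masking limits message-passing (and hence
    "metabolic") cost to k edges per node.
    """
    keep = np.argsort(W, axis=1)[:, -k:]   # indices of top-k per row
    mask = np.zeros_like(W)
    np.put_along_axis(mask, keep, 1.0, axis=1)
    return W * mask

rng = np.random.default_rng(2)
W = rng.random((5, 5))
np.fill_diagonal(W, 0)
S = topk_sparsify(W, k=2)
# Each node now sends messages along at most 2 edges.
```

Coupled with the distance prior above, such a budgeted edge set biases the model toward the same local, manifold-respecting computation the cortex appears to favor.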
Development, plasticity, and the stabilization of maps
Topographic maps (retino‑, tono‑, and somatotopy) self‑organize under activity‑dependent rules and then stabilize via pruning and myelination. A GNN view predicts critical‑period dynamics: early, high‑plasticity phases that shape graph topology (which edges exist) followed by lower‑plasticity phases that tune weights (how strongly messages pass). For AI, staged training that first learns adjacency (who communicates) and then learns messages (what is communicated) may reduce overfitting and improve transfer.
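The two-stage idea, topology first, messages second, can be illustrated with a minimal sketch. Everything here is hypothetical: the data are synthetic, the "critical period" is a one-shot correlation threshold, and the "tuning" stage is reduced to assigning weights on the frozen edge set rather than gradient training.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic activity for 4 nodes; nodes 0 and 1 are driven together.
X = rng.normal(size=(200, 4))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)

# Stage 1 ("critical period"): decide WHO communicates by
# thresholding activity correlations, then freeze the topology.
C = np.corrcoef(X, rowvar=False)
A = (np.abs(C) > 0.5).astype(float)
np.fill_diagonal(A, 0)

# Stage 2 (lower plasticity): tune WHAT is communicated, but only on
# edges that exist; the topology itself no longer changes.
W = A * C
```

Restricting later learning to an established edge set is the GNN analogue of pruning and myelination stabilizing a map while synaptic weights continue to adapt.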
Cross‑modal alignment as graph matching
Perception and action rely on aligning partially independent graphs (e.g., visual and somatosensory) into shared relational coordinates. Contrastive or correspondence‑based objectives that match neighborhoods across modalities (rather than raw features) mirror cortico‑cortical integration. In AI, cross‑modal GNN alignment can enhance generalization and robustness to sensor drift.
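Matching neighborhoods rather than raw features can be demonstrated with a seeded graph-matching sketch. The two "modalities" below share a ring topology but no features, and the relabeling, the seed pairs, and the greedy scoring rule are all hypothetical illustration choices.

```python
import numpy as np

# Two modalities represent the same five regions as a ring graph but
# index the nodes differently and share no raw features.
A_vis = np.zeros((5, 5))
for i in range(5):                      # 5-cycle: 0-1-2-3-4-0
    A_vis[i, (i + 1) % 5] = A_vis[(i + 1) % 5, i] = 1

m = [2, 0, 3, 1, 4]                     # true (unknown) correspondence
A_som = np.zeros((5, 5))
for i in range(5):
    for j in range(5):
        A_som[m[i], m[j]] = A_vis[i, j]

def seeded_match(A1, A2, seeds):
    """Greedily extend a partial correspondence: at each step, pair
    the unmatched nodes sharing the most already-matched neighbors."""
    match = dict(seeds)
    n = A1.shape[0]
    while len(match) < n:
        best = None
        for i in range(n):
            if i in match:
                continue
            for j in range(n):
                if j in match.values():
                    continue
                score = sum(A1[i, a] * A2[j, b]
                            for a, b in match.items())
                if best is None or score > best[0]:
                    best = (score, i, j)
        match[best[1]] = best[2]
    return match

aligned = seeded_match(A_vis, A_som, seeds={0: m[0], 1: m[1]})
# From two adjacent seeds, neighborhood structure alone recovers the
# full cross-modal correspondence.
```

Because the alignment uses only adjacency, it survives arbitrary changes to each modality's raw feature space, the property the text identifies as robustness to sensor drift.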
Neurological disorders as graph errors
From this perspective, we can speculate that aspects of the symptomatology of several neurological disorders are, metaphorically, failures of graph computation. Examples could include amblyopia (early mis‑establishment of retinotopic edges leading to persistent under‑weighting of one input), task‑specific focal dystonia (over‑merging of adjacent somatotopic nodes that blurs fine control), tinnitus (runaway recurrent amplification within tonotopic subgraphs after deafferentation), hemispatial neglect (hub failure in spatial attention networks that breaks long‑range integration), developmental dyslexia (instability in phoneme–grapheme mapping graphs degrading relational consistency), autism spectrum conditions (strong local clustering with reduced long‑range edges limiting global integration), schizophrenia (noisy or mis‑signed messages between associative hubs yielding spurious relational inferences), and temporal lobe epilepsy (hyperexcitable hubs that dominate message flow and trap activity in pathological attractors).
Conclusion
If the brain computes by preserving relational structure through localized, energy‑constrained message passing, then GNN‑like principles offer a hypothesis for a fundamental principle in neural function and a practical blueprint for AI that is more data‑efficient, robust, and frugal. In that spirit, we anticipate therapies that increasingly target connectivity motifs (who talks to whom) rather than only cell types; hardware and software stacks that enforce locality and sparsity to achieve large gains in joules per inference; and two‑stage curricula that learn topology first and messages second to yield stable, transferable representations. Getting the relations right may prove to be the shortest path to both understanding biological intelligence and building better machines.
References
Battaglia, P. W., et al. (2018). Relational inductive biases, deep learning, and graph networks. arXiv.
Bullmore, E., & Sporns, O. (2012). The economy of brain network organization. Nature Reviews Neuroscience.
Douglas, R. J., & Martin, K. A. C. (2004). Neuronal circuits of the neocortex. Annual Review of Neuroscience.
Friston, K., Parr, T., & de Vries, B. (2018). The graphical brain: belief propagation and active inference. Network Neuroscience.
Hubel, D. H., & Wiesel, T. N. (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. The Journal of Physiology.
Kaas, J. H. (1997). Topographic maps are fundamental to sensory processing. Brain Research Bulletin.
Markov, N. T., et al. (2013). Cortical high-density counterstream architectures. Science.
Merzenich, M. M., & Brugge, J. F. (1973). Representation of the cochlear partition on the superior temporal plane of the macaque monkey. Brain Research.
Penfield, W., & Boldrey, E. (1937). Somatic motor and sensory representation in the cerebral cortex. Brain.
Rao, R. P. N., & Ballard, D. H. (1999). Predictive coding in the visual cortex. Nature Neuroscience.
Wiesel, T. N., & Hubel, D. H. (1963). Single-cell responses in striate cortex of kittens deprived of vision in one eye. Journal of Neurophysiology.