Are We All Just Neural Networks in Disguise?

The question asks whether human cognition is best understood as a neural network in biological clothing. I examine five layers of the debate: biological implementation, computational theory, algorithmic models, phenomenology of experience, and engineering analogs. I review evidence from deep learning, Bayesian predictive processing, and major theories of consciousness. I evaluate classic objections from symbolic cognition and show why current research favors hybrid accounts that integrate network learning with structured representations. I close with ethical and design implications for AI systems that increasingly mirror human cognitive strategies.

1. Introduction

“Neural network” names both a biological organ and a class of computational models. The temptation is to collapse the two and say the mind just is a neural net. That claim succeeds or fails at different explanatory levels. Marr’s framework separates what a system does, how it does it, and what it is made of. This paper keeps those levels distinct while asking where they align.

Humans are not “just” neural networks, but much of human cognition can be modeled as multi-level learning in networks that implement probabilistic prediction and global broadcasting. The strongest position is a hybrid one that treats network learning as the substrate for structured, symbol-like operations.

2. What “being a neural network” could mean

2.1 Implementation level

At the hardware level we are networks of neurons connected by synapses. Here the statement is trivially true: Marr called this the implementational level (Marr, 1982; McClamrock, 1991).

2.2 Algorithmic level

At the algorithmic level the claim is substantive: do human brains learn and compute in ways similar to artificial neural networks? Deep learning shows that layered networks can discover rich internal representations and solve perception and control problems at scale (LeCun, Bengio, & Hinton, 2015). That is an algorithmic success that rhymes with cortical hierarchies, though equivalence is not established.

2.3 Computational theory level

At the top level the brain can be cast as a prediction and control machine that minimizes uncertainty and error. Predictive processing and the Free Energy Principle offer unifying accounts of action and perception as inference under uncertainty. This positions neural networks as tools for approximate Bayesian inference.
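
To make “inference under uncertainty” concrete, here is a minimal sketch in Python, my own illustration rather than a model from the literature: exact Bayesian fusion of a Gaussian prior belief with a noisy Gaussian observation. The precision-weighted prediction error at its core is the basic currency of predictive-processing accounts.

    # Toy illustration (not the brain's algorithm): exact Bayesian fusion
    # of a Gaussian prior belief with a noisy Gaussian observation.
    def gaussian_update(prior_mean, prior_var, obs, obs_var):
        """Posterior after seeing `obs` with noise variance `obs_var`."""
        k = prior_var / (prior_var + obs_var)            # precision-weighted gain
        post_mean = prior_mean + k * (obs - prior_mean)  # shift by weighted error
        post_var = (1 - k) * prior_var                   # uncertainty shrinks
        return post_mean, post_var

    # A confident prior barely moves; an uncertain one defers to the data.
    print(gaussian_update(0.0, 0.1, 1.0, 1.0))   # ~ (0.09, 0.09)
    print(gaussian_update(0.0, 10.0, 1.0, 1.0))  # ~ (0.91, 0.91)

The gain k plays the role that precision plays in predictive-processing stories: reliable signals move beliefs more than unreliable ones.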

3. Evidence that brains “think” like networks

3.1 Deep learning as a cognitive mirror

Modern nets recapitulate several cognitive phenomena: they form multi-level features, show category selectivity, and approximate complex functions with gradient-based learning. Their success across vision, speech, and language suggests that network learning is a general-purpose strategy for extracting structure from data (LeCun et al., 2015).
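
A minimal sketch of that strategy, with an invented architecture and hyperparameters, is below: a tiny two-layer network in plain numpy learns XOR, a function no single linear unit can compute, by discovering useful hidden features through gradient descent.

    # Gradient-based representation learning on XOR (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> 4 hidden units
    W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> 1 output unit

    for _ in range(5000):
        h = np.tanh(X @ W1 + b1)              # hidden features
        p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # predicted P(y = 1)
        dz2 = p - y                           # cross-entropy gradient at the output
        dW2, db2 = h.T @ dz2, dz2.sum(0)
        dz1 = (dz2 @ W2.T) * (1 - h ** 2)     # backpropagate through tanh
        dW1, db1 = X.T @ dz1, dz1.sum(0)
        for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
            param -= 0.1 * grad               # plain gradient descent step

    print(p.round(2).ravel())                 # approximately [0, 1, 1, 0]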

3.2 The Bayesian brain and predictive processing

The Bayesian coding hypothesis holds that neural populations represent probability distributions (Knill & Pouget, 2004). Predictive processing refines this idea: brains send predictions down the hierarchy and learn from the error signals that come back up (Miller, 2022). Neural networks with predictive objectives implement similar principles through loss minimization and representation learning.
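
The toy loop below, a sketch in the spirit of classic predictive-coding models with invented weights, step size, and prior, shows the basic mechanics: a latent estimate generates a top-down prediction, and the bottom-up prediction error drives the update.

    # Hedged predictive-coding sketch: a latent estimate `mu` generates a
    # prediction of the input; the residual error flows back and nudges `mu`.
    import numpy as np

    W = np.array([[1.0, 0.5],
                  [0.2, 1.0]])        # generative weights: latent -> input
    x = np.array([1.0, 2.0])          # observed input
    mu = np.zeros(2)                  # latent estimate (the current "belief")

    for _ in range(200):
        pred = W @ mu                 # top-down prediction of the input
        err = x - pred                # bottom-up prediction error
        mu += 0.1 * (W.T @ err - mu)  # error-driven update plus a pull to the prior

    print(mu, W @ mu)                 # the prediction now approximates the input

The update is just gradient descent on squared prediction error plus a simple Gaussian prior, which is why predictive objectives and loss minimization end up describing the same process.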

3.3 Global broadcast and conscious access

The Global Neuronal Workspace model describes consciousness as ignition and widespread broadcasting of information across long-range networks (Mashour, Roelfsema, Changeux, & Dehaene, 2020). This dovetails with large recurrent and attention-based architectures in which certain states gain global access that coordinates specialized processors.
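
The caricature below is my own toy gating function, not the GNW model itself: specialized processors post candidate contents, a salience competition selects one, and the winner is broadcast for every processor to read.

    # Toy 'ignition': contents compete for access; the winner is broadcast.
    import numpy as np

    def broadcast(contents, salience, temperature=0.1):
        """Soft competition over candidates; one content gains global access."""
        w = np.exp(np.asarray(salience) / temperature)
        w /= w.sum()                   # sharper temperature -> winner-take-most
        i = int(np.argmax(w))
        return contents[i], w

    candidates = ["edge map", "heard name", "gut feeling"]
    winner, weights = broadcast(candidates, salience=[0.2, 0.9, 0.4])
    print(winner, weights.round(3))    # the salient content wins global access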


4. Where the analogy breaks

4.1 The symbol-structure challenge

Classic critiques argued that connectionist networks struggle with systematicity and compositional structure (Fodor & Pylyshyn, 1988). Anyone who understands “John loves Mary” also understands “Mary loves John,” generalizing across roles, and purely associative nets historically fell short of this. That gap motivated arguments for symbolic or hybrid architectures.
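
One classic network-friendly answer to this demand is role-filler binding with tensor products, in the style of Smolensky’s proposal; in the sketch below (dimension and vocabulary invented for illustration) the same fillers bound to different roles produce distinct, decodable structures, exactly what the two sentences require.

    # Role-filler binding via outer products: "John loves Mary" and
    # "Mary loves John" become different, queryable patterns.
    import numpy as np

    rng = np.random.default_rng(1)
    vec = {name: rng.normal(size=32)
           for name in ("john", "mary", "AGENT", "PATIENT")}

    def bind(role, filler):
        return np.outer(vec[role], vec[filler])

    s1 = bind("AGENT", "john") + bind("PATIENT", "mary")   # John loves Mary
    s2 = bind("AGENT", "mary") + bind("PATIENT", "john")   # Mary loves John

    def unbind(structure, role):
        return vec[role] @ structure   # approximate retrieval of the filler

    for s in (s1, s2):                 # who is the agent in each sentence?
        agent = unbind(s, "AGENT")
        print(max(("john", "mary"), key=lambda n: agent @ vec[n]))
    # -> john, then mary: role-sensitive, not merely associative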

4.2 Networks that learn structure

The landscape has shifted. Modern networks learn discrete variables, attention-based routing, and program-like behaviors. Yet the critique still matters. It suggests that humanlike cognition likely requires networks that acquire and manipulate structured, role-sensitive representations rather than raw associations alone. The best current reading is not “brains are just nets,” but “brains are networks that learn to implement structured operations.”


5. Consciousness: are networks enough?

Integrated Information Theory claims that consciousness tracks how much information a system integrates; if so, highly integrated biological networks have high Φ (Tononi, Boly, Massimini, & Koch, 2016). This frames consciousness as a property of network organization, not substrate. Critics point to measurement difficulties and panpsychist implications, but the theory keeps the network lens central. GNW, by contrast, focuses on access rather than intrinsic integration. Both treat large-scale network dynamics as essential.
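
A crude toy can convey the intuition, though it is emphatically not the Φ algorithm, which is defined over a system’s cause-effect structure under a minimum-information partition: score “integration” as the mutual information between two subsystems, which vanishes exactly when the parts fully describe the whole. The probability tables here are invented.

    # Toy integration score: how much does modeling two units jointly
    # beat modeling them independently? (A stand-in for, not equal to, Phi.)
    import numpy as np

    def mutual_info_bits(joint):
        pa = joint.sum(axis=1, keepdims=True)   # marginal of unit A
        pb = joint.sum(axis=0, keepdims=True)   # marginal of unit B
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (pa @ pb)[nz])).sum())

    coupled = np.array([[0.45, 0.05],
                        [0.05, 0.45]])          # units that track each other
    independent = np.outer([0.5, 0.5], [0.5, 0.5])

    print(mutual_info_bits(coupled))      # ~0.53 bits of integration
    print(mutual_info_bits(independent))  # 0.0: the parts tell the whole story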

Public-facing discussions by Koch and others illustrate both the appeal and the controversy of network-based theories. They highlight how different theories cash out the same intuition: conscious experience may be tied to patterns of integration and broadcast in complex networks.

6. Synthesis: a hybrid picture

Putting the pieces together yields a layered synthesis.

  1. Implementation. Biological neurons form adaptive, recurrent networks.

  2. Algorithms. Learning approximates Bayesian prediction through gradient-like credit assignment and error correction (Knill & Pouget, 2004; LeCun et al., 2015).

  3. Computation. The system solves tasks that require variable binding, abstraction, and systematic generalization. Meeting this demand likely recruits structured representations within networks (Fodor & Pylyshyn, 1988).

  4. Conscious access. Global broadcast selects some network states for coordinated control and report (Mashour et al., 2020).

On this view humans are networks that learn predictive models and also learn structures that behave like symbols when needed. This avoids a false choice between “just nets” and “just symbols.”

7. Implications for AI and society

7.1 Designing human-aligned systems

If human cognition is networked prediction with global broadcast, then scalable alignment should target loss functions, uncertainty handling, and attention-like gating. Systems should expose calibrated uncertainty and offer robust ways to decline action when predictions are brittle.
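
A minimal sketch of that design rule, with an assumed interface and an arbitrary threshold rather than any particular system’s API: act only when predictive entropy is low, otherwise abstain and defer.

    # Abstain when the predictive distribution is too uncertain to act on.
    import numpy as np

    def entropy_bits(probs):
        p = np.asarray(probs, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    def act_or_abstain(probs, max_entropy_bits=0.5):
        """Return the chosen class index, or None to defer to a human."""
        if entropy_bits(probs) > max_entropy_bits:
            return None                # prediction too brittle to act on
        return int(np.argmax(probs))

    print(act_or_abstain([0.97, 0.02, 0.01]))  # -> 0 (confident, act)
    print(act_or_abstain([0.40, 0.35, 0.25]))  # -> None (uncertain, defer)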

7.2 Interpretability

Workspace-like ignition suggests useful interfaces: track which internal states gain global access and why, map the prediction errors that drive updates, and borrow from the neuroscience tools used to study broadcast, ignition, and recurrent loops when interpreting large models (Mashour et al., 2020).
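
A hypothetical probe in this spirit, with invented names, shapes, and normalization: given one layer’s attention matrix, score how widely each position’s content is read by the rest of the network, a rough analog of gaining global access.

    # Which positions' contents are most widely read at this layer?
    import numpy as np

    def broadcast_scores(attn):
        """attn[i, j]: how much position i attends to position j.
        A column sum says how much attention position j *receives*."""
        received = attn.sum(axis=0)
        return received / received.sum()

    rng = np.random.default_rng(2)
    logits = rng.normal(size=(6, 6))
    attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    print(broadcast_scores(attn).round(3))  # a few positions dominate access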

7.3 Ethics of consciousness claims

IIT-inspired metrics tempt us to quantify machine experience, but premature attributions risk confusion. Treat network integration as a research signal, not a verdict, and keep medical and legal standards conservative until multi-theory tests converge (Tononi et al., 2016).

8. Objections and replies

Objection. Current networks are brittle and data hungry. Humans learn from few examples.
Reply. This is an algorithmic gap, not a refutation. Hybrid approaches that add structure and active inference aim to reduce data needs and improve extrapolation.

Objection. Consciousness is not prediction or information integration.
Reply. True. These are theories of access and organization; they may be necessary but not sufficient. They still make testable predictions about network dynamics that any finished theory must accommodate (Mashour et al., 2020; Tononi et al., 2016).

Objection. Symbolic capacities require an architecture unlike networks.
Reply. The last decade shows that networks can emulate aspects of structure. The constructive path is to specify which symbolic operations are required (Fodor & Pylyshyn, 1988) and build networks that learn them.

9. Conclusion

We are not “just” neural networks if “just” means a narrow feedforward approximator. We are very plausibly networks that learn predictive, structured, and globally coordinated models of the world. That picture honors Marr’s levels, absorbs insights from deep learning and Bayesian neuroscience, and keeps consciousness tied to large-scale network dynamics without overclaiming. The right question is not whether we are neural networks, but what kind of networked learning makes minds like ours possible.

References and recommended readings

  • LeCun, Y., Bengio, Y., & Hinton, G. (2015). Deep learning. Nature, 521, 436–444.

  • Knill, D. C., & Pouget, A. (2004). The Bayesian brain: The role of uncertainty in neural coding and computation. Trends in Neurosciences, 27, 712–719.

  • Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11, 127–138.

  • Mashour, G. A., Roelfsema, P., Changeux, J.-P., & Dehaene, S. (2020). Conscious processing and the global neuronal workspace hypothesis. Neuron, 105, 776–798.

  • Tononi, G., Boly, M., Massimini, M., & Koch, C. (2016). Integrated information theory: From consciousness to its physical substrate. Nature Reviews Neuroscience, 17, 450–461.

  • Marr, D. (1982). Vision. W. H. Freeman. For an overview and discussion of the three levels, see McClamrock, R. (1991). Marr’s three levels: A re-evaluation. Minds and Machines, 1, 185–196.

  • Fodor, J. A., & Pylyshyn, Z. W. (1988). Connectionism and cognitive architecture: A critical analysis. Cognition, 28, 3–71.

  • Miller, M. (Ed.) (2022). Special issue on predictive processing and consciousness. Review of Philosophy and Psychology.