Causal Learning with Neural Assemblies

Key Takeaways

  • This paper explores how neural assemblies—groups of neurons that fire together and strengthen through repeated co-activation—can learn the direction of causal influence between variables.
  • Can Neural Assemblies -- groups of neurons that fire together and strengthen through co-activation -- learn the direction of causal influence between variables?
  • While established as a computationally general substrate for classification, parsing, and planning, neural assemblies have not yet been shown to internalize causal directionality.
  • We demonstrate that the inherent operations of neural assemblies -- projection, local plasticity control, and sparse winner selection -- are sufficient for directional learning.
  • We introduce DIRECT (DIRectional Edge Coupling/Training), a mechanism that co-activates source and target assemblies under an adaptive gain schedule to internalize directed relations.
Paper Abstract

Can Neural Assemblies -- groups of neurons that fire together and strengthen through co-activation -- learn the direction of causal influence between variables? While established as a computationally general substrate for classification, parsing, and planning, neural assemblies have not yet been shown to internalize causal directionality. We demonstrate that the inherent operations of neural assemblies -- projection, local plasticity control, and sparse winner selection -- are sufficient for directional learning. We introduce DIRECT (DIRectional Edge Coupling/Training), a mechanism that co-activates source and target assemblies under an adaptive gain schedule to internalize directed relations. Unlike backpropagation-based methods, DIRECT relies solely on local plasticity, making the resulting causal claims auditable at the mechanism level. Our findings are verified through a dual-readout validation strategy: (i) synaptic-strength asymmetry, measuring the emergent weight gap between forward and reverse links, and (ii) functional propagation overlap, quantifying the reliability of directional signal flow. Across multiple domains, the framework achieves perfect structural recovery under a supervised, known-structure setting. These results establish neural assemblies as an auditable bridge between biologically plausible dynamics and formal causal models, offering an "explainable by design" framework where causal claims are traceable to specific neural winners and synaptic asymmetries.

This paper explores how neural assemblies—groups of neurons that fire together and strengthen through repeated co-activation—can learn the direction of causal influence between variables. While neural assemblies are already known for their ability to classify data and plan actions, this research demonstrates that they can also internalize directed causal relationships. By using a mechanism called DIRECT (DIRectional Edge Coupling/Training), the authors show that neural assemblies can function as an "explainable by design" framework, where causal claims are directly traceable to specific neural activity and synaptic connections.

How the DIRECT Mechanism Works

The core of the framework is the DIRECT mechanism, which teaches the system directed relationships without relying on the complex, opaque optimization methods often used in traditional neural networks. Instead, it uses local plasticity—the brain-inspired process where connections between neurons strengthen based on their activity.
To learn a causal link from a source variable to a target variable, the system co-activates the corresponding neural assemblies while applying a temporary, adaptive increase in "gain" to the forward connection. This process is carefully managed through a "warm-ramp" schedule: it starts with conservative updates to ensure the neural representations are stable, then gradually increases the strength of the directional binding. This ensures that the learned asymmetry between forward and reverse links is a result of the causal training rather than random noise or instability.
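The binding procedure described above can be sketched in a few lines. This is a minimal toy model, not the authors' implementation: the population size N, assembly size K, baseline rate BETA, and the helper names winners and direct_train are all illustrative assumptions, and the warm-ramp is modeled as a linearly growing multiplicative gain on a Hebbian update.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 200, 20        # neurons per area, winners per assembly (assumed sizes)
BETA = 0.05           # baseline plasticity rate (assumed)

def winners(drive, k=K):
    """Sparse winner selection: the k most strongly driven neurons fire."""
    return np.argsort(drive)[-k:]

def direct_train(W_fwd, src, steps=30):
    """DIRECT-style binding sketch: co-activate source and target areas while
    ramping the plasticity gain on the forward projection (warm-ramp)."""
    for t in range(steps):
        gain = BETA * (t + 1) / steps      # conservative early, stronger late
        drive = W_fwd[src].sum(axis=0)     # input each target neuron receives
        tgt = winners(drive)               # target assembly that fires
        # Local Hebbian rule: multiplicatively strengthen co-active links.
        W_fwd[np.ix_(src, tgt)] *= 1.0 + gain
    return W_fwd, tgt

W_fwd = rng.uniform(0.9, 1.1, (N, N))     # forward synapses, near-uniform start
W_rev = rng.uniform(0.9, 1.1, (N, N))     # reverse synapses, never trained
src = rng.choice(N, K, replace=False)     # source assembly winners
W_fwd, tgt = direct_train(W_fwd, src)
```

Because only the forward projection is potentiated, the mean src-to-tgt weight ends up well above the untrained reverse weights, which is exactly the asymmetry the validation readouts later measure.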

Validating Causal Claims

Because the framework is built on local neural dynamics, the researchers can verify their results through two specific, auditable readouts. First, they measure "synaptic-strength asymmetry," which calculates the difference in weight between forward and reverse links. A stronger forward link compared to the reverse link serves as evidence of a learned causal direction.
Second, they use "functional propagation overlap." This test simulates how a signal would flow through the network if only the source assembly were active. By checking if this signal reliably triggers the correct target assembly, the researchers can quantify how well the system has internalized the causal direction. These methods allow the researchers to inspect the "connectome" of the model directly to see exactly how it represents a causal relationship.
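The two readouts are straightforward to compute once the weight matrices are in hand. The sketch below uses hand-built toy weights (a potentiated forward block, a baseline reverse block) rather than trained ones; the sizes and the 3.0 potentiation factor are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 200, 20                          # neurons per area, winners per assembly

src = rng.choice(N, K, replace=False)   # source assembly winners
tgt = rng.choice(N, K, replace=False)   # target assembly winners

# Toy weights: the forward source->target block has been potentiated by
# training; the reverse projection stays at baseline.
W_fwd = np.ones((N, N))
W_fwd[np.ix_(src, tgt)] = 3.0
W_rev = np.ones((N, N))

# Readout (i): synaptic-strength asymmetry -- mean forward minus reverse weight.
asymmetry = W_fwd[np.ix_(src, tgt)].mean() - W_rev[np.ix_(tgt, src)].mean()

# Readout (ii): functional propagation overlap -- activate only the source,
# propagate one step, and compare the winners with the stored target assembly.
drive = W_fwd[src].sum(axis=0)          # input each target neuron receives
fired = set(np.argsort(drive)[-K:])     # sparse winner selection
overlap = len(fired & set(tgt)) / K     # asymmetry == 2.0, overlap == 1.0
```

A clearly positive asymmetry and an overlap near 1.0 together indicate that the learned direction is present both structurally (in the weights) and functionally (in the signal flow).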

Performance and Interpretability

The researchers tested this framework across multiple domains and found that it achieved perfect structural recovery (Precision@K = 1.0) in supervised settings where the ground-truth causal structure was known. By establishing that neural assemblies can preserve and learn causal information, the authors position this approach as a bridge between biologically plausible neural dynamics and formal causal models.
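Precision@K, the structural-recovery metric quoted above, is the fraction of the K highest-scoring candidate edges that are true edges. A minimal sketch, where the edge names and scores are purely illustrative (e.g. scores could come from the synaptic-strength asymmetry readout):

```python
def precision_at_k(scores, true_edges, k):
    """Fraction of the k highest-scoring candidate edges that are true edges."""
    top_k = sorted(scores, key=scores.get, reverse=True)[:k]
    return sum(edge in true_edges for edge in top_k) / k

# Directed candidate edges scored by, e.g., forward-minus-reverse weight gap.
scores = {("A", "B"): 2.1, ("B", "A"): -2.1, ("B", "C"): 1.8, ("C", "B"): -1.8}
truth = {("A", "B"), ("B", "C")}
precision_at_k(scores, truth, k=2)   # -> 1.0: both top-scored edges are true
```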
The primary advantage of this approach is its transparency. Unlike backpropagation-based methods, where it is often difficult to trace why a model made a specific decision, this framework allows for "stage-level diagnostic localization." If a causal link is not correctly recovered, the researchers can pinpoint whether the failure occurred during the initial assembly formation, the directional binding phase, or the final readout. This makes the system inherently explainable, as every causal claim is tied to observable physical changes in the neural network.
