Research

The science behind the brain.

OSCEN is grounded in decades of computational neuroscience. We build on established research, and we're honest about where we diverge.

Neuroscience foundations

Each mechanism in OSCEN maps to established neuroscience research.

STDP


Bi & Poo (1998), Markram et al. (1997)

The foundational discovery that spike timing determines synaptic plasticity direction and magnitude. Our implementation uses configurable time constants per synapse group.
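As an illustrative sketch (not OSCEN's implementation; the amplitudes and time constants below are placeholder values), a pair-based STDP kernel looks like this:

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one pre/post spike pair, dt_ms = t_post - t_pre.

    Pre-before-post (dt_ms > 0) potentiates; post-before-pre depresses.
    The magnitude decays exponentially with the timing gap.
    """
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_plus)
    if dt_ms < 0:
        return -a_minus * math.exp(dt_ms / tau_minus)
    return 0.0
```

Giving each synapse group its own `tau_plus`/`tau_minus` corresponds to the configurable per-group time constants described above.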

Eligibility Traces


Izhikevich (2007), Gerstner et al. (2018)

Three-factor learning rules where STDP creates a 'tag' that neuromodulators later convert to permanent change. Bridges millisecond spike timing with second-scale reward signals.
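The tag-then-modulate idea can be sketched in a few lines; the function name and constants here are hypothetical, chosen only to make the timescale bridging concrete:

```python
import math

def three_factor_step(w, trace, stdp_event, modulator, lr=0.1, tau_e=1000.0, dt=1.0):
    """One timestep (dt in ms) of a three-factor rule.

    An STDP coincidence writes into a decaying eligibility trace (the
    'tag'); the weight only changes when a neuromodulatory signal, such
    as a dopamine reward prediction error, arrives while the tag persists.
    """
    trace = trace * math.exp(-dt / tau_e) + stdp_event  # decay, then tag
    w = w + lr * modulator * trace                      # modulator gates the change
    return w, trace
```

Because `tau_e` is on the order of a second (1000 ms here), a reward arriving hundreds of milliseconds after the spike pair still converts the tag into a lasting weight change.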

BCM Theory


Bienenstock, Cooper & Munro (1982)

Sliding threshold for synaptic modification. Active neurons become harder to excite further. Creates competition and selectivity in neural responses.
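A minimal sketch of the BCM rule and its sliding threshold (learning rates and time constants are illustrative, not OSCEN's parameters):

```python
def bcm_step(w, pre, post, theta, eta=0.01, tau_theta=100.0, dt=1.0):
    """One BCM update: dw is proportional to pre * post * (post - theta).

    theta is the sliding modification threshold, tracking a running
    average of squared postsynaptic activity: a highly active neuron
    raises its own threshold and becomes harder to potentiate further,
    which is what creates competition and selectivity.
    """
    w = w + eta * pre * post * (post - theta)
    theta = theta + (dt / tau_theta) * (post ** 2 - theta)
    return w, theta
```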

Neuromodulation


Schultz (1998), Hasselmo (2006)

Dopaminergic reward prediction error, cholinergic attention gating, noradrenergic arousal modulation, serotonergic mood regulation. Each modeled as a global gain channel.
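One way to represent such global gain channels in code. The channel semantics follow the description above, but how the channels combine here is an assumption made for illustration:

```python
from dataclasses import dataclass

@dataclass
class Neuromodulators:
    """Global gain channels; values are dimensionless gains."""
    dopamine: float = 0.0        # reward prediction error, can be negative
    acetylcholine: float = 1.0   # attention gating
    noradrenaline: float = 1.0   # arousal
    serotonin: float = 1.0       # mood / slow baseline

    def plasticity_gain(self) -> float:
        # Dopamine signs the weight update; attention and arousal scale it.
        return self.dopamine * self.acetylcholine * self.noradrenaline

    def excitability_gain(self) -> float:
        # Arousal and mood jointly scale neuronal excitability.
        return self.noradrenaline * self.serotonin
```

With `dopamine = 0` no weight change occurs at all, which is exactly the gating role the third factor plays in the eligibility-trace rule above.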

Synaptic Scaling


Turrigiano et al. (1998)

Multiplicative homeostatic plasticity that normalizes total synaptic input per neuron. Prevents runaway excitation while preserving relative weight distributions.
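Multiplicative scaling is simple to sketch: every incoming weight is multiplied by the same factor, so the total is normalized while relative ratios survive. Function and parameter names are illustrative:

```python
def synaptic_scaling(weights, target_total, rate=0.1):
    """Nudge a neuron's summed synaptic input toward target_total.

    All weights are scaled by one common factor, so the relative
    distribution is preserved; rate < 1 makes the correction gradual,
    as homeostatic plasticity is in biology.
    """
    total = sum(weights)
    if total == 0.0:
        return list(weights)
    factor = 1.0 + rate * (target_total / total - 1.0)
    return [w * factor for w in weights]
```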

Critical Periods


Hensch (2005), Knudsen (2004)

Experience-dependent windows of heightened plasticity. Our adolescent phase models the reminiscence bump: a period of rapid pruning, myelination, and identity formation.
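A critical period can be modeled as a gain that multiplies all plasticity rules: high while the window is open, decaying to a residual adult level after it closes. This is a toy sketch; the units and constants are hypothetical, not OSCEN's developmental schedule.

```python
import math

def plasticity_gain(age, t_close=30.0, tau=10.0, floor=0.1):
    """Developmental plasticity gain at a given (arbitrary-unit) age.

    Full plasticity while the window is open; exponential decay toward
    a small adult floor once the critical period ends.
    """
    if age <= t_close:
        return 1.0
    return floor + (1.0 - floor) * math.exp(-(age - t_close) / tau)
```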

Energy efficiency

Spiking neural networks compute only when neurons fire. On neuromorphic hardware, this translates to dramatic energy savings.

  • GPU inference: 50–200 W
  • LLM API call: ~0.3 Wh/query
  • OSCEN (CPU simulation): ~45 W
  • OSCEN (neuromorphic): <5 W

  • 73× less energy than CPU. Measured on Intel Loihi for a robotics peg-in-hole task (IEEE, 2024).
  • 109× less energy than GPU. Measured on Intel Loihi for keyword spotting (published benchmark).
SNN state of the art

Where spiking neural networks stand today, the capabilities and the gaps.

Capabilities

  • + Continual learning without catastrophic forgetting
  • + Event-driven computation (compute only when needed)
  • + Temporal pattern recognition (spike timing encodes information)
  • + Neuromorphic hardware deployment (Intel Loihi, SpiNNaker)
  • + Real-time sensor processing at <1ms latency

Current Limitations

  • ~ Best ImageNet accuracy: 82.39% vs ANN 91%
  • ~ SNN LLMs max out at ~1.5B parameters
  • ~ Training tools less mature than PyTorch/JAX ecosystem
  • ~ Limited off-the-shelf pretrained models
  • ~ Neuromorphic hardware still in early production

Our position: OSCEN doesn't compete with LLMs on language benchmarks. We compete on continual learning, energy efficiency, and real-time adaptation: exactly the capabilities robotics needs and LLMs can't provide.

Competitive landscape

How OSCEN compares to transformer-based robotics AI.

Company                 Approach             Learning    Edge
Figure AI               VLA Transformer      Static      No
Google DeepMind         Gemini VLA           Retrain     Partial
Physical Intelligence   3B-param VLA         Static      Cloud
OSCEN                   Spiking Neural Net   Continual   Yes

Honest assessment

Can OSCEN match GPT-4 at language?


No. And it's not designed to. OSCEN is built for embodied intelligence: real-time sensor processing, motor control, and continuous adaptation. For language, it queries LLMs as tools.

Is training on one server really enough?


For proof of concept, yes. We're demonstrating that a million spiking neurons can learn from raw sensory data. Production deployment on neuromorphic hardware will scale to billions of neurons at a fraction of the power.

What about accuracy benchmarks?


We don't yet have published benchmarks on standard tasks. The brain is still in its juvenile developmental phase. Early results show cross-modal association formation and motor pattern learning. Formal benchmarks are planned for Q2 2026.

Why not just use transformers for everything?


Transformers excel at static inference. But they can't learn from a single experience, can't run on milliwatt hardware, and can't adapt in real-time. For robotics, those aren't nice-to-haves. They're requirements.

Key references

STDP. Bi, G.-Q. & Poo, M.-M. (1998). Synaptic modifications in cultured hippocampal neurons. J. Neurosci.

Three-factor. Gerstner, W. et al. (2018). Eligibility traces and plasticity on behavioral time scales. Front. Neural Circuits.

BCM. Bienenstock, E., Cooper, L. & Munro, P. (1982). Theory for the development of neuron selectivity. J. Neurosci.

Reward. Schultz, W. (1998). Predictive reward signal of dopamine neurons. J. Neurophysiol.

Scaling. Turrigiano, G. et al. (1998). Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature.

Critical. Hensch, T. (2005). Critical period plasticity in local cortical circuits. Nature Rev. Neurosci.

Loihi. Intel Labs (2024). Neuromorphic computing benchmarks on Loihi 2. IEEE IJCNN.
