LIBELLULA v11

Neuromorphic Predictive Vision Core
Bio-Inspired Motion Prediction for Event-Based Sensors
<0.8 µs · Internal Latency
>10⁶ · Events/Second
±2 px · @ 300 Hz Prediction
<100 mW · Power Target
Provisional Patent CA 63/843,696

1 · Overview

LIBELLULA is a neuromorphic predictive-vision processor modeled on the dragonfly's CSTMD1/BSTMD2 visual pathways. Implemented in synthesizable Verilog and fully verified in simulation, it performs selective attention, burst-based gating, and Δ-ahead trajectory prediction—calculating where a moving target will be a fraction of a second into the future, before conventional systems have even finished processing the current frame.

Event-Camera Native: LIBELLULA is designed exclusively for event-based sensors (DVS, DAVIS, Prophesee, Samsung). The architecture exploits the microsecond-resolution asynchronous change signals that only event cameras provide.

2 · Validated Performance

All claims verified through deterministic testbenches with reproducible results. Core v11 includes fixes for non-blocking-assignment (NBA) ordering hazards, delay lattice pointer logic, and pipeline alignment.

Core Latency: 30 ns (6 cycles at 200 MHz, well under 0.8 µs) · pass
Prediction Accuracy: ≤ ±2 px (per prediction at 300 Hz, warm-up discarded) · pass
AER Throughput: 1 Meps (REQ=ACK, zero drops, 4-phase handshake) · pass
Test Coverage: 26/26 (full testbench suite, all passing) · pass

3 · Processing Pipeline

The pipeline ingests asynchronous events from DVS sensors through a feed-forward prediction architecture.

AER Rx → LIF Tile → Delay Lattice → Burst Gate → α-β Pred

aer_rx — Address-Event Receiver

4-phase handshake receiver following standard AER semantics. Designed to interoperate with DVS sensors (Prophesee, iniVation, Samsung) at FPGA bring-up and subsequent ASIC integration. Back-pressure-safe: no events are dropped even under sustained load.

REQ↑ → ACK↑ → REQ↓ → ACK↓ · REQ=2000, ACK=2000 at 1 Meps
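The handshake above can be sketched as a behavioral model (not the RTL): the sender raises REQ with the address valid on the bus, the receiver latches it and raises ACK, then both lines return low before the next event. Function and signal names here are illustrative.

```python
def aer_transfer(addresses):
    """Simulate 4-phase AER transfers; returns latched events and
    the REQ/ACK waveform (two transitions per line per event)."""
    received, waveform = [], []
    for addr in addresses:
        waveform.append(("REQ", 1))   # Phase 1: REQ rises, address valid
        received.append(addr)         # Phase 2: receiver latches address,
        waveform.append(("ACK", 1))   #          then ACK rises
        waveform.append(("REQ", 0))   # Phase 3: sender sees ACK, drops REQ
        waveform.append(("ACK", 0))   # Phase 4: receiver drops ACK; idle
    return received, waveform

events, wf = aer_transfer([(3, 7), (4, 7), (5, 8)])
req_edges = sum(1 for sig, _ in wf if sig == "REQ")
# Two REQ and two ACK transitions per event, consistent with the
# REQ=2000 / ACK=2000 counts the bench reports for 1000 events.
```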

lif_tile_tmux — Leaky Integrate-and-Fire Array

Time-multiplexed neuron array with 14-bit precision. Filters noise and performs short-term salience selection—only meaningful motion gets through to downstream stages.

14-bit accumulator · time-multiplexed across spatial array
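The time-multiplexed LIF behavior can be sketched as follows, assuming a shift-based leak and a saturating 14-bit accumulator; the leak shift and threshold constants are illustrative, not taken from the RTL.

```python
ACC_MAX = (1 << 14) - 1   # 14-bit accumulator ceiling
LEAK_SHIFT = 4            # leak: v -= v >> 4 each tick (assumed)
THRESHOLD = 8000          # spike threshold (assumed)

def lif_step(v, weighted_input):
    """One neuron update: leak, saturating integrate, fire-and-reset."""
    v = v - (v >> LEAK_SHIFT)              # exponential-style leak
    v = min(v + weighted_input, ACC_MAX)   # saturating integration
    if v >= THRESHOLD:
        return 0, True                     # reset membrane on spike
    return v, False

# Time-multiplexing: one shared datapath serves many pixels, whose
# membrane state lives in a memory and is visited slot by slot.
state = [0] * 4            # 4-pixel toy array
spikes = []
for tick in range(5):
    for px in range(4):
        inp = 3000 if px == 1 else 10      # only pixel 1 sees real motion
        state[px], fired = lif_step(state[px], inp)
        if fired:
            spikes.append((tick, px))
# Sparse background input never crosses threshold; sustained input does.
```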

delay_lattice_rb + reichardt_ds — Delay Lattice & Direction Selectivity

4-direction retinotopic delay lattice with ring buffers. Paired with a Reichardt elementary motion detector for leaky integration of temporal gradients into directional motion vectors. Direct silicon implementation of the Hassenstein-Reichardt model (1956).

4-direction · ring buffer · Reichardt EMD with leaky integrator
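A single Hassenstein-Reichardt correlator can be sketched behaviorally: two receptor channels one pixel apart, each side correlating the delayed signal of one channel with the direct signal of the other, with the opponent subtraction giving a signed direction estimate. A one-sample delay stands in for the ring-buffer lattice, and the leaky integrator is omitted for brevity.

```python
def reichardt_emd(a, b):
    """Return per-step opponent outputs for input trains a and b."""
    out, a_prev, b_prev = [], 0, 0
    for a_t, b_t in zip(a, b):
        # delayed-A correlates with direct-B, and vice versa;
        # the subtraction makes the detector direction-selective
        out.append(a_prev * b_t - a_t * b_prev)
        a_prev, b_prev = a_t, b_t
    return out

# A pulse hitting channel A first and B one step later (motion A -> B)
a = [0, 1, 0, 0]
b = [0, 0, 1, 0]
rightward = sum(reichardt_emd(a, b))   # net positive response
leftward = sum(reichardt_emd(b, a))    # reversed pulse: net negative
```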

burst_gate + conf_gate — Burst Detection & Confidence Scoring

Event density filter suppresses noise and sparse false triggers. Confidence scoring derives from event rate and direction magnitude, gating the predictor's output with a reliability measure.

Density threshold · confidence from rate × direction magnitude
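The gating idea can be sketched like this: a window of recent motion vectors must exceed a density threshold, and confidence is the product of event rate and the magnitude of the summed direction vectors, so incoherent noise cancels itself out. Window length and thresholds are illustrative, not from the RTL.

```python
def gate(events, window=8, density_min=4, conf_min=2.0):
    """events: list of (dx, dy) motion vectors in the current window.
    Returns (passed, confidence)."""
    if len(events) < density_min:
        return False, 0.0                   # too sparse: likely noise
    sx = sum(dx for dx, _ in events)
    sy = sum(dy for _, dy in events)
    rate = len(events) / window             # events per window slot
    magnitude = (sx * sx + sy * sy) ** 0.5  # coherent-direction strength
    conf = rate * magnitude                 # rate x direction magnitude
    return conf >= conf_min, conf

# Coherent rightward motion: dense and aligned -> passes the gate
ok_coherent, c1 = gate([(1, 0)] * 6)
# Same density but random directions cancel out -> rejected
ok_noise, c2 = gate([(1, 0), (-1, 0), (0, 1), (0, -1), (1, 0), (-1, 0)])
```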

ab_predictor — α-β Continuous-Time Predictor

Kalman-like predictor in Q8.8 fixed-point. Extrapolates trajectories forward in continuous time: p + v·Δt. Downstream consumers subscribe to (x̂, ŷ, conf) with pred_valid strobes.

Q8.8 fixed-point · ≤ ±2 px at 300 Hz · pred_valid strobe output
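The predictor's behavior can be sketched as a standard α-β tracker with a look-ahead output p + v·Δt. The 300 Hz update rate comes from the text; the gains, the 10 ms horizon, and floating-point arithmetic (instead of Q8.8) are simplifications for illustration.

```python
def ab_track(measurements, dt=1 / 300, alpha=0.85, beta=0.3,
             dt_ahead=0.010):
    """Track a 1-D position stream; return look-ahead predictions."""
    p, v, lookahead = measurements[0], 0.0, []
    for z in measurements[1:]:
        p_pred = p + v * dt                 # propagate state one step
        r = z - p_pred                      # innovation (residual)
        p = p_pred + alpha * r              # correct position estimate
        v = v + (beta / dt) * r             # correct velocity estimate
        lookahead.append(p + v * dt_ahead)  # extrapolate dt_ahead forward
    return lookahead, p, v

# Target moving at a constant 600 px/s (2 px per 300 Hz frame): once
# settled, the 10 ms look-ahead lands about 6 px ahead of the target.
zs = [600 * k / 300 for k in range(60)]
preds, p, v = ab_track(zs)
```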

4 · RTL Modules


aer_rx 4-phase AER handshake receiver

Address-Event Representation receiver implementing the standard 4-phase handshake protocol (REQ↑, ACK↑, REQ↓, ACK↓). Designed for interoperability with commercial DVS sensors and validated at ≥10⁶ events/s with zero drops.

lif_tile_tmux Time-multiplexed LIF neuron array (14-bit)

Leaky Integrate-and-Fire neuron tile sharing a single accumulator across the spatial array via time-multiplexing. 14-bit precision balances resolution against gate count for efficient FPGA/ASIC utilization.

delay_lattice_rb 4-direction retinotopic delay lattice

Ring-buffer-based delay lattice operating across four cardinal directions. Provides the temporal gradient structure required by the downstream Reichardt motion detector.

reichardt_ds Reichardt elementary motion detector

Elementary motion detector with leaky integration, converting delay lattice outputs into directional motion vectors. A direct computational analogue to biological motion detection in insect optic lobes.

burst_gate Event density filter

Suppresses noise and sparse false triggers by enforcing a minimum event density threshold before downstream processing is engaged.

ab_predictor α-β predictor in Q8.8 fixed-point

Kalman-like α-β predictor performing trajectory extrapolation in continuous time. Q8.8 fixed-point arithmetic keeps gate count minimal. Validated at ≤ ±2 px per prediction at 300 Hz.
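The Q8.8 representation itself is easy to sketch: 16-bit values with 8 integer and 8 fractional bits, giving a resolution of 1/256 px. The helper names below are ours, not the RTL's.

```python
SCALE = 1 << 8  # Q8.8: 8 fractional bits

def to_q88(x: float) -> int:
    """Encode a real number into Q8.8 (round to nearest 1/256 step)."""
    return round(x * SCALE)

def from_q88(q: int) -> float:
    """Decode a Q8.8 value back to a real number."""
    return q / SCALE

def mul_q88(a: int, b: int) -> int:
    """Q8.8 multiply: the raw product carries 16 fractional bits,
    so shift 8 of them back off to stay in Q8.8."""
    return (a * b) >> 8

# p + v * dt entirely in Q8.8: position 12.5 px, velocity 2.25 px/frame,
# extrapolated 3 frames ahead.
p, v, dt = to_q88(12.5), to_q88(2.25), to_q88(3.0)
ahead = p + mul_q88(v, dt)
# from_q88(ahead) -> 19.25
```

Addition needs no rescaling in Q8.8; only multiplication does, which keeps the datapath small.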

conf_gate Confidence scoring from rate × direction

Confidence gate deriving a reliability score from event rate and direction magnitude. Gates the predictor output, ensuring downstream consumers receive predictions only when coherent motion is present.

5 · Build & Reproducibility

All benches are invoked from sim/Makefile. A single sweep can be run via make run-once.

# Clean and run full validation
make clean
make run-once  # latency, px300, meps, power in sequence

# Individual benches
make latency  # Assert ≤ 6 cycles at 200 MHz
make px300    # ±2 px bound at 300 Hz
make meps     # 1 Meps, zero drops
make power    # Toggle count → power_activity.csv
make test-x3  # 3× consistency check

6 · Competitive Landscape

Current drone-vision and motion-prediction systems fall into four categories. None combine sub-millisecond latency with a forward prediction horizon.

Solution                                                       Latency      Forecast                        Power
DJI / Skydio commercial stack (frame CNN on ARM + GPU, est.)   20–40 ms     Reactive only                   3–8 W
ETH-UZH event-camera avoidance (Sci. Robotics 2020)            3.5 ms       0 ms (reactive)                 ~10 W
FPGA event-vision accelerator (Bonazzi et al., arXiv 2024)     ~2 ms        0 ms                            3–5 W
LIBELLULA ASIC (projected)                                     0.1–0.5 ms   2–30 ms ahead (Δt selectable)   < 20 mW

7 · Why Now

Technical feasibility arrived only in the last 24 months, when sensor bandwidth, biological data, and fabrication economics all crossed critical thresholds simultaneously.

Event sensors broke the bandwidth barrier

First-gen DVS (2009–2018) topped out at roughly 50 keps. The Prophesee/Sony IMX636 (2021) reached 1.066 Geps and OmniVision's 3-wafer stacked sensor (ISSCC 2023) demonstrated 4.6 Geps—two orders of magnitude beyond early hardware.


Burst-coding biology was fully characterised

The selective-attention and gain-modulation behaviour of CSTMD1/BSTMD2 was not quantified until the Wiederman (2017–2024), Fabian (2019), and Nordström/O'Carroll datasets gave precise inter-spike timing and facilitation curves.


Special-purpose SNN silicon became economical

A fixed-topology, feed-forward lattice uses 2–4× fewer transistors than an all-programmable SNN and can ride 65-nm MPW shuttles for under US$250k—putting first silicon within reach at under US$2M.

IP Whitespace Still Exists

Generic "spiking vision" patents do not disclose (i) an event-driven retinotopic front-end that feeds (ii) direction-selective delayed synapses with (iii) burst-encodings that yield a predictive vector p + v·Δt.

That specific trio—plus the System-on-Module embodiment—represents defensible IP whitespace that no existing patent or publication currently claims.

8 · Strategic Position

LIBELLULA is not a competitor to neuromorphic AI. It is the infrastructure that makes neuromorphic AI deployable.

As event-based vision matures from research demonstrations to certified products, the systems that succeed will combine learned adaptability with deterministic foundations. They will deploy neural networks for pattern recognition, generalization, and context sensitivity—while relying on verified hardware for hard latency bounds, formal verification, activity-proportional power, and certifiable behaviour.

LIBELLULA occupies the latter role: a microsecond-scale, gate-level-verified, power-efficient preprocessing layer that learned systems require but cannot themselves instantiate.

9 · Roadmap & IP

FPGA hardware loop-in with physical DVS front-end (Prophesee, iniVation)

Timing characterization on silicon

Extension of prediction horizon toward 2–30 ms under power caps

Patent coverage: predictive retinotopic delay lattice, confidence-gated bursts, low-power time-multiplexed computation. U.S. Provisional Patent Application No. 63/793,528, filed July 14, 2025.

Forward Development: Predictive Mesh Lattice

A planned module—the Predictive Mesh Lattice (PML)—adds short-horizon anticipatory scan steering for improved robustness under vibration, occlusion, and sensor noise.

PML is intentionally excluded from Core v11: tuning is application-specific.