LIBELLULA is a neuromorphic predictive-vision processor modeled on the dragonfly's CSTMD1/BSTMD2 visual pathways. Implemented in synthesizable Verilog and fully verified in simulation, it performs selective attention, burst-based gating, and Δ-ahead trajectory prediction—calculating where a moving target will be a fraction of a second into the future, before conventional systems have even finished processing the current frame.
Event-Camera Native: LIBELLULA is designed exclusively for event-based sensors (DVS, DAVIS, Prophesee, Samsung). The architecture exploits the microsecond-resolution asynchronous change signals that only event cameras provide.
All claims verified through deterministic testbenches with reproducible results. Core v11 includes fixes for NBA ordering hazards, delay lattice pointer logic, and pipeline alignment.
The pipeline ingests asynchronous events from DVS sensors through a feed-forward prediction architecture.
4-phase handshake receiver following standard AER semantics. Designed to interoperate with DVS sensors (Prophesee, iniVation, Samsung) at FPGA bring-up and subsequent ASIC integration. Back-pressure-safe: no events are dropped even under sustained load.
Time-multiplexed neuron array with 14-bit precision. Filters noise and performs short-term salience selection—only meaningful motion gets through to downstream stages.
4-direction retinotopic delay lattice with ring buffers. Paired with a Reichardt elementary motion detector for leaky integration of temporal gradients into directional motion vectors. Direct silicon implementation of the Hassenstein-Reichardt model (1956).
Event density filter suppresses noise and sparse false triggers. Confidence scoring derives from event rate and direction magnitude, gating the predictor's output with a reliability measure.
Kalman-like predictor in Q8.8 fixed-point. Extrapolates trajectories forward in continuous time: p + v·Δt. Downstream consumers subscribe to (x̂, ŷ, conf) with pred_valid strobes.
Address-Event Representation receiver implementing the standard 4-phase handshake protocol (REQ↑, ACK↑, REQ↓, ACK↓). Designed for interoperability with commercial DVS sensors and validated at ≥10⁶ events/s with zero drops.
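The handshake sequencing can be sketched as a behavioral model. This is a Python illustration, not the RTL: the `AERBus` class, its field names, and the polling loop are invented for the example.

```python
class AERBus:
    """Toy 4-phase AER bus: the sender drives req/addr, the receiver drives ack."""
    def __init__(self, addresses):
        self.pending = list(addresses)
        self.req = False
        self.ack = False
        self.addr = None

    def sender_step(self):
        # REQ up with a new address once the previous ACK has cleared
        if not self.req and not self.ack and self.pending:
            self.addr = self.pending.pop(0)
            self.req = True
        # REQ down after the receiver acknowledges
        elif self.req and self.ack:
            self.req = False

def aer_receive(bus, cycles=100):
    """Receiver half of the handshake: REQ up -> latch -> ACK up -> REQ down -> ACK down."""
    events = []
    for _ in range(cycles):
        bus.sender_step()
        if bus.req and not bus.ack:
            events.append(bus.addr)   # latch the event address on REQ up
            bus.ack = True            # ACK up: capture acknowledged
        elif not bus.req and bus.ack:
            bus.ack = False           # ACK down completes the 4-phase cycle
    return events

bus = AERBus([(3, 7), (4, 7), (5, 8)])  # (x, y) pixel addresses
print(aer_receive(bus))                 # all three events captured, in order
```

Because the receiver only raises ACK after latching, the sender cannot advance until the event is safely captured; this is the mechanism behind the zero-drop guarantee.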
Leaky Integrate-and-Fire neuron tile sharing a single accumulator across the spatial array via time-multiplexing. 14-bit precision balances resolution against gate count for efficient FPGA/ASIC utilization.
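The time-multiplexing scheme can be modeled in a few lines of Python. The leak, threshold, and input encoding below are illustrative stand-ins, not the RTL parameters; only the structure (one shared accumulator scanning a per-pixel membrane memory at 14-bit precision) mirrors the design.

```python
def lif_tile(frames, leak=2, threshold=100, n_pixels=4):
    """Time-multiplexed LIF tile: one accumulator visits each pixel in turn.

    Membrane state lives in a small memory; on each pixel's turn the shared
    accumulator loads its potential, applies leak plus input, and spikes if
    the threshold is crossed. frames[t][i] = input charge at pixel i.
    """
    MAX_V = (1 << 14) - 1               # 14-bit membrane precision
    membrane = [0] * n_pixels
    spikes = []
    for t, frame in enumerate(frames):
        for i in range(n_pixels):       # time-multiplexed scan of the array
            v = membrane[i]
            v = max(v - leak, 0) + frame[i]  # leaky integration
            v = min(v, MAX_V)                # saturate at 14 bits
            if v >= threshold:
                spikes.append((t, i))        # fire
                v = 0                        # reset after spike
            membrane[i] = v
    return spikes

sustained = [[60, 0, 0, 0], [60, 0, 0, 0]]  # repeated input at pixel 0
print(lif_tile(sustained))                  # pixel 0 crosses threshold on frame 1
```

A single sub-threshold event decays away, while sustained input accumulates past threshold, which is how the tile passes only meaningful motion downstream.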
Ring-buffer-based delay lattice operating across four cardinal directions. Provides the temporal gradient structure required by the downstream Reichardt motion detector.
Elementary motion detector with leaky integration, converting delay lattice outputs into directional motion vectors. A direct computational analogue to biological motion detection in insect optic lobes.
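A one-dimensional sketch of the Hassenstein-Reichardt correlation, written in Python for clarity rather than Verilog: each pixel's delayed signal is multiplied with its neighbor's current signal in both directions, and the opponent subtraction yields a signed motion estimate. The ring-buffer depth and leak constant are illustrative.

```python
def reichardt_1d(frames, delay=1, leak=0.9):
    """1-D Reichardt detector over a pixel row (positive output = rightward)."""
    n = len(frames[0])
    ring = [[0.0] * n for _ in range(delay + 1)]  # per-pixel ring-buffer delay line
    out = 0.0
    ptr = 0
    for frame in frames:
        delayed = ring[(ptr + 1) % (delay + 1)]   # oldest entry = delayed copy
        corr = 0.0
        for x in range(n - 1):
            rightward = delayed[x] * frame[x + 1]  # delayed-left x current-right
            leftward = delayed[x + 1] * frame[x]   # delayed-right x current-left
            corr += rightward - leftward           # opponent subtraction
        out = leak * out + corr                    # leaky integration over time
        ring[ptr] = list(frame)
        ptr = (ptr + 1) % (delay + 1)
    return out

# A bright edge sweeping right produces a positive response:
moving_right = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
print(reichardt_1d(moving_right) > 0)  # True
```

The silicon version runs the same correlation along four cardinal directions of the retinotopic lattice, producing a directional motion vector per region.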
Suppresses noise and sparse false triggers by enforcing a minimum event density threshold before downstream processing is engaged.
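The density criterion amounts to a sliding-window count. A minimal Python sketch, with the window length and minimum count chosen purely for illustration:

```python
from collections import deque

def density_filter(timestamps_us, window_us=1000, min_events=4):
    """Pass an event only if at least min_events arrived within the
    trailing window_us microseconds; sparse triggers are suppressed."""
    recent = deque()
    passed = []
    for t in timestamps_us:
        recent.append(t)
        while recent[0] < t - window_us:   # drop events outside the window
            recent.popleft()
        if len(recent) >= min_events:      # dense enough: genuine activity
            passed.append(t)
    return passed

print(density_filter([0, 100, 200, 300, 400]))  # burst passes: [300, 400]
print(density_filter([0, 5000, 10000]))         # isolated noise: []
```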
Kalman-like α-β predictor performing trajectory extrapolation in continuous time. Q8.8 fixed-point arithmetic keeps gate count minimal. Validated to within ±2 px per prediction at 300 Hz.
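One α-β update in Q8.8 can be worked through concretely. The Python below models the fixed-point arithmetic (16-bit values with 8 fractional bits); the gains and Δt are illustrative, not the silicon constants.

```python
Q = 8  # Q8.8: a real value v is stored as the integer round(v * 256)

def q_mul(a, b):
    return (a * b) >> Q          # fixed-point multiply: rescale the product

def q_div(a, b):
    return (a << Q) // b         # fixed-point divide: prescale the dividend

def alpha_beta_step(p, v, meas, alpha, beta, dt):
    """One α-β predictor update; every argument is a Q8.8 integer."""
    p_pred = p + q_mul(v, dt)               # extrapolate: p + v*dt
    r = meas - p_pred                       # innovation (residual)
    p_new = p_pred + q_mul(alpha, r)        # position correction
    v_new = v + q_mul(beta, q_div(r, dt))   # velocity correction
    return p_new, v_new

to_q = lambda x: int(round(x * 256))
# p = 10 px, v = 4 px/s, dt = 0.5 s, measurement arrives at 12.5 px:
p, v = alpha_beta_step(to_q(10), to_q(4), to_q(12.5),
                       to_q(0.5), to_q(0.25), to_q(0.5))
print(p / 256, v / 256)  # 12.25 4.25
```

The same extrapolation step, run forward without a measurement, produces the Δt-ahead (x̂, ŷ) estimate that downstream consumers read alongside the pred_valid strobe.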
Confidence gate deriving a reliability score from event rate and direction magnitude. Gates the predictor output, ensuring downstream consumers receive predictions only when coherent motion is present.
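The gating logic can be sketched as follows; the thresholds and the saturating-product score are placeholders for illustration, not the RTL constants.

```python
def confidence_gate(event_rate, dir_mag,
                    rate_thresh=50, mag_thresh=16, max_conf=255):
    """Derive a reliability score from event rate and direction magnitude,
    strobing pred_valid only when both clear their thresholds."""
    pred_valid = event_rate >= rate_thresh and dir_mag >= mag_thresh
    conf = min((event_rate * dir_mag) >> 4, max_conf)  # saturating 8-bit score
    return (pred_valid, conf if pred_valid else 0)

print(confidence_gate(120, 32))  # (True, 240): coherent motion, high confidence
print(confidence_gate(10, 32))   # (False, 0): too few events, output gated off
```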
All benches are invoked from `sim/Makefile`. A single sweep can be run via `make run-once`.
Current drone-vision and motion-prediction systems fall into four categories. None combine sub-millisecond latency with a forward prediction horizon.
| Solution | Latency | Forecast | Power |
|---|---|---|---|
| DJI / Skydio commercial stack (frame-based CNN on ARM + GPU, est.) | 20–40 ms | Reactive only | 3–8 W |
| ETH-UZH event-camera avoidance (Sci. Robotics 2020) | 3.5 ms | 0 ms (reactive) | ~10 W |
| FPGA event-vision accelerator (Bonazzi et al., arXiv 2024) | ~2 ms | 0 ms | 3–5 W |
| LIBELLULA ASIC (projected) | 0.1–0.5 ms | 2–30 ms ahead (Δt selectable) | < 20 mW |
Technical feasibility arrived only in the last 24 months, when sensor bandwidth, biological data, and fabrication economics all crossed critical thresholds simultaneously.
First-gen DVS (2009–2018) topped out at roughly 50 keps. The Prophesee/Sony IMX636 (2021) reached 1.066 Geps and OmniVision's 3-wafer stacked sensor (ISSCC 2023) demonstrated 4.6 Geps—two orders of magnitude beyond early hardware.
The selective-attention and gain-modulation behaviour of CSTMD1/BSTMD2 was not quantified until the Wiederman (2017–2024), Fabian (2019), and Nordström/O'Carroll datasets provided precise inter-spike timing and facilitation curves.
A fixed-topology, feed-forward lattice uses 2–4× fewer transistors than an all-programmable SNN and can ride 65-nm MPW shuttles for under US$250k—putting first silicon within reach at under US$2M.
Generic "spiking vision" patents do not disclose (i) an event-driven retinotopic front-end that feeds (ii) direction-selective delayed synapses with (iii) burst-encodings that yield a predictive vector p + v·Δt.
That specific trio—plus the System-on-Module embodiment—represents defensible IP whitespace that no existing patent or publication currently claims.
LIBELLULA is not a competitor to neuromorphic AI. It is the infrastructure that makes neuromorphic AI deployable.
As event-based vision matures from research demonstrations to certified products, the systems that succeed will combine learned adaptability with deterministic foundations. They will deploy neural networks for pattern recognition, generalization, and context sensitivity—while relying on verified hardware for hard latency bounds, formal verification, activity-proportional power, and certifiable behaviour.
LIBELLULA occupies the latter role: a microsecond-scale, gate-level-verified, power-efficient preprocessing layer that learned systems require but cannot themselves instantiate.
- FPGA hardware-in-the-loop testing with a physical DVS front-end (Prophesee, iniVation)
- Timing characterization on silicon
- Extension of the prediction horizon toward 2–30 ms under power caps
- Patent coverage: predictive retinotopic delay lattice, confidence-gated bursts, low-power time-multiplexed computation

LIBELLULA v11 · Neuromorphic Predictive Vision Processor | U.S. Provisional Patent Application No. 63/793,528, filed July 14, 2025.
A planned module—the Predictive Mesh Lattice (PML)—adds short-horizon anticipatory scan steering for improved robustness under vibration, occlusion, and sensor noise.
PML is intentionally excluded from Core v11: tuning is application-specific.