One engine.
Four domains.
Real-time intelligence.
WUKS AI is PIXAM's real-time intelligence engine: a single system that interprets GNSS signals, controls autonomous robots, guides aerial drones and powers logic tools, all simultaneously and in under 50 ms.
What WUKS AI powers
Four distinct domains. One shared intelligence core. Every system below runs on the same real-time engine — making each smarter, faster and more reliable in the field.
AI & Signal Intelligence
The core reasoning layer. WUKS AI classifies satellite observations, detects multipath and RF interference, scores positioning confidence in real time and adapts every downstream decision based on what the signals are actually saying.
GNSS & RTK Pipelines
WUKS AI monitors every RTCM correction stream and RTK pipeline — base station health, correction quality and rover fix confidence are continuously evaluated and fed back to the autonomy layer before errors reach navigation.
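To make the idea concrete, a minimal correction-stream health check might look like the Python sketch below; the thresholds, class name and verdict fields are illustrative assumptions, not the WUKS AI implementation.

```python
import time

# Hypothetical thresholds -- the actual WUKS AI limits are not published.
MAX_CORRECTION_AGE_S = 5.0       # RTK corrections older than this are suspect
MIN_MESSAGES_PER_WINDOW = 4      # expected RTCM frames per evaluation window


class CorrectionStreamMonitor:
    """Tracks the age and cadence of an incoming RTCM correction stream."""

    def __init__(self):
        self.last_rx = None
        self.window_count = 0

    def on_rtcm_message(self, payload: bytes) -> None:
        # Called for every RTCM frame received from the base station / NTRIP caster.
        self.last_rx = time.monotonic()
        self.window_count += 1

    def evaluate(self) -> dict:
        # Returns a health verdict the autonomy layer can act on
        # before degraded corrections reach navigation.
        age = float("inf") if self.last_rx is None else time.monotonic() - self.last_rx
        healthy = age <= MAX_CORRECTION_AGE_S and self.window_count >= MIN_MESSAGES_PER_WINDOW
        verdict = {"correction_age_s": age, "messages": self.window_count, "healthy": healthy}
        self.window_count = 0
        return verdict
```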
Autonomous Robot Control
Closed-loop decision engine for WUKI, KNOX, GNSS Crawler and other PIXAM platforms. WUKS AI translates positioning quality and mission parameters into real-time motion commands — path correction, speed adaptation and hardened abort logic.
Logic, Drones & Integration
WUKS AI feeds live confidence data to Logik's visual programming flows, drives in-flight GNSS quality assessment for aerial drones and exports telemetry over MAVLINK, MQTT and ROS — compatible with every tool in the PIXAM and Diginto.tech stack.
What goes in. What comes out.
Inputs
- Raw GNSS observables: Multi-constellation pseudorange, carrier phase and Doppler from GPS, Galileo, GLONASS and BeiDou receivers.
- RTCM correction streams: Live RTK and NTRIP correction data, evaluated for quality and anomalies before being forwarded to navigation.
- Inertial & sensor data: 6-axis IMU, barometer, optical flow and camera feeds fused to bridge periods of degraded GNSS geometry.
- Mission context: Route plans, geofences, velocity limits and per-mission confidence thresholds set by the operator or autonomy stack.
Outputs
- Confidence score (0–100): A single, continuously updated positioning quality score consumed by robots, drones and Logik flows to gate mission-critical decisions (a minimal consumer sketch follows this list).
- Motion commands: Real-time velocity, heading and mission-state commands delivered to robot and drone platforms via MAVLINK and ROS topics.
- Signal health telemetry: Per-satellite quality metrics, fusion weights and degradation events streamed over MQTT and displayed on Diginto.tech dashboards.
- Abort & safe-state triggers: Hardened signals that activate hold-position, return-home or emergency-land independently of the operator or mission planner.
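The sketch below shows how a standard MQTT client could consume the confidence score and gate a mission step on it. The topic name, payload format, broker address and threshold are assumptions for illustration, not the published WUKS AI interface.

```python
# Assumes paho-mqtt >= 2.0 (pip install paho-mqtt).
import json
import paho.mqtt.client as mqtt

CONFIDENCE_TOPIC = "wuks/position/confidence"    # hypothetical topic name
MIN_CONFIDENCE = 70                              # per-mission threshold (0-100)


def on_message(client, userdata, msg):
    # Payload format is an assumption: {"score": <0-100>}.
    confidence = json.loads(msg.payload)["score"]
    if confidence >= MIN_CONFIDENCE:
        print(f"confidence {confidence}: proceed with next waypoint")
    else:
        print(f"confidence {confidence}: hold position and wait for recovery")


client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.on_message = on_message
client.connect("broker.local", 1883)             # hypothetical broker address
client.subscribe(CONFIDENCE_TOPIC)
client.loop_forever()
```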
Four intelligence layers
WUKS AI is not a single model — it is a layered architecture where each stage refines raw data into a more trusted signal before the next layer acts on it.
Signal Intelligence
Every satellite observation is evaluated before a position fix is computed. LOS/NLOS classification, multipath detection and RF interference monitoring produce a per-satellite quality score that every downstream layer inherits.
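As an illustration of how such a score could be composed, the Python sketch below folds C/N0, elevation and an LOS/NLOS probability into a single weight. The formula and coefficients are assumptions; the actual WUKS AI scoring model is learned from field data.

```python
from dataclasses import dataclass


@dataclass
class SatelliteObservation:
    prn: str                  # satellite identifier, e.g. "G05"
    cn0_dbhz: float           # carrier-to-noise density
    elevation_deg: float
    nlos_probability: float   # output of an LOS/NLOS classifier, 0..1


def quality_score(obs: SatelliteObservation) -> float:
    """Illustrative per-satellite quality score in [0, 1].

    Shows how C/N0, elevation and NLOS probability could be folded into
    a single weight inherited by the downstream layers.
    """
    cn0_term = min(max((obs.cn0_dbhz - 25.0) / 20.0, 0.0), 1.0)     # 25-45 dB-Hz mapped to 0..1
    elevation_term = min(max(obs.elevation_deg / 60.0, 0.0), 1.0)   # low satellites penalised
    los_term = 1.0 - obs.nlos_probability
    return 0.4 * cn0_term + 0.2 * elevation_term + 0.4 * los_term
```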
Sensor Fusion
When signal quality degrades, Layer 2 blends GNSS with 6-axis IMU, optical flow and barometric data through an extended Kalman filter — bridging GNSS outages of up to 30 seconds on validated PIXAM field platforms.
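The sketch below shows the bridging principle on a deliberately simplified 1-D Kalman filter: IMU acceleration drives the prediction step every cycle, and GNSS position updates are applied only when a fix is available. All matrices and noise values are illustrative assumptions, not the production extended Kalman filter.

```python
import numpy as np

dt = 0.1                                   # 10 Hz fusion cycle
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])        # control input: IMU acceleration
H = np.array([[1.0, 0.0]])                 # GNSS measures position only
Q = np.eye(2) * 1e-3                       # process noise (tuning assumption)
R = np.array([[0.05]])                     # GNSS measurement noise (tuning assumption)

x = np.zeros((2, 1))                       # state estimate
P = np.eye(2)                              # state covariance


def predict(accel: float):
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q


def update_gnss(position: float):
    global x, P
    y = np.array([[position]]) - H @ x     # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P


# During a GNSS outage only predict() runs; the covariance P grows,
# which is what lowers the confidence score downstream.
```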
Autonomy Control
Consumes the fused position and confidence score to drive real-time decisions. Velocity commands scale to live confidence. Path replanning reroutes around degradation zones. Hardened abort logic operates independently of the mission planner.
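A minimal sketch of confidence-gated motion control follows, assuming illustrative thresholds and a linear scaling curve rather than the tuned per-platform policy.

```python
NOMINAL_SPEED_MPS = 1.5      # mission speed at full confidence (assumption)
HOLD_THRESHOLD = 40          # below this, hold position (assumption)
ABORT_THRESHOLD = 20         # below this, trigger the hardened safe-state (assumption)


def velocity_command(confidence: int) -> float:
    """Scale commanded speed with the live confidence score (0-100)."""
    if confidence <= HOLD_THRESHOLD:
        return 0.0
    return NOMINAL_SPEED_MPS * (confidence - HOLD_THRESHOLD) / (100 - HOLD_THRESHOLD)


def safe_state_required(confidence: int) -> bool:
    """Abort logic evaluated independently of the mission planner."""
    return confidence <= ABORT_THRESHOLD
```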
Logic & Integration
Exports intelligence to Logik visual flows, Diginto.tech dashboards and third-party systems via MAVLINK v2, ROS2 and MQTT. No proprietary SDK required — any standard client can consume WUKS AI outputs directly.
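For example, a plain ROS2 node written against rclpy can subscribe to a WUKS AI output with no vendor code; the topic name and message type below are assumptions for illustration.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32


class ConfidenceListener(Node):
    """Standard ROS2 node consuming the WUKS AI confidence output."""

    def __init__(self):
        super().__init__('wuks_confidence_listener')
        # Topic name and message type are assumptions for illustration.
        self.create_subscription(Float32, '/wuks/position/confidence',
                                 self.on_confidence, 10)

    def on_confidence(self, msg: Float32):
        self.get_logger().info(f'positioning confidence: {msg.data:.0f}')


def main():
    rclpy.init()
    rclpy.spin(ConfidenceListener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```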
Not synthetic. Not simulated.
Every WUKS AI model is trained on structured logging runs captured during actual PIXAM operations — outdoor robots, multi-constellation base stations and aerial missions, in working field conditions.
PIXAM field datasets
Structured runs capturing raw GNSS observables, RTCM streams, IMU readings and environmental metadata — collected across multiple platforms, terrains and weather conditions.
Engineering-quality labels
Ground-truth labels for LOS/NLOS, multipath severity, fix type and confidence are generated using post-processed RTK references and manually reviewed by PIXAM engineers.
Continuous field validation
New deployment data from live PIXAM systems feeds back into the training pipeline. Models are updated against real-world distribution shift — not frozen at initial release.
Runs on every platform we build
GNSS systems
Monitors and interprets every RTK pipeline — from base station health checks to rover fix confidence scoring in real time. Flags correction stream anomalies before they reach navigation.
GNSS robots
Powers the autonomy layer on WUKI, KNOX and Crawler platforms. Real-time path correction, obstacle response and mission abort logic run continuously from the WUKS AI engine.
Aerial drones
In-flight GNSS quality assessment and signal-adaptive waypoint control — keeping missions on track when satellite geometry degrades or interference is detected mid-flight.