Selective Data Capture with Edge Computers
The flight-proven edge computers are:
- Tartan Edge – PolarFire SoC + NVIDIA Jetson Xavier NX
- Polar Edge – PolarFire SoC + NVIDIA Jetson AGX Xavier
- Typhoon Edge – PolarFire SoC + NVIDIA Jetson AGX Orin (Industrial)
For a processing requirement in the 60–70 TOPS range, the Typhoon Edge (PolarFire SoC + Jetson AGX Orin) is the most suitable platform: it provides ample performance margin while preserving software continuity on the NVIDIA stack.
NVIDIA Jetson processors within these systems can be selectively powered down or placed offline when not required, allowing meaningful reductions in average power consumption while maintaining availability for peak processing phases.
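As a rough illustration of what duty-cycling the Jetson can save, here is a minimal average-power sketch. The wattage figures and duty fraction are hypothetical placeholders, not measured or datasheet numbers for any of these modules:

```python
# Sketch: average-power estimate when the Jetson is powered down
# between processing phases. All figures below are illustrative
# assumptions, not datasheet values.

def average_power_w(p_active_w: float, p_standby_w: float, duty: float) -> float:
    """Average power when the Jetson is active for `duty` fraction of the time."""
    return duty * p_active_w + (1.0 - duty) * p_standby_w

# Hypothetical example: 60 W active, 5 W powered down, active 20% of the orbit.
avg = average_power_w(60.0, 5.0, 0.20)
print(f"{avg:.1f} W average")  # 0.2*60 + 0.8*5 = 16.0 W
```

Even a modest duty cycle cuts the orbit-average power severely, which is what makes peak-capable processors viable on power-constrained platforms.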
Here is what a system combining a Jetson module with a PolarFire SoC FPGA can deliver in TOPS (Tera Operations Per Second):
Core AI/ML Engine: NVIDIA Jetson AGX Orin 64GB Developer Kit / Jetson AGX Orin 32GB H01 Kit
- The Jetson AGX Orin (whether 32 GB or 64 GB variant) delivers up to ~275 TOPS of AI performance (INT8) for AI inference workloads when configured for maximum performance.
- The industrial version (if used) typically rates slightly lower (e.g., ~248 TOPS) due to ECC and extended operating conditions.
✔ This is the primary contributor to high-throughput AI/ML inference in your design.
TIP: “TOPS” here refers to INT8 operations, which is the common way to measure deep-learning inference throughput on Jetson platforms. Actual performance can vary with precision (e.g., FP16, FP32, INT8 sparse) and workload characteristics.
PolarFire SoC FPGA Contribution
Microchip’s PolarFire SoC FPGA brings programmable fabric, RISC-V cores, logic cells, DSP blocks, and high-speed I/O to the edge compute platform. However, manufacturers do not normally quote TOPS for FPGA fabric alone, because:
- FPGA “TOPS” depends on custom hardware designs, clocking, and the specific matrix multiply / NN acceleration implementation.
- PolarFire SoC is optimized for deterministic control, real-time tasks, and space-grade operation rather than raw matrix-multiply counts like a GPU.
Instead of a single “TOPS” figure for the FPGA, it’s better to think of the FPGA’s impact as:
- Custom accelerators (e.g., CNN, video preprocessors, data routing) can significantly augment overall system throughput.
- Offloading specific kernels to FPGA logic can increase effective usable throughput beyond what the GPU alone does — but the exact TOPS depends on your RTL implementation.
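A common back-of-envelope estimate treats each DSP/math block as one MAC (2 ops per cycle) and derates by achievable utilization. The block count, clock, and utilization below are illustrative assumptions, not PolarFire SoC datasheet figures:

```python
# Rough FPGA throughput estimate: one MAC performs 2 ops (multiply + add)
# per cycle, so peak ops/s = MACs * 2 * f_clk, derated by utilization.
# All inputs here are hypothetical, for illustration only.

def fpga_tops_estimate(n_macs: int, f_clk_hz: float, utilization: float = 0.7) -> float:
    return n_macs * 2 * f_clk_hz * utilization / 1e12

# Hypothetical example: 750 math blocks at 400 MHz, 70% utilized.
print(round(fpga_tops_estimate(750, 400e6), 3), "TOPS")  # ≈ 0.42 TOPS
```

The point of the sketch: FPGA fabric contributes a small fraction of the GPU’s raw TOPS, but it applies those operations deterministically and exactly where the pipeline needs them.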
In GEO, Typhoon Edge is best positioned as a high-performance payload data processor / AI co-processor, not as the sole flight computer.
Typical roles:
- On-board AI inference (Earth observation, RF monitoring, space situational awareness)
- Real-time payload data reduction & compression
- Autonomous event detection (anomalies, jamming, interference, weather)
- Adaptive payload control
- Cross-link / downlink optimization
It complements a radiation-hardened OBC (command & control, AOCS, safe mode) and acts as the mission computer / payload computer:
PolarFire SoC FPGA: supervisor + watchdog + I/O
Jetson AGX Orin: AI / payload compute
Selective capture is one of the strongest advantages of Typhoon Edge when used with a camera.
It can avoid saving and downlinking useless data by design, using a combination of FPGA-level filtering and AI-based decision making on the Jetson.
Here is how it works in practice.
Two-stage selective capture architecture
Stage 1 – Real-time hardware filtering (PolarFire SoC FPGA)
The FPGA sits directly on the camera interface and can:
- Drop frames based on:
- Time windows
- Geographic region (lat/long from GNSS)
- Lighting conditions
- Motion detection
- Simple thresholds (IR intensity, contrast, etc.)
- Perform:
- Frame differencing
- Cloud pre-screening
- Saturation checks
- Noise detection
This is deterministic, low power, radiation-safe, and happens before data reaches the GPU.
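A minimal software model of this filtering stage is sketched below. It assumes 8-bit grayscale frames as flat Python lists and placeholder thresholds; the real logic would be implemented as RTL on the FPGA, not in Python:

```python
# Software sketch of Stage 1 frame filtering. In flight this runs as
# FPGA logic on the camera interface; thresholds here are assumptions.

def frame_passes(frame, prev_frame,
                 motion_threshold=10.0,   # min mean abs pixel delta
                 saturation_limit=0.5):   # max fraction of pixels at 255
    """Return True if the frame should be forwarded to the Jetson."""
    n = len(frame)
    # Saturation check: reject frames dominated by blown-out pixels.
    saturated = sum(1 for p in frame if p >= 255) / n
    if saturated > saturation_limit:
        return False
    # Frame differencing: reject static scenes with no motion.
    mean_delta = sum(abs(a - b) for a, b in zip(frame, prev_frame)) / n
    return mean_delta >= motion_threshold

prev = [100] * 64
static = [101] * 64               # barely changed -> dropped
moving = [100] * 32 + [180] * 32  # large change   -> forwarded
print(frame_passes(static, prev), frame_passes(moving, prev))  # False True
```

Because each test is a fixed arithmetic operation per pixel, the whole stage maps naturally onto pipelined FPGA logic at line rate.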
Stage 2 – AI-based intelligent filtering (Jetson AGX Orin)
For frames that pass stage 1:
The Jetson runs AI models to decide:
- Is there an object of interest?
- Is it a known class? (ship, fire, satellite, aircraft, storm cell, etc.)
- Is confidence above threshold?
- Is this a new event or duplicate?
- Is temporal persistence confirmed?
Only then:
- Save full-resolution image/video
- Or save cropped region
- Or save metadata only
- Or discard completely
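The decision chain above can be sketched as a small policy function. The field names, threshold, and action labels are assumptions for illustration, not a real API; the actual classifier and tracker run on the Jetson:

```python
# Sketch of Stage 2 decision logic on the Jetson. Field names and the
# confidence threshold are illustrative assumptions, not a real API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    cls: Optional[str]   # e.g. "ship", "fire"; None if nothing found
    confidence: float    # 0..1
    is_duplicate: bool   # same object already seen recently?
    persistent: bool     # confirmed across consecutive frames?
    critical: bool       # matches a priority event class?

def capture_action(d: Detection, conf_threshold: float = 0.6) -> str:
    if d.cls is None or d.confidence < conf_threshold:
        return "discard"
    if d.is_duplicate:
        return "metadata_only"
    if not d.persistent:
        return "discard"  # wait for temporal confirmation
    if d.critical:
        return "full_frame_priority_downlink"
    return "crop_plus_metadata"

print(capture_action(Detection("ship", 0.9, False, True, False)))
```

Ordering matters: cheap rejections (no object, low confidence) come first, so the expensive storage and downlink paths are only reached for confirmed, novel events.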
What gets stored vs discarded
Typical policies:
| Result | Action |
| --- | --- |
| Empty ocean | Discard |
| Cloud-only | Discard |
| Low-confidence detection | Discard |
| Known repeated object | Save metadata only |
| New target | Save crop + metadata |
| Critical event | Save full frame + priority downlink |