Secure air-gapped ML for aerospace.

MLNavigator builds adapterOS: a deterministic multi-LoRA runtime for regulated, offline deployments.
Each run produces cryptographic receipts teams can verify during audit and incident review.
Built for constrained environments: deterministic policy, signed artifacts, and offline-first operation.

Compliance roadmap • Receipt evidence model • Air-gap compatible • Audit-ready artifacts

→ See our Compliance Roadmap for how we meet CMMC/AS9100 requirements.

→ See MLNavigator and adapterOS for a clean mental model of how the two relate.

Validation signals include NSF I-Corps participation, a $25k grant, and 50+ customer discovery conversations.

Design goals

Air-gap compatible

Designed to run without outbound network calls, license checks, or telemetry.

Traceable results

Receipts, manifests, and signed artifacts link outputs to inputs and configuration so audits don’t depend on screenshots or “trust us.”

Compliance-ready artifacts

Built for audit surfaces shaped by CMMC 2.0 Level 2 (a common requirement for DoD suppliers), AS9100 (aerospace quality), ITAR (export-controlled technical data), and FAA documentation workflows.

Compliance Roadmap

What the workflow looks like

Non-confidential schematic

A typical run looks like: upload a drawing or document package, run offline checks, review flagged issues, then export an audit-ready proof pack (receipts, configuration, and hashes).

Workflow diagram: upload drawing package, run offline checks, review findings, export proof pack

Leadership

Company →

Product and Commercial

Leadership function

Owns deployment scoping, buyer workflow fit, and operational rollout for regulated programs.

Engineering and Architecture

Leadership function

Owns deterministic runtime design, receipt integrity, and hardware-aware execution constraints.

Market and vision

Regulated operators face audit exposure when AI execution cannot be reproduced or explained. MLNavigator focuses on runtime infrastructure that makes execution traceable, repeatable, and reviewable.

Long-term

An AI platform for regulated industries where cloud AI cannot go: local-first, verifiable, and designed for high-assurance environments.

Verifiable is not truth

We do not promise the model is right. We aim to show what ran, with what configuration, against what input. That is what you can verify in an audit.

Provenance

Artifacts should trace back to their origin. Model weights, adapters, and runtime are identified where possible.

Manifests

Structured declarations of what should run. Machine-readable. Diffable.
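
A minimal sketch of what such a manifest could look like, canonically serialized so it hashes and diffs consistently. The field names and placeholder values are illustrative, not adapterOS's actual schema.

```python
import hashlib
import json

# Illustrative manifest: declares which model, adapters, and runtime a run
# is expected to use. All names and digests here are hypothetical.
manifest = {
    "model": {"name": "base-model", "sha256": "3f5a..."},
    "adapters": [
        {"name": "inspection-lora", "sha256": "9bc1..."},
    ],
    "runtime": {"name": "adapterOS", "version": "0.0.0"},
    "policy": {"offline_only": True, "deterministic": True},
}

# Canonical serialization (sorted keys, no extra whitespace) so the same
# declaration always hashes to the same digest and diffs cleanly.
canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
print(hashlib.sha256(canonical.encode()).hexdigest())
```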

Signed Configs

Configurations can be signed so tampering is detectable.
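
One minimal way to make tampering detectable, sketched with a shared-key HMAC from the Python standard library; adapterOS's actual signing scheme is not specified here and may use asymmetric signatures instead.

```python
import hashlib
import hmac

def sign_config(config_bytes: bytes, key: bytes) -> str:
    """Return a hex HMAC-SHA256 tag over the exact config bytes."""
    return hmac.new(key, config_bytes, hashlib.sha256).hexdigest()

def verify_config(config_bytes: bytes, key: bytes, tag: str) -> bool:
    """Constant-time comparison; any edit to the config changes the tag."""
    return hmac.compare_digest(sign_config(config_bytes, key), tag)

key = b"demo-key-not-for-production"
config = b'{"temperature": 0.0, "seed": 42}'
tag = sign_config(config, key)
assert verify_config(config, key, tag)
assert not verify_config(config.replace(b"42", b"43"), key, tag)
```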

Hash-Chained Logs

Each log entry can reference the hash of the previous one, so deletion or modification becomes detectable.
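
A small sketch of the idea: each appended entry hashes over its content plus the previous entry's hash, and verification recomputes the chain end to end. The log layout is illustrative, not the adapterOS log format.

```python
import hashlib
import json

GENESIS = "0" * 64

def append_entry(log: list[dict], event: dict) -> None:
    """Append an entry whose hash covers the event and the previous hash."""
    prev = log[-1]["hash"] if log else GENESIS
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    log.append({"event": event, "prev": prev,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(log: list[dict]) -> bool:
    """Recompute every link; a deleted or edited entry breaks the chain."""
    prev = GENESIS
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_entry(log, {"step": "load_manifest"})
append_entry(log, {"step": "run_inference", "tokens": 128})
assert verify_chain(log)
log[0]["event"]["step"] = "tampered"   # any edit is now detectable
assert not verify_chain(log)
```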

Energy is a constraint

In transfer-heavy workloads, data movement dominates energy cost. Unified memory architectures can reduce this cost by eliminating copies between CPU and GPU memory. We measure the result in joules per token.
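
The metric itself is simple arithmetic: total energy (average power times wall-clock time) divided by the number of generated tokens. A worked example with illustrative numbers:

```python
def joules_per_token(avg_power_watts: float, duration_s: float, tokens: int) -> float:
    """Energy per generated token: watts x seconds = joules, divided by tokens."""
    return (avg_power_watts * duration_s) / tokens

# Example: 18 W average package power over a 12.5 s run that generated
# 256 tokens -> 225 J total, roughly 0.88 J/token.
print(joules_per_token(18.0, 12.5, 256))
```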

Methodology

We document a measurement methodology for joules-per-token benchmarking on Apple silicon.

macOS powermetrics sampling • 10-run averaging • thermal normalization • documented tolerances
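
A sketch of how the run-averaging step could look once per-run energy and token counts are in hand. Obtaining those numbers (for example, from powermetrics samples) and thermal normalization are outside this sketch, and the 10% tolerance is an assumed placeholder rather than our documented value.

```python
from statistics import mean, stdev

def summarize_runs(runs: list[tuple[float, int]], tolerance: float = 0.10) -> dict:
    """Average joules/token over repeated runs and flag excessive spread.

    runs: (energy_joules, generated_tokens) per run.
    tolerance: maximum accepted relative standard deviation (assumed 10%).
    """
    per_run = [energy / tokens for energy, tokens in runs]
    avg = mean(per_run)
    rel_spread = stdev(per_run) / avg if len(per_run) > 1 else 0.0
    return {"joules_per_token": avg,
            "relative_spread": rel_spread,
            "within_tolerance": rel_spread <= tolerance}

# Ten hypothetical runs of the same 256-token workload.
runs = [(225.0, 256), (231.0, 256), (219.0, 256), (228.0, 256), (224.0, 256),
        (230.0, 256), (222.0, 256), (227.0, 256), (226.0, 256), (223.0, 256)]
print(summarize_runs(runs))
```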

Recent Research Notes

View all →
Feb 2026

MLNavigator and adapterOS

MLNavigator is the company. adapterOS is the offline inference runtime. Here is what each does, what we can show, and what is still in progress.

Feb 2026

Verification Scope

What adapterOS verification covers, what it does not cover, and where human oversight applies.

Feb 2026

When to Reuse the KV Cache Safely with Adapters

KV-cache reuse is one of the largest inference speedups available, but adapters change the projection weights that produced the cached keys and values. A per-layer state hash turns reuse from a gamble into a verifiable policy.
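
A sketch of the policy the note describes: fingerprint the weight state each layer's cache was produced under, and reuse the cache only when the current state matches. Function names, layer names, and adapter identifiers are illustrative, not the adapterOS API.

```python
import hashlib

def layer_state_hash(layer_name: str, base_version: str, adapter_ids: tuple[str, ...]) -> str:
    """Fingerprint of the weight state that produced a layer's cached K/V entries."""
    material = "|".join((layer_name, base_version) + tuple(sorted(adapter_ids)))
    return hashlib.sha256(material.encode()).hexdigest()

def can_reuse_cache(cached_hashes: dict[str, str], current_hashes: dict[str, str]) -> bool:
    """Reuse only if every layer's current state matches the state at cache time."""
    return cached_hashes == current_hashes

layers = ["layer.0.attn", "layer.1.attn"]
cached = {l: layer_state_hash(l, "base-v1", ("lora-A",)) for l in layers}

# Same base model and adapter stack -> safe to reuse.
assert can_reuse_cache(cached, {l: layer_state_hash(l, "base-v1", ("lora-A",)) for l in layers})

# A different adapter on one layer -> the cache must be recomputed.
swapped = dict(cached)
swapped["layer.1.attn"] = layer_state_hash("layer.1.attn", "base-v1", ("lora-B",))
assert not can_reuse_cache(cached, swapped)
```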

Stay informed

Get notified when we publish new research or open access to our tools.

No spam, ever. We only email when we have something worth sharing.