Ambiq Launches AI Compression for Always-On Devices

01 May 2026 | NEWS

compressionKIT reduces data, memory, and power needs for wearables and edge sensors

Ambiq Micro, Inc. (“Ambiq®”), a technology leader in ultra-low-power semiconductor solutions for edge AI, announced compressionKIT™, a next-generation AI-based codec, now in beta, proven to substantially reduce the power and memory costs of handling continuous sensor data in wearable and edge devices.

As always-on devices—from medical wearables to smart home and industrial sensors—generate continuous data streams, storing and transmitting that data has become a significant drain on memory, battery life, and system costs. compressionKIT addresses this at the source by compressing data while preserving the key information needed for AI—allowing devices to do more with less.

compressionKIT enhances Ambiq’s edge AI portfolio by solving a key bottleneck: effectively representing sensor data before it is stored, transmitted, or analysed.

Key Benefits

  • Up to 20x data compression¹
    Shrinks continuous sensor streams while retaining the features needed for AI and analytics
  • Up to 16x lower on-device memory usage²
    Enables longer data retention and reduces storage requirements
  • Reduced transmission power
    Fewer bits sent over the air, which translates to improved battery life
  • Multiple inference deployment options
    Supports inference on-device, in the cloud, or across hybrid edge-cloud pipelines using either compressed or reconstructed data
  • Configurable compression targets
    Enable developers to optimise trade-offs between data rate, quality, and system constraints

Executive Quote

“For always-on devices, managing sensor data efficiently is just as important as running inference efficiently,” said Dr. Adam Page, Head of AI at Ambiq. “compressionKIT gives developers a practical way to reduce storage and transmission demands while preserving the signal information needed for meaningful AI insights.”

For developers, compressionKIT offers configurable compression targets (2x–20x) and a visual tuning interface to optimise the balance between data rate and signal quality. The platform supports both hybrid DSP + ML approaches for efficient deployment and AI-first neural compression for maximum data reduction.
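compressionKIT's API is not public, so as an illustration only, the rate-versus-quality trade-off that configurable compression targets expose can be sketched with generic techniques: quantise a sensor stream, delta-encode it, and deflate the result. A coarser quantisation step yields a higher compression ratio at the cost of larger reconstruction error. Every name below (`compress_stream`, `reconstruct`, the synthetic trace) is hypothetical and unrelated to the product.

```python
# Toy sketch of a rate/quality trade-off on a synthetic sensor trace.
# Generic quantisation + delta coding + zlib; NOT the compressionKIT codec.
import math
import struct
import zlib

def compress_stream(samples, step):
    """Quantise to a step size, delta-encode, then deflate."""
    quantised = [round(s / step) for s in samples]
    deltas = [quantised[0]] + [b - a for a, b in zip(quantised, quantised[1:])]
    raw = b"".join(struct.pack("<i", d) for d in deltas)
    return zlib.compress(raw), quantised

def reconstruct(payload, step):
    """Invert compress_stream: inflate, undo deltas, rescale."""
    raw = zlib.decompress(payload)
    deltas = struct.unpack(f"<{len(raw) // 4}i", raw)
    out, total = [], 0
    for d in deltas:
        total += d
        out.append(total * step)
    return out

# Synthetic slow-wave signal with a small high-frequency component,
# standing in for a continuous wearable sensor stream.
samples = [math.sin(2 * math.pi * i / 50) + 0.05 * math.sin(i) for i in range(1000)]
original_bytes = 4 * len(samples)  # assume 32-bit samples on device

for step in (0.001, 0.01, 0.1):
    payload, _ = compress_stream(samples, step)
    recon = reconstruct(payload, step)
    err = max(abs(a - b) for a, b in zip(samples, recon))
    print(f"step={step}: {original_bytes / len(payload):.1f}x, max error {err:.4f}")
```

Running the sketch shows the ratio climbing as the step coarsens while the maximum error stays bounded by half the step size; a visual tuning interface like the one described above would let a developer pick the operating point along that curve.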

compressionKIT is currently in beta testing, with rolling improvements to be released in the coming quarters.