Audience's First Multisensory Processor -- the N100 -- to Feature Always-On, Low-Power VoiceQ and MotionQ Technology
BARCELONA, SPAIN--(Marketwired) - Audience, Inc. (NASDAQ: ADNC), the leader in Advanced Voice and a pioneer in Multisensory processing and Natural User Experience (NUE) technology for consumer devices, today announced the first member of the NUE family of Multisensory processors -- the N100. NUE products are targeted at smartphones, tablets, wearables, and IoT devices, with the first N100 devices expected to be available for sampling in mid-2015. NUE Multisensory technology is designed to derive intelligence about you and your environment from the exploding quantity of sensor data available on modern devices -- enabling true contextual awareness -- so your device can provide insight and awareness in your daily life.
"We envision a world where people naturally interact with their devices, and in an enhanced way with the world around them, based on the insight and awareness that multisensory processing will provide. This vision is driving Audience's development of NUE technology, delivered in the form of Multisensory processors and software," said Peter Santos, President and CEO, Audience. "We designed NUE Multisensory technology to derive intelligence and context from sensor and microphone data about you and your environment, to deliver enhanced context that is useful and always-on, while preserving battery life."
About the NUE N100 Multisensory Processor
The N100 will be the first Audience product to enable a full-fledged Multisensory experience. It will incorporate Audience VoiceQ and MotionQ technology to enable applications with context awareness. Achieving context awareness involves layering sophisticated algorithms on top of the N100's sensory intelligence -- interpreting sensor and microphone data to deduce higher-level information. The N100 will be able to interpret context such as "the user is running" from a series of characteristic voice and motion features collected by the sensors.
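Audience has not published how its context algorithms work, but the idea of deducing a high-level context like "running" from lower-level sensor features can be sketched in a toy, rule-based form. Everything here -- the feature names, the thresholds, and the rules themselves -- is hypothetical and purely illustrative:

```python
def deduce_context(step_rate_hz, accel_variance, speech_detected):
    """Toy context deduction from pre-extracted sensor features.

    All thresholds are invented for illustration; a real multisensory
    processor would use trained models over many more features.
    """
    if step_rate_hz > 2.3 and accel_variance > 1.5:
        return "running"       # fast, energetic periodic motion
    if step_rate_hz > 0.5:
        return "walking"       # slower periodic motion
    if speech_detected:
        return "in conversation"
    return "stationary"
```

The point of the sketch is only the layering the release describes: raw sensor and microphone data are first reduced to characteristic features (step rate, motion variance, voice activity), and context is deduced on top of those features rather than from raw samples.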
VoiceQ is Audience's hands-free voice recognition technology that prompts the device to take action when a secure keyword is spoken. The N100 VoiceQ implementation works in three stages. Using a low-power, Always-On voice activity detector, the N100 continuously listens for voice signals while staying in an ultra-low-power mode. Upon voice detection, the incoming signal is compared to parameter-defined key phrases, or triggers. During these initial stages, only the N100 and the digital microphone are awake; all other components in the device remain in low-power sleep mode. When a key phrase is detected, the N100 wakes up the device, indicating the user's intent to talk with it via a voice user interface. VoiceQ on the N100 is designed to support up to five keywords, which can be a combination of user-selected and OEM-chosen keywords.
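The three-stage flow described above -- listen for voice activity, match against trigger phrases, then wake the host -- amounts to a small state machine. The sketch below models it with invented class names, and simulates phrase matching with string comparison; the real system compares acoustic features, not text:

```python
from enum import Enum, auto

class Stage(Enum):
    LISTENING = auto()   # ultra-low-power voice activity detection
    MATCHING = auto()    # compare incoming signal to key phrases
    AWAKE = auto()       # host device woken for the voice UI

class WakePipeline:
    """Toy model of a staged keyword-wake pipeline (names hypothetical)."""

    def __init__(self, key_phrases):
        # The N100 is designed to support up to five keywords.
        assert len(key_phrases) <= 5
        self.key_phrases = set(key_phrases)
        self.stage = Stage.LISTENING

    def on_audio(self, utterance):
        if self.stage is Stage.LISTENING and utterance is not None:
            self.stage = Stage.MATCHING      # voice activity detected
        if self.stage is Stage.MATCHING:
            if utterance in self.key_phrases:
                self.stage = Stage.AWAKE     # wake the rest of the device
            else:
                self.stage = Stage.LISTENING # not a trigger; back to sleep
        return self.stage
```

Note how non-trigger speech drops the pipeline straight back to the listening stage -- only a matched key phrase ever wakes anything beyond the detector and the microphone, which is what keeps the scheme low-power.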
VoiceQ technology on the N100 is designed to preserve power by dramatically lowering false acceptance rates compared to previous voice trigger implementations. Each false acceptance consumes roughly the battery equivalent of two hours of keyword listening, or an additional 20 minutes of unintended phone use. The N100 is also designed to incorporate a power-optimized embedded implementation of Google hotword detection, allowing the N100 to simultaneously detect both VoiceQ and Google keywords in less than 17 MIPS.
MotionQ Technology
The N100 will feature the first hardware implementation of the MotionQ 1.0 library, which Audience gained through its acquisition of Sensor Platforms. The NUE MotionQ software is designed to combine advanced algorithms, a power-conscious architecture and outstanding context awareness. In Always-On mode, the N100 is designed to continuously monitor the sensors it is connected to, recalibrating them as needed and deducing context from an intelligent mix of sensor inputs. The N100 works with sensors from a wide range of leading suppliers, supporting any sensor whose driver supports the Open Sensor Platform (OSP).
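The release does not say how MotionQ recalibrates sensors. As one illustrative approach (not Audience's published method), a sensor hub can re-estimate a gyroscope's zero-rate bias whenever the device is detected to be at rest, using a simple exponential moving average; the smoothing factor and the stationarity flag are assumed inputs here:

```python
def update_gyro_bias(bias, sample, stationary, alpha=0.05):
    """Illustrative auto-calibration step: refine the gyroscope's
    zero-rate offset only while the device is known to be at rest.

    `bias` and `sample` are (x, y, z) tuples in rad/s; `alpha` is an
    invented smoothing factor. Real recalibration is more elaborate.
    """
    if not stationary:
        return bias  # never adapt while the device is moving
    return tuple(b + alpha * (s - b) for b, s in zip(bias, sample))
```

Run repeatedly during idle periods, the estimate converges toward the sensor's true offset, which downstream fusion can then subtract from every reading.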
The N100 MotionQ implementation is partitioned to execute low-power, critical tasks such as sensor fusion and context detection on the N100, and memory-intensive context classification on the Application Processor (AP). The N100 implements the OSP Host API, allowing it to communicate with the AP using the OSP protocol. The N100 MotionQ implementation is optimized for the Audience instruction set in the N100 with the aim of delivering best-in-class sensor hub processing, sensor fusion and auto-calibration in less than 2 MIPS.
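The partitioning described above can be sketched as two cooperating components: a hub that runs cheap, continuous fusion and context detection, and an application processor that is invoked only when a context event is detected. The class and method names, and the stand-in arithmetic, are hypothetical; this mirrors only the division of labor, not the OSP protocol itself:

```python
class AppProcessor:
    """Stand-in for the AP, where memory-intensive classification runs."""

    def __init__(self):
        self.events = []

    def classify(self, fused_value):
        # A real AP would run a heavyweight context classifier here.
        self.events.append("motion-event")

class SensorHub:
    """Stand-in for the always-on hub (the N100's role in this split)."""

    def __init__(self, ap, threshold=1.0):
        self.ap = ap
        self.threshold = threshold  # invented detection threshold

    def process(self, samples):
        fused = sum(samples) / len(samples)   # stand-in for sensor fusion
        if abs(fused) > self.threshold:       # stand-in for context detection
            self.ap.classify(fused)           # wake the AP only on events
```

The design point is that the hub touches every sample but forwards only rare, meaningful events, so the power-hungry AP can sleep through the vast majority of sensor traffic.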
For more information on Audience® processors and smart codecs, please go to www.audience.com/nue.
Audience, the leader in advanced voice, is pioneering multisensory processing and natural user experiences for consumer devices. Its technologies, based on auditory neuroscience, improve the mobile voice experience and enhance speech-based services as well as audio quality for multimedia. In early 2014, the company announced its expansion into multisensory and motion processing. Through the combination of Advanced Voice and multisensory processing, Audience aims to transform the way consumers engage with devices by enabling seamless natural user experiences and context-aware services. The Company's products have been shipped in more than 500 million devices worldwide. For more information, see http://www.audience.com.
Cautionary Note Concerning Forward-Looking Statements
Statements in the press release regarding Audience, Inc., which are not historical facts, are "forward-looking statements" within the meaning of the Private Securities Litigation Reform Act of 1995. These forward-looking statements may be identified by terms such as believe, will, expect, planned, may, look forward and could and the negative of these terms or other similar expressions. These statements, including but not limited to the capabilities, features and performance of our current and future products, are based on current expectations and assumptions that are subject to risks and uncertainties. Our actual results could differ materially from those we anticipate as a result of various factors, including: our ability to successfully design and develop new products; market demand for and acceptance of our products; our ability to integrate with our original equipment manufacturers' product designs; and other risks inherent in fabless semiconductor businesses. For a discussion of these and other related risks, please refer to "Risk Factors" in our most recent Form 10-Q, which is available on the SEC's website at www.sec.gov. Forward-looking statements represent our management's beliefs and assumptions only as of the date made. We assume no obligation to update these forward-looking statements.