With over 100 combined years of audio measurement experience, our team has created a wealth of technical papers, sequences, articles and other useful information to assist you with your audio test needs. Please browse the collection below, or filter by type of resource.
Author: Steve Temme. Reprinted from the Jan 2020 issue of AudioXpress.
This article discusses the tools and techniques available to accurately measure the audio performance of voice-controlled and connected devices under the many real-world conditions in which they may be used. It covers basic acoustic measurements such as frequency and distortion response, which have always been carried out on conventional wired systems, as well as the more complex real-world tests that apply specifically to voice-activated devices, along with the techniques and standards that may be used.
Voice-controlled smart devices such as smart speakers, hearables, and vehicle infotainment systems are notoriously complex to test. They have numerous connections, from wired to wireless, and contain extensive signal processing on both the record and the playback side. This means that their characteristics change according to the ‘real world’ conditions of the environment in which they are used, such as background noise, playback levels, and room acoustics. Furthermore, their multifunctional nature means that there are many aspects of the device that may need to be tested, ranging from voice recognition to music playback, operation as a hands-free telephone, and, in the case of hearables, hearing assistance. Due to their complex non-linear use cases, these devices often need to be tested at different levels and under different environmental conditions. This paper focuses on tools and techniques to accurately measure the audio performance of such devices under the many real-world conditions in which they are used.
Author: Steve Temme, Listen, Inc.
Presented at ISEAT 2019, Shenzhen, China.
Smart headphones or “hearables” are designed not only to play back music but also to enhance communication in the presence of background noise and, in some cases, even compensate for hearing loss. They may also provide voice recognition, medical monitoring, fitness tracking, real-time translation, and even augmented reality (AR). They contain complex signal processing, and their characteristics change according to their smartphone application and the ‘real world’ conditions of their actual environment, including background noise and playback levels. This paper focuses on how to measure their audio performance under the many real-world conditions in which they are used.
Author: Steve Temme, Listen, Inc.
Presented at AES Headphone Conference 2019, San Francisco, CA.
A tutorial and accompanying paper presented at the AES Automotive Conference, Sept 11-13, 2019, Neuburg an der Donau, Germany.
Voice-controlled and smartphone-integrated vehicle infotainment systems are notoriously complex to test. They have numerous connections, from wired to wireless, and contain extensive signal processing on both the record and the playback side. This means that their characteristics change according to the ‘real world’ conditions of the vehicle’s environment, including cabin acoustics and background noise from the road, wind, and motor. Furthermore, their multifunctional nature means that there are many aspects of the device that may need to be tested, ranging from voice recognition to music playback and operation as a hands-free telephone. Due to their complex non-linear use cases, these devices often need to be tested at different levels and under different environmental conditions.
This tutorial offers practical hands-on advice on how to test such devices, including test configurations, what to measure, the challenges of making open-loop measurements, and how to select a test system.
This sequence, inspired by AES papers on statistical models to predict listener preference by Sean E. Olive, Todd Welti, and Omid Khonsaripour of Harman International, applies the Harman target curve for in-ear, on-ear, and over-ear headphones to a measurement made in SoundCheck to yield the predicted user preference for the device under test. The measurements are made in SoundCheck and then saved to an Excel template, which performs the necessary calculations to produce a predicted preference score on a scale of 0 to 100. The spreadsheet calculates an error curve by subtracting the target curve from the average of the headphone’s left/right response; the standard deviation, slope, and average of the error curve are then used to compute the predicted preference score. The sequence also provides the option to recall data rather than making a measurement, which saves time for engineers who already have large quantities of saved data and enables historical comparison with obsolete products.
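The error-curve statistics described above can be sketched in a few lines. The regression coefficients below follow the form of the published Olive/Welti model but should be treated as illustrative; the authoritative values live in the AES papers and the Excel template:

```python
import numpy as np

def predicted_preference(freq_hz, left_db, right_db, target_db):
    """Sketch of the predicted-preference calculation described above.
    The linear-model coefficients are assumptions for illustration;
    the actual regression is defined in the Olive/Welti/Khonsaripour
    AES papers and implemented in the Excel template."""
    # Error curve: average of the L/R responses minus the target curve
    error = 0.5 * (left_db + right_db) - target_db
    # Statistics of the error curve over a log-frequency axis
    log_f = np.log10(freq_hz)
    sd = np.std(error, ddof=1)              # standard deviation of error (dB)
    slope = np.polyfit(log_f, error, 1)[0]  # slope of error (dB/decade)
    # Illustrative linear model: score falls with deviation and tilt
    score = 114.49 - 12.62 * sd - 15.52 * abs(slope)
    return float(np.clip(score, 0.0, 100.0))
```

A perfectly target-matching headphone (zero error curve) scores the maximum 100, while large deviations or a strong spectral tilt drive the score toward 0.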
This sequence characterizes a microphone’s ability to passively and/or actively reject noise in the user’s environment. Unlike traditional microphone SNR measurements which calculate a ratio based upon a reference signal and the microphone’s noise floor, this method utilizes a signal (speech played from a mouth simulator) and noise (background noise played from two or more equalized source speakers) captured by both a reference microphone and the DUT microphone.
First, a recording of the baseline ambient noise in the test environment is made, and a 1/3 octave RTA spectrum is calculated from the recording. Next, the speech signal (mouth simulator) and noise signals (left and right speakers) are played consecutively and recorded separately using the reference microphone, and a 1/3 octave RTA spectrum is calculated from each recorded time waveform. The same measurements are then repeated using the DUT microphone. The resulting RTA spectra are post-processed to produce a signal gain spectrum and a noise gain spectrum, which are then used to derive the SNR spectrum of the DUT microphone. For best accuracy, the signal and noise spectra should be at least 5 dB above the ambient noise floor of the measurement environment.
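The post-processing step above reduces to simple dB arithmetic on the four 1/3-octave spectra. A minimal sketch, with illustrative function names (not SoundCheck internals):

```python
import numpy as np

def dut_snr_spectrum(ref_signal_db, ref_noise_db, dut_signal_db, dut_noise_db):
    """Derive a per-band SNR spectrum for the DUT microphone from the
    four 1/3-octave RTA spectra described above (all values in dB).
    Illustrative sketch of the post-processing, not SoundCheck code."""
    signal_gain = dut_signal_db - ref_signal_db  # DUT gain on the speech path
    noise_gain = dut_noise_db - ref_noise_db     # DUT gain on the noise path
    # Positive SNR bands: the DUT passes speech better than it passes noise
    return signal_gain - noise_gain

def valid_bands(spectrum_db, ambient_db, margin_db=5.0):
    """Bands where a measured spectrum sits at least `margin_db` above
    the ambient noise floor, per the accuracy guideline above."""
    return spectrum_db - ambient_db >= margin_db
```

In practice the validity mask would be applied to both the signal and noise spectra before trusting the derived SNR in any band.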
This test sequence demonstrates SoundCheck’s Triggered Record – Chirp Trigger function for open loop testing of devices without analog inputs such as smart speakers, wearables, smart home devices, tablets and cellphones. A stimulus WAV file is created in SoundCheck and transferred to the device under test, where it is played back and the response recorded in SoundCheck as if the stimulus were played directly from SoundCheck. The Acquisition step is triggered by the chirp in the stimulus file. Chirp triggers are more robust than level and frequency triggers which are susceptible to false triggering due to background noise.
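A chirp trigger of this kind can be sketched as a matched filter: cross-correlate the recording against the known chirp and trigger at the correlation peak. This is an illustrative sketch of the idea, not SoundCheck's actual implementation:

```python
import numpy as np

def chirp_trigger_index(recording, chirp, threshold=0.5):
    """Return the sample index where the known chirp is detected in the
    recording, or None if no confident match is found. Illustrative
    matched-filter sketch of a chirp trigger; `threshold` is the
    fraction of the perfect-match correlation required to fire."""
    # Cross-correlation of the recording with the chirp template
    corr = np.abs(np.correlate(recording, chirp, mode="valid"))
    peak = int(np.argmax(corr))
    # A perfect alignment yields dot(chirp, chirp); fire only if the
    # peak is a significant fraction of that, which rejects the false
    # triggers that plague simple level/frequency triggers in noise.
    if corr[peak] >= threshold * np.dot(chirp, chirp):
        return peak
    return None
```

Because the correlation peak depends on matching the chirp's full time-frequency trajectory, steady background noise or speech is unlikely to fire the trigger, which is the robustness advantage noted above.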
This sequence measures, as a function of frequency, the maximum SPL that a transducer can play back with acceptable distortion. It is particularly valuable for designers using DSP algorithms to optimize the performance of their speakers.
It characterizes the Max SPL of a transducer by setting limits on specific metrics (THD, Rub & Buzz, Perceptual Rub & Buzz, input voltage, and compression) and then driving the transducer at a series of standard ISO frequencies, increasing the stimulus level until one of the limits is exceeded. The sequence begins by measuring the frequency response and impedance of the DUT. The user is asked whether to use the -3 dB-from-resonance frequency as the test Start Frequency or to enter another value manually, and is then prompted to enter a Stop Frequency, an initial test level, and limit values for the metrics of interest. The sequence then plays the stimulus at the Start Frequency in a loop, increasing the level by 3 dB with each iteration until one of the limits is exceeded. The stimulus level is then reduced by 3 dB, and a second loop increases the level by 0.5 dB with each iteration until a limit is again exceeded. At this point, the limit results are saved to an Excel file, the stimulus frequency is incremented by a constant multiplication step, and the process is repeated until the Stop Frequency is reached. Each time the main loop completes, the individual SPL and stimulus level x-y pairs are concatenated to master curves. At the end of the sequence, the Max SPL and Stimulus Level curves are autosaved in .dat format.
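The coarse/fine level search at each frequency can be sketched as below. The `measure` callable and metric names are illustrative stand-ins for the SoundCheck measurement step, not actual SoundCheck APIs:

```python
def max_spl_at_frequency(measure, start_level_db, limits):
    """Sketch of the two-stage level search described above.
    `measure(level_db)` is assumed to play the stimulus at the given
    level and return a dict of metric values (e.g. THD, Rub & Buzz);
    `limits` maps metric names to their maximum allowed values.
    Returns the highest stimulus level at which all limits were met."""
    def exceeded(metrics):
        return any(metrics[name] > limit for name, limit in limits.items())

    level = start_level_db
    # Coarse search: raise the level in 3 dB steps until a limit trips
    while not exceeded(measure(level)):
        level += 3.0
    # Back off one coarse step, then refine in 0.5 dB steps
    level -= 3.0
    while not exceeded(measure(level + 0.5)):
        level += 0.5
    return level
```

Repeating this search at each ISO frequency (multiplying the frequency by a constant step each time) and collecting the resulting level/SPL pairs yields the Max SPL master curve described above.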