With over 100 combined years of audio measurement experience, our team has created a wealth of technical papers, sequences, articles and other useful information to assist you with your audio test needs. Please browse the collection below, or filter by type of resource.
This sequence demonstrates how SoundCheck’s Windowing post-processing function is applied to waveforms to remove measurement artifacts that might otherwise create false auto delay values and subsequent analysis errors.
This sequence uses data from a customer who was measuring the directivity of a hearing aid-type device by mounting it on a rotating HATS and using a short-duration log sweep. The DUT does not have a perfect seal in the HATS ear, and the device's signal processing produces a latency of around 35 ms. The Recorded Time waveforms show both the leakage signal and the amplified signal. As the DUT approaches 180°, the magnitude of the leakage into the HATS ear exceeds that of the amplified signal, creating false Record Delay values and subsequent analysis errors. This sequence applies a window to the Recorded Time Waveform to remove the early-arrival leakage and calculates the true Record Delay values of the amplified signal, obtaining consistent analysis results at all angles of rotation. The sequence can be adapted to other requirements, for example removing early-arrival signals from a waveform or editing out excessive delay.
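The same idea can be prototyped outside SoundCheck. Below is a minimal sketch, assuming the recorded waveform is available as a NumPy array sampled at `fs`: samples before an assumed window start are zeroed (with a short fade-in) so that a cross-correlation delay estimate locks onto the amplified signal rather than the early-arrival leakage. The window position, fade length, and the use of cross-correlation are illustrative assumptions, not the sequence's exact implementation.

```python
import numpy as np

def window_out_leakage(recorded, fs, window_start_ms, taper_ms=1.0):
    """Zero everything before window_start_ms, with a short raised-cosine
    fade-in so the edit does not introduce a discontinuity.
    window_start_ms is a hypothetical value chosen by inspecting the
    recorded waveform; SoundCheck's Windowing step is configured in its GUI."""
    out = recorded.copy()
    start = int(window_start_ms * 1e-3 * fs)
    taper = int(taper_ms * 1e-3 * fs)
    out[:start] = 0.0
    if taper > 0:
        fade = 0.5 * (1 - np.cos(np.pi * np.arange(taper) / taper))
        out[start:start + taper] *= fade[:len(out[start:start + taper])]
    return out

def estimate_delay_ms(stimulus, recorded, fs):
    """Rough delay estimate via cross-correlation (illustrative only)."""
    corr = np.correlate(recorded, stimulus, mode="full")
    lag = np.argmax(corr) - (len(stimulus) - 1)
    return 1e3 * lag / fs

# Example: window out leakage arriving before ~30 ms, then estimate the
# true delay of the amplified (~35 ms latency) path.
# recorded_clean = window_out_leakage(recorded_wav, fs, window_start_ms=30)
# delay_ms = estimate_delay_ms(stimulus_wav, recorded_clean, fs)
```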
TCP/IP control of SoundCheck provides a powerful and expandable command format for controlling and interacting with SoundCheck via any programming language (C#, C++, MATLAB, VB.net, LabVIEW, Python, etc.), on any operating system, either locally or through a network. This is valuable for anyone wishing to control SoundCheck from an external program, e.g. as part of an overall test plan or for factory automation. Using this feature, a single computer can control multiple SoundCheck systems, simplifying production line measurements.
The ability to connect to and control SoundCheck via TCP/IP first appeared in SoundCheck over 3 years ago, but in version 18 it has been enhanced with the ability to pass test configuration data into the memory list from external programs. This means, for example, that by externally storing parameters such as limits, test levels, and test signals, a single sequence can be used for multiple products or for testing the same product multiple times, simplifying sequence maintenance and reducing test configuration time.
This application note and accompanying demo scripts walk you through how to use Python to:
- Control a simple loudspeaker test setup, launching SoundCheck and running a sequence
- Run a simple frequency response sequence from a command line interface, creating placeholder curves, values, results and waveforms in the MemoryList and passing values into the placeholders via external control
- Read a WAV file and use it as a stimulus for performing an FFT Spectrum measurement in SoundCheck
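For orientation, the sketch below shows the general shape of such a client: a plain TCP socket that sends text commands and reads text replies. The host, port, and command strings are placeholders, not the actual SoundCheck TCP/IP syntax; use the port configured in your SoundCheck installation and the command format documented in the manual and the demo scripts that accompany this application note.

```python
import socket

# Minimal sketch of driving SoundCheck over TCP/IP from Python.
HOST = "127.0.0.1"   # machine running SoundCheck (could be a remote PC)
PORT = 4444          # assumed port; set to match your SoundCheck configuration

def send_command(sock, command):
    """Send one text command and return the text reply."""
    sock.sendall((command + "\n").encode("ascii"))
    return sock.recv(4096).decode("ascii").strip()

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    # Hypothetical command names, for illustration only:
    print(send_command(sock, "OpenSequence 'Simple Loudspeaker Test.sqc'"))
    print(send_command(sock, "RunSequence"))
    print(send_command(sock, "GetResult 'Frequency Response'"))
```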
Author: Steve Temme. Reprinted from the Jan 2020 issue of AudioXpress.
This article discusses tools and techniques for accurately measuring the audio performance of voice-controlled and connected devices under the many real-world conditions in which they may be used. It covers basic acoustic measurements such as frequency and distortion response, which have always been carried out on conventional wired systems, as well as the more complex real-world tests that apply specifically to voice-activated devices, along with the techniques and standards that may be used.
Voice-controlled smart devices such as smart speakers, hearables, and vehicle infotainment systems are notoriously complex to test. They have numerous connections, from wired to wireless, and contain much signal processing on both the record and the playback side. This means that their characteristics change according to the ‘real world’ conditions of the environment in which they are used, such as background noise, playback levels, and room acoustics. Furthermore, their multifunctional nature means that there are many aspects of the device that may need to be tested, ranging from voice recognition to music playback, operation as a hands-free telephone, and, in the case of hearables, hearing assistance. Due to their complex non-linear use cases, these devices often need to be tested at different levels and under different environmental conditions. This paper focuses on tools and techniques to accurately measure the audio performance of such devices under the many real-world conditions in which they are used.
Author: Steve Temme, Listen, Inc.
Presented at ISEAT 2019, Shenzhen, China.
Smart headphones or “hearables” are designed not only to play back music but to enhance communications in the presence of background noise and, in some cases, even compensate for hearing loss. They may also provide voice recognition, medical monitoring, fitness tracking, real-time translation and even augmented reality (AR). They contain complex signal processing, and their characteristics change according to their smartphone application and the ‘real world’ conditions of their actual environment, including background noise and playback levels. This paper focuses on how to measure their audio performance under the many real-world conditions in which they are used.
Author: Steve Temme, Listen, Inc.
Presented at AES Headphone Conference 2019, San Francisco, CA.
A tutorial and accompanying paper that was presented at the AES Automotive Conference, Sept 11-13, 2019, Neuburg an der Donau, Germany.
Voice-controlled and smartphone integrated vehicle infotainment systems are notoriously complex to test. They have numerous connections from wired to wireless and contain much signal processing, both on the record and on the playback side. This means that their characteristics change according to ‘real world’ conditions of the vehicle’s environment, including cabin acoustics and background noises from road, wind and motors. Furthermore, their multifunctional nature means that there are many aspects of the device that may need to be tested, ranging from voice recognition to music playback and operation as a hands-free telephone. Due to their complex non-linear use cases, these devices often need to be tested at different levels and different environmental conditions.
This tutorial offers practical hands-on advice on how to test such devices, including test configurations, what to measure, the challenges of making open-loop measurements, and how to select a test system.
This sequence, inspired by AES papers on statistical models to predict listener preference by Sean E. Olive, Todd Welti, and Omid Khonsaripour of Harman International, applies the Harman target curve for in-ear, on-ear and over-ear headphones to a measurement made in SoundCheck to yield the predicted user preference for the device under test. The measurements are made in SoundCheck and then saved to an Excel template which performs the necessary calculations to produce a Predicted Preference score on a scale of 0 to 100. The spreadsheet calculates an Error curve by subtracting the target curve from the average of the headphone's left/right response; the standard deviation, slope and average of the Error curve are then used to calculate the predicted preference score. The sequence also provides the option to recall data rather than making a measurement, which saves time for engineers who already have large quantities of saved data and enables historical comparison with obsolete products.
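The error-curve statistics are straightforward to express in code. The sketch below assumes left/right and target responses are NumPy arrays of dB levels on a shared frequency axis; the regression weights in `coeffs` are placeholders, not the published Harman coefficients, which in this sequence live in the Excel template.

```python
import numpy as np

def predicted_preference(freq_hz, left_db, right_db, target_db,
                         coeffs=(100.0, -10.0, -10.0, -1.0)):
    """Illustrative version of the error-curve calculation described above.
    coeffs = (intercept, weight_sd, weight_slope, weight_mean) are
    placeholder values, NOT the published Harman regression weights."""
    error = 0.5 * (left_db + right_db) - target_db         # Error curve (dB)
    sd = np.std(error)                                      # standard deviation
    slope = np.polyfit(np.log10(freq_hz), error, 1)[0]      # dB per decade
    mean_err = np.mean(error)                               # average error
    intercept, w_sd, w_slope, w_mean = coeffs
    score = intercept + w_sd * sd + w_slope * abs(slope) + w_mean * abs(mean_err)
    return float(np.clip(score, 0, 100)), sd, slope, mean_err
```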
This sequence characterizes a microphone’s ability to passively and/or actively reject noise in the user’s environment. Unlike traditional microphone SNR measurements which calculate a ratio based upon a reference signal and the microphone’s noise floor, this method utilizes a signal (speech played from a mouth simulator) and noise (background noise played from two or more equalized source speakers) captured by both a reference microphone and the DUT microphone.
First, a recording of the baseline ambient noise in the test environment is made and a 1/3 octave RTA spectrum is calculated from the recording. Next, the speech signal (mouth simulator) and noise signals (Left and Right speakers) are played consecutively and recorded separately using the reference microphone. A 1/3 octave RTA spectrum is calculated from each recorded time waveform. The same measurements are then repeated using the DUT microphone. The resulting RTA spectra are post-processed to produce a signal gain spectrum and a noise gain spectrum, which are then used to derive the SNR spectrum of the DUT microphone. For best accuracy, the Signal and Noise spectra should be at least 5 dB above the ambient noise floor of the measurement environment.
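The post-processing reduces to per-band dB arithmetic. Below is a minimal sketch of one way to combine the four 1/3-octave RTA spectra (reference/DUT, signal/noise), assuming each is a NumPy array of band levels in dB on the same band centres; the sequence's exact post-processing steps may differ.

```python
import numpy as np

def dut_snr_spectrum(ref_signal_db, ref_noise_db, dut_signal_db, dut_noise_db,
                     ambient_db=None):
    """Per-band SNR derivation from the four 1/3-octave RTA spectra (dB).
    Purely illustrative of the dB arithmetic performed as post-processing."""
    signal_gain = dut_signal_db - ref_signal_db   # DUT gain on the speech path
    noise_gain = dut_noise_db - ref_noise_db      # DUT gain on the noise path
    snr = (ref_signal_db - ref_noise_db) + (signal_gain - noise_gain)
    if ambient_db is not None:
        # Flag bands where signal or noise is less than 5 dB above the
        # ambient floor, per the accuracy note above.
        ok = (ref_signal_db > ambient_db + 5) & (ref_noise_db > ambient_db + 5)
        snr = np.where(ok, snr, np.nan)
    return snr
```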