Posts

SoundCheck 17 New Features Video

In this short video, sales engineer Les Quindipan gives a brief overview and demonstration of the new features and functionality of SoundCheck 17, including the new level and cross-correlation trigger, average curve/waveform post-processing functionality, new color palettes, save to MATLAB option, and more.

Triggered Record Using Chirp Trigger and WAV File (Version 17 and later)

This test sequence demonstrates SoundCheck’s Triggered Record – Chirp Trigger function for open loop testing of devices without analog inputs, such as smart speakers, wearables, smart home devices, tablets and cellphones. A stimulus WAV file is created in SoundCheck and transferred to the device under test, where it is played back and the response recorded in SoundCheck as if the stimulus were played directly from SoundCheck. The Acquisition step is triggered by the chirp in the stimulus file. Chirp triggers are more robust than level and frequency triggers, which are susceptible to false triggering due to background noise.
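As a rough illustration of the idea behind a chirp trigger, the sketch below locates a known chirp in a capture by matched filtering (cross-correlating the capture against the chirp) and treats everything after it as the device's response. This is a conceptual NumPy/SciPy example only, not SoundCheck's implementation; the file name, chirp frequencies and 0.5 s duration are all assumptions.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import chirp, fftconvolve

# Mono recording of the device's playback (hypothetical file name)
fs, capture = wavfile.read("device_capture.wav")
capture = capture.astype(np.float64)

# Reference chirp assumed to match the trigger portion of the stimulus WAV file
t = np.arange(int(0.5 * fs)) / fs                      # 0.5 s chirp (assumed)
ref = chirp(t, f0=200.0, t1=t[-1], f1=2000.0, method="logarithmic")

# Matched filter: convolving with the time-reversed chirp is cross-correlation
corr = fftconvolve(capture, ref[::-1], mode="valid")
trigger_idx = int(np.argmax(np.abs(corr)))             # most likely chirp start

# Treat everything after the chirp as the response to be analyzed
response = capture[trigger_idx + len(ref):]
print(f"Chirp found at {trigger_idx / fs:.3f} s; {len(response)} response samples")
```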

More

Seminars in China: Smart Speakers and Automotive Audio

The Challenges of Testing Speech Controlled Audio Systems

Your devices got smarter. Did your test system?

Learn how to implement open loop tests for both playback and recording in a range of devices including smart speakers, automotive audio, robots, IoT devices and more in this practical seminar.

Open loop testing (testing devices where inputs and outputs are independent) enables many types of smart devices and their components to be tested in various formats and situations including:

  • Smart speakers, smart watches and other smart devices
  • Microphone arrays
  • Speech recognition systems with microphones
  • In-vehicle audio systems
  • Audio devices/systems with no physical inputs or outputs
  • Testing in noisy environments

We explain how to measure the same parameters as traditional loudspeakers, discussing such challenges as injecting and extracting response signals, time delays, and triggering the system using voice commands. We also demonstrate how to measure microphone array performance, speakerphone performance and more. Course content includes:

  • Stimulating and capturing responses from a device where you don’t have direct access to the microphone or speaker (open loop testing).
  • Delays, asynchronous stimulus / acquisition, and working with the ‘cloud’
  • Testing with real world signals such as speech and music, and how to analyze results from these test stimuli
  • Voice Recognition – key word spotting, wake word testing, directionality, and the effect of background noise on voice recognition performance
  • Telephony – testing smart speakers for hands-free calling

Speakers: Steve Temme, Listen, Inc., and Peter Wulf Andersen, GRAS.

Dates & Locations:

Tuesday, January 15, 2019 – Taipei
Wednesday, January 16, 2019 – Shenzhen
Friday, January 18, 2019 – Suzhou

More information and registration

SoundCheck 16.1 Released – New Frequency Trigger for Open Loop Testing

Listen is excited to announce the release of SoundCheck 16.1. This minor release, which contains some exciting new features, is available free of charge to all registered users of SoundCheck 16.0.

Most significant is the inclusion of a new frequency trigger, which enables frequency-based triggering of acquisition from an external source using a pilot tone at the beginning of a test sweep. This offers improved accuracy over the previous level-based triggering when capturing responses from a device where you don’t have direct access to the microphone or speaker, especially in noisy environments. This technique optimizes open loop test methods for smart speakers and other voice-controlled devices such as smartphones, robots, automotive audio, smart thermostats, hearables and more. You can view a short video explaining this feature here:

 

We also have a free measurement sequence using the frequency trigger that can be downloaded from our website so that you can test it out right away! View Sequence.
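To illustrate the pilot-tone concept, the sketch below scans an input in short blocks and triggers only when energy at a known pilot frequency dominates the block, which is what makes a frequency trigger far less prone to false triggering from broadband background noise than a simple level trigger. This is a conceptual example only; the 1 kHz pilot frequency, block size and threshold are assumptions and do not reflect SoundCheck's actual algorithm or parameters.

```python
import numpy as np

def goertzel_power(block, fs, f0):
    """Power at a single frequency f0 in `block`, via the Goertzel algorithm."""
    n = len(block)
    k = int(round(n * f0 / fs))
    coeff = 2.0 * np.cos(2.0 * np.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in block:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2

def find_pilot_trigger(signal, fs, pilot_hz=1000.0, block=1024, ratio_db=20.0):
    """Return the first sample index where the pilot tone dominates a block,
    or None if it is never found. A block consisting purely of the pilot tone
    scores about 0 dB; broadband noise scores roughly -10*log10(block/2) dB."""
    for start in range(0, len(signal) - block + 1, block):
        chunk = np.asarray(signal[start:start + block], dtype=np.float64)
        tone = goertzel_power(chunk, fs, pilot_hz)
        total = float(np.sum(chunk ** 2)) * block / 2.0 + 1e-12
        if 10.0 * np.log10(tone / total + 1e-20) > -ratio_db:
            return start
    return None
```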

SoundCheck now also fully integrates with the Mentor A2B interface for testing automotive audio connected via the Analog Devices A2B digital bus. The Mentor Analyzer, which handles the transmission of signals into and out of the bus, is seen by SoundCheck as an ASIO interface, enabling SoundCheck to read from and write to the device and therefore analyze any transducer connected to the A2B bus. A custom VI permits the Mentor A2B interface configuration to be controlled from within a SoundCheck sequence, for example loading configurations and starting/stopping ASIO streams. This makes it an ideal R&D or production line test solution for automotive audio, or for anyone testing transducers connected via the A2B bus. More.

Also available (additional cost) with this new release are two optional sequences – background noise simulation to the ETSI ES 202 396-1 standard, and the TIA-920B dual-bandwidth telephone test standard.

The background noise simulation sequence (part number 3121) is a cost-effective alternative to dedicated background noise generation systems. It calibrates a 4.1 speaker array to conform to the ETSI ES 202 396-1 standard, providing an equalized, calibrated playback solution to stress devices in a standardized and repeatable way. The sequence includes a library of real-world binaural recordings from the standard, and custom or user-defined binaural recordings may also be used. Applications of this sequence include evaluating ANC, noise suppression, voice recognition testing, SNR optimization of microphones, beamforming directionality studies of microphone arrays and more. More.

The TIA-920B sequence (part number 3111) tests to the TIA-920B dual-bandwidth standard, which applies to both narrowband (NB) and wideband (WB) devices. It also allows a choice between Free Field (FF) and Diffuse Field (DF) as the Listener Reference Point (LRP). The current release measures digital communications devices with handset features according to TIA-920.110-B, and speakerphones according to TIA-920.120-B. Support for headset measurements according to TIA-920.130-B will be added in a future release. More.

Seminar: Smarter Measurements for Smarter Speakers – Denmark

Smarter speakers require smarter test methods!

Learn how to test smart speakers, robots, voice-controlled automotive audio and other IoT devices in our practical half-day seminar taught by industry expert Steve Temme of Listen, Inc., with additional information on microphone and hardware products for testing these devices from G.R.A.S.

Although acoustic testing of smart speakers and other voice-activated devices presents challenges such as injecting and extracting response signals, time delays, and triggering the system using voice commands, it is still possible to measure the same parameters as traditional loudspeakers. In addition, we can also measure microphone array performance, speakerphone performance and more.

We demonstrate how to do this, focusing on:

  • Stimulating and capturing responses from a device where you don’t have direct access to the microphone or speaker (open loop testing).
  • Delays, asynchronous stimulus / acquisition, and working with the ‘cloud’
  • Testing with real world signals such as speech and music, and how to analyze results from these test stimuli
  • Voice Recognition – key word spotting, wake word testing, directionality, and the effect of background noise on voice recognition performance
  • Telephony – testing smart speakers for hands-free calling
  • Microphones, artificial mouths and other hardware for smart speaker testing

There is no charge for this seminar, and lunch will be provided. There will also be a factory tour of G.R.A.S. Space is limited, so please RSVP today.


Location and Date:

Holte (Copenhagen area), Denmark: Friday, September 28th, 2018, 9.00am – 3.00pm. GRAS Sound & Vibration A/S, Skovlytoften 33, DK-2840 Holte, Denmark

Agenda (9am-3.00pm)

  • Introduction
  • Equipment for Smart Speaker Testing
    • Listen Product Overview
    • GRAS Product Overview
      • Choosing the right microphone for smart speaker testing environments (far field vs near field vs unconventional environments)
      • When to use a Mouth Simulator vs Head and Torso Simulator for testing
  • Open Loop Testing
    • What is ‘open loop’ testing?
    • Can you really measure frequency response and harmonic distortion in an open loop?
    • Using Frequency Shifting to time align asynchronous stimuli and response
    • Working with cloud based services
  • Testing with Real World Signals
    • How to use real voice and/or non-test-tone signals to evaluate acoustic performance
    • Analysis techniques used for evaluating non-test-tone stimuli
  • Voice Recognition
    • Keyword spotting / wake word testing
    • Room effects, environmental distractors, and SNR: how they affect voice recognition performance
    • Measuring directionality
  • Networking lunch
  • Telephony
    • What voice quality metrics are important for speakerphones?
    • Challenges of testing speakerphones in different environments
  • GRAS Factory tour (optional)

Reserve your space

Seminars in UK and Germany: The Challenges of Testing Speech Controlled Audio Systems

Your devices got smarter. Did your test system?

Learn how to implement open loop tests for both playback and recording in a range of devices including smart speakers, automotive audio, robots, IoT devices and more in this practical seminar.

Open loop testing (testing devices where inputs and outputs are independent) enables many types of smart devices and their components to be tested in various formats and situations including:

  • Smart speakers, smart watches and other smart devices
  • Microphone arrays
  • Speech recognition systems with microphones
  • In-vehicle audio systems
  • Audio devices/systems with no physical inputs or outputs
  • Testing in noisy environments

We explain how to measure the same parameters as traditional loudspeakers, discussing such challenges as injecting and extracting response signals, time delays, and triggering the system using voice commands. We also demonstrate how to measure microphone array performance, speakerphone performance and more. Course content includes:

  • Stimulating and capturing responses from a device where you don’t have direct access to the microphone or speaker (open loop testing).
  • Delays, asynchronous stimulus / acquisition, and working with the ‘cloud’
  • Testing with real world signals such as speech and music, and how to analyze results from these test stimuli
  • Voice Recognition – key word spotting, wake word testing, directionality, and the effect of background noise on voice recognition performance
  • Telephony – testing smart speakers for hands-free calling

There is no charge for this seminar, and lunch will be provided. Space is limited, so please RSVP today.

 

Locations and Dates:

UK: Tuesday, Sept 25th, Sharnbrook Hotel, Bedford, England. The Sharnbrook Hotel, Park Lane (off A6), Sharnbrook, Bedfordshire MK44 1LX. Note: If you need to stay here overnight, tell them you are with the ACSoft group to take advantage of a discounted rate.

 

Germany: Thursday, Sept 27th, NH München Ost Conference Center, Einsteinring 20, 85609 Aschheim, Germany.

 

 

Full Agenda (9am-4.00pm)

  • Introduction to Open Loop Testing
    • What is it?
    • What Kinds of Analysis Can Be Done?
      • Classic Acoustic Testing: Frequency Response and Distortion
        • Stimulus Signals
        • Distortion Methods: THD, THD+N, Rub & Buzz (Squeak & Rattle), Perceptual Rub & Buzz, Non-Coherent Distortion
    • Mixing Digital and Analog Units
    • Capturing Signals Asynchronous to the Stimulus
    • Dealing with Triggering, Playback and Recording Delays, Digital Clock Resampling, and the Corresponding Frequency Shift
  • Open Loop Applications for Playback
    • Smart Devices
      • Frequency Response and Distortion
      • Directionality
    • Automotive
      • 6 Mic Tree for Audio Tuning
      • Distortion methods
      • HATS for Impulse Response Testing @ seated position
  • Networking Lunch
  • Open Loop Applications for Recording
    • Smart Devices
      • Frequency Response and Distortion
      • Voice Recognition
      • Directionality
    • Automotive
      • Frequency Response and Distortion (general and to ITU P.11xx)
      • SNR placement study
      • Impulse Response function from HATS mouth -> microphone
      • Directionality
  • Handsfree Communications
    • Terminology for Handsfree Communications
    • Introduction to Telephony Metrics
  • GRAS Product Overview
    • Choosing the right microphone for your application (far field vs near field vs unconventional environments)
    • When to use a Mouth Simulator vs Head and Torso Simulator for testing

Presenters:

Steve Temme – Listen, Inc.

 

 

Seminar: Smarter Measurements for Smarter Speakers – Chicago and Boston

Smarter speakers require smarter test methods!

Learn how to test smart speakers, robots, voice-controlled automotive audio and other IoT devices in our practical half-day seminar taught by industry experts Steve Temme and Marc Marroquin of Listen, Inc.

Although acoustic testing of smart speakers and other voice-activated devices presents challenges such as injecting and extracting response signals, time delays, and triggering the system using voice commands, it is still possible to measure the same parameters as traditional loudspeakers. In addition, we can also measure microphone array performance, speakerphone performance and more.

We demonstrate how to do this, focusing on:

  • Stimulating and capturing responses from a device where you don’t have direct access to the microphone or speaker (open loop testing).
  • Delays, asynchronous stimulus / acquisition, and working with the ‘cloud’
  • Testing with real world signals such as speech and music, and how to analyze results from these test stimuli
  • Voice Recognition – key word spotting, wake word testing, directionality, and the effect of background noise on voice recognition performance
  • Telephony – testing smart speakers for hands-free calling

There is no charge for this seminar, and lunch will be provided. Space is limited, so please RSVP today.


Locations and Dates:

Addison (Chicago area): Wednesday July 18th, 2018 – 9.00am – 1.30pm. Hilton Garden Inn, 551 N Swift Rd, Addison, IL 60101

Waltham (Boston area): Friday July 20th, 2018 – 9.00am – 1.30pm. Hilton Garden Inn, 450 Totten Pond Rd, Waltham, MA 02451

 

Agenda (9am-1.30pm)

  • Introduction
  • Open Loop Testing
    • What is ‘open loop’ testing?
    • Can you really measure frequency response and harmonic distortion in an open loop?
    • Using Frequency Shifting to time align asynchronous stimuli and response
    • Working with cloud based services
  • Testing with Real World Signals
    • How to use real voice and/or non-test-tone signals to evaluate acoustic performance
    • Analysis techniques used for evaluating non-test-tone stimuli
  • Voice Recognition
    • Keyword spotting / wake word testing
    • Room effects, environmental distractors, and SNR: how they affect voice recognition performance
    • Measuring directionality
  • Telephony
    • What voice quality metrics are important for speakerphones?
    • Challenges of testing speakerphones in different environments
  • Networking lunch

Reserve your space

Open Loop Microphone Testing

This sequence demonstrates the two most common microphone measurements, frequency response and sensitivity, on a microphone embedded in a recording device. Typically, when measuring a microphone, the response of the device can be captured simultaneously with the stimulus. However, with devices such as voice recorders and wireless telephones, forming a closed loop can be cumbersome or impossible. This sequence demonstrates how to measure such a device by recording the signal on the device under test, transferring that recording to the computer running SoundCheck, and then using a Recall step to import the recorded waveform and analyze it.
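As a rough sketch of the same kind of analysis performed outside SoundCheck, the example below compares a recording transferred from the device under test against the known stimulus to obtain a frequency response (ratio of spectra) and a simple sensitivity figure (RMS level during a reference tone). The file names, the assumption that a 1 kHz, 94 dB SPL reference tone occupies the first second of the recording, and the use of full-scale-normalized WAV data are all hypothetical; this is not the SoundCheck sequence itself.

```python
import numpy as np
from scipy.io import wavfile

# Stimulus played at the microphone and the recording made by the device
# (hypothetical file names; assumes mono, floating-point WAV data scaled to +/-1)
fs_s, stimulus = wavfile.read("stimulus.wav")
fs_r, response = wavfile.read("device_recording.wav")
stimulus = stimulus.astype(np.float64)
response = response.astype(np.float64)
assert fs_s == fs_r, "stimulus and response must share a sample rate"

# Frequency response: response spectrum divided by stimulus spectrum
n = min(len(stimulus), len(response))
freqs = np.fft.rfftfreq(n, d=1.0 / fs_s)
H = np.fft.rfft(response[:n]) / (np.fft.rfft(stimulus[:n]) + 1e-12)
response_db = 20.0 * np.log10(np.abs(H) + 1e-12)

# Sensitivity: RMS level of the recording during an assumed 1 kHz reference
# tone at 94 dB SPL (1 Pa) occupying the first second of the recording
tone = response[: int(1.0 * fs_r)]
sensitivity_db = 20.0 * np.log10(np.sqrt(np.mean(tone ** 2)) + 1e-12)
print(f"Sensitivity: {sensitivity_db:.1f} dBFS re 1 Pa")
```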

This specific sequence, v4, is an improvement on the prior versions. The v1 release required that the audio file containing the recorded response waveform be manually windowed outside of SoundCheck before being analyzed. The v2 release utilized a new feature in SoundCheck 14, using values from the memory list to semi-automatically trim the waveform before analysis. The v3 release completely automated waveform editing through the use of intersection-level and windowing post-processing steps. The current v4 release uses the new Auto Delay+ algorithm, exclusive to SoundCheck 18 and later. Auto Delay+ is capable of detecting and accounting for delays from -0.5 seconds to any positive delay, eliminating the need for windowing steps in the sequence. If you are interested in learning more about this algorithm, please refer to the Analysis section of the SoundCheck manual.
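Auto Delay+ itself is proprietary to SoundCheck, but the underlying task of estimating a possibly negative or large positive offset between an asynchronous response and the stimulus, and then aligning the two before analysis, can be illustrated with a simple cross-correlation. The sketch below is only a conceptual stand-in, not the Auto Delay+ algorithm.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_delay_s(stimulus, response, fs):
    """Delay of `response` relative to `stimulus` in seconds; may be negative."""
    xc = correlate(response, stimulus, mode="full")
    lags = correlation_lags(len(response), len(stimulus), mode="full")
    return lags[int(np.argmax(np.abs(xc)))] / fs

def align(stimulus, response, fs):
    """Trim (positive delay) or zero-pad (negative delay) the response so it
    lines up with the stimulus before analysis."""
    shift = int(round(estimate_delay_s(stimulus, response, fs) * fs))
    if shift >= 0:
        aligned = response[shift:]
    else:
        aligned = np.concatenate([np.zeros(-shift), response])
    return aligned[: len(stimulus)]
```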

More