Posts

Triggered Record Using WAV File (Version 16.1 and later)

This sequence allows you to test devices without an analog input, such as smart speakers, tablets, cellphones and MP3 players, using SoundCheck’s frequency-based trigger functionality. This method offers improved accuracy over the previous level-based triggering, especially in noisy environments. A stimulus WAV file is created in SoundCheck and copied to the device under test (DUT), where it is played back and the response recorded in SoundCheck as if the stimulus had been played directly from SoundCheck. The stimulus WAV file to be used on the DUT may be customized in the stimulus step.

Note that this sequence uses the frequency-based trigger available in SoundCheck 16.1 and later. If you are using version 16.0 or earlier, please see the level-based trigger sequence.
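The sequence itself runs inside SoundCheck, but the idea behind a frequency-based trigger is easy to sketch: instead of firing on any signal that exceeds a level threshold, the recorder fires only when a known pilot frequency dominates the incoming audio block, which is what makes it robust in noisy environments. The Python sketch below is purely illustrative (the function names and the Goertzel-based detector are our assumptions, not SoundCheck’s actual implementation):

```python
import math

def pilot_ratio(block, sample_rate, pilot_freq):
    """Fraction of the block's energy at pilot_freq, via the Goertzel algorithm.
    Returns ~1.0 for a pure tone at pilot_freq, near 0.0 for unrelated signals."""
    n = len(block)
    k = round(n * pilot_freq / sample_rate)        # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s1 = s2 = 0.0
    for x in block:
        s1, s2 = x + coeff * s1 - s2, s1           # Goertzel recurrence
    bin_power = s1 * s1 + s2 * s2 - coeff * s1 * s2   # |DFT[k]|^2
    total = sum(x * x for x in block) + 1e-12          # avoid divide-by-zero
    return bin_power / (total * n / 2.0)

def frequency_trigger(block, sample_rate, pilot_freq, threshold=0.5):
    """True when the pilot tone dominates the block; noise or silence stays False."""
    return pilot_ratio(block, sample_rate, pilot_freq) >= threshold
```

A level-based trigger would fire on background noise of sufficient amplitude; this detector stays quiet until the specific pilot frequency appears, regardless of overall level.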

More

 

Seminar: Smarter Measurements for Smarter Speakers – Denmark

Smarter speakers require smarter test methods!

Learn how to test smart speakers, robots, voice-controlled automotive audio and other IoT devices in our practical one-day seminar taught by industry expert Steve Temme of Listen, Inc., with additional information on microphone and hardware products for testing these devices from G.R.A.S.

Although acoustic testing of smart speakers and other voice-activated devices presents challenges such as injecting and extracting response signals, time delays, and triggering the system using voice commands, it is still possible to measure the same parameters as traditional loudspeakers. In addition, we can also measure microphone array performance, speakerphone performance and more.

We demonstrate how to do this, focusing on:

  • Stimulating and capturing responses from a device where you don’t have direct access to the microphone or speaker (open loop testing).
  • Delays, asynchronous stimulus / acquisition, and working with the ‘cloud’
  • Testing with real world signals such as speech and music, and how to analyze results from these test stimuli
  • Voice Recognition – key word spotting, wake word testing, directionality, and the effect of background noise on voice recognition performance
  • Telephony – testing smart speakers for hands-free calling
  • Microphones, artificial mouths and other hardware for smart speaker testing

There is no charge for this one-day seminar and lunch will be provided. There will also be a factory tour of G.R.A.S. Space is limited, so please RSVP today.


Location and Date:

Holte (Copenhagen area), Denmark. Friday September 28th, 2018 – 9.00am – 3.00pm. GRAS Sound & Vibration A/S, Skovlytoften 33, DK-2840 Holte, Denmark

Agenda (9am-3.00pm)

  • Introduction
  • Equipment for Smart Speaker Testing
    • Listen Product Overview
    • GRAS Product Overview
      • Choosing the right microphone for smart speaker testing environments (far field vs near field vs unconventional environments)
      • When to use a Mouth Simulator vs Head and Torso Simulator for testing
  • Open Loop Testing
    • What is ‘open loop’ testing?
    • Can you really measure frequency response and harmonic distortion in an open loop?
    • Using Frequency Shifting to time align asynchronous stimuli and response
    • Working with cloud based services
  • Testing with Real World Signals
    • How to use real voice and/or non-test-tone signals to evaluate acoustic performance
    • Analysis techniques for evaluating non-test-tone stimuli
  • Voice Recognition
    • Key Word spotting / Wake Up word testing
    • Room effects, environmental distractors, and SNR…how they affect voice recognition performance
    • Measuring directionality
  • Networking lunch
  • Telephony
    • What voice quality metrics are important for speakerphones?
    • Challenges of testing speakerphones in different environments
  • GRAS Factory tour (optional)

Reserve your space

Seminars in UK and Germany: The Challenges of Testing Speech Controlled Audio Systems

Your devices got smarter. Did your test system?

Learn how to implement open loop tests for both playback and recording in a range of devices including smart speakers, automotive audio, robots, IoT devices and more in this practical one-day seminar.

Open loop testing (testing devices where inputs and outputs are independent) enables many types of smart devices and their components to be tested in various formats and situations including:

  • Smart speakers, smart watches and other smart devices
  • Microphone arrays
  • Speech recognition systems with microphones
  • In-vehicle audio systems
  • Audio devices/systems with no physical inputs or outputs
  • Testing in noisy environments

We explain how to measure the same parameters as traditional loudspeakers, discussing such challenges as injecting and extracting response signals, time delays, and triggering the system using voice commands. We also demonstrate how to measure microphone array performance, speakerphone performance and more. Course content includes:

  • Stimulating and capturing responses from a device where you don’t have direct access to the microphone or speaker (open loop testing).
  • Delays, asynchronous stimulus / acquisition, and working with the ‘cloud’
  • Testing with real world signals such as speech and music, and how to analyze results from these test stimuli
  • Voice Recognition – key word spotting, wake word testing, directionality, and the effect of background noise on voice recognition performance
  • Telephony – testing smart speakers for hands-free calling

There is no charge for this one-day seminar and lunch will be provided. Space is limited, so please RSVP today.

 

Locations and Dates:

UK: Tuesday Sept 25th, Sharnbrook Hotel, Bedford, England. The Sharnbrook Hotel, Park Lane (off A6), Sharnbrook Bedfordshire MK44 1LX. Note: If you need to stay here overnight, tell them you are with the ACSoft group to take advantage of a discounted rate.

 

Germany: Thursday Sept 27th, NH München Ost Conference Center, Einsteinring 20, 85609 Aschheim, Germany.


Full Agenda (9am-4.00pm)

  • Introduction to Open Loop Testing
    • What is it?
    • What Kinds of Analysis Can Be Done?
      • Classic Acoustic Testing: Frequency Response and Distortion
        • Stimulus Signals
        • Distortion Methods: THD, THD+N, Rub & Buzz (Squeak & Rattle), Perceptual Rub & Buzz, Non-Coherent Distortion
    • Mixing Digital and Analog Units
    • Capturing Signals Asynchronous to the Stimulus
    • Dealing with triggering, playback and recording delays, digital clock resampling, and the corresponding frequency shift
  • Open Loop Applications for Playback
    • Smart Devices
      • Frequency Response and Distortion
      • Directionality
    • Automotive
      • 6 Mic Tree for Audio Tuning
      • Distortion methods
      • HATS for Impulse Response Testing at the Seated Position
  • Networking Lunch
  • Open Loop Applications for Recording
    • Smart Devices
      • Frequency Response and Distortion
      • Voice Recognition
      • Directionality
    • Automotive
      • Frequency Response and Distortion (general and to ITU P.11xx)
      • SNR placement study
      • Impulse Response function from HATS mouth -> microphone
      • Directionality
  • Handsfree Communications
    • Terminology for Handsfree Communications
    • Introduction to Telephony Metrics
  • GRAS Product Overview
    • Choosing the right microphone for your application (far field vs near field vs unconventional environments)
    • When to use a Mouth Simulator vs Head and Torso Simulator for testing

Presenters:

Steve Temme – Listen, Inc.


Seminar: Smarter Measurements for Smarter Speakers – Chicago and Boston

Smarter speakers require smarter test methods!

Learn how to test smart speakers, robots, voice-controlled automotive audio and other IoT devices in our practical half-day seminar taught by industry experts Steve Temme and Marc Marroquin of Listen, Inc.

Although acoustic testing of smart speakers and other voice-activated devices presents challenges such as injecting and extracting response signals, time delays, and triggering the system using voice commands, it is still possible to measure the same parameters as traditional loudspeakers. In addition, we can also measure microphone array performance, speakerphone performance and more.

We demonstrate how to do this, focusing on:

  • Stimulating and capturing responses from a device where you don’t have direct access to the microphone or speaker (open loop testing).
  • Delays, asynchronous stimulus / acquisition, and working with the ‘cloud’
  • Testing with real world signals such as speech and music, and how to analyze results from these test stimuli
  • Voice Recognition – key word spotting, wake word testing, directionality, and the effect of background noise on voice recognition performance
  • Telephony – testing smart speakers for hands-free calling

There is no charge for this half-day seminar and lunch will be provided. Space is limited, so please RSVP today.


Locations and Dates:

Addison (Chicago area): Wednesday July 18th, 2018 – 9.00am – 1.30pm. Hilton Garden Inn, 551 N Swift Rd, Addison, IL 60101

Waltham (Boston area): Friday July 20th, 2018 – 9.00am – 1.30pm. Hilton Garden Inn, 450 Totten Pond Rd, Waltham, MA 02451

 

Agenda (9am-1.30pm)

  • Introduction
  • Open Loop Testing
    • What is ‘open loop’ testing?
    • Can you really measure frequency response and harmonic distortion in an open loop?
    • Using Frequency Shifting to time align asynchronous stimuli and response
    • Working with cloud based services
  • Testing with Real World Signals
    • How to use real voice and/or non-test-tone signals to evaluate acoustic performance
    • Analysis techniques for evaluating non-test-tone stimuli
  • Voice Recognition
    • Key Word spotting / Wake Up word testing
    • Room effects, environmental distractors, and SNR…how they affect voice recognition performance
    • Measuring directionality
  • Telephony
    • What voice quality metrics are important for speakerphones?
    • Challenges of testing speakerphones in different environments
  • Networking lunch

Reserve your space

Smarter Measurements for Smart Speakers

Authors: Daniel Knighten (Listen, Inc.) and Glenn Hess (Indy Acoustic Research). Reprinted from the March 2018 issue of audioXpress.

In this article, we describe techniques to characterize the frequency response, output level, and distortion of the device under test to enable direct comparisons between Internet of Things (IoT) smart speakers and conventional speakers.

Full Article

Seminar: Smarter Measurements for Smarter Speakers

Smarter speakers require smarter test methods!

Learn how to test smart speakers, robots, voice-controlled automotive audio and other IoT devices in our practical half-day seminar taught by industry experts Dan Knighten and Marc Marroquin of Listen, Inc.

Although acoustic testing of smart speakers and other voice-activated devices presents challenges such as injecting and extracting response signals, time delays, and triggering the system using voice commands, it is still possible to measure the same parameters as traditional loudspeakers. In addition, we can also measure microphone array performance, speakerphone performance and more.

We demonstrate how to do this, focusing on:

  • Stimulating and capturing responses from a device where you don’t have direct access to the microphone or speaker (open loop testing).
  • Delays, asynchronous stimulus / acquisition, and working with the ‘cloud’
  • Testing with real world signals such as speech and music, and how to analyze results from these test stimuli
  • Voice Recognition – key word spotting, wake word testing, directionality, and the effect of background noise on voice recognition performance
  • Telephony – testing smart speakers for hands-free calling

There is no charge for this half-day seminar and lunch will be provided. Space is limited, so please RSVP today.


Locations and Dates:

Cupertino (San Jose area): Wednesday February 14th, 2018 – 9.00am – 1.30pm. Hilton Garden Inn, Cupertino. 10741 N Wolfe Rd, Cupertino, CA 95014

Culver City (Los Angeles area): Friday February 16th, 2018 – 9.00am – 1.30pm. DoubleTree by Hilton Hotel Los Angeles – Westside. 6161 W Centinela Ave, Culver City, CA 90230

Agenda (9am-1.30pm)

  • Introduction
  • Open Loop Testing
    • What is ‘open loop’ testing?
    • Can you really measure frequency response and harmonic distortion in an open loop?
    • Using Frequency Shifting to time align asynchronous stimuli and response
    • Working with cloud based services
  • Testing with Real World Signals
    • How to use real voice and/or non-test-tone signals to evaluate acoustic performance
    • Analysis techniques for evaluating non-test-tone stimuli
  • Voice Recognition
    • Key Word spotting / Wake Up word testing
    • Room effects, environmental distractors, and SNR…how they affect voice recognition performance
    • Measuring directionality
  • Telephony
    • What voice quality metrics are important for speakerphones?
    • Challenges of testing speakerphones in different environments
  • Networking lunch

 

Reserve your space

Smart Speaker – Embedded Microphone Test Sequence

This sequence demonstrates a method by which SoundCheck can measure the performance of a microphone embedded in a so-called “smart speaker”. This example assumes that the DUT is an Amazon Echo but it can be adapted for use with virtually any other type of smart speaker by substituting the Echo’s voice activation phrase WAV file (“Alexa”) with one specific to the desired make and model.

The sequence begins by playing a voice activation phrase out of a source speaker, prompting the DUT to record both the voice command and the ensuing stepped sine sweep stimulus. A message step then prompts the operator to retrieve this recording from the DUT’s cloud storage system. This is accomplished by playing back the recording from the cloud and capturing it with a Triggered Record step in the SoundCheck test sequence. The Recorded Time Waveform is then windowed (to remove the voice command) and frequency shifted prior to analysis, and the result (Frequency Response) is shown on the final display step.
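The windowing step above can be pictured as trimming a known number of seconds (the voice command) off the front of the retrieved recording and applying a short fade-in so analysis does not see a hard discontinuity. This Python sketch is only an assumption about what such a step does conceptually, not SoundCheck’s actual window step:

```python
import math

def remove_voice_command(waveform, sample_rate, cut_seconds, fade_ms=10.0):
    """Trim the first cut_seconds (the voice command) from a recorded
    waveform, then apply a raised-cosine fade-in to avoid a hard edge."""
    start = int(cut_seconds * sample_rate)
    out = list(waveform[start:])
    fade = max(1, int(fade_ms / 1000.0 * sample_rate))
    for i in range(min(fade, len(out))):
        out[i] *= 0.5 - 0.5 * math.cos(math.pi * i / fade)  # gain ramps 0 -> 1
    return out
```

In practice the cut length would come from the known duration of the activation phrase plus any gap before the sweep begins.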

More

Smart Speaker – Embedded Loudspeaker Test Sequence

This sequence demonstrates a method by which SoundCheck can measure the performance of a loudspeaker embedded in a so-called “smart speaker”. This example assumes that the DUT is an Amazon Echo but it can be adapted for use with virtually any other type of smart speaker by substituting the Echo’s voice activation phrase audio file (“Alexa, play Test Signal One”) with one specific to the desired make and model.

The sequence begins by playing the voice activation phrase out of a source speaker, prompting the DUT to play back the MP3 stimulus file from the cloud, followed by a pause step to account for any activation latency. Following the pause, a triggered record step is used to capture the playback from the DUT. The Recorded Time Waveform is then frequency shifted prior to analysis, and the results (Frequency Response, THD and Perceptual Rub & Buzz) are shown on the final display step.
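The frequency-shift step compensates for the DUT’s playback clock running slightly fast or slow relative to the analyzer: if a nominal 1 kHz component in the stimulus shows up at, say, 1001 Hz in the recording, resampling the recording by the ratio measured/expected shifts it back before analysis. The sketch below illustrates the idea with simple linear interpolation; it is our assumption for illustration, not SoundCheck’s actual resampler:

```python
def resample_linear(x, ratio):
    """Stretch x in time by `ratio` using linear interpolation;
    a ratio > 1 lengthens the signal and lowers its frequencies."""
    out = []
    for j in range(int(len(x) * ratio)):
        t = j / ratio              # position in the original signal
        i = int(t)
        frac = t - i
        nxt = x[i + 1] if i + 1 < len(x) else x[i]
        out.append(x[i] * (1.0 - frac) + nxt * frac)
    return out

def correct_clock_drift(recorded, measured_freq, expected_freq):
    """Resample so a component observed at measured_freq lands at expected_freq."""
    return resample_linear(recorded, measured_freq / expected_freq)
```

A production resampler would use polyphase or sinc interpolation for measurement-grade accuracy; linear interpolation is used here only to keep the principle visible.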

We recommend reading our AES paper on this subject prior to continuing as it contains additional details on the test methods devised for this sequence.

More

Challenges of IoT Smart Speaker Testing

Quantitatively measuring the audio characteristics of IoT (Internet of Things) smart speakers presents several novel challenges. We discuss overcoming the practical challenges of testing such devices and demonstrate how to measure frequency response, distortion, and other common audio characteristics. In order to make these measurements, several measurement techniques and algorithms are presented that allow us to move past the practical difficulties presented by this class of emerging audio devices. We discuss test equipment requirements, selection of test signals and especially overcoming the challenges around injecting and extracting test signals from the device.

Authors: Glenn Hess (Indy Acoustic Research) and Daniel Knighten (Listen, Inc.)
Presented at the 143rd AES Conference, New York 2017

Full Paper