Add Sound Source Localization, Tracking, Separation and Post-Filtering to the Sensor SDK
The 7-mic array is currently rather useless as an intelligent sensor. However, it IS a 7-mic ARRAY, which means we can do sound source localization, tracking, separation and post-filtering.
Being able to tell what direction a vocal command came from would be really valuable in robotics applications.
Look at the ODAS library on GitHub to see all the things you can do with sound source localization, tracking, separation and post-filtering. It would be a waste not to include these features in the SDK for the Azure Kinect DK when it has that big hulking mic array.
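For context on what this kind of low-level access enables: libraries like ODAS build sound source localization out of primitives such as GCC-PHAT time-delay estimation between microphone pairs. Here is a minimal, self-contained NumPy sketch of GCC-PHAT; the function name and parameters are my own for illustration, not part of ODAS or the Sensor SDK.

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the delay (in seconds) of `sig` relative to `ref`
    using GCC-PHAT. Illustrative sketch, not a production DOA estimator."""
    n = len(sig) + len(ref)                  # zero-pad for linear correlation
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-15                   # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

# Two "microphones": white noise and a copy delayed by 25 samples.
fs = 16000
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)
y = np.concatenate((np.zeros(25), x))[:fs]
tau = gcc_phat(y, x, fs)                     # recovers ~25 / 16000 seconds
```

With delays estimated for several mic pairs and the known array geometry, the direction of arrival of a voice command can be triangulated, which is exactly the capability being requested here.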
The 7-mic array can be used for speech applications via the Microsoft Speech Devices SDK.
For details on how to use this SDK, please visit: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/
Yes, please see Aaron Hilton's response; these are exactly the reasons we want this functionality exposed.
Aaron Hilton commented
Dear Azure admin, sorry, but you seem to have missed the point of the poster's request and declined it with an unsuitable alternative. The Speech Devices SDK is a Java-based suite whose functions are exposed through the Java Runtime. There is a significant missed opportunity here to provide this lower-level functionality for applications such as dynamic camera zooming, conferencing sound isolation, and virtual cameras.
As useful as the Speech Devices SDK is for Microsoft's cognitive services, it does not expose the functions needed for many application areas.
The original post predates COVID; it may now be much more apparent why this directional-microphone functionality is actually very important.
Ian Wells commented
The 7-mic array is used for speech applications; see https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-devices-sdk-windows-quickstart
In that context, sound source localization, beamforming and noise suppression are applied.
The microphone array can also be used by the community for other purposes.
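To make the beamforming mentioned above concrete: once per-channel delays toward a source are known, a basic beamformer can be as simple as delay-and-sum. The following NumPy sketch uses synthetic signals and assumed integer steering delays of my own choosing; it is not taken from any Microsoft SDK.

```python
import numpy as np

def delay_and_sum(frames, delays):
    """Steer a beam by undoing each channel's integer sample delay,
    then averaging across channels. `frames` is (channels, samples)."""
    out = np.zeros(frames.shape[1])
    for ch, d in enumerate(delays):
        out += np.roll(frames[ch], -d)       # circular shift: sketch only
    return out / frames.shape[0]

# Synthetic 7-channel capture: a 440 Hz tone arriving with per-channel
# delays, plus independent sensor noise on each microphone.
fs = 16000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
delays = [0, 3, 6, 9, 12, 15, 18]            # assumed steering delays (samples)
rng = np.random.default_rng(1)
mics = np.stack([np.roll(clean, d) for d in delays])
mics += 0.5 * rng.standard_normal(mics.shape)
beam = delay_and_sum(mics, delays)
# Averaging 7 aligned channels suppresses the uncorrelated sensor noise
# (residual noise std drops by roughly a factor of sqrt(7)).
```

This is the kind of noise suppression and source isolation a 7-element array makes possible, and why exposing the raw multichannel stream and geometry in the Sensor SDK would be valuable beyond speech services.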