ID R&D ranks first in speaker recognition challenge, highlighting strength of voice biometrics for frictionless security

The first-place finish in VoxCeleb 2023 demonstrates the best-in-class performance of IDVoice® for securing conversational speech without passphrases

NEW YORK, August 29, 2023 — ID R&D, a global leader in AI-based voice biometrics and liveness detection, today announced the first-place finish of its award-winning IDVoice product in the 2023 VoxCeleb Speaker Recognition Challenge (VoxSRC 2023 Track 2). The win highlights the utility of IDVoice® for authentication without passphrases, useful for securing conversations such as those with generative AI-powered chatbots.

VoxSRC, organized by esteemed institutions including the University of Oxford, Google Research, Meta AI, and AWS AI, is designed to evaluate the proficiency of current methodologies in recognizing speakers from speech samples obtained “in the wild”. The dataset used for testing is sourced from online celebrity interview videos and encompasses a wide range of audio content, from professionally edited segments to casual multi-speaker conversations and interviews laden with background noise, laughter, and other audio artifacts. Such recordings present a test environment that simulates real-world authentication challenges.
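While the release does not disclose IDVoice internals, systems evaluated in challenges like VoxSRC typically extract a fixed-length speaker embedding from each utterance and score trial pairs by cosine similarity against a tuned decision threshold. A minimal sketch of that scoring step, assuming toy embeddings and a hypothetical threshold (not ID R&D's implementation):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_speaker(enroll: np.ndarray, probe: np.ndarray,
                 threshold: float = 0.6) -> bool:
    # Scores at or above the (system-tuned) threshold count as a match;
    # the threshold trades off false accepts against false rejects.
    return cosine_similarity(enroll, probe) >= threshold

# Toy embeddings: vectors pointing the same direction score near 1.0,
# orthogonal vectors score near 0.0.
enroll = np.array([1.0, 0.0, 1.0])
probe_same = np.array([2.0, 0.0, 2.0])   # same direction, different norm
probe_diff = np.array([0.0, 1.0, 0.0])   # orthogonal
```

In real systems the embeddings come from a deep neural network trained on large speaker-labeled corpora, and scores are often calibrated before thresholding; the comparison step itself is as simple as shown.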

“VoxCeleb is a rigorous challenge requiring speaker recognition without passphrases during conversations in unpredictable environments,” commented Konstantin Simonchik, Chief Scientific Officer and Co-founder of ID R&D. “Our success can be attributed in part to bringing all available resources to bear to achieve the best results. It’s a particularly important win for ID R&D because it demonstrates the performance of our technology under conditions that could be expected in real-world conversations, such as with emerging generative AI-powered chatbot technology.”

Track 2 of the challenge is specifically designed to assess algorithms trained on an open dataset, as opposed to a closed dataset defined by the organizers. ID R&D’s top-ranking performance in this demanding track is a testament to the robustness and adaptability of its IDVoice product. The challenge results further validate the company’s focus on providing market-leading identity security solutions that are also frictionless. The results of the challenge were announced at the VoxSRC Workshop at INTERSPEECH 2023, which took place in Dublin, Ireland, August 20-23. INTERSPEECH is the largest digital speech technology event in the world, with sponsorship and active participation from industry leaders in speech and speaker recognition.

ID R&D will demonstrate IDVoice at Voice & AI, taking place September 5-7 in Washington.

About ID R&D

ID R&D, a Mitek company, is an award-winning provider of AI-based voice and face biometrics and liveness detection. With one of the strongest R&D teams in the industry, ID R&D consistently delivers innovative, best-in-class biometric capabilities that raise the bar in terms of usability and performance. Our proven products have achieved superior results in industry-leading challenges, third-party testing, and real-world deployments in more than 70 countries. ID R&D’s solutions are available for easy integration with mobile, web, messaging, and telephone channels, as well as in smart speakers, set-top boxes, and other IoT devices. ID R&D is based in New York, NY. Learn more at www.idrnd.ai.

About Mitek

Mitek (NASDAQ: MITK) is a global leader in digital access, founded to bridge the physical and digital worlds. Mitek’s advanced identity verification technologies and global platform make digital access faster and more secure than ever, providing companies new levels of control, deployment ease and operation, while protecting the entire customer journey. Trusted by 99% of U.S. banks for mobile check deposits and 7,800 of the world’s largest organizations, Mitek helps companies reduce risk and meet regulatory requirements. Learn more at www.miteksystems.com.

Follow Mitek on LinkedIn, Twitter, and YouTube, and read Mitek’s latest blog posts here.