Imagine if you or a loved one could self-check brain function at any time using an app on your phone. This is becoming reality with the development of LOGOS, an automated telephone procedure designed to assess verbal memory.
For this to reach its potential, single spoken word-to-text technology needs to improve to >95% accuracy. Natural language speech-to-text technologies (e.g., Siri and the Google API) are excellent when they can mine context and usage, but their performance on single, isolated words is poor.
Prof Michael Valenzuela, from the Regenerative Neuroscience Group, collaborated with the Sydney Informatics Hub, with support from the Innovation Hub and the ICT TechLab, to set up a public Kaggle competition to crowdsource innovative solutions to this problem.
Over 60 people signed up to the challenge, and the best-performing entries were recognised on Tuesday night at the Innovation Week Awards ceremony.
Congratulations to Wilmer Yan, the overall winner of the LOGOS prize. Mr Yan is currently completing an Honours year at the University of Sydney. Notable mentions go to runner-up Jacqueline Huvanandana, a researcher at the Woolcock Institute, for top performance on the Kaggle test set, and to Master's student Mike Li, winner of the Artemis prize (awarded to the best runner-up who also used Sydney's Artemis HPC cluster in developing their solution). Each winning entry used a different deep learning approach to build a solution that generalised to unseen data. Prof Valenzuela and the team at RNG plan to collaborate with the winners to further develop their models.
The Sydney Informatics Hub thanks all our participants, as well as our partners in the Innovation Hub and the ICT TechLab, for helping make the Coding Challenge such a great success. We look forward to the next one!
For more information, get in touch at email@example.com.