Mozilla's DeepSpeech project uses TensorFlow and Baidu's Deep Speech research paper to build an open source speech-to-text system based on deep learning. The project allows training on your own local datasets, and there is also a pre-trained model that can be used during development.
The goal of the project is:
- Connect to Mumble or to the local audio stream
- Connect to Etherpad
- Map the audio to text and write it into the Etherpad pad
- Have fun watching how funny accents break the system
- Redo the Etherpad notes based on what you remember from the meeting and send them to the RESULT mailing list
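The audio-to-pad part of the pipeline above could be sketched roughly as follows. This is only a sketch under assumptions: it assumes the `deepspeech` Python package, a local pre-trained model file, and a running Etherpad instance; `MODEL_PATH`-style values, the pad ID, and the API key are placeholders, not part of the original project.

```python
from urllib.parse import urlencode


def etherpad_settext_url(base_url, api_key, pad_id, text):
    """Build an Etherpad HTTP API call (setText) that replaces a pad's text."""
    query = urlencode({"apikey": api_key, "padID": pad_id, "text": text})
    return f"{base_url}/api/1/setText?{query}"


def transcribe_and_publish(audio_pcm):
    """Run DeepSpeech on raw audio and push the transcript into a pad.

    Hypothetical glue code: `deepspeech` is an external package, and the
    model filename, pad ID, and API key below are placeholders.
    """
    import urllib.request
    from deepspeech import Model  # pip install deepspeech

    model = Model("deepspeech-0.9.3-models.pbmm")  # pre-trained model file
    text = model.stt(audio_pcm)  # expects 16 kHz, 16-bit mono PCM samples
    url = etherpad_settext_url("http://localhost:9001", "APIKEY", "meeting", text)
    urllib.request.urlopen(url)  # write the transcript into the pad
    return text
```

Splitting the Etherpad call into a small URL-building helper keeps the HTTP side testable without a speech model or a running pad server.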
Looking for mad skills in:
This project is part of:
Hack Week 17
This project is one of its kind!