Mozilla's DeepSpeech project[1] uses TensorFlow to implement Baidu's Deep Speech research paper as an open source, deep-learning-based speech-to-text system. The project allows training on your own local datasets, but there is also a pre-trained model that can be used during development.
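As a rough sketch of how the pre-trained model would be fed: DeepSpeech 0.9.x expects mono 16-bit PCM at 16 kHz as its input buffer. The snippet below generates a stand-in WAV file and loads it into that format; the actual `deepspeech.Model` call is shown only in comments, and the model filename there is an assumption, since it depends on which release you download.

```python
# Sketch: prepare audio in the format DeepSpeech's stt() consumes
# (16-bit, 16 kHz, mono PCM). Pure stdlib; the DeepSpeech call itself
# is hedged in comments at the bottom.
import array
import math
import wave

RATE = 16000  # DeepSpeech's pre-trained models are trained on 16 kHz audio

def write_test_wav(path, seconds=1.0, freq=440.0):
    """Create a mono 16-bit 16 kHz WAV (a stand-in for a real recording)."""
    samples = array.array("h", (
        int(32767 * 0.3 * math.sin(2 * math.pi * freq * n / RATE))
        for n in range(int(seconds * RATE))
    ))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 2 bytes = 16-bit samples
        w.setframerate(RATE)
        w.writeframes(samples.tobytes())

def load_pcm16(path):
    """Read a WAV file into the raw int16 buffer DeepSpeech consumes."""
    with wave.open(path, "rb") as w:
        assert w.getframerate() == RATE and w.getnchannels() == 1
        pcm = array.array("h")
        pcm.frombytes(w.readframes(w.getnframes()))
    return pcm

if __name__ == "__main__":
    write_test_wav("/tmp/ds_demo.wav")
    pcm = load_pcm16("/tmp/ds_demo.wav")
    print(len(pcm))  # one second of audio = 16000 samples
    # With the deepspeech package and a downloaded model, transcription
    # would look like this (model path is an assumption):
    #   import numpy
    #   from deepspeech import Model
    #   model = Model("deepspeech-0.9.3-models.pbmm")
    #   text = model.stt(numpy.frombuffer(pcm.tobytes(), numpy.int16))
```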

The goals of the project are:

  • Connect to Mumble or to the local audio stream
  • Connect to Etherpad
  • Map the sound to text and write it into the Etherpad
  • Have fun watching how funny accents break the system
  • Redo the Etherpad based on what you remember from the meeting and send it to the RESULT mailing list
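For the "write it into the etherpad" step, Etherpad exposes an HTTP API whose `appendText` endpoint adds text to the end of a pad. A minimal sketch of building that call follows; the host, API key, and pad ID are placeholders, and the API version path is an assumption based on the release you run.

```python
# Sketch: push transcribed text into an Etherpad pad via the HTTP API's
# appendText endpoint. Host, apikey, and padID below are placeholders.
from urllib.parse import urlencode
# from urllib.request import urlopen  # needed only for the real HTTP call

def build_append_url(base, apikey, pad_id, text):
    """Build the Etherpad API URL that appends text to a pad."""
    query = urlencode({"apikey": apikey, "padID": pad_id, "text": text})
    return f"{base}/api/1.2.13/appendText?{query}"

if __name__ == "__main__":
    url = build_append_url("http://localhost:9001", "SECRET",
                           "meeting-notes", "hello from DeepSpeech\n")
    print(url)
    # urlopen(url) would perform the append against a running Etherpad;
    # the API key lives in Etherpad's APIKEY.txt on the server.
```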
