
Speech Recognition in the News



Voice recognition on Ubuntu (using Google)!
By bboyjkang - 12/2/2012

Voice recognition on Ubuntu!

A small test showing the ability to use Google's voice recognition from the Ubuntu desktop. In theory, this could be made into something like an Ubuntu desktop assistant.

This is just a test; both accuracy and speed could be much better if this were written as a real application rather than a script.

Voice Recognition on Ubuntu, Part 2!

Previously I showed that it is possible to do voice recognition with Google's servers without Chrome. Now I have built a working demo to show possible uses of an Ubuntu voice assistant.
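Scripts of this kind typically recorded a short utterance, encoded it as FLAC, and posted it to an unofficial Google endpoint that returned JSON hypotheses. The sketch below follows that general flow; the URL, the response shape, and the helper names are my reconstruction of how such scripts commonly worked, and the endpoint was never a supported API (it has since been retired), so treat this as illustration only.

```python
import json
import subprocess
import urllib.request

# Unofficial endpoint those desktop scripts typically posted audio to;
# it was never a supported API and has since been retired.
API_URL = "https://www.google.com/speech-api/v1/recognize?lang=en-US"

def record_flac(seconds=3, wav="/tmp/utt.wav", flac="/tmp/utt.flac"):
    """Record from the default mic with ALSA, then encode to FLAC."""
    subprocess.run(["arecord", "-f", "S16_LE", "-r", "16000",
                    "-d", str(seconds), wav], check=True)
    subprocess.run(["flac", "-f", "-o", flac, wav], check=True)
    return flac

def parse_response(body):
    """Pull the top hypothesis out of the JSON the endpoint returned."""
    hyps = json.loads(body).get("hypotheses", [])
    return hyps[0]["utterance"] if hyps else ""

def recognize(flac_path):
    """POST the FLAC audio and return the transcribed text."""
    with open(flac_path, "rb") as f:
        req = urllib.request.Request(
            API_URL, data=f.read(),
            headers={"Content-Type": "audio/x-flac; rate=16000"})
    return parse_response(urllib.request.urlopen(req).read().decode())
```

A desktop assistant would then dispatch on the returned string, e.g. launching an application when `recognize()` yields "open firefox".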

Ubuntu Speech Input
By Fredtechno - 7/13/2012

I have created a system that empowers Ubuntu Desktops with dictation from an Android app.

The site (with forum) is at

Selling well on Google Play, with lots of positive feedback, this is a simple solution for those wanting to dictate to many Ubuntu programs without the hassle of configuring soundcards or spending hours training any software.

Ubuntu HUD, and future plans to include speech recognition
By kmaclean - 1/24/2012 - 1 Reply

From this article: Ubuntu rips up drop-down menus

Ubuntu is set to replace the 30-year-old computer menu system with a “Head-Up Display” that allows users to simply type or speak menu commands.
Ubuntu plans to integrate voice recognition with HUD in future releases, allowing users to dictate commands to their PC. 

HUD is described as follows:

Basically rather than navigating menus to find an application function, just tap ALT and type what you want the application to do.

Some fuzzy logic matches what you typed with the application menus, and the most relevant commands are displayed.  To complete the action just press return, or select one of the alternative functions presented in the auto-complete. 
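The "fuzzy logic" matching described above can be sketched with simple string similarity; this toy version uses Python's standard-library `difflib` over an invented menu, whereas HUD's real matcher is more sophisticated:

```python
from difflib import get_close_matches

# A made-up menu for illustration; not Ubuntu's actual menu data.
MENU = ["File > New", "File > Save", "File > Save As...",
        "Edit > Undo", "Image > Crop to Selection"]

def hud_match(query, entries=MENU, n=5, cutoff=0.3):
    """Rank menu entries by string similarity to what the user typed."""
    lowered = {e.lower(): e for e in entries}
    hits = get_close_matches(query.lower(), list(lowered), n=n, cutoff=cutoff)
    return [lowered[h] for h in hits]
```

Typing "save" would surface "File > Save" and "File > Save As..." as the top candidates, and pressing Return would run the best match.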

From Mark Shuttleworth's blog:

Voice is the natural next step

Searching is fast and familiar, especially once we integrate voice recognition, gesture and touch. We want to make it easy to talk to any application, and for any application to respond to your voice. The full integration of voice into applications will take some time. We can start by mapping voice onto the existing menu structures of your apps. And it will only get better from there.


Julius and online speech recognition
By Leslaw Pawlaczyk - 12/28/2011 - 2 Replies


I would like to present a new website, which I just launched with help from some of my friends, dedicated to recognizing speech stored in multimedia files. The automatically transcribed speech is then used to create subtitles, played back using smooth streaming and Silverlight 4.0. The website supports transcription of Polish and English. You can find out more on - I hope that this website can popularize speech recognition in general and also demonstrate the unique benefits of keyword searching in media files.


Leslaw Pawlaczyk

gnomeSpeak
By ghanitha - 4/12/2011 - 7 Replies


gnomeSpeak is a two-way voice application using GVC and Festival. The prototype is aimed at helping the visually impaired. It currently supports English and Tamil.

I would appreciate your feedback on it.



Voice control of Windows using Julius
By Leslaw Pawlaczyk - 4/1/2011 - 3 Replies


My team and I have just released new open source software, under the LGPL license, for controlling Windows using voice commands. The software uses Julius as its speech recognition engine. We currently support Polish acoustic models, so anyone with knowledge of a Slavic language is welcome to download it and try it. Once again, thanks go to Prof. Lee and his team for developing Julius.

Thank you
Leslaw Pawlaczyk

Google Chrome 11 beta includes server-based speech recognition
By kmaclean - 3/24/2011

From the Google Chrome blog:

Today, we’re updating the Chrome beta channel with a couple of new capabilities, especially for web developers. Fresh from the work that we’ve been doing with the HTML Speech Incubator Group, we’ve added support for the HTML speech input API. With this API, developers can give web apps the ability to transcribe your voice to text. When a web page uses this feature, you simply click on an icon and then speak into your computer’s microphone. The recorded audio is sent to speech servers for transcription, after which the text is typed out for you.

You can try it out yourself on Google's website (you need Google Chrome 11 beta installed).

It works on Linux - I tried it on Fedora 14 and Ubuntu 10.04 with no problems.
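In markup, this speech input capability surfaced as a vendor-prefixed attribute on text fields. A minimal sketch (the attribute is the one Chrome shipped at the time; the inline event handler is just for illustration):

```html
<!-- A text field with Chrome 11's speech input enabled: clicking the
     microphone icon records audio, sends it to Google's servers, and
     types the transcription into the field. -->
<input type="text" x-webkit-speech
       onwebkitspeechchange="console.log(this.value)">
```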

Open source dictation for Polish
By Leslaw Pawlaczyk - 6/8/2010 - 4 Replies

Hello everyone

I want to announce the first release of an open source project called Skrybot doMowy, which is based on the well-known decoder Julius. This software is the result of three years of research and is an LVCSR dictation system for the Windows platform, available from

The aim of this software, whose code is written in C#, is to allow fellow software engineers to write their own plugins and extensions to the dictation system.

Currently the program supports only Polish acoustic and language models, making it suitable for dictating emails or simple documents. It has a live view of the microphone input, allowing the user to monitor the volume of their speech.

One of the other aims was to make speech dictation available for free to everyone with a quality similar to commercial programs.

I encourage other researchers and programmers to get in contact with me and potentially develop GUI versions in other languages, as well as acoustic and language models for their native languages. We are considering supporting a British English version of this software soon; however, we still need to develop such models.

More details can be found on

Best regards

Leslaw Pawlaczyk

Google building speech capabilities for browsers
By kmaclean - 5/27/2010 - 1 Reply

According to this InfoWorld article, Google is building speech-recognition technologies not just for Chrome, but for all browsers. 


Ian Fette, product manager for the Google Chrome team, said (at the Google I/O conference in San Francisco late last week): 

We're hoping that the text-to-speech APIs as well as the voice input, voice recognition ship in Chrome but also become a Web standard that is implementable by any browser out there.


Rest in Peas: The Unrecognized Death of Speech Recognition
By kmaclean - 5/3/2010

Interesting article on speech recognition. The author, Robert Fortner, is not impressed with the rate of improvement in speech recognition over the years. The passage that gives the gist of his argument is:

We have learned that speech is not just sounds. The acoustic signal doesn’t carry enough information for reliable interpretation, even when boosted by statistical analysis of terabytes of example phrases. As the leading lights of speech recognition acknowledged last May, “it is not possible to predict and collect separate data for any and all types of speech…” The approach of the last two decades has hit a dead end.[...]

However, what is more interesting is the rebuttal by Jeff Foley (Nuance), who says in a comment:

First of all, any discussion of speech recognition is useless without defining the task--with the references to Dragon I'll assume we're talking about large vocabulary speaker dependent general purpose continuous automatic speech recognition (ASR) using a close-talking microphone. Remember that "speech recognition" is successfully used for other tasks from hands-free automotive controls to cell phone dialing to over-the-phone customer service systems. For this defined task, accuracy goes well beyond the 20% WERR cited here. Accuracy even bests that for speaker independent tasks in noisy environments without proper microphones, but of course those have constricted vocabularies making them easier tasks. In some cases, you write about the failure to recognize "conversational speech," which is a different task involving multiple speakers and not being aware of an ASR system trying to transcribe words. Software products such as Dragon do not purport to accomplish this task; for that, you need other technologies which are still tackling this task.

And with respect to Fortner's comment that "The core language machinery had not changed since the 50s and 60s", Foley says:

[...]  Actually, it was the Bakers' reliance on Hidden Markov Models (HMM) that made NaturallySpeaking possible. Where other ASR attempts focused on either understanding words semantically (what does this word mean?) or on word bigram and trigram patterns (which words are most likely to come next?), both techniques you described, the HMM approach at the phoneme level was far more successful. HMM's are pretty nifty; it's like trying to guess what's happening in a baseball game by listening to the cheers of the crowd from outside the stadium.[...]
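The phoneme-level HMM scoring Foley describes can be sketched with the classic forward algorithm, which computes how likely an observation sequence is under a given model. The states, symbols, and probabilities below are invented toy numbers, not real phoneme statistics:

```python
def forward(obs, states, start_p, trans_p, emit_p):
    """Total likelihood of an observation sequence under a discrete HMM."""
    # Initialize with the start distribution weighted by the first emission.
    alpha = {s: start_p[s] * emit_p[s][obs[0]] for s in states}
    for o in obs[1:]:
        # Sum over all paths reaching each state, then emit the next symbol.
        alpha = {s: sum(alpha[p] * trans_p[p][s] for p in states) * emit_p[s][o]
                 for s in states}
    return sum(alpha.values())

# Toy two-state model over two observable symbols.
states = ["A", "B"]
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.2, "y": 0.8}}
```

In a recognizer, each word (or phoneme sequence) gets its own model, and the decoder picks the model that assigns the acoustic observations the highest likelihood - guessing the game from the cheers, as Foley puts it.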

Good thing Sphinx, HTK and Julius all use HMM-based acoustic models...