All Speech Recognition Engines ("SREs") are made up of the following components:
A recognition Grammar essentially defines constraints on what the SRE can expect as input. It is a list of words and/or phrases that the SRE listens for. When one of these predefined words or phrases is heard, the SRE returns the word or phrase to the calling program - usually a Dialog Manager (but could also be a script written in Perl, Python, etc.). The Dialog Manager then does some processing based on this word or phrase.
The example in the HTK book is that of a voice-operated interface for phone dialling. If the SRE hears the sequence of words 'Call Steve Young', it returns the textual representation of this phrase to the Dialog Manager, which then looks up Steve's telephone number and dials it.
It is very important to understand that the words that you can use in your Grammar are limited to the words that you have 'trained' in your Acoustic Model. The two are tied very closely together.
An Acoustic Model is a file that contains a statistical representation
of each distinct sound that makes up a spoken word. It must
contain the sounds for each word used in your grammar. The words
in your grammar give the SRE the sequence of sounds it must listen
for. The SRE then listens for the sequence of sounds that make up
a particular word, and when it finds a particular sequence, returns the
textual representation of the word to the calling program (usually a
Dialog Manager). Thus, when an SRE is listening for words, it is
actually listening for the sequence of sounds that make up
one of the words you defined in your Grammar. The Grammar and the Acoustic
Model work together.
Therefore, when you train your Acoustic Model to recognize the phrase 'call Steve Young', the SRE is actually listening for the phoneme sequence "k", "ao", "l", "s", "t", "iy", "v", "y", "ah" and "ng". If you say each of these phonemes aloud in sequence, it will give you an idea of what the SRE is looking for.
Commercial SREs use large databases of speech audio to create their Acoustic Models. Because of this, most common words that might be used in a Grammar are already included in their Acoustic Model.
When creating your own Acoustic Models and Grammars, you need to make sure that all the phonemes that make up the words in your Grammar are included in your Acoustic Model.
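It can help to sanity-check this coverage before training. The sketch below is illustrative Python, not part of the VoxForge or Julius tools; the phoneme inventory and the .voca snippet are assumptions for the example:

```python
# Sketch: verify that every phoneme used in a .voca file is covered by the
# acoustic model's phoneme inventory. The inventory and the sample text
# below are illustrative assumptions, not files from this tutorial.

ACOUSTIC_MODEL_PHONEMES = {"sil", "k", "ao", "l", "s", "t", "iy", "v", "y", "ah", "ng"}

def voca_phonemes(voca_text):
    """Collect every phoneme mentioned in .voca-style text."""
    phones = set()
    for line in voca_text.splitlines():
        line = line.strip()
        if not line or line.startswith("%"):  # skip blanks and category headers
            continue
        phones.update(line.split()[1:])       # first column is the output word
    return phones

sample = """\
% NAME
STEVE  s t iy v
YOUNG  y ah ng
"""
missing = voca_phonemes(sample) - ACOUSTIC_MODEL_PHONEMES
print("missing phonemes:", sorted(missing))   # an empty list means full coverage
```

Any phoneme reported as missing would have to be added to (or retrained into) the Acoustic Model before the word using it could be recognized.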
In Julius, a recognition grammar is separated into two files:
The rules governing the allowed words are defined in the .grammar file using
a modified BNF format. A .grammar specification in Julius uses a
set of derivation rules, written as:
Symbol: [expression with Symbols]
A terminal is BNF jargon for a symbol that represents a constant value; it never appears to the left of the colon. In Julius, terminals represent Word Categories: lists of words that are further defined in a separate ".voca" file.
A nonterminal is BNF jargon for a symbol that can be expressed in terms of other symbols. It can be replaced as a result of substitution rules.
For example, look at the following derivation rules:
S      : NS_B LOOKUP NS_E
LOOKUP : CONNECT NAME
In this example, "S" is the initial sentence symbol. NS_B and NS_E correspond to the silence that occurs just before and just after the utterance you want to recognize. "S", "NS_B" and "NS_E" are required in all Julius grammars.
"NS_B", "NS_E", "CONNECT", and "NAME" are terminals, and represent Word Categories that must be defined in the ".voca" file. In the ".voca" file,"CONNECT" corresponds to two words: "PHONE" and "CALL" and their pronunciations. "NAME" corresponds to two words: "STEVE" and "YOUNG" and their pronunciations.
"LOOKUP" is a nonterminal, and does not have any definition in the
.voca file. It does have a further definition in the .grammar
file, where it is replaced by the expression "CONNECT NAME". All
nonterminals must be further defined in the .grammar file until they
are finally represented by terminals (which are then defined in the
.voca file as Word Categories).
With Julius, only one Substitution Rule per line is permitted, with the colon ":" as the separator. Alphanumeric ASCII characters and the underscore are permitted for Symbol names, and these are case sensitive.
The ".voca" file contains Word Definitions for each Word Category defined in the .grammar file.
Each Word Category must be defined with "%" preceding it. Word Definitions in each Word Category are then defined one per line. The first column is the string which will be output when recognized, and the rest is the pronunciation. Spaces and/or tabs act as field separators.
[Word Definition] [pronunciation ...]
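As a rough illustration of this layout, the following Python sketch (not part of the Julius toolchain) parses .voca-style text into Word Categories and their Word Definitions:

```python
# Sketch of how a .voca file maps Word Categories to Word Definitions:
# a '%' line opens a category; each following line is "WORD phoneme ...".
# This parser is an illustration only, not part of the Julius tools.

def parse_voca(text):
    categories = {}          # category name -> list of (word, [phonemes])
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("%"):
            current = line.lstrip("%").strip()
            categories[current] = []
        else:
            word, *phones = line.split()
            categories[current].append((word, phones))
    return categories

voca = """\
% CONNECT
PHONE  f ow n
CALL   k ao l
% NAME
STEVE  s t iy v
YOUNG  y ah ng
"""
cats = parse_voca(voca)
print(cats["CONNECT"])   # [('PHONE', ['f', 'ow', 'n']), ('CALL', ['k', 'ao', 'l'])]
```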
For example the Word Categories "NS_B", "NS_E", "CONNECT", and "NAME" were referenced in the ".grammar" file above and are defined in a ".voca" as follows:
% NS_B
<s>      sil
% NS_E
</s>     sil
% CONNECT
PHONE    f ow n
CALL     k ao l
% NAME
STEVE    s t iy v
YOUNG    y ah ng
In the above example, the NS_B and NS_E Word Categories each have one Word Definition with a silence model ('sil' is a special silence model defined in your Acoustic Model). These correspond to the head and tail silence in speech input.
"CONNECT" is broken out into two words, "PHONE" and "CALL", with their pronunciation information: the phonemes that make up each word to be recognized (and which correspond to phonemes that will be included in your Acoustic Model). "NAME" is broken out into two words, "STEVE" and "YOUNG", with their phonemes.
The phonemes used here must match the phonemes used in the creation of your Acoustic Model (which we will create in later steps).
If a word has more than one pronunciation, simply create additional entries for the same word on separate lines, one per pronunciation.
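For example, a word such as EITHER, which has two common pronunciations, could be listed twice in its Word Category (the phoneme strings here are illustrative):

```
EITHER   iy dh er
EITHER   ay dh er
```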
Julius needs a predefined word lattice file in which each word and each word-to-word transition is listed explicitly. We get this by compiling the ".grammar" and ".voca" files together to generate the word lattice file (actually two files, but more on that later) with a script. The mkdfa.jl script does this by looking for the Initial Sentence Symbol "S" in the .grammar file, replacing the Word Categories with all the possible Word Candidates from the .voca file, and building a predefined list of all the possible combinations of words and phrases Julius must recognize. In this case, the list of sentences would be:
<s> PHONE STEVE </s>
<s> PHONE YOUNG </s>
<s> CALL STEVE </s>
<s> CALL YOUNG </s>
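Conceptually, this expansion amounts to taking the cross product of the word lists in each Word Category. The Python sketch below is an illustration only (the real script builds a finite automaton rather than a flat list), reproducing the sentence list for the grammar above:

```python
# Sketch of the expansion the grammar compiler performs conceptually:
# starting from S, replace each Word Category with every word it contains,
# producing all sentences the grammar accepts.

from itertools import product

categories = {
    "CONNECT": ["PHONE", "CALL"],
    "NAME":    ["STEVE", "YOUNG"],
}
# S : NS_B CONNECT NAME NS_E, with NS_B/NS_E rendered as <s> and </s>
sentences = [
    f"<s> {a} {b} </s>"
    for a, b in product(categories["CONNECT"], categories["NAME"])
]
for s in sentences:
    print(s)
```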
What this means is that when Julius hears the sounds that make up a word or phrase uttered by a user, it tries to match these sounds to the statistical representations of sounds contained in the Acoustic Model. When a match is made, Julius determines the phoneme corresponding to the sound. It keeps track of the matching phonemes until it reaches a pause in the user's speech. It then searches the compiled grammar for the equivalent series of phonemes. You can think of the compiled grammar as looking something like this:
sil f ow n s t iy v sil
sil f ow n y ah ng sil
sil k ao l s t iy v sil
sil k ao l y ah ng sil
If, for example, a match is made with the list of phonemes "sil k ao l s t iy v sil", Julius returns the words "<s> CALL STEVE </s>" to the calling program.
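That lookup can be sketched as follows (illustrative Python; the pronunciations come from the .voca example above, and the exact-match comparison is a simplification of Julius's actual graph search):

```python
# Sketch of the lookup Julius performs at a pause: map each candidate
# sentence to its phoneme string and compare with what was heard.

pron = {
    "<s>": ["sil"], "</s>": ["sil"],
    "PHONE": ["f", "ow", "n"], "CALL": ["k", "ao", "l"],
    "STEVE": ["s", "t", "iy", "v"], "YOUNG": ["y", "ah", "ng"],
}

def to_phonemes(sentence):
    """Concatenate the pronunciations of each word in the sentence."""
    return [p for word in sentence.split() for p in pron[word]]

heard = "sil k ao l s t iy v sil".split()
candidates = ["<s> PHONE STEVE </s>", "<s> CALL STEVE </s>"]
matches = [s for s in candidates if to_phonemes(s) == heard]
print(matches)   # ['<s> CALL STEVE </s>']
```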
For this tutorial, go to the 'voxforge' folder you created in your home directory. Create a new directory called 'tutorial'.
Next create a file called sample.grammar in your new 'voxforge/tutorial' folder, and add the following text:
S    : NS_B SENT NS_E
SENT : CALL_V NAME_N
SENT : DIAL_V DIGIT
In this case, NS_B, NS_E, CALL_V, NAME_N, DIAL_V, DIGIT are Word Categories (i.e. terminals in BNF jargon), and they must be defined in a separate .voca file.
"SENT" is the only nonterminal symbol. The "SENT" in the first line will be substituted with either of the following Word Category Phrases: "CALL_V NAME_N" or "DIAL_V DIGIT".
Each Word Category (i.e. "CALL_V", "NAME_N", "DIAL_V", or "DIGIT") is replaced by one of the Word Definitions set out in the .voca file below.
For this tutorial, create a file called: sample.voca in your 'voxforge/tutorial' folder, and add the following text:
% NS_B
<s>      sil
% NS_E
</s>     sil
% CALL_V
PHONE    f ow n
CALL     k ao l
% DIAL_V
DIAL     d ay l
% NAME_N
STEVE    s t iy v
YOUNG    y ah ng
% DIGIT
FIVE     f ay v
FOUR     f ow r
NINE     n ay n
EIGHT    ey t
ONE      w ah n
SEVEN    s eh v ih n
SIX      s ih k s
THREE    th r iy
TWO      t uw
ZERO     z iy r ow
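As a quick check of grammar size, assuming "SENT" expands either to "CALL_V NAME_N" or to "DIAL_V DIGIT", the number of sentences this grammar accepts can be computed from the category sizes in sample.voca:

```python
# Sketch: the number of sentences the sample grammar accepts is the sum,
# over each SENT alternative, of the product of its category sizes.
# The sizes below are counted from the sample.voca Word Definitions.

sizes = {"CALL_V": 2, "NAME_N": 2, "DIAL_V": 1, "DIGIT": 10}

# SENT : CALL_V NAME_N  |  DIAL_V DIGIT   (the two alternatives)
total = sizes["CALL_V"] * sizes["NAME_N"] + sizes["DIAL_V"] * sizes["DIGIT"]
print(total)   # 14
```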
The .grammar and .voca files now need to be compiled into ".dfa" and ".dict" files so that Julius can use them.
Download the Julia mkdfa.jl grammar compiler script to your 'voxforge/bin' folder.
Note: the mkdfa.jl script assumes that the Julius grammar compilation programs it calls are accessible from your PATH (which should be the case, since they are included as part of the Julius executables you just downloaded).
The .grammar and .voca files need to have the same file prefix, and this prefix is then specified to the mkdfa.jl script. From a command prompt in your 'voxforge/tutorial' directory, compile your files (sample.grammar and sample.voca) using the following command:
julia ../bin/mkdfa.jl sample
Where 'julia' invokes the Julia language interpreter; "../bin/mkdfa.jl" tells Julia to go up one directory, then down into the bin directory, to execute the "mkdfa.jl" script; and "sample" is the file prefix shared by your grammar files (i.e. your grammar files are "sample.grammar" and "sample.voca").
The following shows the expected output from running the mkdfa.jl script:
julia ../bin/mkdfa.jl sample
sample.grammar has 3 rules