 1.4 Current State
 =================
 
 In this version, most of the features of Speech Dispatcher are
 implemented, and we believe it is now useful to applications as a
 device-independent Text-to-Speech layer and as an accessibility
 message coordination layer.
 
   Currently, one of the most advanced applications that work with
 Speech Dispatcher is 'speechd-el'.  This is a client for Emacs,
 targeted primarily at blind users.  It is similar to Emacspeak,
 although the two take somewhat different approaches and serve
 different user needs.  You can find speechd-el at
 <http://www.freebsoft.org/speechd-el/>.  speechd-el provides speech
 output for nearly any text-interface task on GNU/Linux, such as
 editing text, reading email or browsing the web.
 
   Orca, the primary screen reader for the GNOME desktop, has
 supported Speech Dispatcher directly since version 2.19.0.  See
 <http://live.gnome.org/Orca/SpeechDispatcher> for more information.
 
   We also provide a shared C library, a Python library, and Java,
 Guile and Common Lisp libraries that implement the SSIP functions of
 Speech Dispatcher as higher-level interfaces.  A Go interface is also
 available at <https://github.com/ilyapashuk/go-speechd>.  Writing
 client applications in these languages should be quite easy.
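
   For illustration, here is a minimal sketch of a client using the
 Python library.  It assumes the 'speechd' Python module shipped with
 Speech Dispatcher is installed; the client name and the spoken text
 are arbitrary examples.

      import speechd

      # Open an SSIP connection under an arbitrary client name.
      client = speechd.SSIPClient('example-client')

      # Optional settings for this connection.
      client.set_priority(speechd.Priority.MESSAGE)
      client.set_language('en')

      # Queue a message and close the connection.
      client.speak("Hello from Speech Dispatcher!")
      client.close()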
 
   On the synthesis side, there is good support for Festival, eSpeak,
 Flite, Cicero, IBM TTS, MBROLA, Epos, the DECtalk software synthesizer,
 Cepstral Swift and others.  See ⇒Supported Modules.
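
   If more than one synthesizer is installed, a client can also choose
 which output module to use for its connection.  For example, with the
 Python library (the module name 'espeak' below is only an
 illustration and must correspond to an output module configured on
 the system):

      import speechd

      client = speechd.SSIPClient('module-example')

      # Use a specific synthesizer for this connection; the name must
      # match one of the configured output modules.
      client.set_output_module('espeak')

      client.speak("Spoken by the selected output module.")
      client.close()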
 
   We decided not to interface simple hardware speech devices, as they
 do not support synchronization and therefore cause serious problems
 when handling multiple messages.  They are also not extensible, are
 usually expensive, and are often hard to support.  Today's computers
 are fast enough to perform speech synthesis in software, and Festival
 is a good example.