Igor Jablokov interview on multimodal search

Igor Jablokov

Last Monday night I sat down with Igor Jablokov, an IBM program director working on new methods of multimodal search using open standards, to do a podcast. Multimodal search adds voice commands to a visual display to allow easy access to a long list of commands and contextual information. The technology is currently used in web browsers, mobile phones, and automobile computing systems. I also recorded a presentation by Igor on mobile search at Mobile Monday in April.

IBM is one of the authors of the XHTML+Voice (X+V) proposed standard; Opera and Motorola are also active contributors. The proposal combines XHTML, VoiceXML, and XML Events to add voice interaction to ordinary web content. IBM's supporting software runs on many server and client platforms and includes an Eclipse-based environment for creating voice-enabled content.
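To make the combination concrete, here is a minimal sketch of what an X+V page looks like: a VoiceXML form lives in the XHTML head, and an XML Events handler wires it to a visible input field so that focusing the field triggers the voice dialog. Element names follow the X+V profile, but the grammar file and ids are hypothetical and the snippet is illustrative rather than a tested document.

```xml
<?xml version="1.0"?>
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:vxml="http://www.w3.org/2001/vxml"
      xmlns:ev="http://www.w3.org/2001/xml-events">
<head>
  <title>X+V sketch</title>
  <!-- VoiceXML dialog embedded in the XHTML head -->
  <vxml:form id="voice_city">
    <vxml:field name="city">
      <vxml:prompt>Say the name of a city.</vxml:prompt>
      <!-- hypothetical grammar file listing recognizable cities -->
      <vxml:grammar src="cities.grxml" type="application/srgs+xml"/>
      <vxml:filled>
        <!-- copy the recognized value into the visual form field -->
        <vxml:assign name="document.getElementById('city').value"
                     expr="city"/>
      </vxml:filled>
    </vxml:field>
  </vxml:form>
</head>
<body>
  <p>City:
    <!-- XML Events: focusing this input starts the voice dialog -->
    <input type="text" id="city"
           ev:event="focus" ev:handler="#voice_city"/>
  </p>
</body>
</html>
```

The same page degrades gracefully in a visual-only browser, which is part of the appeal of building on existing web standards.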

Igor showed off a Samsung phone running Windows Mobile with a prototype of the WebSphere multimodal browser. The browser accepts search queries for Yahoo! Local and returns voice-enabled results using Yahoo!’s web service APIs.

We discussed dynamic grammars, a new development in mobile search that builds an acceptable grammar specific to a returned data set. If you are in your car waiting for an urgent e-mail, you can ask the car to retrieve all new messages marked urgent; the system then builds a grammar from the senders in that result set, so each sender's name becomes a valid voice command.
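A dynamically generated grammar for the e-mail scenario above might be as small as this. Igor mentions JSGF (the Java Speech Grammar Format) later in the interview; the sender names here are hypothetical placeholders that a server would fill in from the query results before handing the grammar to the recognizer.

```java
// JSGF fragment generated server-side from the urgent-message result set
#JSGF V1.0;

grammar urgentSenders;

// each alternative is a sender pulled from the returned data set
public <sender> = Kevin Burton | Igor Jablokov | Niall Kennedy;
```

Because the grammar only contains names that actually appear in the results, the recognizer's search space stays tiny, which is what makes this practical on constrained devices.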

Igor is tasked with building for the future. Many of the technologies we discussed are not expected to be mainstream until 2008 or 2010. Companies involved in creating these voice-enabled interfaces are already planning for 2015.

Thanks to Igor for requesting this interview and Text100 for making all the arrangements.

My audio interview with Igor Jablokov is available in MP3 format. The 28-minute interview is a 12.9 MB download.

Interview questions

  1. What are some of the biggest obstacles in mobile search today?
  2. What is the XHTML+Voice proposal?
  3. What devices and software support the service today?
  4. What companies are outputting content in this format?
  5. What is IBM’s involvement? What other companies are involved?
  6. How is it being used in the car?
  7. How can you accommodate a variety of accents and dialects? A thick Irish accent is supposed to be very difficult to recognize.
  8. You brought a new mobile prototype with you today. What’s exciting about this advancement?
  9. Tell me about mixed initiatives. What are the current use cases and implementations?
  10. I’ve used voice software in the past and I felt the need to slow down and enunciate. How has voice recognition improved?
  11. Tell me about JSGF.
  12. How can you create dynamically generated grammars?
  13. Why should I, as a small company, be interested in X+V? Where is the ROI?
  14. What are some ways we can voice-enable our site? What changes do we need to make?
  15. What are some of the largest grammar implementations right now and what sort of hardware is needed to deal with that?
  16. What are some competing standards and implementations? Microsoft Speech?
  17. What are some of the tools I need to get started?
  18. What’s coming next? How can I build an application for the next generation of devices and standards?



2 comments


  1. Kevin Burton wrote:

    I happened to be sitting at the same table while Niall and Igor were doing this interview. In the middle of it, Niall jumped up and motioned for me to look behind Igor.

    There, 5-10 feet away, was a nice fat *skunk* !!!

    Luckily for us it just scurried away.

    Ironic that this was the Google campus. :)

  2. Igor Jablokov wrote:

    It was sent by a competitor trying to crash the interview. :-)

    2008 seems so far off…I guess we’ll have to move that date in by a couple years. ;-)