Hacker News

I'm curious, what does an example of this look like? (this is one of the fun things I'm working on right now)


> I'm curious, what does an example of this look like? (this is one of the fun things I'm working on right now)

I work with people with disabilities and IoT. If the project is open source I would love to get in touch.

Back to your question: that largely depends on what kind of interface is being used. Is there a screen, or is information conveyed only via text-to-speech? Users aren't going to want to keep track of their customized commands through just a voice interface.

An example: an alias command.

Terminology: utterance (a series of words); action (a programmatic action that does something).

1. (Previous utterances)(Some actions)

2. Alias (new utterance)

3. A new command is created by voice that performs the actions from step 1 whenever the new utterance from step 2 is spoken.

The new command encapsulates both the context of a command (command availability), if there is one, and the command itself. Think of it as a voice macro. Essentially, it allows you to complete a series of complex tasks in sequence with a single short voice command.
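The three steps above can be sketched as a small macro registry. This is a hypothetical illustration, not any particular product's API: every class and method name here is an assumption.

```python
# Hypothetical sketch of a voice "alias" (macro) registry.
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple


@dataclass
class CommandRegistry:
    # Recent (utterance, action) pairs -- step 1 in the thread above.
    history: List[Tuple[str, Callable[[], None]]] = field(default_factory=list)
    # New utterance -> list of actions to replay -- steps 2 and 3.
    aliases: Dict[str, List[Callable[[], None]]] = field(default_factory=dict)

    def run(self, utterance: str, action: Callable[[], None]) -> None:
        """Execute an action and remember it (step 1)."""
        action()
        self.history.append((utterance, action))

    def alias(self, new_utterance: str, n_last: int = 1) -> None:
        """Bind the last n actions to a new utterance (step 2)."""
        self.aliases[new_utterance] = [a for _, a in self.history[-n_last:]]

    def invoke(self, utterance: str) -> bool:
        """Replay the aliased actions in series (step 3)."""
        if utterance not in self.aliases:
            return False
        for action in self.aliases[utterance]:
            action()
        return True
```

For example, after running "turn on lamp" and "set brightness low", calling `alias("night mode", n_last=2)` lets a single "night mode" utterance replay both actions in order.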

Alternatively, the alias command spoken without an utterance could trigger a GUI popup showing the history of the last commands, where a user can select one by voice or touch.

This could work for both an LM and fixed commands. Commands would take priority over the LM for recognition.
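That priority rule is just a routing decision: check the fixed command table first, and only hand unmatched speech to the language model. A minimal sketch, with hypothetical names throughout:

```python
# Hypothetical routing: deterministic commands win over the language model.
from typing import Callable, Dict


def route(utterance: str,
          commands: Dict[str, Callable[[], str]],
          lm_fallback: Callable[[str], str]) -> str:
    """Try the fixed command table first; only unmatched speech goes to the LM."""
    action = commands.get(utterance.strip().lower())
    if action is not None:
        return action()            # deterministic command path
    return lm_fallback(utterance)  # free-form language-model path
```

This keeps user-defined aliases predictable: a macro always fires the same way, and the LM only sees input that no command claimed.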




