Help Lichess Study Voice Recognition

It could be helpful to mention the distinction between Round 1 and Round 2 right at the beginning.

After seeing that Round 2 was for a conversational tone, I realized I could have spoken more clearly in Round 1.
I put on some medium-volume background music in Round 3, and sometimes it would fail to submit a request at all unless I spoke loudly and clearly enough. I still ended up with only 10% accuracy, so I guess the recordings that got through were representative, but I wasn't sure whether it was capturing anything about my failed attempts.
Didn't Mozilla do something like this? Why doesn't Lichess use that data instead of reinventing the wheel?
Hey, I got this error message:

NotSupportedError: AudioContext.createMediaStreamSource: Connecting AudioNodes from AudioContexts with different sample-rate is currently not supported.

I'm using Firefox with a Babyface Pro external sound card and microphone, but my laptop's internal microphone didn't work either, failing with the same error.
@AsDaGo said in #4:
> Didn't Mozilla do something like this? Why doesn't Lichess use that data instead of reinventing the wheel?

Mozilla's Common Voice data sets can help train voice models, but Lichess is not building a voice model.

We are studying context-based disambiguation of commands.
I absolutely support any drive towards accessibility. Voice recognition is something I know a little about professionally, and what might be useful with coordinates is allowing users to select their own codewords to avoid confusion between, say, H and A. The NATO phonetic alphabet *might* work, but people tend to avoid learning that kind of thing unless they need to, plus different minds work in different ways. So for H, rather than "Hotel", it would not be difficult to program the interface to accept "horsey" or "hinterland" or "marshmallow" or whatever worked for the user. A sketch of what that could look like follows.

Anyway, Wordle-style, here are my scores (one run; I might do another, but encouraging others would probably be more useful):

Thanks for helping!
Here's how you did:

Round 1: 95%
Round 2: 70%
Round 3: 10%
Thanks for helping!
Here's how you did:

Round 1: 75%
Round 2: 95%
Round 3: 20%

I guess when I converse I am clearer.
@dkol

I just patched something that might get it working for you. Shift+F5 to refresh your cache and try again if you like.
@schlawg said in #6:
> Mozilla's Common Voice data sets can help train voice models, but Lichess is not building a voice model.
>
> We are studying context based disambiguation of commands.

Oh, ok, thanks for the clarification.

Keep up the great work!