I'm deaf. Something close to standard Canadian English is my native language. Most native English speakers claim my speech is unmarked but I think they're being polite; it's slightly marked as unusual and some with a good ear can easily tell it's because of hearing loss.
Using the accent guesser, I have a Swedish accent. Danish and Australian English follow as a close tie.
It's not just the AI. Non-native speakers of English often think I have a foreign accent, too. Often they guess at English or Australian. Like I must have been born there and moved here when I was younger, right? I've also been asked if I was Scandinavian.
Interestingly I've noticed that native speakers never make this mistake. They sometimes recognize that I have a speech impediment but there's something about how I talk that is recognized with confidence as a native accent. That leads me to the (probably obvious) inference that whatever it is that non-native speakers use to judge accent and competency, it is different from what native speakers use. I'm guessing in my case, phrase-length tone contour. (Which I can sort of hear, and presumably reproduce well, even if I have trouble with the consonants.)
AI also really has trouble with transcribing my speech. I noticed that as early as the '90s with early speech recognition software. It was completely unusable. Even now AI transcription has much more trouble with me than with most people. Yet aside from a habit of sometimes mumbling, I'm told I speak quite clearly, by humans.
> AI also really has trouble with transcribing my speech. I noticed that as early as the '90s with early speech recognition software. It was completely unusable.
I don't know what your transcription use cases are, but you may be able to get an improvement by fine-tuning Whisper. This would require about $4 in training costs[1], and a dataset with 5-10 hours of your labeled (transcribed) speech, which may be the bigger hurdle[2].
1. 2000 steps took me 6 hours on an A100 on Colab, fine-tuning openai/whisper-large-v3 on 12 hours of data. I can share my notebook/script with you if you'd like.
2. I am working on a PWA that makes it simple for humans to correct initial automated transcriptions, so the corrected dataset can be fed back into the pipeline for fine-tuning, but it's not ready yet.
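A minimal sketch of such a fine-tune with Hugging Face Transformers, assuming an `audiofolder` dataset layout (audio clips plus a metadata.csv with a "transcription" column); the hyperparameters and paths are illustrative, not the exact notebook mentioned above:

```python
# Sketch: fine-tune Whisper on a small personal speech dataset.
from datasets import Audio, load_dataset
from transformers import (Seq2SeqTrainer, Seq2SeqTrainingArguments,
                          WhisperForConditionalGeneration, WhisperProcessor)

model_name = "openai/whisper-large-v3"
processor = WhisperProcessor.from_pretrained(model_name, language="en", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(model_name)

# Hypothetical layout: my_speech/ holds audio files + metadata.csv.
ds = load_dataset("audiofolder", data_dir="my_speech")["train"]
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    # Log-mel features for the encoder, token ids for the decoder labels.
    audio = batch["audio"]
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    batch["labels"] = processor.tokenizer(batch["transcription"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)

class Collator:
    """Pad audio features and labels separately; mask label padding with -100."""
    def __call__(self, features):
        batch = processor.feature_extractor.pad(
            [{"input_features": f["input_features"]} for f in features],
            return_tensors="pt",
        )
        labels = processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features], return_tensors="pt"
        )
        batch["labels"] = labels["input_ids"].masked_fill(
            labels["attention_mask"].ne(1), -100
        )
        return batch

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(
        output_dir="whisper-personal",   # hypothetical output path
        per_device_train_batch_size=4,
        gradient_accumulation_steps=4,   # effective batch size of 16
        learning_rate=1e-5,
        max_steps=2000,                  # matches the step count in [1]
        fp16=True,                       # needs a CUDA GPU, e.g. the A100
    ),
    train_dataset=ds,
    data_collator=Collator(),
)
trainer.train()
```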
I'm also deaf, and I took 14 years of speech therapy. I grew up in Alabama. The only way you would know I'm from the South is because of the pin-pen merger[1]. Otherwise, you'd think I grew up in the American Midwest, due to how my speech therapy went. Almost nobody picks up on it, unless they're linguists who already know about the pin-pen merger.
I’m aware of the merger, but I literally can’t hear a difference between the words. I certainly pronounce them the same way.
I also think merry-marry-Mary are all pronounced identically. The only way I can conceive of a difference between them is to think of an exaggerated Long Island accent, which, yeah, I guess is what makes it an accent.
That's exactly what the pin-pen merger is! As you know, it's not limited to pin/pen, and hearing ability (in my case, profound hearing loss) is not related to the ability to hear the difference. I don't understand the linguistics, but my very bad understanding is that there's actual brain chemistry here that means that you _can't_ hear the difference because you never learned it, never spoke it, and you pronounce them the same.
My partner is from the PNW and she pronounces "egg" as "ayg" (like "ayyyy-g") but when I say "egg" she can't hear the difference between what I'm saying and what she says. And she has perfect hearing. But she CAN hear the difference between "pin" and "pen", and she gets upset when I say them the same way. lol
But yeah, that's one of the things that makes accents accents. It's not just the sounds that come out of our mouths but the way we hear things, too. Kinda crazy. :)
When I was listening to some of the samples on the page you linked (pronunciation of “when”), it really seemed to me like the difference they were highlighting was how much the “h” was pronounced. Even knowing what I was listening for, it was very much like my brain was just refusing to recognize the vowel-sound distinction. So I think you must be right about it being a matter of basic brain chemistry.
In the example of the reverse pen/pin merger (HMS Pinafore) on that page, I couldn’t hear “penafore” to save my life. Fascinating stuff.
I used to think of the movie “Fargo” and think “haha comical upper midwestern accents.” And then at some point I realized that the characters in “No Country for Old Men” probably must sound similarly ridiculous to anyone whose grandparents and great grandparents didn’t all speak with a deep, rural West Texas accent - which mine did, so watching the movie it just seemed completely natural for the place and time at a deeply subconscious level.
They are the same phoneme for me in US Eastern suburbia; the only difference is a subtle shift in how long you drag it out. "merry" is faster than "marry", which is sometimes but not always faster than "Mary". Most UK accents seem to drag the proper name out an additional beat, and for some of them there's a slight pitch shift that sounds like "ma-ery", at its most extreme in Ireland (this is one early shibboleth by which I recognized Irish people before I really picked up on the other parts of the accent).
As someone with a German accent, to me the difference between merry and marry is the same as between German e (in this case ɛ in IPA) and ä (æ in IPA). Those two sounds are extremely close, but not quite the same. According to the Oxford dictionary that is true in British English, while it shows the same pronunciation (ɛ) for both in American English.
Wow, I'm not deaf, but almost everything you mentioned applies to me too. I've never met anyone else who has experienced this before, yet all of your following points apply exactly to me:
> standard Canadian English is my native language
> Most native English speakers claim my speech is unmarked
> Non-native speakers of English often think I have a foreign accent, too. Often they guess at English or Australian. Like I must have been born there and moved here when I was younger, right?
> They sometimes recognize that I have a speech impediment but there's something about how I talk that is recognized with confidence as a native accent.
At least 2 or 3 times a year, someone asks me if I'm British, but my parents and I were born in Canada, and I've never even been to England, so I'm not really sure why some people think that I have a British accent. Interestingly, the accent checker guesses that my accent is:
American English 89%
Australian English 3%
French 3%
I was born in Brooklyn, to Yiddish-speaking parents, and Yiddish was my first language. I now spend half my time in California and half in Israel. The accent checker said 80% American English, 16% Spanish, and 4% Brazilian Portuguese. In Israel they ask if I’m Russian when I speak Hebrew. In the US, people ask where I’m from all the time because my accent—and especially my grammar—is odd. The accent checker doesn’t look for grammatical oddities, but that’s where a lot of my “accent” comes from.
Yep, I'm also deaf (since age 6), went through a lot of speech therapy, and have a very pronounced deaf accent. I live in the midwestern US (specifically, Ohio) and at least once a year I get asked where I'm from - England being the most common guess, but I've also had folks ask if I'm Scottish or Australian.
AI struggles massively with my accent. I've gotten the best results out of Whisper Large v2 and even that is only perhaps 60% accurate. It's been on my todo list to experiment with using LLMs to try to clean it up further - mostly so I can do things like dictate blog post outlines to my phone on long car rides - but I haven't had as much time as I'd like to mess around with it.
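The LLM-cleanup idea is straightforward to prototype. A minimal sketch with the OpenAI Python client; the model name and prompt here are placeholders, not a tested recipe:

```python
# Sketch: ask an LLM to repair a noisy Whisper transcript.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def clean_transcript(raw: str) -> str:
    """Fix likely mis-transcriptions without inventing new content."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any capable chat model
        messages=[
            {"role": "system", "content": (
                "This is an automatic transcription of speech with a deaf "
                "accent; it contains frequent phonetic errors. Correct "
                "obvious mis-transcriptions, preserve the speaker's wording, "
                "and do not add content."
            )},
            {"role": "user", "content": raw},
        ],
    )
    return response.choices[0].message.content

print(clean_transcript("hear is my blog post outline about except detection"))
```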
> Your accent is Dutch, my friend. I identified your accent based on subtle details in your pronunciation. Want to sound like a native English speaker?
I'm British; from Yorkshire.
When letting it know how it got it wrong there's no option more specific than "English - United Kingdom". That's kind of funny, if not absurd, to anyone who knows anything of the incredible range of accents across the UK.
I also think the question "Do you have an accent when speaking English?" is an odd one. Everyone has an accent when speaking any language.
I agree there is no such thing as a "British accent", though I'm lucky that my mockney lilt is considered to be one, but Dutch, Danish and Yorkshire are very similar for historical reasons so it's somewhat understandable for you to be detected as Dutch in this app.
I find Danes speaking Danish to sound like a soft Yorkshire accent, and the vowels that Yorkies use are better written in Danish, like phøne.
> I also think the question "Do you have an accent when speaking English?" is an odd one. Everyone has an accent when speaking any language.
Sure, I agree. But look at it from the perspective of a foreigner living in an English-speaking country, which is probably their target demographic.
We know that as soon as we open our mouth the locals will instantly pigeonhole us as "a foreigner". No matter how good we might be in other areas, we will never be one of "them". The degree of prejudice that may or may not exist against us doesn't matter as much as the ever present knowledge that the locals know that we are not one of them, and the fear of being dismissed because of that.
Nobody likes to stand out like that, particularly when it so clearly puts you at a disadvantage. That sort of insecurity is what this product is aimed at.
It's not ethical to lie to people about whether they need something you're selling, especially if you're playing on their fears of vulnerability to make the sale. Laundering the lies through an AI model doesn't make it any less bad.
BoldVoice is very clear about being an American accent "training app", so that's not (necessarily) what's happening here, but the point remains.
Yeah, it's the same with having just one "German" accent. Swiss and Austrian speakers, but also north vs. middle vs. south Germans, still sound different - even when they speak English.
It's quite offensive. English is my native tongue, I got a perfect IELTS score, and one of my parents was an English professor. But my accent makes me less than "native".
It's often required for immigration purposes. Countries/universities will let you off if you're coming from a country that has English as its main language or if you've studied a degree in the language, but they often won't if you're a native English speaker living elsewhere.
The first two days were a shock, as I felt it was a different language. But after some time, I adjusted. And I find both Singlish pronunciation and phrases endearing.
For example, the first time I heard "ondah-cah?" I was puzzled. Then I understood that it was "Monday can?". Which, as I learned, means "Would Monday work for you?".
The Australian-Vietnamese continuum is well-explained by Australia being the geographically nearest region which can supply native English language teachers to English language learners in Vietnam, rather than by any intrinsic phonetic resemblance between Vietnamese and Australian English.
> This voice standardization model is an in-house accent-preserving voice conversion model.
Not sure this model works really well. As a native French/Spanish speaker, I can immediately recognize an actual French or Spanish person speaking English, but the examples here are completely foreign to me. If I had to guess where the "French" accent was from, I would have guessed something like Nigeria. For example, Spanish speakers have a very distinct way of pronouncing "r" in English that is just not present here. I would have been unable to correctly guess French or Spanish for the ~10 examples present in each language (mayyybe 1 for French).
It's probably an artifact of them lumping together all varieties/dialects of a given language. I don't speak Spanish, but I know that the R is one of the things that's different in e.g. Argentina.
For sure the voice standardization model is not perfect, but it was important for us to do, especially for voice privacy. It's still pretty early tech.
Since our own accents generally sound neutral to ourselves, I would love someone to make an accent-doubler - take the differences between two accents and expand them, so an Australian can hear what they sound like to an American, or vice-versa
I agree.
I think there are places in the world where people consider their accent to be 'neutral', but I'm pretty sure no-one from my neck of the woods would think that.
I've found that when I'm listening to recordings of me my accent really sticks out to me in a way that's completely inaudible when listening to myself live. This happens with both English and my native German.
What does mono-tonal mean, and what is an expressive ebook? I assume you are not American-born? I had been of the understanding that rhythm was more important than the exact sounds for comprehension.
I just got a project running whereby I used Python + pdfplumber to read in 1100 PDF files, most of my Humble Bundle collection. I extracted the text and dumped it into a 'documents' table in PostgreSQL. Then I used sentence transformers to reduce each 1K chunk to a single 384D vector, which I wrote back to the DB. Then I averaged these to produce a document-level embedding as a single vector.
Then I was able to apply UMAP + HDBSCAN to this dataset, and it produced a 2D plot of all my books. Later I put the discovered topics back in the DB and used them to compute tf-idf for my clusters, from which I could pick the top 5 terms to serve as crude cluster labels.
It took about 20 to 30 hours to finish all these steps and I was very impressed with the results. I could see my cookbooks clearly separated from my programming and math books. I could drill in and see subclusters for baking, bbq, salads etc.
Currently I'm putting it into a two-container docker compose file: base PostgreSQL + a Python container I'm working on.
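A condensed sketch of the pipeline described above, with hypothetical file paths and embedding model, and the PostgreSQL reads/writes omitted:

```python
# Sketch: extract text, embed 1K-char chunks, average to a document
# vector, cluster with UMAP + HDBSCAN, label clusters by top tf-idf terms.
import glob
import hdbscan
import numpy as np
import pdfplumber
import umap
from sentence_transformers import SentenceTransformer
from sklearn.feature_extraction.text import TfidfVectorizer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

texts, doc_vectors = [], []
for path in glob.glob("books/*.pdf"):  # hypothetical path
    with pdfplumber.open(path) as pdf:
        text = "\n".join(page.extract_text() or "" for page in pdf.pages)
    chunks = [text[i:i + 1000] for i in range(0, len(text), 1000)]
    if not chunks:
        continue
    texts.append(text)
    doc_vectors.append(model.encode(chunks).mean(axis=0))  # doc-level vector

coords = umap.UMAP(n_components=2, metric="cosine").fit_transform(np.vstack(doc_vectors))
labels = hdbscan.HDBSCAN(min_cluster_size=5).fit_predict(coords)

# Crude cluster labels: top-5 tf-idf terms per cluster (-1 = HDBSCAN noise).
by_cluster = {}
for text, label in zip(texts, labels):
    if label != -1:
        by_cluster.setdefault(label, []).append(text)
vec = TfidfVectorizer(stop_words="english", max_features=20_000)
tfidf = vec.fit_transform([" ".join(docs) for docs in by_cluster.values()])
terms = np.array(vec.get_feature_names_out())
for label, row in zip(by_cluster, tfidf.toarray()):
    print(label, terms[row.argsort()[-5:][::-1]])
```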
This is fascinating in theory, but I'm confused in practice.
When I play the different recordings, which I understand have the accent "re-applied" to a neutral voice, it's very difficult to hear any actual differences in vowels, let alone prosody. Like if I click on "French", there's something vaguely different, but it's quite... off. It certainly doesn't sound like any native French speaker I've ever heard. And after all, a huge part of accent is prosody. So I'm not sure what vocal features they're considering as "accent"?
I'm also curious what the three dimensions are supposed to represent? Obviously there's no objective answer, but if they've listened to all the samples, surely they could explain the main contrasting features each dimension seems to encode?
Apparently Persian and Russian are close, which is surprising to say the least. I know people keep getting confused about how Portuguese from Portugal and Russian sound close, but the Persian connection is new to me.
Idea: Farsi and Russian both have a simple inventory of vowel sounds and no diphthongs, making it hard (and obvious) when attempting to speak English, which is rife with them and with many different vowel sounds.
While Persian has only two diphthongs and 6-8 vowels, other languages of Iran are full of them (e.g. Southern Kurdish speakers can pronounce 12+1 vowels and 11 diphthongs). I'd find it funny if all Iranians spoke English with the Persian accent.
I did research on accents, pronunciation improvement, phoneme recognition, the Kaldi ecosystem, etc. … nothing has really changed in the public domain in the past few years. There isn't even an accurate open-source dataset: every self-claimed manually-labelled dataset with 10k+ hours was partly done with automation. Next issue: modern models operate in a different latent space, often with 50ms chunks, while pronunciation assessment requires much better accuracy. Just try to say "B" out loud - there's a silent part gathering energy in the lips, the loud part, and everything that resonates after. The worst part is that there are too many ML papers from last-year students or junior PhD folks claiming success or faking improvements, etc.
The article itself is just a vector projection into 3D space … the actual reality is much more complex.
Any comments on pronunciation assessment models are greatly appreciated
You are right, and I don't think the incentives exist to solve the issues you describe, because many of the building blocks currently being built are aligned to erase subtle accent differences: neural codecs and transcription systems such as Whisper want to output clean/compressed representations of their inputs.
I don’t think I’m using it as a metaphor? To “have interesting latent spaces” just means you have access to the actual weights and biases, the artifact produced by fine-tuning/training models, or you can somehow “see” activations as you feed input through the model. This can be turned into interesting 3D visualizations and reveal “latent” connections in the data which often align with and allow us to articulate similarities in the actual phenomena which these “spaces” classify.
Not many people have the privilege of access to these artifacts, or the skill to interpret these abstract, multi-dimensional spaces. I want more of these visualizations, with more spaces which encode different modalities.
It would be interesting to do a wider test like this, but instead of trying to clump people together into "American English" and "British English", make the data point "in which city do people speak like you do?" and create a geographic map of accents.
I'm from the south of Sweden and I've had my "accent" made fun of by people from Malmö just because I grew up outside of Helsingborg, because the accent changes that much in just 60 kilometers.
Fascinating! How did you decouple the speaker-specific vocal characteristics (timbre, pitch range) from the accent-defining phonetic and prosodic features in the latent space?
We didn't explicitly. Because we finetuned this model for accent classification, the later transformer layers appear to ignore non-accent vocal characteristics. I verified this for gender for example.
When people mention a single "British accent", in 99% of the cases it's just a more widely understood shorthand for Received Pronunciation. I don't see how that's bad or wrong, considering how common it is in education.
It's not common any more; very few people really speak RP these days. The more usual accent that people might think of is sometimes called "Standard Southern British" (I've heard "BBC English" as well).
I mean, if you want to be like that, you could generalize that statement to "the fact that they believe there to be a single `$LANGUAGE_OR_REGION` accent means this can be quickly discounted as nonsense". Other languages, and other varieties of English, have regional variation as well, after all--although in the case of other languages, I'll grant that the accents of, say, two German speakers from different regions might not be as distinct from each other in English as they are in German.
At any rate, I was looking forward to finding out what the accent oracle thought of my native US English accent, which sounds northern to southerners and southern to northerners, but I guess it'd probably just flag it as "American".
Very nice viz. It reminds me of the visualizations people used to do of the MNIST dataset back when the quintessential ML project was “training a handwritten-digits classifier”:
https://projector.tensorflow.org/
I am curious - why UMAP and not t-SNE? (See https://pair-code.github.io/understanding-umap/) When I saw the vis, there was a collection of lines, which looks like an artifact. t-SNE (typically) gives more "organic" results of blobs, provided you set perplexity high enough.
Also, while I admire the examples of instances, it would be interesting to see a map of the original languages - which is close to which, in terms of their English accents.
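For what it's worth, comparing the two projections takes only a few lines with umap-learn and scikit-learn; here X is a stand-in array, not the post's actual embeddings:

```python
# Sketch: project the same embeddings with UMAP and t-SNE side by side.
import numpy as np
from sklearn.manifold import TSNE
import umap

X = np.random.rand(500, 256)  # placeholder for real accent embeddings

umap_coords = umap.UMAP(n_components=2, n_neighbors=15).fit_transform(X)
# Higher perplexity averages over more neighbors and tends to produce
# smoother, blob-like clusters instead of thin line artifacts.
tsne_coords = TSNE(n_components=2, perplexity=50).fit_transform(X)
```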
Why do the voices all sound so similar? I'm not talking about accent, I'm talking about the pitch, timbre, and other qualities of the voice themselves. For instance, all the phrases I heard sounded like they were said by a medium-set 45 year old man. Nothing from kids, the elderly, or people with lower / higher-pitch voices. I assume this expected from the dataset for some reason, but am really curious about that reason. Did they just get many people with similar vocal qualities but wide ranges of accents?
> By clicking or tapping on a point, you will hear a standardized version of the corresponding recording. The reason for voice standardization is two-fold: first, it anonymizes the speaker in the original recordings in order to protect their privacy. Second, it allows us to hear each accent projected onto a neutral voice, making it easier to hear the accent differences and ignore extraneous differences like gender, recording quality, and background noise. However, there is no free lunch: it does not perfectly preserve the source accent and introduces some audible phonetic artifacts.
> This voice standardization model is an in-house accent-preserving voice conversion model.
I'm kind of curious if it would be possible for it to use my own voice but decoupled from accent. I.e. could it translate a recording from my voice to a different accent but still with my voice. If so, I wonder if that makes it easier for accent training if you can hear yourself say things in a different accent.
That would be interesting for sure, but considering you don't hear yourself the same way someone else or a mic does, I'm not sure it would have the benefit you're expecting.
All the accents sound like somebody from... somewhere in the third world...? but with a small trace of the named accent.
I don't know if that's intended - maybe the different recordings are not supposed to sound like their label but like a foreigner who learned English while around people with that accent?
Its second choice was the place I live, and third place was where I'm from, so not too bad overall. I have been told I have a very ambiguous accent, though.
Tried again and this time it got me. Second place is still Swedish.
Looking at the UMAP visualisation, there is a South African cluster overlapping with a Swedish cluster, so makes sense I guess.
It would be really cool if it could highlight the parts of your speech that gave your accent away. It guesses mine correctly most of the time (though not the first time I tried), but it also lets me know my accent is pretty light.
It got me: native English speaker with a British accent.
I was hoping it might drill down into regional accents though, there is a huge variety in the UK. I have a Midlands accent which can occasionally confuse non-native speakers.
Good question! It's likely because there are lots of different accents of Spanish that are distinct from each other. Our labels only capture the native language of the speaker right now, so they're all grouped together but it's definitely on our to-do list to go deeper into the sub accents of each language family!
Spanish is one of those languages I would love to see as a breakdown by country. I’m sure Chilean Spanish looks very different from Catalonian Spanish.
Not sure, could be the large number of Spanish dialects represented in the dataset, label noise, or something else. There may just be too much diversity in the class to fit neatly in a cluster.
Also, the training dataset is highly imbalanced and Spanish is the most common class, so the model predicts it as a sort of default when it isn't confident -- this could lead to artifacts in the reduced 3d space.
Yeh, we would've loved to see that too. It's on our roadmap for sure. Same for some of the other languages with a large number of unique accents, e.g. French, Chinese, Arabic, etc.
Fascinated by the cluster of Australian, British and South African. As an Australian living in UK, I hear an enormous difference between these accents - even just in the British ones, the Yorkshireman and the Geordie stick out like a sore thumb to me - the narcissism of small differences perhaps. Interestingly, my partner, who is from England, often says, of various Australians we hear (either on TV or my friends), that they sound British to her. I, meanwhile, can pick an Australian from very few words. What are we hearing differently? It is a mystery to me.
This is a fascinating look at how AI interprets accents! It reminds me of some recent advancements in speech recognition tech, like Google's Dialect Recognition feature, which also attempts to adapt to different accents. I wonder how these models could be improved further to not just recognize but also appreciate the nuances of regional accents.
Hearing different things, as it were.