Oh boy! It sounds like you are in a challenging situation. I was more interested in the technical challenges of health-related apps, but I wish you all the best with this app!
Inpatient care refers to any treatment where the patient is required to be admitted to a hospital or health care facility. On the other hand, the OECD defines outpatient care as care that "comprises medical and ancillary services delivered to a patient who is not formally admitted to a facility and does not stay overnight." Thus, studies like this try to break out "inpatient and outpatient care" as an attempt to represent the "real" service in question, while isolating things like administrative costs.
I think outpatient care means you don't stay in the hospital overnight. So you could have surgery, leave within an hour of being in the recovery area, and that would be outpatient.
Hey! Hope someone can help me with this question. What does the data exploration phase look like in a federated learning context? Most of the time (probably always), before applying any ML algorithm we'd look at the data and explore it. How can this be done in this case if the data is not available to inspect? Even in the example in the Flower blog post, the dataset is loaded directly without any pre-processing (which is usually needed in real life).
I'm researching federated learning. It's currently used in a number of contexts including the Google and Apple keyboards on your Android and iOS devices respectively.
Federated learning is a very active field of research. There are no simple frameworks that folks can easily operationalize. Most do not have problems that necessitate federated learning—although the growth in data privacy laws, public-private partnerships, and need to build models on privately held data (think commercial partnerships) are making it more and more prevalent.
I am studying aspects of compression (i.e., gradient compression) in federated learning. I also study problems and applications of federated learning in public-private partnerships (i.e., the cross-silo setting as opposed to the cross-device setting).
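One common answer to the exploration question above is "federated analytics": each client computes summary statistics over its own data locally, and only those aggregates (never raw records) are sent to the server, where they are merged for exploration. A minimal sketch, with hypothetical `localSummary`/`mergeSummaries` helpers that don't correspond to any particular framework's API:

```javascript
// Runs on each client: summarize one numeric feature locally.
// Only these aggregates leave the device, not the raw values.
function localSummary(values) {
  const n = values.length;
  const sum = values.reduce((a, b) => a + b, 0);
  const sumSq = values.reduce((a, b) => a + b * b, 0);
  return { n, sum, sumSq, min: Math.min(...values), max: Math.max(...values) };
}

// Runs on the server: merge per-client summaries into global statistics.
function mergeSummaries(summaries) {
  const n = summaries.reduce((a, s) => a + s.n, 0);
  const sum = summaries.reduce((a, s) => a + s.sum, 0);
  const sumSq = summaries.reduce((a, s) => a + s.sumSq, 0);
  const mean = sum / n;
  const variance = sumSq / n - mean * mean; // E[x^2] - E[x]^2
  return {
    n,
    mean,
    std: Math.sqrt(variance),
    min: Math.min(...summaries.map((s) => s.min)),
    max: Math.max(...summaries.map((s) => s.max)),
  };
}

// Two clients explore their local data; the server only sees aggregates.
const clientA = localSummary([1, 2, 3]);
const clientB = localSummary([4, 5]);
const overall = mergeSummaries([clientA, clientB]);
// overall.mean === 3, overall.min === 1, overall.max === 5
```

In practice you'd also add differential-privacy noise or secure aggregation on top, since even histograms and means can leak information about small clients.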
Almost a year ago I decided to start learning Russian so that I can talk with my girlfriend in her native language (the things we do for love!).
Initially I tried the usual things like Duolingo, Babbel and some other apps. Out of those, the only one I found useful was Duolingo, because it gets you started pretty quickly.
However, I got stuck after that. I couldn't see myself making any progress. Then I stumbled upon the Comprehensible Input theory and TPRS and since then I've been studying Russian using a method that loosely follows these. Here's what I do:
- I find short stories, news articles, social media posts etc online.
- I read those texts and mark down the new words as I learn them. I add those words to a flashcard app and practice them using spaced repetition (SRS). I use an app called Ulangi.
- I ask my girlfriend (a native speaker) to ask me short questions about the text and I have to answer in my target language.
- Once I feel comfortable with that text I repeat the process with the next one.
And it works (at least for me). I grew my vocabulary immensely, I can acquire grammar rules naturally (like I did with my native language) and I get to actually speak the language from day 1. As an added bonus I get to learn a lot about the culture of my target language.
However, translating and saving words to my vocabulary became tedious, so I decided to automate the whole process and started building a tool for myself. Then I realized that it might be useful for others, so I made it public. You can use it for free at Talkabl.com
On mobile, I just tested the default French lesson. Right now, the 'dictionary' seems to be displayed depending on where the last appearance of the word in the text is. This means that if I select a word in the first paragraph but the word also appears in the last paragraph, I have to scroll to the bottom to see its definition (and add it to 'my list'). (ex: 'protestaires' in the default FR lesson) The column, in mobile view, gets bumped to the bottom.
I don't think you have to go as far as a modal window - something as simple as 'position:fixed' would let it pop on top of the text (though right now, the dictionary box doesn't have a defined independent class and is not contained). It's probably best for the large-screen view as well: you don't want your text body to be moving around while reading.
But seriously, this is one of the simplest, smartest and most extensible language learning tools I've come across. Thank you for sharing it.
edit//
It's a FF/WebKit quirk: because the parent element (.bx--content) has a 'transform' property applied to it, it becomes the containing block for its descendants and prevents the child div from actually responding to 'position:fixed'.
Hey there! Wow thanks for the kind words :) Really glad you liked it.
Yes, I know that the dictionary experience on mobile is suboptimal. When I started this project I thought most people would access it on desktop, but I couldn't have been more wrong :) Most people use it on mobile. This week I released a couple of audio fixes for mobile browsers but didn't get to the dictionary issues. I will fix it pretty soon!
Thanks for noticing and keep using it and sharing your thoughts.
Awesome.
An attempt to create a lesson fails with:
"An error occured while creating the lesson
Network error: Failed to fetch"
So I retried.
Now I have two copies of the same lesson.
How can I delete one?
Also, is the text-to-speech defined by the voices on the local machine? Because I'm getting the playback in native language only. (The target lang voice is installed...) (edit: had french/canada installed and I guess it doesn't count...)
Also, is there a more appropriate place for me to provide feedback/ask questions?
Atm there is no delete button but I'll add that to the backlog. Thanks for sharing!!
The text-to-speech uses Speech synthesis from the Web Speech API and I wish it worked seamlessly on all browsers but it doesn't. Are you using it on mobile? If yes and your device is a Samsung one please check that you are using Google's TTS engine and not Samsung's.
"You can pick which one by going to the Settings app, then Controls->Language and input->Text-to-speech options. Select the gear icon next to Google Text-to-speech Engine, then under Language you can update the exact locale you want to use. If you select "Install voice data" you can even select from a sample of different voices for some locales. You need to restart the device after changing this setting for it to take effect."
Yes, please shoot me an email at admin @ talkabl.com and let's pick it up from there :) Looking forward to hearing your thoughts.
Very interesting. It is great that one can export it to Anki too. Does it handle incremental exports properly? For example, I export the vocabulary today with 10 words and import it to Anki. Then I add 5 more words and export the whole set again. If I import it into Anki, will Anki remember my history with the old words?
That looks great! Definitely warrants its own submission in my opinion.
What do you have to do to support extra languages? Perhaps ensure font support if loading fonts from the web, and since the dictionary comes from Wiktionary you do need to know what language it is, but other than that it should 'just work'. Anything else?
Hey thanks :) I haven't submitted it to HN yet, but I did in more relevant communities such as /r/languagelearning etc. Buuuut, HN surely has a lot of people interested in language learning too, so I will submit it at some point :)
Regarding extra languages - good question :) Thankfully, because of the way the code was written, not a lot of work is needed to support a new language. I went from just 1 language (Russian) to the ones I support now with just 1-2 days' worth of work. Having said that, some languages are trickier. For example, Japanese needed more work on how the tokens (words/characters) are parsed and split in the UI (to be honest I'm still not sure how well it works). Also, not all languages are created equal on Wiktionary: some have way more entries than others (e.g. Korean has fewer entries than Russian). Finally, I use some NLP libs for tokenization and lemmatization, and their accuracy varies by language.
The whole idea of the project is to experiment with how much language learning can be automated. Can we enable more people to learn a new language if they do not have access to native speakers or teachers? Automating some parts might require some sacrifices in the quality of the material, but quite frankly, from my own experience, the benefit outweighs the occasional wrong word here and there :)
I picked French and clicked the "Listen to the lesson" button. But what I got was my computer speaking French words as though they were English. Even my French accent, almost fifty years after failing my high school French exams, is better.
:-)
Can anyone tell me how to make it speak French French instead of Franglais? Apart from that it looks interesting.
Hey :) hahaha indeed the text to speech feature might sound unnatural sometimes. I use the Web Speech API but unfortunately it doesn't work on all browsers seamlessly.
Are you using it on mobile? If yes and your device is a Samsung one please check that you are using Google's TTS engine and not Samsung's.
"You can pick which one by going to the Settings app, then Controls->Language and input->Text-to-speech options. Select the gear icon next to Google Text-to-speech Engine, then under Language you can update the exact locale you want to use. If you select "Install voice data" you can even select from a sample of different voices for some locales. You need to restart the device after changing this setting for it to take effect."
Let me know if you got it working. You can also switch browser to check if it works on a different one. I really wish Web speech API was more consistent.
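As a rough sketch of how a page can avoid the "Franglais" fallback: the Web Speech API exposes `speechSynthesis.getVoices()`, and you can prefer an exact-locale voice with a fallback to any voice for the base language (so an fr-CA voice is still used when fr-FR isn't installed). The `pickVoice` helper below is hypothetical, not Talkabl's actual code:

```javascript
// Pick the best available voice for a locale from the list returned by
// speechSynthesis.getVoices(). Prefers an exact match (e.g. "fr-FR"),
// falls back to any voice whose base language matches (e.g. "fr-CA"),
// and returns null when no suitable voice is installed.
function pickVoice(voices, locale) {
  const base = locale.split("-")[0].toLowerCase();
  const exact = voices.find(
    (v) => v.lang.toLowerCase() === locale.toLowerCase()
  );
  if (exact) return exact;
  return (
    voices.find((v) => v.lang.toLowerCase().split("-")[0] === base) || null
  );
}

// In the browser, usage would look something like:
//   const u = new SpeechSynthesisUtterance(text);
//   u.voice = pickVoice(speechSynthesis.getVoices(), "fr-FR");
//   u.lang = "fr-FR";
//   speechSynthesis.speak(u);
```

One caveat: `getVoices()` can return an empty list until the browser fires the `voiceschanged` event, which is part of why behavior differs so much across browsers.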
I had to install the French voice from Microsoft. But Firefox still spoke English even when I chose French in the Windows Speech Settings. It works in Microsoft Edge but there is a different problem there: it won't stop when I hit the pause button.
Thanks for sharing! Cross-browser compatibility for the Web Speech API is really an issue. Unfortunately, different browsers behave differently (and not in a predictable way).
I'll have a look at this though. Thanks for sharing.
Yes, I can mirror this experience for Anki! I still find it very useful though.
Regarding comprehensible input theory: I have been using this method consistently for almost a year now to teach myself Russian.
It really, really works. Initially I focused on learning grammar through online courses etc., but it didn't work for me. When I started focusing on consuming increasingly harder input in Russian (news articles, tweets, short stories etc.) I saw a huuuge difference! You grow your vocab tremendously fast and grammar rules are acquired naturally. I'm not saying it's easy, just that the method really worked for me.
I tried LingQ to help me, and even though it is a very good resource, the UX is incredibly bad and the free version is even more limited. To the point that I decided to create my own tool, talkabl.com, and open it up for free to any language learner.
It's an ongoing experiment on how much of the language learning can be automated :)
This is fantastic. The LingQ UX is truly horrible and it has way too many unnecessary features, but as far as I knew it was the only thing available with what it offers. I end up paying a monthly subscription, even though I use a very small subset of its features, just so I can upload ebooks to it.
I've had the same experience with comprehensible input while learning Dutch. The grammar acquisition is what I've found most interesting. I still know next to nothing about formal Dutch grammar rules, but I was able to pass a few B2 practice exams with good scores regardless. I could pick out the correct answer on the grammar questions through process of elimination because all of the other answers just don't feel right. It's also been great for those words and phrases that don't translate well to English. Just by seeing them over and over in different contexts the meaning just slowly accumulates over time without ever seeing it translated or defined.
What really sold me on the method was that learning only through input actually produced equal levels of improvement in my ability to produce output. My speaking and writing skills seem to consistently lag just a bit behind my reading and listening skills. I think it just takes a bit longer to learn new words and phrases at a deep enough level that they come naturally when producing output.
Yeah, this is more or less how I feel about the method too. Another big plus is that, unlike other methods, studying with texts gives you unlimited lessons :) Your lessons are only as limited as the corpus available for your target language online.
You should check out languagetools.io. Like LingQ, but with a much better UI, way cheaper and chill with non-premium users, and they fund a school in Africa as well.
1. Full stack software engineer (Node.js, MySQL, Elasticsearch)
2. Front end software engineer (Ember.js, HTML, CSS)
3. iOS software engineer
4. Growth marketing expert
What is Avocarrot:
Avocarrot is a native mobile ad network with offices in Silicon Valley and Athens. We are building the next generation of mobile advertising technology. Backed by leading corporations as well as institutional and angel investors from the Valley, our aim is to become the de facto standard in mobile app monetization. Our platform has served several billion ad requests and keeps growing at a fast pace.