Hacker News | molasses's comments

Would love to see the base of products like this open sourced.


I'm able-bodied, but years back I suffered tremendous pain that forced me to lie on my back and use the computer with just a keyboard. And TBH even that was a pain. I got quite far under OS X Tiger: I could browse the web, use a terminal, etc. But at the time it got me thinking about alternative interfaces, and the one thing that did catch my eye was 'dasher' for text entry, which could be controlled with a simple pointer. It made me wonder how far you could go with a pointer-based interface. I wasn't that fast while trying it out, but I could construct sentences with a bit of effort.

https://www.inference.org.uk/dasher/

I've been thinking about it some more recently, as AI could assist with word completion, and augmenting it with some AI helpers could really improve things, especially if the idea were extended.
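To make the word-completion idea concrete, here's a toy sketch of frequency-ranked prefix completion. The words and counts are invented for the example; a real assistant would use a language model conditioned on the whole sentence, not raw counts:

```python
# Toy prefix completion: rank candidate words for a typed prefix by
# (made-up) usage counts standing in for corpus statistics.
from collections import Counter

word_counts = Counter({
    "the": 500, "there": 120, "their": 100, "then": 90,
    "help": 60, "hello": 40, "helmet": 5,
})

def complete(prefix, k=3):
    """Return the k most frequent known words starting with prefix."""
    candidates = [w for w in word_counts if w.startswith(prefix)]
    candidates.sort(key=lambda w: -word_counts[w])
    return candidates[:k]

print(complete("the"))
print(complete("hel"))
```

In Dasher terms, these ranked candidates are what the interface would make "cheap" to reach with the pointer.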

Not wanting to sound like an ableist snob, but interfacing with traditional computers via a keyboard is hard, and the interface is clumsy; even touch devices and their keyboards are. My mum could never use a computer, but she has worked out how to use a tablet and find videos on YouTube. I keep meaning to introduce her to voice input, as this would really benefit her.

Despite the huge tech leaps with smartphones, tablets and whatnot, I do feel there has been a huge regression in basic communication between people, only exacerbated by the pandemic. Knowledge is easily available at people's fingertips and that's great, along with new channels of communication, but text-based comms have held many people back.


When we talk, our tongues tap patterns on the roof of the mouth and the backs of the teeth, so I wonder if AI processing could infer which words you are shaping from these sensors. Maybe it's possible to input text by mouthing words silently, without even opening your mouth. Kind of like how it's possible to eavesdrop from just the sound of keyboard clicks:

https://github.com/ggerganov/kbd-audio


Tongue contact might be sufficient (in linguistics, two of the axes of "pronunciation space" are "dental" (whether the tongue makes contact with the teeth) and "palatal" (whether the tongue makes contact with the palate)).

There are a number of other dimensions, however, that are equally important in the creation of word-sounds (e.g., whether the lips are pursed, whether the vocal folds are vibrating, whether the teeth make contact with the lips, where the tongue is located in the space of the mouth [for vowels], etc.), which would make determination from the dental/palatal axes alone pretty difficult, I think. But maybe with enough context you could get something predictive that is more than good enough, even if it never reaches deterministic territory.
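As a toy illustration of that "predictive with context" idea: several words can share the same dental/palatal contact signature, yet a prior over likely words can still break the tie. The signatures and frequencies below are entirely invented; real articulatory signals would be far noisier:

```python
# Toy decoder: words mapped to an invented tongue-contact signature,
# with made-up frequency counts standing in for contextual prediction.
lexicon = {
    # word: (contact signature, hypothetical corpus frequency)
    "tip": ("dental-palatal", 30),
    "dip": ("dental-palatal", 10),  # collides with "tip"
    "key": ("palatal", 25),
    "tea": ("dental", 40),
}

def decode(signature):
    """Return words matching a contact signature, most likely first."""
    matches = [w for w, (sig, _) in lexicon.items() if sig == signature]
    return sorted(matches, key=lambda w: -lexicon[w][1])

print(decode("dental-palatal"))  # ambiguous; the prior picks the winner
```

Swap the frequency prior for a language model over the preceding words and this is roughly the shape a real system would take.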


I think you're talking about subvocal recognition [1]. People are indeed using ML for it, but it's more complicated than it sounds. Still, I think it's only a matter of time before it's available to the average consumer, which I can't wait for because I've wanted something like this for a long time. I do my best thinking when I'm hiking, and I'd love to be able to dictate my thoughts on the move without looking like I'm talking to myself out loud (even though I am, I guess) in public.

[1] https://en.wikipedia.org/wiki/Subvocal_recognition


Several years ago, I was on a long solo drive and thinking about how I would like to be able to communicate with my computer in a subvocal manner. I stuck my pinky finger in my ear canal and "listened" to the deformations of the canal as I spoke, and thought "with a deformation scanner and good machine learning, this could totally work". Later, I registered the domain silentbuds.com to trial the idea but never pursued it. Just did some Googling and see that there are a few new research papers on this approach.


Dasher is really fun!

Remember playing a lot with it back when it was released. It left a lasting impression and broadened my view on input mechanisms.


I guess you can't talk while it is in.


Because we incorporate each user's custom 3D dental scan into the design process, the MouthPad^ is actually quite thin and doesn't prevent speech when worn. With our earliest test users, we have already explored using the MouthPad^ as a complementary tool to voice-control systems.

One example is asking a voice assistant to open “today’s news”, and then using the MouthPad^ to navigate through the options and select the desired link, which is captured briefly in our promotional video (a scene where Krystina is controlling her smartphone). Another example that can be found in our video is when Rocky NoHands uses his MouthPad^ to call his girlfriend, during which he is able to speak with ease. Of course, results may vary, but we have found all our test users are able to speak just fine so far.


The form factor seems similar to a retainer so it might not be much of an impediment to talking as long as you can turn it off and on.


Using Google to find specific websites only amounts to me typing "hacker news" or "facebook". That's about it. The bulk of my queries are question-based, but I'm also from a previous era where I throw keywords at Google rather than questions, which is a learned response from back then. I do 'ask' Google Assistant questions, and it doesn't do a bad job at it. I would love it to be a glorified PA and a generally knowledgeable 'person' I can go to for advice.

At the same time, I've containerized much of my interaction with Google so it can't draw on my past history, so I have no idea what I'm missing. It did have a tendency to feedback-loop a bit too much for me in the past, pigeonholing me into specific websites. There's a huge trust issue: you leak your identity in about four common regular queries, so I'm not so sure I can even beat it.


People kind of go through a contrarian phase. I read something the other day, probably on Twitter: to get the answer you want, you deliberately post the wrong answer and leech out the bile to get results.

I think there are still lots of people who are on-boarding, and over time they simmer down. Much of it is poor etiquette unwittingly forced on us by finger-fumbling interfaces.


I struggle reading threads on Twitter, just as I do on HN. If I could watch the thread grow organically, perhaps via a visual map, I'd stand a chance. Fork a thread on Twitter and content can end up totally buried. It's useless.

HN is far simpler in approach, but you only really stand a chance if you monitor your own thread. Kind of. So you just end up glancing at this and that, or trying to read something and then giving up. Threads only get traction for about a day, and people fall off reading them precisely because engagement requires a silly cognitive leap.

Natural conversations are a bit all over the place, but the brain is quite good at forming some kind of narrative, despite participants sometimes having wildly differing interpretations. These aren't even apparent in the moment. Rumination delays things, and going back to yesterday's conversations doesn't much happen, unless you have quite an intimate relationship with someone or you are very topic-focused.

Throw many disjointed feeds into the mix, and how on earth do you navigate them, let alone participate in them?

About the only thing that kind of works for me, thinking back, is something like Usenet, with basic topic threading and well-considered posts.

Apologies for adding to the noise.


I wonder if we could make a more semantic web by enforcing some metadata standards and having more piecemeal content.


I was reading some tech article the other day, something like "10 Best Foos", and there was no date on it. A commenter said "great article, but no date", and the author replied that the page was refreshed with evergreen content. You'd have had to be a mind reader to know that. But I guess at least I got my answer via a little Q&A on the page.


What was bump.me? Photo sharing is why Instagram, Facebook and WhatsApp are insanely popular. Sharing photos is just difficult.

In the past I've had to resort to putting a small web server on the phone just to get files off it via WiFi. But of course that's clunky.
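For what it's worth, the small-web-server trick can be as simple as Python's built-in `http.server`, run under something like Termux on Android. A minimal sketch (the function name and port handling are my own choices, not any standard tool):

```python
# Serve a directory over HTTP in a background thread so another device
# on the same WiFi network can fetch files from it in a browser.
import http.server
import socketserver
import threading
from functools import partial

def start_file_server(directory=".", port=0):
    """Serve `directory`; port=0 picks a free port. Returns (server, port)."""
    handler = partial(http.server.SimpleHTTPRequestHandler, directory=directory)
    server = socketserver.TCPServer(("", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

server, port = start_file_server()
print(f"Browse to http://<phone-ip>:{port}/ from the other device")
```

Call `server.shutdown()` when you're done; it's clunky, but it beats hunting for a cable.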

If peer-to-peer networking were easier, trustable, etc., it could be a real lifesaver. There's some feature in Google Files for WiFi sharing, but I haven't tried it.


You’d select a photo or any file to transfer, bump your phones together and ta-da, it was sent over. No accounts or friend lists necessary.

It used a mix of geolocation and wireless signals to determine the match and worked flawlessly. You could even do the same with a computer by bumping the phone against the spacebar!

This was over a decade ago. It’s very depressing to see great tech like this just disappear into the void.


I abandoned Bluetooth early on because I kept having issues. Stubborn pairing is still a pain.

For music, I bought a DLNA renderer (more like a Chromecast), and just assumed that lots of software could remotely talk to it. But about the only software that almost works with it is something with a poor UI from yesteryear on Android. And music service support is hit and miss. So I'm edging towards Bluetooth now.

That said, yesterday I resorted to CDs. And today, I've jacked a spare phone into an auxiliary port. And I won't use my phone for Bluetooth music mainly because if I walk out the room or want to take a call it all goes tits up.

I've had Bluetooth on Debian Linux on my ThinkPad for years, and different releases have been hit and miss for things like file transfer and address-book syncing. And that's not confined to Linux either.

When it works... It does feel like magic.

Really I want to easily route sound from one app to a particular device or devices, with easy remote management. Voice control is a bit hit and miss, but a hands-free remote is a good idea.

The UI is very esoteric, like when you are offered a list of Bluetooth services. A phone I had would offer itself as a remote device or something, but I never figured out how, or what it did. Then there's the worry that your phone might accidentally turn into a data access point.

