I developed a tool (https://github.com/smacke/ffsubsync) which can sync subtitles against each other (or even against an audio track), and this can be used in conjunction with other tools such as https://pypi.org/project/srt/ to combine multiple subtitle streams into a single stream. I've used this strategy to good effect to get both English and Chinese subtitles up at once.
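For the combining step, here's a minimal sketch of what I mean using the srt library (filenames are placeholders, and both files should already be synced, e.g. with ffsubsync):

import srt

# Read two already-synced subtitle files (paths are just placeholders).
with open("english.srt", encoding="utf-8") as f:
    english = list(srt.parse(f.read()))
with open("chinese.srt", encoding="utf-8") as f:
    chinese = list(srt.parse(f.read()))

# Concatenate the two streams, then sort by start time and renumber,
# so English and Chinese cues overlap in time within a single stream.
combined = srt.sort_and_reindex(english + chinese)

with open("combined.srt", "w", encoding="utf-8") as f:
    f.write(srt.compose(combined))

Players that render overlapping cues (e.g. mpv, VLC) will then show both languages at once.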
For Netflix and YouTube I actually use a language-learning Chrome extension called Migaku, which has this feature. When subs don't exist, I can also sometimes pull it off with another Chrome extension for dual subs (I forget what it's called) where I can download SRT files and load them up alongside a stream. Neither is reliable, and both require significant fiddling.
I've thought about this, although perhaps not framed the same way, and one of my suggestions is to vibe code in Rust. I don't know how well these models handle Rust's peculiarities, but I believe you should take all the safety you can get in case the AI assistant makes a mistake.
I think most of the failures of vibe-coding can be fixed by running the agent inside a sandbox (a container or VM) that doesn't have access to any important credentials.
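As a rough sketch of the kind of thing I mean (the image name and agent command are made up; the docker flags are real):

import os
import subprocess

# Run the coding agent in a throwaway container: no credentials mounted,
# no network access, and only the current project directory visible.
subprocess.run(
    [
        "docker", "run", "--rm",
        "--network", "none",           # nothing to exfiltrate to, nothing to install from
        "-v", f"{os.getcwd()}:/work",  # mount the project dir only, nothing from $HOME
        "-w", "/work",
        "my-agent-image",              # hypothetical image with the agent preinstalled
        "agent", "--task", "fix the failing tests",  # hypothetical agent CLI
    ],
    check=True,
)

Whether you drive it from Python or a shell alias doesn't matter much; the point is that the agent can only touch what you explicitly mount, never your SSH keys or cloud credentials.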
I think failures like this one, deleting files, etc., are mostly unrelated to the programming language. Rather, the LLM has a bunch of bash scripting in its training data, and it'll reach for that bash scripting when it hits errors that commonly appear near bash scripting online... which is to say, basically all errors in all languages.
I think the other really dangerous failure of vibe coding is if the LLM does something like:
cargo add hallucinated-name-crate
cargo build
In Rust, doing that is enough to own you. If someone is squatting on that name, they now have arbitrary access to your machine, since 'build.rs' runs arbitrary code during the build. Ditto for 'npm install'.
I don't really think Rust's memory safety or lifetimes are going to make any difference in terms of LLM safety.
That's insightful. So while Rust might help you program safely (write code free from certain classes of bugs), cargo has much (if not all) of the same supply-chain risk we see in development ecosystems like pip and npm. And your point about operating in the shell is also well-taken.
So yeah, I have to narrow my Rust shilling to just the programming piece. I concede that it doesn't protect the other parts of the development workflow.
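To make the pip parallel concrete: installing a source distribution runs the package's setup.py, so a squatted or hallucinated name can execute whatever it wants at install time. A minimal sketch (package name made up):

# setup.py of a hypothetical squatted package. This whole module is
# executed the moment someone runs `pip install hallucinated-name-package`
# from an sdist, before any of its "library code" is ever imported.
import subprocess
from setuptools import setup

subprocess.run(["sh", "-c", "echo arbitrary code runs here"])  # could be anything

setup(name="hallucinated-name-package", version="0.0.1")

Same class of problem as build.rs, just a different ecosystem.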
I think Rust is a bad example, but I think the general idea that the design of a programming language can help with the weaknesses of LLMs makes sense. Languages with easy sandboxing (like Deno where workers can be instantiated with their own permissions) or capability-based security could limit the blast radius of LLM mistakes or insecure library choices made by LLMs, while also giving similar benefits to human programmers and code reviewers.
Why is Rust a bad example? Of the code bases I've tried Claude on so far, it's done the best job with the Rust ones. I guess having all the type signatures there and meaningful feedback from the compiler help to steer it in the right direction.
Rust doesn't protect you much further than most typed memory-safe languages do; it won't stop an LLM from writing code to erase your filesystem or from importing a library that sounds useful but is full of malware.
To the author: I love this idea, but your blog has two problems that made it less enjoyable for me to read. The first is the pull quotes. I find them confusing and unnecessary, especially when they repeat sentences from the preceding paragraph. The second is that I got stuck on the moving graphs while scrolling on my phone. I suggest making them smaller with a different background color, or simply making them static images.
I feel very inspired by this, thanks for sharing! Didn't know this Run Streak Association existed but now I want to join. Congratulations on your immense achievement.
Very cool, I've often thought that such a short-range chat would be fun on an airplane. Not practical, but it could be neat to chat with the group in the air.
I specialize in computer networking in my day job. Most of what I do is Cisco routers, Cisco switches, and Cisco firewalls. I would be interested in learning more about cellular networks, but I haven't put any effort into exploring this for myself. If there is a track similar to CCNA → CCNP → CCIE, it isn't well-known (well, not known to me).
Same background as me; however, I moved into cellular/mobile. There aren't really any official routes that I know of in the industry, certainly nothing like a Cisco track.
The typical route is to work at a telco or IoT company as a network engineer or developer and naturally pivot into telco, learning on the job.
Vendors will run training courses when you buy their kit, which helps a little, but it's mostly self-learning or on-the-job experience.
Price point matters here. I keep getting ads for a competitor called Jumpspeak. They're very explicit that you're chatting with a bot, but their price is many times higher than Duolingo's subscription.
I just wrote a paper a few days ago arguing that "manual thinking" is going to become a rare and valuable skill in the future. When you look around, everyone is finding ways to do better using AI, and they're all finding amazing successes, but we're also unsure about the downsides. My hedge is that my advantage in ten years will be that I chose not to do what everyone else did. I might regret it; we will see.
If AI is going to be as economically efficient as promised, there is going to be no way to avoid using it altogether. So the trick will be to keep your thinking skills functional while still using AI for speedup. It's like focus in the age of the Internet: a rare skill, but not using the Internet isn't an option either.
Not all processes are the same, though. I strongly suspect any efficiency improvements will come in processes that didn't require much "thinking" to begin with. I use it daily, but mostly as a way to essentially type faster—I can read much faster than I can type, so I mostly validate and correct the autocomplete. All of my efforts to get it to produce trustworthy output beyond this seem to trail behind the efficiency of just searching the internet.
Granted, I'm blessed not to have much busywork; if I needed to produce corporate docs or listicles, AI would be a massive boon. But I also suspect AI will be used to digest those things back into small bullet points.
I figure that if AI gets as efficient as people seem to think, then spending a bunch of effort getting good at using it now is kind of pointless, because it's just going to get easier and easier to use.
Lately I've often been repeating the phrase "the process is the product." When you outsource the process, the product will be fundamentally different from what you would have delivered on your own. In my own knowledge work, the value of the reports I write is not in the report itself (nobody ever reads them...) but rather in the thinking that went into them and the hard-won wisdom and knowledge we created in our heads.