
Yeah my reaction was:

- The class of threat is interesting and worth taking seriously. I don't regret spending a few minutes thinking about it.

- The idea of specifically targeting people looking for crypto jobs from sketchy companies as the victims for your crypto-theft malware seems clever.

- The text is written by AI. The whole story is a bit weird, so it's plausible this is a made up story written by someone paid to market Cursor.

- The core claim, that using LLMs protects you from this class of threat, seems flat wrong. For one thing, in the story, the person had to specifically ask the LLM about this specific risk. For another, a well-done attack of this form would (1) be tested against popular LLMs, (2) perhaps work by tricking Cursor and similar tools into installing the malware, without the user running anything themselves, or (3) hide the shellcode in an `npm` dependency, so that the attack isn't even in the code available to the LLM until it's been installed, the payload delivered, and presumably the tracks of the attack hidden.


> be tested against popular LLMs, perhaps work by tricking Cursor and similar tools into installing the malware, without the user running anything themselves

My sense is that the attack isn't nearly as sophisticated as it looks, and the attackers out there aren't really thinking about things on this level — yet.

> Hide the shellcode in an `npm` dependency

It would have to be hidden specifically in a post-install script or similar. Which presumably isn't any harder, but.
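
As a concrete illustration of where that hook lives, here's a minimal sketch in Python (not a vetted auditing tool, and it assumes a conventional node_modules/ layout) that lists which installed npm packages declare install-time lifecycle scripts, i.e., the preinstall/install/postinstall hooks where a payload of this sort would have to hide:

    import json
    from pathlib import Path

    # Rough sketch: list installed npm packages that declare install-time
    # lifecycle scripts (the hooks that run arbitrary code on `npm install`).
    HOOKS = ("preinstall", "install", "postinstall")

    def find_install_hooks(node_modules):
        # Cover both plain packages (foo/) and scoped packages (@scope/foo/).
        for pattern in ("*/package.json", "@*/*/package.json"):
            for manifest in sorted(node_modules.glob(pattern)):
                try:
                    scripts = json.loads(manifest.read_text()).get("scripts", {})
                except (OSError, json.JSONDecodeError):
                    continue  # unreadable or malformed manifest; skip it
                hooks = {name: cmd for name, cmd in scripts.items() if name in HOOKS}
                if hooks:
                    yield manifest.parent.name, hooks

    if __name__ == "__main__":
        for package, hooks in find_install_hooks(Path("node_modules")):
            print(package, hooks)

Seeing a hook listed obviously doesn't mean a package is malicious (plenty of legitimate packages compile native code on install); it just narrows down where to look.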


They could have made it a setting, with an explanation of its security benefits, so that folks who are paranoid can take advantage of it.

A relevant threat scenario is when you're using your phone in a public place. Modern cameras are good enough to read your phone screen from a distance, and it seems totally realistic that a hacked airport camera could capture email/password/2FA combinations when people log into sites from the airport.

Ideally, you want the workflow to be that you can copy the secret code and paste it, without the code as a whole ever appearing on your screen.
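
To make that workflow concrete, here's a minimal sketch in Python, assuming the third-party pyotp and pyperclip packages are installed (pip install pyotp pyperclip); the secret shown is a placeholder for illustration, not a real one. It computes the current TOTP code locally and puts it straight on the clipboard, without ever printing it to the screen:

    import pyotp      # TOTP code generation
    import pyperclip  # cross-platform clipboard access

    # Placeholder base32 secret, for illustration only.
    TOTP_SECRET = "JBSWY3DPEHPK3PXP"

    def copy_current_code(secret):
        code = pyotp.TOTP(secret).now()  # compute the 6-digit code locally
        pyperclip.copy(code)             # put it on the clipboard; never display it
        print("2FA code copied to clipboard (not shown).")

    if __name__ == "__main__":
        copy_current_code(TOTP_SECRET)

A shoulder surfer (or a camera) watching the screen sees only a "copied" confirmation; the code itself only ever exists in the clipboard and in the field you paste it into.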


In my view, the core issue here is that Android's permissions system doesn't consider "Running in the background" and "Accessing the Internet" to be things that apps need to ask the user for permission to do, and that the user can restrict.

This attack only works because every app, even an "offline game", has those permissions implicitly by default. Many apps should at most have "Only while using the app" permission to access the Internet. That would not be complete protection -- there's always the risk you misclick on a now-malicious app that you never use -- but it would make the attack far less effective.


> now-malicious app that you never use

Mildly off-topic, but do you know of any good studies on the rate of dangerous defects with auto-updating vs. never/manually updating in a semi-sandboxed environment like Android?


I'm not sure about Android. Chrome's store has a history of legitimate free apps with millions of users but little revenue being purchased by threat actors, who then add malware to the app.

But I've seen fewer stories of that sort of thing with Android apps. Maybe the app store review process is able to catch it? But just as likely to me is that it's harder to discover that a mobile app is now maliciously sending data somewhere.


There is an Internet permission, and GrapheneOS allows denying it to apps that declare use of it.

Here is a rather convincing answer, from the Android developers themselves, about why user approval isn't required for internet access in Android applications.

https://old.reddit.com/r/androiddev/comments/ci4tdq/were_on_...

I don't know about "running in the background", but Android works using "intents", which means an app can be woken up effectively at any time, so "don't allow the app to run in the background" may not do what you expect.


I'm sure there are subtle details to manage here. But "You can exfiltrate data by opening a browser" is a weak argument: one can display the URL to be opened to the user if such an Internet-limited app asks to open a browser, or decide that apps that aren't allowed to use the Internet also aren't allowed to open a browser.

I think there are ways to manage the communication with users about the cases in which it is surprising/suspicious for the app to require that functionality. Personally, I don't love the model where apps ask for certain permissions but aren't required to explain, in a way that app store reviewers can verify, what they need those permissions for.

And even if one doesn't want every consumer to have to explicitly consent to the permission, it seems to me like you could still have an opt-out mechanism, so that the paranoid among us can implement a more restrictive policy, rather than giving up on the idea of having such a permission entirely.


I feel like too little attention is given in this post to the problem of automated troll armies being used to influence the public's perception of reality.

Peter Pomerantsev's books are eye-opening on the previous generation of this class of tactics, and it's easy to see how LLM technology + $$$ might be all you need to run a high-scale influence operation.


>I feel like too little attention is given in this post to the problem of automated troll armies being used to influence the public's perception of reality.

I guess I just view bad information as a constant. Like bad actors in cybersecurity, for example. So I mean yeah... it's too bad. But not a surprise and not really a variable you can control for. The whole premise of a Democracy is that people have the right to vote however they want. There is no asterisk to that in my opinion.

I really don't see how one person, one vote can survive this idea that people are only as good as the information they receive. If that's true, and people get enough bad information, then you can reasonably conclude that people shouldn't get a vote.


> I guess I just view bad information as a constant. Like bad actors in cybersecurity, for example. So I mean yeah... it's too bad. But not a surprise and not really a variable you can control for.

Ban bots from social media and all other speech platforms. We agree that people ought to have freedom of speech. Why should robots be given that right? If you want to express an opinion, express it. If you want to deploy millions of bots to impersonate human beings and distort the public square, you shouldn’t be able to.


> Ban bots from social media and all other speech platforms.

I would agree with that, but how do you do it? The problem is that as the bots become more convincing it becomes harder to identify them to ban them. I only see a couple options.

One is to impose crushing penalties on whatever humans release their bots onto such platforms, do a full-court-press enforcement program, and make an example of some offenders.

The other is to ban the bots entirely by going after the companies that are running them. A strange thing about this AI frenzy is that although lots of small players are "using AI", the underlying tech is heavily concentrated in a few major players, both in the models and in the infrastructure that runs them. It's a lot harder for OpenAI or Google or AWS to hide than it is for some small-time politician running a bot. "Top-down" enforcement that shuts down the big players could reduce AI pollution substantially. It's all a pipe dream though because no one has the will to do it.


You can do it, but you won't like the answer. Link social media accounts to real IDs, and you'll be able to spot the bots because one real person is associated with hundreds of posts a minute.

Going to that dystopian extreme wouldn't help. Simply rent real IDs from poor people at cheap rates to be your sock puppets, and you have even more convincing bullshit from bots. Sapient actors aren't so easily solved.

I like this idea. The problem isn’t free speech; it’s the money, which gives monied interests vastly disproportionate weight.

What if we banned the technologies that enable bots in the first place?

Remove the precursor, remove the problem.


Maybe, but I still think the enforcement of that would need to be targeted at the large players. Also, I'm a bit leery of banning the knowledge itself. I wouldn't want to ban research on such matters, for instance. The problem is the proliferation of that technology in the public sphere. So treat it like hazardous materials, drugs, etc.

Why do we even need bot technology in the first place? Some future dream that never materializes? Nope. Just ban the technology.

By all means, yes, for the clear case of propaganda bots, ban them. The problem is there will still be bots. And there is a huge gray area - many of the cases aren't clear. I think it's just an intractable problem. People are going to have to deal with it.

Easier said than done.

The voting becomes a health-check for the information. We shouldn't revoke the rights of the individual based on arbitrary information they may or may not receive.

If your reality isn't being influenced, then you're creating it yourself. Both are strengths and weaknesses, depending on context.

I predict that people will end up using the term "vibe engineering" to refer to development processes that involve asking an LLM to build their entire app: UI design, database schema, architecture, devops, QA, debugging, etc., without any of the careful effort that Simon is imagining to understand and be able to proudly own the resulting code.

And I think that is actually the most natural meaning for "vibe engineering": parallel to "vibe coding", where you serially prompt the AI to write the code for you, "vibe engineering" should be serially prompting the AI to do the entire engineering process for you.

I also predict that a precisely defined term for what Simon is describing will inevitably end up being primarily used by people who are actually doing "vibe engineering". Being disciplined and careful is hard.

People and organizations love to claim/pretend they're doing expensive, mostly invisible work like "building secure software". Given that nearly every organization claims they use security best practices no matter what their actual practices are, I imagine it will be that way with "actually reading and verifying what the LLM generated for me".

Certainly I've been disappointed by how often, in recent months, someone has presented something they've "written" that turns out, on inspection, to be AI slop that the "author" hasn't even read every sentence of carefully.


For what it's worth, Zulip has a Mattermost data import tool, and communities are an important use case when we set product direction.

While I can't promise we won't ever change our exact monetization strategy, we're not venture-funded, and thus are immune to the usual enshittification pressures. https://zulip.com/values/ has some context.

(I lead the Zulip project)


Microsoft Teams dominates the team chat market thanks to anti-competitive bundling practices. Slack's proprietary "Slack Connect" federation system requires both sides to pay for Slack for their entire workspaces. Slack has very aggressive restrictions on exporting your organization's own messages (https://blog.zulip.com/2025/07/24/who-owns-your-slack-histor...).

And yet the "red flag" being discussed is Zulip having monetization for self-hosted business use? (Mobile notifications have always been free for most communities, and we have discount programs for various use cases detailed on our pricing page).

Look, it's 2025, and one should be very wary of rug pulls. But Zulip has no venture investors. I've personally funded the project for almost a decade now, so that it can operate in line with our values (https://zulip.com/values/).

I want to use applications that are ethically managed, self-hostable, privacy-supporting, open-source, and excellent. Zulip aims to be that kind of project, and even with all the community contributions that we've fostered, I don't see how we could maintain Zulip responsibly without our professional team.

Should it be a red flag for an open-source application to have monetization that charges businesses for using services operated by its professional team? Or would the red flag be a project that lacks a professional team who one can count on to maintain it responsibly?


Hey, thanks for reaching out; I truly appreciate the feedback coming from the team, not someone random on the internet. When I say ‘red flag’ in this context, I never mean comparing Zulip to Slack. For me personally (but I believe for most others too, especially right here), Slack was a no-go since about a decade ago. Slack is just a poster child of a huge, massive, gargantuan Red Flag. I’d never even consider using it in any possible scenario. Whilst Zulip was my _primary_ consideration to deploy for a small organisation (still more than 10 people), alongside Mattermost and Matrix, which don’t have these limits. So this thing _feels_ like a red flag in comparison. Here, in sibling comments, I do write about Mattermost too, with their nudges to buy the Enterprise edition here and there, and everywhere. And about their new limits too. (For which there’s Mostly Matter now.)

This also is a massive red flag for me, and while I understand that they, and you the Zulip team, have to support a professional team, and you have every right to do that, and I’m personally very supportive of this -- still, this move leaves the fear of ‘today this, tomorrow something else.’

Speaking of this very case of the 10-mobile-users limit, I have a few thoughts. First, it’s entirely possible that you just don’t communicate this piece very well, as I had the impression that Mattermost and Matrix don’t do that, hence maybe it’s possible to host the whole thing on my own and have the notifications. Perhaps they just allow users to use their servers for that for free. This point is unclear to me, and I had to do my research, which mostly failed, since I’m still doing guesswork here. I am left with this bitter taste that the issue is artificial on Zulip’s side. Again, I’m not saying it’s your fault; I could be someone who did the research poorly. That was my weekend attempt, and I was super limited on time. Next time I may have more time for that research (I plan to), but it would happen early next year.

Second, it was mentioned somewhere here as well: the active-users strategy. You allow for 10 users, meaning if you’re a small team or group, go ahead and use Zulip. But I am part of an organisation that needs their chat, and they are about 100 people across the country (and the country is Ukraine, meaning they have bigger things to worry about than a chat). Among these 100 people, most are drivers or the car maintenance team; they mostly need no company chat. If they would use it, it’s a couple of messages a day, tops. However, there are managers, and they would use the chat very actively, all day long. They are either fewer than ten or more than ten (up to 15, 20). I’m not aware of the exact number of people, since in my city they have just three managers, plus two developers, so there are five people plus me who’d use the chat actively. But since there are others, and they need mobile notifications, we cannot consider Zulip (even when we are able to host it on our own entirely) for this, unless we pay. While the company is for-profit, I cannot even think of asking for anything, and I understand it’s better for the company to pay and support you. Yet in this very situation, I’m having a hard time explaining it to the boss. He won’t pay for these drivers and car maintenance teams, as they’re dead souls, technically. They are to receive some instructions and OK them; that’s 90% of the communication for them. So while I’d try my luck with pitching a company chat (instead of just using WhatsApp or Facebook or Viber or Telegram), that makes sense only for active users, not for mostly idle users who won’t use the chat actively. And in this very situation, it’s mostly texts, so no heavy images or video calls.

Apart from that, your chat looks like one of the best among self-hosted options; I plan on trying it with a group of friends, which is fewer than 10 people. Forgive me if all this is easily verifiable once you’ve actually used the chat. I only deployed it locally to check the interface (it was mostly okay), and researched the prospects of using it within a relatively big organisation.

Cheers!


https://blog.zulip.com/2025/06/17/flutter-mobile-app-launche... and the previous blog post linked from there give some context as to why we rewrote the Zulip mobile apps from React Native to Flutter.

Our Flutter experience over the last few months since launch has been very positive. Most importantly, development velocity is much faster than it was on React Native.


I think this is a dangerous view. As we've seen with the xz/liblzma backdoor, skilled developers are very capable of hiding backdoors/vulnerabilities in security software, even when it is open source. So it's very important whether the developers building the software are trustworthy.

Authoritarian jurisdictions with a modus operandi of compelling their businesses and citizens by force are thus much riskier than Western democracies, even flawed ones. I at least expect it's a lot harder to say no to demands to break your promises when those demands come with credible threats of torturing your family.

I'll also say that it's quite hard to make a messaging app without the servers that run the service having a great deal of power in the protocol. Many types of flaws or bugs in a client or protocol go from "theoretical problem" to "major issue" in the presence of a malicious server.

So if end-to-end security is a goal, you must pay attention to more than just the protocol/algorithms and client codebase. The software publisher's risks are important (e.g., Zoom has a lot of risk from a China-centric development team), as are those of the hosting provider (if different from the publisher).

And so are less obvious risks, like the mobile OS, mobile keyboard app, or AI assistant that processes your communications even though they're sent between clients with E2EE.

Reflections on Trusting Trust is still a great read for folks curious about these issues.


> I think this is a dangerous view.

I think you misinterpreted the most important nuance in this post. The rest of your comment is about jurisdiction in the context of who develops the client software.

The blog post is talking about jurisdiction in the context of where ciphertext is stored, and only calls that mostly irrelevant. The outro even acknowledges that when jurisdiction does matter at all, it's about the security of the software running on the end machine. (The topic at hand is end to end encryption, after all!)

So, no, this isn't a dangerous view. I don't think we even disagree at all.


I think we agree here that the US/Europe jurisdiction difference is relatively minor compared to questions about the software itself.

What's dangerous is the framing; many E2EE messengers give the server a LOT more power than "just stores the ciphertext". https://news.ycombinator.com/item?id=33259937 is a discussion of a relevant example that's gotten a lot of attention, with Matrix giving the server control over "who is in a group", which can be the whole ball game for end-to-end security.

And that's not even getting into the power of side channel information available to the server. Timing and other side channel attacks can be powerful.

Standard security practice is defense in depth, because real-world systems always have bugs and flaws, and cryptographic algorithms have a history of being eventually broken. Control over the server and access to ciphertext are definitely capabilities that, in practice, can often be combined with vulnerabilities to break secure systems.

If the people who develop the software are different from those who host the server, that's almost certainly software you can self-host. Why not mention self-hosting in the article?

If you're shopping for a third party to host a self-hostable E2EE messenger for you, the framing of the server as just "storing ciphertext" would suggest that the trustworthiness of that hosting provider isn't relevant. I can't agree with that claim.


> The framing of the server as just "storing ciphertext" would suggest that the trustworthiness of that hosting provider isn't relevant.

In point of argument, it should not be relevant - beyond metadata exposure or failure to provide service. Metadata exposure is mentioned in the post, and is a rather important aspect to consider. Having service turned off when you need it also might be an important consideration, depending on the threat model. If I were a cabinet member planning military operations, it is quite likely that I would care about both of those.

Beyond that, the 2025 table stakes for a 'secure' messaging service are: "a totally compromised server should not be able to read messages or inject messages that will be acceptable to legitimate clients. It should also not be able to undetectably remove, reorder, or replay messages." [1]

[1] https://www.rfc-editor.org/rfc/rfc9750.html#name-delivery-se...


> What's dangerous is the framing; many E2EE messengers give the server a LOT more power than "just stores the ciphertext". https://news.ycombinator.com/item?id=33259937 is a discussion of a relevant example that's gotten a lot of attention, with Matrix giving the server control over "who is in a group", which can be the whole ball game for end-to-end security.

I'm a vocal critic of Matrix, and I would not consider it a private messenger like Signal.

https://soatok.blog/2024/08/14/security-issues-in-matrixs-ol...

When Matrix pretends to be a Signal alternative, the fact that the server had control over group membership makes their claim patently stupid.

> And that's not even getting into the power of side channel information available to the server. Timing and other side channel attacks can be powerful.

A lot of my blog discusses timing attacks and side-channel cryptanalysis. :)

> If the people who develop the software are different from those who host the server, that's almost certainly software you can self-host. Why not mention self-hosting in the article?

Because all of the self-hosting solutions (i.e., Matrix) have, historically, had worse cryptography than the siloed solutions (i.e., Signal, WhatsApp) to the point that I wholesale discount Matrix, OMEMO, etc. as secure messaging solutions.

> If you're shopping for a third party to host a self-hostable E2EE messenger for you, the framing of the server as just "storing ciphertext" would suggest that the trustworthiness of that hosting provider isn't relevant. I can't agree with that claim.

It's more of an architecture question.

Is a self-hosted Matrix server that accepts and stores plaintext, but is hosted in Switzerland, a better way to chat privately than Signal? What if your threat model is "the US government"? My answer is a resounding, "No. You should fucking use Signal."


Apple's app store practices amount to an abusive monopoly, and I wish the Proton folks luck.

We once had a Zulip update rejected by Apple because we had a link, in the app itself, to our GitHub project with the source code for the app. And it turns out that if you then click around GitHub, you can find a "Pricing" page that doesn't pay Apple's tax.

Details are here for anyone curious: https://news.ycombinator.com/item?id=28175759

