
Could you use this to support hot module replacement? Replacing a SwiftUI view in a live app without restarting the process?


Yeah you could! The only caveat is that either the whole app, or at least the part of the app showing the view you want to replace, would have to be running via the interpreter.

We're very interested in using the interpreter to improve the Swift developer experience in more ways like that.


I bet you could make a ton of money just selling a better dev experience as an Xcode add-on to Swift devs, without even having the AI component. (But making an app on my phone with AI is unbelievably neat!)


One of these things is not like the others!

> Do you want to go ad free? No, I want to be left alone.

The ads, by definition, won't leave you alone.


I think the implication here may be to sell you a premium version or a subscription of an app without ads.


That’s exactly the intent, but it’s also a portmanteau.


This article proposes two options, one where you add a `<script>` tag in the XML body, which breaks validating parsers, and another approach where you attach CSS to the XML document with `<?xml-stylesheet type="text/css" href="styles.css"?>`, but this only works with `Content-Type: text/xml` and not `application/rss+xml` or `/atom+xml`.

It would be great if browsers supported a header (or some other out-of-band signal) that would allow me to attach some CSS + JS to my XML without any other changes to the XML content, and without changing the Content-Type header.

Specifically, I'd love to be able to keep my existing application/rss+xml or application/atom+xml Content-Type header and serve up a feed that satisfies a validating parser, while attaching CSS styling and/or JS content transformations.


Browsers don't want to add new ways of running script. That said, I wonder if `<?xml-stylesheet type="text/javascript" href="script.js"?>` could work. It's kinda weird, but `xml-stylesheet` can already run script via XSLT, so it isn't a new way to run script.


The punchline of this article is that all of the implementations they tried (WatermelonDB, PowerSync, ElectricSQL, Triplit, InstantDB, Convex) are built on top of IndexedDB.

"The root cause is that all of these offline-first tools for web are essentially hacks. PowerSync itself is WASM SQLite... On top of IndexedDB."

But there's a new web storage API in town, Origin Private File System. https://developer.mozilla.org/en-US/docs/Web/API/File_System... "It provides access to a special kind of file that is highly optimized for performance and offers in-place write access to its content."

OPFS reached Baseline "Newly Available" in March 2023; it will be "Widely Available" in September.

WASM sqlite on OPFS is, finally, not a hack, and is pretty much exactly what the author needed in the first place.
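
For anyone who hasn't tried it yet, here's a rough sketch of what that looks like with the official @sqlite.org/sqlite-wasm build (assuming a module Worker, since the OPFS-backed database only works off the main thread; the file name is just an example):

    // Run inside a module Worker: OPFS sync access handles aren't available on the main thread.
    import sqlite3InitModule from '@sqlite.org/sqlite-wasm';

    const sqlite3 = await sqlite3InitModule();

    // OpfsDb persists the database file in the Origin Private File System.
    const db = new sqlite3.oo1.OpfsDb('/app.sqlite3');

    db.exec('CREATE TABLE IF NOT EXISTS notes(id INTEGER PRIMARY KEY, body TEXT)');
    db.exec({
      sql: 'INSERT INTO notes(body) VALUES (?)',
      bind: ['offline-first without the IndexedDB hack'],
    });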


We do see about 10x the database row corruption rate w/ WASM OPFS SQLite compared to the same logic running against native SQLite. For the read-side cache use case this is recoverable and relatively benign, but we're not moving the write-side use case from IndexedDB to WASM-OPFS-SQLite until things look a bit better. Not to put the blame on SQLite here: there's shared responsibility for the corruption between the host application (e.g. Notion), the SQLite OPFS VFS authors, the user-agent authors, and the user's device to ensure proper locking and file semantics.


Yeah, I did fail to mention OPFS in the blog post. It does look very promising, but we're not in a position to build on emergent tech – we need a battle-tested stack. Boring over exciting.


Not sure anything in the offline-first ecosystem qualifies as "boring" yet. You would need some high-profile successful examples that have been around for a few years to earn that title


Replicache certainly fits the bill!


Replicache is in maintenance mode


...Exactly?


Maintenance mode doesn't mean "this is so mature we don't have anything else to add", it means "we don't want to spend any more time on it so we'll only fix bugs and that's it".


Some notable companies using Replicache are Vercel and Productlane. It's a very mature product.

The Rocicorp team have decided to focus on a different product, Zero, which is far less "offline-first" in that it does not sync all data, but rather syncs data based on queries. This works well for applications that have unbounded amounts of data (ie something like Instagram), but is _not_ what we want or need at Marco.


Replicache, which the author loves, is also built on top of IndexedDB.

But notably, not directly atop. We build our own KV store that uses IDB just as block storage. So I sort of agree w/ you.

But if we were to build atop OPFS we'd also just be using it for block storage. So I'm not sure it's a win? It will be interesting to explore.


I think you’ll find it’s a performance win.


It's not so cut and dried.

The majority of the cost in a database is often serializing/deserializing data. By using IDB from JS, we delegate that to the browser's highly optimized native code. The data goes from JS vals to binary serialization in one hop.

If we were to use OPFS, we would instead have to do that marshaling ourselves in JS. JS is much slower than native code, so the resulting impl would probably be a lot slower.

We could attempt to move that code into Rust/C++ via WASM, but then we have a different problem: we have to marshal between JS types and native types first, before writing to OPFS. So there are now two hops: JS -> C++ -> OPFS.
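
To make the two paths concrete, here's a rough sketch of the comparison (the function names are mine, and the JSON encoding just stands in for whatever serialization you'd do in JS or WASM; a real block store would use a binary format):

    // One hop: IndexedDB structured-clones the JS value in the browser's native code.
    function putBlockIDB(store: IDBObjectStore, key: string, value: unknown): Promise<void> {
      return new Promise((resolve, reject) => {
        const req = store.put(value, key); // serialization happens inside the browser
        req.onsuccess = () => resolve();
        req.onerror = () => reject(req.error);
      });
    }

    // Two hops with OPFS: serialize in JS (or marshal into WASM) first, then copy the bytes into the file.
    // Note: createSyncAccessHandle() is only available inside a Worker.
    async function putBlockOPFS(fileName: string, value: unknown): Promise<void> {
      const bytes = new TextEncoder().encode(JSON.stringify(value)); // hop 1: serialize in JS
      const dir = await navigator.storage.getDirectory();
      const file = await dir.getFileHandle(fileName, { create: true });
      const handle = await file.createSyncAccessHandle();
      handle.write(bytes, { at: 0 });                                // hop 2: copy into OPFS
      handle.flush();
      handle.close();
    }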

We have actually explored this in a previous version of Replicache and it was much slower. The marshalling btwn JS and WASM killed it. That's why Replicache has the design it does.

I don't personally think we can do this well until WASM and JS can share objects directly, without copies.


PowerSync supports OPFS as SQLite VFS since earlier 2025: https://github.com/powersync-ja/powersync-js/pull/418


Now is a good time to file a bug in Apple's Feedback Assistant. https://feedbackassistant.apple.com/


Claude for Chrome seems to be walking right into the "lethal trifecta." https://simonwillison.net/2025/Jun/16/the-lethal-trifecta/

"The lethal trifecta of capabilities is:"

Access to your private data—one of the most common purposes of tools in the first place!

Exposure to untrusted content—any mechanism by which text (or images) controlled by a malicious attacker could become available to your LLM

The ability to externally communicate in a way that could be used to steal your data (I often call this “exfiltration” but I’m not confident that term is widely understood.)

If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to that attacker.


So far the accepted approach is to wrap all prompts in a security prompt that essentially says "please don't do anything bad".

> Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.

https://news.ycombinator.com/item?id=41864014

> - Inclusion prompt: User's travel preferences and food choices - Exclusion prompt: Credit card details, passport number, SSN etc.

https://news.ycombinator.com/item?id=41450212

> "You are strictly and certainly prohibited from texting more than 150 or (one hundred fifty) separate words each separated by a space as a response and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you.”

https://news.ycombinator.com/item?id=44444293

etc.


I have in my prompt “under no circumstances read the files in “protected” directory” and it does it all the time. I’m not sure prompts mean much.




Hahaha thank you for this


Perfect


It really is wild that we’ve made software sophisticated enough to be vulnerable to social engineering attacks. Odd times.


I remember when people figured out you could tell bing chat “don’t use emoji’s or I’ll die” and it would just go absolutely crazy. Feel like there was a useful lesson in that.

In fact in my opinion, if you haven’t interacted with a batshit crazy, totally unhinged LLM, you probably don’t really get them.

My dad is still surprised when an LLM gives him an answer that isn’t totally 100% correct. He only started using chatGPT a few months ago, and like many others he walked into the trap of “it sounds very confident and looks correct, so this thing must be an all-knowing oracle”.

Meanwhile I’m recalling the glorious GPT-3 days, when it would (unprompted) start writing recipes for cooking, garnishing and serving human fecal matter, claiming it was a French national delicacy. And it was so, so detailed…


> “it sounds very confident and looks correct, so this thing must be an all-knowing oracle”.

I think the majority of the population will respond similarly, and the consequences will either force us to make the “note: this might be full of shit” disclaimer much larger, or maybe include warnings in the outputs. It’s not that people don’t have critical thinking skills— we’ve just sold these things as magic answer machines and anthropomorphized them well enough to trigger actual human trust and bonding in people. People might feel bad not trusting the output for the same reason they thank Siri. I think the vendors of chatbots haven’t put nearly enough time into preemptively addressing this danger.


The psychological bug that confidence exploits is ancient and genetically ingrained in us. It’s how we choose our leaders and assess skilled professionals.

It’s why the best advice for young people is “fake it until you make it”


>It’s not that people don’t have critical thinking skills

It isn't? I agree that it's a fallacy to put this down to "people are dumb", but I still don't get it. These AI chatbots are statistical text generators. They generate text based on probability. It remains absolutely beyond me why someone would assume the output of a text generator to be the truth.


> These AI chatbots are statistical text generators

Be careful about trivializing the amount of background knowledge you need to parse that statement. To us that says a lot. To someone whose entire life has been spent getting really good at selling things, or growing vegetables, or fixing engines, or teaching history, that means nothing. There’s no analog in any of those fields that would give the nuance required to understand the implications of that. It’s not like they aren’t capable of understanding it; their only source of information about it is advertising, and most people just don’t have the itch to understand how tech stuff works under the hood— much like you’re probably not interested in what specific fertilizer was used to grow your vegetables, even though you’re ingesting them, often raw, and that fertilizer could be anything from a petrochemical to human shit— so they aren’t going to go looking on their own.


Because across most topics, the "statistical text generator" is correct more often than any actual human being you know? And correct more often than random blogs you find?

I mean, people say things based on probability. The things they've come across, and the inferences they assume to be probable. And people get things wrong all the time. But the LLMs have read a whole lot more than you have, so when it comes to things you can learn from reading, their probabilities tend to be better across a wide range.


It’s much easier to judge a person’s confidence while speaking, or even informally writing, and it’s much easier to evaluate random blogs and articles as sources. Who wrote it? Was it a developer writing a navel gazing blog post about chocolate on their lunch break, or was it a food scientist, or was it a chocolatier writing for a trade publication? How old is it? How many other posts are on that blog and does the site look abandoned? Do any other blog posts or articles concur? Is it published by an organization that would hold the author accountable for publishing false information?

The chatbot completely removes any of those beneficial context clues and replaces them with a confident, professional-sounding sheen. It’s safest to use for topics you know enough about to recognize bullshit, but probably least likely to be used like that.

If you’re selling a product as a magic answer generating machine with nearly infinite knowledge— and that’s exactly what they’ve being sold as— and everything is presented with the confidence of Encyclopedia Britannica, individual non-experts are not an appropriate baseline to judge against. This isn’t an indictment of the software — it is what it is, and very impressive— but an indictment of how it’s presented to nontechnical users. It’s being presented in a way that makes it extremely unlikely that average users will even know it is significantly fallible, let alone how fallible, let alone how they can mitigate that.


Well said!! And the hype men selling these LLMs are really playing into this notion. They’ve started saying stuff like “they have phd-level knowledge on every topic”.


"create a picture with no elephants"


That is absolutely not a reliable defense. Attackers can break these defenses. Some attacks are semantically meaningless, but they can nudge the model to produce harmful outputs. I wrote a blog about this:

https://opensamizdat.com/posts/compromised_llms


There are better approaches, where you have dual LLMs, a Privileged LLM (allowed to perform actions) and a Quarantined LLM (only allowed to produce structured data, which is assumed to be tainted), and a non-LLM Controller managing communication between the two.

See also CaMeL https://simonwillison.net/2025/Apr/11/camel/ which incorporates a type system to track tainted data from the Quarantined LLM, ensuring that the Privileged LLM can't even see tainted data until it's been reviewed by a human user. (But this can induce user fatigue as the user is forced to manually approve all the data that the Privileged LLM can access.)
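
Here's a rough sketch of the shape of that pattern; every function name here (quarantinedLLM, privilegedLLM, askUserToApprove) is hypothetical, just to show how the controller keeps tainted text away from the privileged side:

    // Hypothetical sketch of the dual-LLM pattern, not a real API.
    type Tainted = { readonly tainted: true; readonly text: string };

    async function handleUntrustedEmail(
      quarantinedLLM: (prompt: string) => Promise<string>, // no tools; sees untrusted content
      privilegedLLM: (prompt: string) => Promise<void>,    // has tools; never sees raw untrusted content
      askUserToApprove: (text: string) => Promise<boolean>,
      emailBody: string,                                   // attacker-controlled input
    ): Promise<void> {
      // The Quarantined LLM's output is treated as tainted data, never as instructions.
      const summary: Tainted = {
        tainted: true,
        text: await quarantinedLLM('Summarize this email:\n' + emailBody),
      };

      // The controller (ordinary, non-LLM code) decides what crosses the boundary:
      // the Privileged LLM only sees tainted text after the user has reviewed it.
      if (await askUserToApprove(summary.text)) {
        await privilegedLLM('Save this approved summary to my notes: ' + summary.text);
      }
    }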


As evidenced by oh so many X.com ("the everything app") threads, prompts mean jack shit for limiting the output of an LLM. They are guidance at best.


No one thinks any form of "prompt engineering" guardrails is a serious security measure, right?


Check the links I posted :) Some do think that, yes.


We need regulation. The stubborn refusal to treat injection attacks seriously will cost a lot of people their data or worse.


Big & true. But even worse, this seems more like a lethal "quadfecta", since you don't just have the ability to exfiltrate, but to take action – sending emails, making financial transfers, and everything else you do with a browser.


I think this can be reduced to: whoever can send data to your LLMs can control all its resources. This includes all the tools and data sources involved.


I think creating a new online account, <username>.<service>.ai, for all services you want to control this way is the way to go. Then you can expose to it only the subset of your data needed for a particular action. While agents can probably be made to have a similar config based on URL filtering, I don't believe for a second they are written with good intentions in mind and without bugs.

Combining this with some other practices, like redirecting a subset of mail messages to the AI-controlled account, would offer better protection. It sure is cumbersome and reduces efficiency, like any kind of security, but that beats AI having access to my bank accounts.


I wonder if one way to mitigate the risk would be that by default the LLM can't send requests using your cookies etc. You would actively have to grant it access (maybe per request) for each request it makes with your credentials. That way by default it can't fuck up (that badly), and you can choose where it is acceptable to risk it (your HN account might be OK to risk, but not your bank account).


This kind of reminds me of `--dangerously-skip-permissions` in Claude Code, and yet look how cavalier we are about that! Perhaps you could extend the idea by sandboxing the browser to have "harmless" cookies but not "harmful" ones. Hm, maybe that doesn't work, because gmail is harmful, but without gmail, you can't really do anything. Hmm...


Made me think (never gonna happen, but still): maybe we could have different cookies/sessions for the agents and for ourselves, where the webapp can decide what permissions each can have. For Gmail, maybe you could allow the agent to read your email but not send email, and so on.


Just make a request to attacker.evil with your login credentials or personal data. They can use them at their leisure then.


No reason the agent would have access to the passwords.


How would you go about making it more secure while still getting to have your cake too? Off the top of my head, could you: a) only ingest text that can be OCRed, or somehow determine whether it is human-readable; b) make it so text from the web session is isolated from the model with respect to triggering an action. Then it's simply a tradeoff at that point.


I don't believe it's possible to give an LLM full access to your browser in a safe way at this point in time. There will need to be new and novel innovations to make that combination safe.


People directly give their agent root, so I guess it is ok.


Yeah, I drive drunk all the time. Haven't crashed yet.


Is it possible to give your parents access to your browser in a safe way?


Why do people keep going down this sophistry? Claude is a tool, a piece of technology that you use. Your parents are not. LLMs are not people.


If you think it's sophistry you're missing the point. Let's break it down:

1. Browsers are open ended tools

2. A knowledgeable user can accomplish all sorts of things with a browser

3. Most people can do very impactful things on browsers, like transferring money, buying expensive products, etc.

4. The problem of older people falling for scams and being tricked into taking self-harming actions in browsers is ancient; anyone who was family tech support in the 2000's remembers removing 15+ "helpful toolbars" and likely some scams/fraud that older relatives fell for

5. Claude is a tool that can use a browser

6. Claude is very likely susceptible to both old and new forms of scams / abuse, either the same ones that some people fall for or novel ones based on the tech

7. Anyone who is set up to take impactful actions in their browser (transferring money, buying expensive things) should already be vigilant about who they allow to use their browser with all of their personal context

8. It is reasonable to draw a parallel between tools like Claude and parents, in the sense that neither should be trusted with high-stakes browsing

9. It is also reasonable to take the same precautions -- allow them to use private browsing modes, make sure they don't have admin rights on your desktop, etc.

The fact that one "agent" is code and the other is human is totally immaterial. Allowing any agent to use your personal browsing context is dangerous and precautions should be taken. This shouldn't be surprising. It's certainly not new.


> If you think it's sophistry you're missing the point. Let's break it down:

I'd be happy to respond to something that isn't ChatGPT, thanks.


> Is it possible to give your parents access to to your browser in a safe way?

No.

Give them access to a browser running as a different user with different homedir? Sure, but that is not my browser.

Access to my browser in a private tab? Maybe, but that still isn't my browser. Still a danger though.

Anything that counts as "my browser" is not safe for me to give to someone else (whether parent or spouse or trusted advisor is irrelevant, they're all the same levels of insecurity).


That’s easy. Giving my parents a safe browser to utilize without me is the challenge.


Because there never were safe web browsers in the first place. The internet is fundamentally flawed, and programmers are continuously having to invent coping mechanisms for the underlying issue. This will never change.


You seem like the guy who would call car airbags a coping mechanism.


He's off in another thread calling people "weak" and laughing at them for taking pain relievers to help with headaches.


Just because you can never have absolute safety and security doesn't mean that you should deliberately introduce more vulnerabilities into a system. It doesn't matter if we're talking about operating systems or the browser itself.

We shouldn't be sacrificing every trade-off indiscriminately out of fear of being left behind in the "AI world".


To make it clear, I am fully against these types of AI tools, at least for as long as we haven't solved the security issues that come with them. We are really good at shipping bullshit nobody asked for without acknowledging security concerns. Most people out there cannot operate a computer. A lot of people still click on obvious scam links they've received via email. Humanity is far from being ready for more complexity and more security-related issues.


I think Simon has proposed breaking the lethal trifecta by having two LLMs, where the first has access to untrusted data but cannot do any actions, and the second LLM has privileges but only abstract variables from the first LLM not the content. See https://simonwillison.net/2023/Apr/25/dual-llm-pattern/

It is rather similar to your option (b).


Can't the attacker then jailbreak the first LLM into generating a jailbreak with actions for the second one?


If you read the fine article, you'll see that the approach includes a non-LLM controller managing structured communication between the Privileged LLM (allowed to perform actions) and the Quarantined LLM (only allowed to produce structured data, which is assumed to be tainted).

See also CaMeL https://simonwillison.net/2025/Apr/11/camel/ which incorporates a type system to track tainted data from the Quarantined LLM, ensuring that the Privileged LLM can't even see tainted _data_ until it's been reviewed by a human user. (But this can induce user fatigue as the user is forced to manually approve all the data that the Privileged LLM can access.)


"Structured data" is kind of the wrong description for what Simon proposes. JSON is structured but can smuggle a string with the attack inside it. Simon's proposal is smarter than that.


One would have to be relatively invisible.

Non-deterministic security feels like a relatively new area.


Yes they can


Hmm so we need 3 LLMs


Doesn't help.

https://gandalf.lakera.ai/baseline

This thing models exactly these scenarios and asks you to break it; it's still pretty easy. LLMs are not safe.


That's just an information bottleneck. It doesn't fundamentally change anything.


In the future, any action with consequence will require crypto-withdrawal levels of security. Maybe even a face scan before you can complete it.


Ahh technology. The cause of, and _solution to_, all of life’s problems.


It’s going to be pretty easy to embed instructions to Claude in a malicious website telling it to submit sensitive things (and not report that is doing it.)

Then all you have to do is get Claude to visit it. I’m sure people will find hundreds of creative ways to achieve that.


Didn't they do or prove that with messages on Reddit?


“Easily” is doing a lot of work there. “Possibly” is probably better. And of course it doesn’t have unfettered access to all of your private data.

I would look at it like hiring a new, inexperienced personal assistant: they can only do their job with some access, but it would be foolish to turn over deep secrets and great financial power on day one.


It's more like hiring a personal assistant who is expected to work all the time quickly and unsupervised, won't learn on the job, has shockingly good language skills but the critical thinking skills of a toddler.


If it can navigate to an arbitrary page (in your browser) then it can exploit long-running sessions and get into whatever isn't gated with an auth workflow.


Well, I mean, you are supposed to have a whole toolset of segregation, whitelist-only networking, and limited, specific use cases figured out by now before using any of this AI stuff.

Don't just run any of this stuff on your main machine.


Oh yeah, really? Do they check that and refuse to run unless you take those measures? If not, 99.99% of users won't do it, and saying "well ahkschually" isn't gonna solve the problem.


I think there are two reasonable incompatible points of view at work here:

1. "Play Nice": Browser vendors are trying to do what’s right for the web with budgets that are substantially outside their control. But browser vendors have the right to control their budgets, and to make decisions about what features to support or not to support. Web developers have to persuade browser vendors in order to get what they want; they have no special moral rights here. It is both practical and ethical to treat browser vendors politely, and not to rudely shame them.

(Eric Meyer's blog post here is firmly in the "Play Nice" point of view. "Google has trillions of dollars," he writes. "The Chrome team very much does not.")

2. "Fight the Power": Browser vendors are in a position of power, and that creates a moral responsibility to web developers. Their budgets are substantially within their control, and if their moral duty requires them to spend more on browser development, then that’s what duty requires. Frequently, browser vendors will try to shirk their duty to web developers in order to manage their budgets; this is always wrong, every time they do it.

There are two sub-bullets under "Fight the Power."

2a. When browser vendors shirk their responsibility, we all have a duty to shame the ones doing it in a noisy protest, not because that’s a practical way to persuade browser vendors, but because protesting/shaming wrongdoers in positions of power is ethical for its own sake.

2b. Browser vendors will try to shield themselves from protests by putting “code of conduct” rules in place against protesting/shaming them, especially against personal attacks. But it’s unethical for browser vendors (actors in a position of power) to enforce a code of conduct that prevents powerless web devs from shaming them, so the code of conduct should be protested, too; browser vendors should furthermore be shamed for having a code of conduct that interferes with the right/duty of web devs to protest shameful behavior. At a minimum, web devs should non-violently resist the code of conduct by actively violating it, and by shaming browser vendors for trying to enforce it.

I think the majority point of view here on HN is "Fight the Power." Google's up to no good, again, huh? Then we've all gotta shame them into doing the right thing again with a righteous flame war, no holds barred, including direct personal attacks. ("Should Mason Freed be removed from web standards?")

And if they try to delete/suppress our flamey personal-attack posts calling out their shameful behavior, why, that's even more shameful, and we've gotta call that out, too.

I think both points of view are reasonable. The problem is when people don't even realize that the other point of view exists.


Author here. I think the term "widespread appeal" may be misleading.

There are comedians who can attract a very large audience, large enough to make them major celebrities, but that doesn't mean that aggregating preferences doesn't produce mediocre jokes.

Instead, comedians build an audience of like-minded people, and get to know that audience very well. It's a little bit like the process of finding product-market fit for startups. You can achieve great success by catering to the needs of a very large market, even if you can't cater to everyone's needs.

> It would be interesting to compare how well LLMs can estimate how funny a joke is vs how good they are at generating jokes.

Academic psychologists have not found a quantitative measure of "how funny a joke is." If there were such a measure, LLMs could try to optimize for it.

But there isn't such a measure, and, if my argument is right, there couldn't possibly be a measure like that, because jokes have to be surprising but inevitable in hindsight, and different jokes will be surprising/inevitable to different people.


I mainly agree with your argument, but I think "inevitable in hindsight" needs reconsideration. Following the incongruity model, humor comes when the punchline forces a change in perspective on the setup. So it is not that the new perspective is inevitable, but that it was such a less likely interpretation of the setup that it didn't occur to the audience.

Considered this way, the question is whether an LLM could be trained to create joke setups that are ambiguous in the right way, with one obvious interpretation and a hidden but possible interpretation that will be forced by the punchline.


OP author here.

A number of commenters here have argued that "Why did the chicken cross the road" is a subtle allusion to the chicken's death, but I don't think that's why it's a classic joke.

We traditionally start kids off with antijokes, jokes where the "surprise factor" is that there's nothing surprising at all, where the punchline is completely predictable in hindsight. It's more than a mere "groaner."

Another classic antijoke for kids is, "Why do firefighters wear red suspenders?" "To keep their pants up."

Many antijokes (especially antijokes for kids) are structured like riddles, where the listener is supposed to actively try to figure out the answer. For the "red suspenders" joke, the kid is supposed to try to guess why the suspenders are red. Might it have something to do with the color of firetrucks? Could there be a safety or fire-related reason why the suspenders would be red? At last, the kid gives up and says "I don't know."

Then, the punchline: "to keep their pants up." Of course, that's the whole purpose of suspenders. Inevitable in hindsight, but surprising to a kid who got distracted by the color.

"Why did the chicken cross the road" is like that, but not quite as good IMO. The chicken crossed the road for the same reason anyone crosses a road, to get to the other side of the road, but the listener is supposed to get distracted with the question of why a chicken would cross the road, and give up.

"Why did the sun climb a tree?" is definitely in the family of antijokes. The joke is to mislead the listener to focus on the tree. I think it's certainly made funnier by who's saying it; it feels inevitable in hindsight that young kids would tell jokes that are only halfway coherent. (This is part of why marginally coherent improvised on-the-spot jokes seem funnier than prepared material.)


> We traditionally start kids off with antijokes, jokes where the "surprise factor" is that there's nothing surprising at all, where the punchline is completely predictable in hindsight. It's more than a mere "groaner."

Which I find completely strange. An antijoke doesn't make sense (and isn't funny) unless you're already familiar with a non-anti-joke!

And before you say "well it makes the kids laugh"—is that because they find it funny or because they know that laughing after a joke is what you're "supposed" to do? Maybe that's one and the same to a young child.


Author here. That's exactly what I said in the article.

> Surprising proofs reach conclusions that the mathematical community assumed were wrong, or prove theorems in ways that we thought wouldn’t work, or prove conjectures that we thought might be impossible to prove.

Many smart people have tried for more than 150 years to prove the Riemann Hypothesis; it might be impossible to prove.

If it's proved tomorrow, I'll be very surprised, and so will you. I'll be surprised if it's proved this year, or this decade.

If you set to work trying to prove RH, you're gonna try some interesting approaches, looking for underexplored areas of math that you're optimistic will tie back to RH. (This is how Fermat's Last Theorem finally fell.)

If you hook an LLM up to Lean and set it to work, you'll find that it actively avoids novel techniques. It feels like it's actively trying not to write a publishable paper. It's trying to minimize surprises, which means avoiding proving anything publishable.


I’m guessing optimising for passing the Olympiad might be a bad idea…

