
I wonder if we should move beyond this messaging. It’s well known to the smart half of the population that climate change is happening. There is apparently some debate on the cause. But this point is mostly irrelevant, it is problem-oriented thinking. By keeping the conversation in the problem-realm you invite troglodytes into the conversation to insert their bullshit. Instead, if we move forward with “presumption of truth” solutions-based messaging, we can start to talk about what we’re going to do.

Climate control is something more people will be on board with, compared to trying to have a conversation about climate science with a person who didn’t graduate high school.


[flagged]


Presuming that you yourself have "graduated" (what from is unclear), it's particularly audacious that you make this claim because it shows rather cleanly how poor a marker of quality an education is.

The answer has never lain in ever more elaborate designs to disenfranchise particular members of the population. It's always been in building community.

A community is what helps stabilize, helps tighten up distributions, and wrestles most authentically with the general premise that we are social creatures and only as strong as our weakest link.

If you think you're going to build the perfect society by way of careful electorate curation, I have some unfortunate stories to tell you.


The parent comment said high school, which is compulsory, free, and a very low bar. Our nation and world are largely being wrecked by the malicious on behalf of the stupid. Having some bar doesn't seem unreasonable.

The fact that you read my comment and decided to clarify and double down is immaculate for my point.

Have you taken any class ever on disenfranchising events in history?

Also worth mentioning for those in these neighboring threads: the impulse to blame dysfunction during hard times on a particular minority of society has a name. You can read more about it here:

https://dictionary.apa.org/scapegoat-theory


A very bad idea, leading to autocracies and despotisms like clockwork. A small relief is seeing the authors of the initial segregation eventually getting banned too, but by then it is too late to fix anything. "Tovarisch Stalin, a terrible mistake has happened!"

[flagged]


With this administration? "To get your voting license, click on all the Nazis in this picture" (your click must be Biden and Hillary or your license is denied)

[flagged]


Why do you think that?

I'm haunted by the criticism Dropbox received from HN users when they posted their project here. While I respect the views many of us have, I think this has the potential to have the StackOverflow effect where the community makes the whole process miserable and worse.

Note well that the most famous example of this is a misreading that has snowballed into a kind of cultural legend. The thread in question was about Dropbox's application to YC, not the value of Dropbox itself, and the feedback was constructive and well-intentioned.

https://news.ycombinator.com/item?id=42392302


If Flutter is dying, so is React Native, if Google Trends is any metric to go by. Flutter has nearly double the search volume over the past 5 years.

How does it handle leap seconds?

It manhandles it like a Swiss stuffed on cheese fondue and chocolate.

As it's based upon UTC+1, I imagine the same way as UTC.

“We sent our people to school and after becoming smarter they stopped believing our ideology so we’re going to keep them stupid instead”

I like to imagine GP just accidentally leaked classified info.


That would hold more weight if this was the War Thunder forum


Can anyone with specific knowledge in a sophisticated/complex field such as physics or math tell me: do you regularly talk to AI models? Do you feel like there's anything to learn? As a programmer, I can come to the AI with a problem and it can come up with a few different solutions, some I may have thought about, some not.

Are you getting the same value in your work, in your field?


Context: I finished a PhD in pure math in 2025 and have transitioned to being a data scientist and I do ML/stats research on the side now.

For me, deep research tools have been essential for getting caught up with a quick lit review about research ideas I have now that I'm transitioning fields. They have also been quite helpful with some routine math that I'm not as familiar with but is relatively established (like standard random matrix theory results from ~5 years ago).

It does feel like the spectrum of utility is pretty aligned with what you might expect: routine programming > applied ML research > stats/applied math research > pure math research.

I will say ~1 year ago they were still useless for my math research area, but things have been changing quickly.


Do you use LLMs? Or something else?


I don't have a degree in either physics or math, but what AI helps me to do is to stay focused on the job before me, rather than having to dig through a mountain of textbooks, wikipedia pages, or scientific papers trying to find an equation that I know I've seen somewhere but didn't register the location of and didn't copy down. This saves a huge amount of time, every day. Even then I still check the references once I've found it, because errors can and do slip into anything these pieces of software produce, and sometimes quite large ones (those are easy to spot though).

So yes, there is value here, quite a bit of it, but it requires a lot of forethought in how you structure your prompts, and you need to be super skeptical about the output as well as able to check that output minutely.

If you just plug in a bunch of data, formulate a query, and then use the answer uncritically, you're setting yourself up for a world of hurt and lost time by the time you realize you've been building your castle on quicksand.


I do / have done research in building deep learning models and custom / novel attention layers, architectures, etc., and AI (ChatGPT) is tremendously helpful in facilitating (semantic) search for papers in areas where you may not quite know the magic key words / terminology for what you are looking for. It is also very good at linking you to ideas / papers that you might not have realized were related.

I also found it can be helpful when exploring your mathematical intuitions on something, e.g. how a dropout layer might affect learned weights and matrix properties, etc. Sometimes it will find some obscure rigorous math that can be very enlightening or relevant to correcting clumsy intuitions.


Apropos your account name, I just wanted to mention that I used various Xerox D machines back in the day. They were fun.


I'm an active researcher in TCS. For me, AI has not been very helpful on technical things (or even technical writing), but has been super helpful for (1) literature reviews; (2) editing papers (e.g., changing a convention everywhere in the paper); and (3) generating Tikz figures/animations.


I talk to them (math research in algebraic geometry); not really helpful outside of literature search, unfortunately. Others around me get a lot more utility, so it varies. (The most powerful models I tried were Gemini 2.5 Deep Think and Gemini 3.0 Pro.) Not sure if the new GPTs are much better.


I did a theoretical computer science PhD a few years ago and write one or two papers a year in industry. I have not had much success getting models to come up with novel ideas or even prove theorems, but I have had some success asking them to prove smaller and narrower results and using them as an assistant to read papers (why are they proving this result, what is this notation they're using, expand this step of their proof, etc). Asking it to find any bugs in a draft before Arxiving also usually turns up some minor things to clarify.

Overall: useful, but not yet particularly "accelerating" for me.


I work in quantum computing. There is quite a lot of material about quantum computing out there that these LLMs must have been trained on. I have tried a few different ones, but they all start spouting nonsense about anything that is not super basic.

But maybe that is just me. I have read some of Terence Tao's transcripts, and the questions he asks LLMs are higher complexity than what I ask. Yet, he often gets reasonable answers. I don't yet know how I can get these tools to do better.


This often feels like an annoying question to ask, but what models were you using?

The difference between free ChatGPT, GPT-5.2 Thinking, and GPT-5.2 Pro is enormous for areas like logic and math. Often the answer to bad results is just to use a better model.

Additionally, sometimes when I get bad results I just ask the question again with a slightly rephrased prompt. Often this is enough to nudge the models in the right direction (and perhaps get a luckier response in the process). However, if you are just looking at a link to a chat transcript, this may not be clear.


I have an OpenRouter account, so I can try different models easily. I have tried Sonnet, Opus, various versions of GPT, and Deepseek. There are certainly differences in quality. I also rephrase prompts all the time. But ultimately, I can't quite get them to work on quantum computing. It's far easier to get them to answer coding or writing related questions.


Both Erdos #728 and #729 were solved with the use of GPT-5.2 Pro. Lesser models have much worse performance on difficult problems like these.


"I don't yet know how I can get these tools to do better."

I have wondered if he has access to a better model than I, the way some people get promotional merchandise. A year or two ago he was saying the models were as good as an average math grad student when to me they were like a bad undergrad. In the current models I don't get solutions to new problems. I guess we could do some debugging and try prompting our models with this Erdos problem and see how far we get. (edit: Or maybe not; I guess LLMs search the web now.)


This was also my experience with certain algorithms in the realm of scheduling.


Which models did you try?


I’m a hobbyist math guy (with a math degree) and LLMs can at least talk a little talk or entertain random attempts at proofs I make. In general they rebuke my more wild attempts, and will lead me to well-trodden answers for solved problems. I generally enjoy (as a hobby) finding fun or surprising solutions to basic problems more than solving novel maths, so LLMs are fun for me.


As the other person said, Deep Research is invaluable; but hypothesis generation is not as good at the true bleeding edge of research. The OG ChatGPT 4.0 with no guardrails briefly generated outrageously amazing hypotheses that actually made sense. After that they have all been neutered beyond use in this direction.


My experience has been mixed. Honestly though, talking to AI and discussing a problem with it is better than doing nothing and just procrastinating. It's mostly wrong, but the conversation helps me think. In the end, once my patience runs out and my own mind has been "refreshed" through the conversation (even if it was frustrating), I can work on it myself. Some bits of the conversation will help but the "one-shot" doesn't exist. tldr: ai chatbots can get you going, and may be better than just postponing and procrastinating over the problem you're trying to solve.


They are good for a jump start on literature search, for sure.


The solution may just have to be technological literacy.


It really doesn't take any literacy to install Firefox and then uBlock Origin. Nothing else is needed; the default settings work just fine. Am I missing something?


It requires literacy to know that that’s an option and to know why it’s a good idea.

I’ve met plenty of tech illiterate but otherwise smart people who just use edge, or a mobile phone and whatever browser it has as a default.


GP mentioned a couple of user filters. If that's not technological literacy I have an SAP migration to sell you.


Try asking random people on NYC street if they have heard of Firefox and uBlock Origin.


A large portion of users (a majority, imo) think "web browser" is a specific app they open, rather than a type of app, and don't even understand that there are multiple different ones to choose from.


You need to be savvy enough to know how to deal with the inevitable "broken" site you run across (ideally by leaving and never returning, but sometimes that isn't an option).


Reminds me of The Lonely Island doing a song about being in Japan so that the record label would fly them out to Japan to record the music video.


Kinda the opposite: rapper Lil Dicky went door-to-door in a rich neighborhood, asking if he could borrow their house for a music video.

Eventually someone agreed.


An odd choice, for sure. Not much else to be said really.

