
Honking is common across Brazil but not in the capital, Brasília. Signs at some entrances of the city read "Dear visitors, in Brasília we avoid honking".


I've had some fountain pens over the years, but I was never able to truly enjoy them or recommend them to anyone. I didn't find them practical as daily writing tools, nor especially fun as a hobby.

I really dislike the feeling that you need to be a bit careful with a tool. I want the peace of mind of being able to drop a pen nib-first onto the ground. They're also not great for writing on many types of paper, and they require some care and maintenance.

My experience getting into double-edge razors and nice shaving soaps was much better. They're not just small luxuries; they actually perform better and are more practical than the popular alternatives in almost every way.

(On the pen front, today I'm very satisfied with my "Kaweco LILIPUT Ball Pen Stainless Steel": it's super compact, has a nice weight to it, and just feels well-constructed and solid. I hope to use it for many years to come. If you want to get one, beware the Aluminium version, which looks identical but is noticeably lighter.)


> My experience getting into double-edge razors

It’s called a safety razor, if I understood you correctly.

Also, it’s quite hard to write with one; I’ll stick to fountain pens.


If you like to write your ransom notes in blood it’s just the thing.


Eh. I think I see where you're going, but that spacing is really hard to get right and it would clog like nobody's business. You're better off just going with regular ink in the 6mm Parallel.


> They're not just small luxuries; they actually perform better and are more practical than the popular alternatives in almost every way.

Most of us who use fountain pens feel this way too.

Literally just an hour ago, I tried picking up a gel pen for writing, and 3 minutes later it went back into storage. It's a Uniball One, so it's not a bad gel pen either.


I think equally impressive is the performance of the OpenAI team at the "AtCoder World Tour Finals 2025" a couple of days ago. There were 12 human participants and only one did better than OpenAI.

Not sure there is a good writeup about it yet, but here is the livestream: https://www.youtube.com/live/TG3ChQH61vE.


And yet, when working on production code, current LLMs are about as good as a poor intern. Not sure why the disconnect.


Depends. I’ve been using it for some of my workflows, and I’d say it’s more like a solid junior developer with weird quirks: sometimes it makes stupid mistakes, and other times it behaves like a 30-year SME vet.


I really doubt it's like a "solid junior developer". If it could do the work of a solid junior developer, it would be making programming projects go 10-100x faster, because it can do things several times faster than a person can. Maybe it can write solid code for certain tasks, but that's not the same thing as being a junior developer.


It can be 10-100x faster for some tasks already. I've had it build prototypes in minutes that would have taken me a few hours to cobble together, especially in domains and using libraries I don't have experience with.


It’s the same reason LeetCode problems are bad interview questions. Being good at these sorts of problems doesn’t translate directly to being good at writing production code.


Because competitive coding is a narrow, well-described domain (a limited number of concepts: lists, trees, etc.) with a high volume of data available for training, and because it's easy to set up an RL feedback loop, models can improve a lot in this domain. None of that is true of typical bloated enterprise software.


Everything you said is true. Keep in mind this is the "Heuristics" competition, not the "Algorithms" one.

Instead of the more traditional LeetCode-like problems, it's things like optimizing scheduling or clustering according to some loss function. Think simulated annealing or pruned searches.
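
To give a flavor, here's a toy sketch of simulated annealing on a made-up scheduling loss (my own illustration, not an actual contest problem): assign jobs to machines so that the maximum machine load is minimized.

    # Toy simulated annealing for a made-up scheduling problem:
    # assign 50 jobs to 5 machines, minimizing the max machine load.
    import math, random

    random.seed(0)
    jobs = [random.randint(1, 100) for _ in range(50)]   # job durations
    M = 5                                                # machines
    assign = [random.randrange(M) for _ in jobs]         # random initial solution

    def loss(a):
        loads = [0] * M
        for duration, machine in zip(jobs, a):
            loads[machine] += duration
        return max(loads)                                # the value to minimize

    cur = loss(assign)
    T = 100.0                                            # temperature
    for _ in range(20000):
        i = random.randrange(len(jobs))                  # random move:
        old = assign[i]                                  # reassign one job
        assign[i] = random.randrange(M)
        new = loss(assign)
        # always accept improvements; accept worsenings with prob exp(-delta/T)
        if new <= cur or random.random() < math.exp((cur - new) / T):
            cur = new
        else:
            assign[i] = old                              # revert the move
        T *= 0.9995                                      # cool down

    print(cur)  # close to the ideal sum(jobs) / M

Real contest losses are far messier (and scoring is relative to the other contestants), but the accept/reject loop has the same shape.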


Dude thank you for stating this.

OpenAI's o3 model can solve very standard Codeforces problems, even ones rated up to 2700 that it's been trained on, but it's unable to think from first principles to solve problems I've set that are rated ~1600. Those 2700-rated algorithm problems correspond to obscure pages on the competitive programming wiki, so it's able to solve them with knowledge alone.

I am still not very impressed with its ability to reason, both on Codeforces and in software engineering. It's a very good database of information and a great searcher, but not a truly good first-principles reasoner.

I also wish o3 was a bit nicer - its "reasoning" seems to have made it more arrogant at times, even when it's wildly off, and that kind of annoys me.

Ironically, this workflow has really separated for me what is the core logic I should care about and what I should google, which is always a skill to learn when traversing new territory.


Not completely sure how your reply relates to my comment. I was just mentioning that the competition is on Heuristics, which is different from what you find on CF or most coding competitions.

About the performance of AI in competitions, I agree that what's difficult for it is different from what's difficult for us.

Problems that are just applying a couple of obscure techniques may be easier for them. But some problems I've solved required a special kind of visualization/intuition, which I can see being hard for AI. Then again, I'd say the same of many Math Olympiad problems, and they seem to be doing fine there.

I've almost accepted it's a matter of time before they become better than most/all of the best competitors.

For context, I'm a CF Grandmaster but haven't played much with newer models so maybe I'm underestimating their weaknesses.


https://www.phonearena.com/phones/size/Apple-iPhone-13-mini,...

I was convinced you were wrong but that's correct. The Mini is much smaller and the Zenfone is about the same size as the regular iPhone.


Sivers' list has introduced me to many great books. I can recommend "Sum: Forty Tales from the Afterlives" by David Eagleman, the fourth book on the page you linked.


I really like the Poisson distribution. A very interesting question I came across once:

A given event happens once every 10 minutes on average. We can see that:

- The expected length of the interval between events is 10 minutes.

- At a random moment in time the expected wait until the next event is 10 minutes.

- At the same moment, the expected time passed since the last event is also 10 minutes.

But then we would expect the interval between two consecutive events to be 10 + 10 = 20 minutes long. But we know intervals are 10 minutes long on average. What happened here?

The key is that by picking a random moment in time, you're more likely to fall into a bigger interval. Sampling a random point in time, the average interval you fall into really is 20 minutes long; sampling a random interval, it is 10.

Apparently this is called the Waiting Time Paradox.
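
A quick Monte Carlo check (my own sketch, using exponential gaps, i.e. a Poisson process) makes both numbers concrete:

    # Inspection / waiting-time paradox: events with exponential gaps
    # of mean 10 minutes.
    import random

    random.seed(0)
    MEAN = 10.0
    gaps = [random.expovariate(1 / MEAN) for _ in range(1_000_000)]

    # Average length of a randomly sampled interval:
    print(sum(gaps) / len(gaps))                    # ~10

    # Average length of the interval containing a random point in time:
    # a gap of length g is hit with probability proportional to g,
    # so this expectation is E[g^2] / E[g].
    print(sum(g * g for g in gaps) / sum(gaps))     # ~20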


> What happened here?

You went astray when you declared the expected wait and the expected time passed.

Draw a number line. Mark it at intervals of 10. Uniformly randomly select a point on that line. The expected average wait and time passed (i.e. the forward and reverse directions) are both 5, not 10. The range is 0 to 10.

When you randomize the event occurrences but maintain the interval as an average, you change the range maximum and the overall distribution across the range, but not the expected average values.


When you randomize the event occurrences, you create intervals that are shorter and longer than average, so that a random point is more likely to be in a longer interval, and so the expected length of the interval containing a random point is greater than the expected length of a random interval.

To see this, consider just two intervals of length x and 2-x, i.e. 1 on average. A random point is in the first interval x/2 of the time and in the second one the other 1-x/2 of the time, so the expected length of the interval containing a random point is x/2 * x + (1-x/2) * (2-x) = x² - 2x + 2, which is 1 for x = 1 but larger everywhere else, reaching 2 for x = 0 or 2.
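
A quick numeric check of that formula (my sketch), for x = 0.5:

    # Two intervals of lengths x and 2 - x tile [0, 2]; a uniform point
    # lands in an interval of expected length x^2 - 2x + 2.
    import random

    random.seed(0)
    x = 0.5
    hits = [x if random.uniform(0, 2) < x else 2 - x
            for _ in range(1_000_000)]
    print(sum(hits) / len(hits))   # ~1.25 = 0.5**2 - 2*0.5 + 2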


I think I understand my mistake. As the variance of the intervals widens, the average interval length remains the same, but the expected average distances for a sample point change. (For some reason I thought the average distances wouldn't change. I'm not sure why.)

Your example illustrates it nicely. A more intuitive way of illustrating the math might be to suppose 1 event per 10 minutes on average, but with the events always happening in pairs simultaneously (a 20-minute gap), or in triplets simultaneously (a 30-minute gap), etc.

So effectively the earlier example that I replied to is the birthday paradox, with N people, sampling a day at random, and asking how far from a birthday you expect to be on either side.

If that counts as a paradox then so does the number of upvotes my reply received.


If it wasn't clear, their statements are all true when the events follow a Poisson distribution, i.e. have exponentially distributed waiting times.


The way I understand it is that with a Poisson process, at every small moment in time there's a small chance of the event happening. This leads to, on average, lambda events occurring during every (larger) unit of time.

But this process has no “memory” so no matter how much time has passed since the last event, the number of events expected during the next unit of time is still lambda.
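
That memorylessness is easy to check numerically (my own sketch): having already waited s minutes doesn't change the distribution of the remaining wait.

    # Memorylessness of exponential waits: P(W > s+t | W > s) == P(W > t).
    import random

    random.seed(0)
    MEAN, s, t = 10.0, 7.0, 5.0
    waits = [random.expovariate(1 / MEAN) for _ in range(1_000_000)]

    p_uncond = sum(w > t for w in waits) / len(waits)
    over_s = [w for w in waits if w > s]
    p_cond = sum(w > s + t for w in over_s) / len(over_s)

    print(p_uncond, p_cond)   # both ~exp(-5/10), about 0.607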


From the last event to this event = 10, from this event to the next event = 10, so the time between the first and the third event is 20. Where is the surprise in the Waiting Time Paradox? Surely I must be missing some key ingredient here.


The random moment we picked in time is not necessarily an event. The expected time between the event to your left and the one to your right (they're consecutive) is 20 minutes.


I think we must use conditional probability, that is, the integral of p(X|A)P(A): for example, the probability that the prior event was 5 minutes ago times the probability that the next one is 10 minutes from the previous one (that is, 1/2). This is like a Markov chain: the probability of the next state depends on the current state.


The poster acknowledges this: "I will lose thousands".

https://www.elitefourum.com/t/many-of-the-pokemon-playtest-c...


A few years ago I saw a talk at a Math conference about some mathematical models for how the shapes of snowflakes come to be.

I don't recall the details, but I believe one of them was even able to generate non-hexagonal snowflakes which happen under some circumstances.

I've been hoping to create a website around one of these models for some time now.


Cool, I didn't know that could happen!

I started with 12 sections (i.e. 6 lobes/mirrored sections), then wanted to play around with other section counts without having to redraw.

I think 12 is for the more advanced paper snowflake maker, although I think 8 can look cool too.

I wonder what the section count was for the non-hexagonal snowflakes in nature. If you come across it, I'd love to include it!

If you create a snowflake site, I would love to try it!


I was more surprised by the fact that "Barbie" was said more times than "it", even though all of the "wrong" instances of "it" were counted as well.


It's possible there are just more lines of dialogue in Barbie than in It, given the conventions of each genre. I haven't seen It, but I assume that, it being a horror film, there are longer periods with no dialogue for suspense, etc.


Barbie also has multiple characters named Barbie; there are times when Barbie is said three or four times in a single paragraph, and even a sequence that's just a complete graph of Barbies saying "Hi Barbie" to each other.


Traveling to the US recently, I was surprised to see Claude ads around the city and in the airport. It seems like they're investing in marketing there.

In my country I've never seen anyone mention them at all.


Been traveling more recently, and I've seen those ads in major cities like NYC or San Francisco, but not Miami.

