
There is a lot of stickiness associated with Apple products, be it their walled garden, better hardware, or brand recognition. This is especially true in the American market.

While Polars is better if you work with predefined data formats, pandas is IMO still better as a general-purpose table container.

I work with chemical datasets, and this always involves converting SMILES strings to RDKit Molecule objects. Polars cannot do this as simply as calling .map in pandas.
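For illustration, a minimal sketch of that pandas pattern (the column name "smiles" is hypothetical):

    import pandas as pd
    from rdkit import Chem

    df = pd.DataFrame({"smiles": ["CCO", "c1ccccc1", "CC(=O)O"]})
    # one .map call converts each SMILES string into an RDKit Mol object
    df["mol"] = df["smiles"].map(Chem.MolFromSmiles)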

Pandas is also much better for EDA, so calling it worse in every instance is not true. If you are doing pure data manipulation, then go ahead with Polars.


Map is one operation pandas does nicely that most other “wrap a fast language” dataframe tools do poorly.

When it feels like you're writing some external UDF that's executed in another environment, it does not feel as nice as throwing in a lambda, even if the lambda is not ideal.


You have map_elements in Polars, which does exactly this.

https://docs.pola.rs/api/python/dev/reference/expressions/ap...
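A rough sketch of what the RDKit example above could look like, assuming map_elements with an Object return dtype behaves as documented:

    import polars as pl
    from rdkit import Chem

    df = pl.DataFrame({"smiles": ["CCO", "c1ccccc1"]})
    df = df.with_columns(
        pl.col("smiles")
          .map_elements(Chem.MolFromSmiles, return_dtype=pl.Object)
          .alias("mol")
    )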

You can also iter_rows into a lambda if you really want to.

https://docs.pola.rs/api/python/stable/reference/dataframe/a...

Personally, I find it extremely rare that I need to do this, given that Polars expressions are so comprehensive, including when.then.otherwise when all else fails.
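For example, a minimal when.then.otherwise sketch (column names are made up):

    import polars as pl

    df = pl.DataFrame({"x": [1, 5, 10]})
    df = df.with_columns(
        pl.when(pl.col("x") > 4)
          .then(pl.lit("big"))
          .otherwise(pl.lit("small"))
          .alias("size")
    )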


That one has a bit more friction than pandas because of the return schema requirement; pandas lets you get away with this bad practice.

It also does batches when you declare scalar outputs, but you can't control the batch size. That usually isn't an issue, but I've run into situations where it is.


I decided to vibe code something myself last week at work. I've been wanting to create a PoC that involves a coding agent creating custom Bokeh plots that a user can interact with and ask follow-up questions about. All of this had to be served using the HoloViews Panel library.

At work I only have access to Claude through the GitHub Copilot integration, so this could be the cause of my problems. Claude was able to get the first iteration up pretty quickly. At that stage the app could create a plot, and you could interact with it and ask follow-up questions.

Then I asked it to extend the app so that it could generate multiple plots and the user could interact with all of them one at a time. It made a bunch of changes, but the feature was never implemented. I asked it to try again but got the same outcome. I completely accept that it could all be because I am using VS Code Copilot, or because my prompting skills are not good, but the LLM got 70% of the way there and then completely failed.


> At work I only have access to Claude through the GitHub Copilot integration, so this could be the cause of my problems.

You really need to at least try Claude Code directly instead of using Copilot. My work gives us access to Copilot, Claude Code, and Codex. Copilot isn't close to the other, more agentic products.


The VS Code Copilot extension's harness is not great, but Opus 4.5 with the Copilot CLI works quite well.


Do they manage context differently or have different system prompts? I would assume a lot of that would be the same between them. I think GitHub Copilot's biggest shortcoming is that it is too token-cheap, aggressively managing context to the detriment of the results. Watching Claude read a 500-line file in 100-line chunks just makes me sad.


> "[Specialized expertise is] the antithesis of democracy."

> "Democracy works best when men and women do things for themselves, with the help of their friends and neighbors, instead of depending on the state."

These are nice sentiments, but they do not work in the real world. At a certain point, some problems are too complex for a regular person to understand.


If the world is too complex for a “regular person” to understand, then universal suffrage is a mistake.

Just say what you mean: you want technocracy or some other non-representative, non-democratic form of government.


That seems like a radical reading of the text.

It is impossible for every citizen to fully understand every scientific issue. Part of living in a society—in fact, one of the primary purposes of living in a society—is having different people specialize in different things, and trusting each other to actually be good at what they specialize in.

None of this implies that people don't know enough to vote.

Indeed, to the best of my knowledge, the available evidence suggests that a major part of the problem right now is people's votes being suppressed and people being poorly represented by their supposed representatives (both due to deliberate gerrymandering, and more simply due to the fact that the size of the House of Representatives was capped in the early 20th century, leading to one person representing hundreds of thousands or more, rather than the ~10k or so each they represented prior to the cap).


The third graph is interesting. Once model performance rises above the human baseline, the growth seems to be logarithmic instead of exponential.


Would all AI be hell-bent on world domination because that's what it learnt over and over again in its training data?


The thing they are testing for is reasoning performance. It makes sense to not give tool access.

This is the same as the critiques of the LLM paper by Apple, where they showed that LLMs fail to solve the Tower of Hanoi problem past a certain number of disks. The test was to see how well these models could reason through a long task. People online argued that the models could solve the problem if they had access to a coding environment, but again, the test was to check reasoning capability, not whether the model knew how to code an algorithm to solve the problem.
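For context, the algorithm itself is a textbook recursion, which is exactly why a coding environment would sidestep the reasoning test; a minimal sketch:

    def hanoi(n, src, dst, aux):
        """Print the moves that transfer n disks from src to dst."""
        if n == 0:
            return
        hanoi(n - 1, src, aux, dst)   # move n-1 disks out of the way
        print(f"move disk {n}: {src} -> {dst}")
        hanoi(n - 1, aux, dst, src)   # move them onto the target peg

    hanoi(3, "A", "C", "B")  # 2**3 - 1 = 7 moves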

If model performance degrades a lot after a number of reasoning steps, it's good to know where the limits are. Whether the model had access to tools is orthogonal to this problem.


It's selecting a random word from a probability distribution over words, and that distribution is crafted by the LLM. The random sampler is not going to choose a word with 1e-6 probability anytime soon. Besides, with thinking models, the LLM has the ability to correct itself, so it's not like the model is at the mercy of a random number generator.
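A quick sketch of why low-probability tokens effectively never get sampled (the distribution here is made up):

    import numpy as np

    rng = np.random.default_rng(0)
    # hypothetical next-token distribution produced by a model
    probs = np.array([0.7, 0.2, 0.0999990, 0.000001])
    probs /= probs.sum()
    draws = rng.choice(len(probs), size=1_000_000, p=probs)
    print(np.bincount(draws, minlength=len(probs)) / len(draws))
    # the 1e-6 token shows up roughly once per million draws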


Could you tell me a bit about how you were able to ensure the model is close to the cache?


the secret is to keep things ˢᵐᵒˡ


I moved to Quart instead. It's Flask with async support, built by the same developer.
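A minimal sketch of what that looks like (route and handler names are illustrative); the API mirrors Flask, just with async views:

    from quart import Quart

    app = Quart(__name__)

    @app.route("/")
    async def index():
        return "hello from an async view"

    if __name__ == "__main__":
        app.run()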


Quart was interesting, but it didn't seem to have as much traction as FastAPI. I also understand that Flask is trying to integrate some of Quart's ideas.

