Hacker News | scoresmoke's comments

Yes.


Hello, my name is George C. Parker and I have a bridge to sell you.


Before I buy, can you confirm the bridge is GDPR-compliant, AI-Act-ready, has a digital product passport, and passed its environmental impact assessment? Otherwise the local compliance officer will fine us before it even collapses.


>Before I buy, can you confirm the bridge is GDPR-compliant, AI-Act-ready, has a digital product passport, and passed its environmental impact assessment?

Great comment! We've added double-plus-good to your Palantir-Trumport account and 2% off your next Amazon purchase!


You might also consider a fast implementation of Elo and Bradley–Terry that I have been developing for some time: https://github.com/dustalov/evalica (Rust core, Python bindings, 100% test coverage, and a nice API).


would you consider JS bindings? should be easy to vibe code given what you have. bonus points if it runs in the browser (eg export the wasm binary). thank you!


I have been thinking about this for a while, and I think I'll vibe-code them. Not sure about WASM, though: the underlying libraries would all have to support it, and I am not sure all of them do at the same time.


In our case, training and inferencing the models takes days, while calculating all of the Elos takes about a minute, haha. So we didn't need to optimize the calculation.

But we did need to work on numeric stability!

I have our calculations here: https://hackmd.io/@-Gjw1zWMSH6lMPRlziQFEw/B15B4Rsleg

tl;dr: Wikipedia's formulation iterates on e^elo, which can underflow to zero or overflow to infinity. Iterating on elo itself stays between -4 and 4 in all of our observed pairwise matrices, so it's very well-bounded.
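The trick can be sketched in a few lines: a minimal Bradley–Terry MM iteration carried out on the log-strengths (the "elo" scale), recentered each step so the iterated values stay bounded. This is a toy sketch, not the linked derivation; the `wins` layout and function name are assumptions.

```python
import math

def bradley_terry_log(wins, iters=200):
    """MM iteration for Bradley-Terry on the log-strengths r (the 'elo'
    scale) rather than on pi = exp(r) directly.
    wins[i][j] = number of times item i beat item j (hypothetical layout)."""
    n = len(wins)
    r = [0.0] * n
    for _ in range(iters):
        pi = [math.exp(x) for x in r]
        new_r = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of item i
            denom = sum((wins[i][j] + wins[j][i]) / (pi[i] + pi[j])
                        for j in range(n) if j != i)
            new_r.append(math.log(w_i / denom))
        mean = sum(new_r) / n
        r = [x - mean for x in new_r]  # recenter: keeps r bounded
    return r
```

For matrices where every item has at least one win and one loss, this converges to the MLE up to the zero-mean constraint, without ever storing a runaway e^elo.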


I am working mostly on post-training and evaluation tasks, and I built Evalica as a convenient tool for my own use cases. The computation is fast enough not to bother the user, and the library does not stand in my way during the analysis.


GPT-2 follows the well-studied Transformer-decoder architecture, so the outcomes of this study might be applicable to more complicated models.


Ruff and uv are both excellent tools, which are developed by a VC-backed company, Astral: https://astral.sh/about. I wonder what their pitch was.


That's what I'm worried about. What if we start using Rye, bake it into our projects, and then Astral goes, "Sike! Now we will collect all information available to the Rye binary and charge $1 for each build"?


Rye-powered Python deployment platform a la Vercel?


Anaconda competitor? Many companies in this space start out by releasing new OSS tools and then turn into consultancy sweatshops.


The most important changes are deprecations of certain public APIs: https://numpy.org/devdocs/release/2.0.0-notes.html#deprecati...

One new interesting feature, though, is the support for string routines: https://numpy.org/devdocs/reference/routines.strings.html#mo...
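As a taste of what vectorized string routines look like, here is the long-standing np.char interface, which the new np.strings module mirrors with better dtype support (np.char is used below so the sketch also runs on pre-2.0 NumPy):

```python
import numpy as np

names = np.array(["alice", "bob", "carol"])

# Element-wise string ops without a Python-level loop:
upper = np.char.upper(names)           # -> ['ALICE', 'BOB', 'CAROL']
hits = np.char.startswith(names, "c")  # -> [False, False, True]
```

The same calls apply across the whole array at once, which is the point of pushing string routines into NumPy itself.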


Interesting that the new string library mirrors the introduction of variable-length string arrays in Matlab in 2016 (https://www.mathworks.com/help/matlab/ref/string.html).


> One new interesting feature, though, is the support for string routines

Sounds almost like they're building a language inside a language.


No. Native Python string ops suck in performance. String support is absolutely interesting and will enable abstractions for many NLP and LLM use cases without writing native C extensions.


> Native Python string ops suck in performance.

That's not true? Python's string implementation is very optimized; it probably has performance similar to C.


It is absolutely true that there is a massive amount of room for performance improvement in Python strings, and that performance is generally subpar due to implementation decisions and restrictions.

Strings are immutable, so there is no efficient truncation, concatenation, or modification of any kind; you're always reallocating.

There's no native support for a view of a string, so operations like iterating over windows or ranges have to allocate or throw away all the string abstractions.

By nature of how the interpreter stores objects, strings are always going to have an extra level of indirection compared to what you can do in a language like C.

Python strings have multiple potential underlying representations, and thus carry some overhead for managing those representations without exposing the details to user code.
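The reallocation point is easy to see: every += on a str conceptually builds a new object, which is why the idiomatic fix is to batch the pieces and join once. A toy illustration (CPython does have a narrow in-place optimization for +=, so measured numbers vary):

```python
pieces = ["x"] * 10_000

def concat_loop():
    s = ""
    for p in pieces:
        s += p  # each step may reallocate the whole buffer built so far
    return s

def concat_join():
    # one pass over the pieces, one final allocation
    return "".join(pieces)

assert concat_loop() == concat_join()
```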


There is a built-in memoryview. But it only works on bytes and other objects supporting the buffer protocol, not on strings.
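For bytes, that looks like this: slicing a memoryview is zero-copy, while slicing a str (which has no view type) always allocates a fresh object:

```python
data = b"abcdef" * 1000
mv = memoryview(data)

window = mv[3:3000]          # zero-copy view into data's buffer
assert window.obj is data    # still backed by the original bytes
assert window.tobytes() == data[3:3000]  # materialize only when needed

s = "abcdef" * 1000
sub = s[3:3000]              # str slicing allocates a new string every time
```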


stringzilla[1] has 10x perf on some string operations - maybe they don't suck, but there's definitely room for improvement

[1] - https://github.com/ashvardanian/StringZilla?tab=readme-ov-fi...


For NumPy applications you always have to box a value to get a new Python string. It's quite far from fast.


Yeah, operating on strings has historically been a major weak point of NumPy's. I'm looking forward to seeing benchmarks for the new implementation.


It's already very much a DSL, and has been for the decade-ish that I've used it.

They're not building a language. They're carefully adding a newly-in-demand feature to a mature, already-built language.


This one will be rough :|

> arange’s start argument is positional-only


Looks like that might get reverted [0].

[0] https://github.com/numpy/numpy/pull/25955


Does NumPy use the GPU?


No.

You may want to check out cupy

https://cupy.dev/


Thank you! I excluded the coding tasks, as most annotators don't possess this expertise. I trust them to compare pairs of dissimilar model outputs that require no specific skill beyond commonsense reasoning.

The only manual analysis was when I checked the passed/failed prompts of the top-performing model.


Yet they have a copyright notice in the footer: Copyright © 2023, Oracle and/or its affiliates.


It's all about timing and communities. In my echo chamber, everybody is currently making them a part of the product, but I'm interested in a more representative sample. Hiring usually precedes shipping.

Every success in your job search!


Thanks! I appreciate it. :)


I found code LLMs to be very useful for rewriting nested for-loops in Python as nice vectorized operations in NumPy. One has to be careful about unexpected array materialization, but generally it works really well and saves a lot of time.
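A typical before/after of that kind of rewrite (pairwise squared distances between rows; note that the broadcasted version materializes an (n, n, d) intermediate, exactly the kind of hidden allocation to watch for):

```python
import numpy as np

def pairwise_sq_loop(x):
    # Nested-loop version: O(n^2) Python-level iterations.
    n = len(x)
    out = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            out[i, j] = ((x[i] - x[j]) ** 2).sum()
    return out

def pairwise_sq_vec(x):
    # Vectorized rewrite via broadcasting. The diff array is (n, n, d):
    # much faster, but it materializes n*n*d floats at once.
    diff = x[:, None, :] - x[None, :, :]
    return (diff ** 2).sum(axis=-1)
```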


I have seen that many teams are trying to perform model routing by sending simpler queries to cheaper models and complex queries to more complex and expensive models, e.g., https://www.anyscale.com/blog/a-comprehensive-guide-for-buil...
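The routing pattern itself is small; all names below are hypothetical placeholders, just to show the shape of a difficulty-based router:

```python
def route(query, classify, cheap_model, strong_model, threshold=0.5):
    """Dispatch a query to a cheap or a strong model based on a
    difficulty score in [0, 1]. All callables here are placeholders."""
    difficulty = classify(query)
    model = strong_model if difficulty > threshold else cheap_model
    return model(query)

# Toy wiring: a length-based 'classifier' and two stub models.
reply = route(
    "What is 2 + 2?",
    classify=lambda q: min(len(q) / 200, 1.0),
    cheap_model=lambda q: "cheap: 4",
    strong_model=lambda q: "strong: 4",
)
```

In practice the classifier is the hard part: it has to be much cheaper than the strong model while rarely under-routing hard queries.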

Were these people laid off because of LLMs? It sounds pretty dystopian. Who were these people?

