nicebyte's comments | Hacker News

shameless plug: if you want to understand the content of this post better, first read the first half of my article on jumps [1] (up to the syscall part). it goes into detail about relocations and position-independent code.

[1] https://gpfault.net/posts/asm-tut-4.html


including ai generated illustrations in your articles or presentations is very cringe


yeah, no. I mainlined dwm + dmenu all the way back in 200x; I've written tons of makefiles and have the scars to prove it.

These days I'm off this minimalism crap. It looks good on paper, but never survives collision with reality [1] (funny that this post is on the hn front page today as well!).

[1] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...


I like these tools because they are minimalist. I don't really care that they are C/make-oriented, and I'd rather help someone rewrite them in Go or Rust than show that I have a non-minimal amount of scar tissue from working with a needlessly complicated past.


my comment isn't about things being written using c/make/whatever, it's precisely about the faulty assumption that complexity is needless.


Oh, then I totally disagree (or I don't understand why you would need to see a psychoanalysis of a blacksmith to evaluate their offerings). Many projects have places that need some complexity, configuration, or advanced tools, but that doesn't imply the hardware store should stop selling average hammers, or make you wade through an aisle of crap from providers like Peloton to see if they better meet your needs.

(I.e., show me where in the article he replaced a standard tool like the hammer or pot with a complex one customized to exactly the problem he wanted to solve, or explain why that advanced tool wouldn't suck, given that there are a lot more details than one would expect.)


I just went back to fedora+gnome on my PCs from FreeBSD+(tiling wm). I think minimalism is good when your workflow is very focused and you already know the requirements for your stack. But if you have unexpected workflows coming in every day, the maintenance quickly becomes a burden. Gnome may not be perfect, but it's quite nice as a baseline for a working environment.


Same. I ran dwm for a long time. These days I just run Gnome. You can make it work very similarly to a tiling window manager, and all the random crap the world throws at you (printers, projectors, random other monitors, Java programs) "Just Works".


I bet 90% of the reason this is on the front page is the Berkeley mono font. the system itself sucks.


The first time it was posted I said: I hate the system, but I like the presentation.

The system is great if you like to remember the IPs of the sites you need instead of the URLs…


How did you draw that conclusion from reading the contents of the link? This is a benchmark.

> We evaluate model performance and find that frontier models are still unable to solve the majority of tasks.


I already knew a lot of what was written here but for some reason reading this made me uninstall bumble.


I was 11 or 12 when I first saw Clint Eastwood and the video + the song lived in my head rent free. Genius work, and aged well.


> they are an extremely unusual person and have spent upwards of $10,000

eh? doesn't the distilled+quantized version of the model fit on a high-end consumer-grade GPU?
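(Back-of-the-envelope sketch of why this seems plausible: a weights-only VRAM estimate for a 7B distilled model at 4-bit quantization versus the full 671B R1. The 20% overhead factor for KV cache and activations is a loose assumption, not a measured number.)

```python
def vram_gib(n_params_billions, bits_per_weight, overhead=1.2):
    """Rough VRAM estimate in GiB: model weights only, plus a
    ~20% fudge factor (assumed) for KV cache and activations."""
    weight_bytes = n_params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 2**30

print(f"{vram_gib(7, 4):.1f} GiB")    # 7B distilled model, 4-bit quant: ~3.9 GiB
print(f"{vram_gib(671, 4):.1f} GiB")  # full 671B R1, 4-bit quant: ~375 GiB
```

By this estimate the 7B distill fits comfortably on a consumer GPU, while the full model does not fit on any single consumer card.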


The "distilled+quantized versions" are not the same model at all, they are existing models (Llama and Qwen) finetuned on outputs from the actual R1 model, and are not really comparable to the real thing.
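(The distinction the thread is arguing over can be sketched in code. This is a toy illustration, not DeepSeek's actual training recipe: classic distillation trains the student to match the teacher's full output distribution via KL divergence, while plain supervised fine-tuning trains against a single hard label. The logit values below are made up.)

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student): the student learns the teacher's
    whole distribution over tokens, not just one 'correct' answer."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def finetune_loss(student_logits, target_index):
    """Plain cross-entropy against a single hard label, as in
    ordinary supervised fine-tuning on teacher-generated text."""
    q = softmax(student_logits)
    return -math.log(q[target_index])

teacher = [4.0, 1.0, 0.5]  # hypothetical teacher logits for one token
student = [3.0, 1.5, 0.2]  # hypothetical student logits
print(distillation_loss(student, teacher))
print(finetune_loss(student, 0))
```

Note the R1 "distills" were reportedly produced by fine-tuning on R1-generated outputs, which is closer to the second loss; that is part of why people dispute calling them distillation in the strict sense.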


That is semantics; they are strongly comparable in their inputs and outputs. Distillation is different from fine-tuning.

Sure, you could say that only running the 600+b model is running "the real thing"...


a distilled version running on another model architecture does not count as using "DeepSeek". It counts as running a Llama:7B model fine-tuned on DeepSeek.


That’s splitting hairs. Most people use “running locally” to mean running the model on your own hardware rather than the providing company’s.


Except you're not running the model locally, you're running an entirely different model that is deceptively named.

You can pretend it's R1, and if it works for your purpose that's fine, but it won't perform anywhere near the same as the real model, and any tests performed on it are not representative of the real model.


That’s a good point. Thanks!


Pretty sure this is just layman vs academic expert usage of the word conflicting.

For everyone who doesn’t build LLMs themselves, “running a Llama:7B model fine-tuned on DeepSeek” _is_ using DeepSeek, mostly on account of all the tools and files being named DeepSeek, and the fact that the tutorials aimed at casual users are all titled with equivalents of “How to use DeepSeek locally”.


> “running a Llama:7B model fine-tuned on DeepSeek” _is_ using Deepseek mostly on account of all the tools and files being named

Most people confuse mass and weight, that does not mean weight and mass are the same thing.


Ok, but it seemed pretty obvious to me that the OP was using the common vernacular and not the hyper specific definition.


ARM would have been a better example, because the number of people who care about RISC-V is a rounding error compared to x86 or ARM.


I've lived long enough to see pg turn into a boomer-ass uncle lmao.


It's also very funny that he decided to publish this _now_ of all times.


It's no coincidence that PG published this days before Trump's inauguration.

This is yet another Silicon Valley elite kowtowing to their new GOP overlords.


I understand the perception but I know pg well enough to say that I think it is very likely a coincidence. From everything I've seen, he isn't driven by such calculations.


[flagged]


You're probably limited because 75% of all of your account's posts have been flagged to death for being low-quality hate/rage posts. Read the site's rules and guidelines and be a better member.


(dang is also the main HN mod, as an FYI)

