clusterhacks's comments | Hacker News

"I personally dropped $20k on a high end desktop . . . "

This is where I think current hackers should be headed. I grew up with lots of family who were backyard mechanics, wrenching on cars and motorcycles. Their investment in tools made my occasional PC purchase look extremely affordable. Based on what I read, senior mechanics often have five-figure US dollar investments in tools. Of course, I guess high quality torque wrenches probably outlast current GPU chips? I'd hate to be stuck making a $10K investment every 24 months on a new GPU . . .

I have been renting GPU resources and running open weight models, but recently my preferred provider simply doesn't have hardware available. I'm now kicking myself a little for not simply making a big purchase last fall when prices were better.


Professional mechanics might do that, but a home mechanic can get very far on one $200 set, plus another $300 spent over the years picking up a few useful things for each project.

I've replaced transmissions and head gaskets and done all the work on our family cars for two decades with a Costco toolkit, plus 20 trips to the auto parts store or Walmart when I needed something to help out.

Maybe I'm being a little forgetful: yes, I bought a jack and jack stands, and I have a random pipe as a breaker bar, plus other odds and ends. But you can go very far for $1k as a DIYer.


> I can spot a person's social media app of choice in 5 minutes.

I find this sadly hilarious. What are the current tells you see? I'm similar in that I read a lot of HN and don't have other social media accounts. But I couldn't even guess at what a person's preferred social media is.


I can spot it because I used the various social media apps in the past for a time.

X users will start intensely talking about societal/political issues in the first 5 minutes of introduction.

Facebook users will often belong to the conservative political party of any given country and will start talking about one of the numerous conspiracy theories that provide a simplistic and satisfying, yet false, explanation of the complex reality of the current world.

Instagram users will almost always have the implicit belief that the most important thing in life is to be rich or a celebrity. The platform just implants that into their mind. It takes a bit of getting to know the person to see that.

Snapchat users are teens/college/youth who are usually very social.

Reddit users? I can spot them by their looks, the way they talk, or their writing. Obviously not 100% accurate, but Reddit is by far the platform with the highest hypersocialization effect.

TikTok users have a secret language constructed from a large repertoire of memes and will constantly reference them when talking. Some of the memes they talk about are 10+ years old. As a young person who has always avoided social media, honestly it's hard to communicate with some of my peers because I don't get the memes.


Good grief. I'm here cautiously telling my workplace to buy a couple of DGX Sparks for dev/prototyping and you have better hardware in hand than my entire org.

What kind of experiments are you doing? Did you try out Exo with a DGX doing prefill and the Mac doing decode?

I'm also totally interested in hearing what you have learned working with all this gear. Did you buy all this stuff out of pocket to work with?


Yeah, Exo was one of the first things to try - the Mac Studio has decent throughput at the level of a 3080, great for token generation, and the Sparks have decent compute, either for prefill or for running non-LLM models that need compute (Segment Anything, Stable Diffusion, etc.). The RTX 6000 Pro just crushes them all (it's essentially like having 4x3090 in a single GPU). I bought 2 Sparks to also play with Nvidia's networking stack and learn their ecosystem, though they are a bit of a mixed bag as they don't expose some Blackwell-specific features that make a difference. I bought it all to be able to run local agents (I write AI agents for a living) and develop my own ideas fully. Also, I was wrapping up grad studies at Stanford, so the hardware came in handy for some projects there. I bought it all out of pocket but can amortize it on my taxes.


Building AI agents for a living is what I hope to do too; I consider myself still in the learning phase. I have talked with some potential customers (small orgs, freelancers) and learned that local inference would unlock opportunities that otherwise face hard-to-tackle compliance barriers.


Very cool - thanks for the info.

That you are writing AI agents for a living is fascinating to hear. We aren't even really looking at how to use agents internally yet. I think local agents are completely off the radar at my org, despite potentially being really good supplemental resources for internal apps.

What's deployment look like for your agents? You're clearly exploring a lot of different approaches . . .


My commercial agents are just wrappers on top of GPT/Claude/Gemini, so the standard deployment paths on Azure/AWS/GCP apply, with integrations to whatever systems customers have, like JIRA, Confluence, etc. Some need to automate away folks doing repetitive work, some need to improve time to delivery with their people swamped by incoming work, some hope to accelerate cognitively demanding tasks, etc.


Wow, thanks for the link to Texereau. I had no idea a PDF was floating around and have wanted this book for some time. Your video looks interesting, especially the part around Ronchi and Foucault testing. I have 'Understanding Foucault' but have to admit that reading it doesn't give me confidence.

One question I always think about is how much time and effort a "one-time" mirror maker should plan on spending to exceed the quality of a generic 8" or 10" F/5-F/7 mirror available from the Chinese mirror makers.

Zambuto seems to imply that whatever magic happens for his mirrors might be in very long, machine driven polishing to smooth out the final surface imperfections that cause scatter. With his retirement and with few mirror makers in the US, it seems like options for buying "high end" mirrors in the 6"- 10" size are very limited. I have been debating an 8" F/7 and would love to just purchase a relatively high quality mirror, but most of the mirror makers seem more taken with significantly larger mirrors.


Watch your local craigslist or facebook marketplace. With a little patience, you will probably find a good 8" or 10" dobsonian at a great price. I picked up a lovely 8" dob for less than $200. Most of the generic 8" F/6 dobsonians seem pretty decent.

Or check your local library. It may have a smaller Starblast table-top dobsonian you can check out - I did that when traveling once.

Whatever you do, do NOT buy a small cheap refractor on some flimsy mount. They are mostly awful.


You know, I haven't even been thinking about those AMD gpus for local llms and it is clearly a blind spot for me.

How is it? I'd guess a bunch of the MoE models actually run well?


I've been running local models on an AMD 7800 XT with ollama-rocm. I've had zero technical issues. It's really just that the usefulness of a model constrained to 16GB of VRAM + 64GB of main RAM is questionable, but that isn't an AMD-specific issue. It was a similar experience running locally with an Nvidia card.


All those choices seem to have very different trade-offs? I hate $5,000 as a budget - not enough to launch you into higher-VRAM RTX Pro cards, too much (for me personally) to just spend on a "learning/experimental" system.

I've personally decided to just rent systems with GPUs from a cloud provider and set up SSH tunnels to my local system. I mean, if I was doing some more HPC/numerical programming (say, similarity search on GPUs :-) ), I could see just taking the hit and spending $15,000 on a workstation with an RTX Pro 6000.

For grins:

Max t/s for this and smaller models? RTX 5090 system. Barely squeezing in for $5,000 today, and given RAM prices, maybe not actually possible tomorrow.

Max CUDA compatibility, slower t/s? DGX Spark.

Ok with slower t/s, don't care so much about CUDA, and want to run larger models? Strix Halo system with 128GB unified memory; order a Framework desktop.

Prefer Macs, might run larger models? M3 Ultra with memory maxed out. Better memory bandwidth, and Mac users seem to be quite happy running locally for just messing around.

You'll probably find better answers heading off to https://www.reddit.com/r/LocalLLaMA/ for actual benchmarks.


> I've personally decided to just rent systems with GPUs from a cloud provider and set up SSH tunnels to my local system.

That's a good idea!

Curious about this, if you don't mind sharing:

- what's the stack? (Do you run something like llama.cpp on that rented machine?)

- what model(s) do you run there?

- what's your rough monthly cost? (Does it come out much cheaper than if you called the equivalent paid APIs?)


I ran ollama first because it was easy, but now I download the source and build llama.cpp on the machine. I don't bother saving a filesystem between runs on the rented machine; I build llama.cpp every time I start up.

I am usually just running gpt-oss-120b or one of the Qwen models. Sometimes Gemma? These are mostly "medium" sized in terms of memory requirements - I'm usually trying unquantized models that will easily run on a single 80-ish GB GPU because those are cheap.

I tend to spend $10-$20 a week. But I am almost always prototyping or testing an idea for a specific project that doesn't require me to run 8 hrs/day. I don't use the paid APIs for several reasons but cost-effectiveness is not one of those reasons.


I know you say you don't use the paid APIs, but renting a GPU is something I've been thinking about, and I'd be really interested in knowing how this compares with paying by the token. I think gpt-oss-120b is $0.10/input and $0.60/output per million tokens on Azure. In my head that could go a long way, but I haven't used gpt-oss agentically long enough to really understand usage. Just wondering if you know/would be willing to share your typical usage/token spend on that dedicated hardware?


For comparison, here's my own usage with various cloud models for development:

  * Claude in December: 91 million tokens in, 750k out
  * Codex in December: 43 million tokens in, 351k out
  * Cerebras in December: 41 million tokens in, 301k out
  * (obviously those figures above are so far in the month only)
  * Claude in November: 196 million tokens in, 1.8 million out
  * Codex in November: 214 million tokens in, 4 million out
  * Cerebras in November: 131 million tokens in, 1.6 million out
  * Claude in October: 5 million tokens in, 79k out
  * Codex in October: 119 million tokens in, 3.1 million out
As for Cerebras in October, I don't have the data because they don't show the Qwen3 Coder model that was deprecated, but it was way more: https://blog.kronis.dev/blog/i-blew-through-24-million-token...

In general, I'd say that for the stuff I do my workloads are extremely read heavy (referencing existing code, patterns, tests, build and check script output, implementation plans, docs etc.), but it goes about like this:

  * most fixed cloud subscriptions will run out really quickly and will be insufficient (Cerebras being an exception)
  * if paying per token, you *really* want the provider to support proper caching, otherwise you'll go broke (see the rough cost arithmetic after this list)
  * if you have local hardware, that is great, but it will *never* compete with the cloud models, so your best bet is to run something good enough, basically cover all of your autocomplete needs; also, with tools like KiloCode, an advanced cloud model can do the planning, a simpler local model can do the implementation, and then the cloud model can validate the output
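For a rough sense of scale on the per-token side (purely illustrative; it uses the $0.10/$0.60 per million token figure for gpt-oss-120b mentioned above, which I haven't verified, and it ignores caching discounts), pricing my November Claude volume at those rates works out like this:

    # Rough, illustrative arithmetic: November Claude-sized token volume priced
    # at the (unverified) Azure gpt-oss-120b rates mentioned earlier in the thread.
    input_tokens_m = 196    # million tokens in (November, Claude row above)
    output_tokens_m = 1.8   # million tokens out
    price_in = 0.10         # USD per million input tokens (assumed)
    price_out = 0.60        # USD per million output tokens (assumed)
    cost = input_tokens_m * price_in + output_tokens_m * price_out
    print(f"~${cost:.2f} for the month")  # ~$20.68

The frontier cloud models are priced far higher per token than that, so treat this only as a ballpark for what heavy agentic token volume looks like at cheap open-weight rates.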


This is the perfect use case for local models. It's why we set out to create cortex.build! A local LLM


Sorry, I don't much track or keep up with those specifics other than knowing I'm not spending much per week. My typical scenario is to spin up an instance that costs less than $2/hr for 2-4 hours. It's all just exploratory work really. Sometimes I'm running a script that is making a call to the LLM server api, other times I'm just noodling around in the web chat interface.


I don't suppose you have (or would be interested in writing) a blog post about how you set that up? Or maybe a list of links/resources/prompts you used to learn how to get there?


No, I don't blog. But I just followed the docs for starting an instance on lambda.ai and the llama.cpp build instructions. Both are pretty good resources. I had already set up an SSH key with Lambda, and the Lambda OS images are Linux pre-loaded with CUDA libraries on startup.

Here are my lazy notes + a snippet of the history file from the remote instance for a recent setup where I used the web chat interface built into llama.cpp.

I created an instance gpu_1x_gh200 (96 GB on ARM) at lambda.ai.

Connected from a terminal on my box at home and set up the SSH tunnel:

ssh -L 22434:127.0.0.1:11434 ubuntu@<ip address of rented machine - can see it on lambda.ai console or dashboard>
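(The -L flag there forwards local port 22434 to port 11434 on the rented machine, which is the port I give llama-server below.)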

  Started building llama.cpp from source, history:    
     21  git clone   https://github.com/ggml-org/llama.cpp
     22  cd llama.cpp
     23  which cmake
     24  sudo apt list | grep libcurl
     25  sudo apt-get install libcurl4-openssl-dev
     26  cmake -B build -DGGML_CUDA=ON
     27  cmake --build build --config Release 
MISTAKE on 27: single-threaded and slow to build; see -j 16 below for a faster build

     28  cmake --build build --config Release -j 16
     29  ls
     30  ls build
     31  find . -name "llama.server"
     32  find . -name "llama"
     33  ls build/bin/
     34  cd build/bin/
     35  ls
     36  ./llama-server -hf ggml-org/gpt-oss-120b-GGUF -c 0 --jinja
MISTAKE, didn't specify the port number for the llama-server

     37  clear;history
     38  ./llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking -c 0 --jinja --port 11434
     39  ./llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking.gguf -c 0 --jinja --port 11434
     40  ./llama-server -hf Qwen/Qwen3-VL-30B-A3B-Thinking-GGUF -c 0 --jinja --port 11434
     41  clear;history
I switched to qwen3 vl because I need a multimodal model for that day's experiment. Lines 38 and 39 show me not using the right name for the model. I like how llama.cpp can download and run models directly off of huggingface.

Then I pointed my browser at http://localhost:22434 on my local box and had the normal browser window where I could upload files and use the chat interface with the model. That also gives you an OpenAI API-compatible endpoint. It was all I needed for what I was doing that day. I spent a grand total of $4 that day doing the setup and running some NLP-oriented prompts for a few hours.
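If you want to hit that endpoint from a script instead of the browser, here's a minimal sketch (assuming the openai Python package on the local box and the tunnel above; the model name is just whatever llama-server happened to load, so adjust it):

    # Minimal sketch: call llama-server's OpenAI-compatible endpoint through
    # the SSH tunnel (local port 22434 -> remote 11434). Assumes `pip install openai`.
    from openai import OpenAI

    # llama-server doesn't check the API key by default, so a placeholder works.
    client = OpenAI(base_url="http://localhost:22434/v1", api_key="not-needed")

    resp = client.chat.completions.create(
        model="Qwen/Qwen3-VL-30B-A3B-Thinking-GGUF",  # whatever model the server loaded
        messages=[{"role": "user", "content": "Summarize this paragraph in one sentence: ..."}],
    )
    print(resp.choices[0].message.content)

That's essentially the pattern when I'm running a script against the server API instead of using the chat UI.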


Thanks, much appreciated.


> whole point of the time compression is to spread the grades out

I suspect that is true for standardized tests like the SAT, ACT, or GRE.

I suspect in classroom environments that there isn't any intent at all on test timing other than most kids will be able to attempt most problems in the test time window. As far as I can tell, nobody cares much about spreading grades out at any level these days.


I share your paranoia.

My kids use personal computing devices for school, but their primary platform (just like their friends) is locked-down phones. Combining that usage pattern with business incentives to lock users into walled gardens, I kind of worry we are backing into the destruction of personal computing.


Why?

How strong is the argument that a student who completes a test in 1 hour with the same score as a student who took 10 hours performed "better" or had a greater understanding of the material?


> Why?

Teachers have lives, including needing to eat and sleep.


Sure, but that answer doesn't address the questions of the value of time limits on assessment.

What if instead we are talking about a paper or project? Why isn't time-to-complete part of the grading rubric?

Do we penalize a student who takes 10 hours on a project vs the student who took 1 hour if the rubric gives a better grade to the student who took 10 hours?

Or assume teacher time isn't a factor - put two kids in a room with no devices to take an SAT test on paper. Both kids make perfect scores. You have no information on which student took longer. How are the two test takers different?


Not arguing with any of that, just stating plainly that there are practical reasons for time limits and one of the many reasons is that tests are done supervised and thus must have _some_ sort of time limit. Everything else is you projecting an argument onto me that I didn't make.

