Hacker News | checker659's comments

The article doesn't say much about the role of religion in this matter. Surely what one could study was limited by what was allowed by the church.



> cost of compute

DRAM scaling + interconnect bandwidth stagnation


NVLink isn’t magic


As a late-diagnosed ADHD person, I'd rather there be overdiagnosis than underdiagnosis. We don't understand the brain. There are a lot of us who suffer in silence.


I think FPGAs (or CGRAs really) will make a comeback once LLMs can directly generate FPGA bitstreams.


No need. I gave ChatGPT this prompt: "Write a data mover in Xilinx HLS with Vitis flow that takes in a stream of bytes, swaps pairs of bytes, then streams the bytes out"

And it did a good job. The code it made probably works fine and will run on most Xilinx FPGAs.
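Roughly, a kernel like this is what I'd expect it to produce (my own minimal sketch, not ChatGPT's actual output; the function name, the n_pairs length argument, and the pragma choices are all illustrative):

    #include <hls_stream.h>
    #include <ap_int.h>

    // Swap each pair of bytes in an AXI4-Stream: AB CD -> BA DC.
    void byte_pair_swapper(hls::stream<ap_uint<8>> &in,
                           hls::stream<ap_uint<8>> &out,
                           int n_pairs) {
    #pragma HLS INTERFACE axis port=in
    #pragma HLS INTERFACE axis port=out
    #pragma HLS INTERFACE s_axilite port=n_pairs
    #pragma HLS INTERFACE s_axilite port=return
        for (int i = 0; i < n_pairs; ++i) {
    #pragma HLS PIPELINE II=2  // two blocking reads per iteration
            ap_uint<8> a = in.read();
            ap_uint<8> b = in.read();
            out.write(b);  // emit the second byte first
            out.write(a);
        }
    }

The II=2 pipeline target reflects that each iteration needs two reads from a single input stream, so one swapped pair can leave every two cycles.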


> The code it made probably works fine

Solve your silicon verification workflow with this one weird trick: "looks good to me"!


It's how I saved cost and schedule on this project.


I don't even work in hardware, yet even I have heard of the Pentium FDIV bug, which happened despite people looking a lot more closely than "probably works fine".


What does "directly generate FPGA bitstreams" mean?

Placement and routing is an NP-complete problem.


And I certainly can't imagine how a language model would be of any use here, in a problem which doesn't involve language.


They are "okay" at generating RTL, but are likely never going to be able to generate actual bitstreams without some classical implementation flow in there.


I think in theory, given terabytes of bitstreams, you might be able to get an LLM to output valid designs. Excepting hardened IP blocks, a bitstream is literally a sequence of SRAM configuration bits that set the routing tables and LUTs. Given the right type of positional encoding, I think you could maybe get simple designs working at a small scale.
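A toy sketch of what I mean by positional encoding (every name and dimension here is invented for illustration): address each configuration bit by its (frame, bit offset) coordinates instead of a flat sequence index, mirroring the frame-based layout of the bitstream.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Toy 2D sinusoidal positional encoding: half the dimensions
    // encode the configuration frame index, half the bit offset.
    std::vector<float> encode_position(int frame, int offset, int dim) {
        std::vector<float> pe(dim);
        for (int i = 0; i < dim / 2; ++i) {
            float freq = std::pow(10000.0f, -2.0f * i / dim);
            pe[i]           = std::sin(frame * freq);
            pe[i + dim / 2] = std::sin(offset * freq);
        }
        return pe;
    }

    int main() {
        std::vector<float> pe = encode_position(42, 17, 8);
        for (float v : pe) std::printf("%.3f ", v);
        std::printf("\n");
    }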


I'd expect a diffusion model to outperform autoregressive LLMs dramatically.


Certainly possible! Or perhaps a block-diffusion + autoregressive model, or something like GPT-4o's image gen.


AI could use EDA tools


AMD's FPGAs already come with AI engines.


What are you hiring for exactly?


TL;DR: CI/CD for services on CSPs


Surely it should be possible to spoof presence as well. Non-repudiation is not possible with this alone.


What's stopping someone from using LLMs to create an alt account? Imagine a bot that takes content from your actual account and posts the mirror opposite on the alt one.


> What’s stopping someone from using LLMs to create an alt account?

For the applicant? Visa fraud rules. For people fucking with third parties? Absolutely nothing.


How would one prove fraud? I'm trying to understand the logic behind all of this.


Most large social networks now require biometric authentication of identity to prevent alts.

Even Uber requires facial biometrics for an account now if you try to sign up using a prepaid card and a VPN.


Don't post pictures of yourself on the internet (and don't let your relatives do that), and you can say it wasn't you.


Thank you. Any other recommendations?

