
Aren't Mali GPUs designed in Europe?

The last time this came up, people said that it was important to filter out unrelated address records in the answer section (records whose names the CNAME chain starting at the question name does not reach). Without the ordering constraint (or a rather low limit on the number of CNAMEs in a response), this needs a robust data structure for looking up DNS names. Most in-process stub resolvers (including the glibc one) do not implement a DNS cache, so they presently have no need for such a data structure. This is why eliminating the ordering constraint while preserving record filtering is not a simple code change.

Doesn't it need to go through the CNAME chain no matter what? If it's doing that, isn't filtering at most tracking all the records that matched? That requires a trivial data structure.

Parsing the answer section in a single pass requires more finesse, but does it need fancier data structures than a string-to-string map? And failing that, you can make another pass for each CNAME. I wouldn't call a depth limit like 20 "a rather low limit on the number of CNAMEs in a response", and at most 20 passes through an answer section capped at 64 KB is plenty fast.
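
A minimal sketch of the map-based filtering, assuming the answer section has already been parsed into (owner, type, rdata) tuples; the names and the depth limit here are illustrative, not glibc's:

    # Sketch of CNAME-chain filtering over an already-parsed answer
    # section; records are (owner, type, rdata) tuples with owner
    # names normalized. Depth limit is illustrative.
    MAX_CNAME_DEPTH = 20

    def filter_addresses(question_name, records):
        cnames = {}     # owner -> target: a plain string-to-string map
        addresses = {}  # owner -> list of address rdata
        for owner, rtype, rdata in records:
            if rtype == "CNAME":
                cnames[owner] = rdata
            elif rtype in ("A", "AAAA"):
                addresses.setdefault(owner, []).append(rdata)
        # Chase the chain from the question name; each step is one
        # dictionary lookup, so record order in the message does not matter.
        name = question_name
        for _ in range(MAX_CNAME_DEPTH + 1):
            if name in addresses:
                return addresses[name]
            if name not in cnames:
                break
            name = cnames[name]
        return []  # unrelated records are never reachable from the chain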


I don't know if the 20 limit is large enough in practice. People do weird things (after migrating from non-DNS naming services, for example). Then there is label compression, so you can theoretically have several thousand RRs in a single 64 KiB response. These numbers are large enough that a simple multi-pass approach is probably not a good idea.
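
Back-of-the-envelope arithmetic for that upper bound (my numbers, not from the thread): with name compression, an A record can shrink to a 2-byte compression pointer plus the fixed RR fields plus 4 bytes of rdata:

    # Smallest possible A record with a compressed owner name:
    # 2-byte name pointer + type (2) + class (2) + TTL (4)
    # + rdlength (2) + 4-byte IPv4 rdata.
    rr_size = 2 + 2 + 2 + 4 + 2 + 4   # 16 bytes
    print(65535 // rr_size)           # ~4095 RRs in one 64 KiB message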

And practically speaking, none of this CNAME-chain chasing adds any functionality because recursive servers are expected to produce ready-to-use answers.


For most client interfaces, it's possible to just grab the addresses and ignore the CNAMEs altogether, because the names do not matter (or only the name on the address record does).

Of course, if the server sends unrelated address records in the answer section, that will result in incorrect data. (A simple counter can detect the end of the answer section, so it's not necessary to chase CNAMEs for section separation.)
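
The counter in question is ANCOUNT from the fixed 12-byte DNS header; a quick sketch of reading it (the wire layout is standard, the function name is made up):

    import struct

    def answer_count(message):
        # Fixed DNS header: ID, flags, QDCOUNT, ANCOUNT, NSCOUNT,
        # ARCOUNT, each a 16-bit big-endian field. Iterating exactly
        # ANCOUNT records finds the end of the answer section without
        # chasing any CNAMEs.
        _id, _flags, _qd, ancount, _ns, _ar = struct.unpack("!6H", message[:12])
        return ancount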


There already is an I-D on this topic (based on previous work): https://datatracker.ietf.org/doc/draft-jabley-dnsop-ordered-...

Does it make sense to implement constant folding using peepholes for an ISA like this (with plenty of registers and limited scope for immediate operands)?

I would expect the constant loads to float away from their use sites, so that more instructions can use them. For example, 0 or 1 might be loaded at the start of the function.
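
A toy illustration of why hoisting defeats a peephole (made-up three-address tuples, not any real compiler's IR): the pattern match only fires when the constant load sits directly next to its use:

    # Toy peephole: fold "li rX, k; add rY, rZ, rX" into "addi rY, rZ, k".
    # Instructions are made-up tuples, not a real IR.
    def peephole_fold(insns):
        out, i = [], 0
        while i < len(insns):
            a = insns[i]
            b = insns[i + 1] if i + 1 < len(insns) else None
            if b and a[0] == "li" and b[0] == "add" and b[3] == a[1]:
                out.append(("addi", b[1], b[2], a[2]))
                i += 2
            else:
                out.append(a)
                i += 1
        return out

    adjacent = [("li", "r1", 1), ("add", "r2", "r3", "r1")]
    hoisted  = [("li", "r1", 1),               # constant floated to the top
                ("mul", "r4", "r5", "r6"),
                ("add", "r2", "r3", "r1")]
    print(peephole_fold(adjacent))  # folds into one addi
    print(peephole_fold(hoisted))   # window never sees the pair: no fold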


A lot of people enter the company or product name into the browser's search field and reach their intended target through an ad at the top of the results. If they proceed to purchase something, does this count as a conversion? I think it does. Yet unlike traditional advertising, the ad didn't influence the customer's decision to buy at all.


> A lot of people enter the company or product name into the browser's search field

They had to already know the company name or product name to get there.

This doesn’t just happen. Spreading the name of the product and getting it to stick in people’s minds takes a lot of advertising budget, on the whole.


I'm concerned that companies spend their advertising budget on these redirects because those show the best metrics, instead of actually making the brand and its products better known.


In 1935, Albert Einstein relocated to Princeton permanently, so it's certainly an odd choice of year in this context.

Random graduate students won't work on classified projects. The vast majority of non-classified studies will not have any impact on national security for years to come. It's unclear what the actual risks are, beyond the general distrust of foreigners.




Being Jewish (even if lapsed) was more of a disadvantage. U.S. immigration policy at the time was heavily influenced by eugenic ideas, and designed to prevent further Jewish migration, particularly from Eastern Europe. Princeton University (which initially housed the Institute for Advanced Study) had its own anti-Jewish quotas.


Albert Einstein was also German.

> It is quite possible to be both. I look upon myself as a man. Nationalism is an infantile disease. It is the measles of mankind.

> Another kind of application of the principle of relativity, for the amusement of the reader: today I am described in Germany as a "German scholar" and in England as a "Swiss Jew"; but should I ever find myself presented as a "bête noire", then conversely I would become a "Swiss Jew" for the Germans and a "German scholar" for the English.


It's also oddly self-defeating. If Greenland is made the 51st state (as proposed here: https://fine.house.gov/news/documentsingle.aspx?DocumentID=1...), it's reasonable to assume that the balance of power in the Senate would shift slightly but significantly, given how thin the majorities usually are. Politically, the two new senators would almost certainly sit well to the left of the Republican party.

But on the other hand, Puerto Rico and various U.S. territories are still waiting for their senators to be seated (and voting rights in presidential elections, and in some cases, full citizenship rights).


I saw weird results with Gemini 2.5 Pro when I asked it to provide concrete source code examples matching certain criteria and to quote the source code it found verbatim. In its response it claimed to have quoted the sources verbatim, but that wasn't true at all: the quotes had been rewritten, still in the style of the project they supposedly came from, but otherwise quite different, and without a match in the Git history.

It looked a bit like someone at Google subscribed to a legal theory under which you can avoid copyright infringement if you take a derivative work and apply a mechanical obfuscation to it.


LLMs are not archives of information.

People seem to have this belief, or perhaps just a general intuition, that LLMs are a Google search over a training set with a fancy language engine on the front end. That's not what they are. The models (almost) sidestep copyright by construction, because they never copy anything in the first place; that is why a model is a dense web of weight connections rather than an orderly bookshelf of copied training data.

Picture yourself contorting your hands under a spotlight to cast a shadow in the shape of a bird. The bird is not in your fingers, even though the shadow of your hand and the shadow of a bird look very similar. Furthermore, your hand-shadow has no idea what a bird is.


For a task like this, I expect the tool to use web searches and sift through the results, similar to what a human would do. Based on progress indicators shown during the process, this is what happens. It's not an offline synthesis purely from training data, something you would get from running a model locally. (At least if we can believe the progress indicators, but who knows.)


While true in general, they do know many things verbatim. For instance, GPT-4 can reproduce the Navy SEAL copypasta word for word with all the misspellings.

I'd imagine more than a few basement dwellers could as well.

For integer workloads it seems closer to 60% of RPi 5 performance. Some benchmarks depend on vector support or dedicated CPU instructions for good scores, and those skew the overall result.
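
A toy illustration of that skew with made-up ratios, assuming the suite aggregates per-benchmark ratios with a geometric mean:

    from math import prod

    # Made-up per-benchmark ratios vs. an RPi 5: integer tests near
    # 0.6, one vector-accelerated outlier far above.
    ratios = [0.6, 0.6, 0.6, 0.6, 2.4]
    geomean = prod(ratios) ** (1 / len(ratios))
    print(f"{geomean:.2f}")  # ~0.79, well above the 0.6 seen on integer tests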

