Hacker News | Mathnerd314's comments

I get that this is essentially vibe coding a language, but it still seems lazy to me. He just asked the language model, zero-shot, to design a language with no further guidance. You could at least use the Rosetta Code examples and ask it to identify design patterns for a new language.

I was thinking the same. Maybe if he had tried to think about it instead of just asking the model. The premise is interesting: "We optimize languages for humans, maybe we can do something similar for LLMs." But then he just asks the model to do the thing instead of thinking about the problem; maybe instead of prompting "Hey made this", a more granular, guided approach could have been better.

For me this is just a loss of potential on the topic, and an interesting read that turned boring pretty fast.


I don't disagree at all. :)

This was mainly an exercise in exploration with some LLMs, and I think I achieved my goal of exploring.

Like I said, if this topic is interesting to you and you'd like to explore another way to push on the problem, I highly recommend it. You may come up with better results than I did by having a better idea what you're looking for as output.


I tried a thread, and I got that both LLMs and humans optimize for the same goal, working programs, and that the key is verifiability. So it recommended Rust or Haskell combined with formal verification and contracts. So I think the conclusion of the post holds up: "the things that make an LLM-optimized language useful also happen to make them easier for humans!"

There's also the issue, also noted by the author, that LLM-optimization quite often becomes token-minimization, when it shouldn't be just that.

I update Linux maybe once a year. Sure, there are security vulnerabilities. But I'm behind a firewall. And meanwhile, I don't have to spend any time dealing with update issues.


But Windows is made for the big masses. It's definitely a good thing that Microsoft forces auto-updates, because otherwise 95% of people would run around with devices that have gaping security holes. And 90% of those people are not behind a firewall 100% of the time.

A side effect, unfortunately, is that they are shoving ad- and bloatware down your throat through these updates.

But that is because Microsoft does not care about the end user at all. It's not the fault of auto-updates.


Logic minimization is kind of boring? I had to solve a problem once and the answer was still to use the espresso software from the 1980s. It is a pretty specialized problem and honestly I don't see how you would improve on it, besides integrating the digital circuit design research. But in terms of software, there is not really any reason to use a Boolean logic formula instead of just passing around the truth table directly.
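
To illustrate the "just pass around the truth table" point, here is a minimal sketch (my own toy illustration in Python, nothing to do with espresso itself): an n-input Boolean function stored as a 2**n-bit integer and evaluated by indexing, with no formula or AST involved.

    # Toy illustration: an n-input Boolean function as a packed truth table.

    def from_formula(n, f):
        """Build the truth-table bitmask of an n-input function f(bits) -> bool."""
        table = 0
        for row in range(2 ** n):
            bits = tuple((row >> i) & 1 for i in range(n))
            if f(bits):
                table |= 1 << row
        return table

    def evaluate(table, bits):
        """Evaluate the packed truth table on a tuple of 0/1 inputs."""
        row = sum(b << i for i, b in enumerate(bits))
        return (table >> row) & 1

    # XOR of two inputs: only rows (1,0) and (0,1) are true, so the mask is 0b0110.
    xor_table = from_formula(2, lambda b: b[0] ^ b[1])
    assert xor_table == 0b0110
    assert evaluate(xor_table, (1, 0)) == 1
    assert evaluate(xor_table, (1, 1)) == 0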


Well, that's exactly why I thought this was exciting. I thought there had been some advances on that front.


I'm wondering if a firewall is a solution here. Don't mess around with the stupid device settings. Just block the Xbox store, and then presumably Minecraft uses different server IPs, so you can let those through.


Research from the University of Amsterdam’s IViR “Global Online Piracy Study” (survey of nearly 35,000 respondents across 13 countries) found that for each content type and country, 95% or more of pirates also consume content legally, and their median legal consumption is typically twice that of non‑pirating legal users.


Fun fact: this study was financed by YouTube to create a legal shield.

In 2017/2018, they were in a position where the MPAA and RIAA were saying "Piracy costs us billions; Google must pay", and they had the European Parliament on their ass.

Google financed that 'independent' study to support the view "Piracy is not harmful and encourages legal spend".

So the credibility of "independent" studies is something to consider very carefully.


My real world observations agree with the direction of the study, so I don’t entirely dismiss it as fake based on its funding source.

I am cautious about the conclusion, though. It seems clear there is a spectrum from “unscrupulously pirate everything” to “consume legitimately after pirated discovery”, and quantification is necessary.


Doesn't make it false.


Why do you think this contradicts anything? Heavy users hit a budget limit and continue consuming more via pirating.

You really need something way better than some shoddy survey to counter the obvious fact that price matters.


It contradicts the post it was replying to, which was saying, effectively, that people don't want to spend any money on stuff.

I don't think it's required to be making some universal point when you clearly respond to the argument put forward in the post you reply to, do you?


No, you misunderstood the comment. It said that paying nothing is compelling, not that paying something was inconceivable; it was a response to a comment with the common misconception that pirating is only some "service problem".


I agree with your earlier comment (GGP) and feel like you're contradicting yourself here. "Too expensive" is either a service problem or at least directly adjacent to it. It's distinct from "well if I can get away with piracy then I'll do it". To say that free is a compelling price is to imply the latter as opposed to the former (at least imo).


Yeah, but if a pirate would not have paid the full price anyway, why care? It is by definition not a lost sale; the most likely outcome is just that the player count increases by one.


Because the price isn't binary? Also, the total spend isn't fixed either; it depends on how easy it is to pirate. So it's by definition still lost revenue, even if the purchase would have come later or at a reduced price.


Consider the two cases

A: I pirated a game 25 years ago and played it after school

B: I didn't

which case do you think will make me more likely to buy more versions of that game later?


Consider reality instead; you can make any fantasy case you want:

C. You didn't pirate, but played because your friends were deeply into it, so you skipped buying lunch to save money and pay for the game (pirating was hard for this specific DRM). You bought it at a discount on sale (remember, the price isn't fixed?). That feeling of overcoming hardship and friendship fused into a very positive experience, making it 10 times more likely for you to buy the next version than in A or B. The overall likelihood was still tiny because now you have a family and don't have time to play. So there's that, and

D. Considering the amount of uncertainty (your game company will go out of business in 25 years), the value of your "more likely" is $0.


Not paying full price is not a "lost sale". People unwilling to pay full price wait for a discount or price reduction. Look at how popular the seasonal Steam sales are. Pirating the game very likely means they never purchase it at any price, which _is_ a lost sale.


I never paid for games as a kid (starting at age 8 with my first PC). We didn't have the money until much later. Other friends and uncles had games, and we copied it all. Eight years later (at 16) I bought two game compilations for my birthday and Christmas. Around 40 games, none more than 2 or 3 years old. I had fun for years.

And then much later, as a university student, I had money of my own and bought games I liked. I never pirated to save money. And you know, GOG came along, and I was thrilled to have the old games from my childhood again as legal digital copies, with manuals and addons. I bought 20+ old DOS games I already knew. Better late than never.


It's only a lost sale if that person would otherwise have purchased it. At least in my personal experience that was _never_ the case.


There is more to this re: the perceived value of the respective sides.

Edit: missed a word



For me, the entire inbox is this DBTC folder. I have notifications set up on my smartwatch and I triage each email in real time as it comes in. If it's urgent, I act on it. If it is important or I want to follow up, then I add it to my (separate) to-do list with a Google Tasks voice command. Otherwise I just ignore the notification and the email sits there in the inbox until I feel like dealing with it. I use the unread status and pick things off in occasional focus sessions. Some things never get "read", and that's because they don't matter. Zero bandit stuff, because I know exactly what's in my inbox at any given time, at least up to what my analog brain can hold. It fits right into the old "I heard a noise. What is it?" routine humans used when we were hunter-gatherers.


Came here to say this. When I'm really pressed for time, I use the custom stars in Gmail to indicate the type of followup needed - reply, separate task, etc.


Well, what the actual ruling said was that use of the books was okay, but only if they were legally obtained. And so the authors could proceed with a lawsuit for illegally downloading the books. But then presumably compensation for torrenting the books was included as part of the out-of-court settlement. So the lesson is something like: AI is fine, but torrenting books is still not acceptable, m'kay, wink wink.


So, the takeaway I get from this paper is that if you have a language model and you set it up so it takes an input and generates an output directed towards some goal (e.g., "make this sentence sound smarter"), then it should converge, because it is following a potential function.

But I have used prompts like this a fair amount, and it is more like stochastic gradient descent: most of the time, once it is close to the target, the model will make a small incremental change, but when it is really close the model will sort of say "this is not improvable as it is" and take a large leap to a completely different configuration. And then it will do the incremental optimizations again, and so on. This could be an artifact of the sampling algorithm, but I think it is also an issue that the model has this potential function encoded, yet the prompt and the structure of the model do not actually minimize this potential.

So, a real lesson here is that there is actually a lot of work still left to do in terms of smarter sampling. Beam search as used today is sort of the tip of the iceberg. If we could start doing optimization with the transformer model as a component, like optimizing pipelines of reasoning rather than always generating inputs and outputs sequentially, that is where you could start using this potential function directly, and then you would see orders of magnitude smarter AI. There is stuff about prompt optimization, but it is still based on treating models as black boxes rather than the piles of math they are.
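
To make the "incremental steps plus occasional large leaps" picture concrete, here is a toy sketch of the loop I have in mind. llm_rewrite and score are hypothetical stand-ins for a model call and the potential function; the point is only the structure of accepting small steps and restarting on a plateau, not any real API.

    import random

    def optimize_text(text, llm_rewrite, score, steps=50, patience=5):
        """Toy hill climbing over LLM rewrites of a piece of text."""
        best, best_score = text, score(text)
        stalled = 0
        for _ in range(steps):
            if stalled < patience:
                # Small incremental step: low-temperature local rewrite.
                candidate = llm_rewrite(best, temperature=0.3)
            else:
                # Plateau: take a large leap to a different configuration.
                candidate = llm_rewrite(best, temperature=1.5)
                stalled = 0
            cand_score = score(candidate)
            if cand_score > best_score:
                best, best_score = candidate, cand_score
                stalled = 0
            else:
                stalled += 1
        return best

    # Dummy stand-ins so the sketch runs: score favors longer text, and
    # "rewriting" appends a word with probability scaled by temperature.
    def dummy_rewrite(text, temperature):
        return text + " word" if random.random() < temperature else text

    print(optimize_text("start", dummy_rewrite, len))

Even in this toy form, the search treats the model as a black box, which is exactly the limitation I mean.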


That's an interesting observation. I'd suggest modelling the LLM's behaviour in that situation as selecting between different simple strategies, each of which has its own transition function. Some of the strategies will be far more common than others. Some of them may be very simple and obey the detailed balance condition (meaning they are reversible Markov chains), but others, and the overall transition function, do not.

The definition of the detailed balance condition is very strict and it's obvious that it won't be met in general by most probabilistic programs (sets of rules with probabilistic output) even if you consider only those where all possible outputs have non-zero probability (as required by detailed balance).
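
To be concrete, detailed balance for a chain with transition probabilities P and stationary distribution \pi requires the probability flow between every pair of states to balance exactly:

    \pi(x)\,P(x \to y) = \pi(y)\,P(y \to x) \quad \text{for all states } x, y

which is a much stronger requirement than merely having \pi as a stationary distribution.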

And the LLM+agent is only a Markov chain because of the limited state space of the agent. While an LLM is adding to its context window without reaching the window size limit, it is not a Markov chain, as I explained here: https://news.ycombinator.com/item?id=45124761

And, agreed that better optimisation would be incredible. (I would describe it as a search problem.) I'm not sure how feasible it is to improve without changing the architecture, e.g. to a diffusion language model. But LLMs already predict many tokens ahead at once, which is why beam search is surprisingly unnecessary. That's how they're able to write coherent sentences (and rhymes): they've already largely determined at the beginning what they're going to write. (See Anthropic's mech interp work.) So maybe if we could tap into that, we could search over vaguely-formed next blocks of text rather than next words.


It seems an absurd number of people misuse the term dopamine; I found this video https://youtu.be/x6_Ukic1tRM?t=1297 (in Polish, but there are subtitles and dubs). If you want to continue to spread "manipulative disinformation", by all means, some people have to be evil, but just be clear up front that it is pseudoscience.


Just like practicing "oral hygiene" doesn't mean treating your mouth like an enemy, nor does "dopamine hygiene" mean treating dopamine like an enemy.

It just means distinguishing empty dopamine, which rewards behaviors that don't benefit you, from dopamine working in its normal evolutionary context (encouraging behaviors that do), and being intentional about how often you engage in the former.

"Digital hygiene" sounds like the start of a mental framework with good intentions, and which might help somebody with their World of Warcraft problem. But that problem isn't really unique to digital things, they're just a commonly found example of it. If you have a habit of seeking out empty/fast dopamine loops, where the rewards come frequently and are otherwise useless except as a reason to continue the useless behavior, then you're likely to come off your World of Warcraft addiction and immediately find a (potentially non-digital) addiction to put in its place.

My point is that yes we need a new kind of hygiene to deal with modern kinds of manipulation, but no we shouldn't restrict its scope to computers. I watched the video, but it's pushing back against something altogether weirder than my point here. I don't see how this counts as "manipulative disinformation," or is in contradiction with established science about the function of dopamine.


It's a pet peeve of mine as well and I'm happy to see there being a pushback against it.

