Hacker News | RodgerTheGreat's comments

Indeed. Take a soft approach, or "wait and see", and you'll just allow your community to get infested with slop enthusiast crybullies that loudly protest any pushback against "genai content". The communities that draw a firm line and hold it will be the only ones that endure.

It was a surprise to us how vehemently some folk defended AI content and assumed it was their right to post it within our community.

We had no problems with people using it and posting elsewhere, it was the demands that we must allow it that were problematic and made us question whether we were doing the right thing.

No regrets now, though, as we see competitors being flooded with AI slop and they are too invested in it to change now.

Now I see it as the perfect tool for impostors.


>It was a surprise to us how vehemently some folk defended AI content and assumed it was their right to post it within our community.

People often confuse freedom of speech with freedom to access a specific platform for speech.

It's dead wrong. I don't know why people would want to be in a community where they aren't wanted.


> I don't know why people would want to be in a community where they aren't wanted.

This is standard predatory behavior. Child abusers hanging out with kids, weirdos hanging out near the women's clothing department, etc.

It's usually a clear indication of the sort of people you don't want to associate with in your online community. They bring a net negative to the table.


What is "it"? Putting the two halves together, the sort of people who want to be in a community where they aren't wanted are the sort of people you don't want in that community. I guess I can't argue with that.

They are talking about social norms. Inversely, "creepers".

Most adults understand why men should not, generally, be hanging out in the women's clothing department. When accidental violations of those norms are pointed out, they apologize and correct. Creepers, OTOH, gonna creep.

For their own well-being, online communities should police repeated violation of social norms. Otherwise the normals leave and creepers take over.


I spose. But labelling deviants (from the norm) and chasing them away is hazardous, because if you overdo it you end up with an echo chamber. How dare you talk to the people who others don't talk to, you traitor! Now you have to be ostracized too ... is how it might go.

(I can't help thinking of the Father Ted Christmas special, where a group of priests have to organize a quasi-military operation in order to escape from Ireland's largest lingerie department without a scandal.)


I have a similar problem in a community I'm a part of. How are you reliably detecting AI?

It's not about perfectly identifying AI content. There's a relevant XKCD: https://xkcd.com/810/

When posts fall within "acceptable" bounds, it does not actually matter where they come from. Logorrhea, massively offtopic posts, and/or shitposting are bad when humans do them. Those should suffer the same fate.

Historically it was tolerable, but it has become the highest priority today because machines have cranked up the volume. If we mis-identify human garbage as robot nonsense, it does not matter.


In Lil, the readcsv[] function takes an optional string specifying a type for each column to decode:

    purchases:readcsv[read["purchases.csv"] "sii"]
Summing a column:

    sum purchases.amount
To create a summary, we need to reduce each group to a single row:

    select first country sum amount by country from purchases
Discounting:

    select first country sum amount-discount by country from purchases
Lil doesn't have a "median" primitive. Decks can contain multiple modules, but we happen to know this one is alone. Your path will vary:

    stats:first import["stats.deck"]
    select first country sum amount-discount by country where amount<stats.median[amount]*10 from purchases
Calculating the median within each group is merely a matter of reordering clauses:

    select first country sum amount-discount where amount<stats.median[amount]*10 by country from purchases
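For comparison, here's a rough Python analogue of that last grouped query, assuming the same three columns; the sample rows are hypothetical stand-ins for purchases.csv:

```python
import statistics
from collections import defaultdict

# Hypothetical rows standing in for purchases.csv (columns assumed: country, amount, discount)
purchases = [
    {"country": "US", "amount": 100, "discount": 5},
    {"country": "US", "amount": 900, "discount": 50},
    {"country": "DE", "amount": 200, "discount": 0},
]

# Group rows by country.
by_country = defaultdict(list)
for row in purchases:
    by_country[row["country"]].append(row)

# Per-group median filter, mirroring the Lil query: keep rows whose amount is
# below 10x the group's median, then sum amount-discount within each group.
summary = {}
for country, rows in by_country.items():
    med = statistics.median(r["amount"] for r in rows)
    summary[country] = sum(
        r["amount"] - r["discount"] for r in rows if r["amount"] < med * 10
    )
```

The point of the comparison is how much of the bookkeeping (grouping, per-group aggregation) the Lil `select ... by ... from` clauses handle implicitly.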

The string to specify the column types is not a terrible idea. Does it have other configuration options, like whether or not to assume the first row is the headers, or specifying the separator character?

Lil's readcsv[] takes three arguments: a data string, an optional typecode-string (which can also skip columns with "_"), and an optional delimiter character. First row is always assumed to be headers; I find it easy enough to concatenate on a header row before calling the function if I'm ever dealing with a headerless CSV file.

The typecode-string approach in Lil is very similar to how Q handles it with dyadic 0:.

In this specific example I could do without the typecode-string since arithmetic operators like sum, -, and * will coerce string columns into numbers, but I think this way is cleaner.


I see. Kap tries to be as generic as possible, so assuming that the table has headers doesn't feel right. If the table doesn't have headers and the reader assumes it does, then you'll potentially silently lose the first row of data.

You have to make the decision somewhere in your code, unless you're willing to lean on a heuristic; all of the examples in R and Lil make assumptions about the names of columns in the file on-disk just as they make assumptions about the delimiter and the presence of headers.

If I knew the CSV file didn't have built-in headers, I'd write the Lil script like this:

    purchases:readcsv["country,amount,discount\n",read["headerless.csv"] "sii"]
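The same concatenate-a-header-row trick works in other languages too; a minimal Python sketch, with hypothetical data standing in for headerless.csv:

```python
import csv
import io

# Hypothetical headerless CSV data (column names assumed: country, amount, discount)
raw = "US,100,5\nDE,200,0\n"

# Prepend a header row before parsing, so DictReader can name the columns.
reader = csv.DictReader(io.StringIO("country,amount,discount\n" + raw))

# DictReader yields strings; coerce the numeric columns by hand.
purchases = [
    {**row, "amount": int(row["amount"]), "discount": int(row["discount"])}
    for row in reader
]
```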

Thanks, that makes sense. I guess most CSV data you see in the real world does have headers. Perhaps I was thinking too much about the default CSV export format from Excel, focusing on making sure it can always be parsed. And Excel doesn't have column headers.

The term "GenAI veganism" is deeply disingenuous, and simonw knew exactly what he was doing when he coined it.

In the broader context of most human societies treating meat consumption as a default, with thousands of years of precedent, it deliberately frames abstaining from the use of "GenAI" as an extreme perspective, suggesting that moderate or extensive usage of LLMs and their ilk is more intrinsically "normal". The "GenAI" tools in question have only existed for a few years (or perhaps months in more specific cases), and the unending marketing blitz around them notwithstanding, using them does not remotely represent an ingrained cultural default.

The choice of terminology also casually devalues and denigrates the reasons many people have for being actual vegans. It's meant to sneeringly evoke negative stereotypes of vegans as annoying and irrational.

Attempting to carve out a "softened" version of this language with the "vegetarian" label is not descriptively useful.


In Lil[0], this is how ordinary assignment syntax works. Implicitly defining a dictionary stored in a variable named "cat" with a field "age":

    cat.age:3
    # {"age":3}
Defining "l" as in the example in the article. We need the "list" operator to enlist nested values so that the "," operator doesn't concatenate them into a flat list:

    l:1,(list 2,list cat),4
    # (1,(2,{"age":3}),4)
Updating the "age" field in the nested dictionary. Lil's basic datatypes are immutable, so "l" is rebound to a new list containing a new dictionary, leaving any previous references undisturbed:

    l[1][1].age:9
    # (1,(2,{"age":9}),4)
    cat
    # {"age":3}
There's no special "infix" promotion syntax, so that last example would be:

    l:l,5
    # (1,(2,{"age":9}),4,5)
[0] http://beyondloom.com/tools/trylil.html


This is surprising to me:

  l[1][1].age:9
  # (1,(2,{"age":9}),4)
How come it doesn't return just:

  {"age":9}
Or is there something totally different going on with references here? As in, how is this different to:

  l_inner = l[1][1]
  l_inner.age:9


Amending a slice would amend only the slice:

    l_inner:l[1][1]
    # {"age":3}
    l_inner.age:9
    # {"age":9}
    l_inner
    # {"age":9}
    l
    # (1,(2,{"age":3}),4)
If an amending expression isn't "rooted" in a variable binding, it also returns the entire new structure:

    (1,(list 2,list ().age:5),4)[1][1].age:99
    # (1,(2,{"age":99}),4)
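For contrast, a small Python sketch of the aliasing behavior that Lil's immutable amend avoids. Python's nested containers are mutable references, so amending through a slice also amends the original; all names here are hypothetical:

```python
import copy

# Python aliases: mutating through the nested reference changes the original dict.
cat = {"age": 3}
l = [1, [2, cat], 4]
l[1][1]["age"] = 9
# cat is now {"age": 9} -- the original binding was disturbed.

# To mimic Lil's copy-on-write semantics, deep-copy the structure before
# updating, leaving the original binding undisturbed.
cat2 = {"age": 3}
l2 = [1, [2, cat2], 4]
l2_new = copy.deepcopy(l2)
l2_new[1][1]["age"] = 9
# cat2 is still {"age": 3}.
```

In Lil the copy is implicit in every amend; in Python you have to ask for it.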


It's classic ladder-kicking behavior, reveling in the mild conveniences of "genai" while comfortably impervious to the externalities. Shameful that the moderators of so many online communities turn a blind eye to, or even offer explicit support for, their endless shilling for hideously unethical web-destroying for-profit companies simply because they express their native advertising in a superficially polite register.


I think everyone who believes that they can personally resist the detrimental psychological effects of exposure to LLMs by "remaining aware" or "being careful", because they have cultivated an understanding of how language models work, is falling into precisely the same fallacy as people who think they can't be conned or that marketing doesn't work on them.

Don't kid yourself. If you use this junk, it's making you dumber and damaging your critical thinking skills, full-stop. This is delegation of core competency. You may feel smarter, or that you're learning faster, or that you're more productive, but to people who aren't addicted to LLMs it sounds exactly like gamblers insisting they have a foolproof system for slots, or alcoholics insisting that a few beers make them a better driver. Nobody outside the bubble is impressed with the results.


I fully agree that it’s close to impossible to not eventually fall into the trap of overrelying on them. However, it’s also true that I was able to do things with them that I would never have done otherwise for a lack of time or skill (all sorts of small personal apps, tools, and scripts for my hobbies). Maybe it’s a bit similar to only reading the comment section in a newspaper instead of the news? They will introduce you to new perspectives but if you stop reading the underlying news you’ll harm your own critical thinking? So it’s maybe a bit more grey than black & white?


> If you use this junk, it's making you dumber and damaging your critical thinking skills, full-stop.

Arguably I've been using my critical thinking skills more now that I have a smooth-talking, but ultimately not actually intelligent, companion.

Every time I put undue trust in it, I regret it, so I got used to verifying what it outputs via documentation and sometimes even library code.

That being said, the worst part of this mess is that my usual sources of knowledge, like search engines and developer forums, have dried up, as everyone else is also using LLMs.


I think this is too broad. If, for example, I get Claude to set up a fine tuning pipeline for rf-detr and it one shots it for me, what have I lost? A learning opportunity to understand the details of how to go about this process, sure. But you could argue the same about relying on PyTorch. Ultimately we all have an overarching goal when engaged in these projects and the learning opportunity might be happening at an entirely different level than worrying about the nuts and bolts of how you build component A of your larger project.


> Don't kid yourself. If you use this junk, it's making you dumber and damaging your critical thinking skills, full-stop. This is delegation of core competency.

This is a good way to frame the problem. Consider the offshoring (delegation) of American manufacturing to China, followed by the realization decades later that the US has forgotten how to actually make things and the subsequent frenzied attempt to remember.

I expect the timelines and second-order (third-order...) effects to play out on a similar decadal scale - long after everybody has realized their profits and the western brain has atrophied into slop.


My mind is already going, old age. You only really try anything when you are already losing it. Especially if you had it once.


LLM enthusiasts will always point to whatever scraps of personal value they've extracted from their use of genai as a rationale for their indispensability. Arguing from personal utility rings hollow for anyone who takes the externalities of these "tools" seriously: their erasure of the authorship of their training corpus, their erosion of social contracts, their putrefaction of the commons with endless waves of slop, etc. I'm glad FreeBSD has managed to hold the line against this sort of shortsighted ends-justify-means thinking, and I hope they don't soften their stance against slop in the future.


Agreed, but for some reason the majority of folks don't care about these externalities at all.

I see the externalities and the harm they cause and may yet cause, and at the same time I find it increasingly difficult to avoid using LLMs, as there is personal value to be extracted. Further, so many others are using LLMs to pump their productivity numbers (reality may differ, and time will tell) that it's hard to keep up without using LLMs.


Aren't there several papers that indicate that the productivity boost doesn't exist when you start measuring productivity instead of going by feels?


"lemons"


Moss looks much more general and powerful, but Decker has a similar mechanism for custom brush behavior; here's an interactive tutorial with a variety of examples, for comparison: http://beyondloom.com/decker/brushes.html


I agree. I wrote an essay which contrasts the "visualbasic-like" vision that most visual app-builder tools take with the pliable, user-modifiable stack-of-cards approach in HyperCard: https://beyondloom.com/blog/sketchpad.html

