You did not share a link to a blog post. The title was "Effective Haskell is a hands-on practical book way to learn Haskell. No math or formal CS needed" and it linked to the site advertising your book for sale. I removed it because we don't get good discussions out of ads.
I shared the story as I remember it. Memory is imperfect. It's been years since I deleted my account, and I don't have the luxury of access to server or moderation logs.
What I do remember unambiguously is being an active member of the site, contributing regularly and in good faith, being accused of spamming, and the general feeling of hostility that I got from the site.
You got a DM and email with the title and URL when your story was removed. This would've been 2023-08-03 with the subject "Your story has been edited by a moderator", if you want to look back: https://github.com/lobsters/lobsters/blob/86e1d0b6ac6bac5210...
But you're correct on the second part, there isn't a level of activity that entitles anyone to post a sales page with nothing to discuss on it. Your activity was taken into account, though. Typically if a new user's first activity is to post an ad I'll also ban the site or user. I understand the rules aren't as permissive as you wanted, but ads don't start good discussions.
IMO, the lobste.rs admin's assertion that the post had "nothing to discuss" is a misjudgment that undercuts the rest of their rationalization. My guess is that they're looking for a win on a technicality instead of addressing the myriad concerns raised elsewhere in this thread.
I don't know why you think I want a "technical win" from you, but I'm not seeking your approval. I corrected your mistake about the URL and the policy, like I corrected the author's mistake about what I removed. If you and other sites prefer different policies, it's no skin off my nose.
> The government’s position is that I should have known I couldn’t trade stocks I’d publicly praised—for some unspecified period of time. I didn’t lie, I simply traded too soon.
The part Left seems to be responding to in his article is:
> defendant LEFT often built his positions using inexpensive, short-dated options contracts that would expire within zero to five trading days and submitted limit orders to close his positions as soon as the Targeted Security reached a certain price.
A pump and dump has to involve statements that are false, misleading, or deceptive. That is not the case here.
Edit: I have now read some of the complaint. This is just securities fraud, which is consistent with what the article is claiming: "the opposite of insider trading", i.e. trading in the direction of the advice you're giving.
The government says the statements were material, false, and made with intent. If they can't prove falsity rising to the level of a material statement, they will lose.
Edit2: This comes right up to the line of whether it's material to make false statements (or no statement at all) about your intentions. There's another case, which will come up if you research this, about whether statements of intention are material.
There's a much higher-ROI way to encode these: write a test that checks the current date. Maybe a very large project would prefer not to fail everyone's build for it, but this is fine for a couple dozen developers.
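A minimal sketch of what that can look like, written as a pytest-style test (the `EXPIRING_TODOS` registry and its entries are hypothetical):

```python
import datetime

# Hypothetical registry of date-stamped workarounds. Each entry pairs a
# deadline with a reminder; the test fails once a deadline passes, so a
# stale hack can't silently outlive its excuse.
EXPIRING_TODOS = [
    (datetime.date(2026, 6, 1), "remove the v1 API shim"),
    (datetime.date(2026, 9, 30), "drop the legacy auth fallback"),
]

def test_no_expired_todos():
    today = datetime.date.today()
    expired = [(d, msg) for d, msg in EXPIRING_TODOS if d <= today]
    assert not expired, f"expired TODOs, fix or re-date them: {expired}"
```

Run under pytest it stays green until a deadline passes, then it names exactly which entries need attention.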
It must be part of a larger marketing push; their boss(?) appeared on the Odd Lots podcast a couple days ago to talk about this work: https://podcasts.apple.com/us/podcast/scott-kupors-new-plan-... He spent a lot of time promoting this new National Design Studio's attempt to attract tech workers for 2-year commitments to drop into existing orgs, which is basically how the 18F PIF program worked before it was dissolved earlier this year. Perhaps abruptly terminating a program to reinvent it from scratch six months later is very efficient.
(A warning about Odd Lots: the hosts never question or push back on people talking their book. This is especially bad with politicians and political appointees, who are often very creative during their interviews.)
The scrapers will not attempt to discover and use an efficient representation. They will attempt to hit every URL they can discover on a site, and they'll do it at a rate of hundreds of hits per second, from enough IPs that each only requests at a rate of 1/minute. It's rude to talk down to people for not implementing a technique that you can't get scrapers to adopt, and for matching their investment in performance to their needs instead of accurately predicting years beforehand that traffic would dramatically change.
Asking as a newbie in this area, could you share any pointers to language design for performance?
I'm aware of the basic difference between compiled and interpreted languages. Luau has to be interpreted to meet its security goals, and I'm asking with similar goals in mind, so I guess I'm starting from that significant limitation.
Lua gets some of its perf from simple value types that can represent lots of things without pointers. Truthiness is also fast, since only the nil/false singletons are falsy, whereas Python has `__bool__`. But look at the metatable machinery for how much Lua has to check.
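To illustrate the Python side of that (the `FeatureFlag` class here is made up): every `if x:` can dispatch to user code via `__bool__`, so the interpreter has to do a method lookup where Lua only compares a type tag.

```python
class FeatureFlag:
    def __init__(self, enabled: bool):
        self.enabled = enabled

    def __bool__(self) -> bool:
        # Arbitrary user code runs on every truthiness check.
        print("__bool__ called")
        return self.enabled

flag = FeatureFlag(True)
if flag:  # prints "__bool__ called" before the branch is taken
    print("feature is on")
```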
All of these introduce guards with a JIT or inline caches; it's preferable to need no guard at all.
This isn't unique to dynamic languages: see C++'s std::map, which is forced into a layer of indirection so that pointers to entries stay valid across inserts. Rust doesn't allow a borrow to live past that point, and Go doesn't allow taking the address of a map value at all.
Other examples: C optimizations having to worry about pointer aliasing, or Go interfaces having to box everything. Go used to let small value types avoid boxing in interface values, but dropped that when switching to a precise GC.
> The idea is that users were relying on a functionality to be maintained (the "rug"), and the Datastar developers decided to continue maintaining it behind a paywall (the "pull").
Why were the users so entitled to free ongoing maintenance that its end is worth describing using a term from financial fraud?
It's also getting a lot flakier. Around October or November of last year I realized that every week I see another bit of SPA jankiness. I lose my scroll position because some component has appeared or disappeared above the viewport. The issues list shows hours-old state and does not refresh. A button doesn't work. A page loads but everything below the header is unstyled. I expand the build details but it collapses closed every time a stage finishes. A filename header appears twice. New comments don't appear. On and on.
It's very frustrating to have a tool that has spent 15 years fading into the background of reliable infrastructure become an intrusive, distracting mess.
After a couple months with jujutsu it's almost completely replaced my use of git. It’s a lot to hope for, but just as jujutsu surveyed a couple decades of VCS to syncretize a huge improvement, I do hope someone will do the same for collaborating with jujutsu. GitHub PRs feel very unfortunately frozen in amber because their popularity makes it very hard to fix the core UI design problems it has with multiple tabs, incoherent timeline, edited commits, missing "8 more comments", and now a steady drip of SPA jank.
The big remaining feature of GitHub is the network effect of coworkers and potential contributors already being logged in, and there could be a race between a competitor neutralizing that with bidirectional sync (see git-bug) and GitHub getting their usability problems sorted. Microsoft's legendary resistance to breaking changes means there's a very big window available.
My favourite GitHub UI/UX snafu is being unable to navigate back to the root repo page when I'm in some sort of complex view, or being unable to get to the author's profile in fewer than like 5 clicks.
Re going to the author's profile: I usually just give up and edit the address in the URL bar directly.
You'd think clicking on author's name in commit log view would bring you to their profile — social coding, after all — yet it simply filters the log by author.
Just checked: apparently they do have a tooltip popup in the "Commits" log view that links directly to the author, but who has the time to wait a second for the tooltip to appear? Clicking on the name simply filters commits by the author, and clicking again just reloads the same page you're already on. But hey, at least they haven't broken the URLs yet!