To be fair I guess, the way the article is titled goes out of its way to mislead.
It would be quite easy to say "Added 911,000 fewer jobs from March 2024 to March 2025" or "the year starting in March 2024", but they are clearly aiming to deflect from the Biden admin by implying last year's revisions are the fault of the administration inaugurated in January 2025.
Judging by the comments here, it worked marvelously.
To be clear, these are this administration's revisions, about what happened in the previous administration.
They also don't have much credibility. One problem with firing the economist running the BLS for reporting numbers the administration didn't like and replacing her with a political loyalist is that no one will take the numbers the BLS reports seriously anymore.
The revision process is a normal part of reporting employment numbers. Since real-time actual numbers aren't feasible, they use approximate and correlated indicators initially and later revise when more solid data arrives.
I can't tell what your point is though. It almost sounds like you're trying to say the initial BLS numbers have been politically manipulated for years since revisions have been consistently down for a while? Surely not, though, because that doesn't make any sense -- continuously applying an upward boost on initial numbers, only to have them consistently revised downward, for several years, would simply cause the system to adjust to the new norm. The absolute numbers have never been important; it's the relative numbers that count, so a consistently applied manipulation effectively becomes no manipulation over time.
Anyway, there's a lack of proof of manipulation. Well, until recently, when the political manipulation was made public.
OTOH, I suppose simple facts and logic aren't important in our post-fact world.
Are you my doppelganger? I made almost this exact comment word-for-word to a friend of mine a few weeks ago.
Trudeau's immigration video from December [1] was one of the most dishonest, condescending productions that I've ever seen, basically amounting to, "Yes, we destroyed our previous internationally respected immigration system and imported five million low skilled laborers over a couple years without adding any housing or infrastructure. Yes, that's hurt a lot of you and made you angry. No, we don't think that's a problem. But because you're so angsty, we'll throttle it back a tiny wittle bit over the next year or two before throwing the floodgates wide open again."
Not a shred of anything even resembling self-awareness or humility throughout.
The most shocking thing to me is the heavy involvement of McKinsey consulting in all this[1]. Feels anti-democratic to let a foreign consulting firm set immigration targets.
That's absolutely bananas. $3 million for a _report_ about suggestions for possible immigration reform (to speed it up, of course). In the hole $62 billion a year, and a sizable chunk is going to overpriced MBA grads right out of school to produce PDF documents. How the hell do we rein these people in?
The point is that Democratic voters didn't get a chance to have their voice heard. Conducting polling is post-hoc rationalization for Harris being installed by party leaders in an unprecedentedly anti-democratic action.
The purpose of a primary is to help the party pick a nominee that has a better chance of winning the general, which is, in turn, the purpose of a political party. The mechanism of binding primaries was set up by party leaders after some bad choices (especially in the 1968 Democratic Convention). This time, the prospective candidates decided that a blitz primary wouldn't serve its purpose. If the voters punish them for this decision, then it will have proved a bad one, but it's neither unprecedented nor undemocratic.
But this is part of the democratic process. If the presidential candidate died a week before the election and the VP took his place, we would not be calling the situation undemocratic.
When I say democratic, I don't mean the Democratic party's primary process, I mean a process by which people vote to select a leader.
Yes, I agree that Joe Biden had the legal authority to step down and appoint Kamala Harris as his successor to run for President. No, I don't think doing that is democratic. Following the democratic process would have meant recognizing what everyone else knew way in advance and stepping down early enough for potential nominees to run to be the candidate.
There is no way they could have held a second primary in the 28 days between Biden dropping out and the DNC. Polls said Democrats wanted Biden to drop out, Democrats had already elected Harris once, and polls said Democrats were happy with Harris as the nominee. That's about as democratic as you could get given the situation.
Criticizing the Democrats for being anti-democratic here would carry a lot more weight if the Republican nominee wasn't responsible for J6 and the fake electors plot.
Also notable that the author didn't link the actual writing of Haje (the TechCrunch person who railed against Alexander Wang), because he knew it wouldn't make either Haje or his argument in favor of Haje look good. So, here it is:
> I would invite him — and those supporting them — to fuck all the way off. You misunderstand me. You thought I wanted you to fuck only partially the way off. Please, read my lips. I was perfectly clear: Off you fuck. All the way. Remove head from ignorant ass, then fuck all the way off.
This is the writing whose loss, the author argues, is why TechCrunch has lost its global relevance. A perfect inversion of the truth if I've ever seen one.
Yeah, that was a weird pivot in the article. I enjoyed it up to that point, but the strident defense of that terrible Haje article seemed out of left field, like I'd switched to reading Twitter for a couple of paragraphs. Of all the things that have hurt TC, getting rid of Haje for that article seems like it should be at the very bottom of the list.
I found this by accident on YouTube the other day, and started watching it, not intending at all to sit through the 4+ hour runtime, but by the next day I'd watched the entire thing.
I'm not a Disney Parks person at all, but found the whole thing fascinating. Honestly, at first glance, this seems like such a fun idea, and given the popularity of other parks projects, I couldn't believe it'd failed. But digging in, it all kind of starts to fit together.
I didn't take notes, but off the top of my head:
* The stay was very expensive. You actually had to talk to an agent for specific pricing, but it was around $4k USD for two people for two days, with additional charges if you wanted to add people to your room.
* Despite luxury pricing, the rooms and furnishings were decidedly not luxurious. Tiny cabin-style rooms with bunk and pull-out beds, and ~no amenities (no gym, no pool, etc.).
* The experience seems to have been a good idea executed poorly. This YT creator had trouble accessing any of the theoretically available storylines, and despite an effort to interact as much as possible, ended up with results indistinguishable from random chance, or from having done nothing at all.
* Lots of boring app-driven interactions involving messaging with virtual avatars of the IRL characters. On the day excursion to the rest of the Star Wars world, lots of scanning QR codes on crates. The whole thing seems to have been very uninspired, and buggy to boot.
* When comparing the final product to concept art and statements from Disney execs, it seems like a lot of corners were cut, and it ended up very underwhelming compared to what it was supposed to be. There's plenty of evidence elsewhere that Disney parks have become penny-pinching operations, and a reasonable hypothesis is that there were a lot of big ideas, but they would've been expensive, so they were cut back one by one until the overall product had nothing special left and fell flat.
* From Disney's end, it didn't seem to scale very well. The high price tag seems related to the fact that the hotel didn't have all that many rooms compared to larger resorts, which might have thousands, so Disney felt it had to recoup the cost from each one. This didn't sit well with customers, though, and after the initial opening period the hotel seemed to be operating at a high vacancy rate because few people were able or willing to pay the very substantial premium.
* Perhaps the biggest driver of its abrupt closure was accounting. Disney will be taking up to a $300 million tax write-off from the project, so its closure may have been a neutral or even good thing to its execs. The closure was extremely abrupt, which suggests that they wanted to take the write-off in the prior fiscal year specifically.
In Postgres, listen/notify is an inherently lossy channel: if a notification goes out while a listener wasn't around to receive it, it's gone, so it should never be relied upon where data consistency is at stake.
I find that the main thing it's useful for is notifying on particular changes so that the components that care about them can process those changes sooner, without sitting in a hot loop constantly polling tables.
For example, I wrote a piece here [1] describing how we use the notifier to listen for feature flag changes so that each running program can update its flag cache. Those programs could sit in loops reloading flags once a second looking for changes, but that's wasteful and puts unnecessary load on the database. Instead, each listens for notifications indicating that some flag state changed, then reloads its flag cache. They also reload every X seconds so that some periodic synchronization happens in case an update notification was missed (e.g. a notifier temporarily dropped offline).
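Roughly what that looks like with pgx, as a minimal sketch (the `flag_changes` channel and `reloadFlags` are made up for illustration, and real code would reconnect on errors rather than exit):

```go
package main

import (
	"context"
	"log"
	"os"
	"time"

	"github.com/jackc/pgx/v5"
)

// reloadFlags is a hypothetical stand-in for refreshing the in-memory
// flag cache from the database.
func reloadFlags() { log.Println("reloading flag cache") }

func main() {
	ctx := context.Background()

	// Dedicated connection: LISTEN ties it up for the program's lifetime.
	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	if _, err := conn.Exec(ctx, "LISTEN flag_changes"); err != nil {
		log.Fatal(err)
	}

	for {
		// The wait timeout doubles as the periodic fallback reload, so a
		// missed notification self-heals within 30 seconds.
		waitCtx, cancel := context.WithTimeout(ctx, 30*time.Second)
		_, err := conn.WaitForNotification(waitCtx)
		cancel()
		if err != nil && waitCtx.Err() == nil {
			log.Fatal(err) // a real error, not just our periodic timeout
		}
		reloadFlags() // reload on notification *or* timeout
	}
}
```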
Job queues are another example. You'll still be using `SKIP LOCKED` to select jobs to work, but listen/notify makes it faster to find out that a new job became available.
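Something like this, say, where the `job` table and its columns are hypothetical; the notification only wakes a worker up, while `SKIP LOCKED` is what guarantees each job is handed out exactly once:

```go
package worker

import (
	"context"

	"github.com/jackc/pgx/v5"
)

// claimJob atomically claims one available job, skipping rows that other
// workers have already locked.
func claimJob(ctx context.Context, conn *pgx.Conn) (int64, error) {
	var id int64
	err := conn.QueryRow(ctx, `
		UPDATE job
		SET state = 'running'
		WHERE id = (
			SELECT id FROM job
			WHERE state = 'available'
			ORDER BY id
			LIMIT 1
			FOR UPDATE SKIP LOCKED
		)
		RETURNING id`).Scan(&id)
	return id, err // err is pgx.ErrNoRows when the queue is empty
}
```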
Author here. The Go channel send behavior could certainly be altered depending on the particular semantics of the application, but the reason I chose to use a non-blocking buffered channel is so that no particular subcomponent can slow down the distribution of notifications for everybody.
> Shouldn't the channel rather block than discard if full?
In Go, a blocking channel is an unbuffered one, initialized without a size (see [1]). You could have the sender use a `select`/`default` to discard when the receiver isn't ready, but that leaves very little margin for error. If the receiver is still processing message 1 when message 2 comes in and the notifier tries to send it, message 2 is gone.
IMO, it's better to use a buffered channel with some leeway in terms of size, and then write receivers in such a way that they clear incoming messages as soon as possible, i.e. if messages are expected to take time to process, the receiver spins up a goroutine to do so, or has an internal queue of its own where they're placed, so that new messages from the notifier never get dropped.
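A sketch of what I mean, with `handleMessage` as a made-up stand-in for a subscriber's slow work:

```go
package main

import (
	"log"
	"time"
)

// handleMessage is a hypothetical stand-in for whatever slow processing
// a subscriber does with a notification payload.
func handleMessage(msg string) {
	time.Sleep(time.Second) // simulate slow work
	log.Println("handled:", msg)
}

func main() {
	// Buffered: gives the notifier leeway during bursts.
	sub := make(chan string, 100)

	// The receive loop's only job is to drain the channel quickly; the
	// actual work is handed off so sends never find the buffer full.
	go func() {
		for msg := range sub {
			go handleMessage(msg)
		}
	}()

	sub <- "flag_changes" // e.g. what the notifier would send
	time.Sleep(2 * time.Second)
}
```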
The reason you'd use a non-blocking send is to make sure that one slow consumer can't slow down the entire system.
Imagine a scaled-out version of the notifier in which it's listening on hundreds of topics and receiving thousands of notifications. Each notification is received one by one using something like pgx's `WaitForNotification`, and then distributed via channel to the subscriptions that were listening for it.
In the case of a blocking send without `default`, one slow consumer taking too much time to receive and process its notifications would cause a build-up of all the other notifications the notifier's supposed to send, so one bad actor would degrade the time-to-receive for every listening component.
With buffered channels, a poorly written consumer could still drop messages for itself, which isn't optimal (it should be fixed), but all other consumers will still receive theirs promptly. Overall preferable to the alternative.
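In sketch form, with illustrative names:

```go
package notifier

import "log"

// fanOut distributes one notification payload to every subscriber with a
// non-blocking send. A full buffer drops only that subscriber's copy;
// everyone else still receives theirs promptly.
func fanOut(subscribers []chan string, payload string) {
	for _, sub := range subscribers {
		select {
		case sub <- payload:
		default:
			// This subscriber fell behind; dropping its copy beats
			// stalling delivery to every other subscriber.
			log.Println("dropped notification for slow subscriber")
		}
	}
}
```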
Author here. The behavior of notify with respect to transactions is indeed notable, and definitely a great feature that makes it distinct from pub/sub in other systems. Notifications fire only after the transaction commits, when the data is ready, and they're also deduplicated based on payload, so listeners don't have to react to many copies of the same message unnecessarily.
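A minimal demonstration of both properties, assuming pgx and a made-up `flag` table and `flag_changes` channel:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	conn, err := pgx.Connect(ctx, os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close(ctx)

	tx, err := conn.Begin(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer tx.Rollback(ctx) // no-op after a successful commit

	// Hypothetical flag update; the notifications below ride along in
	// the same transaction, so nothing is delivered if it rolls back.
	if _, err := tx.Exec(ctx, `UPDATE flag SET enabled = true WHERE name = 'my-flag'`); err != nil {
		log.Fatal(err)
	}

	// Identical payloads within one transaction are deduplicated:
	// listeners see a single 'my-flag' message, and only after commit.
	for i := 0; i < 2; i++ {
		if _, err := tx.Exec(ctx, `SELECT pg_notify('flag_changes', 'my-flag')`); err != nil {
			log.Fatal(err)
		}
	}

	if err := tx.Commit(ctx); err != nil { // delivery happens here
		log.Fatal(err)
	}
}
```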
That said, NOTIFY isn't really what this post is about. It concerns itself with the other half of listen/notify, describing a "notifier" pattern in which one component listens via `LISTEN` statements and distributes incoming notifications to application subcomponents, to help maximize economy around the use of Postgres connections.
Listen does hold a connection, but that doesn't mean it defeats connection pooling.
That's what I was trying to convey in this blog post: you'll keep a fixed number of connections open for use with listen, but as few as possible by reusing a single connection per program to simultaneously listen on all channels that your application cares about (with the notifier distributing messages to each internal component that subscribed).
With your dedicated listen connections accounted for, the rest of the connection pool can operate normally, with programs checking connections in and out only as long as they need them.
So the net-net is that you have a handful of connections dedicated for listen, and the remaining ~hundreds are part of the connection pool for shared use.
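In rough outline, assuming pgx/pgxpool and made-up channel names:

```go
package main

import (
	"context"
	"log"
	"os"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

func main() {
	ctx := context.Background()
	dsn := os.Getenv("DATABASE_URL")

	// Shared pool: connections are checked out only for the duration of
	// each query, as usual.
	pool, err := pgxpool.New(ctx, dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	// One dedicated connection carries every LISTEN for the whole
	// program; channel names here are illustrative.
	listenConn, err := pgx.Connect(ctx, dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer listenConn.Close(ctx)

	for _, ch := range []string{"flag_changes", "job_available"} {
		if _, err := listenConn.Exec(ctx, "LISTEN "+ch); err != nil {
			log.Fatal(err)
		}
	}

	// ...then a notifier loop: listenConn.WaitForNotification(ctx),
	// routing by notification.Channel to internal subscribers, while all
	// regular queries go through pool.
}
```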