It's my belief that much of this flavor of UI/UX degradation can be avoided by employing a simple but criminally underutilized idea in the software world (FOSS portion included), which is feature freezing.
That is, either determine what the optimal set of features is from the outset, design around that, and freeze, or organically reach the optimum and then freeze. After implementing the target feature set, nearly all engineering resources are dedicated to bug fixes and efficiency improvements. New features can be added only after passing through a rigorous gauntlet of reviews that determine whether the value of the feature's addition is worth the inherent disruption and impact to stability and resource consumption, and if so, its integration into the existing UI is approached holistically (as opposed to the usual careless bolt-on approach).
Naturally, there are some types of software where requirements are too fast-moving for this to be practical, but I would hazard a guess that it would work for the overwhelming majority of use cases, which have been solved problems for a decade or more and where the required level of flux is, in reality, extremely low.
> Turns out CLI interfaces by themselves are (from a usability perspective) incomplete for the kind of collaboration git was designed to facilitate.
git was designed to facilitate the collaboration scheme of the Linux Kernel Mailing List, which is, as you might guess... a mailing list.
Rather than a pull-request (which tries to repurpose git's branching infrastructure to support collaboration), the intended unit of in-the-large contribution / collaboration in git is supposed to be the patch.
The patch contribution workflow is entirely CLI-based... if you use a CLI mail client (like Linus Torvalds did at the time git was designed.)
The core "technology" of this is, on the contributor side:
1. "trailer" fields on commits (for things like `Fixes`, `Link`, `Reported-By`, etc)
2. `git format-patch`, with flags like `--cover-letter` (this is where the thing you'd think of as the "PR description" goes), `--reroll-count`, etc.
3. a codebase-specific script like Linux's `./scripts/get_maintainer.pl`, to parse out (from source-file-embedded headers) the set of people to notify explicitly about the patch — this is analogous to a PR's concept of "Assignees" + "Reviewers"
4. `git send-email`, feeding in the patch-series generated in step 2, and targeting the recipients list from step 3. (This sends out a separate email for each patch in the series, but in such a way that the messages get threaded to appear as a single conversation thread in modern email clients.)
And on the maintainer side:
5. `s ~/patches/patch-foo.mbox` (i.e. a command in a CLI email client like mutt(1), in the context of the patch-series thread, to save the thread to an .mbox file)
6. `git am -3 --scissors ~/patches/patch-foo.mbox` to split the patch-series mbox file back into individual patches, convert them back into an annotated commit-series, and build that into a topic branch for testing and merging.
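Put together, a rough sketch of that flow (email addresses, branch names, and paths here are placeholders, and the get_maintainer.pl step is Linux-specific):

    # contributor side: steps 2-4 (step 1, trailers, lives in the commit messages themselves)
    git format-patch --cover-letter -v2 -o outgoing/ origin/master   # patch series + cover letter, marked as v2
    ./scripts/get_maintainer.pl outgoing/*.patch                     # who to To:/Cc: for these files
    git send-email --to=maintainer@example.org --cc=list@example.org outgoing/*.patch

    # maintainer side: steps 5-6, after saving the thread to an mbox
    git checkout -b topic/foo
    git am -3 --scissors ~/patches/patch-foo.mbox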
Subsystem maintainers, meanwhile, didn't use patches to get topic branches "upstream" [= in Linus's git repo]. Linus just had the subsystem maintainers as git-remotes, and then, when nudged, fetched their integration branches, reviewed them, and merged them, with any communication about this occurring informally out-of-band. In other words, the patch flow was for low-trust collaboration, while direct fetch was for high-trust collaboration.
Interestingly, in the LKML context, `git request-pull` is simply a formalization of the high-trust collaboration workflow (specifically, the out-of-band "hey, fetch my branches and review them" nudge email). It's not used for contribution, only integration; and it doesn't really do anything you can't do with an email — its only real advantages are in keeping the history of those requests within the repo itself, and for forcing requests to be specified in terms of exact git refs to prevent any confusion.
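For what it's worth, a minimal example of that formalization (the ref and URL are placeholders); the command just prints a summary you'd paste into the nudge email:

    # summarize what my-topic-branch adds on top of v6.8, as published at the given URL
    git request-pull v6.8 git://git.example.org/me/linux.git my-topic-branch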
It’s unfortunate that so many podcasters have leaned into the myths about being able to manipulate dopamine as some global up/down thing you can adjust by following certain protocols.
Andrew Huberman is among the worst at this because he has enough education that he should know better. I think he’s too enamored with the traffic and clicks he gets when he talks about dopamine, so he’s willing to stretch the truth or even break it completely if it will make for engaging podcast content.
Some of the studies he cites over and over again about dopamine don’t even say what he claims. Huberman is a joke among people who actually know the science, but he became a hero to people who thought he was just a nice guy sharing free knowledge with them.
When I was a fresh EE I was hired as a backdoor around hiring an electrician, because an EE is cheaper. Everything about it was hilariously easy, and we were wiring up some insane shit - think 300 amp, 1 kV motors that will kill people if something goes wrong. There's very little that goes beyond your 101-level linear circuits, some gauge tables, and YouTube videos for how to crimp and mount the breakers, ground faults, etc.
It will make more sense when you realize the trades were basically designed to keep the 'out-group' from joining the union or being hired, to prop up union workers' wages. That's why hiring an EE is so much cheaper than hiring an electrician.
That is, the dirty secret is: the trades have a vested interest in there being a 'shortage.'
A good friend of mine in HS did one semester of college, said "nope, not for me," and became an electrician. Fast forward ~30 years: he's planning on retiring in a few years (could retire now if he wanted to).
He's smart, worked hard, learned and taught himself as much as possible. Took & passed all tests asap, got his contractor's license asap, etc. Branched out to learn industrial controls and electrical too.
I think pursuing a trade can be a really good way to go... but you need to go into it planning to learn & teach yourself and push to excel.
I know welders who make >200k USD a year working 4-6 months a year. They are crazy skilled and know their craft inside and out... they know more metallurgical science than I do (and have way more practical knowledge about metal alloys), and I was a chemical and materials engineering major.
I know a master carpenter/craftsman who makes >500k a year, profit, doing high-end cabinets, woodworking, furniture, building, kitchen remodels, etc.
here we just call for impeachment of judges who rule against us https://apnews.com/article/donald-trump-federal-judges-impea... - “at a certain point, you have to start looking at what do you do when you have a rogue judge.” - what else you got in mind, exactly, Donald?
and obviously there's the whole impoundment thing we've just been talking about - the executive blatantly ignoring a law that restricted that. should be trivial to imagine ways THAT could be misused.
Musk's entire role is pretty suspect - where was the Senate confirmation hearing or law authorizing the delegation of power to him? where is the authority over departments headed by actual cabinet members beyond "Trump said so"?
another big flashing warning sign is RFK and the CDC re-opening the vaccines/autism thing. wouldn't it be curious if suddenly a new study contradicts decades of research there? seems a bit wasteful to re-litigate unless you have reason to believe you'll get the result you want, not the one the numbers have been pointing to...
What are the good actions? Laying people off to save pennies off the budget? Promising tax cuts to enlarge the national deficit? Renaming shit?
(EDIT: another big corruption-enablement thing would be deciding the FCC is the "let's police speech i don't like" department.)
I think their main target is corporate creative jobs: background music for ads, videos, etc. And just like with all AI, they will eat the jobs that support the rest of the system, making it a one-and-done. It will give a one-time boost and then be stuck at that level, because creatives won't have the jobs that allowed them to add to the domain - new music styles, new techniques. It's literally eating the seed corn, where the sprouts are the creatives working the boring commercial jobs that let them practice, become experts in the tools, and eventually build the domain up. Their goal is to cut the jobs that create their training data and the ecosystem that builds up and expands the domain. Everywhere AI touches will basically be 'stuck using Cobol,' because AI will be frozen at the point in time when the energy-infusing 'sprouts' all had their jobs replaced by it, and without them creating new output for AI to train on, it all ossifies.
We are witnessing in real time the answer to why 'The Matrix' was set when it was. Once AI takes over there is no future culture.
I wonder what’s the most polished or “sanded” UI out there?
You would think FAANG would have a half-decent UI and UX with the amount of money they have. But anybody who has used Amazon.com, AWS, GCP, or even Azure would beg to differ.
Personally, off the top of my head, the most polished UI/UX has to be "mcmaster.com". I can find anything I need in what seems like a couple of minutes.
Compare this to big-box stores like "Home Depot" or "Lowe's". I can spend 10-15 minutes just trying to find the right size of screw, board, or whatever using their bloated sites. On mobile it's even worse.
Goodness is fundamentally a subjective attribute. Trying to measure it objectively is attempting alchemy. You cannot make the subjective out of the objective - you will always have smuggled subjectivity in somehow.
Suppose I say that good code should have the lowest number of curly braces, or the lowest number of subroutines, or the flattest or deepest object hierarchy. A measure being objective doesn't make it good. So why is test coverage good?
In fact, every single one of the metrics proposed in the parent post is something I have seen gamed to the point of maladaptiveness.
* execution performance time -
So frequently over-focused on that you can find endless Google hits for "Premature optimization is the root of all evil." I have seen developers spend hours or days saving (sometimes imagined!) run time, where all the time they saved over the life of the product wouldn't add up to the time spent. Extreme pursuit of performance leads to code that is hard to work on - who cares about those milliseconds when critical new features take months to write, or are even impossible?
* build time -
It is very easy to diminish build time by destroying critical architecture. I have done it myself. :)
* regression frequency -
Insisting on only making very safe changes is how you wind up spending six weeks lobbying change control boards for one line of code.
* defect quantity -
In environments that actually track this, people merge issues into a single "fix what's wrong" ticket, degrading the utility of the ticket system. Defect granularity is not actually obvious!
* code size -
Obfuscated X contest entries are often very compact, and people who obsess on saving lines and characters can wind up leaning towards that style.
* dependency quantity -
Leads to attempts to homebrew encryption.
* test automation coverage -
Automated testing ossifies design and architecture, which can paralyze, e.g., an experimental prototype. Full coverage is also costly - time and energy spent maintaining pointless tests can come at the expense of mitigating more realistic risks. I realize I depart from prevailing wisdom in this, but there are times and places when automated testing is simply inappropriate.
* test automation execution duration -
Sometimes the right way to write a test is slow.
I'm not disagreeing that these are generally good things to strive for. They are! I'm saying that if you think these things define goodness, each one can lead you to a cursed place. (I hasten to add that there are times when a metric really does define goodness - sometimes you need speed or reliability or whatever, at any cost. Recognizing that circumstance and its limits - "any cost" does not generally mean any cost - is subjective.) Goodness is subjective, and while objective measures can help you assess it, such measures cannot define it - when and how you use which measures, and when you think they've gone off the rails, is itself a judgement call.
I once inherited a system that was both essential for business operations and a thorn in everyone's side. The guy I inherited it from (and the guy he inherited it from) had taken over a year to learn how to use it. I set about reorganizing, rewriting, documenting, abstracting - all those soft changes in pursuit of clarity and obviousness. They aren't objective, but they do pay off: when I handed the system off to the next guy (and three more after him!), he was off and building on it in a day. That was how I knew I had succeeded! When doing that sort of thing, you do always wonder if what you're writing is clearer for everybody or just you. But surely even the most hardheaded bean counter can see the value of training developers in a day rather than a year. That's good. :)
Goodness is contextual and subjective. I can agree that your goals are generally right, and I can point to circumstances where they're overemphasized or even outright wrong. Sometimes, when the sky is falling, a nasty little bash script that meets none of the usual criteria for "quality" is the best possible thing.
There are people who use subjectivity as a haven for vanity, and build mountains of pointless code in pursuit of some idea of goodness that serves no practical purpose or is even harmful. It is important that we retain our own ability to criticize on subjective grounds, precisely to counter that sort of activity - because you will find it in the land of the objective advocates as well, building mountains of metrics that don't serve any practical purpose either. To recognize a bad abstraction and a bad metric is the same skill, and requires the same confidence in your own good judgement.
Objectivity is no refuge from the necessity of good taste.
You could absolutely get this working via the API right now. Here's how to use my API CLI tool to do that:
pip install files-to-prompt llm llm-claude-3
llm keys set claude
# paste in your Claude API key
llm models default claude-3.5-sonnet
files-to-prompt my-github-repo/ | llm --system 'say OK'
And now you can start an interactive chat with Claude 3.5 Sonnet pre-seeded with your files like this:
llm chat --continue
The --continue will ensure it is a continuation of the previous chat session where you dumped in all those files.
If you want to run against an updated copy of your files, start a new conversation with the files-to-prompt piping again.
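For example, re-using the exact commands from above against the refreshed checkout:

    # start a brand-new conversation seeded with the current contents of the files
    files-to-prompt my-github-repo/ | llm --system 'say OK'
    # then resume interactively from that new conversation
    llm chat --continue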
Yeah, it's wild. The guy who fixed my garage door recently told me he sold his small time, but well rated garage door repair company to a PE firm. They also bought all his rivals. I had gotten a few quotes and everyone wanted to do a full replacement for several thousand, insisting it was all junk. He fixed it in 20 minutes for $80 and was firm that everything was absolutely in great shape except for the small problem he fixed.
The back end of all these companies is the same: some call center/scheduler that manages everything very cheaply. They run the purchased companies as fronts, jack up the prices, and push for big replacements/upgrades until reviews dip. Then they dissolve the company into a generic regional company and sell that to a national like servicepro.
I love React but I use no state management, apart from useState locally within components.
State management in React is a major source of pain and complexity, and if you build your application using events then you can eliminate that state entirely.
Most state management in React is used to fill out props and get the application to behave in a certain way - don't do it - too hard, drop all that.
Here is how to control your React application, make it simple, and get rid of all state except useState: drive app-level behaviour with custom events that components dispatch and listen for, and keep useState only for what an individual component needs locally.
Trust me - once you switch to using custom events you can ditch all that crazy state and crazy usage of props to drive the behaviour of your application.
It's a custom build of Firefox with somewhat sensible, sometimes strict, privacy-respecting default settings.
There's also the Arkenfox user.js which you can put on top of vanilla Firefox, aiming for the most privacy and security possible.
https://github.com/arkenfox/user.js
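If it helps, a minimal sketch of dropping it into a profile (Linux path shown; the profile directory name varies per install, and about:profiles will tell you yours):

    git clone https://github.com/arkenfox/user.js
    # copy the prefs file into your Firefox profile directory, then restart Firefox
    cp user.js/user.js ~/.mozilla/firefox/<your-profile>/user.js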
I handle it by collecting quotes that tell me to knock it off. I've since started to focus on just the things I really care about:
The purpose of knowledge is action, not knowledge.
― Aristotle
Knowledge isn't free. You have to pay attention
― Richard Feynman
"Information is not truth"
― Yuval Noah Harari
If I were the plaything of every thought, I would be a fool, not a wise man.
― Rumi
Dhamma is in your mind, not in the forest. You don't have to go and look anywhere else.
― Ajahn Chah
Man has set for himself the goal of conquering the world,
but in the process he loses his soul.
― Alexander Solzhenitsyn
The wise man knows the Self,
And he plays the game of life.
But the fool lives in the world
Like a beast of burden.
― Ashtavakra Gita (4―1)
We must be true inside, true to ourselves,
before we can know a truth that is outside us.
― Thomas Merton
Saying yes frequently is an additive strategy. Saying no is a subtractive strategy. Keep saying no to a lot of things - the negative and unimportant ones - and once in a while, you will be left with an idea which is so compelling that it would be a screaming no-brainer 'yes'.
― Unknown
I consider this essay, "The Elements of Style", and "Simple and Direct" by Jacques Barzun to be the Holy Trinity of guidance on clear, straightforward writing.
I would personally get the silenced fans in the largest diameter you can afford / that fits in your desired space, because a larger fan moves more air more quietly. The apartment was 700 sq. ft. I got the silenced 6" and it's reasonably quiet at most speeds, and not that loud at the highest speed -- putting filters at both intake and outlet greatly reduced the noise vs. just one side.
If you get the filters and fans in the same diameter they will just slip together without any additional hardware. But it will be quite wobbly. I used velcro (monoprice, laying around for wire-taming) to secure it to a vertical bookshelf.
I'd probably go 8" if I was buying today, I hedged a bit cheaper because I wasn't sure if the quality/performance would be what I needed. This has been running for 9 months now with almost no noticeable degradation in performance, although I'm not currently quantifying it. Eventually I intend to install 3 VOC sensors, one outside the unit near the intake, one inside the unit, and one outside the unit at the outlet....to measure the VOC scrubbing efficiency curve over time and assist in deciding when to replace the carbon.
We've been running it 24 hours per day, usually about 40% but sometimes at the lowest setting (maybe 25%) and sometimes at the highest 100% setting.
After 9 months, we can still use it for point sources of concentrated smells like soldering, and it captures 100% of the odors. And this is operating in a high-VOC environment near a lot (dozens) of chemical plants on our side of the city.
What rings true for me in this article is the notion that, as a founder, it’s better to pick a problem space than it is to pick a solution. Amazon chose to live in the problem space of how to bring consumers more products, more conveniently, at a lower price. Geico chose a similar goal but in insurance.
Technical founders often focus on a particular cool way of solving a problem and burn lots of capital building that thing. Sometimes the thing is the right thing and everyone makes money. But sometimes it’s not. Yet if you stick with the same problem space for long enough, and aren’t too connected to your particular solution, I think you have a greater chance of succeeding.
Okay, go ahead and poke at my pontifications now. I’m ready.
"I enjoy just hanging out and having a beer, but also painting a room, building a porch, doing a little demolition"
Perhaps that's the difference: specialization. More contractors and take-out meals mean less working with friends.
Friends are good for unspecialized labor, but when you get into skilled labor it falls apart. The one plumber friend will get swamped with requests, and the quantum physicist may not find an opportunity to reciprocate.
For me, a major part of ADHD management consists of forcing myself to do something, multiple times a day. It's work, learning, chores-like tasks, sport, hygiene, regular sleep, cold morning shower, unprocessed food preparation - pretty much everything that isn't immediately pleasurable.
I'm "unhappy" multiple times a day when I start doing something that doesn't usually have to be done right away. But it's often a choice not between doing something now or later, but doing something now, not never.
However weird this may sound, I constantly do things against myself, for myself. I don't get much satisfaction from finishing those tasks, but my quality of life has increased drastically, and I wouldn't trade these moments of discomfort back for an ongoing discomfort.
Just throwing my 2c into the well, as someone who used to be highly "pro-science" but lost confidence in much of academia and the validity of scientific research in general after starting a PhD and seeing how the sausage is made.
The biggest problem science is facing is not an external threat from a rabble of ignorant science deniers, but the complete degradation of quality within the scientific institutions themselves.
Most research is flawed or useless, but published anyway because it's expedient for the authors to do so. Plenty of useful or interesting angles are not investigated at all because doing so would risk invalidating the expert status of the incumbents. Of the science that is communicated to the public (much of which is complete horseshit), the scientists themselves are often complicit in the misrepresentation of their own research as it means more of those sweet, sweet grants. The net result is the polished turd that is scientific research in the 21st century.
"Educated" people can say what they want about how important it is to believe in the science and have faith in our researchers and the institutions they work for.
The fact remains that if one were to assume that every single scientific discovery of the last 25 years was complete bullshit, they'd be right more often than they were wrong.
> It's almost like there's something cultural happening in America
Everyone can fill in their pet theory here, but mine is that so much of being American these days is disempowering. Corporations have so much power that not only do they not care about us, but we can't even avoid using them.
Get sick? Good luck fighting your giant health insurance provider or hospital conglomeration.
Looking for an entry-level job? Navigate a Kafka-esque application system only to (if you're lucky) work for a miserable low-level manager who also has no power and transfers that anger onto you.
Want to spend time with friends? Their preferred communication medium is now one of a couple of huge social media companies with horrific privacy policies and a history of emotionally damaging its users.
Want to go to a show? Have fun buying overpriced tickets from one of the two or three ticket monopolies and then paying an exorbitant, insulting "convenience" fee.
So everyday, in many of our basic daily rituals, we are reminded of how little agency we have in our own lives and how much we both depend on and are subject to the whims of billionaires (who, meanwhile, are busy destroying the Earth).
Given the political discussions about “booster shots”, I find this quote pretty interesting:
> Other studies suggest that a two dose regimen may be counterproductive. One found that in people with past infections, the first dose boosted T cells and antibodies but that the second dose seemed to indicate an “exhaustion,” and in some cases even a deletion, of T cells.[34] “I’m not here to say that it’s harmful,” says Bertoletti, who coauthored the study, “but at the moment all the data are telling us that it doesn’t make any sense to give a second vaccination dose in the very short term to someone who was already infected. Their immune response is already very high.”
Those with extreme views certainly do exist but they tend to be pushed to the front by those who have vested interests against homeschooling. Most of the people who I've known who homeschooled did it because their child wasn't getting the attention they needed in the school system. Typically after a couple of years at home, they returned to the public school system, having caught up and developed better learning skills.
What I don't understand is this comment "essential to a child's development to be part of a social classroom experience", though it is a very common view. Ignoring online schools, the largest public high school in the US has over 8,000 students, and high schools with 3,000-4,000 students are common. When in human history has it been the norm for children to gather daily in such large numbers, away from their families, with only high-level supervision from adults who aren't their parents and don't have much depth of insight into their lives? It's more akin to Lord of the Flies with occasional adult commentary than a way to learn good social skills. What they learn is the law of the jungle, and many people are scarred for life by the experience. Even at the elementary and middle school levels, enrollment of 1,000 students in a single school is common. There's absolutely nothing natural about this. It's an absurd artificial construct, and it's amazing so many people think it is somehow a good thing.
Another common question is how children will make friends if not at school. For myself, most of my friends were those I met at school, but about a quarter were from meeting in the neighborhood, through the youth group at church, or through family or friend-of-a-friend connections. Just because someone lived in my neighborhood didn't mean they went to my public school. Some went to religious or private secular schools. Some kids at church also attended private secular schools (our church didn't have a school) or a different public school. If I had been homeschooled, I still would have met about a quarter of my friends outside of school, and probably would have met many of the same in-school friends through those friends due to common interests and geography. All of this was without me being into sports, which is another way children meet friends, even if it wasn't one of the ways I met them.
It’s really unpopular inside the fruit company. I’ve not spoken with a single engineer in Cupertino who thinks this is a good path. There has been radio silence from management on how to address this. It almost feels like they want this to fail. In the past management has given us resources and talking points we can use with our friends and family. Not this time.
Yes, it is easy to express simple tasks like Sudoku in MiniZinc. However, in terms of sophistication and convenience, Prolog is hard to beat, since it is a full-fledged programming language that can also be used to fetch, process and transform any available input in any way necessary to make it amenable to all built-in constraint solvers.
Compared to a Prolog program, a MiniZinc model is quite hard to parse and reason about programmatically: Every Prolog program is also a valid Prolog term, and can be reasoned about logically as long as we keep to the pure monotonic core of Prolog, which also includes constraints. Pure Prolog programs can be debugged declaratively by generalizing away goals, whereas for example removing a line of a MiniZinc model may render the model invalid.
In addition, Prolog provides logic variables and therefore allows interesting questions to be asked about the model, such as queries about partially known data. This, in combination with Prolog being a programming language, allows its application in a way that is not readily available, or may not be possible at all, with more special-purpose tools and modeling languages. For instance, David C. Norris is using Scryer Prolog and its integer constraints to model and analyse dose escalation trials as they arise in clinical oncology, exceeding the expressiveness of other modeling languages.
The publication "A Geometric Constraint over k-Dimensional Objects and Shapes Subject to Business Rules" by Carlsson, Beldiceanu and Martin gives an overview of what is going on behind the scenes when these constraints are posted.