How so? A good bit of my global claude.md is dedicated to fighting the incessant attribution in git commits. It is on the same level as the "sent from my iPhone" signature - I'm not okay with my commits being an advertising board for Anthropic.
> Blog post written, PR'd, and merged in under 3 minutes.
That's close to, or even less than, the time it takes me to read it. I'm struggling to put into words how that makes me feel, but it's not a good feeling.
I believe it's less about politeness and more about pronouns. You used `who`, whereas I would use `what` in that sentence.
In my world view, an LLM is far closer to a fridge than to the androids of the movies, let alone to human beings. So being polite to it is about as pointless as greeting your fridge when you walk into the kitchen.
But I know that others feel differently, treating the ability to generate coherent responses as an indication of the "divine spark".
I removed my anecdote and flash wear explanation because of cranky folks like yourself.
The corrosion inhibitors in petrol engine oil get fully depleted within about a year with most brands. You may well sell the machine before you see acidified-lubricant-related problems, but the motor will not reach its full operational lifespan (https://en.wikipedia.org/wiki/Bathtub_curve).
I do agree that anyone with a CVT-style transmission likely won't have to worry, as that entire section will probably need to be replaced before you see significant hydrodynamic bearing damage.
I guess we are just boring and/or unimaginative. I don't get enough communications per day to need an abstraction layer between me and the messages. The daily automations I need are more efficiently carried out by Home Assistant / n8n. I'm not in a position where I need automated briefs on every new company started in my area. I genuinely don't see how it could benefit me.
Most humans are unimaginative because being imaginative is actually really, really hard. People are also incredibly over-optimistic about their own ideas, etc... until they go through the craftsmanship of producing something great.
Most people's thought process is "oh, great idea, just gotta do this and that and out pops something that'll improve people's lives". Erm, no... it's nothing like that in reality.
I'd counter that n8n (or most other workflow tools) can handle as much ambiguity as OpenClaw - it has an LLM call node. Stuff the ambiguous parts in there, but don't burn a rainforest's worth of compute figuring out how to call the weather API each and every time.
Also, in the olden days of pre-AI, if our weather workflow did not notify us because conditions juuust failed to be met, we adjusted the thresholds. Uphill, both ways.
Don't get me wrong, I use LLMs for automations plenty - by prompting the model "here is what I want to achieve, here are the tools I have, figure out how to stitch them together". The actual workflows then run (mostly) deterministically, with a sprinkling of "classify this image" or "summarize this text" nodes thrown in for good measure.
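To make the split concrete, the pattern looks roughly like this when written out as plain code instead of workflow nodes. This is only a sketch - the coordinates, threshold, model name and prompt are placeholders, and the Open-Meteo / OpenAI endpoints just stand in for whatever weather source and LLM node you actually use:

```typescript
// Deterministic workflow with a single LLM call for the fuzzy part.
// Everything here (location, threshold, model) is illustrative.

const LAT = 52.52, LON = 13.41;   // placeholder coordinates
const WIND_ALERT_KMH = 40;        // plain old threshold, no AI involved

async function run() {
  // Deterministic part: fixed API call, fixed threshold.
  const res = await fetch(
    `https://api.open-meteo.com/v1/forecast?latitude=${LAT}&longitude=${LON}&current_weather=true`
  );
  const { current_weather } = await res.json();

  if (current_weather.windspeed < WIND_ALERT_KMH) return; // nothing to report

  // Ambiguous part: turn raw numbers into a readable notification.
  const llm = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{
        role: "user",
        content: `Write a one-sentence wind warning from this data: ${JSON.stringify(current_weather)}`,
      }],
    }),
  });
  const data = await llm.json();
  console.log(data.choices[0].message.content); // hand off to your notifier of choice
}

run();
```

The point being: the LLM never decides whether to call the weather API or what the threshold is - it only handles the one step that's genuinely fuzzy.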
My solution was to buy a used Samsung tablet with an OLED screen and control the display with motion sensors. It sits in the hallway above the keys drawer, at roughly eye level when you go to pick up the keys, and the screen is on only when someone's walking nearby. I designed the dashboard around muted colours on a black background, with brights reserved for "hey, pay attention to this" data. And most importantly, the screen is not visible from any spot you're likely to stay at for a longer time. As for mounting, I used calipers, a 3D printer and some double-sided tape. It's not completely seamless, but damn close for ~10% of the effort.
The tablet itself runs https://www.fully-kiosk.com/ to display a web dashboard. Fully Kiosk has good Home Assistant integration, including screen on/off controls. I also have a bunch of Sonoff battery-powered Zigbee motion sensors scattered around the place. Home Assistant then does what it was meant to do - act as a glue layer between various systems, firing screen control commands to Fully Kiosk whenever select motion sensors trigger.
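In HA itself that's just a small automation (motion sensor state change as the trigger, the Fully Kiosk screen control as the action). For the curious, the command it ultimately fires boils down to something like this sketch against Fully Kiosk's remote admin REST API - IP, password and timing are placeholders, and the command names are from memory, so check them against the Fully Kiosk docs:

```typescript
// Sketch of the screen command HA ends up sending to the tablet.
// IP, password and command names are illustrative - verify against the
// Fully Kiosk remote admin docs for your version.

const TABLET = "http://192.168.1.50:2323";                 // remote admin, default port
const PASSWORD = process.env.FULLY_PASSWORD ?? "changeme"; // set in Fully Kiosk settings

async function setScreen(on: boolean): Promise<void> {
  const cmd = on ? "screenOn" : "screenOff";
  const res = await fetch(`${TABLET}/?cmd=${cmd}&password=${encodeURIComponent(PASSWORD)}`);
  if (!res.ok) throw new Error(`Fully Kiosk returned ${res.status}`);
}

// Roughly what a motion event translates to: screen on now, off two minutes later.
await setScreen(true);
setTimeout(() => void setScreen(false), 2 * 60_000);
```

In practice I let the HA integration handle all of this; the sketch is only to show there's no magic in the screen control itself.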
The dashboard itself is a React app talking to my Home Assistant instance over a websocket. The heavy lifting of bringing various data sources together is done by HA; I just wrote a React app because it seemed easier than learning to customize HA dashboards to the degree I wanted to.
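For anyone wanting to do the same, the websocket side is only a handful of lines. A minimal sketch using the home-assistant-js-websocket package - the URL, token handling and entity ID below are placeholders, not my actual config:

```typescript
// Minimal sketch: subscribe to HA entity states and pick out what the
// dashboard cares about. URL, token and entity IDs are placeholders.
import {
  createConnection,
  createLongLivedTokenAuth,
  subscribeEntities,
  type HassEntities,
} from "home-assistant-js-websocket";

const auth = createLongLivedTokenAuth(
  "http://homeassistant.local:8123",  // your HA instance
  process.env.HA_TOKEN ?? ""          // long-lived access token from your HA profile
);

const connection = await createConnection({ auth });

// Fires on every state change; in the React app this feeds component state.
subscribeEntities(connection, (entities: HassEntities) => {
  const temp = entities["sensor.outdoor_temperature"]?.state; // hypothetical entity
  console.log("outdoor temperature:", temp);
});
```

From there it's ordinary React: put the subscription in a useEffect, push the entities into state, and render.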