One of the main pushbacks in this article is on the difficulty of later edits once the domain changes. The "make invalid states unrepresentable" mantra really came out of the strongly typed functional programming crowd – Elm, F#, Haskell, and now adopted by Rust. These all have exceptionally strong compilers, a main advantage of which is _easier refactoring_.
Which side of the argument one falls on is likely to be heavily influenced by which language they're writing in. The mantra is likely worth sticking to closely in, say, Haskell or Rust, and I've had plenty of success with it in Swift. Go or Java on the other hand? You'd probably want to err on the side of flexibility, because that suits the language more and you can rely on the compiler less during development.
Perhaps it's not really language but the type of programs developers use these languages for? Open vs closed systems, heavy/light interactions with the outside world, how long they're maintained, how much they change, etc.
I can tell you the language is a huge part of it. I used to code in ObjC and now I’m using Swift; refactoring is easy in Swift and was a pain in ObjC.
I know I can trust my Swift code. Usually when it compiles it works because I try to, and often can, make invalid states unrepresentable. My ObjC code was always full of holes because doing so was not so easy (or not possible at all)…
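To make that concrete, here's a minimal Swift sketch of the kind of thing I mean (the types are made up for illustration):

```swift
import Foundation

// Modelling a request's state as a bag of optionals lets nonsense
// combinations exist: loading with data, data and error at once, etc.
struct LooseState {
    var isLoading = false
    var data: Data?
    var error: Error?
}

// An enum with associated values makes those combinations unrepresentable.
enum LoadState {
    case idle
    case loading
    case loaded(Data)
    case failed(Error)
}

func render(_ state: LoadState) -> String {
    // The compiler forces every case to be handled, and no case can
    // carry values it shouldn't have.
    switch state {
    case .idle: return "Idle"
    case .loading: return "Loading…"
    case .loaded(let data): return "Loaded \(data.count) bytes"
    case .failed(let error): return "Failed: \(error)"
    }
}
```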
The gist of that advice is that closed systems are that much easier to work with.
The tradeoff open-system users have to evaluate is whether they'd rather have data rejected and not evaluated when it doesn't fit their model of it (extreme case: your program silently drops key instructions), or have their program work with data it wasn't built to understand (extreme case: SQL injection).
Proponents of this mantra argue that it’s easier to make a process to monitor rejected inputs than to fix issues created by rogue data.
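A rough Swift sketch of that reject-and-monitor side (Instruction, Event, and the JSON here are all made up):

```swift
import Foundation

// Closed model: only the instructions the program was built to understand.
enum Instruction: String, Decodable {
    case create, update, delete
}

struct Event: Decodable {
    let instruction: Instruction
    let payload: String
}

// "archive" isn't in our model, so strict decoding will reject it.
let json = #"{"instruction": "archive", "payload": "doc-42"}"#.data(using: .utf8)!

do {
    let event = try JSONDecoder().decode(Event.self, from: json)
    print("Handling \(event.instruction) for \(event.payload)")
} catch {
    // Rather than silently dropping the input, surface it somewhere a
    // human will look, so the model can be extended deliberately.
    print("Rejected input, sending to monitoring: \(error)")
}
```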
That might be another factor. You could say that a program in a stable ecosystem doesn't need changing, so it should prioritise strictness over flexibility. Even in a changing ecosystem, though, rather than building in flexibility that allows for incorrect states, you can raise the level of abstraction and build in extension points that retain nearly the same strictness while still giving you the flexibility to change in the ways you'll need later.
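As a hedged sketch of what such an extension point can look like in Swift (PaymentMethod and the raw strings are hypothetical):

```swift
// Known cases stay exhaustive and strict; anything new is funnelled into
// one explicit `unknown` case instead of loosening the whole model.
enum PaymentMethod: Decodable {
    case card
    case bankTransfer
    case unknown(String)

    init(from decoder: Decoder) throws {
        let raw = try decoder.singleValueContainer().decode(String.self)
        switch raw {
        case "card": self = .card
        case "bank_transfer": self = .bankTransfer
        default: self = .unknown(raw) // future methods land here, visibly
        }
    }
}
```

Callers still have to decide what to do with the unknown case, but every known case stays strict.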
This was clearly not legal advice. Soft-deletes come with a lot of complexity at the application layer, more maintenance, more security risk, and require building out user data deletion processes.
Having a deleted-data table is a slightly easier approach I've seen, but you still need to be aware of user and legal requirements around deleting data.
> Soft-deletes come with a lot of complexity at the application layer, more maintenance, more security risk, and require building out user data deletion processes.
That depends on your application and requirements. I've worked in situations where a soft delete – with any fields holding sensitive customer data overwritten with a placeholder or random data, for legal compliance reasons – was a lot simpler than doing a hard delete and leaving a bunch of dangling records with IDs pointing to rows that no longer exist.
And unless your data model is pretty simple, and you are ok with not having any kind of grace period before fully deleting user data, you'll probably need to build a user data deletion process either way.
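As a rough sketch of that scrub-on-delete approach (hypothetical Swift model; in a real system this would be an UPDATE against the database):

```swift
import Foundation

// Soft delete keeps the row, so foreign keys stay valid, but overwrites
// anything identifying with placeholder data.
struct User {
    var id: UUID
    var email: String
    var displayName: String
    var deletedAt: Date?
}

func scrub(_ user: User) -> User {
    var scrubbed = user
    scrubbed.email = "deleted-\(user.id.uuidString)@example.invalid"
    scrubbed.displayName = "Deleted user"
    scrubbed.deletedAt = Date()
    return scrubbed
}
```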
That's fair, this can be an easier approach. However, you do need to make sure that all fields get scrubbed, which can be hard as the codebase evolves and fields are added or their semantics change. It may also leave a bunch of metadata – timestamps can be identifying, and so can entity graphs.
Exactly - there's a tension between storage, performance, and cost.
Keeping everything can actually make some things faster (though mostly it makes them slower) and gives you a depth of information you didn't have before, but it has a big impact on storage and cost.
Most people pick some mix of tradeoffs: keep every state change for the things that matter for auditing, and rely on backups for the rest.
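A minimal sketch of the "keep every state change for the important things" option (the record type is made up):

```swift
import Foundation

// Append-only audit trail: instead of overwriting a field in place,
// record each transition so every past state is recoverable.
struct ChangeRecord {
    let entityID: UUID
    let field: String
    let oldValue: String
    let newValue: String
    let changedAt: Date
}

var auditLog: [ChangeRecord] = []

func recordChange(entityID: UUID, field: String,
                  from oldValue: String, to newValue: String) {
    auditLog.append(ChangeRecord(entityID: entityID, field: field,
                                 oldValue: oldValue, newValue: newValue,
                                 changedAt: Date()))
}
```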
The devil is, as always, in the details. I think "make invalid states unrepresentable" is a good thing to teach in order to push back against spaghetti code. State management is hard, and untangling bad state management is near impossible.
But of course, some flexibility is often (not always) needed in the long run, and knowing where to keep the flexibility and where to push for strictness is an important part of skills development as an engineer.
I'm a bit biased, but I find the AI overviews to be basically great. All I want from a search engine is the correct answer, Google's knowledge graph has done that for many queries for a long time, and AI overviews seems like a good next step in that process.
I've not seen many hallucinations, fact-checking is fairly straightforward with the onward links, and it's not like I can take any linked content at face value anyway – I'd still want to fact-check when it makes sense even if it weren't AI-written.
I constantly get blatant hallucinations. It particularly likes to take feature requests/suggestions and tell me they're presently possible, when they're not. It's long past the point where I just ignore the AI overviews entirely.
> I'm glad I use a Hetzner VPS. I pay about EUR 5 monthly, and never have to worry about unexpected bills.
The trade-off being that your site falls over with some amount of traffic. That's not a criticism, that may be what you want to happen – I'd rather my personal site on a £5 VPS fell over than charged me £££.
But that's not what many businesses will want; it would be very bad to lose traffic right at your peak. This was a driver for a migration to cloud hosting at my last company: we had a few instances of doing a marketing push and then having the site slow down because we couldn't scale up new machines quickly enough (1-12 month commitment depending on spec, 2 working day lead time). We could quantify the lost revenue, and it was worth paying twice the price for cloud to have that quick scaling.
This is a different Beeper. I don't want a personal CRM, I know everyone in my Beeper chats well enough that I don't need a sidebar to prompt me about where they live. Conversely, I need Signal and Facebook Messenger because that's where my loved ones are.
However I can absolutely see that some people would want this. CRM for email is a solved problem and many professionals use CRM tools, but it doesn't exist in the same way for chat, and maybe it should (although email feels like the bigger market). This is probably in desperate need of a LinkedIn integration, though.
My advice to the Amber team: is this for work or personal use? Pick one, make it great for that, and don't try to force it to be the other.
> Companies aren't obligated to support me doing this
Where does one draw the line on support? If I jailbreak an iPhone, should I still get Apple customer support for the apps on it, even though they may have been manipulated by some aspect of the jailbreak? (Very real problem, easy to cause crashes in other apps when you mess around with root access) Should I still get a battery replacement within warranty from Apple even though I've used software that runs the battery hotter and faster than it would on average on a non-jailbroken iPhone?
I feel like changing the software shouldn't void your warranty, but I can see arguments against that. I probably fall on the side of losing all software support if you make changes like this, but even then it's not clear cut.
It's up to the manufacturer to prove that the software modification had a material impact on the issue being covered. Yes that's expensive, yes that's the point.
As you said, this might be a complex one to figure out. I am biased because I tend not to use customer support services (with more of a "figure it out" approach) and am confident I could replace parts myself, though the latter might be harder with parts pairing today.
Can see how people more interested in the software side of things would care about support from [parent company] though. "Lose all support if you bypass our restrictions" is the relatively straightforward approach, but the collateral damage might be quite high. In an ideal world, perhaps the network of third party repair services could take up the slack?
The line is definitely crossed if you jailbreak your phone. It seems pretty clear. Either you're using the device as the manufacturer intended or not. If I take a device rated for 2m of water down scuba diving to 25m, it voids my warranty too.
But that's not the point here. A closer analogy would be the manufacturer of the diving gear dictating which bodies of water you're allowed to use it in.
And if you decide to give the device a try in your own swimming pool or a random spot you'd like to explore, the device won't work and you might be banned from using it elsewhere. Would that make any sense?
Imagine Lenovo refusing to service your ThinkPad because you've compiled your own kernel.
The charging IC has an NTC thermistor, and the battery absolutely must withstand the system running at 100% and then some.
As for battery lifetime, batteries are cheap – unless you glue them into an expensive assembly and force people to replace the whole assembly, as phone vendors do.
Laptop manufacturers are most definitely not designing their laptops to run at the top of the thermal envelope 100% of the time, and honestly that's probably the right choice, because almost no one does that. Sustained full load is what you pay for when you buy high-end servers, and the fact that these corners are cut is why consumer hardware is so much cheaper.
If you run the software they provide and their guardrails aren't strict enough, that's clearly a warranty case. But if you modify the software to remove their guardrails, it feels reasonable that they can deny a warranty fix.
Overclocking is perhaps a clearer cut version of this – it's a "software change", but can affect the hardware lifespan.
At a previous place we used a dreadful email marketing SaaS tool, and it caused us no end of fire-fighting even though we probably only had 500 lines of integration code. We ended up rewriting the functionality we needed in-house – about 3k lines – and saved a ton of pain and money.
TPM and TAM are completely different roles. TPMs are essentially project or program managers across wider parts of the org, and the "technical" means they have something beyond a surface understanding of the technical aspects, but are likely not writing any code. TAMs are account managers in the sales org with a focus on giving clients more technical support or planning integrations etc.
"Technical lead" is not a role profile or ladder, it's what you're doing. You could be a TL at L4 on a small project, and you could not be TL at L7 if it's a big enough project. All very subjective.
The point of this thread is that some teams have a manager who is the de facto TL for the team's projects, and so has IC responsibilities, while other teams have a manager who does manager things and one or more separate TLs.
I've worked on teams in both structures, both in and out of Google, and whether TLMs vs EMs work well depends on so many factors: who the manager is, their management style, the org's priorities, the projects, etc.