On that note, the font isn't symmetrical and the bar graph itself uses jagged lines. This makes it hard to read and much less precise. I'd prefer all websites in monospaced fonts with only the straightest of lines.
> you'd have to spam debug printfs all around you expect this chain to happen to replace the debugger which would waste time
It's not wasting time; it's narrowing in on the things you know you need to look for and hiding everything else. With a debugger you have to do this filtering step mentally every time you look at the debugger output.
Trying to guess what that chain is and scattering printfs all along that path feels like a poor simulation of what a debugger can do out of the box, and, unlike us, does precisely. So I'd say it's exactly the opposite.
If you only care about one specific spot, then sure, printf is enough, but you also need to recompile every time you add a new printf or change a debug-related detail, while a debugger lets you re-run without recompiling. So if anything, the printf method can take more time.
Also, in a debugger you can reproduce printf output using the REPL.
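As a rough illustration of the idea (a sketch, not anyone's actual setup): a debugger-style tracepoint can log values out of a running function without editing or recompiling it. Here's a minimal Python version built on `sys.settrace`, the same hook `pdb` uses, with a hypothetical `compute_total` as the target:

```python
import sys

def compute_total(prices, tax_rate):
    # The function under investigation; note it contains no logging code.
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

captured = []

def global_trace(frame, event, arg):
    # Only trace into the one function we care about, like setting a
    # breakpoint there instead of printf-ing everywhere.
    if event == "call" and frame.f_code.co_name == "compute_total":
        return local_trace
    return None

def local_trace(frame, event, arg):
    # Acts like a breakpoint with a "log locals and continue" action.
    if event == "return":
        captured.append((dict(frame.f_locals), arg))
    return local_trace

sys.settrace(global_trace)
result = compute_total([10.0, 5.0], 0.08)
sys.settrace(None)
```

After the run, `captured` holds the function's locals and return value, the same information a scattering of printfs would give, with no changes to `compute_total` itself.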
And just like the rule it replaced, the rule of three is now often treated as the one "correct" approach, while I still find reality to be more nuanced.
Sometimes you do have the domain expertise to make the judgment call.
A recent example that comes to mind is payment calculation. You can go ahead and tie that up in a nice reusable function from the get-go; if you've ever dealt with a bug where payment calculations appeared different in different places and somehow made it in front of a customer, you're well aware of how painful that can be. For some things, having a single source of truth outweighs any negatives associated with refactoring.
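A minimal sketch of that point, with hypothetical names and made-up fee values: one shared function instead of hand-copied formulas means a fee change can never leave two surfaces disagreeing about what the customer owes.

```python
# Hypothetical example: one shared payment calculation instead of
# copies of the formula in the invoice and checkout code paths.
PROCESSING_FEE = 0.029  # assumed percentage fee
FIXED_FEE = 0.30        # assumed flat fee

def payment_total(amount):
    """Single source of truth for what the customer is charged."""
    return round(amount * (1 + PROCESSING_FEE) + FIXED_FEE, 2)

# Both surfaces call the same function, so they cannot drift apart.
def invoice_line(amount):
    return f"Total due: ${payment_total(amount):.2f}"

def checkout_summary(amount):
    return f"You will be charged ${payment_total(amount):.2f}"
```

If the fee logic had been inlined in both string-building functions instead, updating one and forgetting the other is exactly the bug described above.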
A fully curated backlog with complete specifications that is kept up to date with current changes in the product/industry? I've never had the privilege of working in an environment like that.
Obviously not every item in the backlog has a full design, for work that might be years out.
But yes, the next few items for the team to work on should always have the necessary specifications to start work. Whether it's UX mocks or a requirements document or whatever. Having that stuff ready to go is a primary job of the PM who manages the backlog.
Obviously the engineering team then has to break it down further into tasks to complete, but that's what engineering is. And you will run into areas that turn out to be underspecified, and the PM needs to liaise with other folks to figure out answers, but again, that's part and parcel. That doesn't generally stop the whole team from working, and teams often work on multiple features at once, so even being temporarily blocked on one doesn't keep you from progress on another.
I’ve been bottlenecked by that middle part quite often. A design isn’t finished, we’re awaiting user feedback or testing, specs are done but waiting for sign-off from required parties, etc.
I’ve never had a shortage of work as an engineer, but that doesn’t mean that work has always been perfectly optimized to business priorities - there’s plenty of other bottlenecks in the process that are not coding.
Is the case for Playwright over Puppeteer just its cross-browser support?
We're currently using Cypress for some automated testing on a recent project, and it's extremely brittle. Considering moving to Playwright or Puppeteer, but not sure whether that will fix the brittleness.
In my experience Playwright provided a much more stable and reliable experience than Puppeteer, with multi-browser support and asynchronous operations (which is the entire point). ymmv
I would definitely recommend Puppeteer if you can use it: it's maintained by the Chrome team and always does things the "approved" way. The only reason we went with Playwright is that we're a Python library and pyppeteer was abandoned.
While that is the origin story of Playwright in 2020, that's no longer really true. Puppeteer is alive and well and arguably moving faster than Playwright these days because it's updated in lockstep with https://github.com/ChromeDevTools/devtools-frontend
Also recently found it unnecessarily difficult to do profiling of page workers using Playwright's CDPSession wrapper (and they don't seem to have any plans to improve it: https://github.com/microsoft/playwright/issues/22992#issueco...), whereas it was pretty painless in Puppeteer.
So, definitely more useful if you care about more than just your main thread.
I have converted several large E2E test suites from Cypress to Playwright, and I can vouch that it is the better option. Cypress seems to work well at first, but it is extremely legacy-heavy, its API is convoluted and unintuitive, and it stacks a bunch of libraries/frameworks together. In comparison, Playwright's API is much more intuitive. Yes, you must 'await' a lot, but side effects (e.g. making API calls) are much easier to handle because it can all just be promises.
It is also just really easy to write a performant test suite with Playwright: parallelization is straightforward, whereas in Cypress it is terrible, almost intentionally so, to sell their cloud products, which you do not need. The way Playwright works just feels more intuitive and stable to me.
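The "it can all just be promises" point can be sketched with plain asyncio and stub stand-ins (hypothetical helpers, not real Playwright calls): an API side effect awaited mid-test is just another coroutine in the same flow as the page actions.

```python
import asyncio

# Stubs standing in for a page action and an API side effect; in a real
# Playwright test these would be a page interaction and an HTTP call.
async def click_submit():
    await asyncio.sleep(0.01)
    return "clicked"

async def create_test_user(name):
    await asyncio.sleep(0.01)
    return {"name": name, "id": 1}

async def test_flow():
    # Seed data via an API call, then drive the page; both are plain
    # awaits in one sequence, with no separate command queue to juggle.
    user = await create_test_user("alice")
    result = await click_submit()
    return user, result

user, result = asyncio.run(test_flow())
```

Cypress, by contrast, queues commands on its own chainable object, which is where much of the awkwardness around mixing in arbitrary async work comes from.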
They're all brittle in my experience, but Playwright has a lovely test recorder and test runner, which are also integrated into VS Code, and it tidies up a lot of the exceptions that would occur in Puppeteer when the page state wasn't meticulously ready for some operation.
Playwright's "trace" viewer is also fantastic providing periodic snapshots and performance debugging.
Does Claude Code do this by default?
It seems for most prompts that I give it, it ends up breaking things into TODO lists and reformulating the prompt. This seems to work well for most tasks.
Do you feel like you end up with as clear a mental model reviewing it as you do when you write it yourself?
I'm still trying to figure out the answer to that question for myself. Maybe the answer is, "Probably not, and it probably doesn't matter" but I'm still trying to figure out what kind of downstream effects that may have later on my judgment.
Yes, of course I do. It's rote stuff. To the balance of time we're accruing to me dealing with generated code, add "stripping off all the comments", "fixing variable names to be how I like them", etc. My fingerprints are still all over everything. And it's still radically faster than doing it all by hand.
Mental expenditure on programming is also not linear through a task; it takes much more energy to get started than to do the back half. Ever stared at an empty function for a minute trying to come up with the right variable name, or choosing which value to compute first? LLMs are geniuses at just getting things started.
Which is why Comcast goes to such great lengths to ensure they own as much of your network stack as they can. In my area at least, their support can fully manage your router and WiFi remotely if you're leasing their equipment. I imagine this is a great boon for their ability to provide tech support (and it includes a host of other "features" that don't serve direct customer needs, such as a non-optional guest WiFi access point that any other Comcast user can use).
This leads to fun tech support calls if you use your own equipment, where you spend the first 20-30 minutes basically proving to the support underling that you know how to run your equipment before they take your issue seriously (yes, the modem light is green; yes, I've already power-cycled; yes, I'm testing on a wired connection; etc.).
> Great. Glad to hear you are connected via hard wire Mr. teeray.
> Please wait a moment while I check on some things on your account.
> Thank you for your patience. Can you please confirm for me that you see a green light on the top of the device? Can you tell me whether the light is blinking or is solid?
I know for a while (I switched back to consumer a few years ago) Comcast Business let you persistently opt out, but if you opted out, you couldn't use other people's APs (either "share and get access to that network" or "don't share, and don't").
Back when Comcast made it absolutely mandatory to have a technician come to the house to do the install, I just chatted with the tech about computer networking and our respective home setups. This usually got me the phone number for the local tech support office, along with a "Call this if the service is giving you any real issues."
> This leads to fun tech support calls if you use your own equipment where you're basically proving to the support underling that you know how to run your equipment for the first 20-30 minutes
For analyzing support burden, I think the relevant question here is why have you even had the experience of calling tech support for a non-working connection - and that falls squarely on the non-reliability of Comcast's network.
Exactly. That's the kind of logic that only makes sense in a metastasized corpo. The only times my non-incumbent fiber connection has gone down in 8 years have been overnight maintenance windows that only happen maybe a few times per year.