That would make any movie based on stories in the public domain impossible, because it would destroy all financial incentive to make them. No, derivative works should stand on their own terms.
1. People still write software based on the GNU license. What's the difference?
2. I'm a mathematician - math is not copyrighted, yet it's still being done.
3. Is it really so important for society that copyrighted movies be based on old stories? Won't society benefit from new stories and characters?
To be clear, I don't actually propose implementing this. But the existing system also sucks. I'm thinking that maybe incorporating such an idea into the existing system (limiting what you can do with public-domain work) could be beneficial.
Yes, a browser can do that. The browser starts with a GET and receives a new HTTP 1xx or 3xx response with an “Age-Verification: required <age>” header. The browser calls your AVP (defined once in preferences) and gets a short-lived age certificate (expiring in 30 seconds), then passes it to the website in an “Age: <age> <certificate>” header. The website uses known public keys to verify the “at least a certain age” claim in the certificate. AVP public keys can be published in some registry and cached by websites.
I don’t think it’s ridiculous. A complex system that leaves a customer farther from their destination than where they started is flawed by design. And we all know exactly what the problem is: the system was designed for capabilities the company could not afford given its budget/planning/KPIs. DB did not invest enough in infrastructure to support the big dream, and now it also has a huge problem with an aging, retiring workforce. So let’s not pretend this is normal, business as usual: complexity is not an excuse for mismanagement.
Politicians do not dictate operational decisions. If they allocate X for infrastructure, the company should scale up or scale down to match that budget, rather than overpromise.
“The company” is not one company. The infrastructure manager (DB InfraGo in Germany) manages tracks on behalf of the state. Operators (a regional operator in the case of this article) run as many trains as they want/are allowed to, which in practice is more than the system can handle reliably. There are laws regulating how track access is awarded, so even when a DB group operator runs on tracks managed by DB InfraGo, there is no single “company” that makes a certain promise. The remedy is political either way: either change how track access is awarded to limit the number of trains allowed, or increase funding for added capacity and maintenance.
Yeah, and by 2025 it's already quite common to be able to pay with a credit card in bars and restaurants too, which was almost unheard of a few years back. Of course, these machines break all the time, and suddenly the business can only take cash. This seems to be a very specific problem that only happens in Germany.
I have seen pushback on this kind of behavior because "users don't like error codes" or other such nonsense. UX and Product like to pretend nothing will ever break, and when it does they want some funny little image, not useful output.
A good compromise is to log whenever a user would see the error code, and treat those events as very high priority.
We put the error code behind a kind of message/dialog that invites the user to contact us if the problem persists and then report that code.
It’s my long-standing wish to be able to link traces/errors automatically to callers when they call the helpdesk. We have all the required information. It’s just that the helpdesk actually has very little use for this level of detail, so all they can do is attach it to the ticket so that the actual application teams don’t have to search for it.
> I have seen pushback on this kind of behavior because "users don't like error codes" or other such nonsense […]
There are two dimensions to it: UX and security.
Displaying excessive technical information in an end-user interface will complicate support and likely reveal too much about the internal system design, making the system vulnerable to external attack.
The latter is particularly concerning for any design facing the public internet. A frequently recommended approach is exception shielding. It involves producing two messages upon encountering a problem: a nondescript user-facing message (potentially including a reference ID pinpointing the problem in space and time) and a detailed internal log message with the problem’s details and context for L3 support / engineering.
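Exception shielding as described above can be sketched in a few lines. This is an illustrative stdlib-only sketch, not a prescribed implementation; the logger name and response shape are assumptions.

```python
import logging
import uuid

log = logging.getLogger("payments")  # hypothetical subsystem name


def handle_request(do_work) -> dict:
    """Run do_work(); on failure, shield internals behind a reference ID."""
    try:
        return {"ok": True, "result": do_work()}
    except Exception:
        ref = uuid.uuid4().hex  # pinpoints the problem in space and time
        # Detailed internal message: full traceback and context, for L3
        # support / engineering only.
        log.exception("request failed, ref=%s", ref)
        # Nondescript user-facing message: no stack trace, no internals,
        # just the reference ID the user can quote to support.
        return {
            "ok": False,
            "message": "Something went wrong. If the problem persists, "
                       f"contact support with code {ref}.",
        }
```

The reference ID appears in both messages, so support can jump straight from a user report to the exact internal log entry.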
I used «powermetrics», bundled with macOS, with «bandwidth» as one of the samplers (--samplers / -s set to «cpu_power,gpu_power,thermal,bandwidth»).
Unfortunately, Apple has removed the «bandwidth» sampler from «powermetrics», and it is no longer possible to measure memory bandwidth as easily.
> UX and Product like to pretend nothing will ever break, and when it does they want some funny little image, not useful output.
Just ignore them, or provide appeasement insofar as it doesn’t mess with your ability to maintain the system.
(cat picture or something)
Oh no, something went wrong.
Please don’t hesitate to reach out to our support: (details)
This code will better help us understand what happened: (request or trace ID)
Nah, that’s an easy problem to solve with UX copy: „Something went wrong. Try again or contact support. Your support request number is XXXX XXXX“ (a base58 version of a UUID).
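For reference, a base58 version of a UUID can be produced with a few lines of stdlib Python; this sketch uses the common Bitcoin-style alphabet, which drops 0/O and I/l so codes survive being read aloud to support.

```python
import uuid

# Bitcoin-style base58 alphabet: no 0/O or I/l lookalikes.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"


def uuid_to_base58(u: uuid.UUID) -> str:
    """Encode the UUID's 128-bit integer in base58 (at most 22 chars)."""
    n = u.int
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n:
        n, rem = divmod(n, 58)
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))


def base58_to_uuid(s: str) -> uuid.UUID:
    """Decode a base58 string back to the original UUID."""
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    return uuid.UUID(int=n)
```

A 128-bit UUID needs at most 22 base58 characters, versus 32 hex digits plus hyphens, which is why it fits nicely into a short support-request number.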
It’s a wild violation of SRP to suggest that. Separating concerns is far more efficient. A database can handle the audit trail and some key metrics much better, no special tools needed, and as a bonus you can join the transaction log with domain tables.
Are you assuming they're all stored identically? If so, that's not necessarily the case.
Once the logs have entered the ingestion endpoint, they can take the most optimal path for their use case. Metrics can be extracted and sent off to a time-series metric database, while logs can be multiplexed to different destinations, including stored raw in cheap archival storage, or matched to schemas, indexed, stored in purpose-built search engines like OpenSearch, and stored "cooked" in Apache Iceberg+Parquet tables for rapid querying with Spark, Trino, or other analytical engines.
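That fan-out after the ingestion endpoint can be sketched in-process. Everything here is a stand-in: the three lists represent a time-series metrics database, cheap archival storage, and an indexed search engine respectively, and the schema check is a deliberately trivial placeholder.

```python
# Stand-in sinks for the destinations described above.
metrics_sink: list = []   # time-series metrics database
archive_sink: list = []   # cheap raw archival storage
search_sink: list = []    # purpose-built search engine index


def ingest(record: dict) -> None:
    """Single ingestion endpoint; each record takes every path it fits."""
    # Every record is archived raw, cheaply.
    archive_sink.append(record)
    # Numeric fields are extracted and shipped off as metrics.
    for key, value in record.items():
        if isinstance(value, (int, float)) and not isinstance(value, bool):
            metrics_sink.append((key, value))
    # Records matching a known schema (trivial check here) get indexed.
    if "message" in record:
        search_sink.append(record)
```

The point is that the producer emits one record once; routing, extraction, and storage-format decisions all live behind the ingestion endpoint.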
Have you ever taken, say, VPC flow logs, saved them in Parquet format, and queried them with DuckDB? I just experimented with this the other day and it was mind-blowingly awesome, and fast. I, for one, am glad the days of writing parsers and report generators myself are over.
The problem statement in this article sounds weird. I thought that by 2025 everyone logs at least a thread id and a context id (user id, request id, etc.), and in a microservice architecture at least a transaction or saga id. You don’t need structured logging, because grepping by this id is sufficient for incident investigation. And for analytics and metrics, databases of events and requests make more sense.
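The grep-by-id approach rests on every log line carrying the context id, which the stdlib handles directly. A minimal sketch, assuming a request id set once per request (the `req-42` value and logger name are illustrative):

```python
import contextvars
import io
import logging

# Request-scoped context: set once per request; every log line carries it,
# so grepping for the id reconstructs the whole flow.
request_id = contextvars.ContextVar("request_id", default="-")


class RequestIdFilter(logging.Filter):
    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id.get()
        return True


buf = io.StringIO()  # stand-in for the real log file
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("%(request_id)s %(message)s"))
handler.addFilter(RequestIdFilter())
log = logging.getLogger("app")
log.addHandler(handler)
log.setLevel(logging.INFO)

request_id.set("req-42")
log.info("charging card")
log.info("payment accepted")
# Incident investigation is then just: grep req-42 app.log
```

Because `contextvars` is task-local, concurrent asyncio requests each keep their own id without any explicit plumbing through call signatures.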