Yes, it has. In both breadth and depth. People paying attention know this.
Even within techno (my favorite genre), which is already quite a narrow genre in terms of sound, the variety of novel sounds birthing new techno sub-genres over the last 10-15 years has been wild.
I dislike calling them genres; they're more like trends or styles. One producer makes something new and unusual that breaks the established patterns, people like it a lot, other producers copy it, and the cycle continues until fans get bored and move on. That lifecycle usually lasts about 2-5 years, sometimes not even long enough for the style to get a proper name, but if you're into the scene you know that "genre" when you hear it.
To give a recent "mainstream" example: Odd Mob created a certain sound that blew up in popularity despite not fitting neatly into any of the existing boxes we had (tracks like Get Busy, Losing Control, Palm Of My Hands). Other producers copied it, and by now you have anonymous shitposters on social media complaining that most new songs sound like they were made by him.
As a DJ, my endgame is building a set from a variety of different kinds of music that still sounds great together but doesn't all follow the same boring formula. And it's pretty great.
I've been through this with sports. The hierarchy is:
(1) good portraits
(2) photos that show players in opposition to each other
(3) photos that tell a story
Developing the habit of doing (1) consistently is important because photos like that are still usable. If you just chase the action, in most sports the ball is between you and a player and you get a lot of shots of people's behinds, so looking for the places where people are open is foundational.
(3) is tough because a play involves a number of events that don't usually appear in one frame, except for a few shots in a game.
Alright, so continue the exercise. In the image in the article, what is the narrative for the people in the foreground? We can't see where they are coming from or where they are going, and their actions don't seem well defined.
Then in the middle there is a train station(?) where the narrative is also absent or muddled. The people arrived by train to do what?
I would argue the tight shot of the mountain and house is the best capture, because it tells a story of a beautiful place where someone lives.
I like the mountain and house myself. The wide shot isn’t a bad photo but it is pretty cluttered and the parts don’t really work together like you say.
I think there are likely ways to effectively include the people, by getting to an angle where you can isolate a couple of them and still include the mountain. I suspect you could also get a good shot with the wide angle by moving closer to the people, although this would emphasize the people more than the mountain.
The first thing that comes to mind is simplifying device-type identification, without having to look up the device name in a list or update that list each time a new device is released.
Foldables have a different UI which often requires different requests to the backend. They need to support both narrow-screen and large-screen formats for content, and you usually want to avoid having critical UI elements fall on the hinge for fairly obvious usability reasons.
That should all be abstracted out to the operating system, not dependent on checking for specific devices. Any app should be able to accommodate different screen/window sizes and safe areas.
The Android OS does provide screen width/height information and safe areas. We use them when appropriate, which is fairly often, but not universally. Safe area support for foldables is pretty weak, though, because it's a relatively new device category that imposes fairly different constraints on devices.
The bigger issue is that there's always a long tail of product considerations that need to be different on foldables and aren't covered by simply feature-detecting the available screen resolution. Logging is one: PMs are very interested in how the category as a whole is performing, if only to direct future hardware plans, and that requires that foldables actually be tracked as a separate category. Backend requests are another: you can (and should) optimize bandwidth usage on phones by not shipping information to the client that will only be displayed on large screens, and you can (and should) optimize screen usage on large screens by displaying more information than is available on phones. Foldables represent the union of the two, and you usually don't want the latency of additional backend requests when the user folds or unfolds the device.
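For the hinge-avoidance problem specifically, the OS-level signal does exist. Here's a minimal sketch using Jetpack WindowManager's FoldingFeature (the avoidHinge callback is a hypothetical app-side hook, invented for illustration); note that a FoldingFeature is only reported while the window actually spans the fold, which is part of why the folded-state long tail still pushes people toward device lists:

```kotlin
// Sketch of hinge-aware layout via Jetpack WindowManager (androidx.window).
// avoidHinge() is a hypothetical layout hook; the rest is real API.
import android.graphics.Rect
import androidx.activity.ComponentActivity
import androidx.lifecycle.Lifecycle
import androidx.lifecycle.lifecycleScope
import androidx.lifecycle.repeatOnLifecycle
import androidx.window.layout.FoldingFeature
import androidx.window.layout.WindowInfoTracker
import kotlinx.coroutines.launch

fun ComponentActivity.observeHinge(avoidHinge: (Rect?) -> Unit) {
    lifecycleScope.launch {
        lifecycle.repeatOnLifecycle(Lifecycle.State.STARTED) {
            WindowInfoTracker.getOrCreate(this@observeHinge)
                .windowLayoutInfo(this@observeHinge)
                .collect { layoutInfo ->
                    // A FoldingFeature is only reported while the window
                    // spans the fold, i.e. the device is unfolded.
                    val fold = layoutInfo.displayFeatures
                        .filterIsInstance<FoldingFeature>()
                        .firstOrNull()
                    // Hand the occluded hinge bounds (or null) to layout
                    // code so critical UI stays off that strip.
                    avoidHinge(
                        fold?.takeIf {
                            it.occlusionType == FoldingFeature.OcclusionType.FULL
                        }?.bounds
                    )
                }
        }
    }
}
```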
(The irony is that the app in question is Google Play, and I personally know most of the PMs and several of the engineers on Android SysUI.)
“Unit tests are small tests, each one exercising a little piece of functionality. The units tested are usually individual methods, but sometimes clusters of methods or even whole objects.”
— Kent Beck, Extreme Programming Explained: Embrace Change, 2nd Edition (2004)
You're agreeing with me there: "each one exercising a little piece of functionality". In Beck's approach, a test is added at each step for a small addition of functionality, which is then implemented in the project to make the test pass. These days unit tests are more commonly treated as "every method is a unit to be tested completely", often after the implementation is already there.
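To make the distinction concrete, here is a minimal sketch in Kotlin/JUnit (the ShoppingCart class is hypothetical, invented just for illustration):

```kotlin
// Hypothetical ShoppingCart, used only to contrast the two testing styles.
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

class ShoppingCart {
    private val items = mutableMapOf<String, Int>()
    fun add(sku: String, qty: Int = 1) {
        items[sku] = (items[sku] ?: 0) + qty
    }
    fun count(): Int = items.values.sum()
}

class ShoppingCartTest {
    // Beck-style: the test pins down one small piece of functionality,
    // written before the code that makes it pass.
    @Test
    fun `adding the same sku twice accumulates quantity`() {
        val cart = ShoppingCart()
        cart.add("apple")
        cart.add("apple", 2)
        assertEquals(3, cart.count())
    }

    // The "every method is a unit" style would instead mirror the API
    // surface (testAdd, testCount, ...) after the implementation exists.
}
```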
If you repeat this process until all ambiguities in the spec are eliminated, aren't you essentially left with code? Or at least something that looks more like code than plain English?
183M passengers would be approximately three times the population of the UK. I assume the correct statistic is that 183M journeys were made in a year, or 3.5M per week. But the number of unique passengers will be a lot lower.
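For what it's worth, the arithmetic (taking the UK population as roughly 67M, which is my assumption):

```kotlin
// Back-of-the-envelope check; the ~67M UK population figure is assumed.
fun main() {
    val journeysPerYear = 183_000_000.0
    val ukPopulation = 67_000_000.0
    println("Journeys per week: %,.0f".format(journeysPerYear / 52))                  // ≈ 3.5M
    println("Multiple of UK population: %.1f".format(journeysPerYear / ukPopulation)) // ≈ 2.7
}
```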
You added the phrase "unique passenger" yourself; the article doesn't talk about that at all. It does use the phrases "passenger journeys" and "passenger numbers". While the sentence could perhaps be phrased slightly better, in context it's pretty obvious that it's talking about the number of journeys. All of this is classic, extremely pedantic and obnoxious cherry-picking.
> LLMs are not search engines, and I'm not gaining any followers or customers in any meaningful way because an LLM indexes my site.
Counterpoint: my wife owns an accounting firm and publishes a lot of highly valuable informational content on their website's blog. Stuff like sales tax policies and rates in certain states, accounting/payroll best practices articles, etc. I guess you could call it "content marketing".
Lately the firm has been getting highly qualified leads from LLMs that cite her website's content when answering questions like "What is the sales tax nexus policy in California?". Users presumably follow the citation and then engage with the website, eventually becoming very warm leads.
So LLMs are obviously not search engines in the conventional sense, but that doesn't mean they can't generate valuable traffic to your marketing website.
> The developer does not review or edit the code, but solely uses tools and execution results to evaluate it and asks the LLM for improvements. Unlike traditional AI-assisted coding or pair programming, the human developer avoids examination of the code, accepts AI-suggested completions without human review, and focuses more on iterative experimentation than code correctness or structure.
https://en.wikipedia.org/wiki/Vibe_coding