One thing is the graphics are a generation behind what you would usually expect from a game of this quality. I’ve been playing it and I can confirm it is wonderful, and the graphics just make me more impressed with the amount of focus the tiny team (33 people, I think?) put into making what matters great (though I’d be excited for a remaster in the future).
The idea of product discovery has value. Advertising funds product discovery by taking some of the funds that you pay for goods, and funneling that money to platforms and creators that are willing to help others discover that product.
There is an alternative model where we simply pay professional product discoverers. Think influencers, but whose customer is the fan not the sponsor. It would be a massive cultural shift, but doesn’t seem so crazy to me.
Businesses will then send the discoverers free samples, provide literature, and send “advisers” to talk with the discoverers, and you’ll be right back where you started.
Is it a consideration with monetary value? Then it’s advertising, much like how bribing a public official is still (theoretically) illegal even if you don’t do it in cash. If it’s not, then the discoverer has no incentive to act according to the business’s demands.
I’m not understanding why this is a good standard: right now, anyone who sees a billboard or a TV ad has no incentive to act according to the business’s demand, yet you want to ban those. So you think it would be OK to advertise to discoverers, but not to final purchasers.
For the record, I’m not saying this is the perfect model and we should move to it immediately. My only claim is that it isn’t crazy.
I think the fundamental difference between advertising to discoverers vs advertising to consumers is that currently “discoverers” (platforms, content creators, billboard owners, etc.) make money directly from advertisers. Success as a “discoverer” is at least somewhat correlated with income (with more money, platforms can be more successful; content creators can create more compelling content; landowners can buy more billboards).

If that money is coming from advertisers, you are biasing the market to prefer discoverers that can secure the most advertiser funding, which in turn favors advertisers that can spend the most on advertising. This isn’t fundamentally bad, since a compelling product can make a lot of money that can then be spent on advertising, but it also creates anti-consumer incentives (like marketing something as the next best thing when it’s really just good enough not to get returned).

On the other hand, if discoverers are paid directly by consumers, that biases the market to prefer discoverers who identify products that bring the most value to consumers for their money.
I meant to include his name near my mention of PVC flutes. Love watching him play (both in the sense of playing music and in the sense of seeing his creativity and passion on display). Nicolas Bras is always a fun watch!
I’m genuinely surprised it took this long for Apple to do this. Having a full contacts list has long been one of the most valuable pieces of information for ad targeting. It’s why you can stay off Facebook entirely and they still know everything they need to know about you, because enough of your contacts are on their platforms.
Surprised because Apple is the company that made this sort of permission request so granular. Contacts contain some of the most permanent and “graph-building” data you can imagine, but they let this through for 17 years.
One possible reason they didn't address it sooner: Apple was receiving a cut of Google's ad revenue on iPhone that had grown to a 36% share, until Google's antitrust case deemed the arrangement illegal earlier this year. The more data available to Google, the more effective their advertising. /conspiracy
If I’m not mistaken, the Embedded Swift mode aims to make ICU (the 27 MB file for Unicode support) optional (and thus easily removed where it isn’t needed).
I ran into this problem a while back working at a company that was working to distribute video streams with low latency (lower than Low-Latency HLS) to a large number of viewers. Initially a prototype was built on top of AWS with fan-out/copying and it was terrible. This was partially due to inefficiency, but also due to each link being a reliable stream, meaning dropped packets were re-broadcast even though that isn't really useful to live video.
Moving to our own multicast hardware not only greatly improved performance, but also greatly simplified the design of the system. We required specialized expertise, but the overall project was reasonably straightforward. The biggest issue was that now we had a really efficient packet-machine-gun which we could accidentally point at ourselves, or worse, which a malicious attacker could point at a target of their choosing.
This 1-N behavior of multicast is both a benefit and a significant risk. I really think there is opportunity for cloud providers to step in and provide a packaged solution which mitigates the downsides (i.e. makes it very difficult to misconfigure where the packet-machine-gun is pointing). My guess is that this hasn't happened yet because there aren't enough use-cases for this to be a priority (the aforementioned video use case might be better served by a more specialized offering), but exchanges could be a really interesting market for such a product.
It would be pretty efficient to multicast market state in an unreliable way, and have an out-of-band fallback mechanism to “fill in” gaps where packets are dropped (potentially distributed, i.e. asking your neighbors if they got that packet).
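To sketch what that fallback needs on the receiving side: each multicast datagram carries a sequence number, and the receiver just tracks holes. Everything below (names, the in-memory “delivery”) is hypothetical and not from any real exchange feed:

```swift
// Tracks sequence numbers seen on an unreliable multicast feed and
// reports the gaps that need an out-of-band "fill in" request.
struct GapTracker {
    private var nextExpected: UInt64 = 0
    private(set) var missing: Set<UInt64> = []

    // Process one received sequence number; returns any sequence
    // numbers newly detected as missing.
    mutating func receive(_ seq: UInt64) -> [UInt64] {
        if seq < nextExpected {
            // A late or retransmitted packet plugs an existing gap.
            missing.remove(seq)
            return []
        }
        let gap = Array(nextExpected..<seq)
        missing.formUnion(gap)
        nextExpected = seq + 1
        return gap
    }
}

var tracker = GapTracker()
_ = tracker.receive(0)
_ = tracker.receive(1)
let lost = tracker.receive(4)   // packets 2 and 3 were dropped
// `lost` is [2, 3]: ask a neighbor (or a repair server) for them
_ = tracker.receive(2)          // a fill arrives out-of-band
// tracker.missing is now [3]
```

The nice property is that the hot path stays pure unreliable multicast; only the (hopefully rare) repair traffic goes over a reliable channel.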
I have in several capacities over the past few years.
VSCode works pretty well with the sswg extension (powered by sourcekit-lsp). Devcontainers are particularly nice if you are into that sort of thing (I develop in a Linux container on a macOS host). I actually find it easier to experiment with new toolchains (for example, the nightlies) in the Linux container than on my host machine (which requires more manual setup).
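For anyone curious, a minimal devcontainer.json for that setup might look like the following (the image tag and extension id are from memory, so double-check them against Docker Hub and the marketplace):

```json
{
  "image": "swift:6.0",
  "customizations": {
    "vscode": {
      "extensions": ["sswg.swift-lang"]
    }
  }
}
```

Swapping the image tag is also how I try out a different toolchain without touching my host install.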
One thing I’ve noticed in my career is that it is often quite difficult to distinguish “good” technical infrastructure improvements (meaning ones that will actually help a company achieve its goals) from “bad” ones (stuff that is unnecessary, risky or simply not worth the investment). I've seen these decisions made more with the political capital of high-level engineers than by anything data-driven. That’s not to say this isn’t a problem, just maybe more of an art than a science.
A lot of things I've seen engineers do in the name of cleaning up technical debt actively make things worse.
Left to their own devices, a lot of engineers will go to enormous lengths for either marginal improvements or changes that actively hurt. I can understand why leadership is skeptical of these investments, because even for someone extremely technical, if you're not in the weeds of that particular problem every day, it's very hard to know what's worth it.
For the record, borrowed references are only going to be really usable in Swift 6 which isn't released yet.
That said, Swift's implementation of borrowing seems significantly more user-friendly than Rust's. While this is very much an advanced feature, I'd expect it to be actually used in many cases where in Rust folks would resort to working around the borrow checking (via things like indexing into arrays and such). As a result I expect it to be significantly more useful.