Last year they claimed they had $800k in ARR from sponsors alone[1]. Add to that whatever they made by selling Tailwind Plus ($299 individual / $979 teams, one-time payment).

How much money do you really need to maintain a CSS library? I understand everyone wants a really fancy office in an expensive city, lots of employees with very high salaries and generous perks, and so on. But all that is not needed to maintain a CSS library (that is kind of feature complete already).

I think Tailwind was making a lot of money (surely over a million), expanded and got bloated unnecessarily just because they had all that money, and now that their income dropped to what still is a lot of money for a CSS library, they're angry that they have to cut expenses to a more reasonable level.

I guess it worked out for them because now they have even more sponsoring.

And they used the "AI is bad" get-out-of-jail-free card, when a lot of their drop in sales probably comes from shadcn/ui and others, which offer something similar for free.

[1] https://petersuhm.com/posts/2025/


The biggest food-related problem in the US is obesity. Lean meat is very satiating and really helps with keeping weight in check. A McDonald's meal is of course the opposite: you eat more than half your day's calories in a few minutes.

Hardware would catch up. And IPv4 would never go away. If you connect to 1.1.1.1 it would still be good ole IPv4. You would only have in addition the option to connect to 1.1.1.1.1.1.1.2 if the entire chain supports it. And if not, it could still be worked around through software with proxies and NAT.

So... just a less ambitious IPv6 that would still require dual-stack networking setups? The current adoption woes would've happened regardless, unless someone comes up with a genius idea that doesn't require any configuration/code changes.

I disagree. The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply. A less ambitious successor to IPv4 is exactly what we need in order to make any progress.

It’s not _that_ different. Larger address space, more emphasis on multicast for some basic functions. If you understand those functions in IPv4, learning IPv6 is very straightforward. There’s some footguns once you get to enterprise scale deployments but that’s just as true of IPv4.

Lol! IPv4 uses zero multicast (I know, I know, technically there's multicast, but we all just understand broadcast). The parts of an IPv4 address and their meaning have almost no correlation to the parts of an IPv6 address and their meaning. Those are pretty fundamental differences.

IP addresses in both protocols are just a sequence of bits. Combined with a subnet mask (or prefix length, the more modern term for the same concept) they divide into a network portion and a host portion. The former tells you what network the host is on, the latter uniquely identifies the host on that network. This is exactly the same for both protocols.
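
A quick sketch with Python's ipaddress module (the addresses are just documentation examples) shows the split working identically:

    import ipaddress

    # The network/host split works the same way in both protocols.
    v4 = ipaddress.ip_interface("192.0.2.17/24")
    v6 = ipaddress.ip_interface("2001:db8::11/64")

    print(v4.network)  # 192.0.2.0/24  <- the network portion
    print(v4.ip)       # 192.0.2.17    <- the host on that network
    print(v6.network)  # 2001:db8::/64
    print(v6.ip)       # 2001:db8::11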

Or what do you mean by “parts of an IPv4 address and their meaning”?

That multicast on IPv4 isn’t used as much is irrelevant. It functions the same way in both protocols.


IPv4 uses ARP, which is just half-baked multicast. IPv6 is much better designed.

The biggest difference is often overlooked because it's not part of the packet format or anything: IPv4 /32s were not carried over to IPv6. If you owned 1.1.1.1 on IPv4 and you switch to IPv6, you get an entirely different address instead of 1.1.1.1::. Maybe you get the IPv4-mapped ::ffff:1.1.1.1, but that's temporary and isn't divisible into something like 1.1.1.1.2.

And then all the defaults about how basically everything works are different. Home router in v6 mode means no DHCP, no NAT, and hopefully yes firewall. In theory you can make it work a lot like v4, but by default it's not.


multicast has been dead for years

> The current adoption woes are exactly because IPv6 is so different from IPv4. Everyone who tries it out learns the hard way that most of what they know from IPv4 doesn't apply.

In my experience the differences are just an excuse, and however similar you made the protocol to IPv4 the people who wanted an excuse would still manage to find one. Deploying IPv6 is really not hard, you just have to actually try.


Part of the ipv6 ambition was fixing all the suboptimally allocated ipv4 routes. They considered your idea and decided against it for that reason. But had they done it, we would've already been on v6 for years and had plenty of time to build some cleaner routes too.

I think they also wanted to kill NAT and DHCP everywhere, so there's SLAAC by default. But turns out NAT is rather user-friendly in many cases! They even had to bolt on that v6 privacy extension.


What do you mean by suboptimal allocation?

The ipv4 routing table contains many individual /24 subnets that cannot be summarized, causing bloat in the routing tables.

With ipv6, that can be simplified with just a couple of /32 or /48 prefixes per AS.
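
A toy demonstration with Python's ipaddress module (the prefixes are just documentation examples):

    import ipaddress

    # Adjacent prefixes summarize into a single route entry...
    adjacent = [ipaddress.ip_network("192.0.2.0/25"),
                ipaddress.ip_network("192.0.2.128/25")]
    print(list(ipaddress.collapse_addresses(adjacent)))
    # [IPv4Network('192.0.2.0/24')]

    # ...but /24s scattered across unrelated ranges cannot be merged,
    # so each one stays a separate entry in the global table.
    scattered = [ipaddress.ip_network("192.0.2.0/24"),
                 ipaddress.ip_network("198.51.100.0/24")]
    print(list(ipaddress.collapse_addresses(scattered)))
    # [IPv4Network('192.0.2.0/24'), IPv4Network('198.51.100.0/24')]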


This, because a bunch of random /24s were sold off to different ISPs, because of address scarcity.

> I disagree. The current adoption woes are exactly because IPv6 is so different from IPv4.

How is IPv6 "so different" from IPv4 when looking at Layer 3 and above?

(Certainly ARP vs ND is different.)


I didn't say it was different "when looking at Layer 3 and above". I said it's different from IPv4. At the IP layer.

At the IP layer just being different is 90% of the trouble. Being less ambitious would have some upsides and downsides but not seriously change that.

> I said it's different from IPv4. At the IP layer.

In what way? Longer addresses? In what way is it "so different" that people are unable to handle whatever differences you are referring to?

We used to have IPv4, NetBEUI, AppleTalk, IPX all in regular use in the past: and that's just on Ethernet (of various flavours), never mind different Layer 2s. Have network folks become so dim over the last few years that they can't handle a different protocol now?


But that is a bug in history. IPv6 was standardized BEFORE NAT.

"Most of what they know from IPv4" is just NAT.

> A less ambitious successor to IPv4 is exactly what we need in order to make any progress

but we’re already making very good progress with IPv6? Global traffic to Google is >50% IPv6 already.


Current statistics are that a bit over 70% of websites are IPv4 only. A bit under 30% allow IPv6. IPv6 only websites are a rounding error.

Therefore if I'm on an IPv6 phone, odds are very good that my traffic winds up going over IPv4 internet at some point.

We're 30 years into the transition. We are still decades away from it being viable for servers to run IPv6 first. You pretty much have to do IPv4 on a server. IPv6 is an afterthought.


> We are still decades away from it being viable for servers to run IPv6 first.

Just put Cloudflare in front of it. You don’t need to use IPv4 on servers AT ALL. Only on the edge. You can easily run IPv6-only internally. It’s definitely not an afterthought for any new deployments. In fact there’s even a US gov’t mandate to go IPv6-first.

It’s the eyeballs that need IPv4. It’s a complete non-issue for servers.


"Just put Cloudflare in front of it"

Why do I have to get some third party involved??

Listen, you can be assured that the geek in me wants to master IPv6 and run it on my home network and feel clever because I figured it out, but there's another side of me that wants my networking stuff to just work!


If you don’t want to put Cloudflare in front of it, you can dual-stack the edge and run your own NAT46 gateway, while still keeping the internal network v6 only.

You have a point. But you still need DNS to an IPv4 address. And the fact that about 70% of websites are IPv4 only means that if you're setting up a new website, odds are good that you won't do IPv6 in the first pass.

Cloudflare proxy automatically creates A and AAAA records. And you can’t even disable AAAA ones, except in the Enterprise plan. So if you use Cloudflare, your website simply is going to be accessible over both protocols, irrespective of the one you actually choose. Unless you’re on Enterprise and go out of your way to disable it.

Pretty sure NAT was standardized before IPv6.

NAT is RFC 1631.

IPv6 is RFC 1883.

Admitted, that was very basic NAT.


RFC 1631 is a memo, not a standard.

Actually, my bad. NAT was NEVER standardized. Not only was NAT never standardized, it has never even been on the standards track. RFC 3022 is also just "Informational".

Plus, RFC 1918 doesn’t even mention NAT

So yes, NAT is a bug in history that has no right to exist. The people who invented it clearly never stopped to think about whether they should, so here we are 30 years later.


That doesn't really mean much. Basic NAT wasn't eligible to be on the standards track as it isn't a protocol. Same reason firewall RFCs are informational or BCP.

The protocols involving NAT are what end up on the standards track like FTP extensions for NAT (RFC 2428), STUN (RFC 3489), etc.


If only the inventors of NAT had patented it and then refused to license it!

Sort of. I think people would understand

201.20.188.24.6

And most of what they know about how it works clicks in their mind. It just has an extra octet.

I also think hardware would have been upgraded faster.


It would've been even easier and lasted longer to use two bytes of hex at the start. That would've expanded the Internet to 65536x its current space.

Something like aaff:a.b.c.d

Leaving off the prefix: could just mean strictly IPv4.


In IPv6, this is spelled ::ffff:a.b.c.d

It didn't speed up adoption, and people back then tried most of the other solutions people are going to suggest for IPv4+. Want the IPv4 address as the network address instead? That's 6to4, 2002:a.b.c.d::/48 - many ISPs didn't deploy that either
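
Both mechanisms are easy to poke at with Python's ipaddress module if you want to see them in action (192.0.2.1 is just a documentation address):

    import ipaddress

    # IPv4-mapped IPv6 address (the ::ffff:0:0/96 range)
    m = ipaddress.ip_address("::ffff:192.0.2.1")
    print(m.ipv4_mapped)  # 192.0.2.1

    # 6to4 embeds the IPv4 address in bits 16-48 of a 2002::/16 prefix
    # (192.0.2.1 == c000:0201 in hex)
    s = ipaddress.ip_address("2002:c000:201::1")
    print(s.sixtofour)    # 192.0.2.1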


I think making the extra hex at the end is better; that way it's like we are subdividing our existing networks without moving them around.

Think of it like phone numbers. For decades people have accepted gradual phone number prefix additions. I remember in rural Ireland my parents got an extra digit in the late 70s, two more in the 90s, and it was conceptually easy. It didn't change how phones work, turn your phone into a party line or introduce letters or special characters into the rotary dial, or allow you to skip consecutive similar digits.

For people who deal with IP addresses, the switch from IPv4 to IPv6 means moving from four short decimal fields (1.2.3.4) to this:

   2001:0db8:0000:0000:0008:0800:200c:417a
   2001:db8:0:0:8:800:200c:417a
   2001:db8::8:800:200c:417a
Yes, the ipv6 examples are all the same address. This is horrible. Worse than MAC addresses because it doesn't even follow a standard length and has fancy (read: complex) rules for shortening.
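
You don't have to take my word for it; Python's ipaddress module will happily confirm that all three spellings collapse to one address:

    import ipaddress

    forms = [
        "2001:0db8:0000:0000:0008:0800:200c:417a",
        "2001:db8:0:0:8:800:200c:417a",
        "2001:db8::8:800:200c:417a",
    ]
    # All three parse to the same 128-bit address.
    print({ipaddress.ip_address(f) for f in forms})
    # {IPv6Address('2001:db8::8:800:200c:417a')}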

Plus switching completely to ipv6 overnight means throwing away all your current knowledge of how to secure your home network. For lazy people, ipv4 NAT "accidentally" provides firewall-like features because none of your home ipv4 addresses are public. People are immediately afraid of ipv6 in the home, and now they need to know about firewalls. With ipv4, firewalls were simple enough: "My network starts with 192.168, the Internet doesn't". You need to unlearn NAT and port forwarding and realise that with already-routable ipv6 addresses you just need a firewall with default deny, and then add rules that "unlock" traffic on specific ports to specific addresses. Of course more complexity gets in the way... devices use "Privacy Extensions" and change their addresses, so to make firewall rules work long-term you should key them on the device's MAC address. Christ on a bike.

I totally see why people open this bag of crazy shit and say to themselves "maybe next time I buy a new router I'll do this, but right now I have a home with 4 phones, 3 TVs, 2 consoles, security cameras, and some god damn kitchen appliances that want to talk to home connect or something". Personally, I try to avoid fucking with the network as much as possible to avoid the wrath of my wife (her voice "Why are you breaking shit for ideological reasons? What was broken? What new amazing thing can I do after this?").


> Yes, the ipv6 examples are all the _same address_. This is _horrible_.

Try `ping 16909060` some day :-)
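
(16909060 is just 1.2.3.4 packed into a single integer; a quick Python sanity check:)

    import socket
    import struct

    # 16909060 == 0x01020304, i.e. the four octets 1.2.3.4
    print(socket.inet_ntoa(struct.pack("!I", 16909060)))  # 1.2.3.4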


I used it to get around proxies back in the 2000s

What is confusing about that? That's like complaining that you can write an IPv4 address as 001.002.003.004 or 1.2.3.4. Even the :: isn't much different from being able to write 127.0.0.1 as 127.1 (except it now becomes explicit that you've elided the zeroes).

While it's possible to write an ipv4 address in a bunch of different ways (it's just a number, right?) nobody does it because ipv4 standard notation is easy to remember. Ipv6 is not, and none of these attempts to simplify it really work because they change the "format". I understand it and you understand it, but the point here is that it's unfriendly to anyone who isn't familiar with it.

These are all the same address too: 1.2.3.4, 16909060, 0x1020304, 0100401404, 1.131844, 1.0x20304, 1.0401404, 1.2.772, 1.2.0x304, 1.2.01404, 1.2.3.0x4, 1.2.0x3.4, 1.2.0x3.0x4, 1.0x2.772, 1.0x2.0x304, 1.0x2.01404, 1.0x2.3.4, 1.0x2.3.0x4, 1.0x2.0x3.4, 1.0x2.0x3.0x4, 0x1.131844, 0x1.0x20304, 0x1.0401404, 0x1.2.772, 0x1.2.0x304, 0x1.2.01404, 0x1.2.3.4, 0x1.2.3.0x4, 0x1.2.0x3.4, 0x1.2.0x3.0x4, 0x1.0x2.772, 0x1.0x2.0x304, 0x1.0x2.01404, 0x1.0x2.3.4, 0x1.0x2.3.0x4, 0x1.0x2.0x3.4, 0x1.0x2.0x3.0x4

v6 has optional leading zeros and ":: splits the address in two where it appears". v4 has field merging, three different number bases, and it has optional leading zeros too but they turn the field into octal!


"Why are you breaking shit for ideological reasons? What was broken? What new amazing thing can I do after this?"

LOL. Yup. What can I do after this? The answer is basically "nothing really", or "maybe go find some other internet connection that also has IPv6 and directly connect to one of my computers inside the network". Which would have been firewalled, I'd hope, so I'd, what, have to punch a hole in the firewall so my random internet connection's IPv6 can have access to the box? How does that work? I could have just VPN'd in from the IPv4 world.

Seriously though, how do I "cherry-pick hole punch" random hotel internet connections? It's moot anyway, because no hotel on earth is dishing out publicly accessible IPv6 addresses to guests...


The main thing is keeping current addresses, not having both an ipv4 and ipv6 address.

Just like for an apartment you append something like 5B. And for a house you don't need that.


It was doomed the moment you had to maintain two separate stacks, each with its own address, firewall rules and so on.

It should have been ipv4 with extra optional bits, so you could have the same rules and everything for both stacks.

I turn it off because it's a risk having either stack misconfigured.

IPv6 should've been a superset of IPv4, as in addresses are shared, not that you have a separate IPv4 and IPv6 address for your server.


That’s why my home network is IPv6 only. NAT64 and DNS64 and 464XLAT work very well, and you only need to configure IPv4 once: in your router, where you need special configuration anyways.

For me, I don't even need to set up NAT64. My ISP provides it for me, for free.

What do you do about IoT devices?

Why would that be a desirable quality? Wifi devices (using Matter or not) live on the same network as my PC - meaning a compromised lightbulb (or one that hasn't been updated) can be used to infiltrate and attack my home computers.

Thread + Matter, despite using a different radio, suffers from the same issue: since the border router is on the Wifi network, a smart bulb using Thread can theoretically access my PC.

Yes, I'm sure there are ways to fix this, but why have the problem in the first place?

Zigbee is an entirely incompatible networking standard, and doesn't have this problem.


Another day, another Godwin's law of networking.

>It was doomed the moment you had to maintain two separate stacks

Pray, tell me, how are we supposed to extend IPv4 with another {insert a number here} bits without creating a new protocol (which necessitates running two stacks)?

Suppose that you have an old computer that understands only 32-bit addresses -- good ol' IPv4. Let's name it 192.168.10.10.

It then receives a packet from another computer with hypothetical "IPv4+" support, 172.12.10.98.12.4.24.31... wait a minute, it can't, because your old computer understands only 32-bit addresses!

What if we really forced it to receive the packet anyway? It will see that the packet is from 172.12.10.98, because, once again, it understands only 32-bit addresses.

It then sends back the reply to... you guessed it, 172.12.10.98. Not 172.12.10.98.12.4.24.31.

Yeah, 172.12.10.98.12.4.24.31 will never get its reply back.

Do you see now why any "IPv4 with extra octets" proposal is doomed from the start?
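
The point is visible right in the header layout. Here's a rough sketch of an RFC 791 header in Python (checksum left at zero for brevity):

    import struct

    # The IPv4 header has exactly 32 bits for the source address and 32
    # for the destination; a legacy stack has nowhere to look for extra
    # octets, so "172.12.10.98.12.4.24.31" simply cannot be expressed.
    header = struct.pack(
        "!BBHHHBBH4s4s",
        (4 << 4) | 5,              # version=4, IHL=5 (20-byte header)
        0,                         # DSCP/ECN
        20,                        # total length (no payload here)
        0,                         # identification
        0,                         # flags + fragment offset
        64,                        # TTL
        6,                         # protocol = TCP
        0,                         # checksum (left at zero in this sketch)
        bytes([172, 12, 10, 98]),  # source: four octets, no more
        bytes([192, 168, 10, 10]), # destination: four octets, no more
    )
    print(len(header))  # 20 bytes, fixed layout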


It wouldn't be able to receive it. That simple. Which is not a problem, any server would still have an old ipv4 address (172.12.10.98 from your example), like they currently do and probably will for decades.

Devil's advocate: there could be an extension for ipv4 stacks. Ipv4 stacks would need to be modified to include the extension in any reply to a packet received with one. It would also need a DNS modification to append the extension if it is in the record. Ipv6 stacks would internally reconstruct the packet as if it were ipv6.

It would be easy to make such an extension, but you're going to hit the same problem v6 did: no v4 stacks use your extension.

How will you fix that? By gradually reinventing v6, one constraint at a time. You're trying to extend v4, so you can't avoid hitting all of the same limits v6 did when it tried to do the same thing. In the end you'll produce something that's exactly as hard to deploy as 6to4 is, but v6 already did 6to4 so you achieved nothing.


Having just an optional field in the ipv4 header with extra address bits would add only some 100 lines of code to each stack. It would mean you can have one stack that handles both. Make addresses where the additional bits are all 0 special, meaning the field is not there at all; these addresses could reach ipv4-only addresses and could be reached from them. When you really want to make sure those devices aren't parsing ipv4+ packets, change the checksum code for all packets that contain the optional field; that would mean all ipv4-only devices would ignore ipv4+ packets. Alternatively, you could set the version field to 5 for all packets with the optional address bits.

This is stuff that could be implemented in any ipv4 stack in some days of work.

IPv6 is overengineered; that's the reason why it's not adopted after 30 years.


You clearly do not understand networking. Or else you wouldn't make such a statement:

>This is stuff that could be implemented in any ipv4 stack in some days of work.

The sysadmins across the world, who had to deal with decades-old, never-updated devices facepalmed in unison.

At least the other comment agreed that "IPv4+" hosts will never be able to talk to IPv4 hosts.

>IPv6 is overengineered, thats the reason why it's not adopted after 30 years.

It is already adopted in many countries. Don't blame the protocol for your countrymen's incompetence.


And 2 listeners

How much energy did evolution "spend" to get us here?

I agree human brains are crazy efficient though.


If you make it more efficient, then you train it for longer or make it larger. You're not going to just idle your GPUs.

And yes, of course it's a race; all else being equal, nobody's going to use your model if someone else has a better model.


Simulations in general are pretty flawed, and AIs will usually find ways to "cheat" the simulation.

It's a very useful tool of course, but not as good as the software situation.


Movies are mastered for a dark room. It's not going to look good with accurate settings if you are in a lit room.

Having said that, there are a lot of bad HDR masters.


Never had an issue with Stranger Things. Maybe you're using the internal speakers?

I watch YouTube with internal TV speakers and I understand everything, even muddled accents. I cannot understand a single TV show or movie with the same speakers. Something tells me it's about the source material, not the device.

Well of course, YouTube is someone sitting in front of the camera with no background noise and speaking calmly.

In a movie the characters may be far away (so it needs to sound like that, not like a podcast), running, exhausted, with a plethora of background noises and so on.


I can suspend my disbelief for the sake of clearly hearing a character who has something important to say.

In real life, I can understand exhausted people or dialog in a kitchen full of background noise.

If we can't do the same in a movie, the sound is just badly mixed. It is not the story setup and it is not "realistic".


> In real life, I can understand exhausted people or dialog in a kitchen full of background noise.

Because in real life you don't listen through an internal TV speaker, duh.


These are hard to impossible to understand with basically any normal TV setup.

That being said, people listening to TV through the TV's own speakers is 100% predictable. If we can't understand, the mix is not "realistic", it is "badly done".


That would be true, except even in calm scenes in movies it's an issue. Unless I turn the volume high enough, in which case music and sfx become neighbor-waking loud. To be clear: I'm not talking about scenes where characters speak over an explosion. The overall mix does not allow having the same volume for all scenes of the movie, pick your poison: wake the neighbors or don't understand dialogues.

Somehow youtube videos don't have this issue. Go figure /s


It's the same idea, a narrated youtube video is meant to have the same volume throughout, while a movie is meant to have quiet and loud parts.

The problem, as you say, is that if you don't want to have loud parts, you lower the volume so that loud is not loud anymore, and then the quiet but audible parts become inaudibly quiet.

I consider this to be a separate issue from the lack of clarity of internal speakers, and a bit harder to solve because it stems from the paper-thin walls common in the US and other places.

You can usually use dynamic range compression to fix this if you can't play the movie at the volume level it's meant to be played at.


That the entire problem is intentional does not make it any less of a defect.

Intentionally making audio uncomfortable is not a sign of art or skill, it's a sign of delivering a bad product.


The audio is not uncomfortable, 75 dB is a reasonable calibration level for a home setup (your average scene will be much quieter).

75 dB in real life is your typical restaurant, office, etc.


Did you mean to reply to a different comment? What does calibration or 75 dB have to do with anything I said?

The most common experience with the poorly mixed content that several in this thread are complaining about is: the volume setting necessary for intelligible dialogue results in uncomfortably loud audio in other parts.

This is a defect of the content, not of the system it's playing on.


A YouTube video is likely a single track of audio or a very minimal amount. A movie mixed for Dolby Atmos is designed for multiple speakers. Now, they will create compromised mixes for something like a stereo setup, and a good set of bookshelf speakers will be able to create a phantom center channel. However, having a dedicated center channel speaker will do a much better job. And using the TV's built in speakers will do a very poor job. Professional mixing is a different beast than most YouTube videos, and accordingly, the sound is mixed quite different.

Yup, I definitely do agree those are wildly different beasts. But the end result is that the professional mixing is less enjoyable than amateur-ish youtube mixing. Which is a shame, really. Mixing is a craft that is getting ruined (imho) by the push toward theatrical mixes (where having building-shaking sfx is not an issue) or atmos mixes (leaving no budget/time for plain stereo mixes).

The crux of the issue IMHO is the theatrical mixes. Yes I can tune the TV volume way up and hear the dialogue pretty well. In exchange, any music or sfx is guaranteed to wake the neighbors (I live in a flat, so neighbors are on the other side of the wall/floor/ceiling).


As someone with a dedicated center speaker, people doing audio mixing do not effectively use it. I even have it manually boosted. Sometimes it's 10% better than without one, but nowhere near enough to make a real difference.

YouTube very likely has only a 2.0 stereo mix, TV shows and movies are mostly multichannel. Something tells me it's about the source material being a poor fit for your setup.

I agree. There are absolutely tons of movies and TV series with indecipherable dialogue, but Stranger Things isn't among them.

> Maybe you're using the internal speakers?

Which is just another drama that should not be on consumers' shoulders.

Every time I visit friends with a newer TV than mine I am floored by how bad their speakers are. Even the same brand and price range. Plus the "AI sound" settings (often on by default) are really bad.

I'd love to swap my old TV as it shows its age, but spending a lot of money on a new one that can't play a show correctly is ridiculous.


Just buy a decent external surround sound system; it has nothing to do with the TV and will last a long, long time.

I really don't want to install multiple new devices. I don't care about the cost; the inconvenience and hassle is a PITA. Plus then you have to fiddle with multiple volume controls instead of one to make it work for your space.

No thank you. We should make the default work well, and if people want a sound optimized experience that requires 6x the pieces of equipment let those who want to do the extra work do what they need to for the small change in audio quality.

Without that change in defaults more and more people will switch to alternatives, like TikTok and YouTube, that bother to get understandability as the default rather than as something requiring hours of work and shopping choices.


> Plus then you had to fiddle with multiple volume controls instead of one to make it work for your space.

Most AVRs come with an automatic calibration option. Though there are cheap 5.1 options on the market that will get results multiple times better than your flatscreen can produce.

> We should make the default work well

Yep, movies should have properly mastered stereo mixes not just dumb downmixes from surround that will be muddy, muffled and with awful variations in loudness.


Ok sure.

However getting a better sound system is a current solution to the problem that doesn't require some broad systemic change that may or may not ever happen.


A far better solution that I take: not consume the media at all. Not only is there an abundance of media these days, but there are many many other better ways to spend time, such as writing comments on Hacker News that very few people will ever see.

I have spent about half an hour investigating sound bars as a result of these discussions, and that's a loss of life that I can never get back, and I regret spending that much time on the problem.


It feels like you are trying to turn this into an ideological debate when all I am saying is "buy some better speakers if you care about audio".

There are a couple of models with good sound. I got a Philips OLED910 a short while ago and that sound system surprised me.

I turned it off though and use an external Atmos receiver and speakers.


I am floored that people really expect integrated TV speakers to be good.

Couldn't they be miles better if we allowed screens to be thicker than a few millimeters?

I believe one could do some fun stuff with waveguides and beam steering behind the screen if we had 2-inch-thick screens. Unfortunately decent audio is harder to market and showcase in a Best Buy than a "vivid" screen.


Anyone who cares about audio will have dedicated speakers, so it barely even makes sense to make TV speakers good.

I'm a bit on the fence about this.

If someone buys a TV (y'know, a device that's supposed to reproduce sound and moving pictures), it should at least be decent at both. But if people want a high-end 5.1/7.1/whatever.1 sound then by all means they should be able to upgrade.

My mum? She doesn't want or need that, nor does she realistically have the space to have a high-end home-cinema entertainment setup (much less a dedicated room for it).

It's just a TV in her living room surrounded by cat toys and some furniture.

So, if she buys a nearly €1000 TV (she called it a "stupid star trek TV") it should at least be decent at everything it's meant to do out of the box (although at that price tag you'd reasonably expect more than just decent). She shouldn't need to constantly adjust sound volume or settings, or spend another thousand on equipment and refurbishment to get decent sound.

In contrast, they say the old TV that's now at nan's house has much better sound (even if the screen is smaller) and are thinking of swapping the TVs since nan moved back in with my mum.


Good speakers aren't really compatible with the flatness of modern TVs. You can certainly make one with good speakers, but it would look weird mounted on the wall. Buying external speakers seems like a decent tradeoff for that.

Sure, it would be nice if TVs could have good sound out of the box if that meant no other tradeoffs. But if it means making the TV thicker (and, as other comments have pointed out, it probably would) then I'd be against it, since I never use the built-in TV speaker and frankly don't think anyone should.

Honestly I think high-end TVs should just not include speakers at all, similar to how high-end speakers don't contain built-in amplifiers. Then you could spend the money saved on whatever speakers you want.

> She shouldn't need to constantly adjust sound volume or settings, or spend another thousand on equipment and refurbishment to access to decent sound.

How about €100 on a soundbar?


Everyone cares about hearing the words. Those who care about hearing nuance and buy extra sound equipment are a distinct and much, much smaller set of viewers. Yet only that smaller set seems to be able to get decent results.

Nope. That's a misconception. Due to space constraints I don't have dedicated speakers for our living room TV. And I don't think I'm the only one.

And I do own two proper dedicated speakers + amps setups. I also know how to use REW and Sigma Studio. So I guess I qualify regarding "cares".

Sadly I lack time to build a third set of cabinets to the constraints of our living room.


A sound bar, even though fairly bad, is still a million times better than internal speakers, and you'd need a very exotic setup to be unable to fit one.

I'm surprised given you care about audio that you can even tolerate internal speakers. I'd just not use that TV and watch wherever you have better audio.


I don’t expect them to be “good” but I expect to be able to make out the basics.

Your expectations are too high, a 30mm thick screen will never produce good audio.

Various sections of my screen (LG C series) are significantly thicker than 30mm.

Also: this isn't a speaker problem, this is a content problem. I watched The Princess Bride last week on the TV and didn't require captions, but I'm watching Pluribus on Netflix and I'm finding it borderline impossible to keep up without them.


The content is mixed with decent audio systems in mind.

When you listen to that content on a good system you don't have these issues.

Nolan films are a perfect example.


Imagine if we said “hey your audio is only usable on iPhone if you use this specific adapter and high end earphones”. Somehow the music industry has managed to figure out a way to get stuff to sound good on high end hardware, and passable on even the shittiest speakers and earbuds imaginable, but asking Hollywood blockbusters to make the dialog literally audible on the most popular device format is too much?

Why do you think high end audio equipment exists?

You can still watch these movies; it just sounds bad on low-quality sound systems.


In a lot of bass music the most important parts are simply inaudible on an iPhone speaker.

> Pluribus on Netflix

on AppleTV/TV+


Apologies, "Netflix" has become like hoover, Google, or Kleenex: a generic name for the product.

That's definitely an American way of speaking.

Americans don't call a vacuum cleaner a Hoover, do they? The British definitely do.

I don't think so, but I typically hear them use brands like Kleenex, Band-aid, etc instead of tissue, bandage.

I'm a bit confused why you're surprised to see American terminology on a site with a predominantly American user base, or why it's worth commenting on.

That said, I’m Irish and live in the UK. You’ve never heard people say “I’ll hoover that”, or “you can google that”? Kleenex and band aid are definitely American ones but given the audience I thought it was apt


Most people do, I reckon.

It's why captions have become so popular.

This seems like very low-hanging fruit. How is the core loop not already hyper-optimized?

I'd have expected it to be hand-rolled assembly for the major ISAs, with a C fallback for less common ones.

How much energy has been wasted worldwide because of a relatively unoptimized interpreter?


Quite to the contrary, I'd say this update is evidence of the inner loop being hyperoptimized!

MSVC's support for musttail is hot off the press:

> The [[msvc::musttail]] attribute, introduced in MSVC Build Tools version 14.50, is an experimental x64-only Microsoft-specific attribute that enforces tail-call optimization. [1]

MSVC Build Tools version 14.50 was released last month, and it only took a few weeks for the CPython crew to turn that around into a performance improvement.

[1] https://learn.microsoft.com/en-us/cpp/cpp/attributes?view=ms...


Python’s goal is never really to be fast. If that were its goal, it would’ve had a JIT long ago instead of toying with optimizing the interpreter. Guido prioritized code simplicity over speed. A lot of speed improvements including the JIT (PEP 744 – JIT Compilation) came about after he stepped down.


I doubt it would have had a JIT a long time ago. Thing is, people have been making JIT compilers for Python for a long time now, but the semantics of the language are such that it's often hard to benefit from one, because most of the time isn't spent in the bytecode interpreter itself, it's spent dispatching things. People like comparing Python to JavaScript, but Python is much more flexible: all "primitive" types are objects and can be subclassed, for example, and even basic machinery like attribute lookups has a bunch of customization hooks.

So the problem is basically that a simple JIT is not beneficial for Python. So you have to invest a lot of time and effort to get a few percent faster on a typical workload. Or you have to tighten up the language and/or break the C ABI, but then you break many existing popular libraries.


Those people usually overlook the history of Smalltalk, Self and Common Lisp, which are just as dynamic if not more so, due to image-based development, debugging and compilation on the fly, where anything can be changed at any time.

For all its dynamism, Python doesn't have anything close to Smalltalk's becomes:.

I would say that by now what is holding Python back is the C ABI and the culture that considers C code as Python.


> People like comparing Python to JavaScript, but Python is much more flexible - all "primitive" types are objects can be subclassed for example, and even basic machinery like attribute lookups have a bunch of customization hooks.

Most of the time, people don't use any of these customisations, do they?

So you'd need machinery that makes the common path go fast, but can fall back onto the customised path, if necessary?


Descriptors underpin some common language features like method calls (that's how `self` gets bound), properties etc. You can still do it by special casing all those, and making sure that the way you implement all those primitives works exactly as if it used descriptors, sure. But at this point it's not exactly a simple JIT anymore.

Should probably mention that Guido ended up on the team working on a pretty credible JIT effort, though Microsoft subsequently threw a wrench in it with layoffs. Not sure of the status now.


If performance was a goal... hell if it was even a consideration then the language would be very different.


You are mixing up eras.

For comparison: when JavaScript was first designed, performance wasn't a goal. Later on, people who had performance as a goal worked on JavaScript implementations. Thanks to heroic efforts, nowadays JavaScript is one of the languages with decently fast implementations around. The base design of the language hasn't changed much (though how people use it might have changed a bit).

Python could do something similar.


He was part of the driving effort after joining Microsoft though.

Python is full of decisions like this, or rather full of "if you just did some more work it'd be 10x better".


Software has gotten so slow we've forgotten how fast computers are


This is (a) wildly beyond expectations for open source, (b) a massive pain to maintain, and (c) not even the biggest timewaster of Python, which is the packaging "system".


> not even the biggest timewaster of python, which is the packaging "system".

For frequent, short-running scripts: start-up time! Every import has to scan a billion different directories for where the module might live, even for standard modules included with the interpreter.
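
You can watch this happen yourself; `python -X importtime` prints a per-module timing tree, and the directories every import has to consider are plainly visible:

    import sys

    # Every "import foo" walks these directories in order, stat()ing
    # candidate file names in each one; for a short-lived script that
    # cost dominates. Try `python -X importtime -c "import json"` to
    # see the per-module breakdown.
    for p in sys.path:
        print(p)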


In the near future we will use lazy imports :) https://peps.python.org/pep-0810/


This can't come soon enough. Python is great for CLIs until you build something complex and a simple --help takes seconds. It's not something easily worked around without making your code very ugly.


It's not that hard to handle --help and --version separately before importing anything.
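
A rough sketch of that pattern (the module names are only illustrative):

    import sys

    def main() -> None:
        # Answer the cheap questions before paying for any heavy imports.
        if "--help" in sys.argv[1:] or "--version" in sys.argv[1:]:
            print("mytool 1.0\nusage: mytool [--help] [--version] FILE...")
            return

        # Only now pull in the expensive dependencies; json is just a
        # stand-in for whatever heavy modules the real tool needs.
        import json
        print(json.dumps({"args": sys.argv[1:]}))

    if __name__ == "__main__":
        main()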

You could, but it doesn't really seem all that useful? I mean, when are you ever going to run this in a hot loop?

> [...] not even the biggest timewaster of python, which is the packaging "system".

The new `uv` is making good progress there.


I remember a former colleague, (may he RIP) ported a similar optimization to our fork of Python 2.5, circa 2007. We were running Linux on PPC and it gave us that similar 10-15% boost at the time.

If you want fast just use pypy and forget about cpython.


Probably because anyone concerned with performance wasn’t running workloads on Windows to begin with.


Plenty of DAWs, image editing and video editing being done on Windows.

They weren't using Python, anyway.


Games and Proton.

Apparently people that care about performance do run Windows.


None of those games, or a very small number of them, are written in Python. None of the ones that need to be performant, for sure.


Games aren't written in Python as a whole, but Python is used as a scripting language. It's definitely less popular now than it used to be, mostly thanks to Lua, but it still happens.


How many games use python for scripting and stay up to date with the version of python they're embedding? My guess is zero.

Doesn't seem all that relevant? New games will benefit from faster Python.

Indeed, but the question was about performance in general.


Games are made for windows because that's where the device drivers have historically been. Any other viewpoint is ignoring reality.


Sure, keep believing that while loading Proton.


Gladly.

> Any other viewpoint is ignoring reality.

Eh, what about users? Games are made for windows, because that's where users (= players) are?

That's even more true for mobile and console games.


But not python.


Sure, but that wasn't the question.

