Hacker News | thefilmore's comments

I came across this after listening to several 320kbps MP3 files and found that they sounded noticeably worse than 256kbps AAC versions. Support for AAC is widespread now and it should be preferred over MP3 [1].

[1] https://www.iis.fraunhofer.de/en/ff/amm/consumer-electronics...


MP3 does have an advantage though, being widespread and royalty-free since the Fraunhofer patents expired.


AAC-LC's patents have also expired; the codec was introduced in 1997 (26 years ago).


Opus is still better than AAC and MP3.


I don't disagree. But almost any electronic device released in the last 10 years can play mp3. Probably even your electric toothbrush.


Even if you set aside universality as a desirable property, according to the official site any Opus efficiency advantage disappears at 128 kbps. https://opus-codec.org/comparison/


That graph does not describe codec efficiency. It's explaining how the various codecs preserve the frequencies of the raw signal at various bitrates:

Some codecs only work at low bitrates and preserve only narrow bands of frequencies. Some codecs work only at mid bitrates and preserve wider bands. Some codecs only work at high bitrates and preserve only the widest bands; you can't get them to drop more frequencies for better savings even if you wanted to. Opus works at all bitrates and gradually and dynamically removes frequency bands as the bitrate drops. Vorbis preserves more or less the same frequencies as Opus at the same bitrate, but loses frequencies a bit faster as the bitrate drops. MP3 drops even faster. AAC works very similarly to Opus, but can't output low-bitrate streams.

To compare codec efficiency you would need to do subjective comparisons to see how often each codec achieves transparency (when people can no longer tell if the sound has been compressed or not) at a given bitrate with various types of sounds. This has also been measured, and it's agreed that Opus is basically transparent at 128 kbps. MP3 needs twice as many bits to get the same quality, so Opus is twice as efficient.


> That graph does not describe codec efficiency.

My friend, the vertical axis is literally labeled "Quality", and the horizontal axis "Bitrate". The caption is "The figure below illustrates the quality of various codecs as a function of the bitrate." Quality at a range of bitrates is how codec efficiency is measured.

I'd never heard the claim that "Opus is basically transparent at 128 kbps", but I did find https://wiki.hydrogenaud.io/index.php?title=Opus, which agrees with you: "Very close to transparency". But it also notes, "Most modern codecs competitive (AAC-LC, Vorbis, MP3)", which lines up with the chart.

Early Opus vs. MP3 tests were done with LAME, which is awful. This may be why you're under the impression that MP3 needs twice as many bits to get the same quality.


>the vertical axis is literally labeled "Quality"

And the labels on that axis make it perfectly clear what they mean by "quality". It's how much of the spectrum they preserve at that bitrate. If "quality" referred to subjective quality there's no reason why the chart should stop at 128 kbps. It stops there because the fullband codecs don't brickwall the signal past that point. Instead they use psychoacoustics to compress it.

>Early Opus vs. MP3 tests were done with LAME, which is awful.

That's funny, because other commenters say LAME is currently the benchmark for MP3 encoders.

Here: https://wiki.hydrogenaud.io/index.php?title=Transparency it states that MP3 is considered artifact-free at 192 kbps, although here: https://www.head-fi.org/threads/when-is-mp3-transparent-an-a... someone did an ABX test and could still hear differences more than half the time at 256 kbps. Even taking the lower number, MP3 is still 50% less efficient than Opus.


Which is why I use Opus at 128 kbps on my iPhone, where storage is a limited, fixed commodity and my listening environment is rarely optimized. Everywhere else is FLAC or MP3-256. My ears aren’t golden enough anymore to justify 320 kbps MP3.


This is very timely. I was profiling some code a few days ago and noticed streams were much slower than expected.


We have at least ±5 more changes on the way to improve streams significantly (Node streams, then web streams), reduce the size of each stream and hot-path common cases.


That’s great news, thank you!

Offtopic nitpick: I think you meant to use ~ as the symbol for approximately. I noticed it in the release notes as well. I could be mistaken, though.


Will these make it into v20, or will we need to wait for v21? LTS isn't far away.


That's fantastic!


I can think of at least four reasons, mostly security concerns, not to use a service like this:

- Exposing a potentially private IP to an external service

- If testing local IPs, adds a requirement for an internet connection

- Must trust that it will always resolve to the actual IP not another one

- Requires your service to accept a hostname that it likely shouldn't


> Exposing a potentially private IP to an external service

I have to ask why knowing an IP address would be an issue.

Surely it's not the key point in any reasonably possible exploit.


If you're using this for testing, it's likely that you haven't fully locked down your server yet. It's not an exploit in itself but it potentially makes the server a target. No reason to do that until and unless it's required.


But this doesn't make the IP accessible, just "known".

It's still a private IP.


Why not OpenTransform?


Why OpenTransform?


What?


Before going to Rust, run `node --cpu-prof main.js`, and open the profile in chrome://inspect -> DevTools for Node -> Performance tab. It will tell you what's taking time and it may not be what you expect.


I maintain updated lists here:

https://manp.gs/mac/1/

https://manp.gs/mac/8/


That's awesome. Bookmarked for later!


Is ActivityPub not scalable? I keep seeing criticisms of it on this front.


The way that instances connect to each other is O(n^2), which is not to say it's exactly n^2 but the curve is of that shape. This isn't strictly a problem with ActivityPub itself so much as with how it's used, especially in Mastodon. It's why you keep hearing about super-long queues chewing up RAM, and causing long delays - hours or even days - before content appears. It's why Mastodon instances tend to be clusters of larger machines at a scale that you would think could be served by a single smaller machine.

The usual solution for this set of problems is connection and/or request aggregation via a proxy layer. Relays (in fediverse-speak) do exist, but seem very lightly used and do little or no caching because ActivityPub relies heavily on POST (which IMO was probably a bad decision). Any caching that's done has to be ActivityPub-specific. I wrote about this a while ago and almost certainly got some stuff wrong, but it might be interesting nonetheless. At least it drew some interesting comments at the time.

https://gist.github.com/jdarcy/60107fe4e653819138396257df302...


A certain degree of overhead just has to be there, a distributed system needs to exchange information between nodes after all. And both the protocol and many implementations leave room for optimization at least. I think tackling that is one of the long-term challenges for the ecosystem, but I wouldn't say it is categorically "not scalable".

EDIT: As an example, right now in Mastodon federation traffic is handled by sidekiq queues running Ruby. Someone makes a post -> tasks to send that information to all their followers' instances get queued, which then send individual HTTP requests to all those instances. This to me feels like something that could be moved into some smaller and more efficiently written component that batches requests, makes sure to reuse connections, ... at least for paths that see a lot of traffic.


The POSIX man pages tend to be much more approachable: https://manp.gs/posix/1/find


It's labeled "Government-funded Media".


As with many of Musk's recent impulsive changes, it's being rapidly revised, and looks to be changing again soon.

https://www.bbc.com/news/business-65248196

> And he confirmed Twitter will change its newly added label for the BBC's account from "government funded media" to say it is "publicly funded" instead.

https://www.npr.org/2023/04/05/1168158549/twitter-npr-state-... has a screenshot of the "state-affiliated" tag that was recently present.


According to https://help.twitter.com/en/rules-and-policies/state-affilia... 'state-affiliated media' and 'government-funded media' are two distinct labels on Twitter.


Ninja editing this stuff is par for the course at Twitter, as are impromptu decisions, reversals of those decisions and reversals of the reversals. If you want to keep track of stuff you need to screenshot and timestamp it.


This will surely bring advertisers rushing through the doors!


> In our internal tests at Meta, we observed that Buck2 completed builds 2x as fast as Buck1.

Interesting, so twice the bang for your buck.


But if you need Buck2 then you’re back to one bang per buck

