Hacker News | ninjin's comments

> I'm reminded of that scene in "Ghost in the Shell" where some guy asks the Major why he is on the team (full of cyborgs) and she responds something along the lines of "Because you are basically un-enhanced (maybe without a ghost?) and are likely to respond differently than the rest of us; overspecialization is death."

The scene you mentioned (amazing movie and holds up to this day) with the Major and Togusa:

https://youtube.com/watch?v=VQUBYaAgyKI

While I frequently use a similar argument, "We need someone 'untainted' to provide a different point of view", my honest opinion is somewhat more nuanced. These models tend to gravitate towards a certain level of writing competence, determined by how good we are at filtering pre-training data and creating supervised data for fine-tuning. However, that level is still far below my current professional writing and I find it dreadful to read compared to good writing. Plenty of my students can not "see" this, as they are still below the level of current LLMs, and I caution them against relying too heavily on LLMs for writing, as they may then never learn good writing and "reach above" LLM-level writing. Instead, they must read widely and reflect. I also always provide written feedback on their writing (rather than making edits myself) so that they must incorporate it manually into their own; in doing so, they have to consider why I disagree with the current writing and hopefully learn to become better writers.


Bitterpilled. Wow, the audio mixing on that clip is great. I miss art like this. I'm afraid that nothing will recapture the way I felt watching GitS the first time.

There are so many pieces of media that I wish I could fully scrub from my memory so as to experience them for a second time.

You just invented a category for a list! Going to have fun thinking of mine.

Indeed, Heathrow security is the rudest I have experienced. They get aggressive if you so much as ask a question. Furthermore, I have on numerous occasions had them argue with me to go against the medical advice of both doctors and medical device manufacturers. Last time, they went as far as claiming that a scanner does not emit electromagnetic radiation.


Would have loved to see how it holds up with some load via FastCGI and CGI (via slowcgi(8)), since httpd(8) can be used with both of them.
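For reference, a minimal httpd.conf(5) sketch of that setup (the server name is a placeholder; slowcgi(8) listens on its default socket, whose path below is relative to httpd's /var/www chroot):

```
server "example.org" {
    listen on * port 80

    # Hand /cgi-bin/* to slowcgi(8) over FastCGI
    location "/cgi-bin/*" {
        fastcgi socket "/run/slowcgi.sock"
        root "/"
    }
}
```

Both daemons ship with the base system, so it is just a matter of enabling and starting them with rcctl(8).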


> Although I guess the argument will be that email clients should use AI to summarise the HTML into a plain text summary.

Or you could pass it through ~5,000 lines of C [1] and you will have it done in milliseconds even on hardware that would be old enough to drink.

[1]: https://codemadness.org/webdump.html
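To be clear, webdump handles the real complexity (entities, tables, links, encodings); but as a crude stand-in for the core idea, HTML-to-text conversion is essentially tag stripping, which needs no AI at all:

```shell
# Crude illustration only; webdump does this properly
# (character entities, tables, link references, and so on).
printf '<p>Hello, <b>world</b>!</p>\n' | sed -e 's/<[^>]*>//g'
# -> Hello, world!
```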


I think our contexts are all different. But, to share a different experience, as an academic (with plenty of conversations involving people in industry as well each year) I have used interleaved and bottom-posting for decades and it causes confusion maybe once a year at most, and mostly because Microsoft's online client is broken and at times does not even render anything below "Dear Foo," in the HTML view (got to give this small start-up in Redmond some more time though, we can not expect them to implement standards that have only been around for over 40 years).


Exactly. The problem is that by their very nature some content has to be dynamically generated.

Just to add further emphasis as to how absurd the current situation is: I host my own repositories with gotd(8) and gotwebd(8) to share within a small circle of people. There is no link on the Internet to the HTTP site served by gotwebd(8), so they fished the subdomain out of the main TLS certificate. I have been getting hit once every few seconds for the last six or so months by crawlers ignoring the robots.txt (of course) and wandering aimlessly around "high-value" pages like my OpenBSD repository forks, calling blame, diff, etc.

Still managing just fine to serve things to real people, despite me at times having two to three cores running at full load to serve pointless requests. Maybe I will bother to address this at some point as this is melting the ice caps and wearing my disks out, but for now I hope they will choke on the data at some point and that it will make their models worse.


The uniqueness of the situation is that OpenAI et al. pose as intelligent entities that serve information to you with authority.

If you go digging on darkweb forums and you see user Hufflepuffed47___ talking about dosages on a website in black and neon green, it is very different from paying a monthly subscription to a company valued in the billions that serves you the same information through the same sleek channel that "helps" you with your homework and tells you about the weather. OpenAI et al. are completely uprooting the way we determine source credibility and establish trust on the web and they elected to be these "information portals".

With web search, it is very clear when we cross the boundary from the search engine to another source (or it used to be before Google and others muddied it with pre-canned answers), but in this case it is entirely erased and over time you come to trust the entity you are chatting with.

Cases like these were bound to happen and while I do not fault the technology itself, I certainly fault those that sell and profit from providing these "intelligent" entities to the general public.



Correct, the pirated music library was before they exited the closed Alpha.


No, that's what they ran on when the general public could join on a referral basis. They called that "beta".

The technology was already proven, i.e. The Pirate Bay and other torrent networks had already been a success for years. What Spotify likely aimed to show was that they could grow very fast and that their growth was too good to just shut down, like the entertainment industry tried to do with TPB.

After they took in the entertainment oligarchs, they cut out the warez and replaced it with licensed material.


Not sure if it was called "beta" or "alpha" and "closed" is of course up to interpretation, but it was indeed by invitation. Swedish law at the time (still?) had a clause about permitting sharing copyrighted material within a limited circle, which I know Spotify engineers referred to as somewhat legitimising it. I also know for a fact that once the invite-only stage ended there was a major purge of content and I lost about half of my playlist content, which was the end of me having music "in the cloud". Still, this is nearly twenty years ago, so my memory could be foggy.


When I first started using Spotify, a lot of the tracks in my playlists had titles like "Pearl Jam - Even Flow_128_mp3_encoded_by_SHiLlaZZ".

It always made me chuckle; it looked like they had copied half of their catalogue from The Pirate Bay. It took them a few years to clean that up.


Yes, when the entertainment industry came onboard they immediately made the service much worse. I reacted the same way you did.

IIRC, 2008, a little less than twenty years.


> The technology was already proven, i.e. The Pirate Bay and other torrent networks had already been a success for years.

Spotify showed that you could have a local-like experience with something backed by the cloud. BitTorrent had never really done that: the clients weren't that good, and you couldn't double-click and hear a song in two seconds.

The way you said that made me think you might be remembering when it was partially P2P. I don't remember the exact timeline, but the P2P part was only used to save bandwidth costs, and they eventually dropped it because network operators didn't like it and CDNs became a thing.


If you don't remember, why speculate?

Ek had been the CEO of µTorrent, and they hired a person who had done research on torrent technology at KTH, the Royal Institute of Technology, to help with the implementation. It was proven technology that required relatively small adaptations.

They moved away from this architecture after the entertainment industry got involved. Sure, it was a cost issue until this point, but it also turned into a telemetry issue afterwards.


I am somewhat cautious to comment as I know the author is way more experienced than I am and I fear that I may be missing something. However, let me try to accomplish the same with my elementary doas(1) knowledge.

Allowing mounting for a specific group is simple with doas.conf(5):

    permit :mountd cmd /sbin/mount
    permit :mountd cmd /sbin/umount
We can of course tighten it further as the author did:

    permit :mount-usb cmd /sbin/mount args /dev/sdb1
    permit :umount-usb cmd /sbin/umount args /media/usb
If you want to go more complex than specifying arguments, we could of course create a shell script and specify it instead of a binary.
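As a sketch of that shell-script route (the device, mount point, and verb names are my own assumptions, not the author's):

```shell
# mount-usb: a wrapper doas could be pointed at instead of /sbin/mount,
# so the permitted command surface is exactly two verbs.
mount_usb() {
    case "${1:-}" in
    attach) /sbin/mount /dev/sd1i /mnt/usb ;;   # assumed device/mount point
    detach) /sbin/umount /mnt/usb ;;
    *) echo "usage: mount-usb attach|detach" >&2; return 1 ;;
    esac
}

# Anything outside the two verbs is rejected before mount(8) is touched:
mount_usb format-my-disk || echo "rejected"
```

The matching rule would then be something like `permit :mount-usb cmd /usr/local/bin/mount-usb`.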

Likewise, we can do something similar for a service account:

    permit :www-deployment as www-deployment cmd /var/www/bin/build args /var/www/application
The key difference here would be that www-deployment can not delegate as easily to arbitrary users, as they would need to ask someone with root access to add additional users to the www-deployment group. But I am left wondering if this use case (if it is important enough) is not equally well served by specifying a location for non-root users to add permissions akin to what we see in doas.conf(5), but with the constraint that they of course can only allow other users to run commands with their privileges. Yes, it would "bloat" doas(1), but these code paths are not that long as long as you keep your scope constrained (doas(1) has a core of just over 500 lines, and with environment handling and configuration format parsing we arrive at a final line count of just over 1,300).

At this point, the main advantage I see with capsudod is that you can more easily drop privileges and put in restrictions like pledge(2) before the binary is ever called upon by whatever user we have granted permissions, whereas with the doas(1) thinking above every request has to run through plenty of code that could be exploited. Still, this feels like a rather minor relative improvement to what we already have.

Am I missing something in my ignorance? Lastly, let me also say that I am sure that sudo(8) has the ability to do the same things I proposed to do with doas(1) above, but I know the latter far better.


The whole problem is mapping privilege to users and groups, so doas doesn't solve the issues explained in the article.

> The key difference here would be that www-deployment can not delegate as easily to arbitrary users, as they would need to ask someone with root access to add additional users to the www-deployment group. But I am left wondering if this use case (if it is important enough)...

Delegation is the killer feature of the object capability model. It's not just important enough, it's the most important. Keep in mind that the ACL model allows delegation, too, it's just unsafe. Users share credentials all the time. Capabilities allow delegation in a way that can be attenuated, revoked, and audited.


Firstly, thank you for engaging and trying to enlighten me.

I do understand why capability delegation is useful and I am familiar with using Unix sockets to delegate the control of daemons using socket permissions, which feels similar to what we see here with capsudod (I have not read the code sadly, too much other code to read today).

However, I am still puzzled as to what the advantage of having a herd of capsudod instances running is over, say, my proposal of allowing users to set up their own doas.conf(5)s to delegate capabilities. Yes, we still need SUID and we will need to be darn sure 1,000 or so lines are properly secured, but it is attenuable, revocable, auditable, and feels (perhaps wrongly, because I have a bias towards text files describing the state of a system?) more natural to me than putting it all into the running state of a daemon.
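Concretely, and purely hypothetically (doas(1) has no such feature today; the path, users, and command below are all made up), I picture a per-user configuration whose rules can only ever grant that user's own privileges:

    # ~alice/.doas.conf (hypothetical): parsed with alice's privileges
    # and only able to permit commands to run as alice herself.
    permit bob as alice cmd /home/alice/bin/deploy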

Is there some other strength/weakness of these approaches that I am failing to see? I am no systems programmer, but I find topics like this interesting and dream of a day when I could be one.


> However, I am still puzzled as to what the advantage of having a herd of capsudod instances running is over, say, my proposal of allowing users to set up their own doas.conf(5)s to delegate capabilities. Yes, we still need SUID and we will need to be darn sure 1,000 or so lines are properly secured, but it is attenuable, revocable, auditable, and feels (perhaps wrongly, because I have a bias towards text files describing the state of a system?) more natural to me than putting it all into the running state of a daemon.

I think two separate discussions are being mixed here. The above seems mostly concerned with the chosen interface of capsudo. Imperative vs. declarative is orthogonal to the discussion about object capabilities vs. ACLs.


sure, but that doesn't change the fact that doas(1) is a suid binary. everything would be done as root: parsing the config file, checking the rights, and finally executing the command.

capsudo here would rely on singular unix sockets with file access rights, so in essence it would indeed be similar to what you could do with doas, but the idea here is to separate things. with doas, doas itself would check if you have the correct group or user to run the command, while with capsudo, the kernel would check it and reject you if you don't have the right.
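to illustrate the "kernel checks it" part (paths and modes are made up for the example): a unix socket is just a filesystem object, so "who may use this service" is ordinary ownership and mode bits, enforced at connect(2) time with no privileged code parsing anything.

```shell
# Stand-in file here; a real capsudo-style daemon would bind(2)
# the actual socket at such a path.
dir=$(mktemp -d)
touch "$dir/mount.sock"
chmod 660 "$dir/mount.sock"    # owner and group only
ls -l "$dir/mount.sock" | cut -c1-10
# -> -rw-rw----
```

with a `chown root:mount-usb` on the socket, only members of that (assumed) group would get past the kernel at all.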

