Hacker News

Yeah. Plus you'd have FTP sites that would mirror a variety of sources too. Now Chrome has removed FTP support completely.


I wish “removing features” wasn’t such a popular thing for software companies. I mean I understand the cost of having complexity but is it REALLY that bad to just keep an unmaintained feature going that people use?


You do have to maintain features: fix security issues, keep up as the surfaces around them change. Even unused, bitrotting code generates at least some work.


It's not that bad ... right up until someone finds a security problem involving said feature, and the original code authors have left.


Well, now that QA is a mature process and people expect tests on every single thing, it's more "expensive" to keep a feature around, as you have to keep on testing it.


There is no way to keep an unmaintained feature in. It will break fast, and there's no point keeping a broken feature around. Just download an FTP client.


> Now Chrome has removed FTP support completely.

You can just as effectively have HTTP sites that are simple listings of available software. I don't think FTP removal is really relevant here except for nostalgia.


FTP has some interesting features not found in plain HTTP, such as charset conversion and, my favorite, the SITE command. SITE was the escape valve for commands you needed to run that weren't implemented in the FTP protocol itself. I'm sure it's a terrible vulnerability now, but it was fun to use to see what you could make a remote machine do.


Disagree. FTP as a protocol made self hosting all kinds of things much easier. We could soon live in a world where people self host their own data powered by protocols that will borrow heavily from FTP.


Did it? I've always had more trouble throwing up quick FTP servers vs a quick "python3 -m http.server" for a throwaway HTTP server.
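To make the comparison concrete, here's a sketch of what that one-liner does under the hood: Python's stdlib spins up a file server with an auto-generated directory listing in a few lines (file name and port here are invented for the demo).

```python
import functools
import http.server
import socketserver
import tempfile
import threading
import urllib.request
from pathlib import Path

# Serve a temp directory -- roughly what "python3 -m http.server"
# gives you from the shell, but scripted.
tmp = tempfile.mkdtemp()
Path(tmp, "hello.txt").write_text("hi\n")

Handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory=tmp)
socketserver.TCPServer.allow_reuse_address = True
with socketserver.TCPServer(("127.0.0.1", 0), Handler) as srv:
    port = srv.server_address[1]  # OS-assigned free port
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    # Fetch the auto-generated directory listing.
    listing = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read().decode()
    srv.shutdown()

print("hello.txt" in listing)  # True
```

A throwaway FTP server, by contrast, needs a third-party package (e.g. pyftpdlib) plus user/permission setup before it serves a single file.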


Hardly. You can install Apache and drop your files in it and the default settings make it look pretty much like an FTP listing.


Gemini says hello!


Hello! If only protocols weren't treated as cash cows


HTTP is one layer too low to replace FTP. HTTP can provide the transport but not the application semantics. We would have to agree on a protocol that uses HTTP to replace FTP.


That is a fair point. WebDAV seems to provide the listing and update semantics that you are looking for. WebDAV is widely supported by webservers so I think it is a great replacement for FTP.
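For a sense of what those listing semantics look like: a WebDAV client sends a PROPFIND request with a "Depth: 1" header and gets back a 207 Multi-Status XML body enumerating the collection's members. A minimal sketch of parsing such a response (the sample body below is invented):

```python
import xml.etree.ElementTree as ET

DAV = "{DAV:}"  # ElementTree's notation for the "DAV:" XML namespace

def parse_propfind(body: str) -> list[str]:
    """Extract member hrefs from a 207 Multi-Status PROPFIND response."""
    root = ET.fromstring(body)
    return [resp.find(DAV + "href").text
            for resp in root.iter(DAV + "response")]

# Hypothetical response a server might return for PROPFIND /pub/ with Depth: 1
sample = """<?xml version="1.0"?>
<D:multistatus xmlns:D="DAV:">
  <D:response><D:href>/pub/</D:href></D:response>
  <D:response><D:href>/pub/readme.txt</D:href></D:response>
</D:multistatus>"""

print(parse_propfind(sample))  # ['/pub/', '/pub/readme.txt']
```

Uploads are plain HTTP PUT and MKCOL, which is why WebDAV maps so directly onto FTP's get/put/list workflow.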


Is there any meaningful difference between an FTP server and an HTTP server with directory listing enabled?


With HTTP you lose all the fun of active/passive FTP and the tricks to get around firewalls. Though stateful firewalls already ruined that.


FTP had better support for doing server-server transfer - you could start a receiver on machine 2 and then trigger a transfer from machine 1 to machine 2 without using your own bandwidth. I guess you can achieve the same thing by doing curl on machine 2 in a screen session or something, but it feels a lot less elegant.
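The trick described above works because FTP separates the control and data connections: you put one server in passive mode, then hand the address from its PASV reply to the other server via PORT. A sketch of the core parsing step, with the orchestration shown only as comments since it needs two live servers (hostnames and file name are hypothetical):

```python
import re

def parse_pasv(reply: str) -> tuple[str, int]:
    """Turn a '227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)' reply
    into the (host, port) the other server should connect to."""
    nums = list(map(int, re.search(r"\d+(?:,\d+){5}", reply).group().split(",")))
    return ".".join(map(str, nums[:4])), nums[4] * 256 + nums[5]

# Hypothetical FXP orchestration with ftplib (untested sketch):
#   from ftplib import FTP
#   src, dst = FTP("ftp.example-a.org"), FTP("ftp.example-b.org")
#   host, port = parse_pasv(dst.sendcmd("PASV"))          # dst listens
#   src.sendcmd(f"PORT {host.replace('.', ',')},{port // 256},{port % 256}")
#   dst.storbinary("STOR big.iso", ...)                   # dst receives...
#   src.sendcmd("RETR big.iso")                           # ...straight from src

print(parse_pasv("227 Entering Passive Mode (192,168,1,9,19,136)"))
# ('192.168.1.9', 5000)
```

The data then flows directly between the two servers; your client only ever sends control-channel commands.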


I think scp can do something like this; I remember occasionally hitting it when I wanted to move a file between two servers that couldn't talk to each other, expecting it would use my own bandwidth/route for the transfer.


Wow. I used FTP for years and even ran my own server for a while, and never knew that was a thing. Was that actually used to mirror those sites? Debian mirrors certainly don't use it (anymore?), for example.


It's called FXP: https://en.wikipedia.org/wiki/File_eXchange_Protocol

FlashFXP was a popular client that supported this: https://en.wikipedia.org/wiki/FlashFXP


It's called FXP and you can still find information on it. In the topsite scene it was widely used.


IIRC, at least initially, FTP had some special modes for transferring binary data, whereas HTTP was not as efficient. And of course, HTTP was meant to be "pull only" whereas FTP had upload capability too.


The 'binary mode' of FTP just meant not translating line endings (or potentially the character set), as opposed to 'text mode'.

The advantage of FTP wasn't efficiency; it was that resuming interrupted downloads was typically possible with FTP but not with HTTP (because ranged GET took a long while to get support in both servers and clients).
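These days resuming over HTTP is just a Range header: ask the server for everything past the bytes you already have on disk. A 206 Partial Content reply means the server honoured the range; a plain 200 means it didn't and you must restart. A minimal sketch of building that header (the download path is whatever partial file you have):

```python
import os

def resume_headers(path: str) -> dict[str, str]:
    """Request headers for resuming a partial download saved at `path`.

    Returns an empty dict if nothing is on disk yet (plain full GET).
    """
    have = os.path.getsize(path) if os.path.exists(path) else 0
    return {"Range": f"bytes={have}-"} if have else {}

# e.g. with 1024 bytes already downloaded, the request carries:
#   Range: bytes=1024-
```

FTP's equivalent was the REST command, which set the byte offset for the next RETR.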


The difference between binary and text is deeper than just line endings and character set; PDP-10s, for example, stored text as five 7-bit characters per 36-bit word. In text mode, the server would unpack that to ASCII, whereas binary mode would pack two words into 9 bytes. If you transferred in the wrong mode, you'd just get an amorphous blob of protoplasm and need to transfer again.


Actually, I believe HTTP was originally envisioned to be two-way, a la WebDAV.


If it's unidirectional (users only download) and the files aren't crazy big, then given today's connection speeds and reliability, I'd say no, not really.


This is the right thing to do. FTP should be handled by an FTP client, not an HTTP client.


It's gone the way of the gopher!



