I wish “removing features” wasn’t such a popular thing for software companies. I mean, I understand the cost of complexity, but is it REALLY that bad to keep an unmaintained feature going that people still use?
Well, now that QA is a mature process and people expect tests on every single thing, it's more "expensive" to keep a feature around, as you have to keep on testing it.
There is no way to keep an unmaintained feature in. It will break so fast, and there is no point keeping a broken feature around. Just download an FTP client.
You can just as effectively have HTTP sites that are simple listings of available software. I don't think FTP removal is really relevant here except for nostalgia.
FTP has some interesting features not found in straight HTTP, such as charset conversion and, my favorite, the SITE command. SITE was the escape valve for server-specific commands that weren't implemented in the FTP protocol itself. I'm sure it's a terrible vulnerability now, but it was fun to use and see what you could make a remote machine do.
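If anyone wants to poke at it, here's a minimal sketch using Python's ftplib; the server, credentials, and which SITE subcommands are accepted are all assumptions, since that part is entirely server-specific:

```python
# Minimal sketch with Python's ftplib; the server address, credentials,
# and the accepted SITE subcommands are assumptions here.
from ftplib import FTP

ftp = FTP("ftp.example.org")          # hypothetical server
ftp.login("user", "password")

# SITE passes a server-specific command down the control connection;
# what it accepts depends entirely on the server implementation.
print(ftp.sendcmd("SITE HELP"))                  # many servers list their SITE subcommands
print(ftp.sendcmd("SITE CHMOD 644 readme.txt"))  # a common, still-supported one

ftp.quit()
```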
Disagree. FTP as a protocol made self-hosting all kinds of things much easier. We could soon live in a world where people self-host their own data, powered by protocols that borrow heavily from FTP.
HTTP is one layer too low to replace FTP: HTTP can provide the transport but not the application semantics. We would have to agree on a protocol built on top of HTTP to replace FTP.
That is a fair point. WebDAV seems to provide the listing and update semantics you are looking for, and it is widely supported by web servers, so I think it is a great replacement for FTP.
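For instance, a directory listing in WebDAV is just an HTTP PROPFIND request; a quick sketch with Python's requests library (the URL and credentials are placeholders, and the server needs WebDAV enabled):

```python
# Sketch of a WebDAV directory listing via HTTP PROPFIND, using the
# requests library. URL and credentials are placeholders.
import requests

resp = requests.request(
    "PROPFIND",
    "https://dav.example.org/files/",   # hypothetical WebDAV share
    headers={"Depth": "1"},             # 1 = the collection plus its children
    auth=("user", "password"),
)
print(resp.status_code)                 # 207 Multi-Status on success
print(resp.text)                        # XML multistatus body listing the entries
```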
FTP had better support for server-to-server transfers (often called FXP): you could start a receiver on machine 2 and then trigger a transfer from machine 1 to machine 2 without using your own bandwidth. I guess you can achieve the same thing by running curl on machine 2 in a screen session or something, but it feels a lot less elegant.
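For the curious, the trick is just crossing the PASV/PORT handshake between two control connections. A rough sketch with Python's ftplib; the hostnames, credentials, and file name are made up, and note that most servers now reject data connections to a third-party address for security reasons:

```python
# Rough FXP (server-to-server transfer) sketch using ftplib.
# Hostnames, credentials, and the file name are placeholders; both
# servers must allow third-party data connections, which most now block.
from ftplib import FTP

src = FTP("ftp.source.example")
dst = FTP("ftp.dest.example")
src.login("user", "password")
dst.login("user", "password")

# Ask the destination to open a listening data port; the reply looks
# like "227 Entering Passive Mode (h1,h2,h3,h4,p1,p2)".
resp = dst.sendcmd("PASV")
addr = resp[resp.index("(") + 1 : resp.index(")")]

# Point the source server's data connection at that address instead of
# at us, then start both halves of the transfer.
src.sendcmd("PORT " + addr)
dst.sendcmd("STOR bigfile.iso")   # destination stores whatever arrives
src.sendcmd("RETR bigfile.iso")   # source pushes the file directly

# Consume the final 226 "transfer complete" replies from both servers.
src.voidresp()
dst.voidresp()
src.quit()
dst.quit()
```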
I think scp can do something like this: by default, a remote-to-remote scp opens a direct connection between the two hosts. I remember occasionally hitting that when trying to move a file between two servers that couldn't talk to each other, expecting it to use my bandwidth/route for the transfer (which is what scp's -3 flag forces).
Wow. I used FTP for years and even ran my own server for a while, and I never knew that was a thing. Is that something that was actually used to mirror those sites? Debian mirrors certainly don't use it (anymore?), for example.
IIRC, at least initially, FTP had special modes for transferring binary data, whereas HTTP was not as efficient. And of course, HTTP was meant to be "pull only", whereas FTP had upload capabilities too.
The 'binary mode' of FTP just meant not translating line endings (or potentially the character set), as opposed to 'text mode'.
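In ftplib terms (server and file names made up), the difference is just which TYPE the client selects before the transfer:

```python
# Sketch of FTP text vs binary mode with ftplib; the server and file
# names are placeholders.
from ftplib import FTP

ftp = FTP("ftp.example.org")
ftp.login("user", "password")

# Text mode: the server may translate line endings (and historically charset).
ftp.retrlines("RETR notes.txt", print)         # issues TYPE A under the hood

# Binary mode: bytes pass through untouched.
with open("image.iso", "wb") as f:
    ftp.retrbinary("RETR image.iso", f.write)  # issues TYPE I under the hood

ftp.quit()
```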
The advantage over HTTP wasn't efficiency; it was that resuming interrupted downloads was typically possible with FTP but not HTTP (because ranged GET took a long while to gain support in both servers and clients).
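On the FTP side, resume was just the REST command giving a byte offset before RETR, and ftplib exposes it directly; a minimal sketch, with the server, credentials, and file name made up, assuming the server supports REST:

```python
# Sketch of resuming an interrupted FTP download with ftplib's `rest`
# parameter; server, credentials, and file name are placeholders.
import os
from ftplib import FTP

local = "big.tar.gz"
offset = os.path.getsize(local) if os.path.exists(local) else 0

ftp = FTP("ftp.example.org")
ftp.login("user", "password")
with open(local, "ab") as f:  # append to the partial file
    # `rest` makes ftplib send "REST <offset>" so the server starts the
    # transfer at that byte; `offset or None` avoids REST 0 on a fresh start.
    ftp.retrbinary("RETR big.tar.gz", f.write, rest=offset or None)
ftp.quit()
```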
The difference between binary and text is deeper than just line endings and character set; PDP-10s, for example, stored text as five 7-bit characters per 36-bit word. In text mode, the server would unpack that to ASCII, whereas binary mode would pack two 36-bit words into 9 bytes (2 × 36 = 72 bits = nine octets). If you transferred in the wrong mode, you'd just get an amorphous blob of protoplasm and need to transfer again.
If it's unidirectional (users only download) and the files aren't crazy big, then given today's connection speeds and reliability, I would say no, not really.