There are a million ways to copy files, and a lot of them are simpler to use than your example.
But that's not the point here. Sometimes you've just got to make something because you want to, because it fits a specific need that maybe not a lot of other people have, or because it's a learning experience, or just for the heck of it. So, no, it's not required, but that doesn't mean it's useless.
It's cool that the author did something productive that works for him/her, and it's even cooler they shared it with everyone.
I can say that's already infinitely more than what I made today. How about you?
Oops! Looks like both the client and the server have to be trusted, otherwise we've got at least one probably-exploitable vulnerability. And there's no mechanism for authentication, so it's really only safe to use on a locally-secured network. (That took about 40 seconds to find, by the way - I would not be particularly surprised if there were more subtle issues lurking.)
I sympathize with the sentiment of "people should go out and try to create things themselves, even at the risk of failing" (or "especially" at the risk of failing), but from any objective standpoint, bcp isn't a good program. 400 lines of C to badly accomplish what 2 lines of shell can do? Someone else commented about it being very much in the unix spirit - no, I don't really think so. netcat + openssl would be in the unix spirit.
Not meant to be a criticism of the author - it's a cool project, if you don't care about certain "real-world" concerns (which isn't as unreasonable as it sounds).
The comments here remind me of the negative feedback Heather Arthur got when she dared to open-source some code: http://harthur.wordpress.com/2013/01/24/771/. The code is on Github. If you find a bug then why not fix it and send a pull request instead of the negative public criticism?
I should have noted somewhere, this isn't ready for production by any means, just a first iteration of an idea I had. It is currently intended to be used on a trusted network.
When the title says "Quickly copy a file..." it is reasonable to assume that we're talking about something quicker/easier than at least the most obvious ways to copy. But if we're talking about something you made just because you wanted to - it is more appropriate to present it as something like "Yet another way to copy a file..." - or, better yet, clearly state how your way is different from a million existing ways.
Don't forget also the "yet another HN karma or attaboy builder".
(Having spent years doing minor little things that nobody ever would care to hear about (back in the day) I can of course fully understand the positive aspects of the mental process. You learn something and perhaps people leave comments that make you feel good which spurs you on to do better things.)
Haha, that certainly is not the case. I made a tool, found it useful, a few of my other friends find it useful, so I posted it.
Some people voted it up, I can't control that. I really couldn't care less about karma; in fact, the critical comments (except yours) have been very useful and worth the post.
If you read the `nc` manpage, you'd notice that the -n option on OS X and Debian disables host lookup and that hostname is defined as
hostname can be a numerical IP address or a symbolic hostname (unless the
-n option is given). In general, a hostname must be specified, unless
the -l option is given (in which case the local host is used).
Those hours of time spent writing the code could have been saved by spending minutes learning the standard way of doing it (and given the ubiquity of nc, its utility would stretch far beyond this use case).
I don't begrudge people reinventing the wheel, but I fully suspect this wouldn't have been written if the author had known about nc.
It wasn't many hours, and I know nc and have used it in exactly this way. However, nc requires that you know the hostname, an assumption I didn't want to make.
I appreciate the criticism though, it is always good to question the investment of time. In this case, I feel comfortable with it. Can't discount the joy of programming little things either.
You still need to know the IP. I use nc this exact same way, and it's definitely a hassle -- not a big one, but still -- to have to check ifconfig on one of the machines and then re-input the IP on the other. On internal networks where IPs are assigned by DHCP on connection and can and do change frequently this essentially needs to be done every time you send a file.
I'm not saying that it's such a big hassle that we need complicated ways of avoiding it, but this is definitely a cool project that does away with a (minor) pain point, and I for one applaud the author for scratching his itches and sharing with the world, even if his hack isn't perfect.
1. In general, I am on the computer I want to send from, then I go to the computer I want to receive on. This script seems to require I be at the receiving computer first. Is there a trick to do it my way?
2. A very minor point, but you are still required to know (or at least specify) the filename on the receiving end. My solution does not.
I'm not at a terminal now so can't verify this but it looks like you swapped the send and receive addresses from what was suggested. Try send on broadcast and receive on 0.0.0.0.
For reading files and transferring over the network, the limit is always going to be the IO devices, not an extra context switch and a few extra copies...
s/always/usually/. Depending on the disks involved and your network hardware (either a super-fast link, or even a system with a slow CPU and cheap network hardware that expects the drivers to do the heavy lifting in software), it actually could end up CPU-constrained.
But for the usual case where it is I/O-constrained, you can often get files across more quickly by throwing CPU at reducing the total amount of bandwidth required by, e.g., using something like bzip2 or pbzip2.
This requires that you know the IP/domain of the host. With DHCP on my laptop, this is not always the case. Yes, I know how to look up my IP, but doing it every time becomes a pain.
1) The code assumes both the client and the server have the same endianness. This can be an issue since a uint32 is used for both the synchronization code (BCP_CODE) and the port. This can easily be fixed with htonl/ntohl conversions.
2) It assumes both the client and the server define int to be the same size (which may or may not be true based on compiler/processor/OS combinations). If you use stdint definitions, this typically won't be an issue.
(Again, I don't mean to bash you or this project, just to say that it shouldn't be used in the "real-world" without being looked at rather more carefully. On a locally-secure network, or in an environment where security is unimportant, it's pretty cool.)
Completely agreed. I assumed the project would give the impression that it isn't a fully robust tool yet, but it is always worth making that completely clear.
Can do this and other related useful things, including multicast of a file to N machines in parallel. Don't let the UDP bother you: it implements retry/checksum/etc on top of datagrams.
It's been around a few years, and is probably already in your distro.
It can pipe as well as copy files; I wrote about it some time ago:
This is a great tool and a nice article. The only slight additional requirement for this tool is that you know the name of the file being sent. I was actually thinking of making this optional (for the same reason they require it: to allow many different files).
I've added a link for udpcast to my readme as an alternative. Thanks!
There's also UFTP, which doesn't require the receiver to know the file name. (But it differs from your implementation in a number of ways; most importantly, it transfers all data via UDP.)
The client-server arrangement is backwards though :) Typical arrangement is for the clients to do the "anyone there?" broadcast, for the servers to reply and then the client would select the server, connect to it and they would go about their business. In your case, the server connects to the client. If you re-arrange this to natural client-server order, you should be able to get rid of the fork() call and this will help with portability (not that you probably care at this point).
Also,
size = fread(&buf, 1, 100, ft)
No harm in using chunks larger than 100, especially when dispensing larger files.
Also, consider switching to multicast for discovery.
I agree with you on it being backwards, and for some reason I chose to do it this way, though I cannot now remember why... it may have just been a flawed thought, as I can see no reason not to do it your way at the moment.
Good find on the fread, that is actually a "bug", should be MAXBUFLEN instead of 100.
Agreed on multicast, I will add that to the todo.
I like how clever this is, but is there an option (or a planned option) to name a bcp session, so that I can send files among a big network with lots of people using this command concurrently?
I couldn't seem to get this compiled on OS X :( Do you (or anyone) know if there's an OS X version available? What I like about ncp is that it first requires an action at the sender (push) and then at the receiver afterwards (poll).
Your examples are all expected to sort out whether to accept an input stream or start an output stream, but without an explicit command, which poses a problem -- the app would have to determine that the input stream is empty before switching modes to streaming output. That's not so easy -- suppose the operator unintentionally pipes an empty input file? Will the app know what to do?
Most (I won't say all) command-line apps must be explicitly told which mode to use. There's a good reason.
isatty() to the rescue, but that's not to say that there shouldn't be a -- option or defaulting to a different mode depending on argv[0], like gzip does.
Hehe, the first thing I wrote when I learned programming (around 1990) was a tool called ipxcopy, which did the same thing over the IPX protocol that we always used at our LAN parties.
Definitely an interesting take on sharing data over a LAN, but I would be worried about the repercussions of using broadcasts to move a large quantity of data on a larger network; cool for smaller/personal networks, though.
This has some usability issues, e.g. you drag a file accidentally to another box, then change your mind and drag it back while it is still being copied or moved in the original direction. This sort of thing. It looks like a simple and natural extension to what Synergy has, but in reality it's just a can of worms.
The example in the README could be better if the command prompt showed that you were on different hosts. As it stands, it seems that you sent it from host 'heisenberg' to host 'heisenberg'.
For big files it would actually make a very noticeable difference; try scp with different encryption algorithms and you will see how much it matters.
With that said, by 'quickly', I was focusing more on easiness and simplicity.
If you read the article, it's because the faster alternatives use multiple streams and/or compression; it's got nothing to do with encryption. And you can get scp to compress your data first (-C).
I never run an ssh server on my laptops, no reason to really. Secondly, with ssh, it requires that you know the IP or hostname, both of which I don't want to worry about.
$ cat file | nc host.example.com 8888
I don't think a dedicated utility is required.
Edit: Sorry about my comment coming off as a bit hostile, I did not intend it to be.