Yes, but you're using a technical definition of FILE and STREAM (pipes).[1]
However, the author was talking about a conceptual difference rather than a technical one. He's recommending that people think of logs as streams instead of files (again, "files" as an abstract concept, not as a C FILE* or a Unix OS file descriptor). That shift in thinking would lead to a different approach to writing log data.
In any case, I think the Jay Kreps article on logs has much more information -- especially with distributed systems.[2]
[1] E.g., C's fopen() returns a FILE* and can open either an actual file on disk or a transient named pipe (created with mkfifo).

[2] http://engineering.linkedin.com/distributed-systems/log-what...
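To illustrate footnote [1], here's a minimal C sketch (the file name "app.log" and the pipe path "/tmp/logpipe" are made-up examples) showing that the same FILE* stream API reads a regular log file and a named pipe created with mkfifo, with the calling code none the wiser:

    /* Sketch only: assumes the FIFO was created beforehand with
     * "mkfifo /tmp/logpipe"; both paths here are hypothetical. */
    #include <stdio.h>

    static void tail_lines(const char *path)
    {
        FILE *fp = fopen(path, "r");   /* same call for a disk file or a FIFO */
        if (fp == NULL) {
            perror(path);
            return;
        }

        char line[4096];
        while (fgets(line, sizeof line, fp) != NULL) {
            /* On a FIFO, fgets() blocks until a writer sends more data,
             * so the "file" behaves like a live stream. */
            fputs(line, stdout);
        }
        fclose(fp);
    }

    int main(void)
    {
        tail_lines("app.log");       /* ordinary file on disk */
        tail_lines("/tmp/logpipe");  /* named pipe: same code, streaming source */
        return 0;
    }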
People should educate themselves and stop thinking of files as whatever it is that makes them conclude that logfiles are bad.
Do they seriously think a file is like a magical stone being dropped wherever it may be? A self-contained unit, like a bit? Well, they're wrong. Files are streams of bits.
Streams of bits can be in transit or stationary. For example, moving bits from here to there, over the network or on the hard drive, is bits in transit -- a collection we call a file. A bunch of bits sitting on a hard drive or in RAM is still a stream, just not one in the process of being copied or moved elsewhere.
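To make that concrete, here's a minimal sketch (the program name "copy" and the file "app.log" are hypothetical): the same chunked read loop consumes bytes whether they're sitting in a file on disk (./copy < app.log) or arriving in transit through a pipe (producer | ./copy).

    /* Sketch: stdin may be a regular file, a pipe, or a socket;
     * the copy loop treats all of them as the same stream of bytes. */
    #include <stdio.h>

    int main(void)
    {
        char buf[8192];
        size_t n;
        while ((n = fread(buf, 1, sizeof buf, stdin)) > 0)
            fwrite(buf, 1, n, stdout);
        return 0;
    }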