Moby Dick, at its slender 752 pages, is 1.2MB of text. You can save the entire text to disk on every keystroke on just about any system today and keep up with typing just fine.
Assuming you are actually dealing with text in a text editor, you should be fine.
If you have 100MB+ files, chances are they aren't actually text.
Many times in my career I've had to open very large files (usually CSV data) and inspect/edit them by hand.
Of course there are other ways to do this work, but a text editor that can efficiently open and edit very large files (e.g. Vim) is a great tool for the job. I will put up with otherwise excellent code-specific editors/IDEs that cannot work on such files, but my default general-purpose text editor is always going to be something that can.
I appreciate that the design and implementation of text editors is basically an art form (one I've dabbled in myself) in which simplicity has aesthetic value, but efficiency and flexibility are very important for an editor that's going to be used for real work.
Textual dumps of SQL databases also fall into this category. I remember downloading a huge SQL dump once. I couldn't open it in most editors because they all buffered the whole thing into RAM, and at about 13G the file didn't fit into the RAM of any machine I own. But in this case using ropes wouldn't have helped either, I guess.
Text editors should also take huge files into account and read sequentially from disk into memory. Even in Emacs I couldn't work with the file. I ended up fixing it and importing it into PostgreSQL, and then spent hours indexing the necessary fields :).
I believe you can use the vlf package (https://github.com/m00natic/vlfi) for dealing with large files in Emacs. I haven't used it myself, so I am not sure how stable it is.
Thanks for the pointer. I guess I could also have written a small program or script to go through the file; regular fopen and fread should work. However, the vlf package does exactly what I was looking for. The only prerequisite I can see is that on 32-bit systems you must compile Emacs with bignum support to read files larger than 512M.
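Something along these lines is what I had in mind for the small-program route (just a sketch; the chunk size and the per-chunk processing are placeholders, here it only counts bytes):

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s FILE\n", argv[0]);
            return 1;
        }
        FILE *fp = fopen(argv[1], "rb");
        if (!fp) {
            perror("fopen");
            return 1;
        }
        static char buf[1 << 20];   /* 1 MiB chunk; pick whatever fits */
        size_t n;
        unsigned long long total = 0;
        while ((n = fread(buf, 1, sizeof buf, fp)) > 0) {
            /* inspect or transform the chunk here; this just counts bytes */
            total += n;
        }
        fclose(fp);
        printf("%llu bytes\n", total);
        return 0;
    }

Memory use stays constant no matter how big the file is, which is the whole point.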
Fair point, but do you actually edit log files in a text editor?
Because if you're just viewing, the "array of characters" representation is going to beat just about any other hands down, especially if you just mmap() the whole thing.
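For the viewing case, a bare-bones sketch of what I mean (POSIX mmap, read-only, error handling kept minimal):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s FILE\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Map the whole file read-only; pages are faulted in lazily,
           so "opening" a multi-gigabyte file costs almost nothing. */
        const char *text = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (text == MAP_FAILED) { perror("mmap"); return 1; }

        /* Jumping to an arbitrary offset is plain pointer arithmetic;
           here we print up to 80 bytes from the middle of the file. */
        size_t off = (size_t)st.st_size / 2;
        size_t len = (size_t)st.st_size - off;
        if (len > 80) len = 80;
        fwrite(text + off, 1, len, stdout);
        putchar('\n');

        munmap((void *)text, st.st_size);
        close(fd);
        return 0;
    }

The OS does all the caching and eviction for you, which is hard to beat for read-only access.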
I've often used vi where I really meant a pager. It took a long time before reaching for head, tail, grep, wc, du, stat, etc. to peek into data became muscle memory for me.
Why not send them to a log aggregator for search and long-term storage, and run logrotate on the server so you don't get this problem? Hell, why not logrotate anyway and keep multiple smaller files?
Sure. My response was to the comment that 100MB+ files are most likely not text. Like others, I was just trying to express that there are plenty of use cases for a text editor that can handle very large files. It's just a very practical thing.