OS X and the Unremovable File (2013) (galvanist.com)
111 points by progval on Nov 20, 2015 | 41 comments



My impression is that the author wants this to be a bigger deal than it is. You found a bug; there are hundreds of thousands of bugs in OS X, just like in any other commercial OS. Maybe the author should file a bug report and give readers the radar # so we can add impact.

Not that it's wrong to alert users to the problem, but it's more important, and not difficult, to alert Apple that this bug exists and users care about it.


I've run into cases on Windows where a path was too long to delete. Renaming the long directory names to something short gets around it.
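The gist, from cmd (directory names here are made up):

  ren C:\reallylongdirectoryname1 a
  ren C:\a\reallylongdirectoryname2 b
  rd /s /q C:\a

Each ren shortens one component until the whole path fits under Windows' 260-character MAX_PATH, at which point rd can delete the tree.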


In my experience Cygwin (the Unix environment for Windows) also works. It's only Windows Explorer and the command line that have problems with long paths, not the file system/kernel itself.


Try creating a file like `nul.txt` sometime.

http://superuser.com/questions/282194/how-do-i-remove-a-file...
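"nul" is a reserved DOS device name, so normal Win32 path parsing refuses to touch it. The escape hatch (and what that link boils down to) is the \\.\ or \\?\ prefix, which bypasses that parsing:

  echo gotcha > \\.\C:\temp\nul.txt
  del "\\?\C:\temp\nul.txt"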


My favourite filename trick on Windows is to create a file called "1", then rename it using Windows Explorer using Alt-255 to create a blank character (not a space, but looks like a space).


That reminds me of creating a directory called * with root permissions in the root filesystem on Linux, then asking the intern to remove it.

  sudo rm -rf *
They'll learn the hard way to use double quotes around the directory name.
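The fix is the same lesson in reverse; quoted, the glob goes through literally:

  cd /
  sudo rmdir '*'   # removes only the directory actually named *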


Another ugly trick is to create a junction (mklink /j) to some inner dir and then renaming the file or dirs making the path too long.
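A sketch of that one, with made-up names:

  mkdir C:\outer\inner
  mklink /j C:\jump C:\outer\inner
  rem now rename C:\outer to a ~255-char name; anything created
  rem through C:\jump ends up on a real path past MAX_PATH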


On a related note, I once tried to access a long path in an ext4 filesystem on one of my USB HDDs using ext2fs on Win32. I hit the path-length limit and couldn't reach the final directory.

So, I shared the directory I could reach, then browsed to the \\share.

It worked. :D
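The same dodge works from the command line too, assuming admin rights (share name made up):

  net share deep=C:\as\deep\as\you\can\reach
  dir \\localhost\deep
  net share deep /delete

The share becomes a new path root, so the remaining components fit under the limit again.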


The dumb-trick way to solve this is to use Robocopy with /PURGE to merge an empty directory with a parent of your non-deletable files. Somehow it manages to helpfully delete all the "extra" files.
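If memory serves, it goes like this (C:\stuck standing in for the problem directory):

  mkdir C:\empty
  robocopy C:\empty C:\stuck /purge
  rd C:\empty C:\stuck

/PURGE deletes anything in the destination that's missing from the (empty) source, and Robocopy handles paths past MAX_PATH internally, so the ceiling doesn't bite.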


When removing files in odd situations on Unix systems, I'm a big fan of:

   01:30 shephard:shephard shephard$ touch foo
   01:30 shephard:shephard shephard$ ls -li
   total 0
   102600870 -rw-r--r--  1 shephard  2000  0 Nov 20 01:30 foo
   01:30 shephard:shephard shephard$ find . -inum 102600870 -exec rm {} \;
   01:30 shephard:shephard shephard$ ls -li
But it doesn't work in this situation with the long symlink. Wild.

   shephard:~ root# ls -li $str1/$str2/$str3/$str4
   total 8
   102601971 lrwxr-xr-x  1 root  wheel  3 Nov 20 01:37 L -> ftw
   shephard:~ root# find $str1/$str2/$str3/$str4 -inum 102601971 -exec rm {} \;
   rm: 1.../2.../3.../4..../L: No space left on device
   shephard:~ root#


Why would that behave any differently than just calling rm directly? find isn't somehow invoking rm with the inode; it's just passing the path to rm.


This is useful when the file name gives the shell conniptions, such as when you are trying to remove a file named * or \ or $ or - . Anytime I see a file like that, I don't even try to figure out how to convince the shell/rm to properly escape the characters; I just switch back to the inode approach.

But yes, doesn't help for filenames that are too long.
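For those specific names there are also direct spellings, for what it's worth:

  rm -- -          # -- ends option parsing
  rm ./-           # or anchor the name with ./
  rm '*' '\' '$'   # quoting keeps the shell's hands off

But the inode route works no matter what the name is, which is a nice property.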


Just guessing, but maybe the shell is eating the filename rather than rm eating it?


The article demonstrated trying to make the syscall directly and still receiving the error, which rules out any shell shenanigans.


My bad. I should RTFA.


It would not. The way to unlink files is by path, not by inode. I don't think you can actually delete by inode on OS X.


This could go horribly wrong when the filename contains newlines, right? Like `foobar\n/etc/passwd`.
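Thinking about it more, -exec hands the path to rm as a single argv entry, so that form is actually safe; the footgun is the piped variant:

  find . -inum 102600870 -print | xargs rm       # a name with \n gets split
  find . -inum 102600870 -print0 | xargs -0 rm   # NUL-delimited, safe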


On a related note, does anyone have experience with grayed out folders in Finder? I understand OS X sometimes sets the btime of these folders to a certain value if a copy operation failed, but I would specifically like to know how to reproduce them. I am working on a new kind of sync tool and managed to get a grayed out folder once, but could not reproduce it again.
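The closest lead I've found is that the btime in question is reportedly the classic Mac epoch, January 24 1984, which interrupted copies leave behind; if that's right, SetFile from the developer tools ought to reproduce it (untested sketch):

  SetFile -d '01/24/1984 00:00:00' /path/to/folder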


To evangelize Emacs as the most powerful general purpose Unix tool ever, I used to leave a file called README at the top level of my public ftp directory that contained:

    README: No such file or directory
When people emailed me asking about why they couldn't read the README file, I'd tell them:

Just run "emacs README" and you'll be able to read it with no problem!

Nobody ever got back to me after that bit of advice, but I hope it taught some people to love Emacs.


vi can do that too, you know...


My bet is on an off-by-one error somewhere amongst all the fixed-length buffers used to handle paths. In some ways, I'm surprised that modern operating systems still make extensive use of fixed-length buffers and manipulate paths as strings, since other representations like arrays of inode numbers or even a linked list could be simpler and more efficient to manipulate while avoiding some of the strangeness and edge cases with respect to path parsing.


You cannot represent file paths as inode numbers if you support hardlinks.
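A two-liner makes the point; ls -i prints inode numbers:

  touch f && ln f g
  ls -i f g   # same inode number, two distinct paths

Given only the inode, there's no way to say which of the two names was meant (and nothing stops them from living in different directories).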


HFS+ has to be the worst widely-used filesystem out there. It's a dog's breakfast of this kind of stuff.


FAT32 is still fairly prevalent, and I'd say worse, having dug through the structures of both and gotten intimate with their limitations.

But I doubt that HFS+ is to blame here. It sounds like it's just a bug in the filesystem code.


It's been a while since I've used OS X with non-HFS+ filesystems (or case-sensitive HFS+). Would anyone with such experience like to comment?


Does linux/ext3/ext4 have quirks like this?


Note that this is likely an issue with PATH_MAX, a constant that programs often use as the maximum path length when sizing buffers.

Different programs might have different limits, independent of the file system itself (and the filesystem usually has a separate limit for a single name component; PATH_MAX mostly matters for full-path lookups, such as resolving symlinks).

linux/limits.h defines it as 4k, which is longer, but you'd be surprised how many tools pick their buffer sizes arbitrarily (with 256 being _quite_ too common).

Given that each fs might have different limits, it's kind of bad practice to assume some fixed length as the maximum path size (I don't know whether Linux actually enforces PATH_MAX in the VFS as a ceiling for all filesystems anyway).
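The portable move is to ask at runtime rather than bake in a constant; pathconf(2) is the API and getconf is the CLI wrapper:

  $ getconf PATH_MAX /
  1024
  $ getconf NAME_MAX /
  255

(Those are the OS X numbers; Linux typically reports 4096 and 255.)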



Hey mods/submitter, can we add a (2013) to this? The post is dated November 19, 2013. Thanks!


At your service.


dang, dang at our service. dang.


So what's the situation like on El Capitan?


On El Capitan:

    sh-3.2# pwd
    /var/root
    sh-3.2# ln -s ftw $str1/$str2/$str3/$str4/L
    ln: 1.../2.../3.../4.../L: File name too long


Yay, that's actually the right error. PATH_MAX in xnu (the OS X kernel) is 1024, so it's not surprising that things get weird when you have paths at least 1025 characters long, as in his example.


> PATH_MAX in xnu (the OS X kernel) is 1024, so it's not surprising that things get weird when you have paths at least 1025 characters long, as in his example.

What is surprising is the asymmetry: you can create the symlink, but cannot remove it.


If you make a directory tree "a/a/a/a/a/a/a" and then rename every directory to "aaaaaaa (x255)", starting with the innermost directory... How do other OSes deal with this? Does every rename of a top-level folder have to check the path lengths underneath it recursively first?
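A sketch of the construction, renaming innermost first (borrowing the python trick used elsewhere in this thread):

  cd /tmp && mkdir -p a/a/a/a/a
  long=$(python -c "print 'a' * 255")
  mv a/a/a/a/a a/a/a/a/$long
  mv a/a/a/a   a/a/a/$long
  # ...and so on up to the root of the tree

Each individual rename operates on a path that still fits, but the leaf's absolute path ends up far past the limit without any single call ever seeing it.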


That’s interesting. I’m on El Capitan too (10.11.1) and as root I can create the symlink but not remove it. Might you be using a different filesystem or something?


How strange. I'm also on 10.11.1, and I get "file name too long":

  sh-3.2# ln -s ftw $str1/$str2/$str3/$str4/L
  ln: 1.../2.../3.../4.../L: File name too long

  sh-3.2# mount
  /dev/disk1 on / (hfs, local, journaled)
  ...
Edit: see my comment above - you can indeed create a path too long; just not in one go.


Ah, it looks as though it works in /tmp (where I first tried) though not in my home directory. Very odd…


Not to directly answer your question, but I'm using Yosemite 10.10.5 and it's easy enough to check:

  plo-pro:~ root# pwd
  /var/root
  plo-pro:~ root# str1=$(python -c "print '1' * 255")
  plo-pro:~ root# str2=$(python -c "print '2' * 255")
  plo-pro:~ root# str3=$(python -c "print '3' * 255")
  plo-pro:~ root# str4=$(python -c "print '4' * 253")
  plo-pro:~ root# mkdir -p  $str1/$str2/$str3/$str4
  plo-pro:~ root# ln -s ftw $str1/$str2/$str3/$str4/L
  plo-pro:~ root# (cd $str1/$str2/$str3/$str4; unlink L)
  unlink: L: No space left on device
(all the other obvious delete commands also fail)


El Capitan won't let you create the path in one go (it gives file name too long), but you can create it incrementally:

  sh-3.2# pwd
  /private/var/root/1.../2.../3.../4...
  sh-3.2# ln -s /var/root/ftw L
  sh-3.2# rm L
  rm: L: No space left on device



