There was a concept of "db-fs", whereby rather than files in a hierarchical folder structure, you just had blobs/collections of data you could search via the usual search queries. I suppose a similar structure to the average no-sql/JSON db, but as a file-system.
The problem is, it isn't even compatible with the usual interface for a fs driver (read/write, permissions etc). There is no "search" concept in the fs conceptual model. The model assumes, to some extent, a finite-size mount, and duplication/multi-reference is only poorly supported, emulated via hard/soft links.
plus, existing tools would, as you mention, have no support for what would now be built into the fs - functionality such as that provided by 'find' would now be built in, such that you would need a shell syntax/dsl to utilize it.
My best guess would be a 'special' query command that would result in a virtual folder popping up on a special virtual fs mount, e.g.:
> mk_qdir *some-search-query*
/proc/srch/023013/
but if ls'ing the folder kicked off a proc in the background that initiated the query, you would have to be careful no process (a mount counter for 'df', for example) crawled the partition.
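The mk_qdir flow above can be sketched in userspace as a toy model (not a real fs driver; the mk_qdir name, the /proc/srch/ layout, and the starting id are hypothetical, taken from the example path). The key property is laziness: registering the query is cheap, and the search only runs when something actually lists the virtual folder.

```python
# Toy model of the hypothetical mk_qdir flow: a query is registered
# under a numbered virtual directory, and "listing" that directory
# runs the search lazily rather than at registration time.
import itertools

class QueryMount:
    def __init__(self, corpus):
        self.corpus = corpus           # filename -> text
        self._ids = itertools.count(23013)
        self._queries = {}             # dir id -> query string

    def mk_qdir(self, query):
        """Register a query; return its virtual path without running it."""
        qid = next(self._ids)
        self._queries[qid] = query
        return f"/proc/srch/{qid:06d}/"

    def ls(self, path):
        """Only now is the query actually evaluated (like a lazy vfs)."""
        qid = int(path.rstrip("/").rsplit("/", 1)[-1])
        q = self._queries[qid]
        return sorted(n for n, text in self.corpus.items() if q in text)

m = QueryMount({"a.txt": "snow crash", "b.txt": "cryptonomicon", "c.txt": "snow day"})
p = m.mk_qdir("snow")
print(p)        # /proc/srch/023013/
print(m.ls(p))  # ['a.txt', 'c.txt']
```

This is exactly where the df-style hazard bites: anything that stats or lists every registered query dir triggers every pending search.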
However, it's a little different for an fs system that can quickly (w/o network-speed limits) generate recursive fs structures.
for example, what if I created a vfs that exposed a 'foo' folder, with a 'foo' folder inside, and so on. The system would crawl an infinite descent of /foo/foo/foo/foo... and so on, which would eventually fill some cache or other.
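The recursive-descent hazard is easy to demonstrate without a real vfs: model readdir as a function that always reports one 'foo' subdirectory, and a naive crawler descends forever unless something external (here an artificial depth cap) stops it.

```python
# A virtual fs whose readdir always reports one subdirectory, 'foo':
# any naive recursive crawler descends /foo/foo/foo/... without end.
def readdir(path):
    return ["foo"]   # every directory "contains" another 'foo'

def crawl(path, depth_limit):
    """Naive traversal; only the artificial depth_limit terminates it."""
    count = 0
    stack = [(path, 0)]
    while stack:
        p, d = stack.pop()
        count += 1
        if d < depth_limit:
            for name in readdir(p):
                stack.append((p.rstrip("/") + "/" + name, d + 1))
    return count

# Without the cap this never returns; with it we visit limit+1 dirs.
print(crawl("/mnt/evil", 1000))   # 1001
```

A real crawler would need some equivalent guard: a depth cap, an fs-type skip list, or an inode-cycle check, since the usual "seen this inode before" trick may not help when each level is freshly generated.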
Good thoughts, and I'm thinking of possible hazards and pitfalls of the approach.
There's already a concept of setting flags to avoid traversal of certain filesystems, and given the proliferation of virtual and networked filesystems, this seems useful: /proc, /sys, /dev, /udev, and a few others (it's getting to the point I don't recognise a full mounts listing readily anymore under Linux).
With the concepts I'm considering, in particular, of the tree as being essentially search traversal rather than a static filesystem (not even one with lots of symlinks all over the place), the potential to create some deep dives or recursive tangles is pretty high.
Another example: tools such as 'locate' or Spotlight shouldn't attempt to traverse this tree when generating indices. Instead, they should query it when requests are made.
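One way an indexer could honour that rule is to partition mounts by fs type at planning time: crawl conventional filesystems, and defer to the search interface of query-backed ones. A minimal sketch, assuming a mount table of (mountpoint, fstype) pairs; the 'docfs' type name is hypothetical:

```python
# Sketch: an indexer consults the mount table and skips filesystems
# flagged as query-backed or virtual, querying them on demand instead.
# The 'docfs' fs type is hypothetical; the others are real Linux types.
NO_INDEX_TYPES = {"proc", "sysfs", "devtmpfs", "docfs"}

def plan_index(mounts):
    """mounts: list of (mountpoint, fstype) pairs.
    Returns (to_crawl, to_query): paths to index vs. paths to defer."""
    to_crawl, to_query = [], []
    for mountpoint, fstype in mounts:
        (to_query if fstype in NO_INDEX_TYPES else to_crawl).append(mountpoint)
    return to_crawl, to_query

to_crawl, to_query = plan_index([("/", "ext4"), ("/proc", "proc"), ("/docs", "docfs")])
print(to_crawl)   # ['/']
print(to_query)   # ['/proc', '/docs']
```

This mirrors what tools like find already do with `-xdev`-style boundaries, just keyed on fs type rather than device.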
Information and metadata leakage is another key consideration when moving data off local host and/or caching among hosts.
Yep, in fact I think a lot of vfs are like this - doesn't /proc/ query the kernel when returning parts of its tree (e.g. those representing processes)?
wrt the fs representing queries: one of the tag systems is like this, it represents tags as folders, and the search for terms 'X' and 'Y' can be found under either folder ./X/Y/ or ./Y/X/ ; obviously, every new tag combinatorially expands the virtual space.
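The tags-as-folders idea is easy to model: a path is really an unordered set of tags, so ./X/Y/ and ./Y/X/ resolve to the same result set, and n tags admit n! equivalent virtual paths. A small sketch with made-up tag data:

```python
# Tags-as-folders: a path names an unordered set of tags, so every
# ordering of the same tags is a distinct virtual directory naming
# the same results - the space expands combinatorially.
from itertools import permutations
from math import factorial

def resolve(path, tagged):
    """tagged: filename -> set of tags. Files matching ALL path tags."""
    tags = set(path.strip("/").split("/"))
    return sorted(f for f, ts in tagged.items() if tags <= ts)

files = {"a": {"X", "Y"}, "b": {"X"}, "c": {"Y", "Z"}}
print(resolve("/X/Y", files))   # ['a']
print(resolve("/Y/X", files))   # ['a'] - same set, different path

# Every ordering of a tag set is its own virtual directory:
tags = ["X", "Y", "Z"]
print(len(list(permutations(tags))), factorial(len(tags)))   # 6 6
```

That n! blow-up is precisely why an index-time crawl of such a tree is hopeless while query-time resolution stays cheap.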
Right. My vision is of a /docfs under which you might traverse /au/stephenson/ti/snowcrash or /ti/snowcrash/au/stephenson, as an example. Either way works.
If a search terminates in multiple (or no) results, it's a directory; if in a single result, it's that file. Plus a few twists (virtual / dynamically generated file formats, summaries, synopses, metadata).
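That "many results = directory, one result = file" rule can be sketched directly, treating a path as alternating facet/value filters. The /au and /ti facets follow the example paths above; the two-entry catalogue is made up for illustration:

```python
# Sketch of path-as-query resolution: facet/value pairs narrow the
# result set, and the endpoint is classified by how many hits remain.
BOOKS = [
    {"au": "stephenson", "ti": "snowcrash"},
    {"au": "stephenson", "ti": "anathem"},
]

def lookup(path):
    """Apply the path's facet/value pairs as filters; classify the result."""
    parts = [p for p in path.strip("/").split("/") if p]
    hits = BOOKS
    for facet, value in zip(parts[0::2], parts[1::2]):
        hits = [b for b in hits if b.get(facet) == value]
    if len(hits) == 1:
        return ("file", hits[0])
    return ("dir", hits)        # zero or many results -> directory

print(lookup("/au/stephenson")[0])               # dir (two matches)
print(lookup("/au/stephenson/ti/snowcrash")[0])  # file
print(lookup("/ti/snowcrash/au/stephenson")[0])  # file (order-free)
```

The twists (synopses, virtual formats) would slot in as extra virtual entries inside any "file" endpoint, but that's beyond this sketch.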