Your alternative is simply copy-pasting without using the system clipboard (old-fashioned style). I imagine there are far fewer attack vectors doing it that way, at a small time cost.
Is there no option for a "local only" clipboard with this?
Control over hardware doesn't really buy you anything here. Just about any hardware can use any filesystem with, in the worst case, the requirement that you have a small boot partition using the legacy filesystem.
Interestingly, with SSD storage devices, control of the hardware can help a lot more: it becomes possible to categorize, fully explore, and, if needed, ensure a particular behavior of commands like TRIM. Other filesystems have the unenviable task of running on any random piece of storage you throw at them, including devices whose firmware straight up lies, or whose hardware delays non-volatility past the point the filesystem assumes (potentially producing data loss in a crash), or similar problems.
Anyway. Overall, I think it's safe to say hardware control doesn't make most of filesystem development much simpler or easier. But there are a few interesting places where it arguably does!
That doesn't really change anything about the filesystem design. A storage device can fail to write data it claims to have because of damage as well as design defects. When that happens, a reliable filesystem will detect it and a less reliable filesystem will catch on fire.
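The "detect it" part usually comes down to end-to-end checksumming, as ZFS and Btrfs do: store a checksum with each block and verify it on read, so a device that silently dropped or corrupted a write is caught instead of trusted. Here's a minimal toy sketch of that idea (the dict-backed "device" and function names are invented for illustration):

```python
import hashlib

def write_block(store, checksums, addr, data):
    # Store the data alongside a checksum of its contents.
    store[addr] = data
    checksums[addr] = hashlib.sha256(data).hexdigest()

def read_block(store, checksums, addr):
    # On read, recompute the checksum; a mismatch means the device
    # returned something other than what the filesystem wrote.
    data = store[addr]
    if hashlib.sha256(data).hexdigest() != checksums[addr]:
        raise IOError(f"checksum mismatch at block {addr}")
    return data

store, checksums = {}, {}
write_block(store, checksums, 0, b"hello")
assert read_block(store, checksums, 0) == b"hello"

# Simulate a device that silently lost the write and returned stale data:
store[0] = b"stale"
try:
    read_block(store, checksums, 0)
except IOError as e:
    print(e)  # checksum mismatch at block 0
```

The point is that detection doesn't require trusting the hardware at all, which is exactly why controlling the hardware doesn't buy the filesystem much here.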
It also doesn't help to control 100% of the built-in storage if anybody can still plug in $GENERIC_USB_MASS_STORAGE_DEVICE and expect to use the same filesystem.
Many filesystems exist that do not run on a "plain" read/write block device, because storage based on flash is more complicated than the old random-sector-access magnetic hard drives. See for example UBIFS and JFFS2 on Linux.
Having full and direct low-level control of on-board SSDs could very well be advantageous for performance and longevity of the flash on modern macbooks. Things like combining TRIM with low-level wear leveling etc.
Taking advantage of the differences between flash and spinning rust only requires that you know which one you're running on.
Moving the wear leveling code into the OS where the filesystem can see it is an interesting idea but why aren't we doing that for all SSDs and operating systems then?
Raw flash and spinning rust are fundamentally different: spinning rust drives provide READ SECTOR and WRITE SECTOR primitives, while raw flash provides READ SECTOR, ERASE (large) BLOCK, and WRITE (small) SECTOR primitives. Filesystems like UBIFS do try to move the wear-leveling code into the OS. But the big players, Windows' NTFS and the Mac's HFS, were originally designed for the spinning-rust primitives, so vendors of flash storage (SSDs, USB sticks, etc.) had to provide a translation layer that emulates the spinning-rust primitives on top of the NAND flash primitives.

I'm sure various NAND flash vendors have different characteristics / spare blocks / secret sauce / defects that are masked by proprietary firmware, and they probably see a significant business advantage in keeping those secret. Even building smarts into the firmware, like knowing that a FAT filesystem is likely to rewrite the file allocation table far more heavily than file contents on a USB stick where FAT is the likely fs, could prove an advantage. So being the single vendor behind the entire stack, from the raw NAND flash to the motherboard it's soldered onto to the OS, is likely very advantageous.
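The asymmetric primitives above can be sketched as a toy model. This is purely illustrative (class names, page counts, and the "None means erased" convention are all invented), but it captures the two constraints that make the translation layer necessary: no in-place overwrite, and erasure only at a much coarser granularity than writes:

```python
PAGES_PER_BLOCK = 4  # real NAND blocks hold far more pages; small for clarity

class RawFlash:
    """Toy NAND device: per-page reads/writes, per-block erases."""

    def __init__(self, num_blocks):
        self.pages = [None] * (num_blocks * PAGES_PER_BLOCK)  # None = erased

    def read_page(self, n):
        return self.pages[n]

    def write_page(self, n, data):
        if self.pages[n] is not None:
            # Real NAND cannot overwrite a programmed page in place.
            raise ValueError(f"page {n} not erased; erase its block first")
        self.pages[n] = data

    def erase_block(self, b):
        # Erasure only works at block granularity, wiping every page in it.
        start = b * PAGES_PER_BLOCK
        for n in range(start, start + PAGES_PER_BLOCK):
            self.pages[n] = None

flash = RawFlash(num_blocks=2)
flash.write_page(0, b"v1")
try:
    flash.write_page(0, b"v2")   # fails: no in-place overwrite
except ValueError as e:
    print(e)
flash.erase_block(0)             # wipes pages 0-3, not just page 0
flash.write_page(0, b"v2")
```

A translation layer (or a flash-aware filesystem like UBIFS) hides this by writing updated data to a fresh page elsewhere and remapping the logical sector, which is also where wear leveling naturally lives.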
Apple's EFI firmware has an HFS driver built into it. The way today's macOS boots: the firmware reads the bootloader off the boot partition (created these days on Core Storage installations), and the bootloader (more correctly, OSLoader) is what enables the firmware pre-boot environment to read Core Storage (encrypted or not) and thus find and load the kernel and kext cache, and then boot.
It would be very awesome to have this backend data user-contributed; it would make the burden of adding and improving things a lot lighter on you. I'm looking through the sections and see lots of little tiny things that would be a quick PR on GitHub, and I'm sure there are tons of people who'd help out.
I was under the impression that NumPy also just calls BLAS underneath? Hence why doing element-wise calls in NumPy is far, far faster than simply doing nested for loops.
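Minor correction to my own framing: NumPy dispatches linear algebra (dot products, matmul) to a BLAS backend, but element-wise operations run in NumPy's own compiled C loops, not BLAS. Either way, the speedup comes from moving the loop out of the interpreter. You can see that effect in pure Python without NumPy at all, since built-ins like sum() also run their loop in C:

```python
import timeit

data = list(range(100_000))

def loop_sum(xs):
    # Explicit Python-level loop: each iteration pays interpreter overhead.
    total = 0
    for x in xs:
        total += x
    return total

# Same result either way...
assert loop_sum(data) == sum(data)

# ...but sum() runs its loop in C, the same reason NumPy's
# element-wise ops (and BLAS-backed matrix ops) are so much faster.
t_loop = timeit.timeit(lambda: loop_sum(data), number=20)
t_builtin = timeit.timeit(lambda: sum(data), number=20)
print(f"loop: {t_loop:.3f}s, builtin: {t_builtin:.3f}s")
```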
But I think this is the great strength of Python. It's a glue language. If you need speed, you can always write a wrapper around a C/C++ library.
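And "wrapper" can be as lightweight as ctypes from the standard library, no extension module or build step required. A minimal sketch, calling the C math library's cos() directly (the fallback to CDLL(None), which searches symbols already loaded into the process, is a portability assumption that holds on typical Linux setups):

```python
import ctypes
import ctypes.util
import math

# Locate and load the C math library; fall back to symbols already
# loaded into the process if find_library comes up empty.
name = ctypes.util.find_library("m")
libm = ctypes.CDLL(name) if name else ctypes.CDLL(None)

# Declare the C signature so ctypes converts arguments correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
assert abs(libm.cos(0.0) - math.cos(0.0)) < 1e-12
```

For anything performance-critical you'd reach for a real binding (Cython, pybind11, cffi), but the glue-language point stands: the fast path lives in C, and Python just orchestrates it.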
What would differentiate this from something like CMake's current package system? As far as I can tell, the only advantage of something like this comes if a majority of packages adopt it (like pip).
> And every so often the community goes off on some crusade. Everything must be Design Patterns. Everything must be specified in XML. Everything must be a Bean. No, everything must be Injection. No, no, everything must be Annotation.
You couldn't have described this better for me. This is my exact frustration with the ecosystem, and it's the only thing that keeps me from using Java as my default language of choice. The language itself can be used very well; I just wish the tooling were more concise and functional.