Is mdfind a Windows executable? Is there a standalone version that I might be able to use on the rare occasions I need to fight with somebody's Windows box?
mdfind is the command line interface to macOS "Spotlight", which is the global file index. So it can do things like full-text search, in addition to matching metadata values or finding files bigger than some size.
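A couple of examples from memory (double-check the exact syntax):

    mdfind "some phrase"                                      # full-text search against the Spotlight index
    mdfind -onlyin ~/Documents "kMDItemFSSize > 100000000"    # metadata query: files over ~100 MB
    mdfind "kMDItemContentType == 'com.adobe.pdf'"            # every PDF Spotlight knows about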
I don't know Windows well enough to know the equivalent. But I think there is an index on Windows, and PowerShell may be able to poke at it.
mdfind is a built-in on macOS. It's similar to find (which should be on your *nix system if you have one, and can be installed with Cygwin on Windows). On Windows, you'd use PowerShell and Get-ChildItem (I don't think it's case-sensitive, but I don't use PS much).
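Something like this, I believe (untested, path is just a placeholder; note it walks the directory tree rather than querying the Windows Search index):

    Get-ChildItem -Path C:\Users\me\Documents -Recurse -Filter *.pdf | Where-Object Length -gt 100MB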
I meant that I have repositories checked out of GitHub that contain PDF files. Most repositories that I check out of GitHub are in ~/code, which I back up in case they disappear upstream. But it does look like a lot of those files actually are academic papers.
I haven't had it crash. Typically my documents are under 300 pages (either academic books or papers), but occasionally I use the PDF reference, which is 1300 pages.
Yeah, I've been very happy with it over the years. But my one minor nit-pick is that it uses a _lot_ of memory. I presume it is pre-rendering stuff for speed.
Nice work, thanks for taking the time to write it up. I regret not doing that for my projects.
I also did something similar back around 2014 in https://github.com/dunhamsteve/iwork but I didn't get much further than tables on the Numbers side before taking a break. There I translated iWork files to HTML. That code has been largely neglected since then, and I never wrote up my process. Like the other commenter, I based it on https://github.com/obriensp/iWorkFileFormat
For ObjC programs that don't embed the descriptors, I wrote a Python script that reverse engineers protobuf schemas from disassembled code: https://gist.github.com/dunhamsteve/224e26a7f56689c33cea4f0f... I don't remember what project that was for, but maybe it's useful to someone.
And for Notes.app, I reverse engineered the schema from the binary protobuf data. Since there is ambiguity between raw binary data and nested objects, my script would build a tentative schema and then refine it against further examples. I later learned that the full schema, in text form, was embedded in the web version of the application. That project is at https://github.com/dunhamsteve/notesutils and is also neglected. I believe the table format has changed enough that tables are no longer working.
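A sketch of the idea only (my script was Python; this is JS just to show the shape). In the protobuf wire format, each field starts with a varint tag of (fieldNumber << 3) | wireType, and wire type 2 (length-delimited) is the ambiguous one: it could be a string, raw bytes, or a nested message, so you try to parse the payload as a message and only call it a message if that parse succeeds on every sample you see.

    function readVarint(buf, pos) {
      let value = 0, shift = 0, b;
      do {
        b = buf[pos++];
        if (b === undefined) throw new Error('truncated varint');
        value += (b & 0x7f) * 2 ** shift;
        shift += 7;
      } while (b & 0x80);
      return [value, pos];
    }

    // Returns [field, guessedType] pairs, or null if buf doesn't decode cleanly
    // as a message (in which case it's probably just bytes or a string).
    function tryParseMessage(buf) {
      const fields = [];
      let pos = 0;
      try {
        while (pos < buf.length) {
          let tag, len;
          [tag, pos] = readVarint(buf, pos);
          const field = tag >>> 3, wire = tag & 7;
          if (field === 0) return null;
          if (wire === 0) {                       // varint
            [, pos] = readVarint(buf, pos);
            fields.push([field, 'varint']);
          } else if (wire === 1 || wire === 5) {  // fixed64 / fixed32
            pos += wire === 1 ? 8 : 4;
            if (pos > buf.length) return null;
            fields.push([field, wire === 1 ? 'fixed64' : 'fixed32']);
          } else if (wire === 2) {                // length-delimited: the ambiguous case
            [len, pos] = readVarint(buf, pos);
            if (pos + len > buf.length) return null;
            const payload = buf.slice(pos, pos + len);
            pos += len;
            fields.push([field, payload.length > 0 && tryParseMessage(payload) ? 'message?' : 'bytes']);
          } else {
            return null;                          // unknown wire type => not a message
          }
        }
      } catch (e) {
        return null;
      }
      return fields;
    }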
I remember the Sun monitors (21" I think) were about 80 lbs. I've read that part of that weight was a metal frame holding all of the wires in front of the screen.
They were fun monitors - we had a lab full of them, and they would degauss on startup; the degausser would couple into the monitor next to it (and a little bit into the monitor after that).
The weight of a CRT is mostly about the amount of glass required to keep the atmosphere out; it's essentially a vacuum bottle with better marketing. On that 21" display, you've got about 6000 pounds of force trying to push the face inward, not to mention the sides, neck, etc.
Yeah, these monitors were a bit heavier than most consumer 21" monitors at the time. We also got ViewSonic monitors for PCs, which were lighter. At the time, I assumed the additional weight was from extra shielding, but I later read that some of the weight difference was a metal frame holding the wires. The Trinitron had a bunch of vertical wires instead of a grid of holes on the front - if I remember right, they'd shimmer a little if you smacked the side of the monitor with your hand.
It's possible they also had more glass than is typical for a 21" monitor; I don't recall whether they were any flatter than the ViewSonics or not.
Right. I bought two out of a university lab and lived in a sixth-floor apartment with only stairs. When I moved out, those monitors were more difficult to get down than the couch...
Don't get me started on fixed-frequency monitors, X11 modeline guessing (wrong, of course), and needing a second monitor just to get back to the original config.
Our app reports all of the runtime exceptions to the server. We had one years ago (maybe before 2008) that was caused by somebody's "toolbar" replacing a method like Element.appendChild with one that sometimes crashed.
This inspired me to add a list of all script tags to error reports.
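Roughly like this (a sketch, not our actual code; the /errors endpoint is made up):

    // On every uncaught error, include the src of each script on the page so the
    // report shows what third-party code (toolbars, injected tags) was present.
    window.addEventListener('error', (e) => {
      const scripts = Array.from(document.scripts).map((s) => s.src || 'inline');
      navigator.sendBeacon('/errors', JSON.stringify({
        message: e.message,
        source: e.filename,
        line: e.lineno,
        scripts: scripts,
      }));
    });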
It was a series of experiments with new approaches to programming. Kind of reminded me of the research that gave us Smalltalk. It would have been interesting to see where they went with it, but they wound down the project.
I worked on this project, so I can give some insight. The main reason we didn't keep working on it was that it was VC-funded and we didn't have a model for making money in the short term. At the end we were pursuing research related to natural language programming and reinforcement learning in that area (I recently blogged about it here: https://mech-lang.org/post/2025-01-09-programming-chatgpt), and we were considering folding our small team into OpenAI or Microsoft or something. But we wanted to work as a team, and no one wanted to take us as a team, so we called it.
It didn't get far enough to be "used" in a production sense. There was enough interest and people were playing around with it, but no real traction to speak of. Frankly, language projects are difficult because these days they have to be bootstrapped to a certain size before there's any appreciable use, and VCs are not patient enough for that kind of timetable.
In general, yes, although it's nice to have more than one JavaScript implementation. And one advantage of JSC is that it implements proper tail calls (per the ES6 spec).
I wrote my own language that targeted JavaScript. When I made my language self-hosting, I initially used `bun` (which is based on JSC) so I wouldn't have to implement a tail call transformation myself. It was expedient.
My goal was to run in the browser, so I did eventually add that transformation and found that my compiler was 40% faster running in Node (based on V8).
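For anyone curious, the usual options are rewriting self tail calls into loops or bouncing them on a trampoline. A toy trampoline sketch (not what my compiler actually emits):

    // Naive tail recursion relies on proper tail calls, so it blows the stack
    // in V8 for large n (JSC handles it fine):
    //   const sumTo = (n, acc = 0) => n === 0 ? acc : sumTo(n - 1, acc + n);

    // Trampolined version: the tail call becomes a returned thunk, and a driver
    // loop keeps invoking thunks until it gets back a non-function value.
    const trampoline = (fn) => (...args) => {
      let result = fn(...args);
      while (typeof result === 'function') result = result();
      return result;
    };

    const step = (n, acc = 0) =>
      n === 0 ? acc : () => step(n - 1, acc + n);

    const sumTo = trampoline(step);
    console.log(sumTo(1000000)); // 500000500000, with constant stack depth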
> In his 1807 Almanach des Gourmands, gastronomist Grimod de La Reynière presents his rôti sans pareil ("roast without equal")—a bustard stuffed with a turkey, a goose, a pheasant, a chicken, a duck, a guinea fowl, a teal, a woodcock, a partridge, a plover, a lapwing, a quail, a thrush, a lark, an ortolan bunting and a garden warbler—although he states that, since similar roasts were produced by ancient Romans, the rôti sans pareil was not entirely novel.
That gets you your 7-log reduction of salmonella, so it is safe to eat, but I don't know whether it would be "cooked" (i.e., changed to an acceptable texture) if you could instantaneously bring it to 165°F.
I have no idea what that cooking process is like. In a water bath, I run chicken breast at 62C instead of 60C because the texture is better for dicing and putting in the kids' lunches or wraps. I might try 60C if I were searing and serving it whole. I haven't done dark meat this way, but I suspect it'd need a higher temperature or a longer time to break down the connective tissue. And I know that at lower temperatures (58C? - I haven't made them in years), you need to hold short ribs for a couple of days.
I can say I've cooked chicken sous vide incorrectly before - it had cooked long and hot enough to be safe, but the texture and feel of the meat could only be described as a meat gusher, if you've ever had those candies. Every bite exploded with liquid and the meat itself was squishy; it was very disgusting.