Not to mention that while Parquet fixes the "delimiter problem", it doesn't fix the "encoding problem".
In (simplistic) CSV, you have to pick the right delimiter or it mangles some of your data.
In Parquet you have to pick the right data type encodings for each column for your data or it gets mangled.
Your clean monetary fixed-precision decimal data from the source system becomes floating point slop in your "I didn't want to think about data types"-encoded Parquet file and then starts behaving differently (or even changing values!) due to the nature of floating point precision artifacts. Or your blanks become 0s or nulls, etc, etc.
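To make the decimal point concrete, here's a minimal sketch (assuming pyarrow; the file names are just illustrative) of the same price column written with an explicit fixed-precision encoding versus a "don't make me think about types" float64:

```python
from decimal import Decimal

import pyarrow as pa
import pyarrow.parquet as pq

prices = [Decimal("0.10"), Decimal("0.20"), Decimal("0.30")]

# Explicit fixed-precision encoding: values round-trip exactly.
exact = pa.table({"price": pa.array(prices, type=pa.decimal128(10, 2))})
pq.write_table(exact, "prices_decimal.parquet")

# The lazy encoding: the same values as float64.
sloppy = pa.table({"price": pa.array([float(p) for p in prices], type=pa.float64())})
pq.write_table(sloppy, "prices_float.parquet")

total = sum(pq.read_table("prices_float.parquet")["price"].to_pylist())
print(total)  # 0.6000000000000001 -- not the 0.60 the source system had
```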
And don't get me started on character set encodings!
This is the next evolution of the "My film does not use CGI" sneering. Sure, doing proper pre-rendered VFX with photo-realism is great, and the people doing it love it. But can it be done on the budgets/fixed bids/turnarounds when the producer comes with "...and all of that will be a full virtual set and it should be streaming next Monday morning", for peanuts?..
If it's Gore saying it - maybe he should talk to his producers then, and ask them whether they actually have budgeted the "proper" VFX talent/timelines for the show. He has creative control - the people doing the work do not.
This is interesting to me, because I see this kind of comment on almost every Zed post.
I haven't used a low-DPI monitor for more than a decade, I'm pretty sure, so for me the weird blocker with Zed is the "OMG YOU HAVE NO GPU!!!! THIS WILL NOT END WELL!" warning (I run a lot of Incus containers via RDP, and they mostly have no GPU available).
But what kind of monitors are you low-DPI people using? Some kind of classic Sony Trinitron CRTs, or what? I'm actually curious. Or is it not the display itself, but some kind of OS thing?
Depending on the definition, I'm not a low-DPI user myself, but in my friend group I seem to be the only person who cares about >160 dpi; lots of people are using 1440p displays, or >34" 4k displays. In Apple's mind, high dpi (e.g. Retina) is >218 dpi, so my lowly 34" 5120x2160 doesn't count for them. But it is >160, which is my personal threshold for hi dpi.
There aren't all that many >20" displays on the market that meet Apple's definition of high dpi, and not a ton more that meet my much looser definition.
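For what it's worth, the density numbers in this thread fall straight out of the resolution and the diagonal; a quick back-of-the-envelope sketch (sizes taken from the examples above):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count divided by diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

print(round(ppi(5120, 2160, 34)))  # ~163 -- over my 160 threshold, under Apple's 218
print(round(ppi(2560, 1440, 27)))  # ~109 -- a typical 1440p desktop monitor
print(round(ppi(3840, 2160, 27)))  # ~163 -- 4k at 27"
```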
I have a 4-5 year old ultra wide monitor which is a lot of pixels but low dpi. I really like the single monitor containing two screens worth of pixels, but I wish it was high dpi. At the time there weren’t really high dpi ultra wides available, and they’re still expensive enough that upgrading isn’t a high priority for me… but I’m sure I will at some point.
Mine is 2560x1440, which is a pretty nice "sweet spot" size. A comparable 5k to 6k display still commands a substantial price, and - given that I work at two locations - would require me to have two of them. The screen I use as my current main display (a 3x2 BenQ) also has some amount of subsampling going on, because running it at 2x ("Retina native HiDPI") makes all the UI controls too damn big and doesn't leave enough space. Running it at 1x (everything teeeny-tiny) is just not very good for my eyesight and not very workable - and, again, with Zed it bumps into the same broken antialiasing rasterizer they have.
And it is not an OS thing. The OS renders subpixel antialiased fonts just fine. But Zed uses its own font rasterizer, and it completely falters when faced with a "standard passable resolution" screen - the letters become mushy, as if they have been blurred - and rather sloppily at that.
Linux and Windows are significantly better for both 1440p and 4k monitors. Both Linux and Windows have subpixel rendering and configurable font hinting for 1440p. And they both have fractional scaling UIs for 4k. macOS on the other hand only really looks acceptable on a 5k monitor.
When people say things like "mine is 2560x1440" on HN, are they talking about the Mac scaled resolution? I feel like some context is always missing from resolution discussions, and it's a topic non-technical people can weigh in on as well.
The 2560x1440 is QHD which is kind of a happy medium: high resolution enough to look really sharp, but not so high resolution that you have to scale it up like Macs do on retina displays. Having had retina Macs (and been very happy with them) since they came out, I've been using 16" and 17" QHD panels on my linux laptops for about five years... and they are actually just fine.
I actually don't understand what I'm missing. I'm using two old monitors, a 27" at 2560x1440 and a 23.5" at 1920x1080 (in addition to my high DPI Framework 13 screen). How else can I get at least 4480 across (after scaling to a font size I can read - I'm 49) and still cover that many inches? My DPI right now is about 100, so to double that, wouldn't I need 8960 across 44 inches? I don't really want to pay $1500 for resolution my eyes are probably too old to notice.
It’s okay, eyes are just different. I personally enjoy 220 DPI, but 60Hz looks absolutely fine. However, at the workplace enough people complain about 60Hz that all the monitors at work are 120Hz. I don’t notice any additional smoothness at all, so it’s all wasted on me.
Typical DPIs are still all over the place depending on the demographic. Macs have been ~200dpi forever, while cheap PCs are still mostly ~100dpi, and decent PC setups tend to land somewhere in the middle with ~150dpi displays which are pretty dense but not up to Mac Retina standards. Gamers also strongly favor that middle-ground because the ultra-dense Mac-style panels tend to be limited to 60hz.
Zed started out as a Mac-only app, and that's reflected in the way their font rendering works.
I guess that makes sense. I'm a 280ppi convert, so I judge Mac users with pity — Linux and Windows work perfectly with my 31.5" 8K display (from fuckin' 2017 btw...) but Macs can only drive it at 6K, which adds a fuzz factor.
Unless you use it at 4K, but macOS isn't really usable that way (everything way too small).
But yeah, it's 60Hz. Which has sucked ever since I accidentally got a 120Hz display, so now 60 Hz looks like 30Hz used to...
I had a chance to try that LG 45GX950A-B at Yodobashi Camera in Akihabara the other day, and... that measly 125ppi might overperform at the distance you have to put it at. But then again my 50-year-old eyeballs are starting to be like "anyway you need your glasses bro" so... YMMV
What does that mean? If the monitor only requires 15W to operate, that's a good thing, right? Unless monitors are expected to use less than that? I'm not familiar with reading monitor spec sheets.
To add on to what jsheard said, for this feature to be usable (i.e., charging your laptop just by plugging in the monitor), you need this number to be about what your laptop's charger provides. At 15W, even a MacBook Air would slowly run out of power while plugged into this monitor, assuming you don't plug a second cable into your laptop. 65W or 90W is a much more normal value for a feature like this.
That all makes sense. The only thing I was missing was that this refers to power output. It seems like kind of a niche and tenuous value-add for a monitor. Why would I want to get power from my monitor?
Both at work and at home, I can plug my laptop into my monitor with a single cable. That single cable charges my laptop, connects the display, and passes through a USB hub built into the monitor that connects my keyboard and webcam. It's _incredibly_ convenient. It's also just a lot less cabling. You can think of it like a dock, built into the monitor for free.
> It seems like kind of a niche
Different workflows/circles. It's not something you're likely to use with a desktop, mainly with a laptop. It also really only works well if you use thunderbolt. It's reasonably common but probably not a majority where I work, where 90% of dev machines are macs.
I used to have the same complaint, and recently swapped to 4k monitors. I thought that would solve my zed font problems, but text presentation is still bad. In zed, it feels like there is significantly more spacing between each line compared to vscode (or any other text editor).
It blows my mind that the most ubiquitous computer screen resolution worldwide is considered too niche for decent support by the Zed project. Hopefully that will change in 2026?
The gamer market, while overlapping with the developer market, is not a perfect circle. And where the circles do overlap, devs often work on a different display than they game on.
I do not doubt some people experience some issues, but I have regularly used Zed with 1080p and 1440p 24" monitors (on macOS) and I never experienced any font rendering problem. Saying "Zed does not render fonts well on low-DPI monitors" is a bit of an exaggeration.
Perhaps reserve your mind being blown for situations where the GP hasn’t confused pixel count and pixel density.
Zed “supports” 1080p monitors just fine. Supports is in quotes because it doesn’t need to do anything nor care at all about the count of pixels on the screen.
If you can call the left image [1] "supporting 1080p" I guess Zed supports it. But it looks like VS Code and other editors somehow support it better without getting blurry.
Keep in mind that Zed developers [2] consider blurry fonts on low DPI displays a Priority 1 issue, and a reproducible bug that is commonly encountered.
I'm sure if there was no blurry font issue with Zed, they would just close this bug report.
Try finding a new 1080p screen small enough to count as high DPI; there aren't many! Using 218ppi from elsewhere in the thread as the threshold, you'd need a 10 inch 1920×1080 screen to achieve it, so a 1080p computer screen is almost certainly low DPI.
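That 10-inch figure is just the threshold arithmetic run backwards:

```python
import math

# Largest diagonal at which a 1920x1080 panel still reaches 218 ppi:
print(math.hypot(1920, 1080) / 218)  # ~10.1 inches
```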
"1080p is the most popular screen resolution by far" for people who use Steam and therefore may be optimising for framerate or connecting to a TV, and even that's pretty meaningless without knowing the physical dimensions of the screen. 1080p is a screen resolution, DPI is screen density - 1080p on a phone is pretty high DPI.
Define low dpi. Apple's definition has been >218dpi, which is much higher than 4k@27", which is about the smallest 4k monitor one can buy (excluding 15" portable monitors).
Interesting article, but I would say it mixes up two concerns. One is retrieving trees from the DB and storing them - which can be annoying but has nothing to do with permissions. The other is "hiding" unpermitted nodes/branches from the viewer (if that is what applying permissions is about - it can also handle read-only things, for instance). If these two concepts get separated and it is not a big deal to "overfetch" for the current user before doing the filtering, things become way easier.

When the tree is reconstructed, you can do a breadth-first traversal and compute permissions for every item in there - or retrieve the permissions for the items at that level, if you are doing ACL stuff. From there, if there is no permission for the current viewer on a node, you exclude it from further scans and do not add its children to further traversals as you go down. Max number of scans = tree depth. With some PG prowess you could even fold this into sophisticated SQL.
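A minimal sketch of that traversal in Python (the node shape and the batched fetch_permissions(ids, viewer) ACL lookup are hypothetical stand-ins, not anything from the article):

```python
def visible_tree(root, viewer, fetch_permissions):
    """Breadth-first filtering: one batched permission lookup per level,
    and pruned branches are never enqueued, so lookups <= tree depth."""
    if root["id"] not in fetch_permissions([root["id"]], viewer):
        return None
    root_copy = {"id": root["id"], "children": []}
    level = [(root, root_copy)]
    while level:
        children = [(child, parent_copy)
                    for node, parent_copy in level
                    for child in node["children"]]
        if not children:
            break
        allowed = fetch_permissions([child["id"] for child, _ in children], viewer)
        level = []
        for child, parent_copy in children:
            if child["id"] not in allowed:
                continue  # prune: this subtree is never scanned again
            child_copy = {"id": child["id"], "children": []}
            parent_copy["children"].append(child_copy)
            level.append((child, child_copy))
    return root_copy
```

Here fetch_permissions is the only part that would touch the database; the pruning itself is pure in-memory work on the overfetched tree.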
It depends on whether the ELAC is an LRU (line-replaceable unit, i.e. a box with ports that can be swapped at an airport) and whether a software update can be uploaded into a unit that is installed (not all aircraft have a "firmware update via cable or floppy", so to speak)
This ELAC version is 100-something, and the A320 first flew around 1988. Why the updates? For example, there have been updates to flight control law transitions - like after 2001, when an aircraft limited flight control inputs during landing, thinking it was preventing a stall, because it did not go into flare law appropriately. See https://en.wikipedia.org/wiki/Iberia_Flight_1456
The cause could have also been an extra check introduced in one of the routines - which backfired in this particular failure scenario.
True. I would say, however, that every "concept" of airliner flight deck has its own gimmicks that can kill. The Airbus "dual input" is such a gimmick. Even though there was, for example, an AF accident with a 777 where there was hardware linkage between yokes and the two pilots were fighting... each other. Physically.
The official report doesn't identify the lack of sidestick linkage as a factor in the accident. Neither of the two pilots who were at the controls had any idea what was happening. Both pulled back on their sticks repeatedly right up to the moment of impact. The captain, who eventually realized (too late) that the plane was stalled, was standing behind them, and so would not have benefited from linked sticks.
There was more than one case where pilots would accidentally fight and break the linkage, or one would overpower the other.
One glider instructor talked about taking a stick with him in case of a panicking student, so he could hit them hard enough that they would stop holding the controls.
The animations were there, but they were frame-based, with the number of frames carefully calculated to show UI state changes that were relevant. For example, when you would open a folder, there would be an animation showing a window rect animating from the folder icon into the window shape, but it would be very subtle - I remember it being 1 or 2 intermediate frames at most. It was enough to show how you get from "there" to "here" but not dizzyingly egregious the way it became in Aqua.
Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie or other former NeXT seniors post-acquisition) noticed that, with dynamic loading, hard drive speed, and the ubiquitous dynamic dispatch of ObjC, OSX would just be extremely, extremely slow. So they probably conjured a scheme to show fancy animations and woo everyone with visual effects to conceal that a bit. Looney town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.
There were also quite a few tricks used all the way from the dithering/blitting optimizations on the early Macs. For example, if you can blit a dotted rect for a window being dragged instead of buffering the entire window, everything underneath, the shadow mask - and then doing the shadow compositing and the window compositing on every redraw - you can save a ton of cycles.
You could very well have do-wait-do-wait loops when custom text compositing or layout was involved and not thoroughly optimized - like in early versions of InDesign, for instance - but it was the exception rather than the rule.
> Truth be told, I do have a suspicion that some folks (possibly some folks close to Avie or other former NeXT seniors post-acquisition) noticed that, with dynamic loading, hard drive speed, and the ubiquitous dynamic dispatch of ObjC, OSX would just be extremely, extremely slow. So they probably conjured a scheme to show fancy animations and woo everyone with visual effects to conceal that a bit. Looney town theory, I know, but I do wonder. Rhapsody was also perceptually very slow, and probably not because of animations.
Done exactly this myself to conceal ugly inconsistent lags - I don’t think it is that uncommon an idea.
I think that ObjC's dynamic dispatch is reasonably fast. I remember reading something about being able to do millions of dynamic dispatch calls per second (so less than 1 µs per call) a long time ago (2018-ish?), but I can't figure out how to find it. The best I could come up with is [1], which benchmarks it as 2.8 times faster than a Python call, and something like 20% slower than Swift's static calling. In the Aqua time-frame, I think it would not have been slow enough to need animations to cover for it.