Hacker News | mschaef's comments

> The Mac Desktop is vastly inferior to the Linux world

Asking out of curiosity, why is this? What's the functionality you miss on Mac?


Most of it is there, but you need a crap-load of third-party extensions, and some even cost money.

Things like proper alt-tab, better keyboard configuration, a decent file manager (Finder is the worst I have ever used), a classical task bar, and so on.

You can manage, but the defaults are really bad for power users.

Honestly, Apple just needs to let me install a proper desktop environment like KDE on it. The Unix base is decent; just give me more freedom.


To be fair, KDE is also pretty wonky out of the box (basic stuff like turning on numlock at boot is unnecessarily buggy or confusing).

You usually also need a bunch of extensions, and 50% of them are broken for various reasons if you try to use KDE's built-in extension system.


The one I have always missed is proper focus-follows-mouse support. The Mac desktop always feels really clunky without that when working with multiple windows.

Personally, most of my problems with MacOS (and Apple's operating systems) would be fixed if it were faster. The OS is full of very lengthy animations that aren't necessary, such as when switching between desktops.

On Windows, it looks like it's possible to disable all animations, including the desktop-switch animation, but you have to press two buttons to switch.

Some of it's size, some of it's the fact that the camera is a second device, and some of it's workflow.

I tried a Sony RX100 (1" sensor) when they first came out, optimistic about the possibility of using it for 'general purpose' photography. After all, it's small enough.

The problem was, it's a second device to carry around and keep charged. Then once you capture the image, it's largely stuck on the device until you find a way to offload your images. I briefly experimented with cables that would let me do things like transfer images from the RX100 to my (Android at the time) mobile phone, for archiving and sending to family and friends. That turned the whole thing into the sort of science fair project that I didn't have time for as the parent of a very young child. (Although in fairness, I can't think of a single time in my life when I'd have had the patience, kids or not.)

This is why, for all the arguments you can make against them as cameras, I've come to be very thankful for the amount of effort that Apple and others have made to get appealing images out of devices I always carry around anyway. I can take a set of pictures, edit them, have them automatically archived to cloud storage, and send them to whoever I want... all with a single device I was carrying around anyway.

This leaves open the fact that the 'real' camera workflow is still an option when there's the need for higher image quality and the time (or money to hire a photographer) to take advantage of what a DSLR or the like can do.

(When I compare what I can do with my iPhone to what my parents had available to them (a 110 format camera and 35mm Nikons), I like the tradeoffs a lot better. The image quality available now is definitely better than the 110. Some of those 35mm exposures are probably better quality than what I can get out of an iPhone, but they're all stuck in albums and slides, and nobody ever looks at them.)


> Then once you capture the image, it's largely stuck on the device until you find a way to offload your images. I briefly experimented with cables that would let me do things like transfer images from the RX100 to my (Android at the time) mobile phone, for archiving and sending to family and friends. That turned the whole thing into the sort of science fair project that I didn't have time for as the parent of a very young child. (Although in fairness, I can't think of a single time in my life when I'd have had the patience, kids or not.)

Most modern cameras now have a WiFi-based photo transfer system that works pretty well. It's not instantaneous, but it is quick enough to copy the photo you want to share with a friend or partner while you finish a meal or drink your coffee.


This is true, but switching to that mode is frustrating, and you often have to use AWFUL mobile OS software to get the images. And my DSLR shoots like 25 FPS, and each raw file is 80MB. This is NOT fast to send over WiFi.

Waiting until I can plug in the 2TB memory card to my Mac and use a huge screen to review all the photos is far more efficient even if it has much higher startup latency.

Honestly, this is a good reason to choose the iPhone Pro over the Air or the standard model: the 10 Gbps USB port. Plug the Nikon into the phone for cloud upload. This would be the fastest path of all. Most people only focus on the iPhone's USB bandwidth for downloading from the phone.


I haven't had too much trouble with the Canon App, but YMMV.

The RX100 has had wifi transfer since the 3rd gen.

I understand the "second device to carry around" but it isn't a real point for baby pics you might take at home. A ridiculous number of times I have no idea where I last put my phone anyway and sometimes have to make it ring from kde connect on my laptop so it is not like a smartphone is necessarily readily available at all time anyway.

I also know a number of people who don't leave home with their smartphone anyway for short errands, since they have an Apple Watch. That leaves a pocket available for those who would prefer having a camera.


> The RX100 has had wifi transfer since the 3rd gen.

On an iPhone, I can take the picture and I'm immediately a button press away from a photo editor and then whoever I want to send it to.

(A camera that automatically tethered to a phone and dumped pictures into the phone's camera roll would mostly solve the workflow issues I'm mentioning here. Would not surprise me if this already exists.)

> I understand the "second device to carry around" but it isn't a real point for baby pics you might take at home.

Maybe. The camera still has to be charged and in mind and hand. (Then as soon as the kids leave the house you're back to where you were and having to carry something around that you might not otherwise.)

> I also know a number of people who don't leave home with their smartphone anyway

I see that... different people have different sorts of relationships with personal electronics. For me, it wound up being that I'd carry a cell phone and that was about it. Even in the pre-smartphone days, when I might have carried a PDA, I either wouldn't or couldn't.


I have a couple of DSLRs and a large-frame compact, and I wholly get your point. The image quality on even an older DSLR is better, mainly due to the physics of the optics - there's nothing like a high-quality lens dumping a bunch of light on a large sensor.

However... it's really hard to overstate the workflow and convenience aspects of shooting with a phone. (Particularly as a parent, and even more so when I was a new parent of a small child.) The phone has the twin benefits of 1) being present almost always and 2) being immediately able to process and transmit an image to the people you might want to see it. For the 99% case, that's far more useful than even a very significant improvement in image quality. For the 1% where it matters, I can and do either hire a professional (with better equipment than my own) or make a production of dragging out my DSLR and all that it entails. This is like so many other cases where inarguable technical excellence of a sort gives way to convenience and cost issues. IOW, "better" is not just about image quality.


> It feels like the industry quickly moved beyond the reach of the "hobbyist". There were no more "clever tricks" to be employed

It happened in a matter of a few years. The Apple II was built as a machine capable of running Breakout in software. Woz picked the 6502 (originally for the Apple I) because he could afford it.

It wasn't that long after that Commodore released the C64. They chose the 6502 because they'd bought the 6502 fab to protect their calculator business (and then they used it to assemble custom video and audio chips). From there, we were off to the races with respect to larger and larger engineering requirements.

Oddly, I wrote a bit about it a few days ago (in the context of John Gruber's recent discussion on the Apple and Commodore microcomputers): https://mschaef.com/c64


The Commodore machine contemporaneous with the Apple II was the PET.

    Apple I - July 1976
    Commodore PET - January 1977
    Apple II - June 1977
    C64 - January 1982
(Dates from Wikipedia)

All four used the 6502.


Apple made the II series for a long time. It was contemporaneous with the PET, but stuck around long enough to be relevant through the C64 and 128 (to the extent the 128 was relevant at all.)


After trying text files and other apps, I wrote my own about ten years ago and have been using it ever since. ( https://famplan.io - I'm starting to turn it into something other people might use.)

I tend to agree with the idea that simpler is better, but a single text file wasn't quite enough. I like being able to see my lists on multiple devices, I tend to like to have multiple lists for different purposes, and it's also very useful to have shared lists for coordinating with my family and others.

The experience of using this has taught me a few things about how to use these lists effectively:

1. Using a list is like writing a journal - you need to be intentional about explicitly working to make it part of your routine. (Part of this is committing to record tasks that need to be done and then committing in some explicit way to actually doing those things.)

2. It needs to be fast, it needs to be easy, and it needs to be present. Anything else gets in the way of point 1.

3. It's important to track what you need/want to do, but lists of things to do can be overwhelming. (It's useful to have at least a few ways to ignore items when you can't or don't want to deal with them. I handle this by having multiple lists, and also by having a snooze feature to ignore items for a while.)

4. You need to have a way to handle items or tasks that go on for a while. (You need to make a call, but have to leave a message, and are waiting for a callback... etc. These are places where you need to take action to push something along, but the action doesn't result in a complete task, so you need to revisit it later.)

This is going to sound odd coming from someone who wrote a tool for the purpose, but the key here is really to pick a system (any system) and then actually use it. Spend too much time developing the system, and all you've done is give yourself something else to do.


I tried '4famplan4' as my password just to try it, and it said password insufficiently complex so I backed out. :(


Thanks for trying. (It expects mixed-case, which I need to actually say in the messaging.)

The codebase started out as something I used entirely myself, so the aspects of the workflow that relate to new-user onboarding (the most important for actually getting customers) are the weakest. That part of the codebase is also the roughest, and it's where I'm working now to clean it up.


Why does it require mixed-case? It's for TODOs, not healthcare. If I want to use my insecure password to try out your service, please let me! It took extra code here for you to try to be secure, when it's now generally known that password requirements are security theatre at best and anti-security at worst.


Thank you for the feedback. A month ago, it didn't need any text in the password field at all. I may have overshot the mark a bit when I added validation.

Longer term, I mainly want it to just use external auth (Google, etc.) and not use passwords at all.


> Longer term, I mainly want it to just use external auth (Google, etc.) and not use passwords at all.

I usually avoid services that do this because I don't want any issues with my Google account (or any other provider) to affect other services I use. Good luck trying to talk to someone at Google if some automated system flags and blocks your account.


I haven't worked in COBOL, but I've worked with it.

This was around 1999, and I was building a system for configuring and ordering custom PCs at a large distribution company. One of the feature requirements was that we display inventory over the various options (i.e., there are 373 20G disks in stock, but only 12 30G disks in stock). The idea was that this would let a customer ordering 200 machines know that they should pick the 20G disk if they wanted it now.

Inventory at this company was handled by two systems. There was a normal SQL database with access to a daily snapshot of inventory taken from a mainframe, which always had the up-to-date data. Since the mainframe took a while to process queries, we used the less-current SQL database for the majority of the UI, but took the time to query the mainframe once a customer was in the shopping cart. Customers might see a change during the flow, but it would at least let them see the most current data prior to committing to a purchase.

The mainframe query itself was implemented by someone else as a COBOL job that produced the required inventory numbers. From my point of view, it was just a limited sort of query conducted over a specialized JDBC driver. (Not necessarily the weirdest aspect of that design... for a variety of reasons, we did the UI in ASP/VBScript, the back end on Microsoft's JVM invoked via COM Automation, and the SQL database link through a dubious use of a JDBC/ODBC bridge to connect to SQL Server. It all worked, but it was not the most solid architecture.)

==

My only other mainframe experience was working as an intern for a utility company a few years prior (1991-1992). They used CDC Cyber mainframes to run the power grid, with something like 4 million lines of FORTRAN code. The dispatchers themselves interfaced with the system using consoles with four 19" color displays running at 1280x1024. Heady stuff for 1991. (The real-time weather radar screen was cool too, in an age before the internet made that commonplace.)


> The core problem it addresses is client-server coupling. There are probably countless projects where a small change in a server’s URI structure required a coordinated (and often painful) deployment of multiple client applications. A HATEOAS-driven approach directly solves this by decoupling the client from the server’s namespace. This addresses the quality of evolvability.

Not sure I agree with this. All it does is move the coupling problem around. A client that doesn't understand where to find a URL in a document (or even which URLs are available for what purpose within that document) is just as bad as a client that assumes the wrong URL structure.

At some point, the client of an API needs to understand the semantics of what that API provides and how/where it provides them. Moving that from a URL hierarchy to a document structure doesn't provide a huge amount of added value. (Particularly in a world where essentially all server APIs are defined in terms of URL patterns routed to handlers. That is explicit, hardcoded encouragement to think in a style opposed to the HATEOAS philosophy.)

I also tend to think that the widespread migration of data formats from XML to JSON has worked against "Pure" REST/HATEOAS. XML had/has the benefit of a far richer type structure when compared to JSON. While JSON is easier to parse on a superficial level, doing things like identifying times, hyperlinks, etc. is more difficult due to the general lack of standardization of these things. JSON doesn't provide enough native and widespread representations of basic concepts needed for hypertext.

(This is one of those times I'd love some counterexamples. Aside from the original "present hypertext documents to humans via a browser" use case, I'd love to read more about examples of successful programmatic APIs written in a purely HATEOAS style.)
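
To make that concrete, here is a minimal sketch (in TypeScript) of the two client styles being compared. The "_links" shape is HAL-ish and purely illustrative; the hostname, field names, and the "orders" link relation are assumptions made up for the example, not taken from any particular API:

    // Style 1: the client hardcodes the server's URL structure.
    async function getOrdersHardcoded(userId: string): Promise<unknown> {
      const res = await fetch(`https://api.example.com/user/${userId}/orders`);
      return res.json();
    }

    // Style 2 (HATEOAS-ish): the client fetches the user resource and
    // follows a link advertised by the server. The coupling moves from
    // a URL template to a link-relation name and a document shape.
    interface Link { href: string; }
    interface UserResource { id: string; _links: Record<string, Link>; }

    async function getOrdersViaLinks(userId: string): Promise<unknown> {
      const userRes = await fetch(`https://api.example.com/user/${userId}`);
      const user: UserResource = await userRes.json();
      const orders = user._links["orders"];
      if (!orders) throw new Error("no 'orders' link advertised by server");
      const res = await fetch(orders.href);
      return res.json();
    }

Either way the client has to know something up front - a URL template in the first case, a link-relation name and document structure in the second - which is the sense in which the coupling just moves around.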


This is what I don’t understand either.

/user/123/orders

How is this fundamentally different than requesting /user/123 and assuming there’s a link called “orders” in the response body?


With an HTML body the link will be displayed as content and so will be directly clickable. But if the body is JSON then the client has to somehow generate a UI for the user, which requires some kind of interpretation of the data, so I don’t understand that case.


Truly from another era.

If you're not familiar, basementcat is right... DTACK Grounded refers to the DTACK (Data Transfer ACKnowledge) pin on the Motorola 68000. It's the signal that (when grounded) lets the CPU know that data it has requested from memory is ready to be read off the data bus. Systems with slow memory need to be careful to ground the pin only when the memory has actually responded.

However, if your memory system could outrun the CPU, it was possible to just ground the pin and assume that the memory always responded in time to satisfy the CPU's read requests. The centerpiece of "DTACK Grounded" was a set of Motorola 68000 CPU boards that (initially) did just that. The memory parts they used were expensive for the time and small, but they were fast, allowed DTACK to be grounded, and allowed the overall design of these CPU boards to be very simple and inexpensive. For a while, these boards were probably the most accessible path to a 16/32-bit microprocessor like the 68000.

What was also interesting was the way these boards were used. They were sold as attached processors for Commodore PETs and Apple ][ machines. The software would then patch the machine's 8-bit BASIC implementation to delegate math operations to the attached processor. Believe it or not, the speed improvement offered by the 68000 was significant enough to offset all of the other complexity of this implementation choice. The net result was an accelerated and mostly compatible BASIC.

Later in the newsletter, the author talks about pairing an Intel 8087 with a 68000 to get better floating point. (The 8087 was a remarkable chip for the time.) The 8086 that was needed to run the 8087 is referred to as a 'clock generator'. I guess the net architecture here was a 6502 host CPU connected to a 68000 attached processor, which in turn used an 8086 with an attached 8087 to accelerate floating point.

Meanwhile, PC clones had sockets for 8087 chips, Apple was releasing relatively inexpensive 68000 hardware, and the 80386 was well on the way. The writing was on the wall for the DTACK grounded approach to accelerating 8-bit microcomputers, but it must have been interesting while it lasted.


Yeah the era of "memory can outrun the CPU" was brief and glorious. The approach 80s microcomputers used for graphics required it -- multiplexing the video between the VDP (C64 VIC-II, Atari ST Shifter, etc.) and the CPU on odd bus cycles. Nice and fun.

By the end of the decade the CPU was running 2-3x the speed of the fastest RAM.

Now things are soooo complicated.

Not sure about this alternate reality where Apple's 68000 machines were cheap :-) (I say this as an Atari ST owner).

The 68000 has kind of aged well despite not being made anymore -- it is perhaps now the only "retro" architecture that can be targeted by a full modern compiler. You can compile Rust, C++20, whatever, and have it run on a machine from 1981. That's kinda cool.


Well, compared to the first wave of 68000 machines, which were generally high-end workstations from the likes of Sun and Apollo, a $2500 Macintosh is cheap. Apple's belief in this whole "profit margin" thing did mean it couldn't compete on price with the Amiga and ST though…


I mostly jest. In the early 90s the prices of 68k Macs actually dropped into the very affordable range. The II series were great machines: priced well, stable, etc. The shift to PowerPC ruined the classic Mac, IMO.

In that era I had a 486/50 running (early) Linux and my mother had a Mac LC II. I actually really enjoyed using that machine.


Just curious, why do you think the shift to PPC ruined the classic Mac? I never owned a Mac before, but I did buy an iBook G4 because I somehow got fascinated by the PPC machines.


The PPC architecture is fine enough. The problem was their "operating system" was written as a 68k OS with no memory protection and a weird memory model generally, and for almost a decade they ran with 68k emulation in order to make it all work.

And it crashed constantly. Very unreliable machines.

They did crash here and there in the 68k days, but overall they worked pretty well. Albeit with cooperative multitasking, etc.

But in the mid-90s, with System 7.6, it was like walking through landmines. e.g. I helped admin an office with a bunch of them and you couldn't run Netscape and FileMaker at the same time because they just wrote all over each other's memory and puked.

System 8 and 9 improved things markedly but the reputation was still there.

Meanwhile they had these grandiose OS rewrite projects that all failed until they ended up buying NeXT... and then spent 5 years turning NeXTstep into OS X.

In retrospect Apple could have skipped the whole PPC era and done much better for themselves by just switching to x86 (and then ARM as they've done now) after a brief foray through ColdFire.

Or just jumped straight to ARM instead -- they were an ARM pioneer with the Newton! -- rather than betting the farm on the IBM/Motorola PowerPC alliance, which ultimately ended badly with power-hungry chips that couldn't keep up with x86.


Thanks for sharing. I never used one before, so I don't know how good or bad it was. My iBook runs OS X, so it is pretty good.

It's a bit embarrassing, as the 68k emulation was part of the reason I got fascinated. But I just want to learn binary translation, not really use the machines, anyway.

I think Apple in the early 90s threw things at the wall and hoped something would stick. Bad for consumers, a nightmare for admins, but good for the engineers who got to make the throws.


Early 90s Apple was a bit like Google today, maybe. Big and ineffective at actually delivering, but with a history of innovation, an illustrious past, and a lot of smart people working there.

The problem with PowerPC was that Motorola folded and IBM didn't have any real long-term interest in the consumer PC CPU market.

So they just fell further and further behind.


Interesting. I wonder if their interview standards fell during that period (because many engineers may have left or refused to join a dying company). Same for Google in the near future.



> Yeah the era of "memory can outrun the CPU" was brief and glorious.

I don't think I fully recognized at first what was happening when wait states, page mode DRAM, and caches started appearing in mainstream computers. :-)

> Not sure about this alternate reality where Apple's 68000 machines were cheap :-) (I say this as an Atari ST owner).

Yeah... I should have cast a broader net. The Atari ST machines were much better deals, IIRC. In any event, the DTACK Grounded PoV was that the 68000 was targeted at minicomputer-scale machines, so anything that fit on a desk at all was arguably going to be inexpensive. (Years later, I did embedded work on 68K-class machines intended to run in low-power environments. They had to be "intrinsically safe" in potentially flammable industrial control environments. That architecture had a long path from 'minicomputer class' to where it eventually wound up.)

The other thread this reminds me of is a bit later, Definicon was selling boards like the DSI-780. These were PC AT boards with an onboard 68020/68881 and local memory. Computationally intensive jobs could be offloaded to that board, which was supposedly like a VAX-11/780 on your desk. In some ways, it served a similar role to the DTACK attached processors, but at a slightly later point in time.

Like the DTACK grounded products, the window of time in which these products had value was oh so short, relatively speaking.


> Getting all this on a smart, distributed runtime seems very promising.

Hopefully it is.

This CPS article is the first of the Rama blog posts where it seemed like there might be something there. The earlier posts - "I built Twitter-scale Twitter in 10 kloc" - were never really all that convincing; the claim was just too ambitious to take at face value.


Oh I think there’s a lot of good stuff baked in there. The big idea downstream is that you have incrementally calculated, indexed data structures to query all the results of this fancy CPS logic. It’s all slightly esoteric even coming from a Clojure background but it ticks every box I want from a modern data platform, short of speaking SQL.


> I feel like CPS is one of those tar pits smart developers fall into. ... eventually the language designers just sighed and added promises.

Bear with me, but raising kids taught me a lot about this kind of thing.

Even at two or three years old, I could say things to my children that relied on them understanding sequence, selection, and iteration - the fundamentals of imperative programming. This early grasp of the basic concepts is why you can teach simple imperative programming to children in grade school.

This puts the more advanced techniques (CPS, FP, etc.) at a disadvantage. Programmers graduating college and entering the workforce have had a lifetime of understanding and working with sequencing, etc., and comparatively very little exposure to the more advanced techniques.

This is not to say it's not possible to learn and become skillful with these techniques, just that it's later in life, slower to arrive, and for many, mastery doesn't get there at all.


I feel like these explanations based on cognitive development always end up with unprovable assertions which inevitably support their author's views. The same arguments exist about natural languages, and they're always (unconvincingly) used to rationalize why language A is better than language B.

In my experience, when you ask people to tell you what "basic" operations they do for, e.g., multi-digit addition or multiplication, you get many different answers, and it is not obvious that one is better than another. I don't see why it would be different for languages, and any attempt to prove something would have a high bar to pass.


> I feel like these explanations based on cognitive development...they're always (unconvincingly) used to rationalize why language A is better than language B.

I'm not arguing that one language is _better_ than another... just that people are exposed to some programming concepts sooner than others. That gives these ideas an incumbency advantage that can be hard to overcome.

> any attempt to prove something would have a high bar to pass.

Honestly, the best way to (dis)prove what I'm saying would be to put together a counterexample and get the ideas in broader use. That would get FP in the hands of more people that could really use it.


I take your point about mastery. Especially FP, where it's very clear that mastery of it is extremely powerful. On the other hand, there are some things, like our regular synchronization primitives, where not even mastery will save you. Even experienced developers will make mistakes and find them harder to deal with than other, higher-level abstractions. Where CPS fits on this curve, I don't know. I feel pretty confident about where FP and mutexes sit. But I have yet to see something where I feel I'd rather use CPS than an async stream result.


> Especially FP, where it's very clear that mastery of it is extremely powerful. On the other hand, there are some things, like our regular synchronization primitives, where not even mastery will save you.

This alludes to my biggest frustration with FP... it solves real problems and should be more widely used. But by the time people are exposed to it, they've been doing imperative programming since grade school. It's harder for FP to develop critical mass in that setting.

At least, this is my theory of the case. I'd love counter examples or suggestions to make the situation better.

