> the evolution of iPadOS has stagnated over the years.. truly native, modern iPad apps by the most popular web services essentially don’t exist anymore.. If not even Apple is putting in the time, care, and resources to make exceptional iPad apps, why should a third party do it.. Web apps may be common everywhere these days, but the Mac has a vibrant ecosystem of indie apps going for it, and the Vision Pro has better multitasking and proper integration with macOS. The iPad has neither.
iPad is the new butterfly keyboard. Perchance rebirth awaits in the convergence of iOS, macOS, and visionOS.
For browsing the web and using web apps the iPad is perfect. It's cheap enough that I don't mind doing slightly dangerous things with it that might kill it once every three years.
Inevitably, for things like Bluesky and Mastodon, the web app experience on the iPad is perfect while the native app experience is "phonish" and just doesn't fit the screen size. But why should I care when the web experience is perfect?
We have already been experiencing this for years on Windows; that is why its current development experience has turned into a mess.
A new generation of employees without any Windows development culture, or even experience (judging by the blank stares during community calls when asked about features that have been on Windows for decades), and WebView2 being pushed all over the OS.
It continues to astound me that Apple seems not just reluctant but actively uninterested in bringing general-purpose computing to touchscreen devices.
I want whole-ass macOS on the iPad, and not only are they not doing it, they're doing the reverse: massive adoption beyond Apple's early technical/creative userbase is leading to the iOS-ification of macOS. Lower-level controls are increasingly locked behind padded walls. I'm old enough to remember when the radio button for "allow apps from unidentified developers" was just... there. I didn't need to hunt Stack Overflow for wacky CLI workarounds just to install indie software on my own computer.
It's uniquely unfortunate for Apple too, because it's Apple. Surely bringing desktop computing patterns to finger/pencil interfaces has a lot of hard problems in need of novel solutions, but there was a time when Apple was an HCI powerhouse and would have been as good a candidate as anyone to tackle such things. Could ~2013 Apple have done Windows 8 better than Windows 8? In their sleep, IMO.
Anyway, do people have favorite takes on the actual motivations behind this? Does it truly come down to a desire to own the platform e2e and keep raking in that sweet sweet app store tax? Or is there some kind of more nuanced incentive structure at work?
Making an open ecosystem will never be in the best interest of their shareholders. They didn't make it to the top of Wall Street by accident.
Why would they settle for profit on their hardware alone when they can have almost passive income in the form of an app store?
They'll put up as many scary pop-ups as they can to prevent a regular user from installing third-party software.
They'll block third-party developers from integrating with their ecosystem as much as they can get away with.
It'll be really interesting to see how much the EU can influence them, but I wouldn't hold my breath waiting for them to do more than the absolute bare legal minimum.
Google's ads business dwarfs everything else, so as long as ads can be integrated into apps deployed through Google Play Services, they're happy.
Third-party app stores mostly carry unprofitable utility apps, so Google isn't interested in stopping them. And thank goodness for that, because it allows any Android phone to be used as a relatively open computing device without having to register with Play Services.
They're uninterested because they get to sell more hardware that way. Combine all functions in one device and customers will buy only that one device; split those functions arbitrarily over several devices, while making sure those devices only really play along well within their own family, and the target market has shown it will buy all of them, even while sputtering and fuming that it's astounding the functions aren't combined.
The solution is clear but hard to implement since the congregation will have to be convinced to stop going for the next sacrament.
This claim is frequently repeated, but it does not reflect reality for those who buy Apple devices. When iPads actually meet user needs, the result is iPad proliferation, not replacement of MacBooks. An iPad with a macOS/Linux VM is needed for ad hoc, time-sensitive tasks that require non-iOS software, like a quick doc edit or a Unix toolchain. For scheduled work sessions, a MacBook is better.
As the article noted, there is a $3500 iPad. If that iPad variant ran macOS/Linux VMs, it would sell more units. Anyone willing to spend that much money for portability and convenience would not blink at buying a MacBook for other use cases.
> This claim is frequently repeated, but does not reflect reality for those who buy Apple devices. When iPads actually meet user needs, the result is iPad proliferation, not replacement of MacBooks
Of course more versatile tablets will replace laptops/notebooks; the only reason the latter are preferred over the former is that they offer features which the former do not yet have. Add those features, which mostly comes down to removing restrictions, and there will be far less need for ultralight notebooks; add them to large-screen tablets and laptops will be far less necessary. Even people with more money than sense will see the utility in having only one device which they need to keep charged and for which they need to get a mobile data subscription.
An uncrippled modern tablet with a good keyboard/cover is a good replacement for many if not most common tasks notebooks are used for. A larger uncrippled tablet with such a cover is a good replacement for many tasks laptops are used for.
Not for all tasks, and not yet; battery capacity and thermal limits mean we're not there, but that time will come. The market went from fridge-sized desk-side machines to towers to desktops to 'luggables' to portables to laptops to notebooks; the next step is either a tablet of some sort or some form of wearable. If and when some form of direct interaction without the need for a keyboard and monitor ever comes to pass for mainstream applications, it will be the latter.
What's an example of a shipping "uncrippled modern tablet"?
> Of course more versatile tablets will replace laptops/notebooks
Maybe the distinction is irrational, but as we spend more time on screens, there is some aesthetic difference between a device used mostly for work/creation and a device used mostly for leisure/consumption, even if the underlying electronics are near-identical.
Thanks to eBay and the end of Moore's Law, many older devices are affordable and quite functional, e.g. $2000 enterprise ThinkPad 2-in-1 Yoga tablets for a few hundred dollars. But nothing yet compares to the usability of iPadOS, despite no shortage of attempted competitors.
I keep thinking this will be a market opportunity for someone to make phones and tablets into computers that people actually own and don't have to ask anyone's permission to use, but it hasn't gone anywhere.
I have two internal web applications that I developed on the same code base. (1) YOShInOn, a smart RSS reader with a content-based recommender model and (2) Fraxinus, an image sorting and viewing application.
These work great on desktop, great on iPad, pretty good on Android, and sorta OK on phones, where the screen size is a little too small. I even found out they work great on the Meta Quest 3, where they aren't real XR apps but I can view and sort images on three great big monitors hanging in my room or in a Japanese temple. [1]
For years I had a series of bottom-of-the-line Android phones that left me thinking "app" is a contraction of "crap" rather than "application". When I heard Skype was ending I ported my number to an iPhone, and I can see that with a high-end device and 5G you don't need to wait a minute for the app that gets you into your gym to load. On the iPhone a number of factors come together to make apps worthwhile, but I think that was never true on the iPad, and web applications have long been undervalued, even though the experience of most web apps on the iPad, even if they weren't designed for it, is the definition of "just works".
It's an astonishing mistake that Apple never caught up with the Microsoft Surface and made the iPad Pro compatible with Mac applications. From a hardware standpoint there is no reason why not, other than that Apple thinks anyone who buys a Pro is made of money and can afford a MacBook too. It's like the way Digital struggled in the microcomputer age because they were afraid that cheap microcomputers compatible with the PDP-11 or VAX would undercut their minicomputer business.
It's hard to say what part of the Vision Pro failure was "too expensive" (the Meta Quest consumer thinks the MQ3 is too expensive) vs. the moral judgment that using controllers is like putting your hand in a toilet; the MQ3 shows you can make great fully immersive games and experiences if you have controllers... At Vision Pro prices the device has to do it all, and watching 3D movies and using phone apps on a screen in midair just doesn't cut it.
[1] Curious how it works w/ the Vision Pro: web apps are easy to use with the controllers on the MQ3, but I found that it really helps to meet the WCAG AAA size requirements to make the targets easy to hit. How is it with the hand tracking on the Vision Pro? I found that I can do stuff w/ tracking and the clicker on the original HoloLens, but it isn't easy.
> Apple thinks anyone who buys a Pro is made of money and can afford a macbook too
Owner of iPhone, iPad Mini, iPad Pro, MBA & Mac Mini hardware: after waiting for VMs on iPad, I moved some workflows to a Google Pixel Tablet + GrapheneOS, which has inferior UX and no hardware keyboard, but at least it can run a Debian Linux VM for local development. Next step is to try a Pixel 8+ with USB-C DisplayPort docking to monitor/kb/mouse.
Sandboxing giveth and sandboxing taketh away. The security value of sandboxing, particularly in a web browser, is obvious. But as a result web apps can't access a meaningful filesystem (not counting React Native, and don't get me started on IndexedDB or OPFS), and this has a significant impact on application development, pushing developers to build much more complicated client-server architectures, which increases the cost of making a great app.
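To make that concrete, here's a minimal sketch (the saveNote/readNote names are just illustrative) of what "filesystem" access looks like through OPFS: everything lives in an origin-private store managed by the browser, not in the user's real file tree, which is why anything that needs real files tends to end up talking to a server instead.

    // Sketch only: OPFS gives a web app a private, per-origin store,
    // not access to the user's actual files. Function names are made up.
    async function saveNote(name: string, text: string): Promise<void> {
      const root = await navigator.storage.getDirectory(); // origin-private root, not the user's home dir
      const handle = await root.getFileHandle(name, { create: true });
      const writable = await handle.createWritable();
      await writable.write(text);
      await writable.close();
    }

    async function readNote(name: string): Promise<string> {
      const root = await navigator.storage.getDirectory();
      const handle = await root.getFileHandle(name); // throws NotFoundError if missing
      return (await handle.getFile()).text();
    }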
Swift and SwiftUI are seductive for building cross-platform iOS/macOS/iPadOS applications, but despite Apple's marketing they have significant frictions when building real-world apps. There's a reason large companies are still using Objective-C and UIKit: SwiftUI, and DEFINITELY SwiftData, are arguably not ready for production yet, in direct contradiction to Apple's community messaging.
Look, you can build a great app with any of these stacks; there's a lot of nuance in choosing between them, and the most cost-effective and quality-effective path will be decided by the developer's strengths and weaknesses, not by the latest blog article or what happened to be "successful" for somebody else.
The values and viewpoints of the developer certainly matter.
One of the "eternal September" moments of web app development was in the late 1990s, when Microsoft went "all-in" and Microsoft-oriented developers flooded forums with tearful laments about how "MY BOSS NEEDS ME TO BE ABLE TO ACCESS LOCAL FILES IN A WEB APPLICATION!"
From the viewpoint of a web-native developer though, you need local files about as much as Superman needs a Kryptonite sandwich. (You didn't lose local files, you gained the universe! Look at how multimedia software on CD-ROM was utterly destroyed by the world wide web!)
That image sorter, for instance, has a million images sampled from a practically infinite pool all available instantly to any device anywhere in the world that's attached to my TailNet -- though you'd better believe I keep RAWs from my Sony on local file systems. [1]
I had a boss who ran a web design company circa 2005 and had a great spiel about how, with web applications, small businesses could finally afford custom software. I had my own technical version of it which went something like: "Back in the Windows 95 age, obsolete desktop applications kept their state as a disorganized graph of pointers that inevitably got corrupted, just like cheese goes bad, and crashed; modern web applications keep their state in a transaction-protected database (if you're smart enough to NOT GET SEDUCED BY THE SESSION VARIABLES PROVIDED BY YOUR RUNTIME), so your application state is refreshed with every page update."
[1] Adobe's relationship to the local filesystem drives me nuts, even though I've talked w/ people there like Larry Masinter and studied file formats deeply enough to understand their point of view. My first instinct, having worked at a library, is that you want to associate metadata with an object; in fact I really want to "name" an object after a hash of its contents and have the object be immutable.
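A tiny sketch of that content-addressed idea (purely illustrative, not anything Adobe or my tools actually do): hash the bytes, use the digest as the object's name, treat the stored bytes as immutable, and keep metadata in a separate record keyed by that name.

    // Illustrative only: name an object after the SHA-256 of its contents,
    // so the bytes are immutable and metadata lives in a side table keyed by the hash.
    async function contentAddress(bytes: ArrayBuffer): Promise<string> {
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");
    }

    // e.g. objects/3a7bd3e2...              <- immutable image bytes
    //      metadata["3a7bd3e2..."] = { caption, keywords, capturedAt }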
On the other hand, XMP, which stuffs a metadata packet into the file itself, is a good fit for the way people use desktop apps. Still, my #1 complaint about Photoshop is that it thinks a file has changed (some metadata has changed) when I do some operation that I don't think of as a mutation, such as making a print or exporting a compressed and scaled JPG with the "Save for Web" dialog. Since I do a lot of different things with my machine I am always forcing shutdowns, which means Photoshop is always harassing me to recover files that I "didn't save" even though I didn't really change them. Sometimes I just save files that I don't really want to save, but it feels icky because the source file might be a JPG which might not round-trip perfectly, where I might feel compelled to save at a higher quality than the original file, and all of that.
My argument is more that the filesystem is a very, very, very good abstraction to build on, and the addition of a network layer exponentially increases complexity and, as a result, decreases quality.
You can certainly do it, and sometimes it works out just fine, but generalizing across all popular software, client/server architectures contribute to modern software woes, in my opinion (not to mention they discourage privacy! your tailnet example being a good counter-example).
Cross-platform development is another can of worms, and a strong advantage of the web for software distribution. There's a lot of snake oil (SwiftUI, React Native, Flutter) that always gets people excited but rarely makes an appearance in world-class software.
A few years back I did an eval of cross-platform frameworks to target (in order) Windows, macOS, Linux, and maybe Android. This was around the time of peak complaining about Electron bloating the installer for every crapplet to 35 MB, so I was prejudiced against Electron going in.
I came to the conclusion that they were all atrociously awful for both DX and UX, except for Electron and JavaFX (maybe that's because I'm a Java fanatic).
The number of hoops you have to jump through to develop for Apple platforms stands in stark contrast to the increase in revenue you can expect from it. The Mac used to be a platform for connoisseurs, but the iOS platforms are just mass-market, race-to-the-bottom shovelware dumps. Users do not appreciate their applications, and developers in turn do not appreciate their users. Everyone gets what they deserve.