
The key is composability.

On a normal desktop OS, it is routine to use multiple applications in concert to do work, using files and the clipboard as means of composition.

On an iPad (or mobile devices in general), it is indeed possible to build delightful and useful end-to-end integrated suites for doing so-called "Real work".

The problem is that if you want to perform a task that requires features from two or more applications, you may well be out of luck! Applications can rarely share data, often resorting to cloud services (and thus a mandatory network connection) as a clumsy workaround, and even copying and pasting data between applications can be challenging and error-prone.

On a desktop, any user can invent new workflows. On an iPad, a user can only buy them wholesale. Users are at the mercy of the degree of forethought the app developers put into their designs.



I completely agree with this. For decades, operating systems have been designed to make it as easy as possible for us to create our own ad hoc solutions to problems by combining subsets of tools in customized ways. A folder holds a project, which contains a completely file/app-neutral hierarchy of data files. A command line allows a constellation of tools to be combined via file/app-neutral means. Various types of scripting let you create tools that call tools. The GUI offered additional ways to use apps in combination: copy and paste, drag-and-drop, save as... from one app and open from any other that can read the file format. Plug in or mount any kind of drive, wired or wireless, and all is present before you simultaneously on your workbench/desktop/project directory, ready to be worked on with your favorite tools.
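
As a concrete (if minimal) sketch of that kind of composition: a tiny Python filter that speaks plain text over stdin/stdout can be chained with any other tool that does the same, with no app-specific integration. (The file names here are stand-ins, of course.)

    # wordfreq.py -- reads any text on stdin, writes "count word"
    # lines on stdout. File/app-neutral: it neither knows nor cares
    # which program produced its input or will consume its output.
    #
    # e.g.:  pdftotext report.pdf - | python3 wordfreq.py
    import sys
    from collections import Counter

    counts = Counter(sys.stdin.read().split())
    for word, n in counts.most_common(20):
        print(n, word)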

The OSes were removing friction and making it easier and easier to craft our own solutions from tools in an expanding and interoperating toolbox.

And then we got iOS and the iPad, and each app has its private data: no more command line, no more generic project folders, no file system access, not even copy and paste at first, and years of "well, I guess you could email it to yourself" or "just pay us every month for iCloud instead of plugging in a thumb drive" nonsense. Those restrictions are slowly and awkwardly being eased, but we're being assured that having self-contained, pre-designed apps for everything is the new way. Apple (which seldom makes claims about the future) is telling us explicitly that the iPad is what the future of computing is going to be. That is not appealing to me, nor is it intended to be.

While it is true of course that you can do "real work" if "there is an app for that", you can't even write your own app for that without paying Apple a yearly fee and getting their approval to put it in the app store. I hope this won't be the only future of computing.


I wholeheartedly agree. In some ways, iOS and its spinoff iPadOS are the complete opposite of the vision Apple pursued in the 1990s (before Steve Jobs' return) of scriptable, composable software through AppleScript and OpenDoc, which themselves were influenced by the everything-is-an-object world of Smalltalk. Sadly, AppleScript isn't supported by all Mac applications and seems rather de-emphasized these days, and OpenDoc was killed in its infancy upon Steve Jobs' return.

The iPhone and iPad are fine platforms; I use them regularly for my work. However, they are a far cry from Alan Kay's Dynabook idea. A better base for developing on the Dynabook idea would be the Microsoft Surface line of tablets, which can run the full versions of Windows 10 Home or Pro and thus can run modern Smalltalk implementations such as Squeak or Pharo.

I wish there were more work done in the area of composable GUIs, where the core logic of applications is scriptable and where these elements can be combined in ways that could be even more expressive than Unix pipes are. Every now and then I have thoughts about taking a modern Smalltalk implementation and writing a suite of composable GUI apps as a demonstration of this idea.
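
To make the idea concrete, here is a hypothetical sketch (in Python rather than Smalltalk, with invented names) of what "scriptable core logic" could mean: an application's functionality lives in plain objects, and any script -- or any other app -- can drive and combine them, with the GUI reduced to just one client among many.

    # Hypothetical sketch: two "apps" whose core logic is exposed
    # as ordinary objects rather than buried behind a GUI.
    class Document:
        def __init__(self, text):
            self.text = text

    class WordCounter:                       # "app" number one
        def count(self, doc):
            return len(doc.text.split())

    class Redactor:                          # "app" number two
        def redact(self, doc, secret):
            return Document(doc.text.replace(secret, "[redacted]"))

    # A user-invented workflow, composed ad hoc with no blessing
    # from either "vendor" -- the GUI-world equivalent of a pipe:
    doc = Document("the launch code is 1234")
    print(WordCounter().count(Redactor().redact(doc, "1234")))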

This topic always takes me back to a Hacker News comment I read many years ago (https://news.ycombinator.com/item?id=13573373) about composable software versus monolithic applications and how commercial software companies are predisposed to support the latter while the free software movement should have focused on the former. I still don't think it's too late, however, as long as there are some vendors selling hardware that users can install their own software on without needing to use a certified app store.


> commercial software companies are predisposed to support the latter while the free software movement should have focused on the former.

There is a related issue at play here when it comes to how systems are developed. It takes an enormous amount of sustained effort and time to create whole computing systems (hardware, OS, design, etc.). The FOSS movement is, almost by definition, not capable of doing this, since it is dispersed and not funded at the level that today's major platforms have been. This is precisely why FOSS is so Unix-centric: that was the predominant free OS available at the time the movement really came about.

It would take a years-long, well-funded research effort to build a truly new computing environment. On the technical side, the desire and some of the knowledge are there. We simply lack funders who are willing to pony up a good amount to the right people and leave them alone. That is what ARPA, PARC, and Bell Labs were able to do in their time, and the disappearance of that kind of funding environment is exactly why we are still iterating on the accomplishments of those institutions today.


> I wish there were more work done in the area of composable GUIs, where the core logic of applications is scriptable and where these elements can be combined in ways that could be even more expressive than Unix pipes are. Every now and then I have thoughts about taking a modern Smalltalk implementation and writing a suite of composable GUI apps as a demonstration of this idea.

Where the browser is the platform, you might consider using custom elements to fill this role: https://html.spec.whatwg.org/multipage/custom-elements.html#...


I agree with your assertion, but it only holds at a more advanced level, for a niche audience: us. Overall, for what they are and for the wide audience they cater to, iPads give a decent abstraction so one doesn't get bogged down by details. Yes, on the details front it unfortunately sometimes falls short, and for that reason I personally have no real use for an iPad; I can get by fine with just a smartphone and a desktop/laptop for most of my uses. As standalone devices, though, iPads can be put to a lot of uses.


I think of the iPad as a computer the same way I think of visual basic as a programming language. It may make 95% of things easy, but that 5% is a real pain!


> if you want to perform a task that requires features from two or more applications, you may well be out of luck!

Ok... but if you don’t need that you’re not doing ‘real work’?


If iPads didn't have a productivity problem, Apple wouldn't have to dance around that narrative, and our conversations would shift more in Apple's direction: that the iPad can be your first and primary device. That would be another revolution in personal computing. That iPads are so well priced and this still hasn't happened says something.

Apple has pushed the narrative of "What's a Computer", so they're just reaping what they sow.


> that the iPad can be your first and primary device

For tens of millions of people it can be.

Evidence: Hundreds of millions of iPhone users who do not have computers. It's why Apple started iCloud backup and restore for iPhones: so people without computers can upgrade without losing all of their data and settings.


I'm not implying that work done in integrated suites or "apps" is illegitimate. I'm just trying to close the philosophical gap by explaining that this sort of work is a subset of all the work one might want to do with a computer.


What if your work requires detailed onscreen notation and drawing, or requires mobility and a rear camera? Those aren't subsets of what a computer can do without bulky and expensive peripherals.


I have an older Surface with front and rear cameras and a pen. I can draw, annotate documents, and take photos just fine while using Windows 10. I can also use the whole universe of software available for Windows. Occasionally I might have to plug in a mouse or attach the keyboard/touchpad cover to use older software that isn't multitouch-friendly.

If Apple sold a device which was physically identical to an iPad but ran OSX (with some minor affordances to close the gaps between touch, pen, and mouse input, much as Microsoft has pursued in recent Windows iterations), I'd be ecstatic with it. I've considered trying a Modbook, but the price is a bit eye-watering.

Having access to a desktop-style OS would only add to the range of things you could do with smaller devices. Would it be harder to use than an iPad? Maybe.


You can put Linux on most Surfaces, and many of the cheaper Windows tablets. With pmOS, you may soon be able to put a desktop Linux OS on many cheap Android devices. The UX work for these use cases has been done - GNOME is now very much a mobile-ready and touch-ready environment, supporting real professional work. Plasma Mobile is not too far behind.


>What if your work requires detailed onscreen notation and drawing or requires mobility and a rear camera?

Then your work is not a typical example of real work, the same way being a rock star is not -- even if people exist who are rock stars and get paid for it, making it still "real work" in that sense.


Still with the gate keeping.

"Real work" is only real if it agrees with your narrow world view.


You got it backwards.

It's you who want to enforce much narrower areas of work as characteristic of real work.

"Real work" in the casual sense (what most people consider work, or work most people's work is like) is what agrees with the statistically wider kinds of work people do.

This also has nothing to do with gatekeeping. People still get to be stunt drivers and rock stars and vloggers paid to eat their lunch live on YouTube, whether we consider what they do representative of "real work" or not.


Being a rock star is real work, btw. So is stunt driving and vlogging.


It's real work in the sense that it involves real effort (work here used as in the phrase "hard work").

But "Real work" for the purposes of this thread is work that is representative of what most people do / realistic for people to have -- and statistically speaking, being a rock star is neither common, not representative of the kind of job most people do.

To recap the argument made:

A: People can't do real work on the iPad because it lacks X.

B: Really? And what about the people who do Y? They don't need feature X.

A: Well, Y is not real work.

What person A is saying is not that (a) Y doesn't exist, (b) nobody has Y as their role, or (c) nobody makes money by doing Y (and that it is thus their work).

The charitable interpretation, which is obvious since we're talking about e.g. "fighting aliens" (which indeed is not "real work" in any sense of the term), is that A means Y is not really representative of the work most people do, and that the percentage of people doing Y is too small to count for the purposes of how useful an iPad is for real work.


I think you’re totally confusing yourself


Well, you might still do real work (if you get paid for doing whatever you do) but you're in a much much smaller niche of real work.

It makes sense then to define "real work" by what the majority of "real work" is, not the exceptions, even if the exceptions are still real work.

Like the phrase "Nobody uses X" means "very few use X, so few that it doesn't matter", "Y is not good for real work" means "Y is not good for 90% of real work" (the most common types) not "Y is not good for any kind of real word at all ever period".


Real work is only real if you get paid for it?

You do realise this is HN, where quite a lot of us are building startups and not getting paid for it.


Well, you do realise that if you don't get paid for it, it's not really your work. It's just an effort that you may or may not turn into work.


I think I see more normal working people using iPads to do their 'real work' day-to-day than I see using laptops to do their 'real work'. I think it may not be as much of a niche as you think it is.


Do you have numbers to back up this statement? I would expect the group of people not requiring composability to be larger than the group that does.


I guess us developers who are staying within a single IDE are not doing "real work".

Also, what about people doing CAD, data science, etc.? They also almost always stay within one app.


> I guess us developers who are staying within a single IDE

Are you? Apart from the IDE, I also run a version control system, a build system, a CI/CD system, a language runtime...

These are all from different vendors.


"UNIX is my IDE."


This, but unironically.


It is kind of a shame that in this day and age of containers, we can't just have Linux as an "app" on our iPads. Instead people are forced to write a worse version of everything and bundle it together as an app.

I kind of appreciate Apple's approach to simplicity. I use an iPad because I don't want to maintain another computer: syncing configuration, updating packages, etc. But it basically means that I use it as a "dumb terminal", even though I hear it has a surprisingly fast processor. Seems like a waste.


You can: iSH is a container running Alpine Linux.

I admittedly don't do much of anything with it, but it's neat to have.


Neat, I didn't know about that.

I wonder why they emulate x86 instead of targeting ARM though.


It doesn't make much difference, since either way they can't run native code - iOS makes this virtually impossible.

More in depth discussion here: https://news.ycombinator.com/item?id=18425106


I can commit code, build projects and deploy work all through my IDE.

It's actually why they call it an IDE: because it's an integrated development environment.


It is called an Integrated Development Environment for a reason.


Most source code is portable to a different IDE if you want it to be.

I work on open standards for CAD to allow portability of that and am starting to apply this to data science too.


It takes a tremendous amount of configuration to stay in your IDE 100% of the time. I refuse to believe you never have to copy/paste from another app into the IDE.


It takes about 5 minutes to configure an IDE.

Add Git credentials; importing a project takes care of the setup and build; and in most IDEs you can create new Run tasks for calling a different part of your build file, e.g. deploy or test.

And I occasionally copy/paste from a browser. But you can do that just fine on an iPad or iPhone.


For small solo dev projects sure. I've never seen a large software company that had everything running inside an IDE though.


Yeah, opening the settings panel is such an effort. /s


This was true, but it is no longer true in most cases. Copy/paste works between many iPad applications. Increasingly, applications can expose interfaces that allow information to be exchanged between them. Pythonista, in particular, provides a number of ways to use, pipe, and process data from and between applications. The Files app has also contributed to making this simpler.
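
For instance, here is a minimal Pythonista sketch (assuming its clipboard and appex modules, with the script registered in the iOS share sheet):

    # Shared text comes in from any app via the share-sheet
    # extension; the transformed result goes out via the clipboard,
    # ready to paste into any other app.
    import appex
    import clipboard

    if appex.is_running_extension():
        text = appex.get_text() or ""
    else:
        text = clipboard.get()      # run standalone: use the clipboard

    clipboard.set(" ".join(text.split()))   # e.g. normalize whitespace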

That said... it’s still less productive to move data between applications than on a PC or Mac. The gap hasn’t been closed yet.


> Copy/paste works between many iPad applications.

Meanwhile, on desktop I can't paste between Emacs and Confluence because Atlassian went out of their way to break it.

I know what people are trying to get at here, but I think there's also a lot of tunnel vision and selective memory.

Realistically, it's only easier to share data on the PC if the application developers made it easy to share data. There's no shortage of proprietary and undocumented file formats. Heck, there's a small industry built up just for converting CAD files between different formats.

It's really not much different on the iPad.


Actually, I think it is.

You said it yourself:

> on desktop I can't paste between Emacs and Confluence because Atlassian went out of their way to break it.

Which presupposes that by default (i.e. without Atlassian messing things up), it would have worked. In desktop applications, file-level and copy-paste interoperability are there by default.

This is the difference: how much interoperability do you get by default, i.e. without specific actions from the app developers?

The fact that you can break interoperability is irrelevant.


> On a normal desktop OS, it is routine to use multiple applications in concert to do work, using files and the clipboard as means of composition.

Applications share data on the iPad just like on the desktop - copy and paste and opening a document in multiple apps.

> often resorting to cloud services (and thus a mandatory network connection) as a clumsy workaround, and even copying and pasting data between applications can be challenging and error-prone.

You can store documents on your device that can be shared just like you can on iCloud and Dropbox.

> On a desktop, any user can invent new workflows. On an iPad, a user can only buy them wholesale. Users are at the mercy of the degree of forethought the app developers

Apple actually bought the company that made the Workflow app and integrated it more tightly into iOS to allow cross-app automation.


”On an ipad, a user can only buy them wholesale.”

Not quite. There’s Shortcuts (https://support.apple.com/guide/shortcuts/welcome/ios).

Limited? Yes, but not nothing, either. This _may_ evolve into something that’s more powerful.


...and there's Pythonista: http://omz-software.com/pythonista/ . I have some Python shortcuts I use, something similar for Lua, and some variants of Jupyter notebooks. I often use Panic's code editor (formerly Coda) and Working Copy for my git repos.


BTW, you might check out Pyto. I’ve completely replaced Pythonista with it (YMMV, of course), and it has the enormous advantage that it’s actively developed with new releases very frequently.


I’ve had hardware failures/loss at a mercifully low rate of one a decade. I learned long ago that storing work on your own computer is a recipe for disaster. If it’s worth doing, it’s worth uploading to the network.

I can tell not everyone feels that way when I tell others that my computer died and they offer sympathy like I just lost a relative. It shouldn’t take more than three hours to set up a machine with your work stuff, or somebody isn’t doing their job.


Having the ability to back stuff up via a network is great.

Most of the time when we say "Cloud Services", though, it means exactly one specific vendor. If an app is designed to integrate with Dropbox and GitHub, I can't necessarily substitute an alternative, or run my own replacement instance. If a cloud-based solution becomes unavailable because the company is bought out, shut down, drops old API support, or simply because I've stepped onto an airplane, I'm stuck. The cloud is both a convenience and an accident waiting to happen.

My NAS at home has never sent me an email thanking me for participating in an Incredible Journey and disappeared.


I'm not talking about backing stuff up. I'm talking about not keeping data of business value on single points of failure in the first place. Those hardware failures weren't resolved in 3-4 hours by restoring backups, they were resolved by fetching and setting up things from systems of record. Version control. Artifactory. Runbooks in a Wiki.

If this data is useful, why am I bogarting it? I should be sharing it, as soon as possible. If some ears are too sensitive to see rough drafts, we can use obscurity or permissions to control premature messaging.

It's the same argument as Bus Numbers, or for DVCS, or feature branches. I don't want days or weeks worth of work hanging out on some engineer's system, which can get wet or take a tumble down the stairs on the way to a meeting or stay home for a week while down with the flu.

Backups are for people with a bus number of 1 (ie, mismanaged teams or personal projects).


Storing all your work in a Google account and then getting locked out of it for some reason is also kind of bad.


Getting locked out of online accounts is the 21st century's version of the twentieth century's hard drive crash: "It's not a matter of if, but a matter of when."

So you should have a second repository of any data you have in any account, in a different location, provider, hard drive, or whatever. Because it's only a matter of time until you need it.


One, I didn’t say Google, but two, what are you doing that’s getting you locked out of a Google account for work?

Firewalls, people.


Say, logging into your account in a fashion that trips whatever heuristic they have for deciding whether your account is compromised or just 'breaking their terms of service', with no further explanation. For example, using 'suspicious' IPs, like those from some cloud provider, or logging in from multiple countries in a short period of time, as if these algorithms were written post-internet but pre-air travel. Who knows, honestly.

And yes this has happened to me, coincidentally after a period of travelling.

Some backed up hard drives in my house and a family member's are probably more reliable.


Sometimes nothing.


On the other hand, I've found it way easier to script complicated workflows with x-callback-url and Shortcuts on my iPad than on my laptop. Replacing some of the shortcuts I use daily on my MacBook would probably involve busting out Xcode and making a full-blown app. I could, sure, but making a shortcut (augmented with Python or JavaScript as needed) is easy and fun.
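
As an illustration, here is roughly what that looks like from Pythonista (a sketch; "Process Text" is a hypothetical shortcut name):

    # Kick off a shortcut via the Shortcuts app's x-callback-url
    # scheme, passing text as input. webbrowser in Pythonista can
    # open custom URL schemes such as shortcuts://
    import webbrowser
    from urllib.parse import quote

    name = "Process Text"               # hypothetical shortcut
    payload = "some text to hand off"
    webbrowser.open(
        "shortcuts://x-callback-url/run-shortcut"
        "?name=" + quote(name) + "&input=text&text=" + quote(payload))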


The Shortcuts app used to be called Workflow...and it allows exactly that.



