In Defense of Floppy Disks: The Vocabulary of the Interface (boxesandarrows.com)
57 points by vectorbunny on June 20, 2013 | 48 comments



What I've always found fascinating is how much easier I find words to locate and use than images.

You'd think little pictograms would be quicker, but whenever an interface relies on lots of little icons, I find it a mental workout to figure out which one I'm looking for.

For example, many people dislike the new Gmail layout, but the biggest UX stumble I have is clicking the attach pictogram instead of the link one. I thought it was interesting that in the article's results, 21% of people reported the reverse: they associated attach with the link pictogram.

In the physical world, companies use pictograms because they don't have to be localized. But the web is dynamic and the tooltips are already localized, so why not just use words?


> why not just use words

You may run out of room in the interface if the word is substantially longer in some language.

This is a particular problem if the original work is in a language like Japanese, which has short words compared to, say, English. (Japanese Kanji [1] has thousands of characters available, compared to the measly twenty-six of the English alphabet, so it's not surprising that Japanese words are shorter on average.)

Many HN readers are probably familiar with the example of video games that are English localizations of Japanese originals. Sometimes the UI or art simply doesn't have room to accommodate the longer English equivalents. (Particularly in earlier decades, when localization wasn't as high a priority as it is today.)

[1] http://en.wikipedia.org/wiki/Kanji#Total_number_of_kanji


Your mention of Kanji is particularly salient, as the blog post catches these skeuomorphic icons in the act of evolving from pictograms to ideograms...


Good point. In 100 years maybe physical floppy disks will be long-forgotten by everyone but museums and historians, but the icon will still be in everyday use.

And maybe the icon will change over time. As the public's consciousness of physical floppies dies away, maybe designers will feel freer to use a more stylized representation, like the Voicemail icon's representation of a...telephone handset? Cassette tape?


It's the tape inside a cassette. Like the old answering machines.


Sometimes you don't have space, sometimes you don't want to promote use of the function, and sometimes it's clear enough without words.

An example of the latter: an [x] in the top right-hand corner.

Or the magnifying glass inside an input box.


I wonder if those icons are simply reinforcing your preexisting assumption about what clicking in that area does. I honestly do not know the answer to this question: without ever seeing it performed for them, would someone know how to close a window the first time?


Well, amongst other icons, you'd want some indication of which one to click.

And even if it's reinforcing what you assume, isn't that a good thing? It means you recognized the icon. That's the point, isn't it?


Yes, it gives you more confidence in what you were already doing. I'm talking about situations where you don't already know what you're doing: where you're looking for the right icon rather than being greeted by it.


I think that the usability of words/pictograms/etc. depends a lot on the user. I find pictograms generally easier to distinguish at a glance, but only if the color content is different, or if the shape is strikingly different. I tend to have a problem with the way a lot of icons for programs come in a couple different shades of blue. Words only work effectively for me if the words are different lengths or at least if the first letter is different -- I tend to only notice the first letter and the shape.


Note that Gmail includes a setting to switch the icons back to text labels (Settings --> General --> Button Labels).


No dice in the Gmail Android application, unfortunately, where the icons are indecipherable.

Anyway, I am confused as to why the librarians would think college students have never seen floppies, particularly "a few years ago". Flash drives only really started to cut into floppy use around 2003-2004 from my perspective. Before that, everyone of course used CDs for big stuff, but floppies if they wanted to shuffle documents around. I remember submitting assignments to teachers on floppy disks as late as 2004.

This seems to be the modern equivalent of: "I bet you've never seen one of these before!" * points to a vinyl record *.


But very few people who are in college right now are old enough to remember floppy disks. I graduated from college in 2012 and remember having floppy disks as a kid (I think for Encarta?) but don't remember ever using them.

I think that we're right on the cusp of people graduating college who have never used a floppy disk.


I figure a college freshman's knowledge of tech should surely go at least twelve years back in all but the rare case. I haven't seen continuous stationery printers, 8-tracks, or even 5.25" floppies since I was in elementary school, but those things would not baffle me today, let alone twelve years after I last saw them.

12 years ago, flash drives were barely a year old, quite expensive, and barely stored more than a floppy anyway. Network storage in organizations like schools was abysmal (actually, that hasn't really changed from what I have seen...), services like Dropbox were non-existent for regular consumers, and who the hell ever used Zip drives? Floppies were everywhere.

If you told me that people graduating high school right now did not know why floppies were called "floppies", I would not be terribly surprised, but I think we've still got a few years left until they don't know what they are.

I don't know, maybe my school district had some sort of technology lagging bubble around it. That actually seems plausible.


I also remember using floppy disks in around 2004, so it's not just you.


Icons are mnemonics, not signposts.

Icons alone are never sufficient when someone encounters an interface for the first time; you should always include a text label.

New users of an interface will read the text labels, and over time the icons become a quick mnemonic for locating functions they've accessed before.

Even more important than visuals or text is location. Our spatial memory has a higher priority than either. In repeated user tests, I've observed that once people become used to a button resting in, say, the top left corner of a UI, they will click there again for the same function, even if the button itself has changed.

Don't worry too much about the exact semiotics of your icons. Just keep them reasonably meaningful, clearly distinct from one another, include text labels, and be consistent with where you put them.
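
To make that concrete, here's a minimal sketch (plain TypeScript against the DOM; the helper name and icon URL are placeholders, not any framework's API) of a toolbar button that pairs an icon with a visible text label and relies on a fixed toolbar position for spatial memory:

    // Hypothetical helper: every icon gets a visible text label, and the
    // toolbar order stays fixed so spatial memory keeps working.
    function makeToolbarButton(iconUrl: string, label: string, onClick: () => void): HTMLButtonElement {
      const button = document.createElement("button");

      const icon = document.createElement("img");
      icon.src = iconUrl;
      icon.alt = "";             // decorative; the text label carries the meaning

      const text = document.createElement("span");
      text.textContent = label;  // new users read this; returning users go by the icon

      button.append(icon, text);
      button.title = label;      // tooltip as a fallback if the label is ever hidden
      button.addEventListener("click", onClick);
      return button;
    }

    // e.g. toolbar.append(makeToolbarButton("save.svg", "Save", saveDocument));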


This is really the most important point. When I use a new app, I do not look at the icons and try to figure out what the images mean. What I DO is look for icons that are in similar places, or that look like icons I'm used to in other applications. When I can't immediately find what I'm looking for, I use tooltips to search further, and once I've found it I will easily remember which little picture does what I wanted, or at least roughly where the little picture was.

What the picture shows does not matter, only that I can remember it.


I'm always a bit frustrated that Google Docs doesn't have a save icon. It simply says "All changes saved in Drive". But that doesn't stop my habitual compulsion to press the save icon (or at least do :w in Vim).


After using the autosave plugin for Notepad++, the need to manually press Ctrl+S in Excel or Word annoys me. Why can't I tell these programs to autosave when they lose focus? Instead, I get weird "AutoRecover" options that usually take several minutes to restore my file.

After using an autosave plugin for a Minecraft server, I'm annoyed by the need to unfocus Notepad++ to get it to save. I would almost rather have my changes written directly to disk automatically. I don't want to have to think about whether my file is saved or not.
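
For an app you control, that behaviour is easy to wire up. A rough browser sketch (TypeScript; saveDraft is a hypothetical callback, not any editor's real API): save whenever the window loses focus, plus a debounced save while typing.

    let pending: number | undefined;

    // Debounce: save roughly 2 seconds after the last edit.
    function scheduleSave(saveDraft: () => void): void {
      window.clearTimeout(pending);
      pending = window.setTimeout(saveDraft, 2000);
    }

    function wireAutosave(editor: HTMLElement, saveDraft: () => void): void {
      editor.addEventListener("input", () => scheduleSave(saveDraft));
      window.addEventListener("blur", () => saveDraft()); // window lost focus
    }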


Do you not find use cases where you do -not- want the document saving?

I spend a lot of time editing code that is being served by a local development server, and much of the time I do not want my changes saved until they are a complete set simply because I don't need it to start feeding me a bunch of errors from half-finished code.

Similarly, there are a lot of times when I will open a document, modify or reformat part of its contents in preparation for copying them somewhere else with different requirements, and then close the document without saving it.

So aren't there any times in your workflows where you don't want the files saved unless you explicitly want to commit some changes?


Not really, though (and this relates to the first of your examples) I do want persisting changes (avoiding loss) to be distinct from committing changes (identifying them as being in a done/stable/ready-for-some-use-besides-continued-editing state).


Excel and Word have a proper autosave that runs in the background and writes to a hidden file. That is the way every program should work by default. I have no idea what you mean about minutes to restore, but if so, that's a bug, not a flaw in the idea.


But they don't autosave; they AutoRecover. Which means that instead of just reopening my file, I have two files that Excel or Word must try to delta-check to rebuild my file.

When Excel or Word crashes, I have to wait several minutes while the program closes, reports to the error-reporting server, and checks for problems, then tries to recover my file. My other choice is to cancel the reporting and reopen the file manually, where it still has to be recovered.

I can't even just reopen my unsaved file to redo my changes.

This has been my experience since using Office 2007 on Windows Vista. It's still the same with Office 2010 on Windows 7, and I don't know that it will change with Office 2013 on Windows 8.


I thought autorecover was implemented with a normal save, not a delta.

Anyway if you have issues waiting minutes for the program to report the error then something has gone terribly wrong with your install. Possibly you have a broken DNS? I don't know, that is weird and has absolutely nothing to do with the saving mechanism.

The point I was making is that there should be constant/regular automatic saves that don't touch the original file.
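
Roughly this pattern, sketched in Node.js/TypeScript (the file names are placeholders, and this is not how Office actually implements it): periodic autosaves go to a sidecar recovery file, and the original is only replaced, atomically, on an explicit save.

    import { promises as fs } from "fs";

    const original = "report.docx";          // placeholder: the document being edited
    const sidecar = original + ".autosave";  // hidden recovery copy

    // Runs on a timer (say every 30 seconds); never touches the original file.
    async function autosave(contents: Buffer): Promise<void> {
      await fs.writeFile(sidecar, contents);
    }

    // Runs only on an explicit save: write a temp file, then rename it over the
    // original so the replacement is atomic, and drop the recovery copy.
    async function explicitSave(contents: Buffer): Promise<void> {
      const tmp = original + ".tmp";
      await fs.writeFile(tmp, contents);
      await fs.rename(tmp, original);
      await fs.rm(sidecar, { force: true });
    }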


>The point I was making is that there should be constant/regular automatic saves that don't touch the original file.

That's not quite what I want. I want the original file to be updated basically constantly.

>Anyway if you have issues waiting minutes for the program to report the error then something has gone terribly wrong with your install. Possibly you have a broken DNS? I don't know, that is weird and has absolutely nothing to do with the saving mechanism.

This issue has persisted for me across multiple desktops and laptops, on business and residential connections, from multiple ISPs over differing physical media across the Puget Sound region. If DNS were the issue, it would be a massive and persistent issue affecting a major tech hub...


I'm utterly baffled, then. I've never seen the error reporting do more than take a memory dump and ask me if I want to send it in; there has never been notable waiting.

Also, do you really want your original file affected when you cut a paragraph with the intent to paste it somewhere else, or when you're doing some analysis and destructively sort the file?


This is quite the future, right? The problem isn't the save icon, it is the notion of "save" itself, which is completely unnatural; i.e. the floppy disk icon is not anachronistic, the floppy disk itself is! When I write in a physical notebook, I don't have to "save my work"; it just happens automatically... you've probably heard this before.

What we really need is a good icon for "bookmark version", and there aren't many good time travel analogies to leverage to come up with good metaphors.


The problem is that there really are times when I don't want to write out changes. Discrete saving isn't an anachronism, it is a feature.


If you want to play around with another version of the document that you'll throw away, fork. Alternatively, make your changes, then throw them away by going back in time.

Save is an anachronism. Don't enforce usability hardships for the common case to satisfy the special rare niche case.


You are ignoring cases where you want to present your changes in an atomic fashion to other systems and people. I don't need my compiler to be churning on code that I know won't compile and I don't need dropbox constantly reuploading binary files that I am in the process of modifying. What is the point of burning network IO every time a user drags his stylus across an image in Photoshop? Atomic interactions are the future, not the past.

Furthermore, "forking" is what happens when you "check out" a file to memory and "merging" your changes occurs when you write them back. You are only modifying the terminology. Replacing the "save" verb with "merge" would not be doing the user any favours.


I'm not ignoring them, I'm just saying they aren't common. The whole point of bookmarking a version of the document is to flag it for special treatment.

I'm totally for getting rid of the discrete "compile" and "debug" icons in the IDE, my research centers around that.


So if you are running an application and editing a config file at the same time, that becomes undefined behaviour? No thanks... I'll keep my atomic interaction.

Do you also continuously merge your feature branches with mainline in realtime?


So wait, the application doesn't have a settings tab, but rather an old-fashioned config file? When presented with anachronistic artifacts, I think you are completely justified in demanding anachronistic interactions.

Don't use high-level word processors to edit your config files; problem solved. But don't demand that our word processors behave like the text editor you use to edit config files.


There's more mental overhead involved in forking/merging than in simply saving to a flat file. This is why source control has a higher barrier to entry than archiving old versions to zip files.


Heh, I wrote about this: http://techcrunch.com/2012/07/14/iconoclasm/

The reflection of the real world in the virtual world has to do with abstractions, usually going back to some of the earliest concepts that served a similar function at the time, but not necessarily for the current age. Which is why we still have pencils, clouds, houses, and arrows instead of ... try to think of something better.


My personal favorite is the word "chipset" from PC hardware.


I always thought that referred to the "set of chips" that were integral to motherboard operation -- during my enthusiast years these were the BIOS and CMOS, the HDD controllers, then the northbridge and southbridge controller, and eventually the audio and network integrated circuits.


I'm not really a hardware person, but I always thought "chipset" meant the on-board non-CPU chips that don't drive external hardware but are nonetheless an essential part of the PC architecture that all OSes expect to be present, like the 8253 timer [1], the PIC [2], or the DMA controller [3].

By contrast, I'd say hardware like the 16550 UART [4] isn't part of the chipset, rather it's part of the serial port which it drives.

> northbridge and southbridge controller

Yeah, memory and I/O control is part of the chipset too, since a CPU without I/O or memory is just an expensive paperweight (or, if powered, an expensive heater).

> HDD controllers, audio and network integrated circuits.

I'd consider these to be more like devices or device drivers.

> BIOS and CMOS

This is kind of a special case. If you ask a CPU guy, it's not part of the chipset; they're just memory modules with different properties from normal RAM. If you ask a motherboard guy, it is part of the chipset. It depends on who you ask.

I can't really cite these distinctions anywhere; they're more along the lines of intuitions I've picked up from spending decades around computers.

[1] http://en.wikipedia.org/wiki/Intel_8253

[2] http://en.wikipedia.org/wiki/Programmable_Interrupt_Controll...

[3] http://en.wikipedia.org/wiki/Direct_memory_access

[4] http://en.wikipedia.org/wiki/16550


I stand thoroughly corrected.


It generally only included the northbridge and southbridge.


Do you mind elaborating? I've never heard anything about that, and my (admittedly short) googling didn't bring up anything relevant.


Remember the original IBM PC and PC AT motherboards that used discrete chips like the 8259 and 8254?


Phones haven't hung in decades? Does the author truly believe offices don't have phones anymore?


No, they used to hang so you could "hang up".


Plenty of phones are still wall mounted. The most common place to see them is wherever there are red emergency phones.


A headset connected to my computer counts as a phone? Well, it can make phone calls, but I miss having a physical phone.


Scott Hanselman wrote a similar article last year [1]

[1]: http://www.hanselman.com/blog/TheFloppyDiskMeansSaveAnd14Oth...


Who cares? "Save" has become obsolete.





