In a random discussion I’m in elsewhere, somebody indicated that they’d tried a number of productivity apps and consistently wound up not using them when life gets crazy.
Cue a design school student hopping in and asserting that the existing productivity apps are therefore badly-designed, because OP wasn’t helped and therefore the apps don’t help “the users that need the most help”.
Obviously I think this is nonsense for a number of reasons, the foremost being that the person with the complaint didn’t even blame the app - they just said they stopped using them when life gets crazy.
But it’s got me thinking. Obviously there’s a huge variety of productivity apps, from simple stuff like Reminders to crazy-complex stuff like OmniFocus. And they all work somewhat differently, largely based around the cognitive models for various productivity methods and/or designers’ preferences.
When it comes to apps, what criteria do you think separates “this is how they’ve chosen to approach this problem, which doesn’t work for me” from “this is badly designed”?
I have been meaning to put out something about exactly this.
In design science, the issue you’re describing is called indeterminacy.
Any design has a theory behind it: the design theory of the thing. The design theory captures the purpose of the design, its design principles, how to implement/use it, and a host of other prescriptive aspects of the design.
(All things have design theories, even if they’re never articulated.)
Now, here’s the kicker: any one designed and used thing — we call these “instantiations” — might not stay true to its design theory. There are a number of sources of variance in how the thing on the ground/in the field might differ from its idealized design theory. Those differences separate use from design, and may cause failure for the use case, even though the design was still perfect.
Yet, because we don’t have a good grasp on design theory indeterminacy, we might blame the design anyway. Like type I and II errors, these issues are the source of bad findings.
Here’s the paper my supervisor co-authored defining this concept, titled “Design Theory Indeterminacy: What Is it, How Can it Be Reduced, and Why Did the Polar Bear Drown?”
The first book I read about this, and the one I still think of, is Alan Cooper’s About Face: The Essentials of User Interface Design. It’s been a minute, but I recall such questions as: why do you go to the File menu when you want to print? Of course we’re all used to this now, but it isn’t intuitive. When the book was written, continuously scrolling calendars weren’t a thing; calendar apps mimicked paper calendars, requiring the user to go from month to month in one-month increments. And lots more.
The workflow comes first, not the app. If the workflow can’t survive first contact with life’s chaos, an app — even one that is superbly designed by any objective measure — is unlikely to solve the problem.
I’ve had the experience of being asked to make a failed paper process paperless without having been given permission to fix the underlying process itself. The end result was of course a failed paperless process.
If I can’t make an app work for me, I can usually chalk it up to either me having an inept workflow or the app being built with a different workflow than mine in mind.
Beside the differing UIs, I think it is a huge (design-adjacent) problem that all those apps, methods, and so on don’t really tell people that not a single item comes off their workload just by using those apps and methods.
A lot of people seem to think that those apps, and e.g. GTD, will do their work for them, and that they will be relieved of the workload as soon as they start using the app.
No one really tells those people that this is not the case, and therefore they abandon the tool the moment they get backed into a corner, because the workload is still the same.
Designers should change this by meeting the user with the right expectations; then users will, most probably, continue using the UI that best fits them to get back out of the corner…
I feel like we’re discussing the design version of Plato’s “ideal forms” and “accidents” and all that such.
Would this be the case with, for example, at least one (English-speaking) user I know who periodically loses incoming mail because he uses the folder explicitly called ‘Trash’ to store his email archive?
Agree 100%. In this particular scenario, the OP had asked why “most organizing apps don’t work”, admitting that they’d tried several, and did well at first but always fell out of the habit when life got crazy. A number of people pointed out the obvious - that the problem wasn’t likely with the apps.
And that’s when the design school guy jumped in.
@ryanjamurphy, the term that kept coming up was “designing for extremes” - which I understood to be the idea that you need to focus not only on your intended designed workflow, but the myriad other ways a user may use the app/product and/or interact with it. Accessibility and similar workflows, workarounds that users come up with for tasks, the ways people actually use the app vs. how you expected them to, etc.
In this case, the claim was that “[blaming the user] goes against everything Don Norman says about product design”. Quite literally, the assertion was that the single data point of the OP’s experience (seemingly) not being able to stick with an app meant that just about every app in the category was objectively poorly designed. OP was the “extreme user”, and all the apps had failed them, so the apps were poorly designed.
Surely that can’t be a sensible way to look at the design field, can it?
This is a good point. I know David Allen calls this sort of stuff out in his GTD materials, but obviously DA isn’t an app designer.
I know there are some apps that are pretty egregious about asserting that their app will help somebody actually get more done, as if the app will somehow magically fix the issue somebody has with overcommitting and having too much work. It would probably be better if more of them - instead of being neutral - actually tipped in the opposite direction, though.
“Trash”, certainly in British English, is only a proximal synonym for rubbish or waste, which I understand to be the American English definition of the word. In the UK one is more likely to speak of a “rubbish bin” or “waste bin” (usually in the sense of that found in a kitchen or workshop), or a “waste paper bin” or perhaps “waste paper basket” in an office or study. In England one would empty the aforesaid receptacles into the “dust bin”, maybe the “rubbish bin”, not the “garbage”. In Scotland, as likely as not, they are emptied into the “bucket”, to be collected by the “scaffies”.
This is not to explain why someone would use “trash” as an archive, but it does at least suggest that linguistic descriptions are not universal, and designers should recognise that.
On which, point number two: using software is a learned skill; surely good design should facilitate the learning process?
For example, I have yet to see any application or file system that attaches a readily accessible descriptive element to folders, thus allowing a user to determine its intended purpose; names are not always enough. Perhaps if a tool tip had appeared saying “Messages put in the Trash folder will be deleted, either when you choose to ‘empty trash’ or automatically after 30 days or if the number of messages in Trash exceeds 100” your user would have more easily discovered the designer’s intent.
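To illustrate the idea (a purely hypothetical sketch — no real mail client or file system exposes this, and every name below is invented), attaching a human-readable purpose to each folder and surfacing it as tooltip text could look like:

```python
# Hypothetical sketch: a human-readable "purpose" attached to mail folders,
# which a client could surface as a tooltip. All names are invented.

FOLDER_DESCRIPTIONS = {
    "Inbox": "New messages land here until you file or delete them.",
    "Archive": "Long-term storage; messages here are kept indefinitely.",
    "Trash": ("Messages put in the Trash folder will be deleted when you "
              "choose 'empty trash' or automatically after 30 days."),
}

def tooltip_for(folder_name: str) -> str:
    """Return the descriptive tooltip for a folder, or a generic fallback."""
    return FOLDER_DESCRIPTIONS.get(folder_name, "No description available.")

print(tooltip_for("Trash"))
```

The point isn’t the mechanism (a dictionary lookup here), but that intent lives alongside the name, so a user hovering over “Trash” learns it is not an archive before losing mail.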
I like tool tips. I like them because they are a universally applicable tool; once I’ve learned of the existence of the concept of a tooltip, I can — or at least, should — be able to apply it anywhere. Also, they play on the hesitancy that naturally arises from caution in the face of uncertainty.
Yes, yes, yes! There are things like tool tips, of course, but it’s also possible to 1) build in discoverability, 2) stick to well-established convention when it’s reasonable to do so, 3) let the user add complexity step-by-step while they use the app on a regular basis, and 4) provide good tutorials that explain how to use the app in accordance with its design.
Then there’s the kind of app that practically revels in its impenetrability. (I’m looking at you, Tinderbox.)
Boy did this one hit home this morning. I am currently in the middle of five large home improvement projects, an “I don’t know” number of work projects, and I’m teaching two programming classes at the community college. I have too much going. OmniFocus has become a wasteland I’m just not looking at, and when I do look at it it’s full of garbage (rubbish?) that I’m just not going to do, or I’m not going to do today.
Is this bad design on the part of OmniFocus? A failing in the GTD system I’ve been following for years? Or is it my fault for not following the system like I should? I even forgot to take the trash to the curb the other day, something I haven’t done in years because it’s a regularly scheduled recurring event and OmniFocus reminds me to do it. For whatever reason, I’ve found the breaking point of my personal productivity system.
For now, I’m breaking things into large blocks. At work, I’m going to work on X project today, I’ll work on it till it’s finished, then move on to project Y. When the clock chimes and it’s time to work on household projects, I’m going to work on project A till finished. Writing down “next actions” is getting me nowhere when I need to build a shed. But putting things on a shopping list as I realize I need them, then scheduling a run to the hardware store has been worthwhile.
I’m beginning to think that a lot of the time I’ve been putting into OmniFocus, GTD, and task management in general has just been screwing around to pass the time. Feeling productive without actually being productive.
About two weeks ago (15 days to be exact) we went to do the first field test of the AnimalTrakker Male Breeding Soundness application. This is an expansion of the January incarnation, Bull Breeding Soundness, designed and programmed to handle other species, specifically sheep. We had two full development systems (primary and backup) plus OSB tablets, redundant reader hardware, battery backups, a spare printer, label paper, power cables, an inverter to run off the car battery if necessary, and more. (This is not the first time we’ve done in-the-field testing with new stuff.)
In addition to a plethora of hardware issues, each fixable but slow, the workflow we encountered was not one I had ever envisioned someone using. So the workflow the software was designed to handle didn’t work.
Code did not survive first contact with the sheep.
I took lots of notes, and at the end of the day we left with it not quite a total disaster but pretty darned close to it.
Two very long days of programming later, I had reworked the user interface to handle both my originally designed workflow and the radically different one we encountered in the field. We made the changes, and the user came to pick up the system two days later. We are still at early alpha code, at best.
Long story short, I believe part of the problem is that app developers as a rule do not involve the final customer early enough in the design process to discover the real problems the users want apps to help with. Users, in general, cannot articulate what they expect in terms that a programmer can interpret and transform into an app. It is only by getting out there with a laptop and your IDE (in case you can make simple changes on the fly) and shadow a user as they attempt to use your app to help them with a problem you thought they had that you begin to get an inkling of the REAL problem they want you to solve.
Productivity apps suffer from the fact that there is no universal agreement among users as to what productivity means or how it looks in practice. So developers, each trying to make the next best thing, or build a lucrative business, or solve their own personal problem, all approach the issue from a different perspective.
For me, good design means users have found a system that works with their own workflow models and maybe provides a bit of a nudge toward a more efficient or easier workflow, but does not mandate it.
Good design is one that helps the user reach a goal.
My file disposal receptacle on macOS says “Trash”. Is this perhaps a localization thing that’s not localized everywhere?
With absolutely zero judgement expressed or implied (we’ve all been there!), my guess is that Omnifocus is probably tracking everything that you’ve put in there, but that you have a lot of things in there that you aren’t actually committed to doing. Per GTD, you should stop tracking those things - no matter what method you’re using for task tracking.
This is why I think it’s OK for an app to be opinionated and up front about it. The “up front about it” part is critical: if an app is designed with a particular workflow or workflow philosophy in mind—e.g., the way OmniFocus was built around classic GTD—say so.
Don Norman’s rules help create excellent designs for everyday things, but not all things are everyday. Everyone needs to use a door to get into a building. Not everyone needs to organize their work in accordance with GTD. To this end… a reminder: the root of “design” is “designation.” The act of designing is to designate what is and what is not important. Ergo, it is up to the designer whether the extremes matter or not.
Certainly there are many cases in which designing for extremes creates a better product for all — cf. accessibility and the subfield of “lead user innovation”. But not every user’s a lead user, and the designer’s only got so much time in the day to do designing. So the designer may have to create a design that is bad for some but is better for others. Then, deciding whether the design is bad or not obviously just depends on which one you are.
So, in these cases, I would argue that objectively assessing whether a design is good or not just isn’t a thing.
However, another way of assessing the quality of a design is its adherence to the design theory that drives it. In my own research/work in this area lately, I’m hoping to make it easier for folks to articulate the design theories and principles underpinning their designs such that:
a. it is easier to separate and compare design with artifact and implementation; and
b. it is easier to measure/judge the degree to which artifact+implementation fulfills the design.
By separating design (theory) from artifact and implementation, I think you can judge the quality of a design: simply look at design theory as the promises the artifact + implementation is supposed to fulfill. If it fails to fulfill some promises, ask why: if the reason is because the designer has introduced design error by making mistakes between theory and actualization, then sure, it’s a bad design.
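As a toy illustration of that last step (my own sketch, not from the paper — all names and promises below are invented), a design theory can be modeled as a set of testable promises, and an instantiation scored against them to separate design error from drift:

```python
# Toy sketch: a design theory as named promises, each a predicate the
# artifact/implementation should satisfy. All names are invented.

from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class DesignTheory:
    purpose: str
    promises: dict[str, Callable[[Any], bool]] = field(default_factory=dict)

    def evaluate(self, artifact: Any) -> dict[str, bool]:
        """Check each promise against a concrete artifact/implementation."""
        return {name: check(artifact) for name, check in self.promises.items()}

# Example: a hypothetical task-app design theory with two promises.
theory = DesignTheory(
    purpose="Help users capture and review commitments",
    promises={
        "capture is one step": lambda app: app["steps_to_capture"] <= 1,
        "review surfaces overdue items": lambda app: app["shows_overdue"],
    },
)

# An instantiation that drifted from the theory: capture takes three taps.
app = {"steps_to_capture": 3, "shows_overdue": True}
results = theory.evaluate(app)
print(results)
```

An unfulfilled promise is then a pointer to ask *why*: was the theory wrong, or did the instantiation drift from it during implementation?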