
Design Blog

Articles for people who make web sites.

Webmentions: Enabling Better Communication on the Internet

Thu, 07/19/2018 - 9:00am

Over 1 million Webmentions will have been sent across the internet since the specification was made a full Recommendation by the W3C—the standards body that guides the direction of the web—in early January 2017. That number is rising rapidly, and in the last few weeks I’ve seen a growing volume of chatter on social media and the blogosphere about these new “mentions” and the people implementing them.

So what are Webmentions and why should we care?

While the technical specification published by the W3C may seem incomprehensible to most, it’s actually a straightforward and extremely useful concept with a relatively simple implementation. Webmentions help to break down some of the artificial walls being built within the internet and so help create a more open and decentralized web. There is also an expanding list of major web platforms already supporting Webmentions either natively or with easy-to-use plugins (more on this later).

Put simply, Webmention is a (now) standardized protocol that enables one website address (URL) to notify another website address that the former contains a reference to the latter. It also allows the latter to verify the authenticity of the reference and include its own corresponding reference in a reciprocal way. In order to understand what a big step forward this is, a little history is needed.

The rise of @mentions

By now most people are familiar with the ubiquitous use of the “@” symbol in front of a username, which originated on Twitter and became known as @mentions and @replies (read “at mentions” and “at replies”). For the vast majority, this is the way that one user communicates with other users on the platform, and over the past decade these @mentions, with their corresponding notification to the receiver, have become a relatively standard way of communicating on the internet.

Tweet from Wiz Khalifa

Many other services also use this type of internal notification to indicate to other users that they have been referenced directly or tagged in a post or photograph. Facebook allows it, so does Instagram. Google+ has a variant that uses + instead of @, and even the long-form article platform Medium, whose founder Ev Williams also co-founded Twitter, quickly joined the @mentions party.

The biggest communications problem on the internet

If you use Twitter, your friend Alice only uses Facebook, your friend Bob only uses his blog on WordPress, and your pal Chuck is over on Medium, it’s impossible for any one of you to @mention another. You’re all on different and competing platforms, none of which interoperate to send these mentions or notifications of them. The only way to communicate in this way is if you all join the same social media platforms, resulting in the average person being signed up to multiple services just to stay in touch with all their friends and acquaintances.

Given the issues of privacy and identity protection, different use cases, the burden of additional usernames and passwords, and the time involved, many people don’t want to do this. Possibly worst of all, your personal identity on the internet can end up fragmented like a Horcrux across multiple websites over which you have little, if any, control.

Imagine if AT&T customers could only speak to other AT&T customers and needed a separate phone, account, and phone number to speak to friends and family on Verizon. And still another to talk to friends on Sprint or T-Mobile. The massive benefit of the telephone system is that if you have a telephone and service (from any one of hundreds or even thousands of providers worldwide), you can potentially reach anyone else using the network. Surely, with a basic architecture based on simple standards, links, and interconnections, the same should apply to the internet?

The solution? Enter Webmentions!

As mentioned earlier, Webmentions allow notifications between web addresses. If both sites are set up to send and receive them, the system works like this:

  1. Alice has a website where she writes an article about her rocket engine hobby.
  2. Bob has his own website where he writes a reply to Alice’s article. Within his reply, Bob includes the permalink URL of Alice’s article.
  3. When Bob publishes his reply, his publishing software automatically notifies Alice’s server that her post has been linked to by the URL of Bob’s reply.
  4. Alice’s publishing software verifies that Bob’s post actually contains a link to her post and then (optionally) includes information about Bob’s post on her site; for example, displaying it as a comment.

A Webmention is simply an @mention that works from one website to another!
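
To make steps 3 and 4 concrete, here is a minimal sketch of the sending side in Python using the third-party requests library. The URLs and helper names are hypothetical; a complete sender would also look for `<link>` and `<a rel="webmention">` elements in the HTML body and resolve relative endpoint URLs, as the specification describes.

```python
# A minimal sketch of the sending side of Webmention (step 3 above),
# using the third-party "requests" library. URLs below are hypothetical.
import requests

def discover_endpoint(target):
    """Find the target page's Webmention endpoint via its HTTP Link header.
    (A complete sender would also check <link>/<a rel="webmention"> in the
    HTML body and resolve relative URLs against the target.)"""
    resp = requests.get(target, timeout=10)
    link = resp.links.get("webmention")  # requests parses the Link header
    return link["url"] if link else None

def send_webmention(source, target):
    """Notify the target page that `source` links to it."""
    endpoint = discover_endpoint(target)
    if endpoint is None:
        return None  # target does not advertise Webmention support
    # The notification is just a form-encoded POST with two parameters.
    return requests.post(endpoint, data={"source": source, "target": target})

if __name__ == "__main__":
    # Bob's publishing software notifying Alice's site about his reply:
    send_webmention(
        source="https://bob.example/reply-to-rocket-engines",
        target="https://alice.example/rocket-engines",
    )
```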

If she chooses, Alice can include the full text of Bob’s reply—along with his name, photo, and his article’s URL (presuming he’s made these available)—as a comment on her original post. Any new readers of Alice’s article can then see Bob’s reply underneath it. Each can carry on a full conversation from their own websites and in both cases display (if they wish) the full context and content.

Using Webmentions, both sides can carry on a conversation where each is able to own a copy of the content and provide richer context.

User behaviors with Webmentions are a little different than they are with @mentions on Twitter and the like in that they work between websites in addition to within a particular website. They enable authors (of both the original content and the responses) to own the content, allowing them to keep a record on the web page where it originated, whether that’s a website they own or the third-party platform from which they chose to send it.

Interaction examples with Webmention

Webmentions certainly aren’t limited to creating or displaying “traditional” comments or replies. With the use of simple semantic microformats classes and a variety of parsers written in numerous languages, one can explicitly post bookmarks, likes, favorites, RSVPs, check-ins, listens, follows, reads, reviews, issues, edits, and even purchases. The result? Richer connections and interactions with other content on the web and a genuine two-way conversation instead of a mass of unidirectional links. We’ll take a look at some examples, but you can find more on the IndieWeb wiki page for Webmention alongside some other useful resources.
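
For a sense of how those richer post types surface in code, here is a rough sketch of how a receiving site might classify an incoming mention by the microformats2 properties (like-of, repost-of, in-reply-to, and so on) found on the source post. The parsed structure shown is the canonical microformats2 JSON that parsers such as mf2py produce; the mapping order is just one reasonable choice, not a normative algorithm.

```python
# A rough sketch of classifying an incoming mention by the microformats2
# properties on the source post. The parsed structure is the canonical
# microformats2 JSON produced by parsers such as mf2py; the mapping order
# below is one reasonable choice, not a normative algorithm.

RESPONSE_TYPES = [
    ("rsvp", "rsvp"),
    ("repost-of", "repost"),
    ("like-of", "like"),
    ("bookmark-of", "bookmark"),
    ("in-reply-to", "reply"),
]

def classify_mention(parsed):
    """Return a label ("like", "reply", ...) for the first h-entry found."""
    for item in parsed.get("items", []):
        if "h-entry" not in item.get("type", []):
            continue
        properties = item.get("properties", {})
        for prop, label in RESPONSE_TYPES:
            if prop in properties:
                return label
        return "mention"  # a plain link with no explicit response property
    return "mention"

# Example: a source post marked up as a "like" of Alice's article.
parsed = {
    "items": [{
        "type": ["h-entry"],
        "properties": {"like-of": ["https://alice.example/rocket-engines"]},
    }]
}
print(classify_mention(parsed))  # -> like
```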

Marginalia

With Webmention support, one could architect a site to allow inline marginalia and highlighting similar to Medium.com’s relatively well-known functionality. With the clever use of URL fragments, which are well supported in major browsers, there are already examples of people who use Webmentions to display word-, sentence-, or paragraph-level marginalia on their sites. After all, aren’t inline annotations just a more targeted version of comments?

An inline annotation on the post “Hey Ev, what about mentions?” in which Medium began to roll out their @mention functionality.

Reads

As another example, and something that could profoundly impact the online news business, I might post a link on my website indicating I’ve read a particular article on, say, The New York Times. My site sends a “read” Webmention to the article, where a facepile or counter showing the number of read Webmentions received could be implemented. Because of the simplified two-way link between the two web pages, there is now auditable proof of interaction with the content. This could similarly work with microinteractions such as likes, favorites, bookmarks, and reposts, resulting in a clearer representation of the particular types of interaction a piece of content has received. Compared to an array of nebulous social media mini-badges that provide only basic counters, this is a potentially more valuable indicator of a post’s popularity, reach, and ultimate impact.

Listens

Building on the idea of using reads, one could extend Webmentions to the podcasting or online music sectors. Many platforms are reasonably good at providing download numbers for podcasts, but it is far more difficult to track the number of actual listens. This can have a profound effect on the advertising market that supports many podcasts. People can post about what they’re actively listening to (either on their personal websites or via podcast apps that could report the percentage of the episode listened to) and send “listen” Webmentions to pages for podcasts or other audio content. These could then be aggregated for demographics on the back end or even shown on the particular episode’s page as social proof of the podcast’s popularity.

For additional fun, podcasters or musicians might use Webmentions in conjunction with media fragments and audio or video content to add timecode-specific, inline comments to audio/video players to create an open standards version of SoundCloud-like annotations and commenting.

SoundCloud allows users to insert inline comments that dovetail with specific portions of audio.

Reviews

Websites selling products or services could also accept review-based Webmentions that include star-based ratings scales as well as written comments with photos, audio, or even video. Because Webmentions are a two-way protocol, the reverse link to the original provides an auditable path to the reviewer and the opportunity to assess how trustworthy their review may be. Of course, third-party trusted sites might also accept these reviews, so that the receiving sites can’t easily cherry-pick only positive reviews for display. And because the Webmention specification includes the functionality for editing or deletion, the original author has the option to update or remove their reviews at any time.

Getting started with Webmentions

Extant platforms with support

While the specification has only recently become a broad recommendation for use on the internet, there are already an actively growing number of content management systems (CMSs) and platforms that support Webmentions, either natively or with plugins. The simplest option, requiring almost no work, is a relatively new and excellent social media service called Micro.blog, which handles Webmentions out of the box. CMSs like Known and Perch also have Webmention functionality built in. Download and set up the open source software and you’re ready to go.

If you’re working with WordPress, there’s a simple Webmention plugin that will allow you to begin using Webmentions—just download and activate it. (For additional functionality when displaying Webmentions, there’s also the recommended Semantic Linkbacks plugin.) Other CMSs like Drupal, ProcessWire, Elgg, Nucleus CMS, Craft, Django, and Kirby also have plugins that support the standard. A wide variety of static site generators, like Hugo and Jekyll, have solutions for Webmention technology as well. More are certainly coming.

If you can compose basic HTML on your website, Aaron Parecki has written an excellent primer on “Sending Your First Webmention from Scratch.”

A weak form of Webmention support can be bootstrapped for Tumblr, WordPress.com, Blogger, and Medium with help from the free Bridgy service, but the user interface and display would obviously be better if they were supported fully and natively.

As a last resort, if you’re using Tumblr, WordPress.com, Wix, Squarespace, Ghost, Joomla, Magento, or any of the other systems without Webmention, file tickets asking them to support the standard. It only takes a few days of work for a reasonably experienced developer to build support, and it substantially improves the value of the platform for its users. It also makes them first-class decentralized internet citizens.

Webmentions for developers

If you’re a developer or a company able to hire a developer, it is relatively straightforward to build Webmentions into your CMS or project, even potentially open-sourcing the solution as a plugin for others. For anyone familiar with the old specifications for pingback or trackback, you can think of Webmentions as a major iteration of those systems, but with easier implementation and testing, improved performance and display capabilities, and decreased spam vulnerabilities. Because the specification supports editing and deleting Webmentions, it provides individuals with more direct control of their data, which is important in light of new laws like GDPR.

In addition to reading the specification, as mentioned previously, there are multiple open source implementations already written in a variety of languages that you can use directly, or as examples. There are also a test suite and pre-built services like Webmention.io, Telegraph, mention-tech, and webmention.herokuapp.com that can be quickly leveraged.
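
As a rough illustration of the receiving side, the sketch below uses Flask and requests. The route, domain, and storage are hypothetical; a production receiver would typically queue verification asynchronously, respond 202 Accepted, and guard against spoofed or malicious sources, as the specification and existing implementations describe.

```python
# A rough sketch of a Webmention receiving endpoint using Flask and requests.
# The route, domain, and in-memory storage are hypothetical placeholders.
from flask import Flask, abort, request
import requests

app = Flask(__name__)
MENTIONS = []  # stand-in for real persistence

@app.route("/webmention", methods=["POST"])
def receive_webmention():
    source = request.form.get("source")
    target = request.form.get("target")
    if not source or not target:
        abort(400, "Both source and target parameters are required.")
    if not target.startswith("https://alice.example/"):
        abort(400, "Target is not on this site.")
    # Verification: fetch the source and confirm it really links to the target.
    resp = requests.get(source, timeout=10)
    if target not in resp.text:
        abort(400, "Source does not appear to link to target.")
    MENTIONS.append({"source": source, "target": target})
    return "", 202  # accepted; real receivers often verify asynchronously
```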

Maybe your company allows employees to spend 20% of their time on non-specific projects, as Google does. If so, I’d encourage you to take the opportunity to build Webmentions support for one or more platforms—let’s spread the love and democratize communication on the web as fast as we can!

And if you already have a major social platform but don’t want to completely open up to sending and receiving Webmentions, consider using Webmention functionality as a simple post API. I could easily see services like Twitter, Mastodon, or Google+ supporting the receiving of Webmentions, combined with a simple parsing mechanism to allow Webmention senders to publish syndicated content on their platform. There are already several services like IndieNews, with Hacker News-like functionality, that allow posting to them via Webmention.

If you have problems or questions, I’d recommend joining the IndieWeb chat room online via IRC, web interface, Slack, or Matrix to gain access to further hints, pointers, and resources for implementing a particular Webmention solution.

The expansion of Webmentions

The big question many will now have is: will traditional social media walled gardens such as Facebook, Twitter, and Instagram support the Webmention specification?

At present, they don’t, and many may never do so. After all, locking you into their services is enabling them to leverage your content and your interactions to generate income. However, I suspect that if one of the major social platforms enabled sending/receiving Webmentions, it would dramatically disrupt the entire social space.

In the meantime, if your site already has Webmentions enabled, then congratulations on joining the next revolution in web communication! Just make sure you advertise the fact by using a button or badge. You can download a copy here.


Order Out of Chaos: Patterns of Organization for Writing on the Job

Thu, 07/05/2018 - 9:18am

A few years ago, a former boss of mine emailed me out of the blue and asked for a resource that would help him and his colleagues organize information more effectively. Like a dutiful friend, I sent him links to a few articles and the names of some professional writing books. And I qualified my answer with that dreaded disclaimer: “Advice varies widely depending on the situation.” Implication: “You’ll just have to figure out what works best for you. So, good luck!”

In retrospect, I could have given him a better answer. Much like the gestalt principles of design that underpin so much of what designers do, there are foundational principles and patterns of organization that are relevant to any professional who must convey technical information in writing, and you can adapt these concepts to bring order out of chaos whether or not you’re a full-time writer.

Recognize the primary goals: comprehension and performance

Not long after I wrote my response, I revisited a book I’d read in college: Technical Editing, by Carolyn D. Rude. In my role as a technical writer, I reference the book every now and then for practical advice on revising software documentation. This time, as I reviewed the chapter on organization, I realized that Rude explained the high-level goals and principles better than any other author I’d read up to that point.

In short, she says that whether you are outlining a procedure, describing a product, or announcing a cool new feature, a huge amount of writing in the workplace is aimed at comprehension (here’s what X is and why you should care) and performance (here’s how to do X). She then suggests that editors choose from two broad kinds of order to support these goals: content-based order and task-based order. The first refers to structures that guide readers from major sections to more detailed sections to facilitate top-down learning; the second refers to structures of actions that readers need to carry out. Content-based orders typically start with nouns, whereas task-based orders typically begin with verbs.

Content-Based Order Example

Product Overview

  • Introduction
  • Features
    • Feature 1
    • Feature 2
    • Feature n
  • Contact
  • Support

Task-Based Order Example

User Guide (WordPress)

  • Update your title and tagline
  • Pick a theme you love
  • Add a header or background
  • Add a site icon
  • Add a widget

Of course, not all writing situations fall neatly into these buckets. If you were to visit Atlassian’s online help content, you would see a hybrid of content-based topics at the first level and task-based topics within them. The point is that as you begin to think about your organization, you should ask yourself:

  • Which of the major goals of organization (comprehension or performance) am I trying to achieve?
  • And which broad kind of order will help me best achieve those goals?

This is still pretty abstract, so let’s consider the other principles from Carolyn Rude, but with a focus on how a writer rather than an editor should approach the task of organization.1

Steal like an organizer: follow pre-established document structures

In his book Steal Like an Artist, Austin Kleon argues that smart artists don’t actually create anything new but rather collect inspiring ideas from specific role models, and produce work that is profoundly shaped by them.

“If we’re free from the burden of trying to be completely original,” he writes, “we can stop trying to make something out of nothing, and we can embrace influence instead of running away from it.”

The same principle applies to the art of organization. To “steal like an organizer” means to look at what other people have written and to identify and follow pre-established structures that may apply to your situation. Doing so not only saves time and effort but also forces you to remember that your audience may already expect a particular pattern—and experience cognitive dissonance if they don’t get it.

You are probably familiar with more pre-established structures than you think. News reports follow the inverted pyramid. Research reports often adhere to some form of the IMRAD structure (Introduction, Methodology, Results, and Discussion). Instruction manuals typically have an introductory section followed by tasks grouped according to the typical sequence a user would need to follow. Even troubleshooting articles tend to have a standard structure of Problem, Cause, and Solution.

All this may sound like common sense, and yet many writers entirely skip this process of adapting pre-made structures. I can understand the impulse. When you face a blank screen, it feels simpler to capture the raw notes and organize it all later. That approach can certainly help you get into the flow, but it may also result in an ad hoc structure that fails to serve readers who are less familiar with your material.

Instead, when you begin the writing process, start by researching available templates or pre-made structures that could support your situation. Standard word processors and content management systems already contain some good templates, and it’s easy to search for others online. Your fellow writers and designers are also good resources. If you’re contributing to a series of documents at your organization, you should get familiar with the structure of that series and learn how to work within it. Or you can do some benchmarking and steal some ideas from how other companies structure similar content.

My team once had to do our own stealing for a major project that affected about half our company. We needed to come up with a repeatable structure for standard operating procedures (SOPs) that any employee could use to document a set of tasks. Knowing SOPs to be a well-established genre, we found several recommended structures online and in books, and came up with a list of common elements. We then decided which ones to steal and arranged them into a sequence that best suited our audience. We made out like bandits.

Structural SOP Elements We Found, with Our Assessment:

  • Overview: Steal
  • Roles Involved: Steal
  • Dependencies: Steal
  • Estimated Level of Effort: Nah, too hard to calculate and maintain.
  • Process Diagram: Meh, kind of redundant, not to mention a lot of work. No thanks.
  • Tasks: Steal
  • Task n: Steal
  • Task n Introduction: Steal
  • Task n Responsibility: Steal
  • Task n Steps: Steal
  • See Also: Steal

But what if there is no pre-established pattern? Or what if a pattern exists, but it’s either too simple or too complex for what you’re trying to accomplish? Or what if it’s not as user-friendly as you would like?

There may indeed be cases where you need to develop a mostly customized structure, which can be daunting. But fear not! That’s where the other principles of organization come in.

Anticipate your readers’ questions (and maybe even talk to them)

Recently I had an extremely frustrating user experience. While consulting some documentation to learn about a new process, I encountered a series of web pages that gave no introduction and dove straight into undefined jargon and acronyms that I had never heard of. When I visited related pages to get more context, I found the same problem. There was no background information for a newbie like me. The writers failed in this case to anticipate my questions and instead assumed a great deal of prior knowledge.

Don’t make this mistake when you design your structure. Like a journalist, you need to answer the who, what, where, when, how, and why of your content, and then incorporate the answers in your structure. Anticipate common questions, such as “What is this? Where do I start? What must I know? What must I do?” This sort of critical reflection is all the more important when organizing web content, because users will almost certainly enter and exit your pages in nonlinear, unpredictable ways.

If possible, you should also meet with your readers, and gather information about what would best serve them. One simple technique you could try is to create a knowledge map, an annotated matrix of sorts that my team once built after asking various teams about their information priorities. On the left axis, we listed categories of information that we thought each team needed. Along the top axis, we listed a column for each team. We then gave team representatives a chance to rank each category and add custom categories we hadn’t included. (You can learn more about the process we followed in this video presentation.)

A knowledge map my team created after asking other teams which categories of information were most important to them.

The weakness of this approach is that it doesn’t reveal information that your audience doesn’t know how to articulate. To fill in this gap, I recommend running a few informal usability tests. But if you don’t have the time for that, building a knowledge map is better than not meeting with your readers at all, because it will help you discover structural ideas you hadn’t considered. Our knowledge map revealed multiple categories that were required across almost all teams—which, in turn, suggested a particular hierarchy and sequence to weave into our design.

Go from general to specific, familiar to new

People tend to learn and digest information best by going from general to specific, and familiar to new. By remembering this principle, which is articulated in the schema theory of learning, you can better conceptualize the structure you’re building. What are the foundational concepts of your content? They should appear in your introductory sections. What are the umbrella categories under which more detailed categories fall? The answer should determine which headings belong at the top and subordinate levels of your hierarchy. What you want to avoid is presenting new ideas that don’t flow logically from the foundational concepts and expectations that your readers bring to the table.

Consider the wikiHow article “How to Create a Dungeons and Dragons Character.” It begins by defining what Dungeons and Dragons is and explaining why you need to create a character before you can start playing the game.

Writers at wikiHow help readers learn by starting with general concepts before moving on to specifics.

The next section, “Part 1: Establishing the Basics,” guides the reader into subsequent foundational steps, such as deciding which version of the game to follow and printing out a character sheet. Later sections (“Selecting a gender and race,” “Choosing a class,” and “Calculating ability scores”) expand on these concepts to introduce more specific, unfamiliar ideas in an incremental fashion, leading readers up a gentle ramp into new territory.

Use conventional patterns to match structure to meaning

Within the general-to-specific/familiar-to-new framework, you can apply additional patterns of organization that virtually all humans understand. Whereas the pre-established document structures above are usually constructed for particular use cases or genres, other conventional patterns match more general mental models (or “schemas,” as the schema theory so elegantly puts it) that we use to make sense of the world. These patterns include chronological, spatial, comparison-contrast, cause-effect, and order of importance.

Chronological

The chronological pattern reveals time or sequence. It’s appropriate for things like instructions, process flows, progress reports, and checklists. In the case of instructions, the order of tasks on a page often implies (or explicitly states) the “proper” or most common sequence for a user to follow. The wikiHow article above, for example, offers a recommended sequence of tasks for beginner players. In the case of progress reports, the sections may be ordered according to the periods of time in which work was done, as in this sample outline from the book Reporting Technical Information, by Kenneth W. Houp et al.:

Beginning

  • Introduction
  • Summary of work completed

Middle

  • Work completed
    • Period 1 (beginning and end dates)
      • Description
      • Cost
    • Period 2 (beginning and end dates)
      • Description
      • Cost
  • Work remaining
    • Period 3 (or remaining periods)
      • Description of work to be done
      • Expected cost

End

  • Evaluation of work in this period
  • Conclusions and recommendations

The principles of organization listed in this article are in fact another example of the chronological pattern. As Carolyn Rude points out in her book, the principles are arranged as a sort of methodology to follow. Try starting at the top of the list and work your way down. You may find it to be a useful way to produce order out of the chaos before you.

Spatial

The spatial pattern refers to top-to-bottom, left-to-right structures of organization. This is a good pattern if you need to describe the components of an interface or a physical object.

Take a look at the neighbor comparison graph below, which is derived from a sample energy efficiency solution offered by Oracle Utilities. Customers who see this graph would most likely view it from top to bottom and left to right.

A neighbor comparison graph that shows a customer how they compare with their neighbors in terms of energy efficiency.

A detailed description of this feature would then describe each component in that same order. Here’s a sample outline:

  • Feature name
    • Title
    • Bar chart
      • Efficient neighbors
      • You
      • Average neighbors
    • Date range
    • Performance insight
      • Great
      • Good
      • Using more than average
    • Energy use insight
    • Comparison details (“You’re compared with 10 homes within 6 miles …”)

Comparison-contrast

The comparison-contrast pattern helps users weigh options. It’s useful when reporting the pros and cons of different decisions or comparing the attributes of two or more products or features. You see it often when you shop online and need to compare features and prices. It’s also a common pattern for feasibility studies or investigations that list options along with upsides and downsides.

Cause-effect

The cause-effect pattern shows relationships between actions and reactions. Writers often use it for things like troubleshooting articles, medical diagnoses, retrospectives, and root cause analyses. You can move from effect to cause, or cause to effect, but you should stick to one direction and use it consistently. For example, the cold and flu pages at Drugs.com follow a standard cause-effect pattern that incorporates logical follow-up sections such as “Prevention” and “Treatment”:

  • What Is It? (This section defines the illness and describes possible “causes.”)
  • Symptoms (This section goes into the “effects” of the illness.)
  • Diagnosis
  • Expected Duration
  • Prevention
  • Treatment
  • When to Call a Professional
  • Prognosis

For another example, see the “Use parallel structure for parallel sections” section below, which shows what a software troubleshooting article might look like.

Order of importance

The order of importance pattern organizes sections and subsections of content according to priority or significance. It is common in announcements, marketing brochures, release notes, advice articles, and FAQs.

The order of importance pattern is perhaps the trickiest one to get right. As Carolyn Rude says, it’s not always clear what the most important information is. What should come in the beginning, middle, and end? Who decides? The answers will vary according to the author, audience, and purpose.

When writing release notes, for example, my team often debates which software update should come first, because we know that the decision will underscore the significance of that update relative to the others. FAQs by definition are focused on which questions are most common and thus most important, but the exact order will depend on what you perceive as being the most frequent or the most important for readers to know. (If you are considering writing FAQs, I recommend this great advice from technical writer Lisa Wright.)

Other common patterns

Alphabetical order is a common pattern that Rude doesn’t mention in detail but that you may find helpful for your situation. To use this pattern, you would simply list sections or headings based on the first letter of the first word of the heading. For example, alphabetical order is used frequently to list API methods in API documentation sites such as those for Flickr, Twitter, and Java. It is also common in glossaries, indexes, and encyclopedic reference materials where each entry is more or less given equal footing. The downside of this pattern is that the most important information for your audience may not appear in a prominent, findable location. Still, it is useful if you have a large and diverse set of content that defies simple hierarchies and is referenced in a non-linear, piecemeal fashion.

Group related material

Take a look at the lists below. Which do you find easier to scan and digest?

  1. Settle on a version of D&D.
  2. Print a character sheet, if desired.
  3. Select a gender and race.
  4. Choose a class.
  5. Name your character.
  6. Identify the main attributes of your character.
  7. Roll for ability scores.
  8. Assign the six recorded numbers to the six main attributes.
  9. Use the “Point Buy” system, alternatively.
  10. Generate random ability scores online.
  11. Record the modifier for each ability.
  12. Select skills for your character.
  13. List your character’s feats.
  14. Roll for your starting gold.
  15. Equip your character with items.
  16. Fill in armor class and combat bonuses.
  17. Paint a picture of your character.
  18. Determine the alignment of your character.
  19. Play your character in a campaign.

Part 1: Establishing the Basics

  1. Settle on a version of D&D.
  2. Print a character sheet, if desired.
  3. Select a gender and race.
  4. Choose a class.
  5. Name your character.

Part 2: Calculating Ability Scores

  1. Identify the main attributes of your character.
  2. Roll for ability scores.
  3. Assign the six recorded numbers to the six main attributes.
  4. Use the “Point Buy” system, alternatively.
  5. Generate random ability scores online.
  6. Record the modifier for each ability.

Part 3: Equipping Skills, Feats, Weapons, and Armor

  1. Select skills for your character.
  2. List your character’s feats.
  3. Roll for your starting gold.
  4. Equip your character with items.
  5. Fill in armor class and combat bonuses.

Part 4: Finishing Your Character

  1. Paint a picture of your character.
  2. Determine the alignment of your character.
  3. Play your character in a campaign.

(Source: wikiHow: How to Create a Dungeons and Dragons Character.)

If you chose the second list, that is probably because the writers relied on a widely used organizational technique: grouping.

Grouping is the process of identifying meaningful categories of information and putting information within those categories to aid reader comprehension. Grouping is especially helpful when you have a long, seemingly random list of information that could benefit from an extra layer of logical order. An added benefit of grouping is that it may reveal where you have gaps in your content or where you have mingled types of content that don’t really belong together.

To group information effectively, first analyze your content and identify the discrete chunks of information you need to convey. Then tease out which chunks fall within similar conceptual buckets, and determine what intuitive headings or labels you can assign to those buckets. Writers do this when creating major and minor sections within a book or printed document. For online content, grouping is typically done at the level of articles or topics within a web-based system, such as a wiki or knowledge base. The Gmail Help Center, for example, groups topics within categories like “Popular articles,” “Read & organize emails,” and “Send emails.”

It’s possible to go overboard here. Too many headings in a short document or too many topics in a small help system can add unnecessary complexity. I once faced the latter scenario when I reviewed a help system written by one of my colleagues. At least five of the topics were so short that it made more sense to merge them together on a single page rather than forcing the end user to click through to separate pages. I’ve also encountered plenty of documents that contain major section headings with only one or two sentences under them. Sometimes this is fine; you may need to keep those sections for the sake of consistency. But it’s worth assessing whether such sections can simply be merged together (or conversely, whether they should be expanded to include more details).

Because of scenarios like these, Carolyn Rude recommends keeping the number of groupings to around seven, give or take a few—though, as always, striking the right balance ultimately depends on your audience and purpose, as well as the amount of information you have to manage.

Use parallel structure for parallel sections

One of the reasons Julius Caesar’s phrase “I came, I saw, I conquered” still sticks in our memory after thousands of years is the simple fact of parallelism. Each part of the saying follows a distinct, repetitive grammatical form that is easy to recall.

Parallelism works in a similar manner with organization. By using a consistent and repetitive structure across types of information that fit in the same category, you make it easier for your readers to navigate and digest your content.

Imagine you’re writing a troubleshooting guide in which all the topics follow the same basic breakdown: Problem Title, Problem, Cause, Solution, and See Also. In this case, you should make sure that each topic includes those same headings, in the exact same hierarchy and sequence, and using the exact same style and formatting. This kind of parallelism delivers a symmetry that reduces the reader’s cognitive load and clarifies the relationships of each part of your content. Deviations from the pattern not only cause confusion but can undermine the credibility of the content.

Do This

ABC Troubleshooting Guide

  • Introduction
  • Problem 1 Title
    • Problem
    • Cause
    • Solution
    • See Also
  • Problem 2 Title
    • Problem
    • Cause
    • Solution
    • See Also
  • Problem 3 Title
    • ...

Don’t Do This

ABC Troubleshooting Guide

  • Introduction
  • Problem 1 Title
    • Problem
    • Root causes
    • How to Fix it
    • Advanced Tips and tricks
    • Related
  • Problem 2 title
    • Issue
    • Steps to Fix
    • Why did this happen, and how can I avoid it next time?
    • See also
  • Problem 3 title
    • ...

This last principle is probably the easiest to grasp but may be the most difficult to enforce, especially if you are managing contributions from multiple authors. Templates and style guides are useful here because they invite authors to provide standard inputs, but you will still need to watch the content like a hawk to squash the inconsistencies that inevitably emerge.

Conclusion

In one sense, my response to my former boss was accurate. Given the endless variety of writing situations, there is no such thing as a single organization solution. But saying that “advice varies widely depending on the situation” doesn’t tell the whole story. There are flexible patterns and principles that can guide you in finding, customizing, and creating structures for your goals.

The key thing to remember is that structure affects meaning. The sequence of information, the categories you use, the emphasis you imply through your hierarchy—all of these decisions impact how well your audience understands what you write. Your ideal structure should therefore reinforce what you mean to say.

Footnotes

  • 1. The principles in this article are based on the same ones that Carolyn Rude outlines in chapter 17, pp. 289–296, of the third edition of her book. I highly recommend it for anyone who’s interested in gaining an in-depth understanding of editing. The book is now in its fifth edition and includes an additional author, Angela Eaton. See Technical Editing (Fifth Edition) for details. The examples and illustrations used in this article are derived from a variety of other sources, including my own work.

Your Emails (and Recipients) Deserve Better Context

Thu, 06/28/2018 - 9:05am

Email communication is an integral part of the user experience for nearly every web application that requires a login. It’s also one of the first interactions the user has after signing up. Yet too often both the content and context of these emails is treated as an afterthought (at best), with the critical parts that users see first—sender name and email, subject, and preheader—largely overlooked. Your users, and the great application you’ve just launched, deserve better.

A focus on recipient experience

Designing and implementing a great email recipient experience is difficult. And by the time it comes to the all-important context elements (name, subject, and so on), it’s commonly left up to the developer to simply fill something in and move on. That’s a shame, because these elements play an outsized role in the email experience, being not only the first elements seen but also the bits recipients use to identify emails when searching through their archives. Given the frequency with which they touch users, it really is time we started spending a little more effort to fine-tune them.

The great news is that despite the constraints imposed on these elements, they’re relatively easy to improve, and they can have a huge impact on engagement, open rates, and recipient satisfaction. When they all work together, sender name and email, subject, and preheader provide a better experience for your recipients.

So whether you’re a developer stuck fixing such oversights and winging it, or on the design or marketing team responsible for making the decisions, use the following guide to improve your recipient’s experience. And, if possible, bring it up with your whole team so it’s always a specific requirement in the future.

Details that matter

As they say, the devil is in the details, and these details matter. Let’s start with a quick example that highlights a few common mistakes.

In the image below, the sender is unnecessarily repeated within the subject, wasting key initial subject characters, while the subjects themselves are all exactly the same. This makes it difficult to tell one email from the next, and the preview content doesn’t help much either since the only unique information it provides is the date (which is redundant alongside the email’s time stamp). The subject copy could be more concise as well—“Payment Successfully Processed” is helpful, but it’s a bit verbose.

Avoid redundancy and make your sender name, subject, and preheaders work together. Periscope repeats the sender name, and doesn’t provide unique or relevant information in the subject or preheader.

Outside of the sender and the dates on the emails, there’s not much useful information until you open the email itself. Fortunately, none of these things are particularly difficult to fix. Weather Underground provides a great example of carefully crafted emails. The subject conveys the most useful information without even requiring the recipient to open the email. In addition, their strategic use of emojis helps complement that information with a very rich, yet judicious, use of subject-line space.

Weather Underground does a great job with the sender and even front-loads the subject with the most valuable bit of information. The date is included, but it’s at the end of the subject.

Weather Underground also makes use of Gmail Inbox Actions to provide a direct link to the key information online without needing a recipient to open the email to follow a link. Gmail Inbox Actions require some extra work to set up and only work in Gmail, but they can be great if you’re sending high volumes of email.

Both scenarios involve recurring emails with similar content from one to the next, but the difference is stark. With just a little effort and fine-tuning, the resulting emails are much more useful to the recipients. Let’s explore how this is done.

Emphasizing unique content for recurring emails

With the earlier examples, both organizations are sending recurring emails, but by focusing on unique subject lines, Weather Underground’s emails are much more helpful. Recurring emails like invoices may not contain the most glamorous content, but you still have an opportunity to make each one unique and informative.

Instead of a generic “You have a new invoice” notification, you can surface important or unique information like the invoice total, the most expensive products or services, or the due date.

By surfacing the most important or unique information from the content of the email, there’s additional context to help the recipient know whether they need to act or not. It also makes it easier to find a specific invoice when searching through emails in the future.

Clarifying the sender

Who (or what) is sending this email? Is it a person? Is it automated? Do I want to hear from them? Do I trust them? Is this spam? These questions and more automatically run through our heads whenever we see an email, and the sender information provides the first clue when we start processing our inbox. Just as for caller ID on incoming phone calls, recognition and trust both play a role. As Joanna Wiebe said in an interview with Litmus, “If the from name doesn’t sound like it’s from someone you want to hear from, it doesn’t matter what the subject line is.” This can be even more critical on mobile devices where the sender name is the most prominent element.

The first and most important step is to explicitly specify a name. You don’t want the recipient’s email client choosing what to display based on the email address alone. For instance, if you send emails from “alerts@example.com” (with no name specified), some clients will display “alerts” as the name, and others will display “alerts@example.com.” With the latter, it just feels rough around the edges. In either case, the experience is less than ideal for the sender.

Without a name specified, email clients may use the username portion of an email address or truncate longer email addresses, making the name portion incomplete or less helpful to recipients.

The technical implementation may vary depending on your stack, but at the simplest level, correct implementation is all in the string formatting. Let’s look at “Jane Doe <email@example.com>” as an example. “Jane Doe” is the name, and the email is included after the name and surrounded by angle brackets. It’s a small technical detail, but it makes a world of difference to recipients.

But what name should we show? This depends on the type of email, so you’ll want to consider the sender for each email independently. For example, with a receipt or invoice you may want to use “Acme Billing.” But with a comment notification, it may be more informative for recipients if you use the commenter’s name, such as “Jane Doe via AcmeApp.” Depending on the context, you could use “with” or “from” as well, but those have an extra character, so I’ve found “via” to be the shortest and most semantically accurate option.
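
For example, here is a minimal sketch of setting an explicit sender name with Python’s standard email library; the names and addresses are hypothetical.

```python
# A minimal sketch of setting an explicit sender name with Python's
# standard email library; names and addresses are hypothetical.
from email.message import EmailMessage
from email.utils import formataddr

# A receipt or invoice: a product-scoped sender name.
invoice = EmailMessage()
invoice["From"] = formataddr(("Acme Billing", "billing@acme.example"))

# A comment notification: the comment is from Jane, but the email is sent
# on her behalf, so say so with "via".
notification = EmailMessage()
notification["From"] = formataddr(
    ("Jane Doe via AcmeApp", "notifications@acme.example")
)

print(invoice["From"])       # Acme Billing <billing@acme.example>
print(notification["From"])  # Jane Doe via AcmeApp <notifications@acme.example>
```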

Similarly, if your business entity or organization name is different from your product name, you should use the name that will be most familiar to your recipients.

Recipients aren’t always familiar with the names of corporate holding companies, so make sure to use the company or product name that will be most familiar to the recipient. In the above cases, while “Jane Doe” may have made the comment, the email isn’t directly from her, so it’s best to add something like “via Acme Todos” to make it clear that it was sent on Jane’s behalf. In the case of “Support,” the name doesn’t clarify which product the email refers to. Since users could have a variety of emails from “Support” for different products, it fails to provide important context.

Avoiding contact confusion

In the case where you use someone’s name—like with the “Jane Doe via AcmeApp” example above—it’s important to add a reference to the app name. Since the email isn’t actually from Jane, it’s inaccurate to represent that it’s from Jane Doe directly. This can be confusing for users, but it can also create problems with address books. If you use just “Jane Doe,” your sending email address can be accidentally added to the recipient’s address book in association with Jane’s entry. Then, when they go to email Jane later, they may unwittingly send an email to “notifications@acme.com” instead of Jane. That could lead to some painful missed emails and miscommunication. The other reason is that it’s simply helpful for the recipient to know the source of the email. It’s not just from Jane, it’s from Jane via your application.

You’ll also want to put yourself in your recipient’s shoes and carefully consider whether a name is recognizable to your recipient. For example, if your corporate entity name and product name aren’t the same, recipients will be much less likely to recognize the sender if you use the name of your corporate entity. So make sure to use the product name that will be most familiar to the recipient. Similarly, you’ll want to avoid using generic names that could be from any company. For example, use “Acme Billing” instead of just “Billing,” so the recipient can quickly and easily identify your product.

Finally, while names are great, the underlying sending address can be just as important. In many ways, it’s the best attribute for recipients to use when filtering and organizing their inbox, and using unique email addresses or aliases for different categories of emails makes this much easier. There’s a fine line, but the simplest way to do this is to group emails into three categories: billing, support, and activity/actions. You may be able to use more, like notifications, alerts, or legal, but remember that the more you create, the more you’ll have to keep track of.

Also, keep the use of subdomains to a minimum. By consistently only sending transactional email like password resets, receipts, order updates, and other similar emails from your primary domain, users learn to view any emails from other domains as suspicious. It may seem like a minor detail, but these bits of information add up to create important signals for recipients. It is worth noting, however, that you should use a different address, and ideally a separate subdomain, for your bulk marketing emails. This helps Gmail and other inbox providers understand the type of email coming from each source, which in turn helps ensure the domain reputation for your bulk marketing emails—which is traditionally lower—doesn’t affect delivery of your more critical transactional email.

Subject line utility

Now that recipients have clearly identifiable and recognizable sender information, it’s time to think about the subjects of your emails. Since we’ve focused on transactional emails in the examples used so far, we’ll similarly focus on the utility of your subject line content rather than the copywriting. You can always use copywriting to improve the subject, but with transactional emails, utility comes first.

The team at MailChimp has studied data about subject lines extensively, and there are a few key things to know about subjects. First, the presence of even a single word can have a meaningful impact on open rates. A 2015 report by Adestra had similar findings. Words and phrases like “thank you,” “monthly,” and “thanks” see higher engagement than words like “subscription,” “industry,” and “report,” though different words will have different impacts depending on your industry, so you’ll still need to test and monitor the results. Personalization can also have an impact, but remember, personalization isn’t just about using a person’s name. It can be information like location, previous purchases, or other personal data. Just remember that it’s important to be tasteful, judicious, and relevant.

The next major point from MailChimp is that subject line length doesn’t matter. Or, rather, it doesn’t matter directly. After studying 6 billion emails, they found “little or no correlation between performance and subject length.” That said, when line length is considered as one aspect of your overall subject content, it can be used to help an email stand out. Clarity and utility are more important than brevity, but when used as a component to support clarity and utility, brevity can help.

One final point from the Adestra report is that open rates aren’t everything. Regardless of whether someone opens an email, the words and content of your subject line leave an impression. So even if a certain change doesn’t affect your open rates, it can still have a far-reaching impact.

Clearing out redundancy

The most common mistake with subjects is including redundant information. If you’ve carefully chosen the sender name and email address, there’s no need to repeat the sender name in the subject, and the characters could be better applied to telling the recipient additional useful information. Dates are a bit of a gray area, but in many cases, the email’s time stamp can suffice for handling any time-based information. On the other hand, when the key dates don’t correlate to when the email was sent, it can be helpful to include the relevant date information in the subject.

In these examples, nothing beyond the sender conveys new or useful information, and some form of the company name is repeated several times. Even the preheader is neglected, leaving the email client to use alternate text from the logo.

With the subject of your application emails, you’ll also want to front-load the most important content to prevent it from being cut off. For instance, instead of “Your Invoice for May 2018,” you could rewrite that as “May 2018 Invoice.” Since your sender is likely “Acme Billing,” the recipient already knows it’s about billing, so the month and year is the most important part of the subject. However, “May 2018 Invoice” is a bit terse, so you may want to add something at the end to make it more friendly.

Next, in situations where time stamps are relevant, avoid relying on relative dates or times. Phrases like “yesterday,” “last week,” or “two hours ago” don’t age well with email since you never know when someone will receive or read it. Similarly, when someone goes to search their email archives, relative dates aren’t helpful. If you must use relative dates, look for opportunities to add explicit dates or time stamps to add clarity.

With regularly occurring emails like reports or invoices, strive to give each message a unique subject. If every report has the subject “Your Monthly Status Report,” the messages run together in the inbox and are more difficult to search later on. The same goes for invoices and receipts: invoice numbers and order numbers are technically unique, but they aren’t particularly helpful. Include useful content that identifies each email individually. Whether that’s the date, the total value, the most expensive items, or all three, it’s easier on recipients when they can identify the contents of an email without having to open it. And while open rates are central to measuring marketing emails, transactional emails are all about usefulness, so open rates alone are a weaker measure of their success.
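
As a small illustration, a recurring invoice subject might be assembled from the invoice data itself so that every message is unique and front-loaded with the most useful details; the dictionary and field names below are hypothetical.

```python
# A small illustration of building a unique, front-loaded subject from
# the invoice data itself; the dictionary and field names are hypothetical.
invoice = {
    "period": "May 2018",
    "total": 1234.56,
    "top_item": "Hosting (Pro plan)",
}

subject = (
    f"{invoice['period']} invoice: "
    f"${invoice['total']:,.2f} ({invoice['top_item']})"
)
print(subject)  # May 2018 invoice: $1,234.56 (Hosting (Pro plan))
```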

There’s a case to be made that in some contexts a great transactional email doesn’t need to be opened at all for it to be useful. The earlier Weather Underground example does an excellent job communicating the key information without requiring recipients to open it. And while the subject is the best place for key content, some useful content can also be displayed using a preheader.

Making the most of preheaders

If you’re not familiar with the preheader, you can think of it as a convenient name for the content at the beginning of an email. Campaign Monitor has a great write-up with in-depth advice on making the most of your preheaders. It’s simply a way of acknowledging and explicitly suggesting the text that email clients should show in the preview pane for an email. While there’s no formal specification for preheaders, and different email clients will handle them differently, they’re still widely displayed.

Most importantly, well-written and useful preheaders of 40–50 characters have been shown to increase overall engagement, particularly when they deliver a concise call to action. A study by Yes Lifecycle Marketing (sign-up required) points out that preheader content is important, especially on mobile devices, where subjects are truncated and the preheader can act as a sort of extended subject.

If the leading content in your email is a logo or other image, email clients will often use the alternate text for the image as the preview text. Since “Acme Logo” isn’t very helpful, it’s best to include a short summary of text at the beginning of your email. Sometimes this short summary text can interfere with the design of your email, so it’s not uncommon for the design to accommodate some visually muted—but still readable—text at the beginning. Or, as long as you’re judicious, in most cases you can safely hide preheader text entirely by using the display: none CSS declaration. Abusing this could get you caught in spam filters, but for the most part, inbox providers seem to focus on the content that is hidden rather than the fact that it’s hidden.
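
As a rough sketch of that approach, the HTML body of such an email might lead with a short, hidden preheader before the logo, assembled here with Python’s standard email library; the copy, addresses, and inline styles are illustrative only.

```python
# A rough sketch of an HTML email whose first content is a short, hidden
# preheader, assembled with Python's standard email library. The copy,
# addresses, and inline styles are illustrative only.
from email.message import EmailMessage

preheader = "Your May 2018 invoice total is $1,234.56, due June 15."

html_body = f"""\
<html>
  <body>
    <!-- Hidden preheader: shown in inbox previews, not in the opened email. -->
    <div style="display:none; max-height:0; overflow:hidden;">
      {preheader}
    </div>
    <img src="https://acme.example/logo.png" alt="Acme Billing">
    <p>Hi Jane, your May 2018 invoice is ready.</p>
  </body>
</html>
"""

msg = EmailMessage()
msg["Subject"] = "May 2018 Invoice"
msg["From"] = "Acme Billing <billing@acme.example>"
msg["To"] = "jane@example.com"
msg.set_content(preheader)               # plain-text alternative
msg.add_alternative(html_body, subtype="html")
```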

If you’re not explicitly specifying your preheader text, there’s a good chance email clients will use content that at best is less than useful and at worst makes a bad impression.

If your email can be designed and written such that the first content encountered is the useful content for previews, then you’re all set. In the case of receipts, invoices, or activity summaries, that’s not always easy. In those cases, a short text-based summary of the content makes a good preheader.

Context element interplay

The rules outlined above are great guidelines, but remember that rules are there to be broken (well, sometimes …). As long as you understand the big picture, sender, subject, and preheader can still work together effectively even if some of those rules are bent. A bit. For example, if you ensure that you have relevant and unique content in your preheader for the preview, you may be able to get away with using the same subject for each recurring email. Alternatively, there may be cases where you need to repeat the sender name in the subject.

The key is that when you’re crafting these elements, make sure you’re looking at how they work together. Sometimes a subject can be shortened by moving some content into the preheader. Alternatively, you may be able to use a more specific sender to reduce the need for a word or two in the subject. The application of these guidelines isn’t black and white. Simply being aware of the recipient’s experience is the most important factor when crafting the elements they’ll see in preview panes.

Finally, a word on monitoring and testing

Simple changes to the sender, subject, and preheader can significantly impact open rates and recipient experience. One critical thing to remember, however, is that even when an improvement seems like a guaranteed winner, monitoring and testing things like open rates and click rates is critical to validate any changes made. And since these elements can either play against each other or work together, it’s best to test combinations and view all three elements holistically.

The value of getting this right really is in the details, and despite their tendency to be overlooked, taking the time to craft helpful and useful sender names and addresses, subject lines, and preheaders can drastically improve the experience for your email recipients. It’s a small investment that’s definitely worth your time.


Discovery on a Budget: Part III

Thu, 06/21/2018 - 8:59am

Sometimes we have the luxury of large budgets and deluxe research facilities, and sometimes we’ve got nothing but a research question and the determination to answer it. Throughout the “Discovery on a Budget” series we have discussed strategies for conducting discovery research with very few resources but lots of creativity. In part 1 we discussed the importance of a clearly defined problem hypothesis and started our affordable research with user interviews. Then, in part 2, we discussed competing hypotheses and “fake-door” A/B testing when you have little to no traffic. Today we’ll conclude the series by considering the pitfalls of the most tempting and seemingly affordable research method of all: surveys. We will also answer the question “when are you done with research and ready to build something?”

A quick recap on Candor Network

Throughout this series I’ve used a budget-conscious, and fictitious, startup called Candor Network as my example. Like most startups, Candor Network started simply as an idea:

I bet end-users would be willing to pay directly for a really good social networking tool. But there are lots of big unknowns behind that idea. What exactly would “really good” mean? What are the critical features? And what would be the central motivation for users to try yet another social networking tool? 

To kick off my discovery research, I created a hypothesis based on my own personal experience: that a better social network tool would be one designed with mental health in mind. But after conducting a series of interviews, I realized that people might be more interested in a social network that focused on data privacy as opposed to mental health. I captured this insight in a second, competing hypothesis. Then I launched two corresponding “fake door” landing pages for Candor Network so I could A/B test both ideas.

For the past couple of months I’ve run an A/B test between the two landing pages where half the traffic goes to version A and half to version B. In both versions there is a short, two-question survey. To start our discussion today, we will take a more in-depth look at this seemingly simple survey, and analyze the results of the A/B test.

Surveys: Proceed with caution

Surveys are probably the most used, but least useful, research tool. It is ever so tempting to say, “let’s run a quick survey” when you find yourself wondering about customer desires or user behavior. Modern web-based tools have made surveys incredibly quick, cheap, and simple to run. But as anyone who has ever tried running a “quick survey” can attest, they rarely, if ever, provide the insight you are looking for.

In the words of Erika Hall, surveys are “too easy.” They are too easy to create, too easy to disseminate, and too easy to tally. This inherent ease masks the survey’s biggest flaw as a research method: it is far, far too easy to create biased, useless survey questions. And when you run a survey littered with biased, useless questions, you either (1) realize that your results are not reliable and start all over again, or (2) proceed with the analysis and make decisions based on biased results. If you aren’t careful, a survey can be a complete waste of time, or worse, lead you in the wrong direction entirely.

However, sometimes a survey is the only method at your immediate disposal. You might be targeting a user group that is difficult to reach through other convenience- or “guerilla”-style means (think of products that revolve around taboo or sensitive topics—it’s awfully hard to spring those conversations on random people you meet in a coffee shop!). Or you might work for a client that is reluctant to help locate research participants in any way beyond sending an email blast with a survey link. Whatever the case may be, there are times when a survey is the only step forward you can take. If you find yourself in that position, keep the following tips in mind.

Tip 1: Try to stick to questions about facts, not opinions

If you were building a website for ordering dog food and supplies, a question like “how many dogs do you own?” can provide key demographic information not available through standard analytics. It’s the sort of question that works great in a short survey. But if you need to ask “why did you decide to adopt a dog in the first place?” then you’re much better off with a user interview.

If you try asking any kind of “why” question in a survey, you will usually end up with a lot of “I don’t know” and otherwise blank responses. This is because people are, in general, not willing to write an essay on why they’ve made a particular choice (such as choosing to adopt a dog) when they’re in the middle of doing something (like ordering pet food). However, when people schedule time for a phone call, they are more than willing to talk about the “whys” behind their decisions. In short, people like to talk about their opinions, but are generally too lazy or busy to write about their opinions. Save the why questions for later (and see Tip 5).

Tip 2: Avoid asking about the future

People live in the present, and only dream about the future. There are a lot of things outside of our control that affect what we will buy, eat, wear, and do in the future. Also, sometimes the future selves we imagine are more aspirational than factual. For example, if you were to ask a random group of people how many times they plan to go to the gym next month, you might be (not so) surprised to see that their prediction is significantly higher than the actual number. It is much better to ask “how many times did you go to the gym this week?” as an indicator of general gym attendance than to ask about any future plans.

I asked a potentially problematic, future-looking question in the Candor Network landing page survey:

How much would you be willing to pay, per year, for Candor Network?

  • Would not pay anything
  • $1
  • $5
  • $10
  • $15
  • $20
  • $25
  • $30
  • Would pay more

In this question, I’m asking participants to think about how much money they would like to spend in the future on a product that doesn’t exist yet. This question is problematic for a number of reasons, but the main issue is that people, in general, don’t know how they really feel about pricing until the exact moment they are poised to make a purchase. Relying on this question to, say, develop my income projections for an investor pitch would be unwise to say the least. (I’ll discuss what I actually plan to do with the answers to this question in the next tip.)

Tip 3: Know how you are going to analyze responses before you launch the survey

A lot of times, people will create and send out a survey without thinking through what they are going to do with the results once they are in hand. Depending on the length and type of survey, the analysis could take a significant amount of time. Also, if you were hoping to answer some specific questions with the survey data, you’ll want to make sure you’ve thought through how you’ll arrive at those answers. I recommend that while you are drafting survey questions, you also simultaneously draft an analysis plan.

In your analysis plan, think about what you are ultimately trying to learn from each survey question. How will you know when you’ve arrived at the answer? If you are doing an A/B test like I am, what statistical analysis should you run to see if there is a significant difference between the versions? You should also think about what the numbers will look like and what kinds of graphs or tables you will need to build. Ultimately, you should try to visualize what the data will look like before you gather it, and plan accordingly.

For example, when I created the two survey questions on the Candor Network landing pages, I created a short analysis plan for each. Here is what those plans looked like:

Analysis plan for question 1: “How much would you be willing to pay per year for Candor Network?”

Each response will go into one of two buckets:

  • Bucket 1: said they would not pay any money;
  • and Bucket 2: said they might pay some money.

Everyone who answered “Would not pay anything” goes in Bucket 1. Everyone else goes in Bucket 2. I will interpret every response that falls into Bucket 2 as an indicator of general interest (and I’m not going to put any value on the specific answer selected). To see whether any difference in response between landing page A and B is statistically significant (i.e., attributable to more than just chance), I will use a chi-square test. (Side note: There are a number of different statistical tests we could use in this scenario, but I like chi-square because of its simplicity. It is a test that’s easy for non-statisticians to run and understand, and it errs on the conservative side.)

Analysis plan for question 2: “Would you like to be a beta tester or participate in future research?”

The question only has two possible responses: “yes” and “no.” I will interpret every “yes” response as an indicator of general interest in the idea. Again, a chi-square test will show if there is a significant difference between the two landing pages. 

Tip 4: Never rely on a survey by itself to make important decisions

Surveys are hard to get right, and even when they are well made, the results are often approximations of what you really want to measure. However, if you pair a survey with a series of user interviews or contextual inquiries, you will have a richer, more thorough set of data to analyze. In the social sciences, this is called triangulation: studying the same phenomenon with multiple methods gives you a more complete picture. This leads me to my final tip …

Tip 5: End every survey with an opportunity to participate in future research

There have been many times in my career when I have launched surveys with only one objective in mind: to gather the contact information of potential study participants. In cases like these, the survey questions themselves are not entirely superfluous, but they are certainly secondary to the main research objective. Shortly after the survey results have been collected, I will select and email a few respondents, inviting them to participate in a user interview or usability study. If I planned on continuing Candor Network, this is absolutely what I would do.

Finally, the results

According to Google Optimize, there were a total of 402 sessions in my experiment. Of those sessions, 222 saw version A and 180 saw version B. Within the experiment, I tracked how often the “submit” button on the survey was clicked, and Google Optimize tells me “no clear leader was found” on that measure of engagement. Roughly an equal number of people from each condition submitted the survey.

Here is a breakdown of the number of sessions and survey responses each condition received:

  • Sessions: 222 for version A (better mental health), 180 for version B (privacy and data security), 402 total.
  • Survey responses: 76 for version A, 68 for version B, 144 total.

When we look at the actual answers to the survey questions, we start to get some more interesting results.

  • Version A: 25 in Bucket 1 (would not pay any money), 51 in Bucket 2 (might pay some money).
  • Version B: 14 in Bucket 1, 54 in Bucket 2.

Breakdown of question 1, “How much would you be willing to pay per year for Candor Network?”

Plugging these figures into my favorite chi-square calculator, I get the following values: chi-square = 2.7523, p = 0.097113. In general, bigger chi-square values indicate greater differences between the groups. And the p-value is less than 0.1, which suggests that the result is marginally significant (i.e., the result is probably not due to random chance). This gives me a modest indicator that respondents in group B, who saw the “data secure” version of the landing page, are more likely to fall into the “might pay some money” bucket.

And when we look at the breakdown and chi-square calculation of question two, we see similar results.

  • Version A: 24 no, 52 yes.
  • Version B: 13 no, 55 yes.

Breakdown of question 2, “Would you like to be a beta tester or participate in future research?”

The chi-square = 2.9189, and p = 0.087545. Again, I have a modest indicator that respondents in group B are more likely to say yes to participating in future research. (If you’d like to learn more about how to run and interpret chi-square tests, the Interaction Design department at the University of California, San Diego has provided a great video tutorial.)
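If you’d like to check these numbers yourself, the short sketch below computes the same Pearson chi-square statistic (without a continuity correction) from the counts above, in plain JavaScript. The function name is mine, and the critical values in the comments are the standard ones for one degree of freedom.

    // Pearson chi-square for a 2x2 table of observed counts (no Yates correction).
    // chiSquare2x2 is a made-up helper name for this example.
    function chiSquare2x2([[a, b], [c, d]]) {
      const n = a + b + c + d;
      const rowTotals = [a + b, c + d];
      const colTotals = [a + c, b + d];
      let chi2 = 0;
      [[a, b], [c, d]].forEach((row, i) => {
        row.forEach((observed, j) => {
          const expected = (rowTotals[i] * colTotals[j]) / n;
          chi2 += (observed - expected) ** 2 / expected;
        });
      });
      return chi2;
    }

    // Question 1: [would not pay, might pay] for versions A and B.
    console.log(chiSquare2x2([[25, 51], [14, 54]]).toFixed(4)); // "2.7523"
    // Question 2: [no, yes] for versions A and B.
    console.log(chiSquare2x2([[24, 52], [13, 55]]).toFixed(4)); // "2.9189"

    // With one degree of freedom, a chi-square above 2.706 corresponds to
    // p < 0.10, and it would need to exceed 3.841 to reach p < 0.05.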

How do we know when it’s time to move on?

I wish I could provide you with a formula for calculating the exact moment when the research is done and it’s time to move on to prototyping, but I’m afraid no such formula exists. There is no definitive way to determine how much research is enough. Every round of research teaches you something new, but you are always left with more questions. As Albert Einstein said, “the more I learn, the more I realize how much I don’t know.”

However, with experience you come to recognize certain hallmarks that indicate it’s time to move on. Erika Hall, in her book Just Enough Research, described it as feeling a “satisfying click.” She says, “[O]ne way to know you’ve done enough research is to listen for the satisfying click. That’s the sound of the pieces falling into place when you have a clear idea of the problem you need to solve and enough information to start working on a solution.” (Just Enough Research, p. 36.)

When it comes to building a product on a budget, you may also want to consider that research is relatively cheap compared to the cost of design and development. The rule I tend to follow is this: continue conducting discovery research until the questions you really want answered can only be answered by putting something in front of users. That is, wait to build something until you absolutely have to. Learn as much as you can about your target market and user base until the only way forward is to put some sketches on paper.

With Candor Network, I’m not quite there yet. There is still plenty of runway to cover in the research cycle. Now that I know that data privacy is a more motivating reason to consider paying for a social networking tool, I need to work out what other features will be essential. In the next round of research, I could do think-aloud studies and ask participants to give me a tour of their Facebook and other social media pages. Or I could continue with more interviews, but recruit from a different source and reach a broader demographic of participants. Regardless of the exact path I choose to take from here, the key is to focus on what the requirements would be for the ultra-private, data-secure social network that users would value.

A few parting words

Discovery research helps us learn more about the users we want to help and the problems they need a solution for. It doesn’t have to be expensive either, and it definitely isn’t something that should be omitted from the development cycle. By starting with a problem hypothesis and conducting multiple rounds of research, we can ultimately save time and money. We can move from gut instincts and personal experiences to a tested hypothesis. And when it comes time to launch, we’ll know it’s from a solid foundation of research-backed understanding.


The Problem with Patterns

Thu, 06/14/2018 - 9:01am

It started off as an honest problem with a brilliant solution. As the ways we use the web continue to grow and evolve, we, as its well-intentioned makers and stewards, needed something better than making simple collections of pages over and over again.

Design patterns, component libraries, or even style guides have become the norm for organizations big and small. Having reusable chunks of UI aids consistency and usability for users, and it lends familiarity and efficiency to designers. This in turn frees up designers’ time to focus on bigger problems, like solving for their users’ needs. In theory.

The use of design patterns, regardless of their scope or complexity, should never stifle creativity or hold back design progress. In order to achieve what they promise, they should be adaptable, flexible, and scalable. A good design pattern is undeterred by context, and most importantly, is unobtrusive. Again, in theory.

Before getting further into the weeds, let’s define what is meant by the term pattern here. You’re probably wondering what the difference is between all the different combinations of the same handful of words being used in the web community.

Initially, design patterns were small pieces of a user interface, like buttons and error messages.

Buttons and links from Co-op

Design patterns go beyond the scope and function of a style guide, which deals more with documenting how something should look, feel, or work. Type scales, design principles, and writing style are usually found within the bounds of a style guide.

More recently, the scope of design patterns has expanded as businesses and organizations look to work more efficiently and consistently, especially if it involves a group or family of products and services. Collections of design patterns are then commonly used to create reusable components of a larger scope, such as account sign-up, purchase checkout, or search. This is most often known as the component library.

Tabs from BBC Global Experience Language (GEL)

The final evolution of all these is known as a design system (or a design language). This encompasses the comprehensive set of design standards, documentation, and principles. It includes the design patterns and components to achieve those standards and adhere to those principles. More often than not, a design system is still used day-to-day by designers for its design patterns or components.

The service design pattern

A significant reason designing for the web has changed so irrevocably is that more and more products and services now live on it. This is why service design is becoming much more widely valued and sought after in the industry.

Service patterns—unlike all of the above patterns, which focus on relatively small and compartmentalized parts of a UI—go above and beyond. They aim to incorporate an entire task or chunk of a user’s journey. For example, a credit card application can be represented by some design patterns or components, but the process of submitting an application to obtain a credit card is a service pattern.

Pattern for GOV.UK start pages

If thinking in terms of an analogy like atomic design, service patterns don’t fit any one category (atoms, molecules, organisms, etc.). For example, a design pattern for a form can be described as a molecule. It does one thing and does it well. This is the beauty of a good design pattern—it can be taken without context and used effectively across a variety of situations.

Service design patterns attempt to combine the goals of both design patterns and components by creating a reusable task. In theory.

So, what’s the problem?

The design process is undervalued

Most obvious misuses of patterns are easy to avoid with good documentation, but do patterns actually result in better-designed products and services?

Having a library of design components can sometimes give the impression that all the design work has been completed. Designers or developers can revert to using a library as clip art to create “off-the-shelf” solutions. Projects move quickly into development.

Although patterns do help teams hesitate less and build things in shorter amounts of time, it is how and why a group of patterns and components are stitched together that results in great design.

For example, when designing digital forms, using button and input fields patterns will improve familiarity and consistency, without a doubt. However, there is no magic formula for the order in which questions on a form should be presented or for how to word them. To best solve for a user’s needs, an understanding of their goals and constraints is essential.

Patterns can even cause harm without considering a user’s context and the bearing it may have on their decision-making process.

For example, if a user will likely be filling out a form under stress (this can be anything from using a weak connection, to holding a mobile phone with one hand, to being in a busy airport), an interface should prioritize minimizing cognitive load over the number of steps or clicks needed to complete it. This decision architecture cannot be predetermined using patterns.

Break up tasks into multiple steps to reduce cognitive load

Patterns don’t start with user needs

Components and service patterns have a tendency to serve the needs of the business or organization, not the user.

  • Pattern: Apply for something. Service: Get a fishing license. User need: Enjoy the outdoors. Organization need: Keep rivers clean; generate income.
  • Pattern: Apply for something. Service: Apply for a work visa. User need: Work in a different country. Organization need: Check eligibility.
  • Pattern: Create an account. Service: Online bank account. User need: Save money. Organization need: Security; fraud prevention.
  • Pattern: Create an account. Service: Join a gym. User need: Lose weight. Organization need: Capture customer information.
  • Pattern: Register. Service: Register to vote. User need: Make my voice heard. Organization need: Check eligibility.
  • Pattern: Register. Service: Online shopping. User need: Find my order. Organization need: Security; marketing.

If you are simply designing a way to apply for a work visa, having form field and button patterns is very useful. But any meaningful testing sessions with users will speak to how confident they felt in obtaining the necessary documents to work abroad, not whether they could simply locate a “submit” button.

User needs are conflated with one another

Patterns are also sometimes a result of grouping together user needs, essentially creating a set of fictional users that in reality do not exist. Users usually have one goal that they want to achieve efficiently and effectively. Assembling a group of user needs can result in a complex system trying to be everything to everyone.

For example, when creating a design pattern for registering users to a service across a large organization, the challenge can very quickly move from:

“How can I check the progress of my application?”
“Can I update or change my delivery address?”
“Can I quickly repeat or renew an application?”

to:

“How can we get all the details we need from users to allow them to register for an account?”

The individual user needs are forgotten and replaced with a combined assumed need to “register for an account” in order to “view a dashboard.” In this case, the original problem has even been adapted to suit the design pattern instead of the other way around. 

Outcomes are valued over context

Even if they claim to address user context, the success of a service pattern might still be measured through an end result, output, or outcome. Situations, reactions, and emotions are still overlooked.

Take mass transit, for example. When the desired outcome is to get from Point A to Point B, we may find that a large number of users need to get there quickly, especially if they’re headed home from work. But we cannot infer from this need that the most important goal of transportation is speed. Someone traveling alone at night or in unfamiliar surroundings may place greater importance on safety or need more guidance and reassurance from the service.

Sometimes, service patterns cannot solve complex human problems like these. More often than not, an over-reliance on outcome-focused service patterns just defeats the purpose of building any empathy during the design process.

For example, date pickers tend to follow a similar pattern across multiple sectors, including transport, leisure, and healthcare. Widely-used patterns like this are intuitive and familiar to most users.

This does not mean that the same date picker pattern can be used seamlessly in any service. If a user is trying to book an emergency doctor appointment, that same pattern is suddenly much less effective. Being presented with a full calendar of options is no longer helpful because choice is no longer the most valuable aspect of the service. The user needs to quickly see the first available appointment with minimal choices or distractions.

Digital by default

Because patterns are built for reuse, they sometimes encourage us to use them without much question, particularly assuming that digital technology is the solution.

A service encompasses everything a user needs to complete their goal. By understanding the user’s entire journey, we start to uncover their motivations and can begin to think about new, potentially non-digital ways to solve their problems.

For example, the Canadian Immigration Service receives more than 5.2 million inquiries a year by email or phone from people looking for information about applications.

One of the most common reasons behind the complaints was the time it took to complete an application over the phone. Instead of just taking this data and speeding up the process with a digital form, the product team focused on understanding the service’s users and their reasons behind their reactions and behaviors.

For example, calls received were often bad-tempered, despite callers being greeted by a recorded message informing them of the length of time it could take to process an application, and advising them against verbally abusing the staff. 

The team found that users were actually more concerned with the lack of information than they were with the length of time it took to process their application. They felt confused, lost, and clueless about the immigration process. They were worried they had missed an email or letter in the mail asking for missing documentation.

In response to this, the team decided to change the call center’s greeting, setting the tone to a more positive and supportive one. Call staff also received additional training and began responding to questions even if the application had not reached its standard processing time.

The team made sure to not define the effectiveness of the design by how short new calls were. Although the handling time for each call went up by 16 percent, follow-up calls dropped by a whopping 30 percent in fewer than eight weeks, freeing up immigration agents’ time to provide better quality information to callers.

Alternatives to patterns

As the needs of every user are unique, every service is also unique. To design a successful service you need to have an in-depth understanding of its users, their motivations, their goals, and their situations. While there are numerous methodologies to achieve this, a few key ones follow:

Framing the problem

Use research or discovery phases to unearth the real issues with the existing service or process. Contextual research sessions can help create a deeper understanding of users, which helps to ensure that the root cause of a problem is being addressed, not just the symptoms.

Journey maps

Journey maps are used to create a visual representation of a service through the eyes of the user. Each step a user takes is recorded against a timeline along with a series of details including:

  • how the user interacts with the service;
  • how the service interacts with the user;
  • the medium of communication;
  • the user’s emotions;
  • and service pain points.

Service teams, not product teams

Setting up specialist pattern or product teams creates a disconnect with users. There may be common parts to user journeys, such as sign-up or on-boarding, but having specialist design teams will ultimately not help an organization meet user (and therefore business) needs. Teams should consider taking an end-to-end, service approach.

  • Yes: Mortgage service. No: Registration; Application.
  • Yes: Passports service. No: Registration; Application.
  • Yes: Tax-return service. No: Registration; Submit Information.

Assign design teams to a full service rather than discrete parts of it

Be open and inclusive

Anyone on a wider team should be able to contribute to or suggest improvements to a design system or component library. If applicable, people should also be able to prune away patterns that are unnecessary or ineffective. This enables patterns to grow and develop in the most fruitful way.

Open-sourcing pattern libraries, like the ones managed by a11yproject.com or WordPress.org, is a good way to keep structure and process in place while still allowing people to contribute. The transparent and direct review process characteristic of the open-source spirit can also help reduce friction.

Across larger organizations, this can be harder to manage, and the time commitment can contradict the intended benefits. Still, some libraries, such as the Carbon Design System, exist and are open to suggestions and feedback.

In summary

A design pattern library can range from being thorough, trying to cover all the bases, to politely broad, so as to not step on the toes of a design team. But patterns should never sacrifice user context for efficiency and consistency. They should reinforce the importance of the design process while helping an organization think more broadly about its users’ needs and its own goals. Real-world problems rarely are solved with out-of-the-box solutions. Even in service design.


Orchestrating Experiences

Thu, 06/07/2018 - 9:04am

A note from the editors: It’s our pleasure to share this excerpt from Chapter 2 (“Pinning Down Touchpoints”) of Orchestrating Experiences: Collaborative Design for Complexity by Chris Risdon and Patrick Quattlebaum, available now from Rosenfeld Media.

If you embrace the recommended collaborative approaches in your sense-making activities, you and your colleagues should build good momentum toward creating better and valuable end-to-end experiences. In fact, the urge to jump into solution mode will be tempting. Take a deep breath: you have a little more work to do. To ensure that your new insights translate into the right actions, you must collectively define what is good and hold one another accountable for aligning with it.

Good, in this context, means the ideas and solutions that you commit to reflect your customers’ needs and context while achieving organizational objectives. It also means that each touchpoint harmonizes with others as part of an orchestrated system. Defining good, in this way, provides common constraints to reduce arbitrary decisions and nudge everyone in the same direction.

How do you align an organization to work collectively toward the same good? Start with some common guidelines called experience principles.

A Common DNA

Experience principles are a set of guidelines that an organization commits to and follows from strategy through delivery to produce mutually beneficial and differentiated customer experiences. Experience principles represent the alignment of brand aspirations and customer needs, and they are derived from understanding your customers. In action, they help teams own their part (e.g., a product, touchpoint, or channel) while supporting consistency and continuity in the end-to-end experience. Figure 6.1 presents an example of a set of experience principles.

Figure 6.1: Example set of experience principles. Courtesy of Adaptive Path

Experience principles are not detailed standards that everyone must obey to the letter. Standards tend to produce a rigid system, which curbs innovation and creativity. In contrast, experience principles inform the many decisions required to define what experiences your product or service should create and how to design for individual, yet connected, moments. They communicate in a few memorable phrases the organizational wisdom for how to meet customers’ needs consistently and effectively. For example, look at the following:   

  • Paint me a picture.
  • Have my back.
  • Set my expectations.
  • Be one step ahead of me.
  • Respect my time.

Experience Principles vs Design Principles

Orchestrating experiences is a team sport. Many roles contribute to defining, designing, and delivering products and services that result in customer experiences. For this reason, the label experience—rather than design—better reflects the value of principles that inform and guide the organization. Experience principles are outcome oriented; design principles are process oriented. Everyone should follow and buy into them, not just designers.

Patrick Quattlebaum

Experience principles are grounded in customer needs, and they keep collaborators focused on the why, what, and how of engaging people through products and services. They keep critical insights and intentions top of mind, such as the following:

  • Mental Models: How part of an experience can help people have a better understanding, or how it should conform to their mental model.
  • Emotions: How part of an experience should support the customer emotionally, or directly address their motivations.
  • Behaviors: How part of an experience should enable someone to do something they set out to do better.
  • Target: The characteristics to which an experience should adhere.
  • Impact: The outcomes and qualities an experience should engender in the user or customer.

Focusing on Needs to Differentiate
Many universal or heuristic principles exist to guide design work. There are visual design principles, interaction design principles, user experience principles, and any number of domain principles that can help define the best practices you apply in your design process. These are lessons learned over time that have a broader application and can be relied on consistently to inform your work across even disparate projects.

It’s important to reinforce that experience principles specific to your customers’ needs provide contextual guidelines for strategy and design decisions. They help everyone focus on what’s appropriate to specific customers with a unique set of needs, and your product or service can differentiate itself by staying true to these principles. Experience principles shouldn’t compete with best practices or universal principles, but they should be honored as critical inputs for ensuring that your organization’s specific value propositions are met.

Chris Risdon

Playing Together

Earlier, we compared channels and touchpoints to instruments and notes played by an orchestra, but in the case of experience principles, it’s more like jazz. While each member of a jazz ensemble is given plenty of room to improvise, all players understand the common context in which they are performing and carefully listen and respond to one another (see Figure 6.2). They know the standards of the genre backward and forward, and this knowledge allows them to be creative individually while collectively playing the same tune.

Figure 6.2: Jazz ensembles depend upon a common foundation to inspire improvisation while working together to form a holistic work of art. Photo by Roland Godefroy, License

Experience principles provide structure and guidelines that connect collaborators while giving them room to be innovative. As with a time signature, they ensure alignment. Similar to a melody, they provide a foundation that encourages supportive harmony. Like musical style, experience principles provide boundaries for what fits and what doesn’t.

Experience principles challenge a common issue in organizations: isolated soloists playing their own tune to the detriment of the whole ensemble. While still leaving plenty of room for individual improvisation, they ask a bunch of solo acts to be part of the band. This structure provides a foundation for continuity in the resulting customer journey, but doesn’t overengineer consistency and predictability, which might prevent delight and differentiation. Stressing this balance of designing the whole while distributing effort and ownership is a critical stance to take to engender cross-functional buy-in.

To get broad acceptance of your experience principles, you must help your colleagues and your leadership see their value. This typically requires crafting specific value propositions and education materials for different stakeholders to gain broad support and adoption. Piloting your experience principles on a project can also help others understand their tactical use. When approaching each stakeholder, consider these common values:

  • Defining good: While different channels and media have their specific best practices, experience principles provide a common set of criteria that can be applied across an entire end-to-end experience.
  • Decision-making filter: Throughout the process of determining what to do strategically and how to do it tactically, experience principles ensure that customers’ needs and desires are represented in the decision-making process.
  • Boundary constraints: Because these constraints represent the alignment of brand aspiration and customer desire, experience principles can filter out ideas or solutions that don’t reinforce this alignment.
  • Efficiency: Used consistently, experience principles reduce ambiguity and the resultant churn when determining what concepts should move forward and how to design them well.
  • Creativity inspiration: Experience principles are very effective in sparking new ideas with greater confidence that they will map back to customer needs. (See Chapter 8, “Generating and Evaluating Ideas.”)
  • Quality control: Through the execution lifecycle, experience principles can be used to critique touchpoint designs (i.e., the parts) to ensure that they align to the greater experience (i.e., the whole).

Pitching and educating aside, your best bet for creating good experience principles that get adopted is to avoid creating them in a black box. You don’t want to spring your experience principles on your colleagues as if they were commandments from above to follow blindly. Instead, work together to craft a set of principles that everyone can follow energetically.

Identifying Draft Principles

Your research into the lives and journeys of customers will produce a large number of insights. These insights are reflective. They capture people’s current experiences—such as their met and unmet needs, how they frame the world, and their desired outcomes. To craft useful and appropriate experience principles, you must turn these insights inside out to project what future experiences should be.

When You Can’t Do Research (Yet)
If you lack strong customer insights (and the support or time to gather them), it’s still valuable to craft experience principles with your colleagues. The process of creating them provides insight into the various criteria that people are using to make decisions. It also sheds light on what your collaborators believe are the most important customer needs to meet. While not as sound as research-driven principles, your team can align around a set of guidelines to inform and critique your collective work—and then build the case for gathering insights for creating better experience principles.

Patrick Quattlebaum

From the Bottom Up

The leap from insights to experience principles will take several iterations. While you may be able to rattle off a few candidates based on your research, it’s well worth the time to follow a more rigorous approach in which you work from the bottom (individual insights) to the top (a handful of well-crafted principles). Here’s how to get started:

  • Reassemble your facilitators and experience mappers, as they are closest to what you learned in your research.
  • Go back to the key insights that emerged from your discovery and research. These likely have been packaged in maps, models, research reports, or other artifacts. You can also go back to your raw data if needed.
  • Write each key insight on a sticky note. These will be used to spark a first pass at potential principles.
  • For each insight, have everyone take a pass individually at articulating a principle derived from just that insight. You can use sticky notes again or a quarter sheet of 8.5” x 11” (A6) template to give people a little more structure (see Figure 6.3).
Figure 6.3: A simple template to generate insight-level principles quickly.
  • At this stage, you should coach participants to avoid finding the perfect words or a pithy way to communicate a potential principle. Instead, focus on getting the core lesson learned from the insight and what advice you would give others to guide product or service decisions in the future. Table 6.1 shows a couple of examples of what a good first pass looks like.
  • At this stage, don’t be a wordsmith. Work quickly to reframe your insights from something you know (“Most people don’t want to…”) to what should be done to stay true to this insight (“Make it easy for people…”).
  • Work your way through all the insights until everyone has a principle for each one.

Table 6.1: From insights to draft principles

  • Insight: Most people don’t want to do their homework first. They want to get started and learn what they need to know when they need to know it. Principle: Make it easy for people to dive in and collect knowledge when it’s most relevant.
  • Insight: Everyone believes their situation (financial, home, health) is unique and reflects their specific circumstances, even if it’s not true. Principle: Approach people as they see themselves: unique people in unique situations.

Finding Patterns

You now have a superset of individual principles from which a handful of experience principles will emerge. Your next step is to find the patterns within them. You can use affinity mapping to identify principles that speak to a similar theme or intent. As with any clustering activity, this may take a few iterations until you feel that you have mutually exclusive categories. You can do this in just a few steps:

  • Select someone to be a workshop participant to present the principles one by one, explaining the intent behind each one.
  • Cycle through the rest of the group, combining like principles and noting where principles conflict with one another. As you cluster, the dialogue the group has is as important as where the principles end up.
  • Once things settle down, you and your colleagues can take a first pass at articulating a principle for each cluster. A simple half sheet (8.5” x 4.25” or A5) template can give some structure to this step. Again, don’t get too precious with every word yet (see Figure 6.4). Get the essence down so that you and others can understand and further refine it with the other principles.
  • You should end up with several mutually exclusive categories with a draft principle for each.

Designing Principles as a System

No experience principle is an island. Each should be understandable and useful on its own, but together your principles should form a system. Your principles should be complementary and reinforcing. They should be able to be applied across channels and throughout your product or service development process. See the following “Experience Principles Refinement Workshop” for tips on how to critique your principles to ensure that they work together as a complete whole.


The Cult of the Complex

Thu, 05/31/2018 - 9:15am

’Tis a gift to be simple. Increasingly, in our line of work, ’tis a rare gift indeed.

In an industry that extols innovation over customer satisfaction, and prefers algorithm to human judgement (forgetting that every algorithm has human bias in its DNA), perhaps it should not surprise us that toolchains have replaced know-how.

Likewise, in a field where young straight white dudes take an overwhelming majority of the jobs (including most of the management jobs) it’s perhaps to be expected that web making has lately become something of a dick measuring competition.

It was not always this way, and it needn’t stay this way. If we wish to get back to the business of quietly improving people’s lives, one thoughtful interaction at a time, we must rid ourselves of the cult of the complex. Admitting the problem is the first step in solving it.

And the div cries Mary

In 2001, more and more of us began using CSS to replace the non-semantic HTML table layouts with which we’d designed the web’s earliest sites. I soon noticed something about many of our new CSS-built sites. I especially noticed it in sites built by the era’s expert backend coders, many of whom viewed HTML and CSS as baby languages for non-developers.

In those days, whether from contempt for the deliberate, intentional (designed) limitations of HTML and CSS, or ignorance of the HTML and CSS framers’ intentions, many code jockeys who switched from table layouts to CSS wrote markup consisting chiefly of divs and spans. Where they meant list item, they wrote span. Where they meant paragraph, they wrote div. Where they meant level two headline, they wrote div or span with a classname of h2, or, avoiding even that tragicomic gesture toward document structure, wrote a div or span with verbose inline styling. Said div was followed by another, and another. They bred like locusts, stripping our content of structural meaning.

As an early adopter and promoter of CSS via my work in The Web Standards Project (kids, ask your parents), I rejoiced to see our people using the new language. But as a designer who understood, at least on a basic level, how HTML and CSS were supposed to work together, I chafed.

Cry, the beloved font tag

Everyone who wrote the kind of code I just described thought they were advancing the web merely by walking away from table layouts. They had good intentions, but their executions were flawed. My colleagues and I here at A List Apart were thus compelled to explain a few things.

Mainly, we argued that HTML consisting mostly of divs and spans and classnames was in no way better than table layouts for content discovery, accessibility, portability, reusability, or the web’s future. If you wanted to build for people and the long term, we said, then simple, structural, semantic HTML was best—each element deployed for its intended purpose. Don’t use a div when you mean a p.
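As a purely illustrative contrast (the markup and content below are invented for the example), the difference looks like this:

    <!-- Div soup: the structure lives only in class names (illustrative content). -->
    <div class="h2">Latest articles</div>
    <div class="para">We publish a new article every week.</div>

    <!-- Semantic HTML: each element deployed for its intended purpose. -->
    <h2>Latest articles</h2>
    <p>We publish a new article every week.</p>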

This basic idea, and I use the adjective advisedly, along with other equally rudimentary and self-evident concepts, formed the basis of my 2003 book Designing With Web Standards, which the industry treated as a revelation, when it was merely common sense.

The message messes up the medium

When we divorce ideas from the conditions under which they arise, the result is dogma and misinformation—two things the internet is great at amplifying. Somehow, over the years, in front-end design conversations, the premise “don’t use a div when you mean a p” got corrupted into “divs are bad.”

A backlash in defense of divs followed this meaningless running-down of them—as if the W3C had created the div as a forbidden fruit. So, let’s be clear. No HTML element is bad. No HTML element is good. A screwdriver is neither good nor bad, unless you try to use it as a hammer. Good usage is all about appropriateness.

Divs are not bad. If no HTML5 element is better suited to an element’s purpose, divs are the best and most appropriate choice. Common sense, right? And yet.

Somehow, the two preceding simple sentences are never the takeaway from these discussions. Somehow, over the years, a vigorous defense of divs led to a defiant (or ignorant) overuse of them. In some strange way, stepping back from a meaningless rejection of divs opened the door to gaseous frameworks that abuse them.

Note: We don’t mind so much about the abuse of divs. After all, they are not living things. We are not purists. It’s the people who use the stuff we design who suffer from our uninformed or lazy over-reliance on these div-ridden gassy tools. And that suffering is what we protest.

div-ridden, overbuilt frameworks stuffed with mystery meat offer the developer tremendous power—especially the power to build things quickly. But that power comes at a price your users pay: a hundred tons of stuff your project likely doesn’t need, but you force your users to download anyway. And that bloat is not the only problem. For who knows what evil lurks in someone else’s code?

Two cheers for frameworks

If you entered web design and development in the past ten years, you’ve likely learned and may rely on frameworks. Most of these are built on meaningless arrays of divs and spans—structures no better than the bad HTML we wrote in 1995, however more advanced the resulting pages may appear. And what keeps the whole monkey-works going? JavaScript, and more JavaScript. Without it, your content may not render. With it, you may deliver more services than you intended to.

There’s nothing wrong with using frameworks to quickly whip up and test product prototypes, especially if you do that testing in a non-public space. And theoretically, if you know what you’re doing, and are willing to edit out the bits your product doesn’t need, there’s nothing wrong with using a framework to launch a public site. Notice the operative phrases: if you know what you’re doing, and are willing to edit out the bits your product doesn’t need.

Alas, many new designers and developers (and even many experienced ones) feel like they can’t launch a new project without dragging in packages from NPM, or Composer, or whatever, with no sure idea what the code therein is doing. The results can be dangerous. Yet here we are, training an entire generation of developers to build and launch projects with untrusted code.

Indeed, many designers and developers I speak with would rather dance naked in public than admit to posting a site built with hand-coded, progressively enhanced HTML, CSS, and JavaScript they understand and wrote themselves. For them, it’s a matter of job security and viability. There’s almost a fear that if you haven’t mastered a dozen new frameworks and tools each year (and by mastered, I mean used), you’re slipping behind into irrelevancy. HR folks who write job descriptions listing the ten thousand tool sets you’re supposed to know backwards and forwards to qualify for a junior front-end position don’t help the situation.

CSS is not broken, and it’s not too hard

As our jerrybuilt contraptions, lashed together with fifteen layers of code we don’t understand and didn’t write ourselves, start to buckle and hiss, we blame HTML and CSS for the faults of developers. This fault-finding gives rise to ever more complex cults of specialized CSS, with internecine sniping between cults serving as part of their charm. New sects spring up, declaring CSS is broken, only to splinter as members disagree about precisely which way it’s broken, or which external technology not intended to control layout should be used to “fix” CSS. (Hint: They mostly choose JavaScript.)

Folks, CSS is not broken, and it’s not too hard. (You know what’s hard? Chasing the ever-receding taillights of the next shiny thing.) But don’t take my word for it. Check these out:

CSS Grid is here; it’s logical and fairly easy to learn. You can use it to accomplish all kinds of layouts that used to require JavaScript and frameworks, plus new kinds of layout nobody’s even tried yet. That kind of power requires some learning, but it’s good learning, the kind that stimulates creativity, and its power comes at no sacrifice of semantics, or performance, or accessibility. Which makes it web technology worth mastering.
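For a flavor of what that looks like, here’s a minimal, hypothetical sketch of a responsive card layout in CSS Grid. The class names are invented for the example, and no JavaScript or framework is involved.

    /* Illustrative class names. As many 16rem-minimum columns as fit,
       each stretching to share the leftover space. */
    .card-grid {
      display: grid;
      grid-template-columns: repeat(auto-fill, minmax(16rem, 1fr));
      grid-gap: 1.5rem;
    }

    /* Grid items need no floats, clearfixes, or positioning hacks;
       min-width: 0 keeps long content from forcing a column wider. */
    .card-grid > * {
      min-width: 0;
    }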

The same cannot be said for our deluge of frameworks and alternative, JavaScript-based platforms. As a designer who used to love creating web experiences in code, I am baffled and numbed by the growing preference for complexity over simplicity. Complexity is good for convincing people they could not possibly do your job. Simplicity is good for everything else.

Keep it simple, smarty

Good communication strives for clarity. Design is its most brilliant when it appears most obvious—most simple. The question for web designers should never be how complex can we make it. But that’s what it has become. Just as, in pursuit of “delight,” we forget the true joy reliable, invisible interfaces can bring, so too, in chasing job security, do we pile on the platform requirements, forgetting that design is about solving business and customer problems … and that baseline skills never go out of fashion. As ALA’s Brandon Gregory, writing elsewhere, explains:

I talk with a lot of developers who list Angular, Ember, React, or other fancy JavaScript libraries among their technical skills. That’s great, but can you turn that mess of functions the junior developer wrote into a custom extensible object that we can use on other projects, even if we don’t have the extra room for hefty libraries? Can you code an image slider with vanilla JavaScript so we don’t have to add jQuery to an older website just for one piece of functionality? Can you tell me what recursion is and give me a real-world example?

“I interview web developers. Here’s how to impress me.”

Growing pains

There’s a lot of complexity to good design. Technical complexity. UX complexity. Challenges of content and microcopy. Performance challenges. This has never been and never will be an easy job.

Simplicity is not easy—not for us, anyway. Simplicity means doing the hard work that makes experiences appear seamless—the sweat and torture-testing and failure that eventually, with enough effort, yields experiences that seem to “just work.”

Nor, in lamenting our industry’s turn away from basic principles and resilient technologies, am I suggesting that CDNs and Git are useless. Or wishing that we could go back to FTP—although I did enjoy the early days of web design, when one designer could do it all. I’m glad I got to experience those simpler times.

But I like these times just fine. And I think you do, too. Our medium is growing up, and it remains our great privilege to help shape its future while creating great experiences for our users. Let us never forget how lucky we are, nor, in chasing the ever-shinier, lose sight of the people and purpose we serve.


Onboarding: A College Student Discovers A List Apart

Tue, 05/22/2018 - 9:02am

What would you say if I told you I just read and analyzed over 350 articles from A List Apart in less than six weeks? “You’re crazy!” might have passed through your lips. In that case, what would you say if I was doing it for a grade? Well, you might say that makes sense.

As part of an Independent Research Study for my undergraduate degree, I wanted to fill in some of the gaps I had when it came to working with the World Wide Web. I wanted to know more about user experience and user interface design; however, I needed the most help getting to know the industry in general. Naturally, my professor directed me to A List Apart.

At first I wasn’t sure what I was going to get out of the assignment other than the credit I needed to graduate. What could one website really tell me? As I read article after article, I realized that I wasn’t just looking at a website—I was looking at a community. A community with a history of people struggling to build the web the right way. One that is constantly working to be open to all. One that is always learning, always evolving, and sometimes hard to keep up with. A community that, without my realizing it, I had become a part of. For me, the web has pretty much always been there, but now that I am better acquainted with its past, I am energized to be a part of its future. Take a look at some of the articles that inspired this change in me.

A bit of history

I started in the Business section and went back as far as November 1999. What a whirlwind that was! I had no idea what people went through and the battles that they fought to make the web what it is today. Now, I don’t mean to date any of you lovely readers, but I would have been three years old when the first business article on A List Apart was published, so everything I read until about 2010 was news to me.

For instance, when I came across Jeffrey Zeldman’s “Survivor! (How Your Peers Are Coping with the Dotcom Crisis)” that was published in 2001, I had no idea what he was talking about! The literal note I wrote for that article was: “Some sh** went down in the late 1990s???” I was in the dark until I had the chance to Google it and sheepishly ask my parents.

I had the same problem with the term Web 2.0. It wasn’t until I looked it up that I realized I didn’t know what it was, because I never experienced Web 1.0 (having not had access to the internet until 2004). In that short time, the industry had completely reinvented itself before I ever had a chance to log on!

The other bit of history that surprised me was how long and hard people had to fight to get web standards and accessibility in line. In school I’ve always been taught to make my sites accessible, and that just seemed like common sense to me. I guess I now understand why I have mixed feelings about Flash.

What I learned about accessibility

Accessibility is one of the topics I took a lot of notes on. I was glad to see that although a lot of progress had been made in this area, people were still taking the time to write about and constantly make improvements to it. In Beth Raduenzel’s “A DIY Web Accessibility Blueprint,” she explains the fundamentals to remember when designing for accessibility, including considering:

  • keyboard users;
  • blind users;
  • color-blind users;
  • low-vision users;
  • deaf and hard-of-hearing users;
  • users with learning disabilities and cognitive limitations;
  • mobility-impaired users;
  • users with speech disabilities;
  • and users with seizure disorders.

It was nice to have someone clearly spell it out. However, the term “user” was used a lot. This distances us from the people we are supposed to be designing for. Anne Gibson feels the same way; in her article, she states that “[web] accessibility means that people can use the web.” All people. In “My Accessibility Journey: What I’ve Learned So Far,” Manuel Matuzović gives exact examples of this:

  • If your site takes ten seconds to load on a mobile connection, it’s not accessible.
  • If your site is only optimized for one browser, it’s not accessible.
  • If the content on your site is difficult to understand, your site isn’t accessible.

It goes beyond just people with disabilities (although they are certainly not to be discounted).

I learned a lot of tips for designing with specific people in mind. Like including WAI-ARIA in my code to benefit visually-impaired users, and checking the color contrast of my site for people with color blindness and low-vision problems. One article even inspired me to download a Sketch plugin to easily check the contrast of my designs in the future. I’m more than willing to do what I can to allow my website to be accessible to all, but I also understand that it’s not an easy feat, and I will never get it totally right.

User research and testing methods that were new to me

Nevertheless, we still keep learning. Another topic on A List Apart I desperately wanted to absorb was the countless research, testing, and development methods I came across in my readings. Every time I turn around, someone else has come up with another way of working, and I’m always trying to keep my finger on the pulse.

I’m happy to report that most of the methods I read about were ones I already knew and had used in my own projects at school. I’ve been doing open interview techniques, personas, style tiles, and element collages all along, but I was still surprised by how many new practices I came across.

The Kano Model, the Core Model, Wizard of Oz prototyping, and think-alouds were some of the methods that piqued my curiosity. Others, like brand architecture research, call center log analysis, clickstream analysis, search analytics, and stakeholder reviews, I’d heard of before but had never been given the opportunity to try.

Unattended qualitative research, A/B testing, and fake-door testing are the ones that stood out to me most. I liked that they allow you to conduct research even if you don’t have any users in front of you. I learned a lot of new terms and did a lot of research in this section. After all, it’s easy to get lost in all the jargon.

The endless amount of abbreviations

I spent a lot of my time Googling terms during this project—especially with the older articles that mentioned programs like Fireworks that aren’t really used anymore. One of my greatest fears in working with web design is that someone will ask me something and I will have no idea what they are talking about. When I was reading all the articles, I had the hardest time with the substantial amount of abbreviations I came across: AJAX, API, ARIA, ASCII, B2B, B2C, CMS, CRM, CSS, EE, GUI, HTML, IIS, IPO, JSP, MSA, RFP, ROI, RSS, SASS, SEM, SEO, SGML, SOS, SOW, SVN, and WYSIWYG, just to name a few. Did you manage to get them all? Probably not.

We don’t use abbreviations in school because they aren’t always clear and the professors know we won’t know what they mean. To a newbie like me, these abbreviations feel like a barrier: a wall that divides the veterans of the industry from those trying to enter it. I can’t imagine how the clients must feel.

It seems as if I am not alone in my frustrations. Inayaili de León says in her article “Becoming Better Communicators,” “We want people to care about design as much as we do, but how can they if we speak to them in a foreign language?” I’m training to be a designer (I’m in a design program), and I had to look up almost every abbreviation listed above.

What I learned about myself

Prior to taking on this assignment, I would have been very hesitant to declare myself capable of creating digital design. To my surprise, I’m not alone. Matt Griffin thinks, “… the constant change and adjustments that come with living on the internet can feel overwhelming.” Kendra Skeene admits, “It’s a lot to keep track of, whether you’ve been working on the web for [twenty] years or only [twenty] months.”

My fear of not knowing all the fancy lingo was lessened when I read Lyza Danger Gardner’s “Never Heard of It.” She is a seasoned professional who admits to not knowing it all, so I, a soon-to-be-grad, can too. I have good foundations and Google on my side for those pesky abbreviations that keep popping up. As long as I just remember to use my brain as Dave Rupert suggests, when I go to get a job I should do just fine.

Entering the workplace

Before starting this assignment, I knew I wanted to work in digital and interaction design, but I didn’t know where. I was worried I didn’t know enough about the web to be able to design for it—that all the jobs out there would require me to know coding languages I’d never heard of before, and I’d have a hard time standing out among the crowd.

The articles I read on A List Apart supplied me with plenty of solid career advice. After reading articles written by designers, project managers, developers, marketers, writers, and more, I’ve come out with a better understanding of what kind of work I want to do. In the article “80/20 Practitioners Make Better Communicators,” Katie Kovalcin makes a good point about not forcing yourself to learn skills just because you feel the need to:

We’ve all heard the argument that designers need to code. And while that might be ideal in some cases, the point is to expand your personal spectrum of skills to be more useful to your team, whether that manifests itself in the form of design, content strategy, UX, or even project management. A strong team foundation begins by addressing gaps that need to be filled and the places where people can meet in the middle.

I already have skills that someone desperately needs. I just need to find the right fit and expand my skills from there. Brandon Gregory also feels that hiring isn’t all about technical knowledge. In his article, he says, “personality, fit with the team, communication skills, openness to change, [and] leadership potential” are just as important.

Along with solid technical fundamentals and good soft skills, it seems as if having a voice is also crucial. When I read Jeffrey Zeldman’s article “The Love You Make,” it became clear to me that if I ever wanted to get anywhere with my career, I was going to have to start writing.

Standout articles

The writers on A List Apart have opened my eyes to many new subjects and perspectives on web design. I particularly enjoyed looking through the game design lens in Graham Herrli’s “Gaming the System … and Winning.” It was one of the few articles where I copied his diagram on interaction personality types and their goals into my notebook. Another article that made me consider a new perspective was “The King vs. Pawn Game of UI Design” by Erik Kennedy. To start with one simple element and grow from there really made something click in my head.

However, I think that the interview I read between Mica McPheeters and Sara Wachter-Boettcher stuck with me the most. I actually caught myself saying “hmm” out loud as I was reading along. Sara’s point about crash-test dummies being sized to the average male completely shifted my understanding about how important user-centered design is. Like, life-or-death important. There is no excuse not to test your products or services on a variety of users if this is what’s at stake! It’s an article I’m glad I read.

Problems I’ve noticed in the industry

During the course of my project, I noticed some things about A List Apart, the site I was spending so much time on. For example, it wasn’t until I got to the articles published after 2014 that I really started to understand and relate to the content; funnily enough, that was the year I started my design degree.

I also noticed that it was around this time that female writers became much more prominent on the site. Today there may be many women on A List Apart, but I must point out a lack of women of color. Shoutout to Aimee Gonzalez-Cameron for her article “Hello, My Name is <Error>,” a beautiful assertion for cultural inclusion on the web through user-centered design.

Despite the lack of representation of women of color, I was very happy to see many writers acknowledge their privilege in the industry. Thanks to Cennydd Bowles, Matt Griffin, and Rian van der Merwe for their articles. My only qualm is that the topic of privilege has only appeared on A List Apart in the last five years. Because isn’t it kinda ironic? As creators of the web we aim to allow everyone access to our content, but not everyone has access to the industry itself. Sara Wachter-Boettcher wrote an interesting article that expands on this idea, which you should read if you haven’t already. However, I won’t hold it against any of you. That’s why we are here anyway: to learn.

The takeaway

Looking back at this assignment, I’m happy to say that I did it. It was worth every second (even with the possible eye damage from reading off my computer screen for hours on end). It was worth it because I learned more than I had ever anticipated. I received an unexpected history lesson of the recent internet past. I was bombarded by an explosion of new terms and abbreviations. I learned a lot about myself and how I can possibly fit into this community. Most importantly, I came out on the other end with more confidence in myself and my abilities—which is probably the greatest graduation gift I could receive from a final project in my last year of university. Thanks for reading, and wish me luck!

Thanks

Thanks to my Interactive Design professor Michael LeBlanc for giving me this assignment and pushing me to take it further.

Categories: Around The Web

The Slow Death of Internet Explorer and the Future of Progressive Enhancement

Tue, 05/15/2018 - 8:55am

My first full-time developer job was at a small company. We didn’t have BrowserStack, so we cobbled together a makeshift device lab. Viewing a site I’d been making on a busted first-generation iPad with an outdated version of Safari, I saw a distorted, failed mess. It brought home to me a quote from Douglas Crockford, who once deemed the web “the most hostile software engineering environment imaginable.”

The “works best with Chrome” problem

Because of this difficulty, a problem has emerged. Earlier this year, a widely shared article in The Verge warned of “works best with Chrome” messages seen around the web.

Hi Larry, we apologize for the frustration. Groupon is optimized to be used on a Google Chrome browser, and while you are definitely able to use Firefox or another browser if you'd like, there can be delays when Groupon is not used through Google Chrome.

— Groupon Help U.S. (@GrouponHelpUS) November 26, 2017

Hi Rustram. We'd always recommend that you use Google Chrome to browse the site: we've optimised things for this browser. Thanks.

— Airbnb Help (@AirbnbHelp) July 12, 2016

There are more examples of this problem. In the popular messaging app Slack, voice calls work only in Chrome. In response to help requests, Slack explains its decision like this: “It requires significant effort for us to build out support and triage issues on each browser, so we’re focused on providing a great experience in Chrome.” (Emphasis mine.) Google itself has repeatedly built sites—including Google Meet, Allo, YouTube TV, Google Earth, and YouTube Studio—that block alternative browsers entirely. This is clearly a bad practice, but it highlights the fact that cross-browser compatibility can be difficult and time-consuming.

The significant feature gap, though, isn’t between Chrome and everything else. Of far more significance is the increasingly gaping chasm between Internet Explorer and every other major browser. Should our development practices be hamstrung by the past? Or should we dash into the future, leaving some users in our wake? I’ll argue for a middle ground. We can make life easier for ourselves without breaking the backward compatibility of the web.

The widening gulf

Chrome, Opera, and Firefox ship new features constantly. Edge and Safari eventually catch up. Internet Explorer, meanwhile, has been all but abandoned by Microsoft, which is attempting to push Windows users toward Edge. IE receives nothing but security updates. It’s a frustrating period for client-side developers. We read about new features but are often unable to use them—due to a single browser with a diminishing market share.

Internet Explorer’s global market share since 2013 is shown in dark blue. It now stands at just 3 percent.

Some new features are utterly trivial (caret-color!); some are for particular use cases you may never have (WebGL 2.0, Web MIDI, Web Bluetooth). Others already feel near-essential for even the simplest sites (object-fit, Grid).

A list of features supported in Chrome but unavailable in IE11, taken from caniuse.com. This is a truncated and incomplete screenshot of an extraordinarily long list.

The promise and reality of progressive enhancement

For content-driven sites, the question of browser support should never be answered with a simple yes or no. CSS and HTML were designed to be fault-tolerant. If a particular browser doesn’t support shape-outside or service workers or font-display, you can still use those features. Your website will not implode. It’ll just lack that extra stylistic flourish or performance optimization in non-supporting browsers.
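As a small, hypothetical illustration (the selector and font name are ours, not from any real project), declarations like these are simply skipped by browsers that don’t understand them, and the page keeps working:

/* Browsers without font-display support ignore the descriptor and load the font as they always have */
@font-face {
  font-family: "Example Sans";
  src: url("example-sans.woff2") format("woff2");
  font-display: swap;
}

/* Browsers without shape-outside support render an ordinary rectangular float */
.pull-figure {
  float: left;
  shape-outside: circle(50%);
}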

Other features, such as CSS Grid, require a bit more work. A page layout is less an enhancement than a necessity, and Grid has finally brought a real layout system to the web. When used with care for simple cases, Grid can gracefully fall back to older layout techniques. We could, for example, fall back to flex-wrap. Flexbox is by now a taken-for-granted feature among developers, yet even that is riddled with bugs in IE11.

.grid > * {
  width: 270px; /* no grid fallback style */
  margin-right: 30px; /* no grid fallback style */
}

@supports (display: grid) {
  .grid > * {
    width: auto;
    margin-right: 0;
  }
}

In the code above, I’m setting all the immediate children of the grid to have a specified width and a margin. For browsers that support Grid, I’ll use grid-gap in place of margin and define the width of the items with the grid-template-columns property. It’s not difficult, but it adds bloat and complexity if it’s repeated throughout a codebase for different layouts. As we start building entire page layouts with Grid (and eventually display: contents), providing a fallback for IE will become increasingly arduous. By using @supports for complex layout tasks, we’re effectively solving the same problem twice—using two different methods to create a similar result.
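For completeness, here is a sketch of what the Grid half of that fallback might look like on the parent element; the 270px and 30px values mirror the fallback above and are illustrative only:

@supports (display: grid) {
  .grid {
    display: grid;
    /* column width moves from the items to the grid definition */
    grid-template-columns: repeat(auto-fill, 270px);
    /* grid-gap replaces the per-item margin-right from the fallback */
    grid-gap: 30px;
  }
}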

Not every feature can be used as an enhancement. Some things are imperative. People have been getting excited about CSS custom properties since 2013, but they’re still not widely used, and you can guess why: Internet Explorer doesn’t support them. Or take Shadow DOM. People have been doing conference talks about it for more than five years. It’s finally set to land in Firefox and Edge this year, and lands in Internet Explorer … at no time in the future. You can’t patch support with transpilers or polyfills or prefixes.

Users have more browsers than ever to choose from, yet IE manages to single-handedly tie us to the pre-evergreen past of the web. If developing Chrome-only websites represents one extreme of bad development practice, shackling yourself to a vestigial, obsolete, zombie browser surely represents the other.

The problem with shoehorning

Rather than eschewing modern JavaScript features, we’ve made polyfilling and transpiling the norm. ES6 is supported everywhere other than IE, yet we’re sending all browsers transpiled versions of our code. Transpilation isn’t great for performance. A single five-line async function, for example, may well transpile to twenty-five lines of code.
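To make that concrete, here is the sort of small async function we’re talking about (the endpoint and data shape are placeholders); it runs natively everywhere except IE, yet a typical Babel configuration targeting IE will rewrite it into a much longer, generator-based equivalent and serve that version to every browser:

// Five lines of modern JavaScript that only IE can't run as written
async function loadArticles() {
  const response = await fetch('/api/articles'); // placeholder endpoint
  const articles = await response.json();
  return articles.filter(article => article.published);
}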

“I feel some guilt about the current state of affairs,” Alex Russell said of his previous role leading development of Traceur, a transpiler that predated Babel. “I see so many traces where the combination of Babel transpilation overhead and poor [webpack] foo totally sink the performance of a site. … I’m sad that we’re still playing this game.”

What you can’t transpile, you can often polyfill. Polyfill.io has become massively popular. Chrome gets sent a blank file. Ancient versions of IE receive a giant mountain of polyfills. We are sending the largest payload to those least equipped to deal with it—people stuck on slow, old machines.

What is to be done?

Prioritize content

Cutting the mustard is a technique popularized by the front-end team at BBC News. The approach cuts the browser market in two: all browsers receive a base experience or core content. JavaScript is conditionally loaded only by the more capable browsers. Back in 2012, their dividing line was this:

if ('querySelector' in document &&
    'localStorage' in window &&
    'addEventListener' in window) {
  // load the javascript
}

Tom Maslen, then a lead developer at the BBC, explained the rationale: “Over the last few years I feel that our industry has gotten lazy because of the crazy download speeds that broadband has given us. Everyone stopped worrying about how large their web pages were and added a ton of JS libraries, CSS files, and massive images into the DOM. This has continued on to mobile platforms that don’t always have broadband speeds or hardware capacity to render complex code.”

The Guardian, meanwhile, entirely omits both JavaScript and stylesheets from Internet Explorer 8 and earlier.

The Guardian navigation as seen in Internet Explorer 8. Unsophisticated yet functional.

Nature.com takes a similar approach, delivering only a very limited stylesheet to anything older than IE10.

The nature.com homepage as seen in Internet Explorer 9.

Were you to break into a museum, steal an ancient computer, and open Netscape Navigator, you could still happily view these websites. A user comes to your site for the content. They didn’t come to see a pretty gradient or a nicely rounded border-radius. They certainly didn’t come for the potentially nauseating parallax scroll animation.

Anyone who’s been developing for the web for any amount of time will have come across a browser bug. You check your new feature in every major browser and it works perfectly—except in one. Memorizing support info from caniuse.com and using progressive enhancement is no guarantee that every feature of your site will work as expected.

The W3C’s website for the CSS Working Group as viewed in the latest version of Safari.

Regardless of how perfectly formed and well-written your code is, sometimes things break through no fault of your own, even in modern browsers. If you’re not actively testing your site, bugs are more likely to reach your users, unbeknownst to you. Rather than transpiling and polyfilling and hoping for the best, we can deliver what the person came for, in the most resilient, performant, and robust form possible: unadulterated HTML. No company has the resources to actively test its site on every old version of every browser. Malfunctioning JavaScript can ruin a web experience and make a simple page unusable. Rather than leaving users to a mass of polyfills and potential JavaScript errors, we give them a basic but functional experience.

Make a clean break

What could a mustard cut look like going forward? You could conduct a feature query using JavaScript to conditionally load the stylesheet, but relying on JavaScript introduces a brittleness that would be best to avoid. You can’t use @import inside an @supports block, so we’re left with media queries.

The following query will prevent the CSS file from being delivered to any version of Internet Explorer and older versions of other browsers:

<link id="mustardcut" rel="stylesheet" href="stylesheet.css" media=" only screen, only all and (pointer: fine), only all and (pointer: coarse), only all and (pointer: none), min--moz-device-pixel-ratio:0) and (display-mode:browser), (min--moz-device-pixel-ratio:0) ">

We’re not really interested in what particular features this query is testing for; it’s just a hacky way to split between legacy and modern browsers. The shiny, modern site will be delivered to Edge, Chrome (and Chrome for Android) 39+, Opera 26+, Safari 9+, Safari on iOS 9+, and Firefox 47+. I based the query on the work of Andy Kirk. If you want to take a cutting-the-mustard approach but have to meet different support demands, he maintains a GitHub repo with a range of options.

We can use the same media query to conditionally load a JavaScript file. This gives us one consistent dividing line between old and modern browsers:

(function() {
  var linkEl = document.getElementById('mustardcut');
  if (window.matchMedia &&
      window.matchMedia(linkEl.media).matches) {
    var script = document.createElement('script');
    script.src = 'your-script.js';
    script.async = true;
    document.body.appendChild(script);
  }
})();

matchMedia brings the power of CSS media queries to JavaScript. The matches property is a boolean that reflects the result of the query. If the media query we defined in the link tag evaluates to true, the JavaScript file will be added to the page.

It might seem like an extreme solution. From a marketing point of view, the site no longer looks “professional” to a small number of visitors. However, we’ve managed to improve the performance for those stuck on old technology while also opening the possibility of using the latest standards on browsers that support them. This is far from a new approach. All the way back in 2001, A List Apart stopped delivering a visual design to Netscape 4. Readership among users of that browser went up.

Front-end development is complicated at the best of times. Adding support for a technologically obsolete browser adds an inordinate amount of time and frustration to the development process. Testing becomes onerous. Bug-fixing looms large.

By making a clean break with the past, we can focus our energies on building modern sites using modern standards without leaving users stuck on antiquated browsers with an untested and possibly broken site. We save a huge amount of mental overhead. If your content has real value, it can survive without flashy embellishments. And for Internet Explorer users on Windows 10, Edge is preinstalled. The full experience is only a click away.

Internet Explorer 11 with its ever-present “Open Microsoft Edge” button.

Developers must avoid living in a bubble of MacBook Pros and superfast connections. There’s no magic bullet that enables developers to use bleeding-edge features. You may still need Autoprefixer and polyfills. If you’re planning to have a large user base in Asia and Africa, you’ll need to build a site that looks great in Opera Mini and UC Browser, which have their own limitations. You might choose a different cutoff point for now, but it will increasingly pay off, in terms of both user experience and developer experience, to make use of what the modern web has to offer.

 

Categories: Around The Web

So You Want to Write an Article?

Thu, 05/10/2018 - 10:30am

So you want to write an article. Maybe you’ve got a great way of organizing your CSS, or you’re a designer who has a method of communicating really well with developers, or you have some insight into how to best use a new technology. Whatever the topic, you have insights, you’ve read the basics of finding your voice, and you’re ready to write and submit your first article for a major publication. Here’s the thing: most article submissions suck. Yours doesn’t have to be one of them.

At A List Apart, we want to see great minds in the industry write the next great articles, and you could be one of our writers. I’ve been on the editorial team here for about nine months now, and I’ve written a fair share of articles here as well. Part of what I do is review article submissions and give feedback on what’s working and what’s not. We publish different kinds of articles, but many of the submissions I see—particularly from newer writers—fall into the same traps. If you’re trying to get an article published in A List Apart or anywhere else, knowing these common mistakes can help your article’s chances of being accepted.

Keep introductions short and snappy

Did you read the introduction above? My guess is a fair share of readers skipped straight to this point. That’s pretty typical behavior, especially for articles like this one that offer several answers to one clear question. And that’s totally fine. If you’re writing, realize that some people will do the same thing. There are some things you can do to improve the chances of your intro being read, though.

Try to open with a bang. A recent article from Caroline Roberts has perhaps the best example of this I’ve ever seen: “I won an Emmy for keeping a website free of dick pics.” When I saw that in the submission, I was instantly hooked and read the whole thing. It’s hilarious, it shows she has expertise on managing content, and it shows that the topic is more involved and interesting than it may at first seem. A more straightforward introduction to the topic of content procurement would seem very boring in comparison. Your ideas are exciting, so show that right away if you can. A funny or relatable story can also be a great way to lead into an article—just keep it brief!

If you can’t open with a bang, keep it short. State the problem, maybe put something about why it matters or why you’re qualified to write about it, and get to the content as quickly as possible. If a line in your introduction does not add value to the article, delete it. There’s little room for meandering in professional articles, but there’s absolutely no room for it in introductions.

Get specific

Going back to my first article submission for A List Apart, way before I joined the team, I wanted to showcase my talent and expertise, and I thought the best way to do this was to showcase all of it in one article. I wrote an overview of professional skills for web professionals. There was some great information in there, based on my years of experience working up through the ranks and dealing with workplace drama. I was so proud when I submitted the article. It wasn’t accepted, but I got some great feedback from the editor-in-chief: get more specific.

The most effective articles I see deal with one central idea. The more disparate ideas I see in an article, the less focused and impactful the article is. There will be exceptions to this, of course, but they’re rarer than articles that suffer for trying to do too much. Don’t give yourself a handicap by taking an approach that fails more often than it succeeds.

Covering one idea in great detail, with research and examples to back it up, usually goes a lot further in displaying your expertise than an overview of a bunch of disparate thoughts. Truth be told, a lot of people have probably arrived at the same ideas you have. The insights you have are not as important as your evidence and eloquence in expressing them.

Can an overview article work? Actually, yes, but you need to frame it within a specific problem. One great example I saw was an overview of web accessibility (which has not been published yet). The article followed a fictional project from beginning to end, showing how each team on the project could work toward a goal of accessibility. But the idea was not just accessibility—it was how leaders and project managers could assign responsibility for accessibility. It was a great submission because it began with a problem of breadth and offered a complete solution to that problem. But it only worked because it was written specifically for an audience that needed to understand the whole process. In other words, the comprehensive nature of the article was the entire point, and it stuck to that.

Keep your audience in mind

You have a viewpoint. A problem I frequently see with new submissions is forgetting that the audience also has its viewpoint. You have to know your audience and remember how the audience’s mindset matches yours—or doesn’t. In fact, you’ll probably want to state in your introduction who the intended audience is to hook the right readers. To write a successful article, you have to keep that audience in mind and write for it specifically.

A common mistake I see writers make is using an article to vent their frustrations about the people who won’t listen to them. The problem is that the audience of our publication usually agrees with the author on these points, so a rant about why he or she is right is ultimately pointless. If you’re writing for like-minded people, it’s usually best to assume the readers agree with you and then either delve into how to best accomplish what you’re writing about or give them talking points to have that conversation in their workplace. Write the kind of advice you wish you’d gotten when those frustrations first surfaced.

Another common problem is forgetting what the audience already knows—or doesn’t know. If something is common knowledge in your industry, it doesn’t need another explanation. You might link out to another explanation somewhere else just in case, but there’s no need to start from scratch when you’re trying to make a new point. At the same time, don’t assume that all your readers have the same expertise you do. I wrote an article on some higher-level object-oriented programming concepts—something many JavaScript developers are not familiar with. Rather than spend half the article giving an overview of object-oriented programming, though, I provided some links at the beginning of the article that gave a good overview. Pro tip: if you can link out to articles from the same publication you’re submitting to, publications will appreciate the free publicity.

Defining your audience can also really help with knowing their viewpoint. Many times when I see a submission with two competing ideas, they’re written for different audiences. In my article I mentioned above, I provide some links for developers who may be new to object-oriented programming, but the primary audience is developers who already have some familiarity with it and want to go deeper. Trying to cater to both audiences wouldn’t have doubled the readership—it would have reduced it by making a large part of the article less relevant to readers.

Keep it practical

I’ll admit, of all these tips, this is the one I usually struggle with the most. I’m a writer who loves ideas, and I love explaining them in great detail. While there are some readers who appreciate this, most are looking for some tangible ways to improve something. This isn’t to say that big concepts have no place in professional articles, but you need to ask why they are there. Is your five-paragraph explanation of the history of your idea necessary for the reader to make the improvements you suggest?

This became abundantly clear to me in my first submission of an article on managing ego in the workplace. I love psychology and initially included a lengthy section up-front on how our self-esteem springs from the strengths we leaned on growing up. While this fascinated me, it wasn’t right for an audience of web professionals who wanted advice on how to improve their working relationships. Based on feedback I received, I removed the section entirely and added a section on how to manage your own ego in the workplace—much more practical, and that ended up being a favorite section in the final piece.

Successful articles solve a problem. Begin with the problem—set it up in your introduction, maybe tell a little story that illustrates how this problem manifests—and then build a case for your solution. The problem should be clear to the reader very early on in the article, and the rest of the article should all be related to that problem. There is no room for meandering and pontification in a professional article. If the article is not relevant and practical, the reader will move on to something else.

The litmus test for determining the practicality of your article is to boil it down to an outline. Of course all of your writing is much more meaningful than an outline, but look at the outline. There should be several statements along the lines of “Do this,” or “Don’t do this.” You can have other statements, of course, but they should all be building toward some tangible outcome with practical steps for the reader to take to solve the problem set up in your introduction.

It’s a hard truth you have to learn as a writer that you’ll be much more in love with your ideas than your audience will. Writing professional articles is not about self-expression—it’s about helping and serving your readers. The more clear and concise the content you offer, the more your article will be read and shared.

Support what you say

Your opinions, without evidence to support them, will only get you so far. As a writer, your ideas are probably grounded in a lot of real evidence, but your readers don’t know that—you’ll have to show it. How do you show it? Write a first draft and get your ideas out. Then do another pass to look for stories, stats, and studies to support your ideas. Trying to make a point without at least one of these is at best difficult and at worst empty hype. Professionals in your industry are less interested in platitudes and more interested in results. Having some evidence for your claims goes a long way toward demonstrating your expertise and proving your point.

Going back to my first article in A List Apart, on defusing workplace drama, I had an abstract point to prove, and I needed to show that my insights meant something. My editor on that article was fantastic and asked the right questions to steer me toward demonstrating the validity of my ideas in a meaningful way. Personal stories made up the backbone of the article, and I was able to find social psychology studies to back up what I was saying. These illustrations of the ideas ended up being more impactful than the ideas themselves, and the article was very well-received in the community.

Storytelling can be an amazing way to bring your insights to life. Real accounts or fictional, well-told stories can serve to make big ideas easier to understand, and they work best when representing typical scenarios, not edge cases. If your story goes against common knowledge, readers will pick up on that instantly and you’ll probably get some nasty comments. Never use a story to prove a point that doesn’t have any other hard evidence to back it up—use stories to illustrate points or make problems more relatable. Good stories are often the most memorable parts of articles and make your ideas and assertions easier to remember.

Stats are one of the easiest ways to make a point. If you’re arguing that ignoring website accessibility can negatively impact the business, some hard numbers are going to say a lot more than stories. If there’s a good stat to prove your point, always include it, and always be on the lookout for relevant numbers. As with stories, though, you should never try to use stats to distort the truth or prove a point that doesn’t have much else to support it. Mark Twain once said, “There are three kinds of lies: lies, damned lies, and statistics.” You shouldn’t decide what to say and then scour the internet for ways to back it up. Base your ideas on the numbers; don’t base your selection of facts on your idea.

Studies, including both user experience studies and social psychology experiments, are somewhere in between stories and stats, and a lot of the same advantages and pitfalls also apply. A lot of studies can be expressed as a story—write a quick bit from the point of view of the study participant, then go back and explain what’s really going on. This can be just as engaging and memorable as a good story, but studies usually result in stats, which usually serve to make the stories significantly more authoritative. And remember to link out to the study for people who want to read more about it!

Just make sure your study wasn’t disproved by later studies. In my first article, linked above, I originally referenced a study to introduce the bystander effect, but an editor wisely pointed out that there’s actually a lot of evidence against that interpretation of the well-known study. Interpretations can change over time, especially as new information comes out. I found a later, more relevant study that illustrated the point better and was less well-known, so it made for a better story.

Kill your darlings

Early twentieth century writer and critic Arthur Quiller-Couch once said in a speech, “Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it—whole-heartedly—and delete it before sending your manuscript to press. Murder your darlings.” Variants of this quote were repeated by many authors throughout the twentieth century, and it’s just as true today as when he originally said it.

What does that mean for your article? Great prose, great analogies, great stories—any bits of brilliant writing that you churn out—only mean as much as they contribute to the subject at hand. If it doesn’t contribute anything, it needs to be killed.

When getting your article ready for submission, your best friend will be the backspace or delete key on your keyboard. Before submitting, do a read-through for the express purpose of deleting whatever you can to trim down the article. Articles are not books. Brevity is a virtue, and it usually ends up being one of the most important virtues in article submissions.

Your intro should have a clear thesis so readers know what the article is about. For every bit of writing that follows it, ask if it contributes to your argument. Does it illustrate the problem or solution? Does it give the reader empathy for or understanding of the people you’re trying to help? Does it give them guidance on how to have these conversations in their workplaces? If you can’t relate a sentence back to your original thesis, it doesn’t matter how brilliant it is—it should be deleted.

Humor can be useful, but many jokes serve as little more than an aside or distraction from the main point. Don’t interrupt your train of thought with a cute joke—use a joke to make your thoughts more clear. It doesn’t matter how funny the joke is; if it doesn’t help illustrate or reinforce one of your points, it needs to go.

There are times when a picture really is worth a thousand words. Don’t go crazy with images and illustrations in your piece, but if a quick graphic is going to save you a lengthy explanation, go that route.

So what are you waiting for?

The industry needs great advice in articles, and many of you could provide that. The points I’ve delved into in this article aren’t just formalities and vague ideas; the editing team at A List Apart has weighed in, and these are problems we see often that weaken articles and make them less accessible to readers. Heeding this advice will strengthen your professional articles, whether you plan to submit to A List Apart or anywhere else. The next amazing article in A List Apart could be yours, and we hope to see you get there.

Categories: Around The Web

We’re Looking for People Who Love to Write

Tue, 05/08/2018 - 9:02am

Here at A List Apart, we’re looking for new authors, and that means you. What should you write about? Glad you asked!

You should write about topics that keep you up at night, passions that make you the first to show up in the office each morning, ideas that matter to our community and about which you have a story to tell or an insight to share.

We’re not looking for case studies about your company or thousand-foot overviews of topics most ALA readers already know about (i.e., you don’t have to tell A List Apart readers that Sir Tim Berners-Lee invented the web). But you also don’t have to write earth-shaking manifestos or share new ways of working that will completely change the web. A good, strong idea or detailed advice about an industry best practice makes an excellent ALA article.

Where we’ve been

Although A List Apart covers everything from accessible UX and product design to advanced typography and content and business strategy, the sweet spot for an A List Apart article is one that combines UI design (and design thinking) with front-end code, especially when it’s innovative. Thus our most popular article of the past ten years was Ethan Marcotte’s “Responsive Web Design”—a marriage of design and code, accessible to people with diverse backgrounds at differing levels of expertise.

In the decade-plus before that, our most popular articles were Douglas Bowman’s “Sliding Doors of CSS” and Dan Cederholm’s “Faux Columns”—again, marriages of design and code, and mostly in the nature of clever workarounds (because CSS in 2004 didn’t really let us design pages as flexibly and creatively, or even as reliably, as we wanted to).

From hacks to standards

Although clever front-end tricks like Sliding Doors, and visionary re-imaginings of the medium like Responsive Web Design, remain our most popular offerings, the magazine has offered fewer of them in recent years, focusing more on UX and strategy. To a certain extent, if a front-end technique isn’t earth-changing (i.e., isn’t more than just a technique), and if it isn’t semantic, inclusive, accessible, and progressively enhanced, we don’t care how flashy it is—it’s not for us.

The demand to create more powerful layouts was also, in a real way, satisfied by the rise of frameworks and shared libraries—another reason for us to have eased off front-end tricks (although not all frameworks and libraries are equally or in some cases even acceptably semantic, inclusive, accessible, and progressively enhanced—and, sadly, many of their users don’t know or care).

Most importantly, now that CSS is finally capable of true layout design without hacks, any responsible web design publication will want to ease off on the flow of front-end hacks, in favor of standards-based education, from basic to advanced. Why would any editor or publisher (or framework engineer, for that matter) recommend that designers use 100 pounds of fragile JavaScript when a dozen lines of stable CSS will do?

It will be interesting to see what happens to the demand for layout hack articles in Medium and web design publications and communities over the next twelve months. It will also be interesting to see what becomes of frameworks now that CSS is so capable. But that’s not our problem. Our problem is finding the best ideas for A List Apart’s readers, and working with the industry’s best old and new writers to polish those ideas to near-perfection.

After all, even more than being known for genius one-offs like Responsive Web Design and Sliding Doors of CSS, A List Apart has spent its life introducing future-friendly, user-focused design advances to this community, i.e., fighting for web standards when table layouts were the rage, fighting for web standards when Flash was the rage, pushing for real typography on the web years before Typekit was a gleam in Jeff Veen’s eye, pushing for readability in layout when most design-y websites thought single-spaced 7px Arial was plenty big enough, promoting accessible design solutions, user-focused solutions, independent content and communities, and so on.

Call to action

Great, industry-changing articles are still what we want most, whether they’re front-end, design, content, or strategy-focused. And changing the industry doesn’t have to mean inventing a totally new way of laying out pages or evaluating client content. It can also mean coming up with a compelling argument in favor of an important but embattled best practice. Or sharing an insightful story that helps those who read it be more empathetic and more ethical in their daily work.

Who will write the next 20 years of great A List Apart articles? That’s where you come in.

Publishing on A List Apart isn’t as easy-peasy as dashing off a post on your blog, but the results—and the audience—are worth it. And when you write for A List Apart, you never write alone: our industry-leading editors, technical editors, and copyeditors are ready to help you polish your best idea from good to great.

Come share with us!

Categories: Around The Web

Priority Guides: A Content-First Alternative to Wireframes

Thu, 05/03/2018 - 9:07am

No matter your role, if you’ve ever been involved in a digital design project, chances are you’re familiar with wireframes. After all, they’re among the most popular and widely used tools when designing websites, apps, dashboards, and other digital user interfaces.

But they do have their problems, and wireframes are so integrated into the accepted way of working that many don’t consider those drawbacks. That’s a shame, because the tool’s downsides can seriously undermine user-centricity. Ever lose yourself in aesthetic details when you should have been talking about content and functionality? We have!

That’s why we use an alternative that avoids the pitfalls of wireframes: the priority guide. Not only does it keep our process user-centered and create more valuable designs for our users (whether used alongside wireframes or as a direct replacement); it has also improved team engagement, collaboration, and design workflows.

The problem with wireframes

Wikipedia appropriately defines the wireframe as “a visual guide that represents the skeletal framework of a website. … [It] depicts the page layout or arrangement of the website’s content, including interface elements and navigational systems.” In other words, wireframes are sketches that represent the potential website (or app) in a simplified way, including the placement and shape of any interface elements. They range from low-fidelity rough sketches on paper to high-fidelity colored, textual screens in a digital format.

Examples of low-fidelity (on the left) and high-fidelity (on the right) wireframes

Because of their visual nature, wireframes are great tools for sketching and exploring design ideas, as well as communicating those ideas to colleagues, clients, and stakeholders. And since they’re so easy to create and adapt with tools such as Sketch or Balsamiq, you also have something to user test early in the design process, allowing usability issues to be addressed sooner than might otherwise be possible.

But although these are all valuable characteristics of wireframes, there are also some significant downsides.

The illusion of final design

Wireframes can provide the illusion that a design is final, or at least in a late stage of completion. Regardless of how carefully you explain to clients or stakeholders that these first concepts are just early explorations and not final—maybe you even decorated them with big “DRAFT” stickers—too often they’ll still enthusiastically exclaim, “Looks good, let’s start building!”

Killing creativity and engagement

At Mirabeau, we’ve noticed that wireframes tend to kill creativity. We primarily work in multidisciplinary teams consisting of (among others) interaction (UX) designers, visual designers, front-end developers, and functional testers. But once an interaction designer has created a wireframe, it’s hard for many (we’re not saying all) visual designers to think outside the boundaries set by that wireframe and challenge the ideas it contains. As a result, the final designs almost always resemble the wireframes. With their creativity impaired, the visual designers end up essentially just coloring in the wireframes.

Undermining user-centricity

As professionals, we naturally care about how something looks and is presented. So much so that we can easily lose ourselves for hours in the fine details, such as alignment, sizing, coloring, and the like, even on rough wireframes intended only for internal use. Losing time means losing focus on what’s valuable for your user: the content, the product offering, and the functionality.

Static, not responsive

A wireframe (even multiple wireframes) can’t capture the responsive behavior that is so essential to modern web design. Even though digital design tools are catching up in efficiently designing for different screen sizes (here’s hoping InVision Studio will deliver), each of the resulting wireframes is still just a static image.

Inconvenient for developers and functional testers

Developers and functional testers work with code, and a wireframe sketch or picture provides little functional information and isn’t directly translatable into code (not yet, anyway). This lack of clarity around how the design should behave can lead to developers and testers making decisions about functionality or responsiveness without input from the designer, or having to frequently check with the designer to find out if a feature is working correctly. This is perhaps less of a problem for a mature team or project where there’s plenty of experience with, and knowledge of, the product, but all too often this (unnecessary) back-and-forth means more development work, a slower process, and wasted time.

To overcome these wireframe pitfalls, about five years ago we adopted priority guides. Our principal interaction designer, Paul Versteeg, brought the tool to Mirabeau, and we’ve been improving and fine-tuning our way of working with them ever since, with great results.

So what are priority guides?

As far as we know, credit for the invention of priority guides goes to Drew Clemens, who first introduced the concept in his article on the Smashing Magazine website in 2012. Since then, however, priority guides seem to have received little attention, either from the web and app design industry or from related educational institutions.

Simply put, a priority guide contains content and elements for a mobile screen, sorted by hierarchy from top to bottom and without layout specifications. The hierarchy is based on relevance to users, with the content most critical to satisfying user needs and supporting user (and company) goals higher up.

The format of a priority guide is not fixed: it can be digital (we personally prefer Sketch), or it can be physical, made with paper and Post-its. Most importantly, a priority guide is automatically content-first, with a strong focus on providing best value for users.

The core structure of a priority guide

Diving a bit deeper, the following example shows the exact same page as shown in the wireframe images presented earlier in this article. It consists of the title “Book a flight,” real content (yes, even the required legal notice!), several sections of information, and annotations that explain components and functionality.

A detailed digital priority guide for an airline’s flight overview page

When comparing the content to the high-fidelity wireframe, you’ll notice that the order of the sections is not the same. The step indicator, for example, is shown at the bottom of the priority guide, as the designer decided it’s not the most important information on the page. Conversely, the most important information—flight information and prices—is now placed near the top.

Annotations are an important part of priority guides, as they provide explanations of the functionalities and page behavior, name the component types, and link the priority guide of one page to the priority guides of other pages. In this example, you can find descriptions of what happens when a user interacts with a button or link, such as opening an overlay screen to display flight details or loading a flight selection page.

The advantages of priority guides

Of course, we can debate for hours whether the creator of, or team responsible for, the above priority guide has chosen the correct priorities and functionalities, but that goes beyond the scope of this article. Instead, let’s name the main advantages that priority guides offer over wireframes.

Suitable for responsive design

Wireframes are static images, requiring multiple screenshots to cover the full spectrum from mobile to desktop. Priority guides, on the other hand, give an overview of content hierarchy regardless of screen size (assuming user goals remain the same on different devices). Ever since responsive design became standard practice within Mirabeau, priority guides have been an essential addition to our design toolkit.

Focused on solving problems and serving needs

When creating priority guides, you automatically focus on solving users’ problems, serving their needs, and supporting them in reaching their goals. The interface is always filled with content that communicates a message or helps the user. By designing content-first, you’re always focused on serving the user.

No time wasted on aesthetics and layout

There’s no need for interaction designers to waste time on aesthetics and layout in the early phases of the design process. Priority guides help avoid the focus shifting away from the content and user toward specific layout elements too early, and keep us from falling into the “designer trap” of visual perfectionism.

Facilitating visual designers’ creativity

Priority guides provide the opportunity for designers to explore extravagant ideas on how to best support and delight the user without visual boundaries set by interaction designers. Even when you’re the only designer on your team, working as both interaction and visual designer, it’s hard to move past how those first wireframes looked, even when confronted with new content.

Developers and testers get “HTML” early in the process

The structure of a priority guide is very similar to HTML, allowing the developer to start laying the groundwork for future development early on. Similarly, testers get a checklist for testing, allowing them to begin building those tests straight away. The result is early feedback on the feasibility of the designs, and we’ve found priority guides have significantly sped up the collaborative process of design and development at Mirabeau.

How to create priority guides

There are a number of baselines and steps that we’ve found useful when creating priority guides. We’ve fine-tuned them over the years as we’ve applied this new approach to our projects, and conducted workshops explaining priority guides to the Dutch design community.

The baselines

Your priority guide should only contain real content that’s relevant to the user. Lorem ipsum, or any other type of placeholder text, doesn’t communicate how the page supports users in reaching their goals. Moreover, don’t include any layout elements when making priority guides. Instead, include only content and functionality. Remember that a priority guide is never a deliverable—it’s merely a tool to facilitate discussion among the designers, developers, testers, and stakeholders involved in the project.

Priority guides should always have a mobile format. By constraining yourself this way, you automatically think mobile-first and consider which information is most important (and so should be at the top of the screen). Also, since the menu is typically more or less the same on every screen of your website or app, we recommend leaving the menu out of your priority guide. It’ll help you focus on the screen you’re designing for, and the guide won’t be cluttered with unnecessary distractions.

Step 1: determine the goal(s)

Before jumping to the solution, it’s important to take a step back and consider why you’re making this priority guide. What is the purpose of the page? What goal or goals does the user have? And what goal or goals does the business have? The answers to these questions will both guide your user research and determine which content will add more value to users and the business, and so have higher priority.

Step 2: research and understand the user

There are many methods for user research, and the method or methods chosen will largely depend on the situation and project. However, when creating priority guides, we’ve definitely found it useful to generate personas, affinity diagrams, and experience maps to help create a visual summary of any research findings.

Step 3: determine the content topics

The aim of this stage is to use your knowledge of the user and the business to determine which specific content and topics will best support their goals in each phase of the customer journey. Experience has taught us that co-creating this content outline with users, clients, copywriters, and stakeholders can be highly beneficial. The result is a list of topics that each page should contain.

Step 4: create a high-level priority guide

Use the list of topics to create a high-level priority guide. Which is the most important topic? Place that one on the top. Which is the second most important topic? That one goes below the first. It’s a straightforward prioritization process that should be continued until all the (relevant) topics have found a place in the priority list. It’s important to question the importance of each topic, not only in comparison to other topics, but also whether the topic should really be on the page at all. And we’ve found that starting on paper definitely helps avoid focusing too much on the little visual details, which can happen if using a digital design tool (“pixel-fixing”).

A high-level priority guide for FreeBees, a fictional company

Step 5: create a detailed priority guide

Now it’s time to start adding the details. For each topic, determine the detailed, real content that will appear on the page. Also, start thinking about any functionalities the page may need. When you have multiple priority guides for multiple pages, indicate how and where these pages are connected in a sitemap format.

We often use this first schematic shape of the product to identify flows, test if the concept is complete, and determine whether the current content and priorities effectively serve users’ needs and help solve their problems. More than once it has allowed us to identify that a content plan needed to be altered to achieve the outcome we were targeting. And because priority guides are quick and easy to produce, iterating at this stage saved a lot of time and effort.

A detailed priority guide for FreeBees, a fictional company

Step 6: user testing and (further) iteration

The last (continuous) step involves testing and iterating your priority guides. Ask users what they think about the information presented in the priority guides (yes, it is possible to do usability testing with priority guides!), and gather feedback from stakeholders. The input gained from these sessions can then be used to validate and reprioritize the information, and to add or adapt functionalities, followed by further testing as needed.

Find out what works for you

Over the years we’ve seen many variations on the process described above. Some designers work entirely with paper and Post-its, while others prefer to create priority guides in a digital design tool from scratch. Some go no further than high-level priority guides, while others use detailed priority guides as a guideline for their entire project.

The key is to experiment, and take the time to find out which approach works best for you and your team. What remains important no matter your process, however, is the need to always keep the focus on user and business goals, and to continuously ask yourself what each piece of content or functionality adds to these goals.

Conclusion

For us here at Mirabeau, priority guides have become a highly efficient tool for designing user-first, content-first, and mobile-first, overcoming many of the significant pitfalls that come from relying only on wireframes. Wireframes do have their uses, and in many situations it’s valuable to be able to visualize ideas and discuss them with team members, clients, or stakeholders. Sketching concepts as wireframes to test ideas can also be useful, and sometimes we’ll even generate wireframes to gain new insights into how to improve our priority guides!

Overall, we’ve found that priority guides are more useful at the start of a project, when in the phase of defining the purpose and content of screens. Wireframes, on the other hand, are more useful for sketching and communicating ideas and visual concepts. Just don’t start with wireframes, and make sure you always stay focused on what’s important.

Categories: Around The Web

The Illusion of Control in Web Design

Thu, 04/26/2018 - 9:02am

We all want to build robust and engaging web experiences. We scrutinize every detail of an interaction. We spend hours getting the animation swing just right. We refactor our JavaScript to shave tiny fractions of a second off load times. We control absolutely everything we can, but the harsh reality is that we control less than we think.

Last week, two events reminded us, yet again, of how right Douglas Crockford was when he declared the web “the most hostile software engineering environment imaginable.” Both were serious enough to take down an entire site—actually hundreds of entire sites, as it turned out. And both were avoidable.

By understanding what we control (and what we don’t), we can build resilient, engaging products for our users.

What happened?

The first of these incidents involved the launch of Chrome 66. With that release, Google implemented a security patch with serious implications for folks who weren’t paying attention. You might recall that quite a few questionable SSL certificates issued by Symantec Corporation’s PKI began to surface early last year. Apparently, Symantec had subcontracted the creation of certificates without providing a whole lot of oversight. Long story short, the Chrome team decided the best course of action with respect to these potentially bogus (and security-threatening) SSL certificates was to set an “end of life” for accepting them as secure. They set Chrome 66 as the cutoff.

So, when Chrome 66 rolled out (an automatic, transparent update for pretty much everyone), suddenly any site running HTTPS on one of these certificates would no longer be considered secure. That’s a major problem if the certificate in question is for our primary domain, but it’s also a problem if it’s for a CDN we’re using. You see, my server may be running on a valid SSL certificate, but if I have my assets—images, CSS, JavaScript—hosted on a CDN that is not secure, browsers will block those resources. It’s like CSS Naked Day all over again.

To be completely honest, I wasn’t really paying attention to this until Michael Spellacy looped me in on Twitter. Two hundred of his employer’s sites were instantly reduced to plain old semantic HTML. No CSS. No images. No JavaScript.

The second incident was actually quite similar in that it also involved SSL, and specifically the expiration of an SSL certificate being used by jQuery’s CDN. If a site relied on that CDN to serve an HTTPS-hosted version of jQuery, their users wouldn’t have received it. And if that site was dependent on jQuery to be usable … well, ouch!

For what it’s worth, this isn’t the first time incidents like these have occurred. Only a few short years ago, Sky Broadband’s parental filter dramatically miscategorized the jQuery CDN as a source of malware. With that designation in place, they spent the better part of a day blocking all requests for resources on that domain, affecting nearly all of their customers.

It can be easy to shrug off news like this. Surely we’d make smarter implementation decisions if we were in charge. We’d certainly have included a local copy of jQuery like the good Boilerplate tells us to. The thing is, even with that extra bit of protection in place, we’re falling for one of the most attractive fallacies when it comes to building for the web: that we have control.
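
For what it’s worth, that Boilerplate-style fallback is only a few lines. As a rough sketch (the local path /js/jquery.min.js is an assumption for illustration), an inline script placed right after the CDN script tag might look like this:

if (!window.jQuery) {
  // The CDN copy didn't load (or didn't execute), so fall back to a local copy.
  document.write('<script src="/js/jquery.min.js"><\/script>');
}

It won’t save a page whose CDN-hosted CSS or images are blocked, but it does keep a missing library from taking the whole experience down with it.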

Lost in transit?

There are some things we do control on the web, but they may be fewer than you think. As solo devs or team leads, we have considerable control over the HTML, CSS, and JavaScript code that ultimately constructs our sites. The same goes for the tools we use and the hosting solutions we’ve chosen. Of course, that control lessens on large teams or when others are calling the shots, though in those situations we still have an awareness of the coding conventions, tooling, and hosting environment we’re working with. Once our carefully crafted code leaves our servers, however, all bets are off.

First off, we don’t—at least in the vast majority of cases—control the network our code traverses to reach our users. Ideally our code takes an optimized path so that it reaches its destination quickly, yet any one of the servers along that path can read and manipulate the code. If you’ve heard of “man-in-the-middle” attacks, this is how they happen.

For example, certain providers have no qualms about injecting their own advertising into your pages. Gross, right? HTTPS is one way to stop this from happening (and to prevent servers from being able to snoop on our traffic), but some providers have even found a way around that. Sigh.

Lost in translation?

Assuming no one touches our code in transit, the next thing standing between our users and our code is the browser. These applications are the gateways to (and gatekeepers of) the experiences we build on the web. And, even though the last decade has seen browser vendors coalesce around web standards, there are still differences to consider. Those differences are yet another factor that will make or break the experience our users have.

While every browser vendor supports the idea and ongoing development of standards, they do so at their own pace and very much in relation to their business interests. They prioritize features that help them meet their own goals and can sometimes be reluctant or slow to implement new features. Occasionally, as happened with CSS Grid, everyone gets on board rather quickly, and we can see a new spec go from draft to implementation within a single calendar year. Others, like Service Worker, can take hold quickly in a handful of browsers but take longer to roll out in others. Still others, like Pointer Events, might get implemented widely, only to be undermined by one browser’s indifference.

All of this is to say that the browser landscape is much like the Great Plains of the American Midwest: from afar it looks very even, but walking through it we’re bound to stumble into a prairie dog burrow or two. And to successfully navigate the challenges posed by the browser environment, it pays to get familiar with where those burrows lie so we don’t lose our footing. Object detection … font stacks … media queries … feature detection … these tools (and more) help us ensure our work doesn’t fall over in less-than-ideal situations.
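
As a minimal sketch of that kind of defensive coding (the enhanced class name is a made-up convention, not anything these browsers require), feature detection might look like this:

// Only enhance the page when the features we rely on actually exist.
if ('querySelector' in document && 'addEventListener' in window) {
  document.documentElement.className += ' enhanced';
}

// CSS feature queries work the same way from JavaScript.
if (window.CSS && CSS.supports('display', 'grid')) {
  // Layout enhancements that assume Grid support could be switched on here.
}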

Beyond standards support, it’s important to recognize that some browsers include optimizations that can affect the delivery of your code. Opera Mini and Amazon’s Silk are examples of the class of browser often referred to as proxy browsers. Proxy browsers, as their name implies, position their own proxy servers in between our domains and the end user. They use these servers to do things like optimize images, simplify markup, and jettison unsupported JavaScript in the interest of slimming the download size of our pages. Proxy browsers can be a tremendous help for users paying for downloads by the bit, especially given our penchant for increasing web page sizes year upon year.

If we don’t consider how these browsers can affect our pages, our site may simply collapse and splay its feet in the air like a fainting goat. Consider this JavaScript taken from an example I threw up on Codepen:

document.body.innerHTML += '<p>Can I count to four?</p>';
for (let i = 1; i <= 4; i++) {
  document.body.innerHTML += '<p>' + i + '</p>';
}
document.body.innerHTML += '<p>Success!</p>';

This code is designed to insert several paragraphs into the current document and, when executed, produces this:

Can I count to four?
1
2
3
4
Success!

Simple enough, right? Well, yes and no. You see, this code makes use of the let keyword, which was introduced in ECMAScript 2015 (a.k.a. ES6) to enable block-level variable scoping. It will work a treat in browsers that understand let. However, any browsers that don’t understand let will have no idea what to make of it and won’t execute any of the JavaScript—not even the parts they do understand—because they don’t know how to interpret the program. Users of Opera Mini, Internet Explorer 10, QQ, and Safari 9 would get nothing.

This is a relatively simplistic example, but it underscores the fragility of JavaScript. The UK’s GDS ran a study to determine how many of their users didn’t get JavaScript enhancements and discovered that 0.9% of their users who should have received them—in other words, their browser supported JavaScript and they had not turned it off—didn’t for some reason. Add in the 0.2% of users whose browsers did not support JavaScript or who had turned it off, and the total non-JavaScript constituency was 1.1%, or 1 in every 93 people who visit their site.

It’s worth keeping in mind that browsers must understand the entirety of our JavaScript before they can execute it. This may not be a big deal if we write all of our own JavaScript (though we all occasionally make mistakes), but it becomes a big deal when we include third-party code like JavaScript libraries, advertising code, or social media buttons. Errors in any of those codebases can cause problems for our users.

Browser plugins are another form of third-party code that can negatively affect our sites. And they’re ones we don’t often consider. Back in the early ’00s, I remember spending hours trying to diagnose a site issue reported by one of my clients, only to discover that it occurred solely when a particular plugin was in use. Anger and self-doubt were wreaking havoc on me as I failed time and time again to reproduce the error my client was experiencing. It took me traveling the two hours to her office and sitting down at her desk to discover the difference between her setup and mine: a third-party browser toolbar.

We don’t have the luxury of traveling to our users’ homes and offices to determine if and when a browser plugin is hobbling our creations. Instead, the best defense against the unknowns of the browsing environment is to always design our sites with a universally usable baseline.

Lost in interpretation?

Regardless of everything discussed so far, when our carefully crafted website finally reaches its destination, it has one more potential barrier to success: us. Specifically, our users. More broadly, people. Unless our product is created solely for the consumption of some other life form or machine, we’ve got to consider the ultimate loss of control when we cede it to someone else.

Over the course of my twenty years of building websites for customers, I’ve always had the plaintive voice of Clerks’ Randal Graves in the back of my head: “This job would be great if it wasn’t for the f—ing customers.” I’m not happy about that. It’s an arrogant position (surely), yet an easy one to lapse into.

People are so needy. Wouldn’t it be great if we could just focus on ourselves?

No, that wouldn’t be good at all.

When we design and build for people like us, we exclude everyone who isn’t like us. And that’s most people. I’m going to put on my business hat here—Fedora? Bowler? Top hat?—and say that artificially limiting our customer base is probably not in our company’s best interest. Not only will it limit our potential revenue growth, it could actually reduce our income if we become the target of a legal complaint by an excluded party.

Our efforts to build robust experiences on the web must account for the actual people that use them (or may want to use them). That means ensuring our sites work for people who experience motor impairments, vision impairments, hearing impairments, vestibular disorders, and other things we aggregate under the heading of “accessibility.” It also means ensuring our sites work well for users in a variety of contexts: on large screens, small screens, even in-between screens. Via mouse, keyboard, stylus, finger, and even voice. In dark, windowless offices, glass-walled conference rooms, and out in the midday sun. Over blazingly fast fiber and painfully slow cellular networks. Wherever people are, however they access the web, whatever special considerations need to be made to accommodate them … we should build our products to support them.

That may seem like a tall order, but consider this: removing access barriers for one group has a far-reaching ripple effect that benefits others. The roadside curb cut is an example we often cite. It was originally designed for wheelchair access, but stroller-pushing parents, children on bicycles, and even that UPS delivery person hauling a tower of Amazon boxes down Seventh Avenue all benefit from that rather simple consideration.

Maybe you’re more of a numbers person. If so, consider designing your interface such that it’s easier to use by someone who only has use of one arm. Every year, about 26,000 people in the U.S. permanently lose the use of an upper extremity. That’s a drop in the bucket compared to an overall population of nearly 326 million people. But that’s a permanent impairment. There are two other forms of impairment to consider: temporary and situational. Breaking your arm can mean you lose use of that hand—maybe your dominant one—for a few weeks. About 13 million Americans suffer an arm injury like this every year. Holding a baby is a situational impairment in that you can put it down and regain use of your arm, but the feasibility of that may depend greatly on the baby’s temperament and sleep schedule. About 8 million Americans welcome this kind of impairment—sweet and cute as it may be—into their home each year, and this particular impairment can last for over a year. All of this is to say that designing an interface that’s usable with one hand (or via voice) can help over 21 million more Americans (about 6% of the population) effectively use your service.

Finally, and in many ways coming full circle, there’s the copy we employ. Clear, well-written, and appropriate copy is the bedrock of great experiences on the web. When we draft copy, we should do so with a good sense of how our users talk to one another. That doesn’t mean we should pepper our legalese with slang, but it does mean we should author copy that is easily understood. It should be written at an appropriate reading level, devoid of unnecessary jargon and idioms, and approachable to both native and non-native speakers alike. Nestled in the gentle embrace of our (hopefully) semantic, server-rendered HTML, the copy we write is one of the only experiences of our sites we can pretty much guarantee our users will have.

Old advice, still relevant

Recognizing all of the ways our carefully-crafted experiences can be rendered unusable can be more than a little disheartening. No one likes to spend their time thinking about failure. So don’t. Don’t focus on all of the bad things you can’t control. Focus on what you can control.

Start simply. Code defensively. User-test the heck out of it. Recognize the chaos. Embrace it. And build resilient web experiences that will work no matter what the internet throws at them.

Categories: Around The Web

Working with External User Researchers: Part II

Tue, 04/17/2018 - 9:10am

In the first installment of the Working with External User Researchers series, we explored the reasons why you might hire a user researcher on contract and helpful things to consider in choosing one. This time, we talk about getting the actual work done.

You’ve hired a user researcher for your project. Congrats! On paper, this person (or team of people) has everything you need and more. You might think the hardest part of your project is complete and that you can be more hands off at this point. But the real work hasn’t started yet. Hiring the researcher is just the beginning of your journey.

Let’s recap what we mean by an external user researcher: a person or team brought on for the duration of a contract to conduct research.

This situation is most commonly found in:

  • organizations without researchers on staff;
  • organizations whose research staff is maxed out;
  • and organizations that need special expertise.

In other words, external user researchers exist to help you gain insight from your users when hiring one full-time is not an option. Check out Part I to learn more about how to find external user researchers, the types of projects that will get you the most value for your money, writing a request for proposal, and finally, negotiating payment.

Working together

Remember why you hired an external researcher

No project or work relationship is perfect. Before we delve into more specific guidelines on how to work well together, remember the reasons why you decided to hire an external researcher (and this specific one) for your project. Keeping them in mind as you work together will help you keep your priorities straight.

External researchers are great for bringing in a fresh, objective perspective

You could ask your full-time designer who also has research skills to wear the research hat. This isn’t uncommon. But a designer won’t have the same depth and breadth of expertise as a dedicated researcher. In addition, they will probably end up researching their own design work, which will make it very difficult for them to remain unbiased.

Product managers sometimes like to be proactive and conduct some form of guerrilla user research themselves, but this is an even riskier idea. They usually aren’t trained on how to ask non-leading questions, for example, so they tend to only hear feedback that validates their ideas.

It isn’t a secret—but it’s well worth remembering—that research participants tend to be more comfortable sharing critical feedback with someone who doesn’t work for the product that is being tested.

The real work begins

In our experience the most important work starts once a researcher is hired. Here are some key considerations in setting them and your own project team up for success.

Be smart about the initial brain dump

Do share background materials that provide important context and prevent redundant work from being done. It’s likely that some insight is already known on a topic that will be researched, so it’s important to share this knowledge with your researcher so they can focus on new areas of inquiry. Provide things such as report templates to ensure that the researcher presents their learnings in a way that’s consistent with your organization’s unique culture. While you’re at it, consider showing them where to find documentation or tutorials about your product, or specific industry jargon.

Make sure people know who they are

Conduct a project kick-off meeting with the external researcher and your internal stakeholders. Influence is often partly a function of trust and relationships, and for this reason it’s sometimes easy for internal stakeholders to question or brush aside projects conducted by research consultants, especially if they disagree with research insights and recommendations. (Who is this person I don’t know trying to tell me what is best for my product?)

Conduct a kick-off meeting with the broader team

A great way to prevent this potential pushback is to conduct a project kick-off meeting with the external researcher and important internal stakeholders or consumers of the research. Such a meeting might include activities such as:

  • Team introductions.
  • A discussion about the research questions, including an exercise for prioritizing the questions. Especially with contracted-out projects, it’s common for project teams to be tempted to add more questions—question creep—which is why it’s important to have clear priorities from the start.
  • A summary of what’s out of scope for the research. This is another important task in setting firm boundaries around project priorities from the start so the project is completed on time and within budget.
  • A summary of any incoming hypotheses the project team might have—in other words, what they think the answers to the research questions are. Once the study is complete, this can be an especially impactful exercise, reminding stakeholders how their initial thinking changed in response to the findings.
  • A review of the project phases and timeline, and any threats that could get in the way of the project being completed on time.
  • A review of prior research and what’s already known, if available. This is important for both the external researcher and the most important internal consumers of the research, as it’s often the case that the broader project team might not be aware of prior research and why certain questions already answered aren’t being addressed in the project at hand.

Use a buddy system

Appoint an internal resource who can answer questions that will no doubt arise during the project. This might include questions on how to use an internal lab, questions about whom to invite to a critical meeting, or clarifying questions regarding project priorities. This is also another opportunity to build trust and rapport between your project team and external researcher.

Conducting the research

While an external researcher or agency can help plan and conduct a study for you, don’t expect them to be experts on your product and company culture. It’s like hiring an architect to build your house or a designer to furnish a room: you need to provide guidance early and often, or the end result may not be what you expected. Here are some things to consider to make the engagement more effective.

Be available

A good research contractor will ask lots of questions to make sure they’re understanding important details, such as your priorities and research questions, and to collect feedback on the study plan and research report. While it can sometimes feel more efficient to handle most of these types of questions over email, email can often result in misinterpretations. Sometimes it’s faster to speak to questions that require lots of detail and context rather than type a response. Consider establishing weekly remote or in-person status checks to discuss open questions and action items.

Be present

If moderated sessions are part of the research, plan on observing as many of these as possible. While you should expect the research agency to provide you with a final report, you should not expect them to know which insights are most impactful to your project. They don’t have the background from internal meetings, prior decisions, and discussions about future product directions that an internal stakeholder has. Many of the most insightful findings come from conversations that happen immediately after a session with a research participant. The research moderator and client contact can share their perspectives on what the participant just did and said during their session.

Be proactive

Before the researcher drafts their final report, set up a meeting between them and your internal stakeholders to brainstorm over the main research findings. This will help the researcher identify more insights and opportunities that reflect internal priorities and limitations. It also helps stakeholders build trust in the research findings.

In other words, it’s a waste of everyone’s time if a final report is delivered and basic questions arise from stakeholders that could have been addressed by involving them earlier. This is also a good opportunity to get feedback from stakeholders’ stakeholders, who may have a different (but just as important) influence on the project’s success.

Be reasonable

Don’t treat an external contractor like a PowerPoint jockey. Changing fonts and colors to your liking is fine, but only to a point. Your researcher should provide you with a polished report free from errors and in a professional format, but minute changes are not a constructive use of time and money. Focus more on decisions and recommendations than the aesthetics of the deliverables. You can prevent this kind of situation by providing any templates you want used in your initial brain dump, so the findings don’t have to be replicated in the “right” format for presenting.

When it’s all said and done

Just because the project has been completed and all the agreed deliverables have been received doesn’t mean you should close the door on any additional learning opportunities for both the client and researcher. At the end of the project, identify what worked, and find ways to increase buy-in for their recommendations.

Tell them what happened

Try to identify a check-in point in the future (such as two weeks or two months out) to let the researcher know what happened because of the research: what decisions were made, what problems were fixed, or other design changes. While you shouldn’t expect your researcher to be perpetually available, if you encounter problems with buy-in, they might be able to provide a quick recommendation.

Maintain a relationship

While it’s typical for vendors to treat their clients to dinner or drinks, don’t be afraid to invite your external researcher to your own happy hour or event with your staff. The success of your next project may rely on getting the right researcher, and you’ll want them to be excited to make themselves available to help you when you need them again.

Categories: Around The Web

Going Offline

Tue, 04/10/2018 - 9:10am

A note from the editors: We’re excited to share Chapter 1 of Going Offline by Jeremy Keith, available this month from A Book Apart.

Businesses are built on the web. Without the web, Twitter couldn’t exist. Facebook couldn’t exist. And not just businesses—Wikipedia couldn’t exist. Your favorite blog couldn’t exist without the web. The web doesn’t favor any one kind of use. It’s been deliberately designed to accommodate many and varied activities.

Just as many wonderful things are built upon the web, the web itself is built upon the internet. Though we often use the terms web and internet interchangeably, the World Wide Web is just one application that uses the internet as its plumbing. Email, for instance, is another.

Like the web, the internet was designed to allow all kinds of services to be built on top of it. The internet is a network of networks, all of them agreeing to use the same protocols to shuttle packets of data around. Those packets are transmitted down fiber-optic cables across the ocean floor, bounced around with Wi-Fi or radio signals, or beamed from satellites in freakin’ space.

As long as these networks are working, the web is working. But sometimes networks go bad. Mobile networks have a tendency to get flaky once you’re on a train or in other situations where you’re, y’know, mobile. Wi-Fi networks work fine until you try to use one in a hotel room (their natural enemy).

When the network fails, the web fails. That’s just the way it is, and there’s nothing we can do about it. Until now.

Weaving the Web

For as long as I can remember, the World Wide Web has had an inferiority complex. Back in the ’90s, it was outshone by CD-ROMs (ask your parents). They had video, audio, and a richness that the web couldn’t match. But they lacked links—you couldn’t link from something in one CD-ROM to something in another CD-ROM. They faded away. The web grew.

Later, the web technologies of HTML, CSS, and JavaScript were found wanting when compared to the whiz-bang beauty of Flash. Again, Flash movies were much richer than regular web pages. But they were also black boxes. The Flash format seemed superior to the open standards of the web, and yet the very openness of those standards made the web an unstoppable force. Flash—under the control of just one company—faded away. The web grew.

These days it’s native apps that make the web look like an underachiever. Like Flash, they’re under the control of individual companies instead of being a shared resource like the web. Like Flash, they demonstrate all sorts of capabilities that the web lacks, such as access to device APIs and, crucially, the ability to work even when there’s no network connection.

The history of the web starts to sound like an endless retelling of the fable of the tortoise and the hare. CD-ROMs, Flash, and native apps outshine the web in the short term, but the web always seems to win the day somehow.

Each of those technologies proved very useful for the expansion of web standards. In a way, Flash was like the R&D department for HTML, CSS, and JavaScript. Smooth animations, embedded video, and other great features first saw the light of day in Flash. Having shown their usefulness, they later appeared in web standards. The same thing is happening with native apps. Access to device features like the camera and the accelerometer is beginning to show up in web browsers. Most exciting of all, we’re finally getting the ability for a website to continue working even when the network isn’t available.

Service Workers

The technology that makes this bewitching offline sorcery possible is a browser feature called service workers. You might have heard of them. You might have heard that they’re something to do with JavaScript, and technically they are…but conceptually they’re very different from other kinds of scripts.

Usually when you’re writing some JavaScript that’s going to run in a web browser, it’s all related to the document currently being displayed in the browser window. You might want to listen out for events triggered by the user interacting with the document (clicks, swipes, hovers, etc.). You might want to update the contents of the document: add some markup here, remove some text there, manipulate some values somewhere else. The sky’s the limit. And it’s all made possible thanks to the Document Object Model (DOM), a representation of what the browser is rendering. Through the combination of the DOM and JavaScript—DOM scripting, if you will—you can conjure up all sorts of wonderful magic.

Well, a service worker can’t do any of that. It’s still a script, and it’s still written in the same language—JavaScript—but it has no access to the DOM. Without any DOM scripting capabilities, this kind of script might seem useless at first glance. But there’s an advantage to having a script that never needs to interact with the current document. Adding, editing, and deleting parts of the DOM can be hard work for the browser. If you’re not careful, things can get very sluggish very quickly. But if there’s a whole class of script that isn’t allowed access to the DOM, then the browser can happily run that script in parallel to its regular rendering activities, safe in the knowledge that it’s an entirely separate process.

The first kind of script to come with this constraint was called a web worker. In a web worker, you could write some JavaScript to do number-crunching calculations without slowing down whatever else was being displayed in the browser window. Spin up a web worker to generate larger and larger prime numbers, for instance, and it will merrily do so in the background.
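
A minimal sketch of that prime-number scenario might look like this (the file name primes.js is made up for illustration):

// In the page: spin up the worker and listen for results.
var worker = new Worker('primes.js');
worker.onmessage = function (event) {
  console.log('Latest prime: ' + event.data);
};

// In primes.js: a deliberately naive prime generator, running off the main thread.
var candidate = 1;
while (true) {
  candidate += 1;
  var isPrime = true;
  for (var i = 2; i * i <= candidate; i++) {
    if (candidate % i === 0) {
      isPrime = false;
      break;
    }
  }
  if (isPrime) {
    postMessage(candidate);
  }
}

The loop never ends, but because it runs in a worker, the page itself stays responsive.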

A service worker is like a web worker with extra powers. It still can’t access the DOM, but it does have access to the fundamental inner workings of the browser.

Browsers and servers

Let’s take a step back and think about how the World Wide Web works. It’s a beautiful ballet of client and server. The client is usually a web browser—or, to use the parlance of web standards, a user agent: a piece of software that acts on behalf of the user.

The user wants to accomplish a task or find some information. The URL is the key technology that will empower the user in their quest. They will either type a URL into their web browser or follow a link to get there. This is the point at which the web browser—or client—makes a request to a web server. Before the request can reach the server, it must traverse the internet of undersea cables, radio towers, and even the occasional satellite (Fig 1.1).

Fig 1.1: Browsers send URL requests to servers, and servers respond by sending files.

Imagine if you could leave instructions for the web browser that would be executed before the request is even sent. That’s exactly what service workers allow you to do (Fig 1.2).

Fig 1.2: Service workers tell the web browser to do something before sending the request for a URL.

Usually when we write JavaScript, the code is executed after it’s been downloaded from a server. With service workers, we can write a script that’s executed by the browser before anything else happens. We can tell the browser, “If the user asks you to retrieve a URL for this particular website, run this corresponding bit of JavaScript first.” That explains why service workers don’t have access to the Document Object Model; when the service worker is run, there’s no document yet.
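
In practice, a page opts in by registering the service worker script—something like this minimal sketch, where the file name serviceworker.js is an assumption:

// Register the service worker, but only in browsers that support it.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/serviceworker.js')
    .then(function (registration) {
      console.log('Service worker registered with scope: ' + registration.scope);
    })
    .catch(function (error) {
      console.error('Service worker registration failed: ' + error);
    });
}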

Getting your head around service workers

A service worker is like a cookie. Cookies are downloaded from a web server and installed in a browser. You can go to your browser’s preferences and see all the cookies that have been installed by sites you’ve visited. Cookies are very small and very simple little text files. A website can set a cookie, read a cookie, and update a cookie. A service worker script is much more powerful. It contains a set of instructions that the browser will consult before making any requests to the site that originally installed the service worker.
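
For reference, the cookie half of that comparison is a one-liner in the browser (the name, value, and attributes here are made up):

// Set (or update) a cookie, then read the whole cookie string back.
document.cookie = 'theme=dark; max-age=31536000; path=/';
console.log(document.cookie); // e.g. "theme=dark"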

A service worker is like a virus. When you visit a website, a service worker is surreptitiously installed in the background. Afterwards, whenever you make a request to that website, your request will be intercepted by the service worker first. Your computer or phone becomes the home for service workers lurking in wait, ready to perform man-in-the-middle attacks. Don’t panic. A service worker can only handle requests for the site that originally installed that service worker. When you write a service worker, you can only use it to perform man-in-the-middle attacks on your own website.

A service worker is like a toolbox. By itself, a service worker can’t do much. But it allows you to access some very powerful browser features, like the Fetch API, the Cache API, and even notifications. API stands for Application Programming Interface, which sounds very fancy but really just means a tool that you can program however you want. You can write a set of instructions in your service worker to take advantage of these tools. Most of your instructions will be written as “when this happens, reach for this tool.” If, for instance, the network connection fails, you can instruct the service worker to retrieve a backup file using the Cache API.
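
Inside the service worker script itself, that “when this happens, reach for this tool” instruction can be surprisingly compact. A sketch of the network-failure case might look like this (the cache entry /offline.html is an assumption, and it would need to have been put into the cache beforehand):

// When the network request fails, respond with a previously cached page instead.
addEventListener('fetch', function (event) {
  event.respondWith(
    fetch(event.request).catch(function () {
      return caches.match('/offline.html');
    })
  );
});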

A service worker is like a duck-billed platypus. The platypus not only lactates, but also lays eggs. It’s the only mammal capable of making its own custard. A service worker can also…Actually, hang on, a service worker is nothing like a duck-billed platypus! Sorry about that. But a service worker is somewhat like a cookie, and somewhat like a virus, and somewhat like a toolbox.

Safety First

Service workers are powerful. Once a service worker has been installed on your machine, it lies in wait, like a patient spider waiting to feel the vibrations of a particular thread.

Imagine if a malicious ne’er-do-well wanted to wreak havoc by impersonating a website in order to install a service worker. They could write instructions in the service worker to prevent the website ever appearing in that browser again. Or they could write instructions to swap out the content displayed under that site’s domain. That’s why it’s so important to make sure that a service worker really belongs to the site it claims to come from. As the specification for service workers puts it, they “create the opportunity for a bad actor to turn a bad day into a bad eternity.”1

To prevent this calamity, service workers require you to adhere to two policies:

  • Same origin.
  • HTTPS only.

The same-origin policy means that a website at example.com can only install a service worker script that lives at example.com. That means you can’t put your service worker script on a different domain. You can use a separate domain for hosting your images and other assets, but not your service worker script. That domain wouldn’t match the domain of the site installing the service worker.

The HTTPS-only policy means that https://example.com can install a service worker, but http://example.com can’t. A site running under HTTPS (the S stands for Secure) instead of HTTP is much harder to spoof. Without HTTPS, the communication between a browser and a server could be intercepted and altered. If you’re sitting in a coffee shop with an open Wi-Fi network, there’s no guarantee that anything you’re reading in the browser from http://newswebsite.com hasn’t been tampered with. But if you’re reading something from https://newswebsite.com, you can be pretty sure you’re getting what you asked for.

Securing your site

Enabling HTTPS on your site opens up a whole series of secure-only browser features—like the JavaScript APIs for geolocation, payments, notifications, and service workers. Even if you never plan to add a service worker to your site, it’s still a good idea to switch to HTTPS. A secure connection makes it trickier for snoopers to see who’s visiting which websites. Your website might not contain particularly sensitive information, but when someone visits your site, that’s between you and your visitor. Enabling HTTPS won’t stop unethical surveillance by the NSA, but it makes the surveillance slightly more difficult.

There’s one exception. You can use a service worker on a site being served from localhost, a web server on your own computer, not part of the web. That means you can play around with service workers without having to deploy your code to a live site every time you want to test something.
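
One quick way to check whether the page you’re on qualifies is window.isSecureContext, which is true for HTTPS pages and—in current browsers—for localhost as well. A small sketch:

// True on https:// pages and on localhost; false on plain http:// sites.
if (window.isSecureContext && 'serviceWorker' in navigator) {
  console.log('Service workers are available here.');
} else {
  console.log('No secure context (or no support), so registration would be refused.');
}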

If you’re using a Mac, you can spin up a local server from the command line. Let’s say your website is in a folder called mysite. Drag that folder to the Terminal app, or open up the Terminal app and navigate to that folder using the cd command to change directory. Then type:

python -m SimpleHTTPServer 8000

This starts a web server from the mysite folder, served over port 8000. Now you can visit localhost:8000 in a web browser on the same computer, which means you can add a service worker to the website you’ve got inside the mysite folder: http://localhost:8000.
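
(If your Mac only has Python 3 installed, the equivalent command is python3 -m http.server 8000.)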

But if you then put the site live at, say, http://mysite.com, the service worker won’t run. You’ll need to serve the site from https://mysite.com instead. To do that, you need a secure certificate for your server.

There was a time when certificates cost money and were difficult to install. Now, thanks to a service called Certbot, certificates are free. But I’m not going to lie: it still feels a bit intimidating to install the certificate. There’s something about logging on to a server and typing commands that makes me simultaneously feel like a l33t hacker, and also like I’m going to break everything. Fortunately, the process of using Certbot is relatively jargon-free (Fig 1.3).

Fig 1.3: The website of EFF’s Certbot.

On the Certbot website, you choose which kind of web server and operating system your site is running on. From there you’ll be guided step-by-step through the commands you need to type in the command line of your web server’s computer, which means you’ll need to have SSH access to that machine. If you’re on shared hosting, that might not be possible. In that case, check to see if your hosting provider offers secure certificates. If not, please pester them to do so, or switch to a hosting provider that can serve your site over HTTPS.

Another option is to stay with your current hosting provider, but use a service like Cloudflare to act as a “front” for your website. These services can serve your website’s files from data centers around the world, making sure that the physical distance between your site’s visitors and your site’s files is nice and short. And while they’re at it, these services can make sure all of those files are served over HTTPS.

Once you’re set up with HTTPS, you’re ready to write a service worker script. It’s time to open up your favorite text editor. You’re about to turbocharge your website!

Footnotes
Categories: Around The Web

Planning for Everything

Tue, 04/03/2018 - 9:22am

A note from the editors: We’re pleased to share an excerpt from Chapter 7 (“Reflecting”) of Planning for Everything: The Design of Paths and Goals by Peter Morville, available now from Semantic Studios.

Once upon a time, there was a happy family. Every night at dinner, mom, dad, and two girls who still believed in Santa played a game. The rules are simple. Tell three stories about your day, two true, one false, and see who can detect the fib. Today I saw a lady walk a rabbit on a leash. Today I found a tooth in the kitchen. Today I forgot my underwear. The family ate, laughed, and learned together, and lied happily ever after.

There’s truth in the tale. It’s mostly not false. We did play this game, for years, and it was fun. We loved to stun and bewilder each other, yet the big surprise was insight. In reflecting on my day, I was often amazed by oddities already lost. If not for the intentional search for anomaly, I’d have erased these standard deviations from memory. The misfits we find, we rarely recall.

We observe a tiny bit of reality. We understand and remember even less. Unlike most machines, our memory is selective and purposeful. Goals and beliefs define what we notice and store. To mental maps we add places we predict we’ll need to visit later. It’s not about the past. The intent of memory is to plan.

In reflecting we look back to go forward. We search the past for truths and insights to shift the future. I’m not speaking of nostalgia, though we are all borne back ceaselessly and want what we think we had. My aim is redirection. In reflecting on inconvenient truths, I hope to change not only paths but goals.

Figure 7-1. Reflection changes direction.

We all have times for reflection. Alone in the shower or on a walk, we retrace the steps of a day. Together at lunch for work or over family dinner, we share memories and missteps. Some of us reflect more rigorously than others. Given time, it shows.

People who as a matter of habit extract underlying principles or rules from new experiences are more successful learners than those who take their experiences at face value, failing to infer lessons that can be applied later in similar situations.1

In Agile, the sprint retrospective offers a collaborative context for reflection. Every two to four weeks, at the end of a sprint, the team meets for an hour or so to look back. Focal questions include 1) what went well? 2) what went wrong? 3) how might we improve? In reflecting on the plan, execution, and results, the team explores surprises, conflicts, roadblocks, and lessons.

In addition to conventional analysis, a retrospective creates an opportunity for double loop learning. To edit planned actions based on feedback is normal, but revising assumptions, goals, values, methods, or metrics may effect change more profound. A team able to expand the frame may hack their habits, beliefs, and environment to be better prepared to succeed and learn.

Figure 7-2. Double loop learning.

Retrospectives allow for constructive feedback to drive team learning and bonding, but that’s what makes them hard. We may lack courage to be honest, and often people can’t handle the truth. Our filters are as powerful as they are idiosyncratic, which means we’re all blind men touching a tortoise, or is it a tree or an elephant? It hurts to reconcile different perceptions of reality, so all too often we simply shut up and shut down.

Search for Truth

To seek truth together requires a culture of humility and respect. We are all deeply flawed and valuable. We must all speak and listen. Ideas we don’t implement may lead to those we do. Errors we find aren’t about fault, since our intent is a future fix. And counterfactuals merit no more confidence than predictions, as we never know what would have happened if.

Reflection is more fruitful if we know our own minds, but that is harder than we think. An imperfect ability to predict actions of sentient beings is a product of evolution. It’s quick and dirty yet better than nothing in the context of survival in a jungle or a tribe. Intriguingly, cognitive psychology and neuroscience have shown we use the same theory of mind to study ourselves.

Self-awareness is just this same mind reading ability, turned around and employed on our own mind, with all the fallibility, speculation, and lack of direct evidence that bedevils mind reading as a tool for guessing at the thought and behavior of others.2

Empirical science tells us introspection and consciousness are unreliable bases for self-knowledge. We know this is true but ignore it all the time. I’ll do an hour of homework a day, not leave it to the end of vacation. If we adopt a dog, I’ll walk it. If I buy a house, I’ll be happy. I’ll only have one drink. We are more than we think, as Walt Whitman wrote in Song of Myself.

Do I contradict myself?
Very well then I contradict myself
(I am large, I contain multitudes.)

Our best laid plans go awry because complexity exists within as well as without. Our chaotic, intertwingled bodyminds are ecosystems inside ecosystems. No wonder it’s hard to predict. Still, it’s wise to seek self truth, or at least that’s what I think.

Upon reflection, my mirror neurons tell me I’m a shy introvert who loves reading, hiking, and planning. I avoid conflict when possible but do not lack courage. Once I set a goal, I may focus and filter relentlessly. I embrace habit and eschew novelty. If I fail, I tend to pivot rather than persist. Who I am is changing. I believe it’s speeding up. None of these traits is bad or good, as all things are double-edged. But mindful self awareness holds value. The more I notice the truth, the better my plans become.

Years ago, I planned a family vacation on St. Thomas. I kept it simple: a place near a beach where we could snorkel. It was a wonderful, relaxing escape. But over time a different message made it past my filters. Our girls had been bored. I dismissed it at first. I’d planned a shared experience I recalled fondly. It hurt to hear otherwise. But at last I did listen and learn. They longed not for escape but adventure. Thus our trip to Belize. I found planning and executing stressful due to risk, but I have no regrets. We shared a joyful adventure we’ll never forget.

Way back when we were juggling toddlers, we accidentally threw out the mail. Bills went unpaid, notices came, we swore we’d do better, then lost mail again. One day I got home from work to find an indoor mailbox system made with paint cans. My wife Susan built it in a day. We’ve used it to sort and save mail for 15 years. It’s an epic life hack I’d never have done. My ability to focus means I filter things out. I ignore problems and miss fixes. I’m not sure I’ll change. Perhaps it merits a prayer.

God grant me the serenity
to accept the things I cannot change,
courage to change the things I can,
and wisdom to know the difference.

We also seek wisdom in others. This explains our fascination with the statistics of regret. End of life wishes often include:

I wish I’d taken more risks, touched more lives, stood up to bullies, been a better spouse or parent or child. I should have followed my dreams, worked and worried less, listened more. If only I’d taken better care of myself, chosen meaningful work, had the courage to express my feelings, stayed in touch. I wish I’d let myself be happy.

While they do yield wisdom, last wishes are hard to hear. We are skeptics for good reason. Memory prepares for the future, and that too is the aim of regret. It’s unwise to trust the clarity of rose-colored glasses. The memory of pain and anxiety fades in time, but our desire for integrity grows. When time is short, regret is a way to rectify. I’ve learned my lesson. I’m passing it on to you. I’m a better person now. Don’t make my mistakes. It’s easy to say “I wish I’d stood up to bullies,” but hard to do at the time. There’s wisdom in last wishes but bias and self justification too. Confabulation means we edit memories with no intention to deceive. The truth is elusive. Reflection is hard.

Footnotes
  • 1. Make It Stick by Peter Brown et al. (2014), p. 133.
  • 2. Why You Don’t Know Your Own Mind by Alex Rosenberg (2016).
Categories: Around The Web