Feed aggregator

What You Need To Know To Increase Mobile Checkout Conversions

Smashing Magazine - 16 hours 2 min ago
By Suzanna Scacca, 2018-04-20

Google’s mobile-first indexing is here. Well, for some websites anyway. For the rest of us, it will be here soon enough, and our websites need to be in tip-top shape if we don’t want search rankings to be adversely affected by the change.

That said, responsive web design is nothing new. We’ve been creating custom mobile user experiences for years now, so most of our websites should be well poised to take this on… right?

Here’s the problem: Research shows that the dominant device through which users access the web, on average, is the smartphone. Granted, this might not be the case for every website, but the data indicates that this is the direction we’re headed in, and so every web designer should be prepared for it.

However, mobile checkout conversions are, to put it bluntly, not good. There are a number of reasons for this, but that doesn’t mean that m-commerce designers should take this lying down.

As more mobile users rely on their smart devices to access the web, websites need to be more adeptly designed to give them the simplified, convenient and secure checkout experience they want. In the following roundup, I’m going to explore some of the impediments to conversion in the mobile checkout and focus on what web designers can do to improve the experience.

Why Are Mobile Checkout Conversions Lagging?

According to the data, prioritizing the mobile experience in our web design strategies is a smart move for everyone involved. With people spending roughly 51% of their time with digital media through mobile devices (as opposed to only 42% on desktop), search engines and websites really do need to align with user trends.

Now, while that statistic paints a positive picture in support of designing websites with a mobile-first approach, other statistics are floating around that might make you wary of it. Here’s why I say that: Monetate’s e-commerce quarterly report issued for Q1 2017 had some really interesting data to show.

In this first table, they break down the percentage of visitors to e-commerce websites using different devices between Q1 2016 and Q1 2017. As you can see, smartphone Internet access has indeed surpassed desktop:

Website Visits by Device   Q1 2016   Q2 2016   Q3 2016   Q4 2016   Q1 2017
Traditional                49.30%    47.50%    44.28%    42.83%    42.83%
Smartphone                 36.46%    39.00%    43.07%    44.89%    44.89%
Other                      0.62%     0.39%     0.46%     0.36%     0.36%
Tablet                     13.62%    13.11%    12.19%    11.91%    11.91%

Monetate’s findings on which devices are used to access the Internet. (Source)

In this next data set, we can see that the average conversion rate for e-commerce websites isn’t great. In fact, the number has gone down significantly since the first quarter of 2016.

Conversion Rates   Q1 2016   Q2 2016   Q3 2016   Q4 2016   Q1 2017
Global             3.10%     2.81%     2.52%     2.94%     2.48%

Monetate’s findings on overall e-commerce global conversion rates (for all devices). (Source)

Even more shocking is the split between device conversion rates:

Conversion Rates by Device   Q1 2016   Q2 2016   Q3 2016   Q4 2016   Q1 2017
Traditional                  4.23%     3.88%     3.66%     4.25%     3.63%
Tablet                       1.42%     1.31%     1.17%     1.49%     1.25%
Other                        0.69%     0.35%     0.50%     0.35%     0.27%
Smartphone                   3.59%     3.44%     3.21%     3.79%     3.14%

Monetate’s findings on the average conversion rates, broken down by device. (Source)

Smartphones consistently receive fewer conversions than desktop, despite being the predominant device through which users access the web.

What’s the problem here? Why are we able to get people to mobile websites, but we lose them at checkout?

In its 2017 report “Mobile’s Hierarchy of Needs,” comScore breaks down the top five reasons why mobile checkout conversion rates are so low:

The most common reasons why m-commerce shoppers don’t convert. (Image: comScore) (View large version)

Here is the breakdown for why mobile users don’t convert:

  • 20.2% — security concerns
  • 19.6% — unclear product details
  • 19.6% — inability to open multiple browser tabs to compare
  • 19.3% — difficulty navigating
  • 18.6% — difficulty inputting information.

Those are plausible reasons to move from the smartphone to the desktop to complete a purchase (if they haven’t been completely turned off by the experience by that point, that is).

In sum, we know that consumers want to access the web through their mobile devices. We also know that barriers to conversion are keeping them from staying put. So, how do we deal with this?

10 Ways to Increase Mobile Checkout Conversions In 2018

For most of the websites you’ve designed, you’re not likely to see much of a change in search ranking when Google’s mobile-first indexing becomes official.

Your mobile-friendly designs might be “good enough” to keep your websites at the top of search (to start, anyway), but what happens if visitors don’t stick around to convert? Will Google start penalizing you because your website can’t seal the deal with the majority of visitors? In all honesty, that scenario will only occur in extreme cases, where the mobile checkout is so poorly constructed that bounce rates skyrocket and people stop wanting to visit the website at all.

Let’s say that the drop-off in traffic at checkout doesn’t incur penalties from Google. That’s great… for SEO purposes. But what about for business? Your goal is to get visitors to convert without distraction and without friction. Yet, that seems to be what mobile visitors get.

Going forward, your goal needs to be two-fold:

  • to design websites with Google’s mobile-first mission and guidelines in mind,
  • to keep mobile users on the website until they complete a purchase.

Essentially, this means decreasing the amount of work users have to do and improving the visibility of your security measures. Here is what you can do to more effectively design mobile checkouts for conversions.

1. Keep the Essentials in the Thumb Zone

Research on how users hold their mobile phones is old hat by now. We know that, whether they use the single- or double-handed approach, certain parts of the mobile screen are just inconvenient for mobile users to reach. And when expediency is expected during checkout, this is something you don’t want to mess around with.

For single-handed users, the middle of the screen is the prime playing field:

The good, OK and bad areas for single-handed mobile users. (Image: UX Matters) (View large version)

Although users who cradle their phones for greater stability have a couple options for which fingers to use to interact with the screen, only 28% use their index finger. So, let’s focus on the capabilities of thumb users, which, again, means giving the central part of the screen the most prominence:

The good, OK and bad areas for mobile users that cradle their phones. (Image: UX Matters) (View large version)

Some users hold their phones with two hands. Because the horizontal orientation is mostly used for watching video, it isn’t relevant to mobile checkout. So, pay attention to how much of the screen is feasibly within reach of the user’s thumbs when the phone is held vertically with both hands:

The good, OK and bad areas for two-handed mobile users. (Image: UX Matters) (View large version)

In sum, we can use Smashing Magazine’s breakdown of where to focus content, regardless of left-hand, right-hand or two-handed holding of a smartphone:

A summary of where the good, OK and bad zones are on mobile devices. (Image: Smashing Magazine) (View large version)

JCPenney’s website is a good example of how to do this:

JCPenney’s contact form starts midway down the page. (Image: JCPenney) (View large version)

While information is included at the top of the checkout page, the input fields don’t start until just below the middle of it — directly in the ideal thumb zone for users of any type. This ensures that visitors holding their phones in any manner and using different fingers to engage with it will have no issue reaching the form fields.

2. Minimize Content to Maximize Speed

We’ve been taught over and over again that minimal design is best for websites. This is especially true in mobile checkout, where an already slow or frustrating experience could easily push a customer over the edge, when all they want to do is be done with the purchase.

To maximize speed during the mobile checkout process, keep the following tips in mind:

  • Only add the essentials to checkout. This is not the time to try to upsell or cross-sell, promote social media or otherwise distract from the action at hand.
  • Keep the checkout free of all images. The only eye-catching visuals that are really acceptable are trustmarks and calls to action (more on these below).
  • Any text included on the page should be instructional or descriptive in nature.
  • Avoid any special stylization of fonts. The less “wow” your checkout page has, the easier it will be for users to get through the process.

Look to Staples’ website as an example of what a highly simple single-page checkout should look like:

Staples has a single-page checkout with a minimal number of fields to fill out. (Image: Staples) (View large version)

As you can see, Staples doesn’t bog down the checkout process with product images, branding, navigation, internal links or anything else that might (1) distract from the task at hand, or (2) suck resources from the server while it attempts to process your customers’ requests.

Not only will this checkout page be easy to get through, but it will load quickly and without issue every time — something customers will remember the next time they need to make a purchase. By keeping your checkout pages light in design, you ensure a speedy experience in all aspects.

3. Put Them at Ease With Trustmarks

A trustmark is any indicator on a website that lets customers know, “Hey, there’s absolutely nothing to worry about here. We’re keeping your information safe!”

The one trustmark that every m-commerce website should have? An SSL certificate. Without one, the address bar will not display the lock sign or the green https domain name — both of which let customers know that the website has extra encryption.

You can use other trustmarks at checkout as well.

Big Chill includes a RapidSSL trust seal to let customers know its data is encrypted. (Image: Big Chill) (View large version)

While you can use logos from Norton Security, PCI compliance and other security software to let customers know your website is protected, users might also be swayed by recognizable and well-trusted names. When you think about it, this isn’t much different than displaying corporate logos beside customer testimonials or in callouts that boast of your big-name connections. If you can leverage a partnership like the ones mentioned below, you can use the inherent trust there to your benefit.

Take 6pm, which uses a “Login with Amazon” option at checkout:

6pm leverages the Amazon name as a trustmark. (Image: 6pm) (View large version)

This is a smart move for a brand that most definitely does not have the brand-name recognition that a company like Amazon has. By giving customers a convenient option to log in with a brand that’s synonymous with speed, reliability and trust, the company might now become known for those same checkout qualities that Amazon is celebrated for.

Then, there are mobile checkout pages like the one on Sephora:

Sephora uses a trusted payment gateway provider as a trustmark. (Image: Sephora) (View large version)

Sephora also uses this technique of leveraging another brand’s good name in order to build trust at checkout time. In this case, however, it presents customers with two clear options: Check out with us right now, or hop over to PayPal, which will take care of you securely. With security being a major concern that keeps mobile customers from converting, this kind of trustmark and payment method is a good move on Sephora’s part.

4. Provide Easier Editing

In general, never take a visitor (on any device) away from whatever they’re doing on your website. There are already enough distractions online; the last thing they need is for you to point them in a direction that keeps them from converting.

At checkout, however, your customers might feel compelled to do this very thing if they decide they want a different color, size or quantity of an item in their shopping cart. Rather than let them backtrack through the website, give them an in-checkout editing option to keep them in place.

Victoria’s Secret does this well:

Victoria’s Secret doesn’t force users away from checkout to edit items. (Image: Victoria’s Secret) (View large version)

When they first get to the checkout screen, customers will see a list of items they’re about to purchase. When the large “Edit” button beside each item is clicked, a lightbox (shown above) opens with the product’s variations. It’s basically the original product page, just superimposed on top of the checkout. Users can adjust their options and save their changes without ever having to leave the checkout page.

If you find, in reviewing your website’s analytics, that users occasionally backtrack after hitting the checkout (you can see this in the sales funnel), add this built-in editing feature. By preventing this unnecessary movement backwards, you could save yourself lost conversions from confused or distracted customers.

5. Enable Express Checkout Options

When consumers check out on an e-commerce website through a desktop device, it probably isn’t a big deal if they have to input their user name, email address or payment information each time. Sure, if it can be avoided, they’ll find ways around it (like allowing the website to save their information or using a password manager such as LastPass).

But on mobile, re-entering that information is a pain, especially if contact forms aren’t optimized well (more on that below). So, to ease the log-in and checkout process for mobile users, consider ways in which you can simplify the process:

  • Allow for guest checkout.
  • Allow for one-click expedited checkout.
  • Enable one-click sign-in from a trusted source, like Facebook.
  • Enable payment on a trusted payment provider’s website, like PayPal, Google Wallet or Stripe.

One of the nice things about Sephora's already convenient checkout process is that customers can automate the sign-in process going forward with a simple toggle:

Sephora enables return customers to stay signed in, to avoid this during checkout again. (Image: Sephora) (View large version)

When mobile customers are feeling the rush and want to get to the next stage of checkout, Sephora’s auto-sign-in feature would definitely come in handy and encourage customers to buy more frequently from the mobile website.

Many mobile websites wait until the bottom of the login page to tell customers what kinds of options they have for checking out. But rather than surprise them late, Victoria’s Secret displays this information in big bold buttons right at the very top:

Victoria’s Secret simplifies and speeds up checkout by giving three attractive options. (Image: Victoria’s Secret) (View large version)

Customers have a choice of signing in with their account, checking out as a guest or going directly to PayPal. They are not surprised to discover later on that their preferred checkout or payment method isn’t offered.

I also really love how Victoria’s Secret has chosen to do this. There’s something nice about the brightly colored “Sign In” button sitting beside the more muted “Check Out as a Guest” button. For one, it adds a hint of Victoria’s Secret brand colors to the checkout, which is always a nice touch. But the way it’s colored the buttons also makes clear what it wants the primary action to be (i.e. to create an account and sign in).

6. Add Breadcrumbs

When you send mobile customers to checkout, the last thing you want is to give them unnecessary distractions. That’s why the website’s standard navigation bar (or hamburger menu) is typically removed from this page.

Nonetheless, the checkout process can be intimidating if customers don’t know what’s ahead. How many forms will they need to fill out? What sort of information is needed? Will they have a chance to review their order before submitting payment details?

If you’ve designed a multi-page checkout, allay your customers’ fears by defining each step with clearly labeled breadcrumb navigation at the top of the page. In addition, this will give your checkout a cleaner design, reducing the number of clicks and scrolling per page.

Hayneedle has a beautiful example of breadcrumb navigation in action:

Hayneedle’s breadcrumbs are cleanly designed and easy to find. (Image: Hayneedle) (View large version)

You can see that three steps are broken out and clearly labeled. There’s absolutely no question here about what users will encounter in those steps either, which will help put their minds at ease. Three steps seems reasonable enough, and users will have a chance to review the order once more before completing the purchase.

Sephora has an alternative style of “breadcrumbs” in its checkout:

Sephora’s numbered breadcrumbs appear as you complete each section. (Image: Sephora) (View large version)

Instead of placing each “breadcrumb” at the top of the checkout page, Sephora’s customers can see what the next step is, as well as how many more are to come as they work their way through the form.

This is a good option to take if you’d rather not make the top navigation or the breadcrumbs sticky. Instead, you can prioritize the call to action (CTA), which you might find better motivates the customer to move down the page and complete their purchase.

I think both of these breadcrumbs designs are valid, though. So, it might be worth A/B testing them if you’re unsure of which would lead to more conversions for your visitors.

7. Format the Checkout Form Wisely

Good mobile checkout form design follows a pretty strict formula, which isn’t surprising. While there are ways to bend the rules on desktop in terms of structuring the form, the number of steps per page, the inclusion of images and so on, you really don’t have that kind of flexibility on mobile.

Instead, you will need to be meticulous when building the form:

  • Design each field of the checkout form so that it stretches the full width of the website.
  • Limit the fields to only what’s essential.
  • Clearly label each field outside of and above it.
  • Use a font size of at least 16 pixels.
  • Format each field so that it’s large enough to tap into without zooming.
  • Use a recognizable mark to indicate when something is required (like an asterisk).
  • Let users know about an input error immediately after the information has been entered in a field.
  • Place the call to action at the very bottom of the form.

Because the checkout form is the most important element that moves customers through the checkout process, you can’t afford to mess around with a tried and true formula. If users can’t seamlessly get from top to bottom, if the fields are too difficult to engage with, or if the functionality of the form itself is riddled with errors, then you might as well kiss your mobile purchases (and maybe your purchases in general) goodbye.

Crutchfield shows how to create form fields that are very user-friendly on mobile:

Form fields on the Crutchfield checkout page are large and difficult to miss. (Image: Crutchfield) (View large version)

As you can see, each field is large enough to click on (even with fat fingers). The bold outline around the currently selected field is also a nice touch. For a customer who is multitasking or distracted by something around them, returning to the checkout form would be much easier with this type of format.

Sephora, again, handles mobile checkout the right way. In this case, I want to draw your attention to the grayed-out “Place Order” button:

Sephora uses the call to action as a guide for customers who haven’t finished the form. (Image: Sephora) (View large version)

The button serves as an indicator to customers that they’re not quite ready to submit their purchase information yet, which is great. Even though the form is beautifully designed — everything is well labeled, the fields are large, and the form is logically organized — mobile users could accidentally scroll too far past a field and wouldn’t know it until clicking the call-to-action button.

If you can keep users from receiving that dreaded “missing information” error, you’ll do a better job of holding onto their purchases.

8. Simplify Form Input

Digging a bit deeper into these contact forms, let’s look at how you can simplify the input of data on mobile:

  • Allow customers to use their browser’s autocomplete functionality to fill in forms.
  • Include the tabindex HTML attribute to let customers tab up and down through the form with the arrows above the keyboard. This keeps their thumbs within a comfortable range on the smartphone at all times, instead of constantly reaching up to tap into a new field.
  • Add a checkbox that automatically copies the billing address information over to the shipping fields.
  • Change the keyboard according to the kind of field being typed in (see the markup sketch after this list).
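
For illustration, here is a minimal markup sketch of how a few checkout fields might enable those behaviors in plain HTML. The field names are made up for this example; the type, inputmode and autocomplete attributes are standard and are what trigger the right mobile keyboard and the browser's autofill:

<!-- Hypothetical field names; type, inputmode and autocomplete are standard HTML attributes. -->
<label for="email">Email address *</label>
<input id="email" name="email" type="email" autocomplete="email" required>

<label for="phone">Phone number *</label>
<input id="phone" name="phone" type="tel" autocomplete="tel" required>

<label for="card-number">Card number *</label>
<input id="card-number" name="card-number" inputmode="numeric" autocomplete="cc-number" required>

<label for="postal-code">ZIP / postal code *</label>
<input id="postal-code" name="postal-code" inputmode="numeric" autocomplete="postal-code" required>

Here, type="email" and type="tel" bring up the email and phone keyboards, inputmode="numeric" switches to the number pad for the card and postal-code fields, and the autocomplete tokens let the browser fill in saved contact and payment details with a single tap.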

One example of this is Bass Pro Shops’ mobile website:

Each field in the Bass Pro checkout form provides users with the right keyboard type. (Image: Bass Pro Shops) (View large version)

For starters, the keyboard uses tab functionality (see the up and down arrows just above the keyboard). For customers with short fingers or who are impatient and just want to type away on the keyboard, the tabs help keep their hands in one place, thus speeding up checkout.

Also, when customers tab into a numbers-only field (like for their phone number), the keyboard automatically changes, so they don’t have to switch manually. Again, this is another way to up the convenience of making a purchase on mobile.

Amazon’s mobile checkout includes a quick checkbox that streamlines customers’ submission of billing information:

Amazon gives customers an easy way to duplicate their shipping address to billing. (Image: Amazon) (View large version)

As we’ve seen with mobile checkout form design, simpler is always better. Obviously, you will always need to collect certain details from customers each time (unless their account has saved that information). Nonetheless, if you can provide a quick toggle or checkbox that enables them to copy data over from one form to another, then do it.

9. Don’t Skimp on the CTA

When designing a desktop checkout, your main concerns with the CTA are things like strategic placement of the button and choosing an eye-catching color to draw attention to it.

On mobile, however, you have to think about size, too — and not just how much space it takes up on the screen. Remember the thumb zone and the various ways in which users hold their phone. Ensure that the button is wide enough so that any user can easily click on it without having to change their hand position.

So, your goal should be to design buttons that (1) sit at the bottom of the mobile checkout page and (2) stretch all the way from left to right, as is the case on Staples’ mobile website:

Staples’ bright blue CTA sticks out in an otherwise plain checkout. (Image: Staples) (View large version)

No matter who is making the purchase — a left-handed, a right-handed or a two-handed cradler — that button will be easy to reach.

Of all the mobile checkout enhancements we’ve covered today, the CTA is the easiest one to address. Make it big, give it a distinctive color, place it at the very bottom of the mobile screen, and make it span the full width. In other words, don’t make customers work hard to take the final step in a purchase.

10. Offer an Alternate Way Out

Finally, give customers an alternate way out.

Let’s say they’re shopping on a mobile website, adding items to their cart, but something isn’t sitting right with them, and they don’t want to make the purchase. You’ve done everything you can to assure them along the way with a clean, easy and secure checkout experience, but they just aren’t confident in making a payment on their phone.

Rather than merely hoping you don’t lose the purchase entirely, give them a chance to save it for later. That way, if they really are interested in buying your product, they can revisit on desktop and pull the trigger. It’s not ideal, because you do want to keep them in place on mobile, but the option is good for customers who just can’t be saved.

As you can see on L.L. Bean’s mobile website, there is an option at checkout to “Move to Wish List”:

L.L. Bean gives customers another chance to move items to their wish list during checkout. (Image: L.L. Bean) (View large version)

What’s nice about this is that L.L. Bean clearly doesn’t want browsing of the wish list or the removal of an item to be a primary action. If “Move to Wish List” were shown as a big bold CTA button, more customers might decide to take this seemingly safer alternative. As it’s designed now, it’s more of a, “Hey, we don’t want you to do anything you’re not comfortable with. This is here just in case.”

While fewer options are generally better in web design, this might be something to explore if your checkout has a high cart abandonment rate on mobile.

Wrapping Up

As more mobile visitors flock to your website, every step leading to conversion — including the checkout phase — needs to be optimized for convenience, speed and security. If your checkout is not adeptly designed to mobile users’ specific needs and expectations, you’re going to find that those conversion rates drop or shift back to desktop — and that’s not the direction you want things to go in, especially if Google is pushing us all towards a mobile-first world.

How To Create An Audio/Video Recording App With React Native: An In-Depth Tutorial

Smashing Magazine - Thu, 04/19/2018 - 6:15am
By Oleh Mryhlod, 2018-04-19

React Native is a young technology, already gaining popularity among developers. It is a great option for smooth, fast, and efficient mobile app development. High-performance rates for mobile environments, code reuse, and a strong community: These are just some of the benefits React Native provides.

In this guide, I will share some insights about the high-level capabilities of React Native and the products you can develop with it in a short period of time.

We will delve into the step-by-step process of creating a video/audio recording app with React Native and Expo. Expo is an open-source toolchain built around React Native for developing iOS and Android projects with React and JavaScript. It provides a bunch of native APIs maintained by native developers and the open-source community.

After reading this article, you should have all the necessary knowledge to create video/audio recording functionality with React Native.

Let's get right to it.

Brief Description Of The Application

The application you will learn to develop is called a multimedia notebook. I have implemented part of this functionality in an online job board application for the film industry. The main goal of this mobile app is to connect people who work in the film industry with employers. They can create a profile, add a video or audio introduction, and apply for jobs.

The application consists of three main screens that you can switch between with the help of a tab navigator:

  • the audio recording screen,
  • the video recording screen,
  • a screen with a list of all recorded media and functionality to play back or delete them.

Check out how this app works by opening this link with Expo.

First, download Expo to your mobile phone. There are two options for opening the project:

  1. Open the link in the browser, scan the QR code with your mobile phone, and wait for the project to load.
  2. Open the link with your mobile phone and click on “Open project using Expo”.

You can also open the app in the browser. Click on “Open project in the browser”. If you have a paid account on Appetize.io, visit it and enter the code in the field to open the project. If you don’t have an account, click on “Open project” and wait in an account-level queue to open the project.

However, I recommend that you download the Expo app and open this project on your mobile phone to check out all of the features of the video and audio recording app.

You can find the full code for the media recording app in the repository on GitHub.

Dependencies Used For App Development

As mentioned, the media recording app is developed with React Native and Expo.

You can see the full list of dependencies in the repository’s package.json file.

These are the main libraries used:

  • React-navigation, for navigating the application,
  • Redux, for saving the application’s state,
  • React-redux, which are React bindings for Redux,
  • Recompose, for writing the components’ logic,
  • Reselect, for extracting the state fragments from Redux.

Let's look at the project's structure:

[Screenshot: the project structure]
  • src/index.js: root app component imported in the app.js file;
  • src/components: reusable components;
  • src/constants: global constants;
  • src/styles: global styles, colors, font sizes and dimensions;
  • src/utils: useful utilities and recompose enhancers;
  • src/screens: screens components;
  • src/store: Redux store;
  • src/navigation: application’s navigator;
  • src/modules: Redux modules divided by entities as modules/audio, modules/video, modules/navigation.

Let’s proceed to the practical part.

Create Audio Recording Functionality With React Native

First, it's important to check the documentation for the Expo Audio API related to audio recording and playback. You can see all of the code in the repository. I recommend opening the code as you read this article to better understand the process.

When launching the application for the first time, you’ll need the user's permission for audio recording, which entails access to the microphone. Let's use Expo.AppLoading and ask permission for recording by using Expo.Permissions (see the src/index.js) during startAsync.

await Permissions.askAsync(Permissions.AUDIO_RECORDING);

Audio recordings are displayed on a separate screen whose UI changes depending on the state.

First, you can see the button “Start recording”. After it is clicked, the audio recording begins, and you will find the current audio duration on the screen. After stopping the recording, you will have to type the recording’s name and save the audio to the Redux store.

My audio recording UI looks like this:

[Screenshot: the audio recording UI]

I can save the audio in the Redux store in the following format:

audioItemsIds: ['id1', 'id2'],
audioItems: {
  'id1': {
    id: string,
    title: string,
    recordDate: date string,
    duration: number,
    audioUrl: string,
  }
},

Let’s write the audio logic by using Recompose in the screen’s container src/screens/RecordAudioScreenContainer.

Before you start recording, customize the audio mode with the help of Expo.Audio.setAudioModeAsync(mode), where mode is a dictionary with the following key-value pairs:

  • playsInSilentModeIOS: A boolean selecting whether your experience’s audio should play in silent mode on iOS. This value defaults to false.
  • allowsRecordingIOS: A boolean selecting whether recording is enabled on iOS. This value defaults to false. Note: When this flag is set to true, playback may be routed to the phone receiver, instead of to the speaker.
  • interruptionModeIOS: An enum selecting how your experience’s audio should interact with the audio from other apps on iOS.
  • shouldDuckAndroid: A boolean selecting whether your experience’s audio should automatically be lowered in volume (“duck”) if audio from another app interrupts your experience. This value defaults to true. If false, audio from other apps will pause your audio.
  • interruptionModeAndroid: An enum selecting how your experience’s audio should interact with the audio from other apps on Android.

Note: You can learn more about the customization of AudioMode in the documentation.

I have used the following values in this app:

  • interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX (our recording interrupts audio from other apps on iOS);
  • playsInSilentModeIOS: true;
  • shouldDuckAndroid: true;
  • interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX (our recording interrupts audio from other apps on Android);
  • allowsRecordingIOS: changes to true before the audio recording starts and back to false after it completes.

To implement this, let's write the handler setAudioMode with Recompose.

withHandlers({
  setAudioMode: () => async ({ allowsRecordingIOS }) => {
    try {
      await Audio.setAudioModeAsync({
        allowsRecordingIOS,
        interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
        playsInSilentModeIOS: true,
        shouldDuckAndroid: true,
        interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
      });
    } catch (error) {
      console.log(error); // eslint-disable-line
    }
  },
}),

To record the audio, you’ll need to create an instance of the Expo.Audio.Recording class.

const recording = new Audio.Recording();

After creating the recording instance, you will be able to receive the status of the Recording with the help of recordingInstance.getStatusAsync().

The status of the recording is a dictionary with the following key-value pairs:

  • canRecord: a boolean.
  • isRecording: a boolean describing whether the recording is currently recording.
  • isDoneRecording: a boolean.
  • durationMillis: current duration of the recorded audio.
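
For example, you could poll that status once with getStatusAsync. A minimal sketch, assuming recording is an Expo.Audio.Recording instance that has already been prepared:

// Poll the recording status once and log the elapsed duration.
const status = await recording.getStatusAsync();
if (status.isRecording) {
  console.log(`Recorded ${status.durationMillis} ms so far`);
}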

You can also set a function to be called at regular intervals with recordingInstance.setOnRecordingStatusUpdate(onRecordingStatusUpdate).

To update the UI, you will need to call setOnRecordingStatusUpdate and set your own callback.

Let’s add some props and a recording callback to the container.

withStateHandlers({
  recording: null,
  isRecording: false,
  durationMillis: 0,
  isDoneRecording: false,
  fileUrl: null,
  audioName: '',
}, {
  setState: () => obj => obj,
  setAudioName: () => audioName => ({ audioName }),
  recordingCallback: () => ({ durationMillis, isRecording, isDoneRecording }) =>
    ({ durationMillis, isRecording, isDoneRecording }),
}),

The callback setting for setOnRecordingStatusUpdate is:

recording.setOnRecordingStatusUpdate(props.recordingCallback);

onRecordingStatusUpdate is called every 500 milliseconds by default. To keep the UI updates smooth, set a 200-millisecond interval with the help of setProgressUpdateInterval:

recording.setProgressUpdateInterval(200);

After creating an instance of this class, call prepareToRecordAsync to record the audio.

recordingInstance.prepareToRecordAsync(options) loads the recorder into memory and prepares it for recording. It must be called before calling startAsync(). This method can be used if the recording instance has never been prepared.

The parameters of this method include such options for the recording as sample rate, bitrate, channels, format, encoder and extension. You can find a list of all recording options in this document.

In this case, let’s use Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY.

After the recording has been prepared, you can start recording by calling the method recordingInstance.startAsync().

Before creating a new recording instance, check whether it has been created before. The handler for beginning the recording looks like this:

onStartRecording: props => async () => {
  try {
    if (props.recording) {
      props.recording.setOnRecordingStatusUpdate(null);
      props.setState({ recording: null });
    }
    await props.setAudioMode({ allowsRecordingIOS: true });

    const recording = new Audio.Recording();
    recording.setOnRecordingStatusUpdate(props.recordingCallback);
    recording.setProgressUpdateInterval(200);

    props.setState({ fileUrl: null });

    await recording.prepareToRecordAsync(Audio.RECORDING_OPTIONS_PRESET_HIGH_QUALITY);
    await recording.startAsync();

    props.setState({ recording });
  } catch (error) {
    console.log(error); // eslint-disable-line
  }
},

Now you need to write a handler for the audio recording completion. After clicking the stop button, you have to stop the recording, disable it on iOS, receive and save the local URL of the recording, and set OnRecordingStatusUpdate and the recording instance to null:

onEndRecording: props => async () => {
  try {
    await props.recording.stopAndUnloadAsync();
    await props.setAudioMode({ allowsRecordingIOS: false });
  } catch (error) {
    console.log(error); // eslint-disable-line
  }

  if (props.recording) {
    const fileUrl = props.recording.getURI();
    props.recording.setOnRecordingStatusUpdate(null);
    props.setState({ recording: null, fileUrl });
  }
},

After this, type the audio name, click the “continue” button, and the audio note will be saved in the Redux store.

onSubmit: props => () => {
  if (props.audioName && props.fileUrl) {
    const audioItem = {
      id: uuid(),
      recordDate: moment().format(),
      title: props.audioName,
      audioUrl: props.fileUrl,
      duration: props.durationMillis,
    };

    props.addAudio(audioItem);
    props.setState({
      audioName: '',
      isDoneRecording: false,
    });
    props.navigation.navigate(screens.LibraryTab);
  }
},

Audio Playback With React Native

You can play the audio on the screen with the saved audio notes. To start the audio playback, click one of the items on the list. Below, you can see the audio player that allows you to track the current position of playback, to set the playback starting point and to toggle the playing audio.

Here’s what my audio playback UI looks like:

[Screenshot: the audio player UI]

The Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback.

Let's write the logic of the audio playback by using Recompose in the screen container src/screens/LibraryScreen/LibraryScreenContainer, as the audio player is available only on this screen.

If you want to display the player at any point of the application, I recommend writing the logic of the player and audio playback in Redux operations using redux-thunk.

Let's customize the audio mode in the same way we did for the audio recording. First, set allowsRecordingIOS to false.

lifecycle({
  async componentDidMount() {
    await Audio.setAudioModeAsync({
      allowsRecordingIOS: false,
      interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
      playsInSilentModeIOS: true,
      shouldDuckAndroid: true,
      interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    });
  },
}),

We have created the recording instance for audio recording. As for audio playback, we need to create the sound instance. We can do it in two different ways:

  1. const playbackObject = new Expo.Audio.Sound();
  2. Expo.Audio.Sound.create(source, initialStatus = {}, onPlaybackStatusUpdate = null, downloadFirst = true)

If you use the first method, you will need to call playbackObject.loadAsync(), which loads the media from source into memory and prepares it for playing, after creation of the instance.

The second method is a static convenience method to construct and load a sound. It creates and loads a sound from source with the optional initialStatus, onPlaybackStatusUpdate and downloadFirst parameters.

The source parameter is the source of the sound. It supports the following forms:

  • a dictionary of the form { uri: 'http://path/to/file' } with a network URL pointing to an audio file on the web;
  • require('path/to/file') for an audio file asset in the source code directory;
  • an Expo.Asset object for an audio file asset.
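
As a minimal sketch of the first approach, assuming a network URL as the source (the app itself uses the second, static create method later on):

// Create the instance, load the media into memory, then play it.
// The URL is a placeholder.
const playbackObject = new Audio.Sound();
try {
  await playbackObject.loadAsync({ uri: 'http://path/to/file.mp3' });
  await playbackObject.playAsync();
} catch (error) {
  console.log(error); // eslint-disable-line
}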

The initialStatus parameter is the initial playback status. PlaybackStatus is the structure returned from all playback API calls describing the state of the playbackObject at that point of time. It is a dictionary with the key-value pairs. You can check all of the keys of the PlaybackStatus in the documentation.

onPlaybackStatusUpdate is a function taking a single parameter, PlaybackStatus. It is called at regular intervals while the media is in the loaded state. The interval is 500 milliseconds by default. In my application, I set it to a 50-millisecond interval for a smoother UI update.

Before creating the sound instance, you will need to implement the onPlaybackStatusUpdate callback. First, add some props to the screen container:

withClassVariableHandlers({
  playbackInstance: null,
  isSeeking: false,
  shouldPlayAtEndOfSeek: false,
  playingAudio: null,
}, 'setClassVariable'),
withStateHandlers({
  position: null,
  duration: null,
  shouldPlay: false,
  isLoading: true,
  isPlaying: false,
  isBuffering: false,
  showPlayer: false,
}, {
  setState: () => obj => obj,
}),

Now, implement onPlaybackStatusUpdate. You will need to make several validations based on PlaybackStatus for a proper UI display:

withHandlers({
  soundCallback: props => (status) => {
    if (status.didJustFinish) {
      props.playbackInstance().stopAsync();
    } else if (status.isLoaded) {
      const position = props.isSeeking()
        ? props.position
        : status.positionMillis;
      const isPlaying = (props.isSeeking() || status.isBuffering)
        ? props.isPlaying
        : status.isPlaying;

      props.setState({
        position,
        duration: status.durationMillis,
        shouldPlay: status.shouldPlay,
        isPlaying,
        isBuffering: status.isBuffering,
      });
    }
  },
}),

After this, you have to implement a handler for the audio playback. If a sound instance is already created, you need to unload the media from memory by calling playbackInstance.unloadAsync() and clear OnPlaybackStatusUpdate:

loadPlaybackInstance: props => async (shouldPlay) => {
  props.setState({ isLoading: true });

  if (props.playbackInstance() !== null) {
    await props.playbackInstance().unloadAsync();
    props.playbackInstance().setOnPlaybackStatusUpdate(null);
    props.setClassVariable({ playbackInstance: null });
  }

  const { sound } = await Audio.Sound.create(
    { uri: props.playingAudio().audioUrl },
    { shouldPlay, position: 0, duration: 1, progressUpdateIntervalMillis: 50 },
    props.soundCallback,
  );

  props.setClassVariable({ playbackInstance: sound });
  props.setState({ isLoading: false });
},

Call the handler loadPlaybackInstance(true) by clicking the item in the list. It will automatically load and play the audio.
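
The list item's press handler isn't shown in this article; here is one possible sketch, assuming the state handlers and class variables defined above (onPressListItem is a hypothetical name):

// Hypothetical handler: remember the selected audio item, show the player,
// then load and start playback via loadPlaybackInstance(true).
onPressListItem: props => async (audioItem) => {
  props.setClassVariable({ playingAudio: audioItem });
  props.setState({ showPlayer: true });
  await props.loadPlaybackInstance(true);
},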

Let's add the pause and play functionality (toggle playing) to the audio player. If audio is already playing, you can pause it with the help of playbackInstance.pauseAsync(). If audio is paused, you can resume playback from the paused point with the help of the playbackInstance.playAsync() method:

onTogglePlaying: props => () => {
  if (props.playbackInstance() !== null) {
    if (props.isPlaying) {
      props.playbackInstance().pauseAsync();
    } else {
      props.playbackInstance().playAsync();
    }
  }
},

When you click on the playing item, it should stop. If you want to stop audio playback and put it into the 0 playing position, you can use the method playbackInstance.stopAsync():

onStop: props => () => {
  if (props.playbackInstance() !== null) {
    props.playbackInstance().stopAsync();

    props.setShowPlayer(false);
    props.setClassVariable({ playingAudio: null });
  }
},

The audio player also allows you to rewind the audio with the help of the slider. When you start sliding, the audio playback should be paused with playbackInstance.pauseAsync().

After the sliding is complete, you can set the audio playing position with the help of playbackInstance.setPositionAsync(value), or play back the audio from the set position with playbackInstance.playFromPositionAsync(value):

onCompleteSliding: props => async (value) => {
  if (props.playbackInstance() !== null) {
    if (props.shouldPlayAtEndOfSeek) {
      await props.playbackInstance().playFromPositionAsync(value);
    } else {
      await props.playbackInstance().setPositionAsync(value);
    }
    props.setClassVariable({ isSeeking: false });
  }
},

After this, you can pass the props to the components MediaList and AudioPlayer (see the file src/screens/LibraryScreen/LibraryScreenView).

Video Recording Functionality With React Native

Let's proceed to video recording.

We’ll use Expo.Camera for this purpose. Expo.Camera is a React component that renders a preview of the device’s front or back camera. Expo.Camera can also take photos and record videos that are saved to the app’s cache.

To record video, you need permission for access to the camera and microphone. Let's add the request for camera access as we did with the audio recording (in the file src/index.js):

await Permissions.askAsync(Permissions.CAMERA);

Video recording is available on the “Video Recording” screen. After switching to this screen, the camera will turn on.

You can change the camera type (front or back) and start video recording. During recording, you can see its general duration and can cancel or stop it. When recording is finished, you will have to type the name of the video, after which it will be saved in the Redux store.

Here is what my video recording UI looks like:

[Screenshot: the video recording UI]

Let’s write the video recording logic by using Recompose on the container screen src/screens/RecordVideoScreen/RecordVideoScreenContainer.

You can see the full list of all props in the Expo.Camera component in the document.

In this application, we will use the following props for Expo.Camera.

  • type: The camera type is set (front or back).
  • onCameraReady: This callback is invoked when the camera preview is set. You won't be able to start recording if the camera is not ready.
  • style: This sets the styles for the camera container. In this case, the size is 4:3.
  • ref: This is used for direct access to the camera component.

Let's add a variable for saving the camera type and a handler for toggling it.

cameraType: Camera.Constants.Type.back,
toggleCameraType: state => () => ({
  cameraType: state.cameraType === Camera.Constants.Type.front
    ? Camera.Constants.Type.back
    : Camera.Constants.Type.front,
}),

Let's add the variable for saving the camera ready state and callback for onCameraReady.

isCameraReady: false, setCameraReady: () => () => ({ isCameraReady: true }),

Let's add the variable for saving the camera component reference and setter.

cameraRef: null, setCameraRef: () => cameraRef => ({ cameraRef }),

Let's pass these variables and handlers to the camera component.

<Camera
  type={cameraType}
  onCameraReady={setCameraReady}
  style={s.camera}
  ref={setCameraRef}
/>

Now, when calling toggleCameraType after clicking the button, the camera will switch from the front to the back.

Currently, we have access to the camera component via the reference, and we can start video recording with the help of cameraRef.recordAsync().

The method recordAsync starts recording a video to be saved to the cache directory.

Arguments:

Options (object) — a map of options:

  • quality (VideoQuality): Specify the quality of the recorded video. Usage: Camera.Constants.VideoQuality[''], possible values: for 16:9 resolution, 2160p, 1080p, 720p and 480p (Android only); for 4:3, the size is 640x480. If the chosen quality is not available on the device, the highest available one is chosen.
  • maxDuration (number): Maximum video duration in seconds.
  • maxFileSize (number): Maximum video file size in bytes.
  • mute (boolean): If present, video will be recorded with no sound.

recordAsync returns a promise that resolves to an object containing the video file’s uri property. You will need to save the file’s URI in order to play back the video later. The promise resolves when stopRecording is invoked, when maxDuration or maxFileSize is reached, or when the camera preview is stopped.

Because the ratio set for the camera component sides is 4:3, let's set the same format for the video quality.

Here is what the handler for starting video recording looks like (see the full code of the container in the repository):

onStartRecording: props => async () => {
  if (props.isCameraReady) {
    props.setState({ isRecording: true, fileUrl: null });
    props.setVideoDuration();
    props.cameraRef.recordAsync({ quality: '4:3' })
      .then((file) => {
        props.setState({ fileUrl: file.uri });
      });
  }
},

During the video recording, we can’t receive the recording status as we have done for audio. That's why I have created a function to set video duration.
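
That duration function isn't shown in the article; here is one possible sketch, assuming the container keeps a videoDuration value in state and stores the interval id in the interval prop cleared by the stop handler below (both state keys are assumptions):

// Hypothetical sketch: tick once per second, store the elapsed seconds and
// the interval id in state so stopRecording can clear it.
setVideoDuration: props => () => {
  let videoDuration = 0;
  props.setState({ videoDuration });

  const interval = setInterval(() => {
    videoDuration += 1;
    props.setState({ videoDuration });
  }, 1000);

  props.setState({ interval });
},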

To stop the video recording, we have to call the following function:

stopRecording: props => () => {
  if (props.isRecording) {
    props.cameraRef.stopRecording();
    props.setState({ isRecording: false });
    clearInterval(props.interval);
  }
},

Check out the entire process of video recording.

Video Playback Functionality With React Native

You can play back the video on the “Library” screen. Video notes are located in the “Video” tab.

To start the video playback, click the selected item in the list. Then, switch to the playback screen, where you can watch or delete the video.

The UI for video playback looks like this:

[Screenshot: the video playback UI]

To play back the video, use Expo.Video, a component that displays a video inline with the other React Native UI elements in your app.

The video will be displayed on the separate screen, PlayVideo.

You can check out all of the props for Expo.Video here.

In our application, the Expo.Video component uses native playback controls and looks like this:

<Video
  source={{ uri: videoUrl }}
  style={s.video}
  shouldPlay={isPlaying}
  resizeMode="contain"
  useNativeControls={isPlaying}
  onLoad={onLoad}
  onError={onError}
/>
  • source
    This is the source of the video data to display. The same forms as for Expo.Audio.Sound are supported.
  • resizeMode
    This is a string describing how the video should be scaled for display in the component view’s bounds. It can be “stretch”, “contain” or “cover”.
  • shouldPlay
    This boolean describes whether the media is supposed to play.
  • useNativeControls
    This boolean, if set to true, displays native playback controls (such as play and pause) within the video component.
  • onLoad
    This function is called once the video has been loaded.
  • onError
    This function is called if loading or playback has encountered a fatal error. The function passes a single error message string as a parameter.

When the video has loaded, the play button should be rendered on top of it.

When you click the play button, the video turns on and the native playback controls are displayed.
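
The view code for that overlay isn't included here; a possible JSX fragment for the playback screen's render, assuming the isPlaying and onTogglePlaying props from the container shown below, with TouchableOpacity and Text imported from react-native and a placeholder style name:

{/* Sketch: show a tappable play control over the video while it is paused. */}
{!isPlaying && (
  <TouchableOpacity style={s.playButtonOverlay} onPress={onTogglePlaying}>
    <Text>▶</Text>
  </TouchableOpacity>
)}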

Let’s write the logic of the video using Recompose in the screen container src/screens/PlayVideoScreen/PlayVideoScreenContainer:

const defaultState = {
  isError: false,
  isLoading: false,
  isPlaying: false,
};

const enhance = compose(
  paramsToProps('videoUrl'),
  withStateHandlers({
    ...defaultState,
    isLoading: true,
  }, {
    onError: () => () => ({ ...defaultState, isError: true }),
    onLoad: () => () => defaultState,
    onTogglePlaying: ({ isPlaying }) => () => ({ ...defaultState, isPlaying: !isPlaying }),
  }),
);

As previously mentioned, the Expo.Audio.Sound objects and Expo.Video components share a unified imperative API for media playback. That's why you can create custom controls and use more advanced functionality with the Playback API.

Check out the video playback process:

See the full code for the application in the repository.

You can also install the app on your phone by using Expo and check out how it works in practice.

Wrapping Up

I hope you have enjoyed this article and have enriched your knowledge of React Native. You can use this audio and video recording tutorial to create your own custom-designed media player. You can also scale the functionality and add the ability to save media in the phone’s memory or on a server, synchronize media data between different devices, and share media with others.

As you can see, there is a wide scope for imagination. If you have any questions about the process of developing an audio or video recording app with React Native, feel free to drop a comment below.

Which Podcasts Should Web Designers And Developers Be Listening To?

Smashing Magazine - Wed, 04/18/2018 - 7:45am
By Ricky Onsman, 2018-04-18

We asked the Smashing community what podcasts they listened to, aiming to compile a shortlist of current podcasts for web designers and developers. We had what can only be called a very strong response — both in number and in passion.

First, we winnowed out the podcasts that were on a broader theme (e.g. creativity, mentoring, leadership), on a narrower theme (e.g. on one specific WordPress theme) or on a completely different theme (e.g. car maintenance — I’m sure it was well-intentioned).

When we filtered out those that had produced no new content in the last three months or more (although then we did have to make some exceptions, as you’ll see), and ordered the rest according to how many times they were nominated, we had a graded shortlist of 55.

Agreed, that’s not a very short shortlist.

So, we broke it down into five more reasonably sized shortlists:

Obviously, it’s highly unlikely anyone could — or would want to — listen to every episode of every one of these podcasts. Still, we’re pretty sure that any web designer or developer will find a few podcasts in this lot that will suit their particular listening tastes.

A couple of caveats before we begin:

  • We don’t claim to be comprehensive. These lists are drawn from suggestions from readers (not all of which were included) plus our own recommendations.
  • The descriptions are drawn from reader comments, summaries provided by the podcast provider and our own comments. Podcast running times and frequency are, by and large, approximate. The reality is podcasts tend to vary in length, and rarely stick to their stated schedule.
  • We’ve listed each podcast once only, even though several could qualify for more than one list.
  • We’ve excluded most videocasts. This is just for listening (videos probably deserve their own article).
Podcasts For Web Developers

Syntax

Wes Bos and Scott Tolinski dive deep into web development topics, explaining how they work and talking about their own experiences. They cover everything from JavaScript frameworks like React to the latest advancements in CSS to simplifying web tooling. 30-70 minutes. Weekly.

Developer Tea

A podcast for developers designed to fit inside your tea break, a highly-concentrated, short, frequent podcast specifically for developers who like to learn on their tea (and coffee) break. The Spec Network also produces Design Details. 10-30 minutes. Every two days.

Web Platform Podcast

Covers the latest in browser features, standards, and the tools developers use to build for the web of today and beyond. Founded in 2014 by Erik Isaksen. Hosts Danny, Amal, Leon, and Justin are joined by a special guest to discuss the latest developments. 60 minutes. Weekly.

Devchat Podcasts

Fourteen podcasts with a range of hosts that each explore developments in a specific aspect of development or programming including Ruby, iOS, Angular, JavaScript, React, Rails, security, conference talks, and freelancing. 30-60 minutes. Weekly.

The Bike Shed

Hosts Derek Prior, Sean Griffin, Amanda Hill and guests discuss their development experience and challenges with Ruby, Rails, JavaScript, and whatever else is drawing their attention, admiration, or ire at any particular moment. 30-45 minutes. Weekly.

NodeUp

Hosted by Rod Vagg and a series of occasional co-hosts, this podcast features lengthy discussions with guests and panels about Node.js and Node-related topics. 30-90 minutes. Weekly / Monthly.

.NET Rocks

Carl Franklin and Richard Campbell host an internet audio talk show for anyone interested in programming on the Microsoft .NET platform, including basic information, tutorials, product developments, guests, tips and tricks. 60 minutes. Twice a week.

Three Devs and a Maybe

Join Michael Budd, Fraser Hart, Lewis Cains, and Edd Mann as they discuss software development, frequently joined by a guest on the show’s topic, ranging from daily developer life, PHP, frameworks, testing, good software design and programming languages. 45-60 minutes. Weekly.

Weekly Dev Tips

Hosted by experienced software architect, trainer, and entrepreneur Steve Smith, Weekly Dev Tips offers a variety of technical and career tips for software developers. Each tip is quick and to the point, describing a problem and one or more ways to solve that problem. 5-10 minutes. Weekly.

devMode.fm

Dedicated to the tools, techniques, and technologies used in modern web development. Each episode, Andrew Welch and Patrick Harrington lead a cadre of hosts discussing the latest hotness, pet peeves, and the frontend development technologies we use. 60-90 minutes. Twice a week.

CodeNewbie

Stories from people on their coding journey. New episodes published every Monday. The most supportive community of programmers and people learning to code. Founded by Saron Yitbarek. 30-60 minutes. Weekly.

Front End Happy Hour

A podcast featuring panels of engineers from @Netflix, @Evernote, @Atlassian and @LinkedIn talking over drinks about all things Front End development. 45-60 minutes. Every two weeks.

Under the Radar

From development and design to marketing and support, Under the Radar is all about independent app development. Hosted by David Smith and Marco Arment. 30 minutes. Weekly.

Hanselminutes

Scott Hanselman interviews movers and shakers in technology in this commute-time show. From Michio Kaku to Paul Lutus, Ward Cunningham to Kimberly Bryant, Hanselminutes is talk radio guaranteed not to waste your time. 30 minutes. Weekly.

Fixate on Code

Since October 2017, Larry Botha from South African design agency Fixate has been interviewing well known achievers in web design and development on how to help front end developers write better code. 30 minutes. Weekly.

Podcasts For Web Designers

99% Invisible

Design is everywhere in our lives, perhaps most importantly in the places where we’ve just stopped noticing. 99% Invisible is a weekly exploration of the process and power of design and architecture, from award winning producer Roman Mars. 20-45 minutes. Weekly.

Design Details

A show about the people who design our favorite products, hosted by Bryn Jackson and Brian Lovin. The Spec Network also produces Developer Tea. 60-90 minutes. Weekly.

Presentable

Host Jeffrey Veen brings over two decades of experience as a designer, developer, entrepreneur, and investor as he chats with guests about how we design and build the products that are shaping our digital future and how design is changing the world. 45-60 minutes. Weekly.

Responsive Web Design

In each episode, Karen McGrane and Ethan Marcotte (who coined the term “responsive web design”) interview the people who make responsive redesigns happen. 15-30 minutes. Weekly. (STOP PRESS: Karen and Ethan issued their final episode of this podcast on 26 March 2018.)

RWD Podcast

Host Justin Avery explores new and emerging web technologies, chats with web industry leaders and digs into all aspects of responsive web design. 10-60 minutes. Weekly / Monthly.

UXPodcast

Business, technology and people in digital media. Moving the conversation beyond the traditional realm of User Experience. Hosted by Per Axbom and James Royal-Lawson from Sweden. 30-45 minutes. Every two weeks.

UXpod

A free-ranging set of discussions on matters of interest to people involved in user experience design, website design, and usability in general. Gerry Gaffney set this up to provide a platform for discussing topics of interest to UX practitioners. 30-45 minutes. Weekly / Monthly.

UX-radio

A podcast about IA, UX and Design that features collaborative discussions with industry experts to inspire, educate and share resources with the community. Created by Lara Fedoroff and co-hosted with Chris Chandler. 30-45 minutes. Weekly / Monthly.

User Defenders

Host Jason Ogle aims to highlight inspirational UX Designers leading the way in their craft, by diving deeper into who they are, and what makes them tick/successful, in order to inspire and equip those aspiring to do the same. 30-90 minutes. Weekly.

The Drunken UX Podcast

Our hosts Michael Fienen and Aaron Hill look at issues facing websites and developers that impact the way we all use the web. “In the process, we’ll drink drinks, share thoughts, and hopefully make you laugh a little.” 60 minutes. Twice a week.

UI Breakfast Podcast

Join Jane Portman for conversations about UI/UX design, products, marketing, and so much more, with awesome guests who are industry experts ready to share actionable knowledge. 30-60 minutes. Weekly.

Efficiently Effective

Saskia Videler keeps us up to date with what’s happening in the field of UX and content strategy, aiming to help content experts, UX professionals and others create better digital experiences. 25-40 minutes. Monthly.

The Honest Designers Show

Hosts Tom Ross, Ian Barnard, Dustin Lee and Lisa Glanz have each found success in their creative fields and are here to give struggling designers a completely honest, under the hood look at what it takes to flourish in the modern world. 30-60 minutes. Weekly.

Design Life

A podcast about design and side projects for motivated creators. Femke van Schoonhoven and Charli Prangley (serial side project addicts) saw a gap in the market for a conversational show hosted by two females about design and issues young creatives face. 30-45 minutes. Weekly.

Layout FM

A weekly podcast about design, technology, programming and everything else hosted by Kevin Clark and Rafael Conde. 60-90 minutes. Weekly.

Bread Time

Gabriel Valdivia and Charlie Deets host this micro-podcast about design and technology, the impact of each on the other, and the impact of them both on all of us. 10-30 minutes. Weekly.

The Deeply Graphic DesignCast

Every episode covers a new graphic design-related topic, and a few relevant tangents along the way. Wes McDowell and his co-hosts also answer listener-submitted questions in every episode. 60 minutes. Every two weeks.

Podcasts On The Web, The Internet, And Technology

The Big Web Show

Veteran web designer and industry standards champion Jeffrey Zeldman is joined by special guests to address topics like web publishing, art direction, content strategy, typography, web technology, and more. 60 minutes. Weekly.

ShopTalk

A podcast about front end web design, development and UX. Each week Chris Coyier and Dave Rupert are joined by a special guest to talk shop and answer listener submitted questions. 60 minutes. Weekly.

Boagworld

Paul Boag and Marcus Lillington are joined by a variety of guests to discuss a range of web design related topics. Fun, informative and quintessentially British, with content for designers, developers and website owners, something for everybody. 60 minutes. Weekly.

The Changelog

Conversations with the hackers, leaders, and innovators of open source. Hosts Adam Stacoviak and Jerod Santo do in-depth interviews with the best and brightest software engineers, hackers, leaders, and innovators. 60-90 minutes. Weekly.

Back to Front Show

Topics under discussion hosted by Keir Whitaker and Kieran Masterton include remote working, working in the web industry, productivity, hipster beards and much more. Released irregularly but always produced with passion. 30-60 minutes. Weekly / Monthly.

The Next Billion Seconds

The coming “next billion seconds” are the most important in human history, as technology transforms the way we live and work. Mark Pesce talks to some of the brightest minds shaping our world. 30-60 minutes. Every two weeks.

Toolsday

Hosted by Una Kravets and Chris Dhanaraj, Toolsday is about the latest in tech tools, tips, and tricks. 30 minutes. Weekly.

Reply All

A podcast about the internet, often delving deeper into modern life. Hosted by PJ Vogt and Alex Goldman from US narrative podcasting company Gimlet Media. 30-60 minutes. Weekly.

CTRL+CLICK CAST

Diverse voices from industry leaders and innovators, who tackle everything from design, code and CMS, to culture and business challenges. Focused, topical discussions hosted by Lea Alcantara and Emily Lewis. 60 minutes. Every two weeks.

Modern Web

Explores next generation frameworks, standards, and techniques. Hosted by Tracy Lee. Topics include EmberJS, ReactJS, AngularJS, ES2015, RxJS, functional reactive programming. 60 minutes. Weekly.

Relative Paths

A UK based podcast on “web development and stuff like that” for web industry types. Hosted by Mark Phoenix and Ben Hutchings. 60 minutes. Every two weeks.

Business Podcasts For Web Professionals

The Businessology Show

The Businessology Show is a podcast about the business of design and the design of business, hosted by CPA/coach Jason Blumer. 30 minutes. Monthly.

CodePen Radio

Chris Coyier, Alex Vazquez, and Tim Sabat, the co-founders of CodePen, talk about the ins and outs of running a small web software business. The good, the bad, and the ugly. 30 minutes. Weekly.

BizCraft

Podcast about the business side of web design, recorded live almost every two weeks. Your hosts are Carl Smith of nGen Works and Gene Crawford of UnmatchedStyle. 45-60 minutes. Every two weeks.

Podcasts That Don’t Have Recent Episodes (But Do Have Great Archives)

Design Review Podcast

No chit-chat, just focused in-depth discussions about design topics that matter. Jonathan Shariat and Chris Liu are your hosts and bring to the table passion and years of experience. 30-60 minutes. Every two weeks. Last episode 26 November 2017.

Style Guide Podcast

A small batch series of interviews (20 in total) on Style Guides, hosted by Anna Debenham and Brad Frost, with high profile designer guests. 45 minutes. Weekly. Last episode 19 November 2017.

True North

Looks to uncover the stories of everyday people creating and designing, and highlight the research and testing that drives innovation. Produced by Loop11. 15-60 minutes. Every two weeks. Last episode 18 October 2017.

UIE.fm Master Feed

Get all episodes from every show on the UIE network in this master feed: UIE Book Corner (with Adam Churchill) and The UIE Podcast (with Jared Spool) plus some archived older shows. 15-60 minutes. Weekly. Last episode 4 October 2017.

Let’s Make Mistakes

A podcast about design with your hosts, Mike Monteiro, Liam Campbell, Steph Monette, and Seven Morris, plus a range of guests who discuss good design, business and ethics. 45-60 minutes. Weekly / Monthly. Last episode 3 August 2017.

Motion and Meaning

A podcast about motion for digital designers brought to you by Val Head and Cennydd Bowles, covering everything from the basic principles of animation through to advanced tools and techniques. 30 minutes. Monthly. Last episode 13 December 2016.

The Web Ahead

Conversations with world experts on changing technologies and future of the web. The Web Ahead is your shortcut to keeping up. Hosted by Jen Simmons. 60-100 minutes. Monthly. Last episode 30 June 2016.

Unfinished Business

UK designer Andy Clarke and guests have plenty to talk about, mostly on and around web design, creative work and modern life. 60-90 minutes. Monthly. Last episode 28 June 2016. (STOP PRESS: A new episode was issued on 20 March 2018. Looks like it’s back in action.)

Dollars to Donuts

A podcast where Steve Portigal talks with the people who lead user research in their organizations. 50-75 minutes. Irregular. Last episode 10 May 2016.

Any Other Good Ones Missing?

As we noted, there are probably many other good podcasts out there for web designers and developers. If we’ve missed your favorite, let us know about it in the comments, or in the original threads on Twitter or Facebook.

(vf, ra, il)
Categories: Around The Web

How To Improve Your Design Process With Data-Based Personas

Smashing Magazine - Tue, 04/17/2018 - 7:40am
How To Improve Your Design Process With Data-Based Personas, by Tim Noetzel (2018-04-17)

Most design and product teams have some type of persona document. Theoretically, personas help us better understand our users and meet their needs. The idea is that codifying what we’ve learned about distinct groups of users helps us make better design decisions. Referring to these documents ourselves and sharing them with non-design team members and external stakeholders should ultimately lead to a user experience more closely aligned with what real users actually need.

In reality, personas rarely prove equal to these expectations. On many teams, persona documents sit abandoned on hard drives, collecting digital dust while designers continue to create products based primarily on whim and intuition.

In contrast, well-researched personas serve as a proxy for the user. They help us check our work and ensure that we’re building things users really need.

In fact, the best personas don’t just describe users; they actually help designers predict their behavior. In her article on persona creation, Laura Klein describes it perfectly:

“If you can create a predictive persona, it means you truly know not just what your users are like, but the exact factors that make it likely that a person will become and remain a happy customer.”

In other words, useful personas actually help design teams make better decisions because they can predict with some accuracy how users will respond to potential product changes.

Obviously, for personas to facilitate these types of predictions, they need to be based on more than intuition and anecdotes. They need to be data-driven.

So, what do data-driven personas look like, and how do you make one?

Start With What You Think You Know

The first step in creating data-driven personas is similar to the typical persona creation process. Write down your team’s hypotheses about what the key user groups are and what’s important to each group.

If your team is like most, some members will disagree with others about which groups are important, the particular makeup and qualities of each persona, and so on. This type of disagreement is healthy, but unlike the usual persona creation process you may be used to, you’re not going to get bogged down here.

Instead of debating the merits of each persona (and the various facets and permutations thereof), the important thing is to be specific about the different hypotheses you and your team have and write them down. You’re going to validate these hypotheses later, so it’s okay if your team disagrees at this stage. You may choose to focus on a few particular personas, but make sure to keep a backlog of other ideas as well.

First, start by recording all the hypotheses you have about key personas. You’ll refine these through user research in the next step. (Large preview)

I recommend aiming for a short, 1–2 sentence description of each hypothetical persona that details who they are, what root problem they hope to solve by using your product, and any other pertinent details.

You can use the traditional user stories framework for this. If you were creating hypothetical personas for Craigslist, one of these statements might read:

“As a recent college grad, I want to find cheap furniture so I can furnish my new apartment.”

Another might say:

“As a homeowner with an extra bedroom, I want to find a responsible tenant to rent this space to so I can earn some extra income.”

If you have existing data — things like user feedback emails, NPS scores, user interview notes, or analytics data — be sure to go over them and include relevant data points in your notes along with your user stories.

Validate And Refine

The next step is to validate and refine these hypotheses with user interviews. For each of your hypothetical personas, you’ll want to start by interviewing 5 to 10 people who fit that group.

You have three key goals for these interviews. For each group, you need to:

  1. Understand the context in which they need to solve the problem.
  2. Confirm that members of the persona group agree that the problem you recorded is an urgent and painful one that they struggle to solve now.
  3. Identify key predictors of whether a member of this persona is likely to become and remain an active user.

The approach you take to these interviews may vary, but I recommend a hybrid approach between a traditional user interview, which is very non-leading, and a Lean Problem interview, which is deliberately leading.

Start with the traditional user interview approach and ask behavior-based, non-leading questions. In the Craigslist example, we might ask the recent college grad something like:

“Tell me about the last time you purchased furniture. What did you buy? Where did you buy it?”

These types of questions are great for establishing whether the interviewee recently experienced the problem in question, how they solved it, and whether they’re dissatisfied with their current solution.

Once you’ve finished asking these types of questions, move on to the Lean Problem portion of the interview. In this section, you want to tell a story about a time when you experienced the problem — establishing the various issues you struggled with and why it was frustrating — and see how they respond.

You might say something like this:

“When I graduated college, I had to get new furniture because I wasn’t living in the dorm anymore. I spent forever looking at furniture stores, but they were all either ridiculously expensive or big-box stores with super-cheap furniture I knew would break in a few weeks. I really wanted to find good furniture at a reasonable price, but I couldn’t find anything and I eventually just bought the cheap stuff. It inevitably broke, and I had to spend even more money, which I couldn’t really afford. Does any of that resonate with you?”

What you’re looking for here is emphatic agreement. If your interviewee says "yes, that resonates" but doesn’t get much more excited than they were during the rest of the interview, the problem probably wasn’t that painful for them.

You can validate or invalidate your persona hypotheses with a series of quick, 30-minute interviews. (Large preview)

On the other hand, if they get excited, empathize with your story, or give their own anecdote about the problem, you know you’ve found a problem they really care about and need to be solved.

Finally, make sure to ask any demographic questions you didn’t cover earlier, especially those around key attributes you think might be significant predictors of whether somebody will become and remain a user. For example, you might think that recent college grads who get high-paying jobs aren’t likely to become users because they can afford to buy furniture at retail; if so, be sure to ask about income.

You’re looking for predictable patterns. If you bring in 5 members of your persona and 4 of them have the problem you’re trying to solve and desperately want a solution, you’ve probably identified a key persona.

On the other hand, if you’re getting inconsistent results, you likely need to refine your hypothetical persona and repeat this process, using what you learn in your interviews to form new hypotheses to test. If you can’t consistently find users who have the problem you want to solve, it’s going to be nearly impossible to get millions of them to use your product. So don’t skimp on this step.

Create Your Personas

The penultimate step in this process is creating the actual personas themselves. This is where things get interesting. Unlike traditional personas, which are typically static, your data-driven personas will be living, breathing documents.

The goal here is to combine the lessons you learned in the previous step — about who the user is and what they need — with data that shows how well the latest iteration of your product is serving their needs.

At my company Swish, each one of our personas includes two sections with the following data:

Predictive User Data:

  • Description of the user including predictive demographics.
  • Quotes from at least 3 actual users that describe the jobs-to-be-done.
  • The percentage of the potential user base the persona represents.

Product Performance Data:

  • The percentage of our current user base the persona represents.
  • Latest activation, retention, and referral rates for the persona.
  • Current NPS score for the persona.

If you’re looking for more ideas for data to include, check out Coryndon Luxmoore’s presentation on how his team created data-driven personas at Buildium.

It may take some time for your team to produce all this information, but it’s okay to start with what you have and improve the personas over time. Your personas won’t be sitting on a shelf. Every time you release a new feature or change an existing one, you should measure the results and update your personas accordingly.

Integrate Your Personas Into Your Workflow

Now that you’ve created your personas, it’s time to actually use them in your day-to-day design process. Here are 4 opportunities to use your new data-driven personas:

  1. At Standups
    At Swish, our standups are a bit different. We start these meetings by reviewing the activation, retention, and referral metrics for each persona. This ensures that — as we discuss yesterday’s progress and today’s obstacles — we remain focused on what really matters: how well we’re serving our users.
  2. During Prioritization
    Your data-driven personas are a great way to keep team members honest as you discuss new features and changes. When you know how much of your user base the persona represents and how well you’re serving them, it quickly becomes obvious whether a potential feature could actually make a difference. Suddenly deciding what to work on won’t require hours of debate or horse-trading.
  3. At Design Reviews
    Your data-driven personas are a great way to keep team members honest as you discuss new designs. When team members can credibly represent users with actual quotes from user interviews, their feedback quickly becomes less subjective and more useful.
  4. When Onboarding New Team Members
    New hires inevitably bring a host of implicit biases and assumptions about the user with them when they start. By including your data-driven personas in their onboarding documents, you can get new team members up to speed much more quickly and ensure they understand the hard-earned lessons your team learned along the way.
Keeping Your Personas Up To Date

It’s vitally important to keep your personas up-to-date so your team members can continue to rely on them to guide their design thinking.

As your product improves, it’s simple to update NPS scores and performance data. I recommend doing this monthly at a minimum; if you’re working on an early-stage, rapidly-changing product, you’ll get better mileage by updating these stats weekly instead.

It’s also important to check in with members of your personas periodically to make sure your predictive data stays relevant. As your product evolves and the competitive landscape changes, your users’ views about their problems will change as well. If your growth starts to plateau, another round of interviews may help to unlock insights you didn’t find the first time. Even if everything is going well, try to check in with members of your personas — both current users of your product and some non-users — every 6 to 12 months.

Wrapping Up

Building data-driven personas is a challenging project that takes time and dedication. You won’t uncover the insights you need or build the conviction necessary to unify your team with a week-long throwaway project.

But if you put in the time and effort necessary, the results will speak for themselves. Having the type of clarity that data-driven personas provide makes it far easier to iterate quickly, improve your user experience, and build a product your users love.

Further Reading

If you’re interested in learning more, I highly recommend checking out the linked articles above, as well as the following resources:

(rb, ra, yk, il)
Categories: Around The Web

Best Practices With CSS Grid Layout

Smashing Magazine - Mon, 04/16/2018 - 7:35am
Best Practices With CSS Grid Layout, by Rachel Andrew (2018-04-16)

An increasingly common question — now that people are using CSS Grid Layout in production — seems to be “What are the best practices?” The short answer to this question is to use the layout method as defined in the specification. The particular parts of the spec you choose to use, and indeed how you combine Grid with other layout methods such as Flexbox, is down to what works for the patterns you are trying to build and how you and your team want to work.

Looking deeper, I think this request for “best practices” perhaps indicates a lack of confidence in using a layout method that is very different from what came before. Perhaps a concern that we are using Grid for things it wasn’t designed for, or not using Grid when we should be. Maybe it comes down to worries about supporting older browsers, or about how Grid fits into our development workflow.

In this article, I’m going to try to cover some of the things that could be described as best practices, and some things that you probably don’t need to worry about.

The Survey

To help inform this article, I wanted to find out how other people were using Grid Layout in production, what challenges they faced, and what they really enjoyed about it. Were there common questions, problems, or methods being used? To find out, I put together a quick survey, asking questions about how people were using Grid Layout and, in particular, what they most liked and what they found challenging.

In the article that follows, I’ll be referencing and directly quoting some of those responses. I’ll also be linking to lots of other resources, where you can find out more about the techniques described. As it turned out, there was far more than one article worth of interesting things to unpack in the survey responses. I’ll address some of the other things that came up in a future post.

Accessibility

If there is any part of the Grid specification that you need to take particular care with, it is anything that could cause content re-ordering:

“Authors must use order and the grid-placement properties only for visual, not logical, reordering of content. Style sheets that use these features to perform logical reordering are non-conforming.”

Grid Specification: Re-ordering and Accessibility

This is not unique to Grid; however, the ability to rearrange content so easily in two dimensions makes it a bigger problem for Grid. If you use any method that allows content re-ordering — be that Grid, Flexbox or even absolute positioning — you need to take care not to disconnect the visual experience from how the content is structured in the document. Screen readers (and people navigating around the document using a keyboard only) are going to be following the order of items in the source.

The places where you need to be particularly careful are when using flex-direction to reverse the order in Flexbox; the order property in Flexbox or Grid; any placement of Grid items using any method, if it moves items out of the logical order in the document; and using the dense packing mode of grid-auto-flow.
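To illustrate the point (the selectors here are invented for this sketch, not taken from the article), the order property changes only the visual placement of a grid item; a screen reader or keyboard user still reaches it in its source position:

.products {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 20px;
}

/* Visually pulls the featured card into the first cell,
   while it remains third in the document source. */
.products .featured {
  order: -1;
}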

For more information on this issue, see the following resources:

Which Grid Layout Methods Should I Use?

“With so much choice in Grid, it was a challenge to stick to a consistent way of writing it (e.g. naming grid lines or not, defining grid-template-areas, fallbacks, media queries) so that it would be maintainable by the whole team.”

Michelle Barker

When you first take a look at Grid, it might seem overwhelming with so many different ways of creating a layout. Ultimately, however, it all comes down to things being positioned from one line of the grid to another. You have choices based on the type of layout you are trying to achieve, as well as what works well for your team and the site you are building.

There is no right or wrong way. Below, I will pick up on some of the common themes of confusion. I’ve also already covered many other potential areas of confusion in a previous article “Grid Gotchas and Stumbling Blocks.”

Should I Use An Implicit Or Explicit Grid?

The grid you define with grid-template-columns and grid-template-rows is known as the Explicit Grid. The Explicit Grid enables the naming of lines on the Grid and also gives you the ability to target the end line of the grid with -1. You’ll choose an Explicit Grid to do either of these things and in general when you have a layout all designed and know exactly where your grid lines should go and the size of the tracks.

I use the Implicit Grid most often for row tracks. I want to define the columns but then rows will just be auto-sized and grow to contain the content. You can control the Implicit Grid to some extent with grid-auto-columns and grid-auto-rows, however, you have less control than if you are defining everything.

You need to decide whether you know exactly how much content you have and therefore the number of rows and columns — in which case you can create an Explicit Grid. If you do not know how much content you have, but simply want rows or columns created to hold whatever there is, you will use the Implicit Grid.

Nevertheless, it’s possible to combine the two. In the below CSS, I have defined three columns in the Explicit Grid and three rows, so the first three rows of content will be the following:

  • A track of at least 200px in height, but expanding to take taller content,
  • A track fixed at 400px in height,
  • A track of at least 300px in height (but expands).

Any further content will go into a row created in the Implicit Grid, and I am using the grid-auto-rows property to make those tracks at least 300px tall, expanding to auto.

.grid {
  display: grid;
  grid-template-columns: 1fr 3fr 1fr;
  grid-template-rows: minmax(200px, auto) 400px minmax(300px, auto);
  grid-auto-rows: minmax(300px, auto);
  grid-gap: 20px;
}

A Flexible Grid With A Flexible Number Of Columns

By using Repeat Notation, auto-fill, and minmax(), you can create a pattern of as many tracks as will fit into a container, thus removing the need for Media Queries to some extent. This technique can be found in this video tutorial, and is also demonstrated along with similar ideas in my recent article “Using Media Queries For Responsive Design In 2018.”

Choose this technique when you are happy for content to drop below earlier content when there is less space, and are happy to allow a lot of flexibility in sizing. You have specifically asked for your columns to display with a minimum size, and to auto fill.
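As a rough sketch of this pattern (the .gallery class name is hypothetical), a single rule creates as many columns of at least 200px as will fit, each sharing the leftover space:

.gallery {
  display: grid;
  /* As many 200px-minimum columns as fit the container,
     with any remaining space distributed equally. */
  grid-template-columns: repeat(auto-fill, minmax(200px, 1fr));
  grid-gap: 20px;
}

Resize the container and the number of columns changes without any Media Queries, which is exactly why the technique is attractive when the column count doesn’t need to be predictable.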

There were a few comments in the survey that made me wonder if people were choosing this method when they really wanted a grid with a fixed number of columns. If you are ending up with an unpredictable number of columns at certain breakpoints, you might be better to set the number of columns — and redefine it with media queries as needed — rather than using auto-fill or auto-fit.

Which Method Of Track Sizing Should I Use?

I described track sizing in detail in my article “How Big Is That Box? Understanding Sizing In Grid Layout,” however, I often get questions as to which method of track sizing to use. Particularly, I get asked about the difference between percentage sizing and the fr unit.

If you simply use the fr unit as specced, then it differs from using a percentage because it distributes available space. If you place a larger item into a track, then the way the fr unit will work is to allow that track to take up more space and distribute what is left over.

.grid {
  display: grid;
  grid-template-columns: 1fr 1fr 1fr;
  grid-gap: 20px;
}

The first column is wider as Grid has assigned it more space.

To cause the fr unit to distribute all of the space in the grid container you need to give it a minimum size of 0 using minmax().

.grid {
  display: grid;
  grid-template-columns: minmax(0, 1fr) minmax(0, 1fr) minmax(0, 1fr);
  grid-gap: 20px;
}

Forcing a 0 minimum may cause overflows.

So you can choose to use fr in either of these scenarios: ones where you do want space distribution from a basis of auto (the default behavior), and those where you want equal distribution. I would typically use the fr unit as it then works out the sizing for you, and enables the use of fixed width tracks or gaps. The only time I use a percentage instead is when I am adding grid components to an existing layout that uses other layout methods too. If I want my grid components to line up with a float- or flex-based layout which is using percentages, using them in my grid layout means everything uses the same sizing method.
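For instance, a hypothetical layout can mix a fixed track with fr tracks; the fr units simply share whatever remains once the fixed track and the gaps have been accounted for:

.layout {
  display: grid;
  /* A fixed 200px sidebar; the two flexible tracks share the rest,
     the middle one getting twice as much space as the last. */
  grid-template-columns: 200px 2fr 1fr;
  grid-gap: 20px;
}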

Auto-Place Items Or Set Their Position?

You will often find that you only need to place one or two items in your layout, and the rest fall into place based on content order. In fact, this is a really good test that you haven’t disconnected the source and visual display. If things pretty much drop into position based on auto-placement, then they are probably in a good order.

Once I have decided where everything goes, however, I do tend to assign a position to everything. This means that I don’t end up with strange things happening if someone adds something to the document and grid auto-places it somewhere unexpected, thus throwing out the layout. If everything else is placed, Grid will put that new item into the next available empty grid cell. That might not be exactly where you want it, but sitting at the end of your layout is probably better than popping into the middle and pushing other things around.

Which Positioning Method To Use?

When working with Grid Layout, ultimately everything comes down to placing items from one line to another. Everything else is essentially a helper for that.

Decide with your team if you want to name lines, use Grid Template Areas, or if you are going to use a combination of different types of layout. I find that I like to use Grid Template Areas for small components in particular. However, there is no right or wrong. Work out what is best for you.
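For example, a small media-object component could be laid out with Grid Template Areas along these lines (the class names and markup are hypothetical):

.media {
  display: grid;
  grid-template-columns: 150px 1fr;
  grid-gap: 20px;
  /* Name the areas, then assign child elements to them. */
  grid-template-areas:
    "image heading"
    "image body";
}

.media img      { grid-area: image; }
.media h2       { grid-area: heading; }
.media .content { grid-area: body; }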

Grid In Combination With Other Layout Mechanisms

Remember that Grid Layout isn’t the one true layout method to rule them all, it’s designed for a certain type of layout — namely two-dimensional layout. Other layout methods still exist and you should consider each pattern and what suits it best.

I think this is actually quite hard for those of us used to hacking around with layout methods to make them do something they were not really designed for. It is a really good time to take a step back, look at the layout methods for the tasks they were designed for, and remember to use them for those tasks.

In particular, no matter how often I write about Grid versus Flexbox, I will be asked which one people should use. There are many patterns where either layout method makes perfect sense and it really is up to you. No-one is going to shout at you for selecting Flexbox over Grid, or Grid over Flexbox.

In my own work, I tend to use Flexbox for components where I want the natural size of items to strongly control their layout, essentially pushing the other items around. I also often use Flexbox because I want alignment, given that the Box Alignment properties are only available to use in Flexbox and Grid. I might have a Flex container with one child item, in order that I can align that child.

A sign that perhaps Flexbox isn’t the layout method I should choose is when I start adding percentage widths to flex items and setting flex-grow to 0. The reason to add percentage widths to flex items is often because I’m trying to line them up in two dimensions (lining things up in two dimensions is exactly what Grid is for). However, try both, and see which seems to suit the content or design pattern best. You are unlikely to be causing any problems by doing so.

Nesting Grid And Flex Items

This also comes up a lot, and there is absolutely no problem with making a Grid Item also a Grid Container, thus nesting one grid inside another. You can do the same with Flexbox, making a Flex Item and Flex Container. You can also make a Grid Item and Flex Container or a Flex Item a Grid Container — none of these things are a problem!

What we can’t currently do is nest one grid inside another and have the nested grid use the grid tracks defined on the overall parent. This would be very useful and is what the subgrid proposals in Level 2 of the Grid Specification hope to solve. A nested grid currently becomes a new grid so you would need to be careful with sizing to ensure it aligns with any parent tracks.
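A minimal sketch of that situation (selectors invented for illustration) shows a card component nested inside a three-column page grid; note that the inner tracks are completely independent of the outer ones:

.page {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  grid-gap: 20px;
}

/* A grid item can itself be a grid container,
   but its column tracks do not align with the parent grid. */
.page .card {
  display: grid;
  grid-template-columns: 1fr 2fr;
  grid-gap: 20px;
}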

You Can Have Many Grids On One Page

A comment that popped up a few times in the survey surprised me: there seems to be an idea that a grid should be confined to the main layout, and that many grids on one page are perhaps not a good thing. You can have as many grids as you like! Use Grid for big things and small things; if something makes sense laid out as a grid, then use Grid.

Fallbacks And Supporting Older Browsers

“Grid used in conjunction with @supports has enabled us to better control the number of layout variations we can expect to see. It has also worked really well with our progressive enhancement approach meaning we can reward those with modern browsers without preventing access to content to those not using the latest technology.”

Joe Lambert working on rareloop.com

In the survey, many people mentioned older browsers, however, there was a reasonably equal split between those who felt that supporting older browsers was hard and those who felt it was easy due to Feature Queries and the fact that Grid overrides other layout methods. I’ve written at length about the mechanics of creating these fallbacks in “Using CSS Grid: Supporting Browsers Without Grid.”

In general, modern browsers are far more interoperable than their earlier counterparts. We tend to see far fewer actual “browser bugs” and if you use HTML and CSS correctly, then you will generally find that what you see in one browser is the same as in another.

We do, of course, have situations in which one browser has not yet shipped support for a certain specification, or some parts of a specification. With Grid, we have been very fortunate in that browsers shipped Grid Layout in a very complete and interoperable way within a short time of each other. Therefore, our considerations for testing tend to be to need to test browsers with Grid and without Grid. You may also have chosen to use the -ms prefixed version in IE10 and IE11, which would then require testing as a third type of browser.

Browsers which support modern Grid Layout (not the IE version) also support Feature Queries. This means that you can test for Grid support before using it.
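A simplified sketch of that approach (selectors invented for illustration) is to write the float-based fallback first and then override it inside a Feature Query:

/* Fallback for browsers without Grid support */
.listing .item {
  float: left;
  width: 33.333%;
}

@supports (display: grid) {
  .listing {
    display: grid;
    grid-template-columns: repeat(3, 1fr);
    grid-gap: 20px;
  }

  /* Floats are ignored on grid items, but the width would
     still apply, so reset it for grid-supporting browsers. */
  .listing .item {
    width: auto;
  }
}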

Testing Browsers That Don’t Support Grid

When using fallbacks for browsers without support for Grid Layout (or using the -ms prefixed version for IE10 and 11), you will want to test how those browsers render Grid Layout. To do this, you need a way to view your site in an example browser.

I would not take the approach of breaking your Feature Query by checking for support of something nonsensical, or misspelling the value grid. This approach will only work if your stylesheet is incredibly simple, and you have put absolutely everything to do with your Grid Layout inside the Feature Queries. This is a very fragile and time-consuming way to work, especially if you are extensively using Grid. In addition, an older browser will not just lack support for Grid Layout, there will be other CSS properties unsupported too. If you are looking for “best practice” then setting yourself up so you are in a good position to test your work is high up there!

There are a couple of straightforward ways to set yourself up with a proper method of testing your fallbacks. The easiest method — if you have a reasonably fast internet connection and don’t mind paying a subscription fee — is to use a service such as BrowserStack. This is a service that enables viewing of websites (even those in development on your computer) on a whole host of real browsers. BrowserStack does offer free accounts for open-source projects.

You can download Virtual Machines for testing from Microsoft.

To test locally, my suggestion would be to use a Virtual Machine with your target browser installed. Microsoft offers free Virtual Machine downloads with versions of IE back to IE8, and also Edge. You can also install onto the VM an older version of a browser with no Grid support at all, for example, a copy of Firefox 51 or below. After installing your elderly Firefox, be sure to turn off automatic updates as explained here, as otherwise it will quietly update itself!

You can then test your site in IE11 and in non-supporting Firefox on one VM (a far less fragile solution than misspelling values). Getting set up might take you an hour or so, but you’ll then be in a really good place to test your fallbacks.

Unlearning Old Habits

“It was my first time to use Grid Layout, so there were a lot of concepts to learn and properties to understand. Conceptually, I found the most difficult thing was to unlearn all the stuff I had done for years, like clearing floats and packing everything in container divs.”

Hidde working on hiddedevries.nl/en

Many of the people responding to the survey mentioned the need to unlearn old habits and how learning Grid Layout would be easier for people completely new to CSS. I tend to agree. When teaching people in person, complete beginners have little problem using Grid, while experienced developers try hard to return Grid to a one-dimensional layout method. I’ve seen attempts at “grid systems” using CSS Grid which add back in the row wrappers needed for a float- or flex-based grid.

Don’t be afraid to try out new techniques. If you have the ability to test in a few browsers and remain mindful of potential issues of accessibility, you really can’t go too far wrong. And, if you find a great way to create a certain pattern, let everyone else know about it. We are all new to using Grid in production, so there is certainly plenty to discover and share.

“Grid Layout is the most exciting CSS development since media queries. It's been so well thought through for real-world developer needs and is an absolute joy to use in production - for designers and developers alike.”

Trys Mudford working on trysmudford.com

To wrap up, here is a very short list of current best practices! If you have discovered things that do or don’t work well in your own situation, add them to the comments.

  1. Be very aware of the possibility of content re-ordering. Check that you have not disconnected the visual display from the document order.
  2. Test using real target browsers with a local or remote Virtual Machine.
  3. Don’t forget that older layout methods are still valid and useful. Try different ways to achieve patterns. Don’t be hung up on having to use Grid.
  4. Know that as an experienced front-end developer you are likely to have a whole set of preconceptions about how layout works. Try to look at these new methods anew rather than forcing them back into old patterns.
  5. Keep trying things out. We’re all new to this. Test your work and share what you discover.
(il)
Categories: Around The Web

Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive

Smashing Magazine - Fri, 04/13/2018 - 8:30am
Monthly Web Development Update 4/2018: On Effort, Bias, And Being Productive, by Anselm Hannemann (2018-04-13)

These days, it is one of the biggest challenges to think long-term. In a world where we live with devices that last only a few months or a few years maybe, where we buy stuff to throw it away only days or weeks later, the term ‘effort’ gains a new meaning.

Recently, I was reading an essay on ‘Yatnah’, ‘Effort’. I spent a lot of time outside in nature in the past weeks and created a small acre to grow some vegetables. I also attended a workshop to learn the craft of grafting fruit trees. When you cut a tree, you realize that our fast-living, short-term lifestyle is very different from how nature works. I grafted a tree that is supposed to grow for decades now, and if you cut a tree that has been there for forty years, it’ll take another forty to grow one that will be similarly tall.

I’d love that we all try to create more long-lasting work, software that works in a decade, and in order to do so, put effort into learning how we can make that happen. So long, I’ll leave you with this quote and a bunch of interesting articles.

“In our modern world it can be tempting to throw effort away and replace it with a few phrases of positive thinking. But there is just no substitute for practice”.

— Kino Macgregor

News
  • The Safari Technology Preview 52 removes support for all NPAPI plug-ins other than Adobe Flash and adds support for preconnect link headers.
  • Chrome 66 Beta brings the CSS Typed Object Model, Async Clipboard API, AudioWorklets, and support to use calc(), min(), and max() in Media Queries. Additionally, select and textarea fields now support the autocomplete attribute, and the catch clause of a try statement can be used without a parameter from now on.
  • iOS 11.3 is available to the public now, and, as already announced, the release brings support for Progressive Web Apps to iOS. Maximiliano Firtman shares what this means, what works and what doesn’t work (yet).
  • Safari 11.1 is now available for everyone. Here is a summary of all the new WebKit features it includes.
Progressive Web Apps for iOS are here. Full screen, offline capable, and even visible in the iPad’s dock. (Image credit)

General
  • Anil Dash reflects on what the web was intended to be and how today’s web differs from this: “At a time when millions are losing trust in the web’s biggest sites, it’s worth revisiting the idea that the web was supposed to be made out of countless little sites. Here’s a look at the neglected technologies that were supposed to make it possible.”
  • Morten Rand-Hendriksen wrote about using ethics in web design and what questions we should ask ourselves when suggesting a solution, creating a design, or a new feature. Especially when we think we’re making something ‘smart’, it’s important to put the question whether it actually helps people first.
  • A lot of protest and discussion came along with the Facebook / Cambridge Analytica affair, most of them pointing out the technological problems with Facebook’s permission model. But the crux lies in how Facebook designed their company and which ethical baseline they set. If we don’t want something like this to happen again, it’s upon us to design the service we want.
  • Brendan Dawes shares why he thinks URLs are a masterpiece and a user experience by themselves.
  • Charlie Owen’s talk transcription of “Dear Developer, The Web Isn’t About You” is a good summary of why we as developers need to think beyond what’s good for us and consider what serves the users and how we can achieve that instead.
UI/UX
  • B. Kaan Kavuştuk shares his thoughts about why we won’t be able to build a perfect design or codebase on the first try, no matter how much experience we have. Instead, it’s the constant small improvements that pave the way to perfection.
  • Trine Falbe introduces us to Ethical Design with a practical getting-started guide. It shows alternatives and things to think about when building a business or product. It doesn’t matter much if you’re the owner, a developer, a designer or a sales person, this is about serving users and setting the ground for real and sustainable trust.
  • Josh Lovejoy shares his learnings from working on inclusive tech solutions and why it takes more than good intention to create fair, inclusive technology. This article goes into depth of why human judgment is very difficult and often based on bias, and why it isn’t easy to design and develop algorithms that treat different people equally because of this.
  • The HSB (Hue, Saturation, Brightness) color system isn’t especially new, but a lot of people still don’t understand its advantages. Erik D. Kennedy explains its principles and advantages step-by-step.
  • While there’s more discussion about inclusive design these days, it’s often seen under the accessibility hat or as technical decisions. Robert del Prado now shares how important inclusive design thinking is and why it’s much more about the generic user than some specific people with specific disabilities. Inclusive design brings people together, regardless of who they are, where they live, and what they can afford. And isn’t it the goal of every product to be successful by acquiring as many people as possible? Maybe we need to discuss this with marketing people as well.
  • Anton Lovchikov shares ways to improve optical adjustments in components. It’s an interesting study on how very small changes can make quite a difference.
Afraid or angry? Which emotion we think the baby is showing depends on whether we think it’s a girl or a boy. Josh Lovejoy explains how personal bias and judgments like this one lead to unfair products. (Image credit)

Tooling
  • Brian Schrader found an unknown feature in Git which is very helpful to test ideas quickly: Git Notes lets us add, remove, or read notes attached to objects, without touching the objects themselves and without needing to commit the current state.
  • For many projects, I prefer to use npm scripts over calling gulp or direct webpack tasks. Michael Kühnel shares some useful tricks for npm scripts, including how to allow CLI option parameters or how to watch tasks and alert notices on error.
  • Anton Sten explains why new tools don’t always equal productivity. We all love new design tools, and new ones such as Sketch, Figma, Xd, or Invision Studio keep popping up. But despite these tools solving a lot of common problems and making some things easier, productivity is mostly about what works for your problem and not what is newest. If you need to create a static mockup and Photoshop is what you know best, why not use it?
  • There’s a new, fast DNS service available from Cloudflare. Finally a better alternative to the widely used Google DNS servers, it is reachable at 1.1.1.1. The new DNS service is the fastest, and probably one of the most secure ones out there, too. Cloudflare put a lot of effort into encrypting the service and partnering up with Mozilla to make DNS over HTTPS work to close a big privacy gap that until now leaked all your browsing data to the DNS provider.
  • I heard a lot about iOS machine learning already, but despite the interesting fact that they’re able to do this on the device without sending everything to a cloud, I haven’t found out how to make use of this for apps, yet. Luckily, Manu Rink put together a nice guide in which she explains machine learning in iOS for beginners.
  • There’s great news for the Git GUI fans: Tower now offers a new beta version that includes pull request support, interactive rebase workflows, quick actions, reflog, and search. An amazing update that makes working with the software much faster than before, and even for me as a command line lover it’s a nice option.
Manu Rink shows how machine learning in iOS works by building an offline handwritten text recognition. (Image credit)

Security

Web Performance

Accessibility

CSS
  • Amber Wilson shares some insights into what it feels like to be thrown into a complex project in order to do the styling there. She rightly says that “nobody said CSS is easy” and expresses how important it is that we as developers face inconvenient situations in order to grow our knowledge.
  • Ana Tudor is known for her special CSS skills. Now she explores and describes how we can achieve scooped corners in CSS with some clever tricks.
Scooped corners? Ana Tudor shows how to do it. (Image credit)

JavaScript
  • WebKit got an upgrade for the Clipboard API, and the team gives some very interesting insights into how it works and how Safari will handle some of the common challenges with clipboard data (e.g. images).
  • If you work with key value stores that live only in the frontend, IDB-Keyval is a great lightweight library that simplifies working with IndexedDB and localStorage.
  • Ever wanted to create graphics from your data with a hand-drawn, sketchy look on a website? Rough.js lets you do just that. It’s usually Canvas-based (for better performance and less data) but can also draw SVG paths.
  • If you need a drag-and-drop reorder module, there’s a smooth and accessible solution available now: dragon-drop.
  • For many years, we could only get CSS values in their computed value and even that wasn’t flexible or nice to work with. But now CSS has a proper object-based API for working with values in JavaScript: the CSS Typed Object Model. It’s only available in the upcoming Chrome 66 yet but definitely a promising feature I’d love to use in my code soon.
  • The React.js documentation now has an extra section that explains how to easily and programmatically manage focus states to ensure your UI is accessible.
  • James Milner shares how we can use abortable fetch to cancel requests.
  • There are a few articles about Web Push Notifications out there already, but Oleksii Rudenko’s getting-started guide is a great primer that explains the principles very well.
  • In the past years, we got a lot of new features on the JavaScript platform. And since it’s hard to remember all the new stuff, Raja Rao DV summed up “Everything new in ECMAScript 2016, 2017, and 2018”.
Work & Life
  • To raise awareness for how common such situations are for all of us, James Bennett shares an embarrassing situation where he made a simple mistake that took him a long time to find out. It’s not just me making mistakes, it’s not just you, and not just James — all of us make mistakes, and as embarrassing as they seem to be in that particular situation, there’s nothing to feel bad about.
  • Adam Blanchard says “People are machines. We need maintenance, too.” and creates a comparison for engineers to understand why we need to take care of ourselves and also why we need people who take care of us. This is an insight into what People Engineers do, and why it’s so important for companies to hire such people to ensure a team is healthy.
  • If there’s one thing we don’t talk much about in the web industry, it’s retirement. Jan Chipchase now wrote a lot of interesting thoughts all about retirement.
  • Rebecca Downes shares some insights into her PhD on remote teams, revealing under which circumstances remote teams are great and under which they’re not.
People need maintenance, too. That’s where the People Engineer comes in. (Image credit)

Going Beyond…

We hope you enjoyed this Web Development Update. The next one is scheduled for Friday, May 18th. Stay tuned.

(cm)
Categories: Around The Web

Automating Your Feature Testing With Selenium WebDriver

Smashing Magazine - Thu, 04/12/2018 - 6:45am
Automating Your Feature Testing With Selenium WebDriver, by Nils Schütte (2018-04-12)

This article is for web developers who wish to spend less time testing the front end of their web applications but still want to be confident that every feature works fine. It will save you time by automating repetitive online tasks with Selenium WebDriver. You will find a step-by-step example for automating and testing the login function of WordPress, but you can also adapt the example for any other login form.

What Is Selenium And How Can It Help You?

Selenium is a framework for the automated testing of web applications. Using Selenium, you can basically automate every task in your browser as if a real person were to execute the task. The interface used to send commands to the different browsers is called Selenium WebDriver. Implementations of this interface are available for every major browser, including Mozilla Firefox, Google Chrome and Internet Explorer.

Automating Your Feature Testing With Selenium WebDriver

Which type of web developer are you? Are you the disciplined type who tests all key features of your web application after each deployment? If so, you are probably annoyed by how much time this repetitive testing consumes. Or are you the type who just doesn’t bother with testing key features and always thinks, “I should test more, but I’d rather develop new stuff”? If so, you probably only find bugs by chance or when your client or boss complains about them.

I have been working for a well-known online retailer in Germany for quite a while, and I always belonged to the second category: It was so exciting to think of new features for the online shop, and I didn’t like at all going over all of the previous features again after each new software deployment. So, the strategy was more or less to hope that all key features would work.

One day, we had a serious drop in our conversion rate and started digging in our web analytics tools to find the source of this drop. It took quite a while before we found out that our checkout had not been working properly since the previous software deployment.

This was the day when I started to do some research about automating our testing process of web applications, and I stumbled upon Selenium and its WebDriver. Selenium is basically a framework that allows you to automate web browsers. WebDriver is the name of the key interface that allows you to send commands to all major browsers (mobile and desktop) and work with them as a real user would.

Preparing The First Test With Selenium WebDriver

First, I was a little skeptical of whether Selenium would suit my needs because the framework is most commonly used in Java, and I am certainly not a Java expert. Later, I learned that being a Java expert is not necessary to take advantage of the power of the Selenium framework.

As a simple first test, I tested the login of one of my WordPress projects. Why WordPress? Just because using the WordPress login form is an example that everybody can follow more easily than if I were to refer to some custom web application.

What do you need to start using Selenium WebDriver? Because I decided to use the most common implementation of Selenium in Java, I needed to set up my little Java environment.

If you want to follow my example, you can use the Java environment of your choice. If you haven’t set one up yet, I suggest installing Eclipse and making sure you are able to run a simple “Hello world” script in Java.

Because I wanted to test the login in Chrome, I made sure that the Chrome browser was already installed on my machine. That’s all I did in preparation.

Downloading The ChromeDriver

All major browsers provide their own implementation of the WebDriver interface. Because I wanted to test the WordPress login in Chrome, I needed to get the WebDriver implementation of Chrome: ChromeDriver.

I extracted the ZIP archive and stored the executable file chromedriver.exe in a location that I could remember for later.

Setting Up Our Selenium Project In Eclipse

The steps I took in Eclipse are probably pretty basic to someone who works a lot with Java and Eclipse. But for those like me, who are not so familiar with this, I will go over the individual steps:

  1. Open Eclipse.
  2. Click the "New" icon.
    Creating a new project in Eclipse
  3. Choose the wizard to create a new "Java Project," and click “Next.”
    Choose the java-project wizard.
  4. Give your project a name, and click "Finish."
    The eclipse project wizard
  5. Now you should see your new Java project on the left side of the screen.
    We successfully created a project to run the Selenium WebDriver.
Adding The Selenium Library To Our Project

Now we have our Java project, but Selenium is still missing. So, next, we need to bring the Selenium framework into our Java project. Here are the steps I took:

  1. Download the latest version of the Java Selenium library.
    Download the Selenium library.
  2. Extract the archive, and store the folder in a place you can remember easily.
  3. Go back to Eclipse, and go to "Project" → “Properties.”
    Go to properties to integrate the Selenium WebDriver in your project.
  4. In the dialog, go to "Java Build Path" and then to the “Libraries” tab.
  5. Click on "Add External JARs."
    Add the Selenium lib to your Java build path.
  6. Navigate to the folder with the Selenium library that you just downloaded. Highlight all .jar files, and click "Open."
    Select all files of the lib to add to your project.
  7. Repeat this for all .jar files in the subfolder libs as well.
  8. Eventually, you should see all .jar files in the libraries of your project:
    The Selenium WebDriver framework has now been successfully integrated into your project!

That’s it! Everything we’ve done until now is a one-time task. You could use this project now for all of your different tests, and you wouldn’t need to do the whole setup process for every test case again. Kind of neat, isn’t it?

Creating Our Testing Class And Letting It Open the Chrome Browser

Now we have our Selenium project, but what next? To see whether it works at all, I wanted to try something really simple, like just opening my Chrome browser.

To do this, I needed to create a new Java class from which I could execute my first test case. Into this executable class, I copied a few Java code lines, and believe it or not, it worked! Magically, the Chrome browser opened and, after a few seconds, closed all by itself.

Try it yourself:

  1. Click on the "New" button again (while you are in your new project’s folder).
    Create a new class to run the Selenium WebDriver.
  2. Choose the "Class" wizard, and click “Next.”
    Choose the Java class wizard to create a new class.
  3. Name your class (for example, "RunTest"), and click “Finish.”
    The eclipse Java Class wizard.
  4. Replace all code in your new class with the following code. The only thing you need to change is the path to chromedriver.exe on your computer:
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    /**
     * @author Nils Schuette via frontendtest.org
     */
    public class RunTest {
        static WebDriver webDriver;

        /**
         * @param args
         * @throws InterruptedException
         */
        public static void main(final String[] args) throws InterruptedException {
            // Telling the system where to find the chrome driver
            System.setProperty(
                    "webdriver.chrome.driver",
                    "C:/PATH/TO/chromedriver.exe");

            // Open the Chrome browser
            webDriver = new ChromeDriver();

            // Waiting a bit before closing
            Thread.sleep(7000);

            // Closing the browser and WebDriver
            webDriver.close();
            webDriver.quit();
        }
    }
  5. Save your file, and click on the play button to run your code.
    Running your first Selenium WebDriver project.
  6. If you have done everything correctly, the code should open a new instance of the Chrome browser and close it shortly thereafter.
    The Chrome Browser opens itself magically. (Large preview)
Testing The WordPress Admin Login

Now I was optimistic that I could automate my first little feature test. I wanted the browser to navigate to one of my WordPress projects, log in to the admin area and verify that the login was successful. So, what commands did I need to look up?

  1. Navigate to the login form,
  2. Locate the input fields,
  3. Type the username and password into the input fields,
  4. Hit the login button,
  5. Compare the current page’s headline to see if the login was successful.

Again, after I had done all the necessary updates to my code and clicked on the run button in Eclipse, my browser started to magically work itself through the WordPress login. I successfully ran my first automated website test!

If you want to try this yourself, replace all of the code of your Java class with the following. I will go through the code in detail afterwards. Before executing the code, you must replace four values with your own:

  1. The location of your chromedriver.exe file (as above),

  2. The URL of the WordPress admin account that you want to test,

  3. The WordPress username,

  4. The WordPress password.

Then, save and let it run again. It will open Chrome, navigate to the login of your WordPress website, log in and check whether the h1 headline of the current page is “Dashboard.”

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

/**
 * @author Nils Schuette via frontendtest.org
 */
public class RunTest {
    static WebDriver webDriver;

    /**
     * @param args
     * @throws InterruptedException
     */
    public static void main(final String[] args) throws InterruptedException {
        // Telling the system where to find the chrome driver
        System.setProperty(
                "webdriver.chrome.driver",
                "C:/PATH/TO/chromedriver.exe");

        // Open the Chrome browser
        webDriver = new ChromeDriver();

        // Maximize the browser window
        webDriver.manage().window().maximize();

        if (testWordpresslogin()) {
            System.out.println("Test Wordpress Login: Passed");
        } else {
            System.out.println("Test Wordpress Login: Failed");
        }

        // Close the browser and WebDriver
        webDriver.close();
        webDriver.quit();
    }

    private static boolean testWordpresslogin() {
        try {
            // Open the WordPress admin login page
            webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

            // Type in the username
            webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

            // Type in the password
            webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");

            // Click the Submit button
            webDriver.findElement(By.id("wp-submit")).click();

            // Wait a little bit (7000 milliseconds)
            Thread.sleep(7000);

            // Check whether the h1 equals “Dashboard”
            if (webDriver.findElement(By.tagName("h1")).getText()
                    .equals("Dashboard")) {
                return true;
            } else {
                return false;
            }

        // If anything goes wrong, return false.
        } catch (final Exception e) {
            System.out.println(e.getClass().toString());
            return false;
        }
    }
}

If you have done everything correctly, your output in the Eclipse console should look something like this:

The Eclipse console states that our first test has passed. (Large preview)
Understanding The Code

Because you are probably a web developer and have at least a basic understanding of other programming languages, I am sure you already grasp the basic idea of the code: We have created a separate method, testWordpresslogin, for this specific test case; it is called from our main method.

Depending on whether the method returns true or false, you will get an output in your console telling you whether this specific test passed or failed.

This is not necessary, but this way you can easily add many more test cases to this class and still have readable code.

Now, step by step, here is what happens in our little program:

  1. First, we tell our program where it can find the specific WebDriver for Chrome.
    System.setProperty("webdriver.chrome.driver","C:/PATH/TO/chromedriver.exe");
  2. We open the Chrome browser and maximize the browser window.
    webDriver = new ChromeDriver(); webDriver.manage().window().maximize();
  3. This is where we jump into our submethod and check whether it returns true or false.
    if (testWordpresslogin()) …
  4. The following part in our submethod might not be intuitive to understand:
    The try{…}catch{…} blocks. If everything goes as expected, only the code in try{…} will be executed, but if anything goes wrong while executing try{…}, then the execution continues in catch{}. Whenever you try to locate an element with findElement and the browser is not able to locate this element, it will throw an exception and execute the code in catch{…}. In my example, the test will be marked as "failed" whenever something goes wrong and the catch{} is executed.
  5. In the submethod, we start by navigating to our WordPress admin area and locating the fields for the username and the password by looking for their IDs. Also, we type the given values in these fields.
    webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/"); webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME"); webDriver.findElement(By.id("user_pass")).sendKeys("YOUR_PASSWORD");
    Selenium fills out our login form
  6. After filling in the login form, we locate the submit button by its ID and click it.
    webDriver.findElement(By.id("wp-submit")).click();
  7. In order to follow the test visually, I include a 7-second pause here (7000 milliseconds = 7 seconds).
    Thread.sleep(7000);
  8. If the login is successful, the h1 headline of the current page should now be "Dashboard," referring to the WordPress admin area. Because the h1 headline should exist only once on every page, I have used the tag name here to locate the element. In most other cases, the tag name is not a good locator because an HTML tag name is rarely unique on a web page. After locating the h1, we extract the text of the element with getText() and check whether it is equal to the string “Dashboard.” If the login is not successful, we would not find “Dashboard” as the current h1. Therefore, I’ve decided to use the h1 to check whether the login is successful.
    if (webDriver.findElement(By.tagName("h1")).getText().equals("Dashboard")) { return true; } else { return false; }
    Letting the WebDriver check whether we have arrived on the Dashboard: Test passed! (Large preview)
  9. If anything has gone wrong in the previous part of the submethod, the program would have jumped directly to the following part. The catch block will print the type of exception that happened to the console and afterwards return false to the main method.
    catch (final Exception e) { System.out.println(e.getClass().toString()); return false; }
Adapting The Test Case

This is where it gets interesting if you want to adapt and add test cases of your own. You can see that we always call methods of the webDriver object to do something with the Chrome browser.

First, we maximize the window:

webDriver.manage().window().maximize();

Then, in a separate method, we navigate to our WordPress admin area:

webDriver.navigate().to("https://www.YOUR-SITE.org/wp-admin/");

There are other methods of the webDriver object we can use. Besides the two above, you will probably use this one a lot:

webDriver.findElement(By. …)

The findElement method helps us find different elements in the DOM. There are different options to find elements:

  • By.id
  • By.cssSelector
  • By.className
  • By.linkText
  • By.name
  • By.xpath

If possible, I recommend using By.id because the ID of an element should always be unique (unlike, for example, the className), and it is usually not affected if the structure of your DOM changes (unlike, say, an XPath expression).
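
To make these options concrete, here is a small, hypothetical Java sketch. The selectors are made up for illustration (loosely based on the WordPress login page used above), each line shows one locator strategy, and the snippet additionally assumes import org.openqa.selenium.WebElement.

// Hypothetical locator examples; the selectors are for illustration only.
WebElement username = webDriver.findElement(By.id("user_login"));
WebElement submit = webDriver.findElement(By.cssSelector("#loginform input[type='submit']"));
WebElement primaryButton = webDriver.findElement(By.className("button-primary"));
WebElement lostPassword = webDriver.findElement(By.linkText("Lost your password?"));
WebElement usernameByName = webDriver.findElement(By.name("log"));
WebElement password = webDriver.findElement(By.xpath("//form[@id='loginform']//input[@id='user_pass']"));

As you can see, the By.id lookups read the cleanest, while the XPath variant would break as soon as the structure of the form changes.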

Note: You can read more about the different options for locating elements with WebDriver over here.

As soon as you get ahold of an element using the findElement method, you can call the different available methods of the element. The most common ones are sendKeys, click and getText.

We’re using sendKeys to fill in the login form:

webDriver.findElement(By.id("user_login")).sendKeys("YOUR_USERNAME");

We have used click to submit the login form by clicking on the submit button:

webDriver.findElement(By.id("wp-submit")).click();

And getText has been used to check what text is in the h1 after the submit button is clicked:

webDriver.findElement(By.tagName("h1")).getText()

Note: Be sure to check out all the available methods that you can use with an element.

Conclusion

Ever since I discovered the power of Selenium WebDriver, my life as a web developer has changed. I simply love it. The deeper I dive into the framework, the more possibilities I discover — running one test simultaneously in Chrome, Internet Explorer and Firefox or even on my smartphone, or taking screenshots automatically of different pages and comparing them. Today, I use Selenium WebDriver not only for testing purposes, but also to automate repetitive tasks on the web. Whenever I see an opportunity to automate my work on the web, I simply copy my initial WebDriver project and adapt it to the next task.

If you think that Selenium WebDriver is for you, I recommend looking at Selenium’s documentation to find out about all of the possibilities of Selenium (such as running tasks simultaneously on several (mobile) devices with Selenium Grid).

I look forward to hearing whether you find WebDriver as useful as I do!

(rb, ra, al, il)
Categories: Around The Web

Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them

Smashing Magazine - Wed, 04/11/2018 - 11:00am
Will SiriKit’s Intents Fit Your App? If So, Here’s How To Use Them, by Lou Franco (2018-04-11T17:00:44+02:00)

Since iOS 5, Siri has helped iPhone users send messages, set reminders and look up restaurants with Apple’s apps. Starting in iOS 10, we have been able to use Siri in some of our own apps as well.

In order to use this functionality, your app must fit within Apple’s predefined Siri “domains and intents.” In this article, we’ll learn about what those are and see whether our apps can use them. We’ll take a simple app that is a to-do list manager and learn how to add Siri support. We’ll also go through the configuration needed on the Apple developer website and the Swift code for a new type of extension that was introduced with SiriKit: the Intents extension.

When you get to the coding part of this article, you will need Xcode (at least version 9.x), and it would be good if you are familiar with iOS development in Swift because we’re going to add Siri to a small working app. We’ll go through the steps of setting up an extension on Apple’s developer website and of adding the Siri extension code to the app.

“Hey Siri, Why Do I Need You?”

Sometimes I use my phone while on my couch, with both hands free, and I can give the screen my full attention. Maybe I’ll text my sister to plan our mom’s birthday or reply to a question in Trello. I can see the app. I can tap the screen. I can type.

But I might be walking around my town, listening to a podcast, when a text comes in on my watch. My phone is in my pocket, and I can’t easily answer while walking.

With Siri, I can hold down my headphone’s control button and say, “Text my sister that I’ll be there by two o’clock.” Siri is great when you are on the go and can’t give full attention to your phone, or when the interaction is minor but would otherwise require several taps and a bunch of typing.

This is fine if I want to use Apple apps for these interactions. But some categories of apps, like messaging, have very popular alternatives. Other activities, such as booking a ride or reserving a table in a restaurant, are not even possible with Apple’s built-in apps but are perfect for Siri.

Apple’s Approach To Voice Assistants

To enable Siri in third-party apps, Apple had to decide on a mechanism to take the sound from the user’s voice and somehow get it to the app in a way that it could fulfill the request. To make this possible, Apple requires the user to mention the app’s name in the request, but they had several options of what to do with the rest of the request.

  • It could have sent a sound file to the app.
    The benefit of this approach is that the app could try to handle literally any request the user might have for it. Amazon or Google might have liked this approach because they already have sophisticated voice-recognition services. But most apps would not be able to handle this very easily.
  • It could have turned the speech into text and sent that.
    Because many apps don’t have sophisticated natural-language implementations, the user would usually have to stick to very particular phrases, and non-English support would be up to the app developer to implement.
  • It could have asked you to provide a list of phrases that your app understands.
    This mechanism is closer to what Amazon does with Alexa (in its “skills” framework), and it enables far more uses of Alexa than SiriKit can currently handle. In an Alexa skill, you provide phrases with placeholder variables that Alexa will fill in for you. For example, “Alexa, remind me at $TIME$ to $REMINDER$” — Alexa will run this phrase against what the user has said and tell you the values for TIME and REMINDER. As with the previous mechanism, the developer needs to do all of the translation, and there isn’t a lot of flexibility if the user says something slightly different.
  • It could define a list of requests with parameters and send the app a structured request.
    This is actually what Apple does, and the benefit is that it can support a variety of languages, and it does all of the work to try to understand all of the ways a user might phrase a request. The big downside is that you can only implement handlers for requests that Apple defines. This is great if you have, for example, a messaging app, but if you have a music-streaming service or a podcast player, you have no way to use SiriKit right now.

Similarly, there are three ways for apps to talk back to the user: with sound, with text that gets converted, or by expressing the kind of thing you want to say and letting the system figure out the exact way to express it. The last solution (which is what Apple does) puts the burden of translation on Apple, but it gives you limited ways to use your own words to describe things.

The kinds of requests you can handle are defined in SiriKit’s domains and intents. An intent is a type of request that a user might make, like texting a contact or finding a photo. Each intent has a list of parameters — for example, texting requires a contact and a message.

A domain is just a group of related intents. Reading a text and sending a text are both in the messaging domain. Booking a ride and getting a location are in the ride-booking domain. There are domains for making VoIP calls, starting workouts, searching for photos and a few more things. SiriKit’s documentation contains a full list of domains and their intents.

A common criticism of Siri is that it seems unable to handle requests as well as Google and Alexa, and that the third-party voice ecosystem enabled by Apple’s competitors is richer.

I agree with those criticisms. If your app doesn’t fit within the current intents, then you can’t use SiriKit, and there’s nothing you can do. Even if your app does fit, you can’t control all of the words Siri says or understands; so, if you have a particular way of talking about things in your app, you can’t always teach that to Siri.

The hope of iOS developers is both that Apple will greatly expand its list of intents and that its natural language processing becomes much better. If it does that, then we will have a voice assistant that works without developers having to do translation or understand all of the ways of saying the same thing. And implementing support for structured requests is actually fairly simple to do — a lot easier than building a natural language parser.

Another big benefit of the intents framework is that it is not limited to Siri and voice requests. Even now, the Maps app can generate an intents-based request of your app (for example, a restaurant reservation). It does this programmatically (not from voice or natural language). If Apple allowed apps to discover each other’s exposed intents, we’d have a much better way for apps to work together (as opposed to x-callback-style URLs).

Finally, because an intent is a structured request with parameters, there is a simple way for an app to express that parameters are missing or that it needs help distinguishing between some options. Siri can then ask follow-up questions to resolve the parameters without the app needing to conduct the conversation.

The Ride-Booking Domain

To understand domains and intents, let’s look at the ride-booking domain. This is the domain that you would use to ask Siri to get you a Lyft car.

Apple defines how to ask for a ride and how to get information about it, but there is no built-in Apple app that can actually handle this request. This is one of the few domains where a SiriKit-enabled app is required.

You can invoke one of the intents via voice or directly from Maps. Some of the intents for this domain are:

  • Request a ride
    Use this one to book a ride. You’ll need to provide a pick-up and drop-off location, and the app might also need to know your party’s size and what kind of ride you want. A sample phrase might be, “Book me a ride with <appname>.”
  • Get the ride’s status
    Use this intent to find out whether your request was received and to get information about the vehicle and driver, including their location. The Maps app uses this intent to show an updated image of the car as it is approaching you.
  • Cancel a ride
    Use this to cancel a ride that you have booked.

For any of these intents, Siri might need to know more information. As you’ll see when we implement an intent handler, your Intents extension can tell Siri that a required parameter is missing, and Siri will prompt the user for it.

The fact that intents can be invoked programmatically by Maps shows how intents might enable inter-app communication in the future.

Note: You can get a full list of domains and their intents on Apple’s developer website. There is also a sample Apple app with many domains and intents implemented, including ride-booking.

Adding Lists And Notes Domain Support To Your App

OK, now that we understand the basics of SiriKit, let’s look at how you would go about adding Siri support to an app. It involves a lot of configuration, plus a class for each intent you want to handle.

The rest of this article consists of the detailed steps to add Siri support to an app. There are five high-level things you need to do:

  1. Prepare to add a new extension to the app by creating provisioning profiles with new entitlements for it on Apple’s developer website.
  2. Configure your app (via its plist) to use the entitlements.
  3. Use Xcode’s template to get started with some sample code.
  4. Add the code to support your Siri intent.
  5. Configure Siri’s vocabulary via plists.

Don’t worry: We’ll go through each of these, explaining extensions and entitlements along the way.

To focus on just the Siri parts, I’ve prepared a simple to-do list manager, List-o-Mat.

Making lists in List-o-Mat (Large preview)

You can find the full source of the sample, List-o-Mat, on GitHub.

To create it, all I did was start with the Xcode Master-Detail app template and make both screens into a UITableView. I added a way to add and delete lists and items, and a way to check off items as done. All of the navigation is generated by the template.

To store the data, I used the Codable protocol (introduced at WWDC 2017), which turns structs into JSON and saves them in a text file in the documents folder.
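
As a rough sketch of how that can look (the struct and file names below are simplified stand-ins, not the actual List-o-Mat source), a Codable type can be encoded with JSONEncoder and written straight to the documents folder:

import Foundation

// Simplified stand-in types, not the real List-o-Mat models.
struct TodoItem: Codable {
    var name: String
    var done: Bool
}

struct TodoList: Codable {
    var name: String
    var items: [TodoItem]
}

// Encode the lists as JSON and write them to a text file in the documents folder.
func save(lists: [TodoList], to fileName: String) throws {
    let docsDir = try FileManager.default.url(for: .documentDirectory,
                                              in: .userDomainMask,
                                              appropriateFor: nil,
                                              create: true)
    let url = docsDir.appendingPathComponent(fileName, isDirectory: false)
    let data = try JSONEncoder().encode(lists)
    try data.write(to: url)
}

Reading the data back is the mirror image: load the file with Data(contentsOf:) and decode it with JSONDecoder.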

I’ve deliberately kept the code very simple. If you have any experience with Swift and making view controllers, then you should have no problem with it.

Now we can go through the steps of adding SiriKit support. The high-level steps would be the same for any app and whichever domain and intents you plan to implement. We’ll mostly be dealing with Apple’s developer website, editing plists and writing a bit of Swift.

For List-o-Mat, we’ll focus on the lists and notes domain, which is broadly applicable to things like note-taking apps and to-do lists.

In the lists and notes domain, we have the following intents that would make sense for our app.

  • Get a list of tasks.
  • Add a new task to a list.

Because the interactions with Siri actually happen outside of your app (maybe even when your app is not running), iOS uses an extension to implement this.

The Intents Extension

If you have not worked with extensions, you’ll need to know three main things:

  1. An extension is a separate process. It is delivered inside of your app’s bundle, but it runs completely on its own, with its own sandbox.
  2. Your app and extension can communicate with each other by being in the same app group. The easiest way is via the group’s shared sandbox folders (so, they can read and write to the same files if you put them there).
  3. Extensions require their own app IDs, profiles and entitlements.

To add an extension to your app, start by logging into your developer account and going to the “Certificates, Identifiers, & Profiles” section.

Updating Your Apple Developer App Account Data

In our Apple developer account, the first thing we need to do is create an app group. Go to the “App Groups” section under “Identifiers” and add one.

Registering an app group (Large preview)

It must start with group, followed by your usual reverse domain-based identifier. Because it has a prefix, you can use your app’s identifier for the rest.

Then, we need to update our app’s ID to use this group and to enable Siri:

  1. Go to the “App IDs” section and click on your app’s ID;
  2. Click the “Edit” button;
  3. Enable app groups (if not enabled for another extension).
    Enable app groups (Large preview)
  4. Then configure the app group by clicking the “Edit” button. Choose the app group from before.
    Set the name of the app group (Large preview)
  5. Enable SiriKit.
    Enable SiriKit (Large preview)
  6. Click “Done” to save it.

Now, we need to create a new app ID for our extension:

  1. In the same “App IDs” section, add a new app ID. This will be your app’s identifier, with a suffix. Do not use just Intents as a suffix because this name will become your module’s name in Swift and would then conflict with the real Intents.
    Create an app ID for the Intents extension (Large preview)
  2. Enable this app ID for app groups as well (and set up the group as we did before).

Now, create a development provisioning profile for the Intents extension, and regenerate your app’s provisioning profile. Download and install them as you would normally do.

Now that our profiles are installed, we need to go to Xcode and update the app’s entitlements.

Updating Your App’s Entitlements In Xcode

Back in Xcode, choose your project’s name in the project navigator. Then, choose your app’s main target, and go to the “Capabilities” tab. In there, you will see a switch to turn on Siri support.

Enable SiriKit in your app's entitlements. (Large preview)

Further down the list, you can turn on app groups and configure it.

Configure the app's app group (Large preview)

If you have set it up correctly, you’ll see this in your app’s .entitlements file:

The plist shows the entitlements that you set (Large preview)

Now, we are finally ready to add the Intents extension target to our project.

Adding The Intents Extension

We’re finally ready to add the extension. In Xcode, choose “File” → “New Target.” This sheet will pop up:

Add the Intents extension to your project (Large preview)

Choose “Intents Extension” and click the “Next” button. Fill out the following screen:

Configure the Intents extension (Large preview)

The product name needs to match whatever you made the suffix in the intents app ID on the Apple developer website.

We are choosing not to add an intents UI extension. This isn’t covered in this article, but you could add it later if you need one. Basically, it’s a way to put your own branding and display style into Siri’s visual results.

When you are done, Xcode will create an intents handler class that we can use as a starting point for our Siri implementation.

The Intents Handler: Resolve, Confirm And Handle

Xcode generated a new target that has a starting point for us.

The first thing you have to do is set up this new target to be in the same app group as the app. As before, go to the “Capabilities” tab of the target, turn on app groups, and configure them with your group name. Remember, apps in the same group have a sandbox that they can use to share files with each other. We need this in order for Siri requests to get to our app.

List-o-Mat has a function that returns the group document folder. We should use it whenever we want to read or write to a shared file.

func documentsFolder() -> URL? {
    return FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: "group.com.app-o-mat.ListOMat")
}

For example, when we save the lists, we use this:

func save(lists: Lists) {
    guard let docsDir = documentsFolder() else {
        fatalError("no docs dir")
    }
    let url = docsDir.appendingPathComponent(fileName, isDirectory: false)
    // Encode lists as JSON and save to url
}

The Intents extension template created a file named IntentHandler.swift, with a class named IntentHandler. It also configured it to be the intents’ entry point in the extension’s plist.

The intent extension plist configures IntentHandler as the entry point

In this same plist, you will see a section to declare the intents we support. We’re going to start with the one that allows searching for lists, which is named INSearchForNotebookItemsIntent. Add it to the array under IntentsSupported.

Add the intent’s name to the intents plist (Large preview)

Now, go to IntentHandler.swift and replace its contents with this code:

import Intents

class IntentHandler: INExtension {
    override func handler(for intent: INIntent) -> Any? {
        switch intent {
        case is INSearchForNotebookItemsIntent:
            return SearchItemsIntentHandler()
        default:
            return nil
        }
    }
}

The handler function is called to get an object to handle a specific intent. You can just implement all of the protocols in this class and return self, but we’ll put each intent in its own class to keep it better organized.

Because we intend to have a few different classes, let’s give them a common base class for code that we need to share between them:

class ListOMatIntentsHandler: NSObject { }

The intents framework requires us to inherit from NSObject. We’ll fill in some methods later.

We start our search implementation with this:

class SearchItemsIntentHandler: ListOMatIntentsHandler, INSearchForNotebookItemsIntentHandling { }

For an intent handler, we need to implement three basic steps:

  1. Resolve the parameters.
    Make sure required parameters are given, and disambiguate any you don’t fully understand.
  2. Confirm that the request is doable.
    This is often optional, but even if you know that each parameter is good, you might still need access to an outside resource or have other requirements.
  3. Handle the request.
    Do the thing that is being requested.

INSearchForNotebookItemsIntent, the first intent we’ll implement, can be used as a task search. The kinds of requests we can handle with this are, “In List-o-Mat, show the grocery store list” or “In List-o-Mat, show the store list.”

Aside: “List-o-Mat” is actually a bad name for a SiriKit app because Siri has a hard time with hyphens in app names. Luckily, SiriKit allows us to have alternate names and to provide pronunciation. In the app’s Info.plist, add this section:

Add alternate app names and pronunciation guides to the app plist

This allows the user to say “list oh mat” and for that to be understood as a single word (without hyphens). It doesn’t look ideal on the screen, but without it, Siri sometimes thinks “List” and “Mat” are separate words and gets very confused.

Resolve: Figuring Out The Parameters

For a search for notebook items, there are several parameters:

  1. the item type (a task, a task list, or a note),
  2. the title of the item,
  3. the content of the item,
  4. the completion status (whether the task is marked done or not),
  5. the location it is associated with,
  6. the date it is associated with.

We require only the first two, so we’ll need to write resolve functions for them. INSearchForNotebookItemsIntent has methods for us to implement.

Because we only care about showing task lists, we’ll hardcode that into the resolve for item type. In SearchItemsIntentHandler, add this:

func resolveItemType(for intent: INSearchForNotebookItemsIntent, with completion: @escaping (INNotebookItemTypeResolutionResult) -> Void) { completion(.success(with: .taskList)) }

So, no matter what the user says, we’ll be searching for task lists. If we wanted to expand our search support, we’d let Siri try to figure this out from the original phrase and then just use completion(.needsValue()) if the item type was missing. Alternatively, we could try to guess from the title by seeing what matches it. In this case, we would complete with success when Siri knows what it is, and we would use completion(.notRequired()) when we are going to try multiple possibilities.

Title resolution is a little trickier. What we want is for Siri to use a list if it finds one with an exact match for what you said. If it’s unsure or if there is more than one possibility, then we want Siri to ask us for help in figuring it out. To do this, SiriKit provides a set of resolution enums that let us express what we want to happen next.

So, if you say “Grocery Store,” then Siri would have an exact match. But if you say “Store,” then Siri would present a menu of matching lists.

We’ll start with this function to give the basic structure:

func resolveTitle(for intent: INSearchForNotebookItemsIntent, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    guard let title = intent.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveListName(with: possibleLists, for: title, with: completion)
}

We’ll implement getPossibleLists(for:) and completeResolveListName(with:for:with:) in the ListOMatIntentsHandler base class.

getPossibleLists(for:) needs to try to fuzzy match the title that Siri passes us with the actual list names.

public func getPossibleLists(for listName: INSpeakableString) -> [INSpeakableString] {
    var possibleLists = [INSpeakableString]()
    for l in loadLists() {
        if l.name.lowercased() == listName.spokenPhrase.lowercased() {
            return [INSpeakableString(spokenPhrase: l.name)]
        }
        if l.name.lowercased().contains(listName.spokenPhrase.lowercased()) || listName.spokenPhrase.lowercased() == "all" {
            possibleLists.append(INSpeakableString(spokenPhrase: l.name))
        }
    }
    return possibleLists
}

We loop through all of our lists. If we get an exact match, we’ll return it, and if not, we’ll return an array of possibilities. In this function, we’re simply checking to see whether the word the user said is contained in a list name (so, a pretty simple match). This lets “Grocery” match “Grocery Store.” A more advanced algorithm might try to match based on words that sound the same (for example, with the Soundex algorithm).

completeResolveListName(with:for:with:) is responsible for deciding what to do with this list of possibilities.

public func completeResolveListName(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INSpeakableStringResolutionResult) -> Void) {
    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: possibleLists[0]))
        } else {
            completion(.confirmationRequired(with: possibleLists[0]))
        }
    default:
        completion(.disambiguation(with: possibleLists))
    }
}

If we got an exact match, we tell Siri that we succeeded. If we got one inexact match, we tell Siri to ask the user if we guessed it right.

If we got multiple matches, then we use completion(.disambiguation(with: possibleLists)) to tell Siri to show a list and let the user pick one.

Now that we know what the request is, we need to look at the whole thing and make sure we can handle it.

Confirm: Check All Of Your Dependencies

In this case, if we have resolved all of the parameters, we can always handle the request. Typical confirm() implementations might check the availability of external services or check authorization levels.

Because confirm() is optional, we could just do nothing, and Siri would assume we could handle any request with resolved parameters. To be explicit, we could use this:

func confirm(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) { completion(INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil)) }

This means we can handle anything.

Handle: Do It

The final step is to handle the request.

func handle(intent: INSearchForNotebookItemsIntent, completion: @escaping (INSearchForNotebookItemsIntentResponse) -> Void) {
    guard let title = intent.title,
          let list = loadLists().filter({ $0.name.lowercased() == title.spokenPhrase.lowercased() }).first
    else {
        completion(INSearchForNotebookItemsIntentResponse(code: .failure, userActivity: nil))
        return
    }

    let response = INSearchForNotebookItemsIntentResponse(code: .success, userActivity: nil)
    response.tasks = list.items.map {
        return INTask(title: INSpeakableString(spokenPhrase: $0.name),
                      status: $0.done ? INTaskStatus.completed : INTaskStatus.notCompleted,
                      taskType: INTaskType.notCompletable,
                      spatialEventTrigger: nil,
                      temporalEventTrigger: nil,
                      createdDateComponents: nil,
                      modifiedDateComponents: nil,
                      identifier: "\(list.name)\t\($0.name)")
    }
    completion(response)
}

First, we find the list based on the title. At this point, resolveTitle has already made sure that we’ll get an exact match. But if there’s an issue, we can still return a failure.

When we have a failure, we have the option of passing a user activity. If your app uses Handoff and has a way to handle this exact type of request, then Siri might try deferring to your app to try the request there. It will not do this when we are in a voice-only context (for example, you started with “Hey Siri”), and it doesn’t guarantee that it will do it in other cases, so don’t count on it.

This is now ready to test. Choose the intent extension in the target list in Xcode. But before you run it, edit the scheme.

Edit the scheme of the intent to add a sample phrase for debugging.

That brings up a way to provide a query directly:

Add the sample phrase to the Run section of the scheme. (Large preview)

Notice that I am using “ListOMat” because of the hyphens issue mentioned above. Luckily, it’s pronounced the same as my app’s name, so it should not be much of an issue.

Back in the app, I made a “Grocery Store” list and a “Hardware Store” list. If I ask Siri for the “store” list, it will go through the disambiguation path, which looks like this:

Siri handles the request by asking for clarification. (Large preview)

If you say “Grocery Store,” then you’ll get an exact match, which goes right to the results.

Adding Items Via Siri

Now that we know the basic concepts of resolve, confirm and handle, we can quickly add an intent to add an item to a list.

First, add INAddTasksIntent to the extension’s plist:

Add the INAddTasksIntent to the extension plist (Large preview)

Then, update our IntentHandler’s handle function.

override func handler(for intent: INIntent) -> Any? {
    switch intent {
    case is INSearchForNotebookItemsIntent:
        return SearchItemsIntentHandler()
    case is INAddTasksIntent:
        return AddItemsIntentHandler()
    default:
        return nil
    }
}

Add a stub for the new class:

class AddItemsIntentHandler: ListOMatIntentsHandler, INAddTasksIntentHandling { }

Adding an item needs a similar resolve for searching, except with a target task list instead of a title.

func resolveTargetTaskList(for intent: INAddTasksIntent, with completion: @escaping (INTaskListResolutionResult) -> Void) {
    guard let title = intent.targetTaskList?.title else {
        completion(.needsValue())
        return
    }

    let possibleLists = getPossibleLists(for: title)
    completeResolveTaskList(with: possibleLists, for: title, with: completion)
}

completeResolveTaskList is just like completeResolveListName, but with slightly different types (a task list instead of the title of a task list).

public func completeResolveTaskList(with possibleLists: [INSpeakableString], for listName: INSpeakableString, with completion: @escaping (INTaskListResolutionResult) -> Void) {
    let taskLists = possibleLists.map {
        return INTaskList(title: $0,
                          tasks: [],
                          groupName: nil,
                          createdDateComponents: nil,
                          modifiedDateComponents: nil,
                          identifier: nil)
    }

    switch possibleLists.count {
    case 0:
        completion(.unsupported())
    case 1:
        if possibleLists[0].spokenPhrase.lowercased() == listName.spokenPhrase.lowercased() {
            completion(.success(with: taskLists[0]))
        } else {
            completion(.confirmationRequired(with: taskLists[0]))
        }
    default:
        completion(.disambiguation(with: taskLists))
    }
}

It has the same disambiguation logic and behaves in exactly the same way. Saying “Store” needs to be disambiguated, and saying “Grocery Store” would be an exact match.

We’ll leave confirm unimplemented and accept the default. For handle, we need to add an item to the list and save it.

func handle(intent: INAddTasksIntent, completion: @escaping (INAddTasksIntentResponse) -> Void) {
    var lists = loadLists()
    guard let taskList = intent.targetTaskList,
          let listIndex = lists.index(where: { $0.name.lowercased() == taskList.title.spokenPhrase.lowercased() }),
          let itemNames = intent.taskTitles, itemNames.count > 0
    else {
        completion(INAddTasksIntentResponse(code: .failure, userActivity: nil))
        return
    }

    // Get the list
    var list = lists[listIndex]

    // Add the items
    var addedTasks = [INTask]()
    for item in itemNames {
        list.addItem(name: item.spokenPhrase, at: list.items.count)
        addedTasks.append(INTask(title: item,
                                 status: .notCompleted,
                                 taskType: .notCompletable,
                                 spatialEventTrigger: nil,
                                 temporalEventTrigger: nil,
                                 createdDateComponents: nil,
                                 modifiedDateComponents: nil,
                                 identifier: nil))
    }

    // Save the new list
    lists[listIndex] = list
    save(lists: lists)

    // Respond with the added items
    let response = INAddTasksIntentResponse(code: .success, userActivity: nil)
    response.addedTasks = addedTasks
    completion(response)
}

We get a list of items and a target list. We look up the list and add the items. We also need to prepare a response for Siri to show with the added items and send it to the completion function.

This function can handle a phrase like, “In ListOMat, add apples to the grocery list.” It can also handle a list of items like, “rice, onions and olives.”

Siri adds a few items to the grocery store list
Almost Done, Just A Few More Settings

All of this will work in your simulator or on a local device, but if you want to submit the app, you’ll need to add an NSSiriUsageDescription key to your app’s plist, with a string that describes what you are using Siri for. Something like “Your requests about lists will be sent to Siri” is fine.

You should also add a call to:

INPreferences.requestSiriAuthorization { (status) in }

Put this in your main view controller’s viewDidLoad to ask the user for Siri access. This will show the message you configured above and also let the user know that they could be using Siri for this app.

The device will ask for permission if you try to use Siri in the app.

Finally, you’ll need to tell Siri what to tell the user if the user asks what your app can do, by providing some sample phrases:

  1. Create a plist file in your app (not the extension), named AppIntentVocabulary.plist.
  2. Fill out the intents and phrases that you support.
Add an AppIntentVocabulary.plist to list the sample phrases that will invoke the intent you handle. (Large preview)

There is no way to really know all of the phrases that Siri will use for an intent, but Apple does provide a few samples for each intent in its documentation. The sample phrases for task-list searching show us that Siri can understand “Show me all my notes on <appName>,” but I found other phrases by trial and error (for example, Siri understands what “lists” are too, not just notes).

Summary

As you can see, adding Siri support to an app has a lot of steps, with a lot of configuration. But the code needed to handle the requests was fairly simple.

There are a lot of steps, but each one is small, and you might be familiar with a few of them if you have used extensions before.

Here is what you’ll need to prepare for a new extension on Apple’s developer website:

  1. Make an app ID for an Intents extension.
  2. Make an app group if you don’t already have one.
  3. Use the app group in the app ID for the app and extension.
  4. Add Siri support to the app’s ID.
  5. Regenerate the profiles and download them.

And here are the steps in Xcode for creating Siri’s Intents extension:

  1. Add an Intents extension using the Xcode template.
  2. Update the entitlements of the app and extension to match the profiles (groups and Siri support).
  3. Add your intents to the extension’s plist.

And you’ll need to add code to do the following things:

  1. Use the app group sandbox to communicate between the app and extension.
  2. Add classes to support each intent with resolve, confirm and handle functions.
  3. Update the generated IntentHandler to use those classes.
  4. Ask for Siri access somewhere in your app.

Finally, there are some Siri-specific configuration settings:

  1. Add the Siri support security string to your app’s plist.
  2. Add sample phrases to an AppIntentVocabulary.plist file in your app.
  3. Run the intent target to test; edit the scheme to provide the phrase.

OK, that is a lot, but if your app fits one of Siri’s domains, then users will expect that they can interact with it via voice. And because the competition among voice assistants is so strong, we can only expect that WWDC 2018 will bring a bunch more domains and, hopefully, a much better Siri.

(da, ra, al, il)
Categories: Around The Web

Becoming A UX Leader

Smashing Magazine - Wed, 04/11/2018 - 6:50am
Becoming A UX Leader, by Christopher Murphy (2018-04-11T12:50:15+02:00)

(This is a sponsored article.) In my previous article on Building UX Teams, I explored the rapidly growing need for UX teams as a result of the emergence of design as a wider business driver. As teams grow, so too does a need for leaders to nurture and guide them.

In my final article in this series on user experience design, I’ll explore the different characteristics of an effective UX leader, and provide some practical advice about growing into a leadership role.

I’ve worked in many organizations — both large and small, and in both the private and public sectors — and, from experience, leadership is a rare quality that is far from commonplace. Truly inspiring leaders are few and far between; if you’re fortunate to work with one, make every effort to learn from them.

Managers that have risen up the ranks don’t automatically become great leaders, and perhaps one of the biggest lessons I’ve learned is that truly inspirational leaders — those that inspire passion and commitment — aren’t as commonplace as you’d think.

A UX leader is truly a hybrid, perhaps more so than in many other — more traditional — businesses. A UX leader needs to encompass a wide range of skills:

  • Establishing, driving and articulating a vision;
  • Communicating across different teams, including design, research, writing, engineering, and business (no small undertaking!);
  • Acting as a champion for user-focused design;
  • Mapping design decisions to key performance indicators (KPIs), and vice-versa, so that success can be measured; and
  • Managing a team, ensuring all the team’s members are challenged and motivated.

UX leadership is not unlike being bi-lingual — or, more accurately, multi-lingual — and it’s a skill that requires dexterity so that nothing gets lost in translation.

This hybrid skill set can seem daunting, but — like anything — the attributes of leadership can be learned and developed. In my final article in this series of ten, I’ll explore what defines a leader and focus on the qualities and attributes needed to step up to this important role.

Undertaking A Skills Audit

Every leader is different, and every leader will be informed by the different experiences they have accumulated to date. There are, however, certain qualities and attributes that leaders tend to share in common.

Great leaders demonstrate self-awareness. They tend to have the maturity to have looked themselves in the mirror and identified aspects of their character that they may need to develop if they are to grow as leaders.

Having identified their strengths and weaknesses and pinpointing areas for improvement, they will have an idea of what they know and — equally important — what they don’t know. As Donald Rumsfeld famously put it:

“There are known knowns: there are things we know we know. We also know there are known unknowns: That is to say, we know there are some things we do not know. But there are also unknown unknowns: the things we don't know we don't know.”

Rumsfeld might have been talking about unknown unknowns in a conflict scenario, but his insight applies equally to the world of leadership. To grow as a leader, it’s important to widen your knowledge so that it addresses both:

  • The Known Unknowns
    Skills you know that you don’t know, which you can identify through a self-critical skills audit; and
  • The Unknown Unknowns
    Skills you don’t know you don’t know, which you can identify through inviting your peers to review your strengths and weaknesses.

In short, a skills audit will equip you with a roadmap that you can use to plot a path from where you are now to where you want to be.

Undertaking a skills audit will enable you to develop a map that you can use to plot a path from where you are now to where you want to be. (Large preview)

To become an effective leader is to embark upon a journey, identifying the gaps in your knowledge and — step by step — addressing these gaps so that you’re prepared for the leadership role ahead.

Identifying The Gaps In Your Knowledge

One way to identify the gaps in your knowledge is to undertake an honest and self-reflective ‘skills audit’ while making an effort to both learn about yourself and learn about the environment you are working within.

To become a UX leader, it’s critical to develop this self-awareness, identifying the knowledge you need to acquire by undertaking both self-assessments and peer assessments. With your gaps in knowledge identified, it’s possible to build a learning pathway to address these gaps.

In the introduction, I touched on a brief list of skills that an effective and well-equipped leader needs to develop. That list is just the tip of a very large iceberg. At the very least, a hybrid UX leader needs to equip themselves by:

  • Developing an awareness of context, expanding beyond the realms of design to encompass a broader business context;
  • Understanding and building relationships with a cross-section of team members;
  • Identifying outcomes and goals, establishing KPIs that will help to deliver these successfully;
  • Managing budgets, both soft and hard; and
  • Planning and mapping time, often across a diversified team.

These are just a handful of skills that an effective UX leader needs to develop. If you’re anything like me, hardly any of this was taught at art school, so you’ll need to learn these skills yourself. This article will help to point you in the right direction. I’ve also provided a list of required reading for reference to ensure you’re well covered.

A 360º Assessment

A 360º leadership assessment is a form of feedback for leaders. Drawn from the world of business, but equally applicable to the world of user experience, it is an excellent way to measure your effectiveness and influence as a leader.

Unlike a top-down appraisal, where a leader or manager appraises an employee underneath them in the hierarchy, a 360º assessment involves inviting your colleagues — at your peer level — to appraise you, highlighting your strengths and weaknesses.

This isn’t easy — and can lead to some uncomfortable home truths — but it can prove a critical tool in helping you to identify the qualities you need to work on. You might, for example, consider yourself an excellent listener only to discover that your colleagues feel like this is something you need to work on.

This willingness to put yourself under the spotlight, identify your weaknesses, address these, and develop yourself is one of the defining characteristics of leaders. Great leaders are always learning and they aren’t afraid to admit that fact.

A 360º assessment is a great way to uncover your ‘unknown unknowns’, i.e. the gaps in your knowledge that you aren’t aware of. With these discoveries in hand, it’s possible to build ‘a learning road-map’ that will allow you to develop the skills you need.

Build A Roadmap

With the gaps in your knowledge identified, it’s important to adopt some strategies to address these gaps. Great leaders understand that learning is a lifelong process and to transition into a leadership role will require — inevitably — the acquisition of new skills.

To develop as a leader, it’s important to address your knowledge gaps in a considered and systematic manner. By working back from your skills audit, identify what you need to work on and build a learning programme accordingly.

This will inevitably involve developing an understanding of different domains of knowledge, but that’s the leader’s path. The important thing is to take it step by step and, of course, to take that first step.

We are fortunate now to be working in an age in which we have an abundance of learning materials at our fingertips. We no longer need to enroll in a course at a university to learn; we can put together our own bespoke learning programmes.

We now have so many tools we can use, from paid resources like Skillshare which offers “access to a learning platform for personalized, on-demand learning,” to free resources like FutureLearn which offers the ability to “learn new skills, pursue your interests and advance your career.”

In short, you have everything you need to enhance your career just a click away.

It’s Not Just You

Great leaders understand that it’s not about the effort of individuals, working alone. It’s about the effort of individuals — working collectively. Looking back through the history of innovation, we can see that most (if not all) of the greatest breakthroughs were driven by teams that were motivated by inspirational leaders.

Thomas Edison didn’t invent the lightbulb alone; he had an ‘invention factory’ housed in a series of research laboratories. Similarly, when we consider the development of contemporary graphical user interfaces (GUIs), these emerged from the teamwork of Xerox PARC. The iPod was similarly conceived.

Great leaders understand that it’s not about them as individuals, but it’s about the teams they put together, which they motivate and orchestrate. They have the humility to build and lead teams that deliver towards the greater good.

This — above all — is one of the defining characteristics of a great leader: they prioritize and celebrate the team’s success over and above their own success.

It’s All About Teamwork

Truly great leaders understand the importance that teams play in delivering outcomes and goals. One of the most important roles a leader needs to undertake is to act as a lynchpin that sits at the heart of a team, identifying new and existing team members, nurturing them, and building them into a team that works effectively together.

A forward-thinking leader won’t just focus on the present, but will proactively develop a vision and long-term goals for future success. To deliver upon this vision of future success will involve both identifying potential new team members, but — just as importantly — developing existing team members. This involves opening your eyes to the different aspects of the business environment you occupy, getting to know your team, and identifying team members’ strengths and weaknesses.

As a UX leader, an important role you’ll play is helping others by mentoring and coaching them, ensuring they are equipped with the skills they need to grow. Again, this is where a truly selfless leader will put others first, in the knowledge that the stronger the team, the stronger the outcomes will be.

As a UX leader, you’ll also act as a champion for design within the wider business context. You’ll act as a bridge — and occasionally, a buffer — between the interface of business requirements and design requirements. Your role will be to champion the power of design and sell its benefits, always singing your team’s praises and — occasionally — fighting on their behalf (often without their awareness).

The Art Of Delegation

It’s possible to build a great UX team from the inside by developing existing team members, and an effective leader will use delegation as a powerful development tool to enhance their team members’ capabilities.

Delegation isn’t just passing off the tasks you don’t want to do, it’s about empowering the different individuals in a team. A true leader understands this and spends the time required to learn how to delegate effectively.

Delegation is about education and expanding others’ skill sets, and it’s a powerful tool when used correctly. Effective delegation is a skill, one that you’ll need to acquire to step up into a UX leadership role.

When delegating a task to a team member, it’s important to explain to them why you’re delegating the task. As a leader, your role is to provide clear guidance and this involves explaining why you’ve chosen a team member for a task and how they will be supported, developed and rewarded for taking the task on.

This latter point is critical: All too often managers who lack leadership skills use delegation as a means to offload tasks and responsibility, unaware of the power of delegation. This is poor delegation and it’s ineffective leadership, though I imagine, sadly, we have all experienced it! An effective leader understands and strives to delegate effectively by:

  • defining the task, establishing the outcomes and goals;
  • identifying the appropriate individual or team to take the task on;
  • assessing the ability of the team member(s) and ascertaining any training needs;
  • explaining their reasoning, clearly outlining why they chose the individual or team;
  • stating the required results;
  • agreeing on realistic deadlines; and
  • providing feedback on completion of the task.

When outlined like this, it becomes clear that effective delegation is more than simply passing on a task you’re unwilling to undertake. Instead, it’s a powerful tool that an effective UX leader uses to enable their team members to take ownership of opportunities, whilst growing their skills and experience.

Give Success And Accept Failure

A great leader is selfless: they give credit for any successes to the team and accept the responsibility for any failures alone.

A true leader gives success to the team, ensuring that — when there’s a win — the team is celebrated for its success. A true leader takes pleasure in celebrating the team’s win. When it comes to failure, however, a true leader steps up and takes responsibility. A mark of a truly great leader is this selflessness.

As a leader, you set the direction and nurture the team, so it stands to reason that, if things go wrong — which they often do — you’re willing to shoulder the responsibility. This understanding — that you need to give success and accept failure — is what separates great leaders from mediocre managers.

Poor managers will seek to ‘deflect the blame,’ looking for anyone but themselves to apportion responsibility to. Inspiring leaders are aware that, at the end of the day, they are responsible for the decisions made and outcomes reached; when things go wrong they accept responsibility.

If you’re to truly inspire others and lead them to achieve great things, it’s important to remember this distinction between managers and leaders. By giving success and accepting failure, you’ll breed intense loyalty in your team.

Lead By Example

Great leaders understand the importance of leading by example, acting as a beacon for others. To truly inspire a team, it helps to connect yourself with that team, and not isolate yourself. Rolling up your sleeves and pitching in, especially when deadlines are pressing, is a great way to demonstrate that you haven’t lost touch with the ‘front line.’

A great leader understands that success is — always — a team effort and that a motivated team will deliver far more than the sum of its parts.

As I’ve noted in my previous articles: If you’re ever the smartest person in a room, find another room. An effective leader has the confidence to surround themselves with other, smarter people.

Leadership isn’t about individual status or being seen to be the most talented. It’s about teamwork and getting the most out of a well-oiled machine of individuals working effectively together.

Get Out Of Your Silo

To lead effectively, it’s important to get out of your silo and to see the world as others do. This means getting to know all of the team, throughout the organization and at every level.

Leaders that isolate themselves — in their often luxurious corner offices — are, in my experience, poor leaders (if, indeed, they can be called leaders at all!). By distancing themselves from the individuals that make up an organization they run the very real risk of losing touch.

To lead, get out of your silo and acquaint yourself with the totality of your team and, if you’re considering a move into leadership, make it your primary task to explore all the facets of the business.

The Pieces Of The Jigsaw

To lead effectively, you need to have an understanding of others and their different skills. In my last article, Building a UX Team, I wrote about the idea of ‘T-shaped’ people — those that have a depth of skill in their field, along with the willingness and ability to collaborate across disciplines. Great leaders tend to be T-shaped, flourishing by seeing things from others’ perspectives.

Every organization — no matter how large or small — is like an elaborate jigsaw that is made up of many different interlocking parts. An effective leader is informed by an understanding of this context; they will have made the effort to see all of the pieces of the jigsaw. As a UX leader, you’ll need to familiarize yourself with a wide range of different teams, including design, research, writing, engineering, and business.

To lead effectively, it’s important to push outside of your comfort zone and learn about these different specialisms. Do so and you will ensure that you can communicate with these different stakeholders. At the risk of mixing metaphors, you will be the glue that holds the jigsaw together.

Sweat The Details

As Charles and Ray Eames put it:

“The details aren’t the details, they make the product.”

Great leaders understand this: they set the bar high and they encourage and motivate the teams they lead to deliver on the details. To lead a team, it’s important to appreciate the need to strive for excellence. Great leaders aren’t happy to accept the status quo, especially if the status quo can be improved upon.

Of course, these qualities can be learned, but many of us — thankfully — have them, innately. Few (if any) of us are happy with second best and, in a field driven by a desire to create delightful and memorable user experiences, we appreciate the importance of details and their place in the grand scheme of things. This is a great foundation on which to build leadership skills.

To thrive as a leader, it’s important to share this focus on the details with others, ensuring they understand and appreciate the role that the details play in the whole. Act as a beacon of excellence: lead by example; never settle for second best; give success and accept failure… and your team will follow you.

In Closing

As user experience design matures as a discipline, and the number of different specializations multiplies, so too does the discipline’s need for leaders, to nurture and grow teams. As a relatively new field of expertise, the opportunities to develop as a UX leader are tremendous.

Leadership is a skill and — like any skill — it can be learned. As I’ve noted throughout this series of articles, one of the best ways to learn is to look to other disciplines for guidance, widening the frame of reference. When we consider leadership, this is certainly true.

There is a great deal we can learn from the world of business, and websites like Harvard Business Review (HBR), McKinsey Quarterly, and Fast Company — amongst many, many others — offer us a wealth of insight.

There’s never been a more exciting time to work in User Experience design. UX has the potential to impact on so many facets of life, and the world is crying out for leaders to step up and lead the charge. I’d encourage anyone eager to learn and to grow to undertake a skills audit, take the first step, and embark on the journey into leadership. Leadership is a privilege, rich with rewards, and is something I’d strongly encourage exploring.

Suggested Reading

There are many great publications, offline and online, that will help you on your adventure. I’ve included a few below to start you on your journey.

  • The Harvard Business Review website is an excellent starting point and its guide, HBR’s 10 Must Reads on Leadership, provides an excellent overview on developing leadership qualities.
  • Peter Drucker’s writing on leadership is also well worth reading. Drucker has written many books, one I would strongly recommend is Managing Oneself. It’s a short (but very powerful) read, and I read it at least two or three times a year.
  • If you’re serious about enhancing your leadership credentials, Michael D. Watkins’s The First 90 Days: Proven Strategies for Getting Up to Speed Faster and Smarter, provides a comprehensive insight into transitioning into leadership roles.
  • Finally, HBR’s website — mentioned above — is definitely worth bookmarking. Its business-focused flavor offers designers a different perspective on leadership, which is well worth becoming acquainted with.

This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype, and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.


How To Design Emotional Interfaces For Boring Apps

Smashing Magazine - Tue, 04/10/2018 - 7:20am
How To Design Emotional Interfaces For Boring Apps Alice Kotlyarenko 2018-04-10T13:20:05+02:00 2018-04-20T15:32:23+00:00

There’s a trickling line of ones and zeros that disappears behind a large yellow tube. A bear pops out of the tube as a clawed paw starts pointing at my browser’s toolbar, and a headline appears, saying: “Start your bear-owsing!”

Between my awwing and oohing I forget what I wanted to browse.

Products like a VPN service rarely evoke endearment — or any other emotion, for that matter. It’s not their job, not what they were built to do. But because TunnelBear does, I choose it over any other VPN and recommend it to my friends, so they can have some laughs while caught up in routine.

Humans can’t endure boredom for a long time, which is why products that are built for non-exciting, repetitive tasks so often get abandoned and gather dust on computers and phones. But boredom, according to psychologists, is merely lack of stimulation, the unfulfilled desire for satisfying activity. So what if we use the interface to give them that stimulation?

I sat with product designers here at MacPaw, who spend their waking hours designing not-so-sexy things like duplicate finders and encryption apps, and they shared five secrets to more emotional UIs: gamification, humor, animation, illustration, and mascots.

Games People Play

There’s some debate going on around the use of gamification in UIs: 24 empirical studies, for example, arrived at varying conclusions as to how effective it was. But then again, effectiveness depends on what you were trying to accomplish by designing those shiny achievement badges.

For many product creators, including Akar Sumset here, the point of gamification is not letting users have fun per se — it’s gently pushing them towards certain behaviors via said fun. Achievements, ranks, leaderboards tap into the basic human need of esteem, trigger competitiveness, and supposedly urge users to do what you want them to, like make progress, keep coming back to the app, or share it on social media.

Gamification can succeed or fail at that, but what it sure achieves is an emotional response. Our brain is packed full of cells that control the levels of dopamine, one of the major neurochemicals of happiness. When something enjoyable happens, these neurons light up and trigger a release of dopamine into the blood, but what’s even better, if this pleasant event is regular and can be predicted, they’ll light up and release dopamine before it even happens. What does that mean for your interface? That expecting an enjoyable thing such as the next achievement will give the users little shots of happiness throughout their experience with the product.

Gamification in UI: Gemini 2 And Duolingo

When designing Gemini 2, the new version of our duplicate finder for Mac, we had a serious problem at hand. Reviewing gigabytes of files was soul-crushingly boring, and some users complained they quit before they were done. So what we tried to achieve with the achievements system is intensify the feeling of a crossed-out item on a to-do list, which is the only upside of tedious tasks. The space theme, unwittingly set with the app’s name and exploited in the interface, was perfect for gamification. Our audience grew up on Star Wars and Star Trek, so sci-fi inspired ranks would hit home with them.

Within days of the release, we started getting tweets from users asking for clues on the Easter Egg that would unlock the final achievement. A year after the release, Gemini 2 got the Red Dot Award for a design that exhibits “clarity and emotion.” So while it’s hard to measure how motivating our achievement system has been, it sure didn’t leave people cold.

Another product that got it right — and has by far the most gamified interface I’ve seen — is Duolingo, an online service and mobile app for learning languages. Trying to master a foreign tongue from scratch is daunting, especially if it’s just you and your laptop, without the reassurance that comes with having a teacher. Given how quickly people lose interest in their language endeavors (speaking from experience here), Duolingo would have to go out of its way to keep you hooked. And it does.

Whenever you complete a quick 5-minute lesson, you earn 10 points. Take lessons 30 days in a row? Get an achievement. Complete 20 lessons without a single typo? Unlock another. For every baby step you take, your senses are rewarded with triumphant sounds and colorful graphics that trigger the release of that sweet, sweet dopamine. Eventually, you start associating Duolingo with the feeling of accomplishment and pride — the kind of feeling you want to come back to.

If you’d like to dive deeper into gamification, Gabe Zichermann’s book “Gamification by Design: Implementing Game Mechanics in Web and Mobile Apps” is a great way to start.

You’ve Got To Be Joking

Victor Yocco has made a solid case for using humor in web design as a tool to create memorable experiences, connect with users, and make your work stand out. But the biggest power of jokes is that they’re emotional. While we still don’t fully understand the nature of humor, one thing is clear: it makes humans happy. According to brain imaging research, funny cartoons activate the reward network in the limbic system — the same network that responds to eating, music, sex, and mood-altering drugs. In other words, a good joke gives people a kind of emotional high.

Would you want that kind of reaction to your interface? Of course. But the tricky part is that not only is humor subjective, but the way we respond to it depends a lot on the context. One thing is throwing in a pun on the launch screen; a completely different thing is goofing around in an error message. And while all humans enjoy humor in this or that form, it’s vital to know your audience: what they find hilarious and what might seem inappropriate, crude, or poorly timed. Not that different from cracking jokes in real life.

Humor in UI: Authentic Weather and Slack

One app that nails the use of humor — and not just as a complementary comic relief, but as a unique selling proposition — is Authentic Weather. Weather apps are a prime example of utilitarian products: they are something people use to get information, period. But with Authentic Weather, you get a lot more than that. No matter the weather, it’s going to crack you up with a snarky comment like “It’s ducking freezing,” “Go home winter,” and my personal favorite “It’s just okay. Look outside for more information.”

What happens when you use Authentic Weather is you don’t just open it for the forecast — you want to see what it comes up with next, and a routine task like checking the weather becomes a thing to look forward to in the morning. Now, the app’s moody commentary, packed full of f-words and scorn, would probably seem less entertaining to my mom. But being the grumpy millennial that I am, I find it hilarious, which proves humor works if you know your audience.

Another interface that puts fun to good use is Slack’s. For an app people associate with work emergencies, Slack does a solid job creating a more humane experience, not least because of its one-liners. From loading screens to the moments when you’re finally caught up with all your chats, it cracks a joke when you don’t see it coming.

With such a diverse demographic, humor is hit or miss, so Slack plays it safe with goofy puns and good-natured banter — the kind of jokes that don’t exactly send you rolling on the floor but don’t annoy or offend either. In the best-case scenario, the user will chuckle and share the screenshot in one of their channels; in the worst case, they’ll just roll their eyes.

More on Humor: “Just Kidding: Using Humor Effectively” by Louis R. Franzini.

Get The World Moving

Nearly every interface uses a form of animation. It’s the natural way to transition from one state to another. But animations in UI can serve a lot more purposes than signifying a change of state — they can help you direct attention and communicate what’s going on better than static visuals or copy ever could. The movement stimulates both visual and kinesthetic learning, which means users are more likely to stay focused and figure out how to use the thing.

These are all good reasons to incorporate animation into your design, but why does it elicit emotion, exactly? Simon Grozyan, who worked on our apps Encrypto and Gemini Photos, believes it’s because in the physical world we interpret animated things as alive:

“We are used to seeing things in movement. Everything around us is either moving or changing appearance because of the light. Static equals dead.”

In addition to the relatable, lifelike quality of a moving object, animation has the power of a delightful and unexpected thing that brings us a lot more pleasure than a thing equally delightful but expected. Therefore, by using it in spots less habitual than transitions you can achieve that coveted stimulation that makes your product fun to use.

Animation in UI: Encrypto and Shazam

Encrypto is a tiny Mac app that encrypts and decrypts your files so that you can send them to someone securely. It’s an indispensable tool for those who care about data security and privacy, but not the kind of tool you would feel attached to. Nevertheless, Encrypto is by far my favorite MacPaw app as far as design is concerned, thanks to the Matrix-style animated bar that slides over your file and transforms it into a new secured entity. Encryption comes to life; it’s no longer a dull process on your computer — it’s mesmerizing digital magic.

Animation is at the heart of another great UI: that of Shazam, an app you probably have on your phone. When you use Shazam to find out what’s playing, the button you tap starts sending concentric circles outward and inward. This similarity to a throbbing audio speaker makes the interface almost tangible, physical — as if you’re blasting your favorite album on a powerful sound system.
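
For a rough sense of how such an effect can be built, here is a minimal CSS sketch of a pulsing, radiating circle. It is not Shazam’s actual implementation, just one common way to approximate the idea; the .listen-button class name is a hypothetical placeholder.

    .listen-button {
      position: relative;
      border-radius: 50%;
    }

    /* A circle that repeatedly grows and fades out behind the button. */
    .listen-button::after {
      content: "";
      position: absolute;
      top: 0;
      right: 0;
      bottom: 0;
      left: 0;
      border-radius: 50%;
      border: 2px solid currentColor;
      animation: pulse 1.5s ease-out infinite;
    }

    @keyframes pulse {
      from { transform: scale(1);   opacity: 0.8; }
      to   { transform: scale(1.8); opacity: 0; }
    }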

More on Animation: “How Functional Animation Helps Improve User Experience”.

Art Is Everywhere

As Blair Culbreth argues, polished is no longer enough for interfaces. Sleek, professional design is expected, but it’s the personalized, humane details that users smile at and forward to their friends. Custom art can be this detail.

Unlike generic imagery, illustration is emotional, because it communicates more than meaning. It carries positive associations with cartoons every person used to watch as a child, shows things in a more playful, imaginative way, and, most importantly, contains a touch of the artist’s personality.

“I think when an artist creates an illustration they always infuse some of their personal experience, their context, their story into it,” says Max Kukurudziak, one of our product designers. The theory rings true — a human touch is more likely to stir feelings.

Illustration in UI: Gemini Photos and Google Calendar

One of our newest products Gemini Photos is an iPhone app that helps you clear unneeded photos. Much like Gemini 2 for desktop, it involves some tedious reviewing for the user, so even with a handy and handsome UI, we’d have a hard time holding their attention and generally making them feel good.

Like in many of our previous apps, we used animations and sounds to enliven the interface, but custom art has become the highlight of the experience. As said above, it’s scientifically proven that surprising pleasurable things cause an influx of that happiness chemical into our blood, so by using quirky illustrations in unexpected spots we didn’t just fill up an empty screen — we added a tad of enjoyment to an otherwise monotonous activity.

One more example of how illustration can make a product more lovable is Google Calendar. Until recently there was a striking difference between the web version and the iOS app. While the former had the appeal of a spreadsheet, the latter instantly won my heart with one killer detail. For many types of events, Google Calendar slips in art that illustrates them, based on the keywords it picks up from event titles. That way, your plans for the week look a lot more exciting, even if all you’ve got going on is the gym and a dentist appointment.

But that’s not even the best thing. I realized that whenever I create a new event, I secretly hope Google Calendar will have art for it and feel genuinely pleased when it does. Just like that, using a calendar stopped being a necessity and became a source of positive emotion. And, apparently, the illustration experiment didn’t work for me alone, because Google recently rolled out the web version of their calendar with the same art.

More on Illustration: “Illustration That Works: Professional Techniques For Artistic And Commercial Success” by Greg Houston.

What A Character

Cute characters that impersonate products have been used in web design and marketing for years (think Ronald McDonald and the Michelin Man). In interfaces — not quite as much. Mascots in UI can be perceived as intrusive and annoying, especially if they distract the user from an important action or obstruct the view. A notorious example of a mascot gone wrong is Microsoft’s Clippy: it evoked nothing but fear and loathing (which, of course, are emotions, but not the kind you’re looking for).

At the same time, studies show that people easily personify things, even if they are merely geometric figures. Lifelike creatures are easier to relate to, understand the behavior of, and generally feel some way about. Moreover, an animated character is easier to attribute a personality to, so you can broadcast the characteristics of your product through that character — make it playful and goofy, eager and helpful, or whatever you need it to be. With that much untapped potential, mascots are perfect for non-emotional products.

The trick is timing.

Clippy was so obnoxious because he appeared uninvited, interrupted completely unrelated tasks, and was generally in the way. But if the mascot shows up in a relatively idle moment — for example, the user has just completed a task — it will do its endearing job.

Mascots in UI: RememBear and Yelp

TunnelBear Inc. has recently beta launched another utility that’s cute as a button (no pun intended). RememBear is a password manager, and passwords are supposed to be no joke. But the brilliance of bear cartoons in RememBear is that they are nowhere in sight when you do serious, important things like creating a new entry. Instead, you get a bear hug when you’re done with stage one of signing up for the app and haven’t yet proceeded to stage two — saving your first password. By placing the mascot in this spot, RememBear avoided being in the way but made me smile when I least expected it.

Just like RememBear, Yelp — a widely known app for restaurant reviews — has perfect timing for their mascot. The funny hamster first appeared at the bottom of the iOS app’s settings so that the user would discover it like an Easter egg.

“At Yelp we’re always striving to make our product and brand feel fun and delightful,” says Yoni De Beule, Yelp’s Product Design manager. “We reflect Yelp’s personality in everything from our fun poster designs and funny release notes to internal hackathon projects and Yelp Elite parties. When we found our iPhone settings page to be seriously lacking in the fun department, we decided to roll up our sleeves and fix it.”

The hamster in the iOS app later got company, as the team designed a velociraptor for the Android version and a dog for the web. So whenever — and wherever — you use Yelp, you almost want to run out of recommendations, so that you can see another version of the delightful character.

If you’d like to learn how to create your own mascot, there’s a nice tutorial by Sirine (aka ‘Miss ChatZ’) on Envato Tuts+.

To Wrap It Up…

Not all products are inherently fun the way games, or social media apps are, but even utilities don’t have to be merely utilitarian. Apps that deal with repetitive tasks often struggle with retaining users: people abandon them because they feel bored, and boredom is simply lack of stimulation. By using positive stimuli like humor, movement, unique art, elements of game, and relatable characters we can make users feel a different way — more excited, less distracted, and ultimately happier.


Designing For Accessibility And Inclusion

Smashing Magazine - Mon, 04/09/2018 - 8:45am
Designing For Accessibility And Inclusion Steven Lambert 2018-04-09T14:45:39+02:00 2018-04-20T15:32:23+00:00

“Accessibility is solved at the design stage.” This is a phrase that Daniel Na and his team heard over and over again while attending a conference. To design for accessibility means to be inclusive to the needs of your users. This includes your target users, users outside of your target demographic, users with disabilities, and even users from different cultures and countries. Understanding those needs is the key to crafting better and more accessible experiences for them.

One of the most common problems when designing for accessibility is knowing what needs you should design for. It’s not that we intentionally design to exclude users, it’s just that “we don’t know what we don’t know.” So, when it comes to accessibility, there’s a lot to know.

How do we go about understanding the myriad of users and their needs? How can we ensure that their needs are met in our design? To answer these questions, I have found that it is helpful to apply a critical analysis technique of viewing a design through different lenses.

“Good [accessible] design happens when you view your [design] from many different perspectives, or lenses.”

The Art of Game Design: A Book of Lenses

A lens is “a narrowed filter through which a topic can be considered or examined.” Often used to examine works of art, literature, or film, lenses ask us to leave behind our worldview and instead view the world through a different context.

For example, viewing art through a lens of history asks us to understand the “social, political, economic, cultural, and/or intellectual climate of the time.” This allows us to better understand what world influences affected the artist and how that shaped the artwork and its message.

Accessibility lenses are a filter that we can use to understand how different aspects of the design affect the needs of the users. Each lens presents a set of questions to ask yourself throughout the design process. By using these lenses, you will become more inclusive to the needs of your users, allowing you to design a more accessible user experience for all.

The Lenses of Accessibility are covered one by one in the sections that follow.

You should know that not every lens will apply to every design. While some can apply to every design, others are more situational. What works best in one design may not work for another.

The questions provided by each lens are merely a tool to help you understand what problems may arise. As always, you should test your design with users to ensure it’s usable and accessible to them.

Lens Of Animation And Effects

Effective animations can help bring a page and brand to life, guide the users focus, and help orient a user. But animations are a double-edged sword. Not only can misusing animations cause confusion or be distracting, but they can also be potentially deadly for some users.

Fast flashing effects (defined as flashing more than three times a second) or high-intensity effects and patterns can trigger seizures in users with photosensitive epilepsy. Photosensitivity can also cause headaches, nausea, and dizziness. Users with photosensitive epilepsy have to be very careful when using the web as they never know when something might cause a seizure.

Other effects, such as parallax or motion effects, can cause some users to feel dizzy or experience vertigo due to vestibular sensitivity. The vestibular system controls a person’s balance and sense of motion. When this system doesn’t function as it should, it causes dizziness and nausea.

“Imagine a world where your internal gyroscope is not working properly. Very similar to being intoxicated, things seem to move of their own accord, your feet never quite seem to be stable underneath you, and your senses are moving faster or slower than your body.”

A Primer To Vestibular Disorders

Constant animations or motion can also be distracting to users, especially to users who have difficulty concentrating. GIFs are notably problematic as our eyes are drawn towards movement, making it easy to be distracted by anything that updates or moves constantly.

This isn’t to say that animation is bad and you shouldn’t use it. Instead you should understand why you’re using the animation and how to design safer animations. Generally speaking, you should try to design animations that cover small distances, match direction and speed of other moving objects (including scroll), and are relatively small to the screen size.

You should also provide controls or options to cater the experience for the user. For example, Slack lets you hide animated images or emojis as both a global setting and on a per image basis.
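
One widely supported way to honor such a preference is the prefers-reduced-motion media query, which reflects the “reduce motion” setting in the user’s operating system. Below is a minimal sketch, assuming a hypothetical .hero-banner element animated with a CSS transition.

    .hero-banner {
      transition: transform 300ms ease-out;
    }

    /* Tone down non-essential movement for users who have asked for less motion. */
    @media (prefers-reduced-motion: reduce) {
      .hero-banner {
        transition: none;
      }
    }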

To use the Lens of Animation and Effects, ask yourself these questions:

  • Are there any effects that could cause a seizure?
  • Are there any animations or effects that could cause dizziness or vertigo through use of motion?
  • Are there any animations that could be distracting by constantly moving, blinking, or auto-updating?
  • Is it possible to provide controls or options to stop, pause, hide, or change the frequency of any animations or effects?
Lens Of Audio And Video

Autoplaying videos and audio can be pretty annoying. Not only do they break a user’s concentration, but they also force the user to hunt down the offending media and mute or stop it. As a general rule, don’t autoplay media.

“Use autoplay sparingly. Autoplay can be a powerful engagement tool, but it can also annoy users if undesired sound is played or they perceive unnecessary resource usage (e.g. data, battery) as the result of unwanted video playback.”

Google Autoplay guidelines

You’re now probably asking, “But what if I autoplay the video in the background but keep it muted?” While using videos as backgrounds may be a growing trend in today’s web design, background videos suffer from the same problems as GIFs and constant moving animations: they can be distracting. As such, you should provide controls or options to pause or disable the video.

Along with controls, videos should have transcripts and/or subtitles so users can consume the content in a way that works best for them. Users who are visually impaired or who would rather read instead of watch the video need a transcript, while users who aren’t able to or don’t want to listen to the video need subtitles.
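
In markup, those guidelines might look something like the sketch below: native controls, no unmuted autoplay, subtitles, and a linked transcript. The file names are hypothetical placeholders.

    <video controls muted preload="metadata">
      <source src="product-tour.mp4" type="video/mp4">
      <!-- Subtitles for users who can't, or would rather not, listen to the audio. -->
      <track kind="subtitles" src="product-tour.en.vtt" srclang="en" label="English" default>
      Your browser does not support embedded video.
    </video>
    <p><a href="product-tour-transcript.html">Read the full transcript</a></p>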

To use the Lens of Audio and Video, ask yourself these questions:

  • Are there any audio or video that could be annoying by autoplaying?
  • Is it possible to provide controls to stop, pause, or hide any audio or videos that autoplay?
  • Do videos have transcripts and/or subtitles?
Lens Of Color

Color plays an important part in a design. Colors evoke emotions, feelings, and ideas. Colors can also help strengthen a brand’s message and perception. Yet the power of colors is lost when a user can’t see them or perceives them differently.

Color blindness affects roughly 1 in 12 men and 1 in 200 women. Deuteranopia (red-green color blindness) is the most common form of color blindness, affecting about 6% of men. Users with red-green color blindness typically perceive reds, greens, and oranges as yellowish.

Deuteranopia (green color blindness) is common and causes reds to appear brown/yellow and greens to appear beige. Protanopia (red color blindness) is rare and causes reds to appear dark/black and orange/greens to appear yellow. Tritanopia (blue-yellow color blindness) is very rare and causes blues to appear more green/teal and yellows to appear violet/grey. (Source) (Large preview)

Color meaning is also problematic for international users. Colors mean different things in different countries and cultures. In Western cultures, red is typically used to represent negative trends and green positive trends, but the opposite is true in Eastern and Asian cultures.

Because colors and their meanings can be lost either through cultural differences or color blindness, you should always add a non-color identifier. Identifiers such as icons or text descriptions can help bridge cultural differences while patterns work well to distinguish between colors.

Trello’s color blind friendly labels use different patterns to distinguish between the colors. (Large preview)

Oversaturated colors, high contrasting colors, and even just the color yellow can be uncomfortable and unsettling for some users, particularly those on the autism spectrum. It’s best to avoid high concentrations of these types of colors to help users remain comfortable.

Poor contrast between foreground and background colors makes it harder to see for users with low vision, using a low-end monitor, or who are just in direct sunlight. All text, icons, and any focus indicators used for users using a keyboard should meet a minimum contrast ratio of 4.5:1 against the background color.

You should also ensure your design and colors work well in different settings of Windows High Contrast mode. A common pitfall is that text becomes invisible on certain high contrast mode backgrounds.
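
As a small illustration of a non-color identifier, the sketch below pairs a status color with an icon and explicit wording, so the meaning survives color blindness, cultural differences, and high contrast modes. The class names and file name are hypothetical.

    <!-- The red color signals an error, but the icon and the wording carry
         the same meaning for users who can't perceive or interpret the color. -->
    <p class="status status--error">
      <img src="warning-icon.svg" alt="" class="icon">
      Error: your payment could not be processed.
    </p>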

To use the Lens of Color, ask yourself these questions:

  • If the color was removed from the design, what meaning would be lost?
  • How could I provide meaning without using color?
  • Are any colors oversaturated or have high contrast that could cause users to become overstimulated or uncomfortable?
  • Does the foreground and background color of all text, icons, and focus indicators meet contrast ratio guidelines of 4.5:1?
Lens Of Controls

Controls, also called ‘interactive content,’ are any UI elements that the user can interact with, be they buttons, links, inputs, or any HTML element with an event listener. Controls that are too small or too close together can cause lots of problems for users.

Small controls are hard to click on for users who are unable to be accurate with a pointer, such as those with tremors, or those who suffer from reduced dexterity due to age. The default size of checkboxes and radio buttons, for example, can pose problems for older users. Even when a label is provided that could be clicked on instead, not all users know they can do so.

Controls that are too close together can cause problems for touch screen users. Fingers are big and difficult to be precise with. Accidentally touching the wrong control can cause frustration, especially if that control navigates you away or makes you lose your context.

When touching a single line tweet, it’s very easy to accidentally click the person’s name or handle instead of opening the tweet because there’s not enough space between them. (Source) (Large preview)

Controls that are nested inside another control can also contribute to touch errors. Not only is it not allowed in the HTML spec, it also makes it easy to accidentally select the parent control instead of the one you wanted.

To give users enough room to accurately select a control, the recommended minimum size for a control is 34 by 34 device independent pixels, but Google recommends at least 48 by 48 pixels, while the WCAG spec recommends at least 44 by 44 pixels. This size also includes any padding the control has. So a control could visually be 24 by 24 pixels but with an additional 10 pixels of padding on all sides would bring it up to 44 by 44 pixels.

It’s also recommended that controls be placed far enough apart to reduce touch errors. Microsoft recommends at least 8 pixels of spacing while Google recommends controls be spaced at least 32 pixels apart.

Controls should also have a visible text label. Not only do screen readers require the text label to know what the control does, but it’s been shown that text labels help all users better understand a control’s purpose. This is especially important for form inputs and icons.
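
Putting those numbers together, a minimal CSS sketch could pad a visually 24-pixel icon out to the 44-by-44-pixel target discussed above and space neighboring controls apart. The .icon-button class name is hypothetical.

    .icon-button {
      width: 24px;
      height: 24px;
      padding: 10px;            /* 10 + 24 + 10 = 44px in each direction */
      box-sizing: content-box;  /* keep the padding outside the 24px icon */
    }

    /* Space adjacent controls apart to reduce accidental touches. */
    .icon-button + .icon-button {
      margin-left: 8px;
    }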

To use the Lens of Controls, ask yourself these questions:

  • Are any controls not large enough for someone to touch?
  • Are any controls too close together that would make it easy to touch the wrong one?
  • Are there any controls inside another control or clickable region?
  • Do all controls have a visible text label?
Lens Of Font

In the early days of the web, we designed web pages with a font size between 9 and 14 pixels. This worked out just fine back then as monitors had a relatively known screen size. We designed thinking that the browser window was a constant, something that couldn’t be changed.

Technology today is very different than it was 20 years ago. Today, browsers can be used on any device of any size, from a small watch to a huge 4K screen. We can no longer use fixed font sizes to design our sites. Font sizes must be as responsive as the design itself.

Not only should the font sizes be responsive, but the design should be flexible enough to allow users to customize the font size, line height, or letter spacing to a comfortable reading level. Many users make use of custom CSS that helps them have a better reading experience.

The font itself should be easy to read. You may be wondering if one font is more readable than another. The truth of the matter is that the font doesn’t really make a difference to readability. Instead, it’s the font style that plays an important role in a font’s readability.

Decorative or cursive font styles are harder to read for many users, but especially problematic for users with dyslexia. Small font sizes, italicized text, and all uppercase text are also difficult for users. Overall, larger text, shorter line lengths, taller line heights, and increased letter spacing can help all users have a better reading experience.
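
A minimal sketch of what that flexibility can look like in CSS: relative units that scale with the user’s own browser setting and leave room for custom user styles.

    html {
      font-size: 100%;        /* respect the user's browser default, usually 16px */
    }

    body {
      font-size: 1rem;
      line-height: 1.5;       /* unitless, so it scales with any font size */
      letter-spacing: 0.01em; /* relative, so it grows with the text */
    }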

To use the Lens of Font, ask yourself these questions:

  • Is the design flexible enough that the font could be modified to a comfortable reading level by the user?
  • Is the font style easy to read?
Lens Of Images and Icons

They say, “A picture is worth a thousand words.” Still, a picture you can’t see is speechless, right?

Images can be used in a design to convey a specific meaning or feeling. Other times they can be used to simplify complex ideas. Whichever the case for the image, a user who uses a screen reader needs to be told what the meaning of the image is.

As the designer, you understand best the meaning or information the image conveys. As such, you should annotate the design with this information so it’s not left out or misinterpreted later. This will be used to create the alt text for the image.

How you describe an image depends entirely on context, or how much textual information is already available that describes the information. It also depends on if the image is just for decoration, conveys meaning, or contains text.

“You almost never describe what the picture looks like, instead you explain the information the picture contains.”

Five Golden Rules for Compliant Alt Text

Since knowing how to describe an image can be difficult, there’s a handy decision tree to help when deciding. Generally speaking, if the image is decorational or there’s surrounding text that already describes the image’s information, no further information is needed. Otherwise you should describe the information of the image. If the image contains text, repeat the text in the description as well.

Descriptions should be succinct. It’s recommended to use no more than two sentences, but aim for one concise sentence when possible. This allows users to quickly understand the image without having to listen to a lengthy description.

As an example, if you were to describe this image for a screen reader, what would you say?

Source (Large preview)

Since we describe the information of the image and not the image itself, the description could be Vincent van Gogh’s The Starry Night since there is no other surrounding context that describes it. What you shouldn’t put is a description of the style of the painting or what the picture looks like.

If the information of the image would require a lengthy description, such as a complex chart, you shouldn’t put that description in the alt text. Instead, you should still use a short description for the alt text and then provide the long description as either a caption or link to a different page.

This way, users can still get the most important information quickly but have the ability to dig in further if they wish. If the image is of a chart, you should repeat the data of the chart just like you would for text in the image.
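
The markup equivalents of these patterns might look like the sketch below; the file names and URLs are hypothetical placeholders.

    <!-- Informative image: describe the information, not the appearance. -->
    <img src="starry-night.jpg" alt="Vincent van Gogh's The Starry Night">

    <!-- Purely decorative image: empty alt text so screen readers skip it. -->
    <img src="divider-flourish.svg" alt="">

    <!-- Complex chart: short alt text, with the full description one link away. -->
    <img src="q1-sales-chart.png" alt="Chart of Q1 sales by region, described below">
    <a href="q1-sales-data.html">View the full Q1 sales data</a>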

If the platform you are designing for allows users to upload images, you should provide a way for the user to enter the alt text along with the image. For example, Twitter allows its users to write alt text when they upload an image to a tweet.

To use the Lens of Images and Icons, ask yourself these questions:

  • Does any image contain information that would be lost if it was not viewable?
  • How could I provide the information in a non-visual way?
  • If the image is controlled by the user, is it possible to provide a way for them to enter the alt text description?
Lens Of Keyboard

Keyboard accessibility is among the most important aspects of an accessible design, yet it is also among the most overlooked.

There are many reasons why a user would use a keyboard instead of a mouse. Users who use a screen reader use the keyboard to read the page. A user with tremors may use a keyboard because it provides better accuracy than a mouse. Even power users will use a keyboard because it’s faster and more efficient.

A user using a keyboard typically uses the tab key to navigate to each control in sequence. A logical tab order greatly helps users know where the next key press will take them. In Western cultures, this usually means from left to right, top to bottom. Unexpected tab orders result in users becoming lost and having to scan frantically for where the focus went.

Sequential tab order also means that they must tab through all controls that are before the one that they want. If that control is tens or hundreds of keystrokes away, it can be a real pain point for the user.

By making the most important user flows nearer to the top of the tab order, we can help enable our users to be more efficient and effective. However, this isn’t always possible nor practical to do. In these cases, providing a way to quickly jump to a particular flow or content can still allow them to be efficient. This is why “skip to content” links are helpful.

A good example of this is Facebook which provides a keyboard navigation menu that allows users to jump to specific sections of the site. This greatly speeds up the ability for a user to interact with the page and the content they want.

Facebook provides a way for all keyboard users to jump to specific sections of the page, or other pages within Facebook, as well as an Accessibility Help menu. (Large preview)

When tabbing through a design, focus styles should always be visible or a user can easily become lost. Just like an unexpected tab order, not having good focus indicators results in users not knowing what is currently focused and having to scan the page.

Changing the look of the default focus indicator can sometimes improve the experience for users. A good focus indicator doesn’t rely on color alone to indicate focus (Lens of Color), and should be distinct enough to easily allow the user to find it. For example, a blue focus ring around a similarly colored blue button may not be visually distinct to discern that it is focused.
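
The sketch below shows one common way of implementing both ideas: a “skip to main content” link that stays hidden until it receives keyboard focus, and a focus indicator that is thick and offset enough to spot easily. The IDs, class names, and color are hypothetical choices.

    <a class="skip-link" href="#main-content">Skip to main content</a>
    <!-- ...header and navigation... -->
    <main id="main-content">...</main>

    <style>
      .skip-link {
        position: absolute;
        left: -9999px;              /* visually hidden until focused */
      }
      .skip-link:focus {
        left: 1rem;
        top: 1rem;
      }
      a:focus,
      button:focus {
        outline: 3px solid #1a73e8; /* thick, offset ring that stands out */
        outline-offset: 2px;
      }
    </style>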

Although this lens focuses on keyboard accessibility, it’s important to note that it applies to any way a user could interact with a website without a mouse. Devices such as mouth sticks, switch access buttons, sip and puff buttons, and eye tracking software all require the page to be keyboard accessible.

By improving keyboard accessibility, you allow a wide range of users better access to your site.

To use the Lens of Keyboard, ask yourself these questions:

  • What keyboard navigation order makes the most sense for the design?
  • How could a keyboard user get to what they want in the quickest way possible?
  • Is the focus indicator always visible and visually distinct?
Lens Of Layout

Layout contributes a great deal to the usability of a site. Having a layout that is easy to follow with easy to find content makes all the difference to your users. A layout should have a meaningful and logical sequence for the user.

With the advent of CSS Grid, being able to change the layout to be more meaningful based on the available space is easier than ever. However, changing the visual layout creates problems for users who rely on the structural layout of the page.

The structural layout is what is used by screen readers and users using a keyboard. When the visual layout changes but not the underlying structural layout, these users can become confused as their tab order is no longer logical. If you must change the visual layout, you should do so by changing the structural layout so users using a keyboard maintain a sequential and logical tab order.
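
As a minimal sketch, a grid can change its visual arrangement at wider breakpoints while the markup, and therefore the tab order, stays in the same sequence. The class name is hypothetical.

    .page {
      display: grid;
      grid-template-columns: 1fr;      /* single column on narrow screens */
      gap: 1rem;
    }

    @media (min-width: 48em) {
      .page {
        /* Content first, sidebar second: the same order as the markup,
           so keyboard and screen reader users aren't surprised. */
        grid-template-columns: 2fr 1fr;
      }
    }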

The layout should be resizable and flexible to a minimum of 320 pixels with no horizontal scroll bars so that it can be viewed comfortably on a phone. The layout should also be flexible enough to be zoomed in to 400% (also with no horizontal scroll bars) for users who need to increase the font size for a better reading experience.

Users using a screen magnifier benefit when related content is in close proximity to one another. A screen magnifier only provides the user with a small view of the entire layout, so content that is related but far away, or changes far away from where the interaction occurred is hard to find and can go unnoticed.

When performing a search on CodePen, the search button is in the top right corner of the page. Clicking the button reveals a large search input on the opposite side of the screen. A user using a screen magnifier would be hard pressed to notice the change and would think the button doesn’t work. (Large preview)

To use the Lens of Layout, ask yourself these questions:

  • Does the layout have a meaningful and logical sequence?
  • What should happen to the layout when it’s viewed on a small screen or zoomed in to 400%?
  • Is content that is related or changes due to user interaction in close proximity to one another?
Lens Of Material Honesty

Material honesty is an architectural design value that states that a material should be honest to itself and not be used as a substitute for another material. It means that concrete should look like concrete and not be painted or sculpted to look like bricks.

Material honesty values and celebrates the unique properties and characteristics of each material. An architect who follows material honesty knows when each material should be used and how to use it without tarnishing itself.

Material honesty is not a hard and fast rule though. It lies on a continuum. Like all values, you are allowed to break them when you understand them. As the saying goes, they are “more what you’d call ‘guidelines’ than actual rules.”

When applied to web design, material honesty means that one element or component shouldn’t look, behave, or function as if it were another element or component. Doing so would cheat the user and could lead to confusion. A common example of this are buttons that look like links or links that look like buttons.

Links and buttons have different behaviors and affordances. A link is activated with the enter key, typically takes you to a different page, and has a special context menu on right click. Buttons are activated with the space key, used primarily to trigger interactions on the current page, and have no such context menu.
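
In markup terms, honesty usually just means reaching for the element whose built-in behavior matches the job, as in this small sketch (the URL and function name are hypothetical).

    <!-- Navigates to another page: use a link. -->
    <a href="/pricing">See pricing</a>

    <!-- Triggers an action on the current page: use a real button. -->
    <button type="button" onclick="saveDraft()">Save draft</button>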

When a link is styled to look like a button or vice versa, a user could become confused as it does not behave and function as it looks. If the “button” navigates the user away unexpectedly, they might become frustrated if they lost data in the process.

“At first glance everything looks fine, but it won’t stand up to scrutiny. As soon as such a website is stress‐tested by actual usage across a range of browsers, the façade crumbles.”

Resilient Web Design

Where this becomes the most problematic is when a link and button are styled the same and are placed next to one another. As there is nothing to differentiate between the two, a user can accidentally navigate when they thought they wouldn’t.

Can you tell which one of these will navigate you away from the page and which won’t? (Large preview)

When a component behaves differently than expected, it can easily lead to problems for users using a keyboard or screen reader. An autocomplete menu that is more than an autocomplete menu is one such example.

Autocomplete is used to suggest or predict the rest of a word a user is typing. An autocomplete menu allows a user to select from a large list of options when not all options can be shown.

An autocomplete menu is typically attached to an input field and is navigated with the up and down arrow keys, keeping the focus inside the input field. When a user selects an option from the list, that option will override the text in the input field. Autocomplete menus are meant to be lists of just text.

The problem arises when an autocomplete menu starts to gain more behaviors. Not only can you select an option from the list, but you can edit it, delete it, or even expand or collapse sections. The autocomplete menu is no longer just a simple list of selectable text.

With the addition of edit, delete, and profile buttons, this autocomplete menu is materially dishonest. (Large preview)

The added behaviors no longer mean you can just use the up and down arrows to select an option. Each option now has more than one action, so a user needs to be able to traverse two dimensions instead of just one. This means that a user using a keyboard could become confused on how to operate the component.

Screen readers suffer the most from this change of behavior as there is no easy way to help them understand it. A lot of work will be required to ensure the menu is accessible to a screen reader by using non-standard means. As such, it might result in a sub-par or inaccessible experience for them.

To avoid these issues, it’s best to be honest to the user and the design. Instead of combining two distinct behaviors (an autocomplete menu and edit and delete functionality), leave them as two separate behaviors. Use an autocomplete menu to just autocomplete the name of a user, and have a different component or page to edit and delete users.

To use the Lens of Material Honesty, ask yourself these questions:

  • Is the design being honest to the user?
  • Are there any elements that behave, look, or function as another element?
  • Are there any components that combine distinct behaviors into a single component? Does doing so make the component materially dishonest?
Lens Of Readability

Have you ever picked up a book only to get a few paragraphs or pages in and want to give up because the text was too hard to read? Hard-to-read content is mentally taxing and tiring.

Sentence length, paragraph length, and complexity of language all contribute to how readable the text is. Complex language can pose problems for users, especially those with cognitive disabilities or who aren’t fluent in the language.

Along with using plain and simple language, you should ensure each paragraph focuses on a single idea. A paragraph with a single idea is easier to remember and digest. The same is true of a sentence with fewer words.

Another contributor to the readability of content is the length of a line. The ideal line length is often quoted to be between 45 and 75 characters. A line that is too long causes users to lose focus and makes it harder to move to the next line correctly, while a line that is too short forces the eyes to jump too often, causing fatigue.
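One simple way to keep the measure in that range is to cap the width of text blocks in CSS. This is only a sketch — the selector and the 65ch value are illustrative, where ch is roughly the width of one character in the current font:

p, li {
  max-width: 65ch; /* keeps most lines within the 45–75 character range */
}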

“The subconscious mind is energized when jumping to the next line. At the beginning of every new line the reader is focused, but this focus gradually wears off over the duration of the line”

— Typographie: A Manual of Design

You should also break up the content with headings, lists, or images to give mental breaks to the reader and support different learning styles. Use headings to logically group and summarize the information. Headings, links, controls, and labels should be clear and descriptive to enhance the user’s ability to comprehend.

To use the Lens of Readability, ask yourself these questions:

  • Is the language plain and simple?
  • Does each paragraph focus on a single idea?
  • Are there any long paragraphs or long blocks of unbroken text?
  • Are all headings, links, controls, and labels clear and descriptive?
Lens Of Structure

As mentioned in the Lens of Layout, the structural layout is what is used by screen readers and users using a keyboard. While the Lens of Layout focused on the visual layout, the Lens of Structure focuses on the structural layout, or the underlying HTML and semantics of the design.

As a designer, you may not write the structural layout of your designs. This shouldn’t stop you from thinking about how your design will ultimately be structured though. Otherwise, your design may result in an inaccessible experience for a screen reader.

Take for example a design for a single elimination tournament bracket.

Large preview

How would you know if this design was accessible to a user using a screen reader? Without understanding structure and semantics, you may not. As it stands, the design would probably result in an inaccessible experience for a user using a screen reader.

To understand why that is, we first must understand that a screen reader reads a page and its content in sequential order. This means that every name in the first column of the tournament would be read, followed by all the names in the second column, then third, then the last.

“George, Fred, Linus, Lucy, Jack, Jill, Fred, Ginger, George, Lucy, Jack, Ginger, George, Ginger, Ginger.”

If all you had was a list of seemingly random names, how would you interpret the results of the tournament? Could you say who won the tournament? Or who won game 6?

With nothing more to work with, a user using a screen reader would probably be a bit confused about the results. To be able to understand the visual design, we must provide the user with more information in the structural design.

This means that as a designer you need to know how a screen reader interacts with the HTML elements on a page so you know how to enhance their experience.

  • Landmark Elements (header, nav, main, and footer)
    Allow a screen reader to jump to important sections in the design.
  • Headings (h1 → h6)
    Allow a screen reader to scan the page and get a high level overview. Screen readers can also jump to any heading.
  • Lists (ul and ol)
    Group related items together, and allow a screen reader to easily jump from one item to another.
  • Buttons
    Trigger interactions on the current page.
  • Links
    Navigate or retrieve information.
  • Form labels
    Tell screen readers what each form input is.

Knowing this, how might we provide more meaning to a user using a screen reader?

To start, we could group each column of the tournament into rounds and use headings to label each round. This way, a screen reader would understand when a new round takes place.

Next, we could help the user understand which players are playing against each other each game. We can again use headings to label each game, allowing them to find any game they might be interested in.

By just adding headings, the content would read as follows:

“__Round 1, Game 1__, George, Fred, __Game 2__, Linus, Lucy, __Game 3__, Jack, Jill, __Game 4__, Fred, Ginger, __Round 2, Game 5__, George, Lucy, __Game 6__, Jack, Ginger, __Round 3__, __Game 7__, George, Ginger, __Winner__, Ginger.”

This is already a lot more understandable than before.

The information still doesn’t answer who won a game though. To know that, you’d have to understand which game a winner plays next to see who won the previous game. For example, you’d have to know that the winner of game four plays in game six to know who advanced from game four.

We can further enhance the experience by informing the user who won each game so they don’t have to go hunting for it. Putting the text “(winner)” after the person who won the round would suffice.

We should also further group the games and rounds together using lists. Lists provide the structural semantics of the design, essentially informing the user of the connected nodes from the visual design.
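As a rough sketch (not the only valid structure), the first round might be marked up with headings and nested lists like this, with the winner noted in text; the remaining rounds follow the same pattern:

<h2>Round 1</h2>
<ol>
  <li>
    <h3>Game 1</h3>
    <ol>
      <li>George (winner)</li>
      <li>Fred</li>
    </ol>
  </li>
  <li>
    <h3>Game 2</h3>
    <ol>
      <li>Linus</li>
      <li>Lucy (winner)</li>
    </ol>
  </li>
  <!-- Games 3 and 4, then Rounds 2 and 3, continue in the same way -->
</ol>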

If we translate this back into a visual design, the result could look as follows:

The tournament with descriptive headings and winner information (shown here with grey background). (Large preview)

Since the headings and winner text are redundant in the visual design, you could hide them just from visual users so the end visual result looks just like the first design.

“If the end result is visually the same as where we started, why did we go through all this?” You may ask.

The reason is that you should always annotate your design with all the necessary structural design requirements needed for a better screen reader experience. This way, the person who implements the design knows to add them. If you had just handed the first design to the implementer, it would more than likely end up inaccessible.

To use the Lens of Structure, ask yourself these questions:

  • Can I outline a rough HTML structure of my design?
  • How can I structure the design to better help a screen reader understand the content or find the content they want?
  • How can I help the person who will implement the design understand the intended structure?
Lens Of Time

Periodically in a design you may need to limit the amount of time a user can spend on a task. Sometimes it may be for security reasons, such as a session timeout. Other times it could be due to a non-functional requirement, such as a time constrained test.

Whatever the reason, you should understand that some users may need more time in order to finish the task. Some users might need more time to understand the content, others might not be able to perform the task quickly, and a lot of the time they could simply have been interrupted.

“The designer should assume that people will be interrupted during their activities”

— The Design of Everyday Things

Users who need more time to perform an action should be able to adjust or remove a time limit when possible. For example, with a session timeout you could alert the user when their session is about to expire and allow them to extend it.
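A rough sketch of that idea in JavaScript — the timings and the /session/extend endpoint are assumptions, and a real implementation would use an accessible dialog rather than confirm():

// Warn the user two minutes before an assumed 20-minute session expires
const SESSION_LENGTH = 20 * 60 * 1000;
const WARNING_BEFORE = 2 * 60 * 1000;
let warningTimer;

function extendSession() {
  // Hypothetical endpoint that renews the session on the server
  return fetch('/session/extend', { method: 'POST' });
}

function startSessionTimer() {
  clearTimeout(warningTimer);
  warningTimer = setTimeout(showTimeoutWarning, SESSION_LENGTH - WARNING_BEFORE);
}

function showTimeoutWarning() {
  const keepWorking = window.confirm('Your session is about to expire. Do you need more time?');
  if (keepWorking) {
    extendSession().then(startSessionTimer);
  }
}

startSessionTimer();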

To use the Lens of Time, ask yourself this question:

  • Is it possible to provide controls to adjust or remove time limits?
Bringing It All Together

So now that you’ve learned about the different lenses of accessibility through which you can view your design, what do you do with them?

The lenses can be used at any point in the design process, even after the design has been shipped to your users. Just start with a few of them at hand, and one at a time carefully analyze the design through a lens.

Ask yourself the questions and see if anything should be adjusted to better meet the needs of a user. As you slowly make changes, bring in other lenses and repeat the process.

By looking through your design one lens at a time, you’ll be able to refine the experience to better meet users’ needs. As you are more inclusive to the needs of your users, you will create a more accessible design for all your users.

Using lenses and insightful questions to examine principles of accessibility was heavily influenced by Jesse Schell and his book “The Art of Game Design: A Book of Lenses.”

(il, ra, yk)
Categories: Around The Web

Art Directing For The Web With CSS Grid Template Areas

Smashing Magazine - Mon, 04/09/2018 - 5:00am
Art Directing For The Web With CSS Grid Template Areas Art Directing For The Web With CSS Grid Template Areas Andrew Clarke 2018-04-09T11:00:23+02:00 2018-04-20T15:32:23+00:00

(This article is kindly sponsored by CoffeeCup Software.) Alright, I’m going to get straight to the point. CSS Grid is important, really important, too important to be one of those "I’ll use it when all browsers support it" properties. That’s because, with CSS Grid, we can now be as creative with layout on the web as we can in print, without compromising accessibility, responsiveness, or usability.

If you’re at all serious about web design or development, you need to be serious about learning and using CSS Grid too. In this article I’m going to explain how to use one aspect, grid-template areas, a way of arranging elements that even a big, dumb mug like me can understand, and one that doesn’t get enough attention.

Now, you want to see some action and some code, I know that, but hold on one Goddam minute. Before you learn "how," I want to teach you "why" it’s important to make the kind of layouts we’ve seen in other media for decades, but have mostly been absent from the web.

Feeling Frustrated

I guess you’ve seen those "which one of these two layouts are you designing today?" tweets, lamenting the current state of design on the web. Even I’ve spoken about how web design’s lost its "soul." I bet you’ve also seen people use CSS Grid to recreate posters or pages from magazines. These technical demonstrations are cool, and they show how easy implementing complex layouts with CSS Grid can be when compared to other methods, but they don’t get to the bottom of why doing this stuff matters.

So what’s the reason? Why’s layout such an important part of design? Well, it all boils down to one thing, and that’s communication.

For what seems like forever, web designers have created templates, then filled them, with little consideration of the relationship between content and layout. I suppose that’s inevitable, given considerations for content management systems, our need to make designs responsive, and the limitations of the CSS properties we’ve used until now. Sure, we’ve made designs that are flexible and usable, but we’ve been missing a key piece of the puzzle: the role that layout plays in delivering a message.

If you’ve been around the block a few times, you’ll know the role color plays in setting the right tone for a design. I don’t need to tell you that type plays its part too. Pick the wrong typeface, and you run the risk of communicating ineffectively and leaving people feeling differently to how you intended.

Layout — closely linked to aspects of typography like the ’measure’ — plays an equally important role. Symmetry and asymmetry, harmony and tension. These principles draw people to your content, guide them, and help them understand it more easily. That’s why crafting the right layout is as important as choosing the most appropriate typeface. Print designers have known this for years.

Telling Stories Through Art Direction

Art direction matters as much on the web as it does in other media, including print, and what I’m going to cover applies as much to promoting digital products as it does to telling stories.

What do you think of when you hear the term ’art direction?’ Do you think about responsive images, presenting alternative crops, sizes or orientations to several screen sizes using the <picture> element or ’sizes’ in HTML? They’ve become useful responsive design and art direction tools, but there’s more to web design than tools.

Do you think of designers like Jason Santa Maria and Trent Walton who sometimes art direct their writing by giving an entry its own distinctive image, layout, and typography? This gets us closer to understanding art direction, but images, layout, and typography are only the result of art direction, not the meaning of it.

So if art direction isn’t exactly those things, what exactly is it? In a sentence, it’s the art of distilling an essential, precise meaning or purpose from a piece of content — be that a magazine article or a list of reasons why to use the coolest app from the hottest start-up — and conveying that meaning or purpose better by using design. We don’t hear much about art direction on the web, but it’s well established in other media, perhaps most memorably magazines and, to some extent, newspapers.

I’m not old enough to remember first hand Alexey Brodovitch’s work on Harper’s Bazaar magazine from 1934 to 1958.

Fig.1. What I love about these designs — particularly his pencil sketches — is how Brodovitch placed his content to perfectly reflect the image that accompanies it.

I do remember Neville Brody’s artistic art direction for the Face magazine and I’m still inspired by it every day.

Fig.2. Even twenty five years after he created them, Brody’s pages from The Face magazine are still remarkable designs.

Art direction is so rarely discussed in relation to the web that you could be forgiven for thinking that it’s not relevant. Perhaps you see art direction as an activity that’s more suited to the print world than it is to the web? Some people might think of art direction as elitist in some way.

I don’t think that any of that’s true. Stories are stories, no matter where they’re told or through what medium. They may be thought-provoking like the ones published on ProPublica, or they might be the story of your company and why people should do business with you. There’s the story of how your charity supports a good cause and why people should donate to it. Then there’s the story of your start-up’s new app and why someone should download it. With all of these stories, there’s a deeper message beyond just telling the facts about what you do or sell.

Art direction is about understanding those messages and deciding how best to communicate them through the organization and presentation of words and visuals. Is art direction relevant for the web? Of course. Art directors use design to help people better understand the significance of a piece of content, and that’s as important on the web as it is in print. In fact, the basic principles of art direction haven’t changed between print and digital.

I’d go further, by saying that art direction is essential to creating cohesive experiences across multiple channels, so that the meaning of a story isn’t lost in the gaps between devices and screen sizes.

David Hillman, formerly of The Guardian and New Statesman and designer of many other publications said:

"In its best form, (art direction) involves the art director having a full and in-depth understanding of what the magazine says, and through design, influencing how it is said."

My friend Mark Porter, coincidentally the former Creative Director at The Guardian also said:

"Design is being in charge of the distribution of elements in space."

CSS Grid makes being in charge of the distribution of elements more possible than ever before.

Art Directing A Hardboiled Story

I guess now is the time to get down to it, so I’m going to tell you how to put some of this to work in a series of Hardboiled examples. I’ll shine a flashlight on layout and how it helps storytelling and then give you the low down on how to develop one of these designs using CSS Grid.

Fig.3. When I conceived the covers for my Hardboiled books, I wanted the story to continue across several ’shots.’ (Left: Cover illustrations by Kevin Cornell. Right: Cover illustrations by Natalie Smith.) (Copyright: Stuff & Nonsense)

First, the backstory. On the cover of my 2010 edition of Hardboiled Web Design (1), a mystery woman in a red dress (there’s always a woman in a red dress) is pointing a gun at our private dick. (Sheesh, I know that feeling.) By the Fifth Anniversary Edition in 2015 (2), the story’s moved on and a shadow moves ominously across the door of our detective’s office. The door flies open, two villains burst in (3), and a fist fight ensues (4). Our mystery woman sure knows how to throw a punch and before you can say "kiss me, deadly" one villain’s tied to a chair and ready to spill the beans (5).

Chapter Three

I’ll start telling that story at the explosive moment when those two villains bust open the door. Now, if you’ve read Scott McCloud’s book ‘Understanding Comics’ you’ll know that panel size affects how long people spend looking at an area, so I want to make the image of our bad guys as large as possible to maximise its impact (1). What the hoods don’t know is that our woman is waiting for them. I use layout to add tension by connecting their eye lines, (2) at the same time drawing a reader’s eyes to where the content starts.

Fig.4. Add tension by connecting eye lines and maximise impact through large images. (View project files on CodePen) (Copyright: Stuff & Nonsense)

Chapter Four

As the first villain bursts onto the scene, I use the left edge of the page, without margins, to represent the open door (1). As most of the action takes place on the right, I create a large spacial zone using the majority of the height and width of the page (2).

Now, when fists fly in all directions, our layout needs to do the same, so my content comes from the top — where whitespace draws the eye down to the bold paragraph (3) — and from the left with the enormous headline (4). You might be wondering why I haven’t mentioned that smaller image in the top-right, but I’ll get to that in a minute.

Fig.5. When fists fly, a layout needs to do the same. (View project files on CodePen) (Copyright: Stuff & Nonsense)

Chapter Five

The fight’s over, and our detective is back in control, so on this final page I use a more structured layout to reflect the order that’s returned. Solid columns of justified text (1) with plenty of whitespace around them add to the feeling of calm. At the same time, the right aligned caption (2) feels edgy and uncomfortable, like the gunpoint interrogation that’s taking place.

Fig.6. We can use layout to create order as well as disorder. (View project files on CodePen) (Copyright: Stuff & Nonsense)

Getting My Hands Dirty

It’s time for a confession. I’m not going to teach you everything you need to know about developing layouts using CSS Grid as there are plenty of smarter people who’ve done that before:

Instead, I’ll show you the inspiration for one grid, how I translated it into a (large screen) layout using columns and rows in CSS Grid, and then placed elements into the spacial zones created using the grid-template areas property. Finally, I’ll deconstruct and alter the design for smaller screen sizes.

The Perfect Beat

My inspiration for the layout I use came from this 1983 design by Neville Brody for The Face Magazine. I was drawn to how Brody cleverly created both horizontal and vertical axes and the large space occupied by the main image.

Fig.7. This layout by Neville Brody for The Face Magazine felt like the perfect starting point for my design.

Look closely at Brody’s grid, and you should spot that he used five columns of equal width.

I did the same by applying the following CSS Grid properties to the margin-less <body> element of my page, where columns one fraction unit wide repeat five times with a 2vw gap between them:

body {
  margin: 0;
  padding: 0;
  display: grid;
  grid-column-gap: 2vw;
  grid-template-columns: repeat(5, 1fr);
}

Fig.8. I combine five equal width columns in different ways to create spacial zones.

In CSS Grid we define a grid module by giving it a name, then we place an element into either a single module or multiple adjacent modules — known as spacial zones — with the grid-template-areas property. Sounds complicated huh? No, not really. It’s one of the easiest and most obvious ways of using CSS Grid, so let’s get to work.

First things first. I have six elements to position, and they are my "Kiss Me, Deadly" title, the largest ’banner’ image, main content, aside paragraph and two images, fig-1 and fig-2. My HTML looks like this:

<body>
  <picture role="banner">…</picture>
  <h1 class="title">…</h1>
  <main>…</main>
  <aside>…</aside>
  <img class="fig-1">
  <img class="fig-2">
</body>

I wrote that markup in the order that makes most sense, just as I would when constructing a narrative. It reads like a dream on small screens and even without styles. I give each element a grid-area value that in a moment I’ll use to place it on my grid:

[role="banner"] { grid-area: banner; } .title { grid-area: title; } main { grid-area: main; } aside { grid-area: aside; } .fig-1 { grid-area: fig-1; } .fig-2 { grid-area: fig-2; }

Your grid area values don’t necessarily need to reflect your element types. In fact, you can use any values, even single letters like a, b, c, or d.

Back with the grid, I add three rows to the columns I created earlier. The height of each row is automatically defined by the height of the content inside it:

body { grid-template-rows: repeat(3, auto); }

Here’s where the magic happens. I literally draw the grid in CSS using the grid-template-areas property, where each period (.) represents one empty module:

body {
  grid-template-areas:
    ". . . . ."
    ". . . . ."
    ". . . . .";
}

Now it’s time to position elements on that grid using the grid-area values I created earlier. I place each element’s value into a module on the grid and if I repeat that value across multiple adjacent modules — either across columns or rows — that element will expand across them to create a spacial zone. Leaving a period (.) will create an empty space:

body {
  grid-template-areas:
    ".     aside .      fig-2  fig-2"
    "title title banner banner banner"
    "fig-1 main  banner banner banner";
}

One more small detail before I finish the layout CSS. I want the content of the aside element to sit at the bottom — close to the title and leaving ample white space above it to draw someone’s eye down — so I use an align-self property that might be familiar from learning Flexbox, but with a new value of ‘end’:

aside { align-self: end; }

Fig.9. That’s it, my CSS Grid layout for larger screens is done. (Copyright: Stuff & Nonsense)

All that remains is to add a few other styles to bring the design to life, including a striking inverse color scheme and a bright, red accent that ties the word "Deadly" in the title to the color of our woman’s dress:

<h1 class="title">Kiss Me, <em>Deadly</em></h1> .title em { font-style: normal; color : #fe3d6b; } Going Up In Smoke

Now, I know you’ve been wondering about that smaller fight image, and I need to admit something. Natalie Smith made only one finished fists flying illustration for my Hardboiled Shot covers, but her sketches were too good to waste. I used CSS Grid to position an inverted version of one pencil sketch above the gun and rotated it with a CSS transform to form a cloud of smoke.
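Roughly, the extra rule looks something like this — the exact transform values here are illustrative, not the ones in the project files, and “inverted” could equally be achieved with a colour inversion:

.fig-2 {
  /* Flip the pencil sketch and tilt it so it reads as smoke rising from the gun */
  transform: scaleY(-1) rotate(-12deg);
}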

Fig.10. CSS Grid and transforms turn this sketch into a cloud of smoke. (Copyright: Stuff & Nonsense)

Breaking It Down

In this article, I’ve shown how to create a layout for large screens, but in reality, I start with a small one and then work up, using breakpoints to add or change styles. With CSS Grid, adapting a layout to various screen sizes is as easy as positioning elements into different grid-template areas. There are two ways that I can do this, first by changing the grid itself:

body {
  grid-template-columns: 50px repeat(2, 1fr);
}

@media screen and (min-width: 48em) {
  body {
    grid-template-columns: repeat(5, 1fr);
  }
}

The second, by positioning elements into different grid-template areas on the same grid:

body {
  grid-template-areas:
    "fig-1  aside  aside  aside  aside"
    "fig-1  title  title  title  title"
    "banner banner banner banner banner"
    "....   main   main   main   main";
}

@media screen and (min-width: 64em) {
  body {
    grid-template-areas:
      "....  aside  ....   fig-2  fig-2"
      "title title  banner banner banner"
      "fig-1 main   banner banner banner";
  }
}

Fig.11. Adapting my layout to various screen sizes is as easy as positioning elements into different grid-template areas. Small screen (left) Medium screen (right). (Copyright: Stuff & Nonsense)

Using CSS Grid Builder

Grid template areas make developing art directed layouts so easy that even a flat-foot like me can do it, but if you’re the type that likes tools to do the dirty work, CSS Grid Builder from CoffeeCup Software might be just the thing for you. You may have used WYSIWYG editors before, so you might be remembering how lousy the code they spat out was. Let me stop you there. CSS Grid Builder outputs clean CSS and accessible markup. Maybe not as clean as you write yourself, but pretty damn close, and the small team who developed it plan to make it even better. My handwritten HTML looks like this:

<picture role="banner"> <source srcset="banner.png" media="(min-width: 64em)"> <img src="banner-small.png" alt="Kiss Me, Deadly"> </picture>

The CSS Grid Builder <picture> element comes wrapped in an extra division, with a few other elements thrown in for good measure:

<div class="responsive-picture banner" role="banner"> <picture> <!--[if IE 9]><video style="display: none;"><![endif]--> <source srcset="banner.png" media="(min-width: 64em)"> <!--[if IE 9]></video><![endif]--> <img alt="Kiss Me, Deadly" src="banner-small.png"> </picture> </div>

Like I said, close enough, and if you don’t believe me, download a set of exported files from my Hardboiled example. Maybe that’ll convince you.

Browsers’ developer tools are getting better at inspecting grids, but CSS Grid Builder helps you build them. Obviously. At its core, CSS Grid Builder is a Chromium-based browser wrapped in a user-interface, and it runs on macOS and Windows. That means that if the browser can render it, the UI tools can write it, with one or two notable exceptions including CSS Shapes.

In fact, CSS Grid Builder builds more than grids, and you can use it to create styles for backgrounds — including gradients, which is very handy — borders, and typography. It even handles Flexbox and multi-column layouts, but you’re here because you want to learn about CSS Grid.

Looking Around The Interface

The interface in CSS Grid Builder is pretty much as you’d expect, with a wide area for the design you’re making on the left and controls over on the right. Those controls include common elements: text, images, interactive buttons and form controls, and layout containers. If you need one of those elements, drag and drop it into your work area.

Drag and drop common elements including text, images, and layout containers.

Press to reveal the Styles tab, and you’ll find controls for naming class and ID attributes, applying styles at specific breakpoints and in particular states. All very useful, but it’s the layout section — somewhat inconveniently tucked away at the bottom of the pane — that’s the most interesting.

Styles layout section contains grid controls.

In this section you can design a grid. Setting up columns and rows to form a layout without visual representation can be one of the hardest parts of learning how ‘grid’ works. The app’s ability to visually define the grid structure is a handy feature, especially when you’re new to CSS Grid. This is the section I’m going to explain.

The Grid Editor contains tools for building a grid visually.

Using CSS Grid Builder I added a container division. When selecting that in the work area, I get access to the Grid Editor. Activate that, and all the tools needed to visually build a grid are there:

  • Add columns and rows
  • Align and justify content and items within each module
  • Size columns and rows using every type of unit including fr and minmax
  • Specify gaps
  • Name grid-template-areas
  • Specify breakpoints

When I’m happy with those settings, "OK" the changes and they’re applied to the design in the work area. Back there, use sliders to preview the results at various breakpoints, and if you’re one of those people who’s worried about the shrinking percentage of people using incapable browsers, CSS Grid Builder also offers settings where you can configure fallbacks. Then just copy and paste CSS styles to somewhere else in your project or export the whole kit and caboodle.

Preview results at various breakpoints, save the project to edit later or export the files.

CSS Grid Builder is currently free while CoffeeCup develops it and if you like what they’re doing, you can throw a few dollars their way to help fund its development.

Cleaning Up

I’m finding it hard to contain my excitement about CSS Grid. Yes, I know I should get out more, but I really do think that it offers us the best chance yet of learning lessons from other media to make the websites we create better at communicating what we aim to convey to our audiences. Whether we make websites for businesses who want to sell more, charities who need to raise more money through donations to good causes, or news outlets who want to tell stories more effectively, CSS Grid plus thoughtful, art directed content makes that all possible.

Now that’s Hardboiled.

I hope you enjoyed this article, now view the project files on CodePen or download the example files.

‘Art Direction for the Web’ by Andy Clarke, the first Hardboiled Web Design ‘shot.’ Shots are a series of short books on ‘Art Directing for the Web,’ ‘Designing with a Browser,’ and ‘Selling Creative Ideas’ to be published throughout 2018.

(ms, ra, il)
Categories: Around The Web

Hit The Ground Running With Vue.js And Firestore

Smashing Magazine - Fri, 04/06/2018 - 7:30am
Hit The Ground Running With Vue.js And Firestore Hit The Ground Running With Vue.js And Firestore Lukas van Driel 2018-04-06T13:30:33+02:00 2018-04-20T15:32:23+00:00

Google Firebase has a new data storage possibility called ‘Firestore’ (currently in beta stage) which builds on the success of the Firebase Realtime Database but adds some nifty features. In this article, we’ll set up the basics of a web app using Vue.js and Firestore.

Let’s say you have this great idea for a new product (e.g. the next Twitter, Facebook, or Instagram, because we can never have too much social, right?). To start off, you want to make a prototype or Minimum Viable Product (MVP) of this product. The goal is to build the core of the app as fast as possible so you can show it to users and get feedback and analyze usage. The emphasis is heavily on development speed and quick iterating.

But before we start building, our amazing product needs a name. Let’s call it “Amazeballs.” It’s going to be legen — wait for it — dary!

Here’s a shot of how I envision it:

The legendary Amazeballs app

Our Amazeballs app is — of course — all about sharing cheesy tidbits of your personal life with friends, in so-called Balls. At the top is a form for posting Balls, below that are your friends’ Balls.

When building an MVP, you’ll need tooling that gives you the power to quickly implement the key features as well as the flexibility to quickly add and change features later on. My choice falls on Vue.js as it’s a Javascript-rendering framework, backed by the Firebase suite (by Google) and its new real-time database called Firestore.


Firestore can be accessed directly using normal HTTP methods, which makes it a full backend-as-a-service solution in which you don’t have to manage any of your own servers but still store data online.

Sounds powerful and daunting, but no sweat, I’ll guide you through the steps of creating and hosting this new web app. Notice how big the scrollbar is on this page; there aren’t a huge number of steps to it. Also, if you want to know where to put each of the code snippets in a code repository, you can see a fully running version of Amazeballs on GitHub.

Let’s Start

We’re starting out with Vue.js. It’s great for Javascript beginners, as you start out with HTML and gradually add logic to it. But don’t underestimate it; it packs a lot of powerful features. This combination makes it my first choice for a front-end framework.

Vue.js has a command-line interface (CLI) for scaffolding projects. We’ll use that to get the bare-bones set-up quickly. First, install the CLI, then use it to create a new project based on the “webpack-simple” template.

npm install -g vue-cli
vue init webpack-simple amazeballs

If you follow the steps on the screen (npm install and npm run dev) a browser will open with a big Vue.js logo.

Congrats! That was easy.

Next up, we need to create a Firebase project. Head on over to https://console.firebase.google.com/ and create a project. A project starts out in the free Spark plan, which gives you a limited database (1 GB data, 50K reads per day) and 1 GB of hosting. This is more than enough for our MVP, and easily upgradable when the app gains traction.

Click on ‘Add Firebase to your web app’ to display the config that you need. We’ll use this config in our application, but in a nice Vue.js manner using shared state.

First npm install firebase, then create a file called src/store.js. This is the spot that we’re going to put the shared state in so that each Vue.js component can access it independently of the component tree. Below is the content of the file. The state only contains some placeholders for now.

import Vue from 'vue';
import firebase from 'firebase/app';
import 'firebase/firestore';

// Initialize Firebase, copy this from the cloud console
// Or use mine :)
var config = {
  apiKey: "AIzaSyDlRxHKYbuCOW25uCEN2mnAAgnholag8tU",
  authDomain: "amazeballs-by-q42.firebaseapp.com",
  databaseURL: "https://amazeballs-by-q42.firebaseio.com",
  projectId: "amazeballs-by-q42",
  storageBucket: "amazeballs-by-q42.appspot.com",
  messagingSenderId: "972553621573"
};
firebase.initializeApp(config);

// The shared state object that any vue component can get access to.
// Has some placeholders that we’ll use further on!
export const store = {
  ballsInFeed: null,
  currentUser: null,
  writeBall: (message) => console.log(message)
};

Now we’ll add the Firebase parts. One piece of code to get the data from the Firestore:

// a reference to the Balls collection
const ballsCollection = firebase.firestore()
  .collection('balls');

// onSnapshot is executed every time the data
// in the underlying firestore collection changes
// It will get passed an array of references to
// the documents that match your query
ballsCollection
  .onSnapshot((ballsRef) => {
    const balls = [];
    ballsRef.forEach((doc) => {
      const ball = doc.data();
      ball.id = doc.id;
      balls.push(ball);
    });
    store.ballsInFeed = balls;
  });

And then replace the writeBall function with one that actually executes a write:

writeBall: (message) => ballsCollection.add({
  createdOn: new Date(),
  author: store.currentUser,
  message
})

Notice how the two are completely decoupled. When you insert into a collection, the onSnapshot is triggered because you’ve inserted an item. This makes state management a lot easier.

Now you have a shared state object that any Vue.js component can easily get access to. Let’s put it to good use.
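For example, the feed only needs to read store.ballsInFeed. A minimal sketch of such a component might look like this — the markup is illustrative, and putting the store object in data is what makes its properties reactive, so the list re-renders whenever onSnapshot updates it:

<template>
  <ul v-if="store.ballsInFeed">
    <li v-for="ball in store.ballsInFeed" :key="ball.id">
      {{ ball.message }}
    </li>
  </ul>
</template>

<script>
import { store } from './store';

export default {
  data() {
    // Expose the shared store to the template
    return { store };
  }
};
</script>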

Post Stuff!

First, let’s find out who the current user is.

Firebase has authentication APIs that help you with the grunt work of getting to know your user. Enable the appropriate ones in the Firebase Console under Authentication → Sign-in method. For now, I’m going to use Google Login — with a very non-fancy button.

Authentication with Google Login

Firebase doesn’t give you any interface help, so you’ll have to create your own “Login with Google/Facebook/Twitter” buttons, and/or username/password input fields. Your login component will probably look a bit like this:

<template>
  <div>
    <button @click.prevent="signInWithGoogle">Log in with Google</button>
  </div>
</template>

<script>
import firebase from 'firebase/app';
import 'firebase/auth';

export default {
  methods: {
    signInWithGoogle() {
      var provider = new firebase.auth.GoogleAuthProvider();
      firebase.auth().signInWithPopup(provider);
    }
  }
}
</script>

Now there’s one more piece of the login puzzle, and that’s getting the currentUser variable in the store. Add these lines to your store.js:

// When a user logs in or out, save that in the store
firebase.auth().onAuthStateChanged((user) => {
  store.currentUser = user;
});

Due to these three lines, every time the currently-logged-in user changes (logs in or out), store.currentUser also changes. Let’s post some Balls!

Login state is stored in the store.js file
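Any component can now react to that value. For instance, here is a hedged sketch that shows the login button only while nobody is signed in — the Login.vue import path is an assumption:

<template>
  <div>
    <login v-if="!store.currentUser"></login>
    <p v-else>Signed in as {{ store.currentUser.displayName }}</p>
  </div>
</template>

<script>
import { store } from './store';
import Login from './Login.vue'; // hypothetical file containing the login component above

export default {
  components: { Login },
  data() {
    return { store };
  }
};
</script>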

The input form is a separate Vue.js component that is hooked up to the writeBall function in our store, like this:

<template>
  <form @submit.prevent="formPost">
    <textarea v-model="message" />
    <input type="submit" value="DUNK!" />
  </form>
</template>

<script>
import { store } from './store';

export default {
  data() {
    return {
      message: null,
    };
  },
  methods: {
    formPost() {
      store.writeBall(this.message);
    }
  },
}
</script>

Awesome! Now people can log in and start posting Balls. But wait, we’re missing authorization. We want you to only be able to post Balls yourself, and that’s where Firestore Rules come in. They’re made up of Javascript-ish code that defines access privileges to the database. You can enter them via the Firestore console, but you can also use the Firebase CLI to install them from a file on disk. Install and run it like this:

npm install -g firebase-tools
firebase login
firebase init firestore

You’ll get a file named firestore.rules where you can add authorization for your app. We want every user to be able to insert their own balls, but not to insert or edit someone else’s. The example below does nicely. It allows everyone to read all documents in the database, but you can only insert if you’re logged in, and the inserted resource has a field “author” that is the same as the currently logged in user.

service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      allow read: if true;
      allow create: if request.auth.uid != null
        && request.auth.uid == request.resource.data.author;
    }
  }
}

It looks like just a few lines of code, but it’s very powerful and can get complex very quickly. Firebase is working on better tooling around this part, but for now, it’s trial-and-error until it behaves the way you want.

If you run firebase deploy, the Firestore rules will be deployed and will be securing your production data in seconds.

Adding Server Logic

On your homepage, you want to see a timeline with your friends’ Balls. Depending on how you want to determine which Balls a user sees, performing this query directly on the database could be a performance bottleneck. An alternative is to create a Firebase Cloud Function that activates on every posted Ball and appends it to the walls of all the author’s friends. This way it’s asynchronous, non-blocking and eventually consistent. Or in other words, it’ll get there.

To keep the examples simple, I’ll do a small demo of listening to created Balls and modifying their message. Not because this is particularly useful, but to show you how easy it is to get cloud functions up-and-running.

const functions = require('firebase-functions');

exports.createBall = functions.firestore
  .document('balls/{ballId}')
  .onCreate(event => {
    var createdMessage = event.data.get('message');
    return event.data.ref.set({
      message: createdMessage + ', yo!'
    }, {merge: true});
  });
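For comparison, the friends-wall fan-out described earlier might look roughly like this. This is only a sketch: the 'friends' and 'walls' collections don’t exist in the demo app, and it assumes ball.author holds the author’s user id:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp(functions.config().firebase);

exports.fanOutBall = functions.firestore
  .document('balls/{ballId}')
  .onCreate(event => {
    const ball = event.data.data();
    // Look up the author's friends, then copy the ball onto each friend's wall
    return admin.firestore()
      .collection('friends').doc(ball.author).collection('list').get()
      .then(friends => {
        const batch = admin.firestore().batch();
        friends.forEach(friend => {
          const wallRef = admin.firestore()
            .collection('walls').doc(friend.id)
            .collection('feed').doc(event.params.ballId);
          batch.set(wallRef, ball);
        });
        return batch.commit();
      });
  });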

Oh, wait, I forgot to tell you where to write this code.

firebase init functions

This creates the functions directory with an index.js. That’s the file you can write your own Cloud Functions in. Or copy-paste mine if you’re very impressed by it.

Cloud Functions give you a nice spot to decouple different parts of your application and have them asynchronously communicate. Or, in architectural drawing style:

Asynchronous communication between the different components of your application

Last Step: Deployment

Firebase has its Hosting option available for this, and you can use it via the Firebase CLI.

firebase init hosting

Choose dist as the public directory, and then ‘Yes’ to rewrite all URLs to index.html. This last option allows you to use vue-router to manage pretty URLs within your app.

Now there’s a small hurdle: the dist folder doesn’t contain an index.html file that points to the right build of your code. To fix this, add an npm script to your package.json:

{ "scripts": { "deploy": "npm run build && mkdir dist/dist && mv dist/*.* dist/dist/ && cp index.html dist/ && firebase deploy" } }

Now just run npm run deploy, and the Firebase CLI will show you the URL of your hosted code!

When To Use This Architecture

This setup is perfect for an MVP. By the third time you’ve done this, you’ll have a working web app in minutes — backed by a scalable database that is hosted for free. You can immediately start building features.

Also, there’s a lot of space to grow. If Cloud Functions aren’t powerful enough, you can fall back to a traditional API running on docker in Google Cloud for instance. Also, you can upgrade your Vue.js architecture with vue-router and vuex, and use the power of webpack that’s included in the vue-cli template.

It’s not all rainbows and unicorns, though. The most notorious caveat is the fact that your clients are immediately talking to your database. There’s no middleware layer that you can use to transform the raw data into a format that’s easier for the client. So, you have to store it in a client-friendly way. Whenever your clients’ requirements change, you’re going to find it pretty difficult to run data migrations on Firebase. For that, you’ll need to write a custom Firestore client that reads every record, transforms it, and writes it back.

Take time to decide on your data model. If you need to change your data model later on, data migration is your only option.
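As a rough illustration of what such a migration script involves — the renamed field is hypothetical, and firebase-admin is used here so the security rules don’t get in the way:

const admin = require('firebase-admin');
admin.initializeApp({ credential: admin.credential.applicationDefault() });

const db = admin.firestore();

db.collection('balls').get().then(snapshot => {
  // Note: a batch is limited to 500 operations, so a real migration
  // would have to chunk the writes
  const batch = db.batch();
  snapshot.forEach(doc => {
    const data = doc.data();
    // Hypothetical transformation: rename 'message' to 'text'
    batch.update(doc.ref, {
      text: data.message,
      message: admin.firestore.FieldValue.delete()
    });
  });
  return batch.commit();
});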

So what are examples of projects using these tools? Amongst the big names that use Vue.js are Laravel, GitLab and (for the Dutch) nu.nl. Firestore is still in beta, so not a lot of active users there yet, but the Firebase suite is already being used by National Public Radio, Shazam, and others. I’ve seen colleagues implement Firebase for the Unity-based game Road Warriors that was downloaded over a million times in the first five days. It can take quite some load, and it’s very versatile with clients for web, native mobile, Unity, and so on.

Where Do I Sign Up?!

If you want to learn more, consider the following resources:

Happy coding!

(da, ra, hj, il)
Categories: Around The Web

The Rise Of Intelligent Conversational UI

Smashing Magazine - Thu, 04/05/2018 - 7:00am
The Rise Of Intelligent Conversational UI The Rise Of Intelligent Conversational UI Burke Holland 2018-04-05T13:00:48+02:00 2018-04-20T15:32:23+00:00

For a long time, we’ve thought of interfaces strictly in a visual sense: buttons, dropdown lists, sliders, carousels (please no more carousels). But now we are staring into a future composed not just of visual interfaces, but of conversational ones as well. Microsoft alone reports that three thousand new bots are built every week on their bot framework. Every. Week.

The importance of Conversational UI cannot be overstated, even if some of us wish it wasn’t happening.

The most important advancement in Conversational UI has been Natural Language Processing (NLP). This is the field of computing that deals not with deciphering the exact words that a user said, but with parsing out of it their actual intent. If the bot is the interface, NLP is the brain. In this article, we’re going to take a look at why NLP is so important, and how you (yes, you!) can build your own.

Speech Recognition vs. NLP

Most people will be familiar with Amazon Echo, Cortana, Siri or Google Home, all of which have an interface that is primarily conversational. They are also all using NLP.

Large preview

Aside from these intelligent assistants, most Conversational UIs have nothing to do with voice at all. They are text driven. These are the bots we chat with in Slack, Facebook Messenger or over SMS. They deliver high quality gifs in our chats, watch our build processes and even manage our pull requests.

Large preview

Conversational UIs built on text are nice because there is no speech recognition component. The text is already parsed.

When it comes to a verbal interaction, the fundamental problem is not recognizing the speech. We’ve mostly got that one down.

OK, so maybe it’s not perfect. I still get voicemails every day like a game of Mad Libs that I never asked to play. iOS just sticks a blank line in whenever they don’t know what exactly was said.

Large preview

Google, on the other hand, just tries to guess. Like this one from my father. I have absolutely no idea what this message is actually trying to say other than “Be Safe” which honestly sounds like my mom, and not my dad. I have a hard time believing he ever said that. I don’t trust the computer.

Large preview

I’m picking on voice mail transcriptions here, which might be the hardest speech recognition to do given how degraded the audio quality is.

Nevertheless, speech recognition is largely a solved problem. It’s even built right into Chrome and it works remarkably well.

Large preview

After we solved the problem of speech recognition, we started to use it everywhere. That was unfortunate because speech recognition on its own doesn’t do us a whole lot of good. Interfaces that rely solely on speech recognition require the user to state things in a precise way, and they can only state the limited number of exact words or phrases that the interface knows about. This is not natural. This is not how a conversation works.

Without NLP, Conversational UI can be a true nightmare.

Conversational UI Without NLP

We’re probably all familiar with automated phone menus. These are known as Interactive Voice Response systems — or IVRs for short. They are designed to take the place of the traditional operator and automatically transfer callers to the right place without having to talk to a human. On the surface, this seems like a good idea. In practice, it’s mostly just you waiting while a recorded voice reads out a list of menu items that “may have changed.”

Large preview

A 2011 study from New York University found that 83% of people feel IVR systems “provide either no benefit at all, or only a cost savings benefit to the company.” They also noted that IVR systems “score lower than any other service option.” People would literally rather do anything else than use an automated phone menu.

NLP has changed the IVR market rather significantly in the past few years. NLP can pick a user’s intent out of anything they say, so it’s better to just let them say it and then determine if you support the action.

Check out how AT&T does it.

AT&T has a truly intelligent Conversational UI. It uses NLP to let me just state my intent. Also, notice that I don’t have to know what to say. I can fumble all around and it still picks out my intent.

AT&T also uses information that it already has (my phone number) and then leverages text messaging to send me a link to a traditional visual UI, which is probably a much better UX for making a payment. NLP drives the whole experience here. Without it, the rest of the interaction would not be nearly as smooth.

NLP is powerful, but more importantly, it is also accessible to developers everywhere. You don’t have to know a thing about Machine Learning (ML) or Artificial Intelligence (AI) to use it. All you need to know how to do is make an AJAX call. Even I can do that!

Building An NLP Interface

So much of Machine Learning still remains inaccessible to developers. Even the best YouTube videos on the subject quickly become hard to follow with subjects like Neural Networks and Gradient Descents. We have, however, made significant progress in the field of Language Processing, to the point that it’s accessible to developers of nearly any skill level.

Natural Language Processing differs based on the service, but the overall idea is that the user has an intent, and that intent contains entities. That means exactly nothing to you at the moment, so let’s work up a hypothetical Home Automation bot and see how this works.

The Home Automation Example

In the field of Natural Language Processing, the canonical “Hello World” is usually a Home Automation demo. This is because it helps to clearly demonstrate the fundamental concepts of NLP without overloading your brain.

A Home Automation Bot is a service that can control hypothetical lights in a hypothetical house. For instance, we might want to say “Turn on the kitchen lights”. That is our intent. If we said “Hello”, we are clearly expressing a different intent. Inside of that intent, there are two pieces of information that we need to complete the action:

  1. The ‘Location’ of the light (kitchen)
  2. The desired state of the lights ‘Power’ (on/off)

These (Location, Power) are known as entities.

When we are finished designing our NLP interface, we are going to be able to call an HTTP endpoint and pass it our intent: “Turn on the kitchen lights.” That endpoint will return to us the intent (Control Lights) and two objects representing our entities: Location and Power. We can then pass those into a function which actually controls our lights…

function controlLights(location, power) {
  console.log(`Turning ${power} the ${location} lights`);
  // TODO: Call an imaginary endpoint which controls lights
}

There are a lot of NLP services out there that are available today for developers. For this example, I’m going to show the LUIS project from Microsoft because it is free to use.

LUIS is a completely visual tool, so we won’t actually be writing any code at all. We’ve already talked about Intents and Entities, so you already know most of the terminology that you need to know to build this interface.

The first step is to create a “Control Lights” intent in LUIS.

Large preview

Before I do anything with that intent, I need to define my Location and Power entities. Entities can be different types — kind of like types in a programming language. You can have dates, lists and even entities that are related to other entities. In this case, Power is a list of values (on, off) and Location is a simple entity, which can be any value.

It will be up to LUIS to be smart enough to figure out exactly what the Location is.

Large preview Large preview

Now we can begin to train this model to understand all of the different ways that we might ask it to control the lights in a different location. Let’s think of all the different ways that we could do that:

  • Turn off the kitchen lights;
  • Turn off the lights in the office;
  • The lights in the living room, turn them on;
  • Lights, kitchen, off;
  • Turn off the lights (no location).

As I feed these into the Control Lights intent as utterances, LUIS tries to determine where in the intent the entities are. You can see that because Power is a discrete list of values, it gets that right every time.

Large preview

But it has no idea what a Location even is. LUIS wants us to go through this list and tell it where the Location is. That’s done by clicking on a word or group of words and assigning it to the right entity. As we are doing this, we are really creating a machine learning model that LUIS is going to use to statistically estimate what qualifies as a Location.

Large preview

When I’m done telling LUIS where in these utterances all the locations are, my dashboard looks like this…

Large preview

Now we train the model by clicking on the “Train” button at the top. Do you feel like a data scientist yet?

Now I can test it using the test panel. You can see that LUIS is already pretty smart. The Power is easy to pick out, but it can actually pick out Locations it has never seen before. It’s doing what your brain does — using the information that it has to make an educated guess. Machine Learning is equal parts impressive and scary.

Large preview

If we try hard enough, we can fool the AI. The more utterances we give it and label, the smarter it will get. I added 35 utterances to mine before I was done and it is close to bullet proof.

So now we get to the important part, which is how we actually use this NLP in an app. LUIS has a “Publish” menu option which allows us to publish our model to the internet where it’s exposed via a single HTTP endpoint. It will look something like this…

https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/c4396135-ee3f-40a9-8b83-4704cddabf7a?subscription-key=19d29a12d3fc4d9084146b466638e62a&verbose=true&timezoneOffset=0&q=

The very last part of that query string is a q= variable. This is where we would pass our intent.

https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/c4396135-ee3f-40a9-8b83-4704cddabf7a?subscription-key=19d29a12d3fc4d9084146b466638e62a&verbose=true&timezoneOffset=0&q=turn on the kitchen lights

The response that we get back is just a JSON object.

{ "query": "turn on the kitchen lights", "topScoringIntent": { "intent": "Control Lights", "score": 0.999999046 }, "intents": [ { "intent": "Control Lights", "score": 0.999999046 }, { "intent": "None", "score": 0.0532306843 } ], "entities": [ { "entity": "kitchen", "type": "Location", "startIndex": 12, "endIndex": 18, "score": 0.9516622 }, { "entity": "on", "type": "Power", "startIndex": 5, "endIndex": 6, "resolution": { "values": [ "on" ] } } ] }

Now this is something that we can work with as developers! This is how you add NLP to any project — with a single REST endpoint. Now you’re free to create a bot with some real brains!
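As a sketch of what that looks like in code — the placeholders stand in for your own app id and key, controlLights() is the function defined earlier, and the browser’s fetch is assumed (in Node you’d pull in a package such as node-fetch):

// Call the published LUIS endpoint and hand the entities to controlLights()
const LUIS_URL = 'https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<your-app-id>?subscription-key=<your-key>&verbose=true&q=';

function handleUtterance(utterance) {
  return fetch(LUIS_URL + encodeURIComponent(utterance))
    .then(response => response.json())
    .then(result => {
      if (result.topScoringIntent.intent !== 'Control Lights') {
        return; // not an intent this bot knows how to handle
      }
      // The entities array has the shape shown in the JSON above
      const location = result.entities.find(e => e.type === 'Location');
      const power = result.entities.find(e => e.type === 'Power');
      if (location && power) {
        controlLights(location.entity, power.entity);
      }
    });
}

handleUtterance('turn on the kitchen lights');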

Brian Holt used the browser speech API and a LUIS model to create a voice powered calculator that is running right inside of CodePen. Chrome is required for the speech API.

See the Pen Voice Calculator by Brian Holt (@btholt) on CodePen.

Bot Design Is Still Hard

Having a smart bot is only half the battle. We still need to account for any of the actions that our system might expose, and that can lead to a lot of different logical paths which makes for messy code.

Conversations also happen in stages, so the bot needs to be able to intelligently direct users down the right path without frustrating them or being unable to recover when something goes wrong. It needs to be able to recover when the conversation dies midstream and then starts again. That’s a whole other article and I’ve included some resources below to help.

When it comes to language understanding, the AI platforms are mature and ready to use today. While that won’t help you perfectly design your bot, it will be a key component to building a bot that people don’t hate.

Great UI Is Just Great UI

A final note: As we saw from the AT&T example, a truly smart interface combines great speech recognition, Natural Language Processing, different types of conversational UI (speech and text) and even a visual UI. In short, great UI is just that — great UI — and it is not a zero sum game. Great UIs will leverage all of the technology available to provide the best possible user experience.

Special thanks to Mat Velloso for his input on this article.

Further Resources:

(rb, ra, yk, il)
Categories: Around The Web

Analyzing Your Company’s Social Media Presence With IBM Watson And Node.js

Smashing Magazine - Wed, 04/04/2018 - 7:00am
Jamie Munro — 2018-04-04T13:00:50+02:00

If you are unfamiliar with Machine Learning (ML) technology, it has existed in science fiction for many years and is finally reaching its maturity in our society. One of the first ML examples I saw as a kid was in Star Trek: The Next Generation, when Lieutenant Tasha Yar trains with her holographic opponent, which learns how to fight and defeat her better in future battles.

In today’s society, China has developed a “lane robot” that is a guard rail controlled by a computer system that can direct the flow of traffic into different lanes, increasing safety and improving traveling time. This is done automatically based on time of day and how much traffic is flowing in each direction.

Another example is Pittsburgh unveiling AI traffic signals that automatically detect traffic patterns and alter the traffic lights on the fly. Each light is controlled independently to help reduce both the commuting time and the idling time of cars. According to the article, pilot tests have demonstrated a reduction in travel time of 25% and in idling time of over 40%. There are, of course, hundreds of other examples of ML technology that make intelligent decisions based on the content it consumes.

To accomplish today’s goal, I am going to demonstrate (using Node.js) how to perform a search with Twitter’s API to retrieve content that will be inputted into the ML algorithm to be analyzed. This way, you’ll be provided with characteristics about the users who wrote that specific content so that you can get a better understanding of your audience. The example application will be written using Node.js as the server.

It is beyond the scope of this article to demonstrate how to write an ML algorithm. Instead, to aid in the analysis, I will demonstrate how to use IBM’s Watson to help you understand the general personality of your social media audience.

What Is IBM Watson?

In 2011, Watson began as a computer system that attempted to index the (entire) Internet. It was originally programmed to answer questions posed in ordinary English. Watson competed and won on the TV show Jeopardy!, claiming a $1,000,000 cash prize.

Watson was now a proven success.

With the fame of winning on Jeopardy!, IBM has continued to push Watson’s capabilities. Watson has evolved into an enterprise-level application focused on Artificial Intelligence (AI), which you can train to identify what you care about most, allowing you to make smarter decisions automatically.

The suite of Watson’s services is divided into six high-level categories:

  1. Conversation
The services in this category allow you to build intelligent chatbots or a virtual customer service agent.
  2. Knowledge
    This category is focused on teaching Watson how to interpret data to unlock hidden value and monitor trends.
  3. Vision
    This service provides the ability to tag content inside an image that is used to train Watson to be able to automatically recognize the same pattern inside of other images.
  4. Speech
    These services provide the ability to convert speech to text and the inverse, text to speech.
  5. Language
    This category is split between translating one language to another as well as interpreting the text to predict what predefined category the text belongs to.
  6. Empathy
This category is devoted to understanding the content’s tone, personality, and emotional state. Inside this category is a service called “Personality Insights” that will be used in this article to predict personality characteristics from the social media content we provide it.

This article will be focusing on understanding the personality of the content that we will fetch from Twitter. However, as you can see, Watson provides many other AI features that you can explore to automate many other processes simply through training and content aggregation.

Personality Insights

Personality Insights will analyze content and help you understand the habits and preferences at an individual level and at scale. This is called the ‘personality profile.’ The profile is split into two high-level groups: Personality characteristics and Consumption preferences. These groups are further broken down into more finite components.

Note: To help understand the high-level concepts (before we deep dive into the results), the Personality Insights documentation provides this helpful summary describing how the profile is inferred from the content you provide it.

Big Five Personality Traits. Image courtesy: IBM.com. (Large preview)

Personality Characteristics

The Personality Insights service infers personality characteristics based on three primary models:

  • The ‘Big Five’ personality characteristics represent the most widely used model for generally describing how a person engages with the world. The model includes five primary dimensions:
    • Agreeableness
    • Conscientiousness
    • Extraversion
    • Emotional range
    • Openness
      Note: Each dimension has six facets that further characterize an individual according to the dimension.
  • Needs describe which aspects of a product will resonate with a person. The model includes twelve characteristic needs:
    • Excitement
    • Harmony
    • Curiosity
    • Ideal
    • Closeness
    • Self-expression
    • Liberty
    • Love
    • Practicality
    • Stability
    • Challenge
    • Structure
  • Values describe motivating factors that influence a person’s decision making. The model includes five values:
    • Self-transcendence / Helping others
    • Conservation / Tradition
    • Hedonism / Taking pleasure in life
    • Self-enhancement / Achieving success
    • Open to change / Excitement

For more information, see Personality models.

Consumption preferences

Based on the personality characteristics inferred from the input text, the service can also return an indication of the author’s consumption preferences. ‘Consumption preferences’ indicate the author’s likelihood to pursue different products, services, and activities. The service groups the individual preferences into eight categories:

  • Shopping
  • Music
  • Movies
  • Reading and learning
  • Health and activity
  • Volunteering
  • Environmental concern
  • Entrepreneurship

Each category contains from one to as many as a dozen individual preferences.

Note: For more information, see Consumption preferences. For a more in-depth overview of a particular point of interest, I suggest you refer to the Personality Insights documentation.

To be effective, Watson requires a minimum of a hundred words to provide an insight into the consumer’s personality. The more words provided, the better Watson can analyze and determine the consumer’s preference.

This means, if you wish to target individuals, you will need to collect more data than one or two tweets from a specific person. However, if a user writes a product review, blog post, email, or anything else related to your company, this could be analyzed on both an individual level and at scale.

To begin, let’s start by setting up the Personality Insights service to begin analyzing a real-world example.

Configuring The Personality Insights Service

Watson is an enterprise application, but IBM offers a free, limited service. Once you've created an account and are logged in, you will need to add the Personality Insights service. IBM offers a Lite plan that is free. The Lite plan is limited to 1,000 API calls per month and is automatically deleted after 30 days — perfect for our demonstration.

Create the Personality Insights Service. (Large preview)

Once the service has been added, we will need to retrieve the service’s credentials to perform API calls against it. From Watson’s Dashboard, your service should be displayed. After you've selected the service, you'll find a link to view the Service credentials in the left-hand menu. You will need to create a new ‘Credential.’ A unique name is required and optional configuration parameters can be defaulted for this login. For now, we will leave the configuration options empty.

After you have created a credential, select the ‘View’ credentials link. This will display the API’s URL, your username, and password required to securely execute API calls. Save these somewhere safe as we will need them in the next step.
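The examples that follow hard-code the username and password for brevity. If you would rather keep credentials out of source control, one option — my own convention, not something Watson requires — is to read them from environment variables at startup, along these lines:

// A minimal sketch, assuming you export WATSON_PI_USERNAME and
// WATSON_PI_PASSWORD in your shell before running the script.
var credentials = {
  url: process.env.WATSON_PI_URL || 'https://gateway.watsonplatform.net/personality-insights/api',
  username: process.env.WATSON_PI_USERNAME,
  password: process.env.WATSON_PI_PASSWORD
};

if (!credentials.username || !credentials.password) {
  throw new Error('Set WATSON_PI_USERNAME and WATSON_PI_PASSWORD before running.');
}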

Testing The Personality Insights Service

To perform API calls, I am going to use Node.js. If you already have Node.js installed, you can move on to the next step; otherwise, follow the instructions to set up Node.js from the official download page.

To demonstrate how to use the Personality Insights, I am going to create a new Node.js project on my computer. With a command prompt open, navigate to the directory where your Node.js projects will be stored and create your new project:

mkdir watson-sentiments
cd watson-sentiments
npm init

To assist with making the API calls to Watson, I am going to leverage the NPM Package: Watson Developer Cloud Node.js SDK. This package can be installed via the command prompt:

npm install watson-developer-cloud --save

Before making the first call, the PersonalityInsightsV3 object needs to be instantiated with the credentials from the previous section. Begin by creating a new file called index.js that will contain the Node.js code.

Here is an example of configuring the class so it is ready to make API calls:

var PersonalityInsightsV3 = require('watson-developer-cloud/personality-insights/v3');

var personality_insights = new PersonalityInsightsV3({
  "url": "https://gateway.watsonplatform.net/personality-insights/api",
  "username": "**************************",
  "password": "*************",
  "version_date": "2017-12-01"
});

The personality_insights variable is what we will use to interact with the API for the Personality Insights service. Let’s review how to execute a call and return a personality profile:

var fs = require('fs');

personality_insights.profile({
  "contentItems": [
    {
      "content": "Some content that contains more than 100 words...",
      "contenttype": "text/plain",
      "created": 1447639154000,
      "id": "666073008692314113",
      "language": "en"
    }
  ],
  "consumption_preferences": true
}, (err, response) => {
  if (err) throw err;

  fs.writeFile("results.txt", JSON.stringify(response, null, 2), function(err) {
    if (err) throw err;
    console.log("Results were saved!");
  });
});

The profile function accepts an array of contentItems. Each content item contains the actual text along with a few additional properties that help Watson interpret it.

When this is executed, the results are written to a text file (the results are too large to write in the console). The result is an object that contains the following high-level properties (see the short sketch after this list for how to read a few of them back out):

  • word_count
    The count of words interpreted.
  • processed_language
    The language of the content provided, e.g. “en”.

  • Personality
    This is an array of the ‘Big Five’ personality characteristics (Openness, Conscientiousness, Extraversion, Agreeableness, and Emotional range). Each characteristic contains an overall percentile for that characteristic (e.g. 0.8100175318417588). To ascertain more detail, there is an array called children that provides more in-depth insight. For example, a child category under ‘Openness’ is ‘Adventurousness’ that contains its own percentile.
  • Needs
This is an array of the twelve characteristics that define which aspects of a product a person will resonate with (Excitement, Harmony, Curiosity, Ideal, Closeness, Self-expression, Liberty, Love, Practicality, Stability, Challenge, and Structure). Each characteristic contains a percentile of how the content was interpreted.
  • Values
    This is an array of the five characteristics that describe motivating factors that influence a person’s decision making (Self-transcendence / Helping others, Conservation / Tradition, Hedonism / Taking pleasure in life, Self-enhancement / Achieving success, and Open to change / Excitement). Each characteristic contains a percentile of how the content was interpreted.
  • Behavior
This is an array that contains thirty-one elements. Each element provides a percentile of when the content was created. Seven of the elements define the days of the week (Sunday through Saturday). The remaining twenty-four elements define the hours of the day. This helps you understand when customers interact with your product.
  • consumption_preferences
This is an array that contains eight different categories, each with as many as a dozen child categories, providing a percentile of likelihood to pursue different products, services, and activities (Shopping, Music, Movies, Reading and learning, Health and activity, Volunteering, Environmental concern, and Entrepreneurship).
  • Warnings
    This is an array that provides messages if a problem was encountered interpreting the content provided.
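To make that structure concrete, here is a hypothetical sketch that reads the saved results.txt file and prints the ‘Big Five’ percentiles and the consumption preference categories. The lowercase field names (personality, consumption_preferences, name, percentile) assume the v3 response format shown in the CodePen below.

var fs = require('fs');

var profile = JSON.parse(fs.readFileSync('results.txt', 'utf8'));

// Print the overall percentile for each of the 'Big Five' dimensions.
profile.personality.forEach(function(trait) {
  console.log(trait.name + ': ' + (trait.percentile * 100).toFixed(1) + '%');
});

// List the consumption preference categories that came back.
profile.consumption_preferences.forEach(function(category) {
  console.log(category.name);
});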

Here is a CodePen of the formatted results:

See the Pen Example Watson Results by Jamie Munro (@endyourif) on CodePen.

Configuring Twitter

To search Twitter for relevant tweets, I am going to use the Twitter NPM package. From a console window where the application is hosted, run the following command to install:

npm install twitter --save

Before we can implement the Twitter package, you need to create a Twitter application.

Retrieving Twitter’s Access Tokens. (Large preview)

Once you’ve created your application, you need to retrieve the authorization keys required to perform API calls. With your application created, navigate to the ‘Keys and Access Tokens’ page. Since we are not performing API calls against users of Twitter, OAuth integration is not required. Instead, we need only the following four keys:

  1. Consumer Key
  2. Consumer Secret
  3. Access Token
  4. Access Token Secret

The last two keys need to be generated near the bottom of the ‘Keys and Access Tokens’ page. With the keys in hand, here is an example of searching for tweets about #SmashingMagazine:

var Twitter = require('twitter');

var client = new Twitter({
  consumer_key: '*********************',
  consumer_secret: '******************',
  access_token_key: '******************',
  access_token_secret: '****************'
});

client.get('search/tweets', { q: '#SmashingMagazine' }, function(error, tweets, response) {
  if (error) throw error;
  console.log(tweets);
});

The result of this code will log a list of tweets about Smashing Magazine. For the purposes of this demonstration, the following fields are of interest to us:

  1. id
  2. created_at
  3. text
  4. metadata.iso_language_code

These are the fields we will feed Watson.

Integrating Personality Insights With Twitter

With Twitter and Watson both set up, it’s time to integrate the two and see the results. To make it interesting, let’s search for #DonaldTrump to see what the world thinks about the President of the United States. Here is the code example to search Twitter, feed the results into Watson, and write the results to a text file:

var fs = require('fs');
var Twitter = require('twitter');

var client = new Twitter({
  consumer_key: '*********************',
  consumer_secret: '******************',
  access_token_key: '******************',
  access_token_secret: '****************'
});

var PersonalityInsightsV3 = require('watson-developer-cloud/personality-insights/v3');

var personality_insights = new PersonalityInsightsV3({
  "url": "https://gateway.watsonplatform.net/personality-insights/api",
  "username": "**************************",
  "password": "*************",
  "version_date": "2017-12-01"
});

client.get('search/tweets', { q: '#DonaldTrump' }, function(error, tweets, response) {
  if (error) throw error;

  var contentItems = [];

  // Loop through the tweets
  for (var i = 0; i < tweets.statuses.length; i++) {
    var tweet = tweets.statuses[i];

    contentItems.push({
      "content": tweet.text,
      "contenttype": "text/plain",
      "created": new Date(tweet.created_at).getTime(),
      "id": tweet.id,
      "language": tweet.metadata.iso_language_code
    });
  }

  // Call Watson with the tweets
  personality_insights.profile({
    "contentItems": contentItems,
    "consumption_preferences": true
  }, (err, response) => {
    if (err) throw err;

    // Write the results to a file
    fs.writeFile("results.txt", JSON.stringify(response, null, 2), function(err) {
      if (err) throw err;
      console.log("Results were saved!");
    });
  });
});

Here is another CodePen of the formatted results that I received:

See the Pen Donald Trump Watson Results by Jamie Munro (@endyourif) on CodePen.

What Do The Results Say?

Once we’ve analyzed the ‘Openness’ trait of the ‘Big Five,’ we can infer the following:

  • Emotion is quite low at 13%
  • Imagination is average at 54%
  • Intellect is very high at 96%
  • Authority challenging is also quite high at 87%

The ‘Conscientiousness’ trait at a high level is average at 46%, compared with the ‘Openness’ high-level average of 88%, whereas ‘Agreeableness’ is very low at only 25%. I guess people on Twitter don’t like to agree with Donald Trump.

Moving on to the ‘Needs’: the sub-categories of ‘Curiosity’ and ‘Structure’ are in the 60th percentile, compared to the other categories, which fall below the 10th percentile (Excitement, Harmony, etc.).

And finally, under ‘Values,’ the sub-category that stands out to me as interesting is ‘Openness to change’ at an abysmal 6%.

Depending on when you perform your search, your results may vary, as the Twitter search is limited to tweets from the seven days before executing the example.

From these results, I would determine that the average person who tweets about Donald Trump is quite intellectual, challenges authority, and is not open to change.

These results would allow you to automatically alter how you target your content towards your audience. You will need to determine which categories are of interest and which percentiles you wish to target. With this ammunition, you can begin automating.

What Else Can I Do With Watson?

As I mentioned at the beginning of this article, Watson offers many other different services. With these services, you could automate many different parts of common business processes. For example:

  • Building a chat bot that can intelligently answer questions based on a knowledge base of information;
  • Build an application where you dictate what you want written to Watson by using the speech to text functionality;
  • Automatically translate your content into different languages to create a multi-lingual site or knowledge base;
  • Teach Watson how to look for specific patterns in images. This could be used to determine if a logo is embedded into a photo.

This, of course, is a very small subset that my limited imagination can postulate. I’m sure you can think of many other ways to leverage Watson’s immense capabilities.

If you are looking for more examples, IBM has an entire GitHub repository dedicated to their Node.js SDK. The example folder contains over ten sample applications covering speech to text, text to speech, tone analysis, and visual recognition, to name just a few.

Conclusion

Before Watson can run away with technological growth, resulting in the singularity where Artificial Intelligence destroys mankind, this article demonstrated how you can turn social media content into a powerful understanding of how the people creating that content think. Using the results from Watson, your application can use the categories of interest where the percentile exceeds or falls below a predetermined amount to change how you target your audience.

If you have other interesting uses of Watson or how you are using the Personality Insights, be sure to leave a comment below.

(rb, ra, yk, il)
Categories: Around The Web

Planning for Everything

Design Blog - Tue, 04/03/2018 - 9:22am

A note from the editors: We’re pleased to share an excerpt from Chapter 7 (“Reflecting”) of Planning for Everything: The Design of Paths and Goals by Peter Morville, available now from Semantic Studios.

Once upon a time, there was a happy family. Every night at dinner, mom, dad, and two girls who still believed in Santa played a game. The rules are simple. Tell three stories about your day, two true, one false, and see who can detect the fib. Today I saw a lady walk a rabbit on a leash. Today I found a tooth in the kitchen. Today I forgot my underwear. The family ate, laughed, and learned together, and lied happily ever after.

There’s truth in the tale. It’s mostly not false. We did play this game, for years, and it was fun. We loved to stun and bewilder each other, yet the big surprise was insight. In reflecting on my day, I was often amazed by oddities already lost. If not for the intentional search for anomaly, I’d have erased these standard deviations from memory. The misfits we find, we rarely recall.

We observe a tiny bit of reality. We understand and remember even less. Unlike most machines, our memory is selective and purposeful. Goals and beliefs define what we notice and store.  To mental maps we add places we predict we’ll need to visit later. It’s not about the past. The intent of memory is to plan.

In reflecting we look back to go forward. We search the past for truths and insights to shift the future. I’m not speaking of nostalgia, though we are all borne back ceaselessly and want what we think we had. My aim is redirection. In reflecting on inconvenient truths, I hope to change not only paths but goals.

Figure 7-1. Reflection changes direction.

We all have times for reflection. Alone in the shower or on a walk, we retrace the steps of a day. Together at lunch for work or over family dinner, we share memories and missteps. Some of us reflect more rigorously than others. Given time, it shows.

People who as a matter of habit extract underlying principles or rules from new experiences are more successful learners than those who take their experiences at face value, failing to infer lessons that can be applied later in similar situations.1

In Agile, the sprint retrospective offers a collaborative context for reflection. Every two to four weeks, at the end of a sprint, the team meets for an hour or so to look back. Focal questions include 1) what went well? 2) what went wrong? 3) how might we improve? In reflecting on the plan, execution, and results, the team explores surprises, conflicts, roadblocks, and lessons.

In addition to conventional analysis, a retrospective creates an opportunity for double loop learning. To edit planned actions based on feedback is normal, but revising assumptions, goals, values, methods, or metrics may effect change more profound. A team able to expand the frame may hack their habits, beliefs, and environment to be better prepared to succeed and learn.

Figure 7-2. Double loop learning.

Retrospectives allow for constructive feedback to drive team learning and bonding, but that’s what makes them hard. We may lack courage to be honest, and often people can’t handle the truth. Our filters are as powerful as they are idiosyncratic, which means we’re all blind men touching a tortoise, or is it a tree or an elephant? It hurts to reconcile different perceptions of reality, so all too often we simply shut up and shut down.

Search for Truth

To seek truth together requires a culture of humility and respect. We are all deeply flawed and valuable. We must all speak and listen. Ideas we don’t implement may lead to those we do. Errors we find aren’t about fault, since our intent is a future fix. And counterfactuals merit no more confidence than predictions, as we never know what would have happened if.

Reflection is more fruitful if we know our own minds, but that is harder than we think. An imperfect ability to predict actions of sentient beings is a product of evolution. It’s quick and dirty yet better than nothing in the context of survival in a jungle or a tribe. Intriguingly, cognitive psychology and neuroscience have shown we use the same theory of mind to study ourselves.

Self-awareness is just this same mind reading ability, turned around and employed on our own mind, with all the fallibility, speculation, and lack of direct evidence that bedevils mind reading as a tool for guessing at the thought and behavior of others.2

Empirical science tells us introspection and consciousness are unreliable bases for self-knowledge. We know this is true but ignore it all the time. I’ll do an hour of homework a day, not leave it to the end of vacation. If we adopt a dog, I’ll walk it. If I buy a house, I’ll be happy. I’ll only have one drink. We are more than we think, as Walt Whitman wrote in Song of Myself.

Do I contradict myself?
Very well then I contradict myself
(I am large, I contain multitudes.)

Our best laid plans go awry because complexity exists within as well as without. Our chaotic, intertwingled bodyminds are ecosystems inside ecosystems. No wonder it’s hard to predict. Still, it’s wise to seek self truth, or at least that’s what I think.

Upon reflection, my mirror neurons tell me I’m a shy introvert who loves reading, hiking, and planning. I avoid conflict when possible but do not lack courage. Once I set a goal, I may focus and filter relentlessly. I embrace habit and eschew novelty. If I fail, I tend to pivot rather than persist. Who I am is changing. I believe it’s speeding up. None of these traits is bad or good, as all things are double-edged. But mindful self awareness holds value. The more I notice the truth, the better my plans become.

Years ago, I planned a family vacation on St. Thomas. I kept it simple: a place near a beach where we could snorkel. It was a wonderful, relaxing escape. But over time a different message made it past my filters. Our girls had been bored. I dismissed it at first. I’d planned a shared experience I recalled fondly. It hurt to hear otherwise. But at last I did listen and learn. They longed not for escape but adventure. Thus our trip to Belize. I found planning and executing stressful due to risk, but I have no regrets. We shared a joyful adventure we’ll never forget.

Way back when we were juggling toddlers, we accidentally threw out the mail. Bills went unpaid, notices came, we swore we’d do better, then lost mail again. One day I got home from work to find an indoor mailbox system made with paint cans. My wife Susan built it in a day. We’ve used it to sort and save mail for 15 years. It’s an epic life hack I’d never have done. My ability to focus means I filter things out. I ignore problems and miss fixes. I’m not sure I’ll change. Perhaps it merits a prayer.

God grant me the serenity
to accept the things I cannot change,
courage to change the things I can,
and wisdom to know the difference.

We also seek wisdom in others. This explains our fascination with the statistics of regret. End of life wishes often include:

I wish I’d taken more risks, touched more lives, stood up to bullies, been a better spouse or parent or child. I should have followed my dreams, worked and worried less, listened more. If only I’d taken better care of myself, chosen meaningful work, had the courage to express my feelings, stayed in touch. I wish I’d let myself be happy.

While they do yield wisdom, last wishes are hard to hear. We are skeptics for good reason. Memory prepares for the future, and that too is the aim of regret. It’s unwise to trust the clarity of rose-colored glasses. The memory of pain and anxiety fades in time, but our desire for integrity grows. When time is short, regret is a way to rectify. I’ve learned my lesson. I’m passing it on to you. I’m a better person now. Don’t make my mistakes. It’s easy to say “I wish I’d stood up to bullies,” but hard to do at the time. There’s wisdom in last wishes but bias and self justification too. Confabulation means we edit memories with no intention to deceive. The truth is elusive. Reflection is hard.

Footnotes
  • 1. Make It Stick by Peter Brown et al. (2014), p. 133.
  • 2. Why You Don’t Know Your Own Mind by Alex Rosenberg (2016).
Categories: Around The Web

Finding UX Research Participants

Smashing Magazine - Tue, 04/03/2018 - 5:30am
Victor Yocco — 2018-04-03T11:30:49+02:00

For UX designers and design teams, research with stakeholders and users is critical. However, accessing research participants isn’t as easy as it sounds. For both professional and amateur researchers finding people to participate in studies can be an elusive task. We often hear about studies and their findings, but we don’t hear as often how researchers recruit study participants.

Researchers can choose from a variety of ways to find participants. Many factors determine the best method to use, including resources such as time and money, the research method you’re using, the type or characteristics of participants you want to recruit, and the accessibility of these types of participants. In this post, I’ll remove some of the mystery and provide guidance to those interested in recruiting participants for qualitative UX studies.

Potential research participants are everywhere, if you know what to look for.

Incentives

You can use incentives to increase the likelihood of participation in any of these methods of recruitment. Use of incentives is usually a personal choice: do you feel incentivized participants provide skewed or biased data? I don’t have any issues with providing incentives. An incentive can be a small token of appreciation (a $5 gift card) or something more substantial ($200 or more, depending on the time needed and the type of participant).

I’ve provided guidance for each method based on my experience with incentives.

Identifying And Interviewing Key Internal Stakeholders

You gain insight when you interview key colleagues, clients, and other relevant stakeholders of a project. Particularly at the beginning of a project. This is a great opportunity to understand everyone’s role, what their vision and hopes are for a project or product, and how you might incorporate their experience into the rest of the project. You can increase buy-in and make people feel like part of the process by including stakeholder interviews in any project. I use the term internal stakeholder broadly to describe individuals who have a vested interest in a product or project who are connected to your organization or the product in some way. Many of these internal stakeholders might also be users of the product you are interviewing them about.

When To Use It

You can always look for opportunities to interview stakeholders and colleagues to learn more. This is especially useful at the beginning of a project. You can learn expectations for a product, background information on what led to the current status of the project, and goals and hopes for the future. Checking in with stakeholders throughout a project will keep them aware of how things are progressing and allow you to get their feedback. I’ve found this is helpful for building trust with stakeholders and making them feel included in the process.

How You Might Do It

You can often arrange interviews with key stakeholders yourself if they are internal to your company. You’ll identify who is relevant to your project, including project team members, and invite them to an interview. You can contact them to schedule a time, or look to schedule using your company’s shared calendar platform (e.g., Outlook or G Suite). You should know ahead of time how long you need to schedule and how you will interview the participant (in-person or remote) so you can share this information with them at scheduling.

Identifying and interviewing stakeholders becomes more complicated when you don’t have direct access to scheduling yourself. If your project team is part of a larger organization, you might need to ask colleagues in other departments to help identify and schedule stakeholders. If you are on a project team with outside partners or have external stakeholders, you will often need someone to facilitate identification and scheduling of interviews. I’ll cover some additional challenges for recruiting stakeholders and others through your clients in the “Identifying Participants Through A Client” section below.

Positive Aspects

Gain insight into roles, backgrounds, and history of stakeholders’ involvement with a product or issue, potentially quick to schedule, low to no cost outside of time, can be done remote or in-person, talking to customer/user-facing stakeholders might provide some insight into what users think of a product.

Negative Aspects

Difficult to identify everyone you want to participate, might include people at high-levels who are hard to reach, scheduling if not doing it yourself, scaling down if resources are limited, does not replace research with users, many stakeholders are too close to their product to be objective.

Incentives

I typically don’t provide an incentive if internal stakeholders are participating during work hours.

Case Study

I worked on a project with a bank that wanted to design an online onboarding experience for new customers. We needed to understand what the current (non-digital) onboarding experience was. We wanted to document available resources to pull into the onboarding experience. Lastly, we needed to build trust with partners who we were going to rely on to champion the experience we created.

We relied on word of mouth to learn who we needed to speak with. First, we interviewed the people we were closest to and asked them who else they considered necessary for us to speak with. We spoke with people in numerous US states, both remotely and in-person. We were able to speak with 30 people in three weeks (this was not a project we were dedicating full time to). Occasionally, we spoke to people who were not relevant to our specific purpose. There were two key reasons we were given names of some folks who weren’t relevant:

  • They were higher up executives with little knowledge of what we were exploring
  • The people referring them didn’t understand/effectively convey what we were trying to accomplish, so they volunteered to participate in something not aligned with their role

We found our most common difficulties were in scheduling and getting people to reply to our initial emails. We were trying to schedule an hour to speak with people who spend most of their days traveling and in meetings. Many of them had personal assistants managing their calendars. Some didn’t have an opening to speak with us for weeks after our initial request. Most people did want to make the time to speak with us. They viewed our project as one with high strategic importance in the long-term health of their company. We also had many people reschedule due to unforeseen conflicts involving client needs arising.

We were able to paint a clearer picture of the bank’s onboarding experience and what resources were available. We were able to understand what (some) of the leadership viewed as the potential future for an onboarding experience with new customers and what their perceptions of shortcomings were for the current onboarding experience. We were able to identify gaps in knowledge that required additional future research and education. We made connections with critical internal advocates who walked away with a better understanding and appreciation of the experience we were creating. We would not have been able to achieve these outcomes through a survey or through other means of recruiting participants. Later, we were able to approach these same stakeholders to have them provide feedback on the designs for the onboarding experience we created.

Identifying Participants Through A Client

Many potential research participants are unavailable to the general public. You will find situations where you don’t have direct access to recruiting relevant participants. This is particularly true if you work for a design consultancy/studio, or as part of a shared services team within a large organization. For example, if your client is a widget manufacturer and their product is a widget warehouse product supply application, you will need to access their staff in order to understand their current pain points and needs. You won’t have an easy time finding relevant participants using the population you have access to. You want to conduct research and usability testing with participants who will become the end user of the application, which again means you’d need to access this population through your client.

When To Use It

In addition to the reasons given in the previous section for recruiting stakeholders: when you have to reach specific populations, need opinions from specific people, or want to make your client-stakeholders feel like part of the process; when you don’t have direct insight or access to critical research participants; when you are looking to build relationships beyond the project team you are working with; or when you want to include a diverse set of individuals covering relevant areas of the product you’re working on.

How To Do It

Work closely with your client or the person you are collaborating with to identify the right people for the project you are on. Your project will dictate the exact specifications of the roles you need; this might include Product Owners, VPs, Business Analysts, and Users. I often provide a script or email language for my clients to use when recruiting participants. I explain the purpose of the research, how we were made aware of the participant (e.g., “Jane from accounting gave me your information”), how long the conversation is expected to take, potential dates of availability, the incentive (if any), and any prep work required.

You should provide your client with a screener clearly stating:

  • How many of each type of participant you want to participate
  • Details you want to know ahead of time (e.g., years using the product, industry)
  • Factors leading to disqualification from the study (e.g., less than one year of experience with the product)

Bonus: Many organizations keep data on their users. Your client might be able to screen their database and provide you research participants. However, when I’ve used this in the past, there are often many permissions required and processes to gain access to customers. This can add a significant amount of time to your project.

I am always clear to my clients that scheduling participants is one of the largest hurdles to a project’s timeline. Working with others’ schedules is complicated. You should make it clear to your clients how to recruit, and the need to start recruiting as soon as possible.

Positive Aspects

You get specific people close to a project or product, you learn about long-term and short-term goals directly from the people you work with, you are able to ask to follow up questions that might inform projects well beyond your current relationship, you learn the history of the product or organization, you can reach relevant people you don’t have direct access to, you gain insight into roles, backgrounds, and history of stakeholders’ and users’ involvement with an issue, you will find talking directly to the users of the product provides context and texture you wouldn’t find from someone without similar knowledge.

Negative Aspects

This can be time-consuming, requires a clear communication of purpose, you might end up talking to people less relevant if your client doesn’t screen effectively, less control over scheduling, lack of control over how information is shared with participants.

Incentives

I typically don’t provide an incentive if they are from the client and participating during work hours. I’d provide an incentive if they have recruited users who are coming in on an off day or outside of work hours. You might also have a larger incentive but only give it to a couple of randomly selected participants.

Case Study

I worked for a team looking at redesigning a digital report for a large mortgage lender. Many other banks and loan providers do business under the umbrella of this company. We needed to identify a specific type of user: one who worked for a bank under the parent company and used the report as part of their daily tasks.

The client wanted us to interview 30 individuals with roles interacting with the report. They identified a handful of these individuals upfront, and then put out a call for participation to identify the remaining individuals. There were numerous layers of communication through relationship managers as well as permissions and disclosures the client needed to handle with each participant.

We were able to complete over 30 remote (over the phone) interviews in the month we were allotted to collect data. Our client arranged and scheduled each interview. Our most common difficulties were similar to those I gave in the previous case study, scheduling and relevancy of participants. We were interviewing people who spend their entire workday running the report and using the data to inform their decisions; busy people with limited flexibility of daytime work hours. We made ourselves available at any time a participant had availability in order to solve this. This created drawbacks in scheduling other meetings unrelated to working on the project.

Some of our participants forwarded the invitation to others they thought should be on the interview as well. We would find this out when more than one person would join the call. We were initially caught off guard when we had a call intended for one person take place with four participants at once. We created a separate multi-participant protocol to account for this occurring on future calls, which it did. I recommend expecting this to happen regardless of who is recruiting your participants. It’s difficult to control what happens, once you send out an invitation to the wild.

We used data from our interviews to understand the current behaviors, frustrations, and needs of users. We also presented later participants with sample designs in order to get feedback on report layout and feature changes. We delivered a redesigned report that exceeded client expectations and became a reference piece in their quest to get further funding for research and design projects.

Paying A Recruitment Firm (When You Have An Accessible Population)

Recruitment firms offer services ranging from participant screening and recruitment, facilities to conduct research, recording your sessions, and much more. You can use a recruitment firm when you are conducting research with populations you believe you can reach through contact with the general public. For example, if you are conducting usability testing on an online banking application, you can expect most people familiar with banking transactions (e.g., making a deposit or paying a bill) to use your application successfully, even if they don’t currently use your bank.

I’ve used a number of firms over the past few years. Most of them offer similar services.

Recruitment firms often provide facilities for interviews or usability testing.

When To Use It

When you don’t have direct access to potential participants; when you want to have a third party screen your participants; when your sample is available through the general public; or when you want to have someone handle recruitment, scheduling, and day-of-research preparation.

How To Do It

You will need to create the screener the recruiter will use. You decide in advance how many of each type of participant you will want. You’ll want to include a number of “floaters” in your recruitment as well. Floaters are people who meet the requirements of the study and are willing to show up for participation in case some of the other participants don’t show up. Floaters are typically compensated at higher levels because they are committing to spend two or three hours sitting around in case they are needed.

You’ll also need to provide the screener as far in advance as the recruiter requires. I’ve found this is two weeks in advance for most studies, and three weeks in advance for more complex studies. All recruitment firms offer participants an incentive, usually cash, to participate in a study. You will have to be OK with the fact that your participants are receiving money to participate. I haven’t found this to be problematic, but you should be prepared to defend why you don’t think this will add any additional bias to your data.

Positive Aspects

Very detailed screening, don’t have to find people, often have a facility you can use, will record audio and video as needed, will recruit additional participants in case some don’t show up.

Negative Aspects

Cost, the time needed in advance if you have a difficult to reach population, participants trying to game the system.

Incentives

Recruitment firms almost always compensate the people they recruit. You will pay the recruitment firm a set fee that they pay to participants.

Case Study

I worked for a team wanting to define the digital needs and behaviors of specific types of Financial Advisors. The client did not want to expose their brand during any of the research, so they did not want to facilitate the recruitment. The client wanted the interviews to pull participants from more than one major city in the US. We worked with a recruitment firm to identify and recruit participants, as well as to conduct the interview sessions.

We worked with the client to create a detailed screener with items meant to refine the population to the specific participants we wanted for the study. The recruitment firm asked for three weeks to find 15 participants for the first city in our study. The usual turnaround when working with the firm was two weeks with less specialized participants. We were also advised to provide a higher incentive, over double what we typically offered, because we would likely be asking participants to step away from work, and because of the perceived value of their time.

We were able to interview 15 participants over the course of two days. We found a few of the participants didn’t actually meet the qualifications we’d screened for. They had manipulated their responses to qualify. Our client was unhappy with this. We were able to use the floaters to replace the participants who didn’t truly qualify. We were also able to get a refund on what we’d paid to recruit the unqualified participants.

Ultimately, we reached our goal of interviewing the right number of participants in the right amount of time, and produced a report on needs and behaviors for our client.

We would not have been able to access this population without the use of the recruitment firm. The client was unwilling to expose their brand and therefore unwilling to identify participants from their contact list. We would have spent more time and money than the project allowed if we were left to recruit participants. We don’t have contact lists or the ability to easily identify specialized populations through our own resources. We still experienced frustration with the lack of initial quality participants the recruitment firm provided. In general, we’ve had positive experiences with recruitment firms, but the more specialized the population, the more likely you will find some duds.

Guerilla Recruiting (When You Want To Find People In The Wild)

You can utilize public spaces to recruit potential study participants. Guerilla research is a term for quick and dirty research conducted with people as they go about their daily tasks (in the wild so to speak). The term is meant to reflect a context in which you are pressed for resources. However, you can benefit from using this method of recruiting even when you have resources for other methods. Sometimes collecting data from people when they are in specific settings is the most appropriate method.

You can find plenty of potential users in the wild.

You should choose the space where you recruit participants for a logical reason. Let’s say you’re designing a smartphone application meant to help people track their workouts at the gym. You would want to recruit participants from that setting, entering or exiting the gym. If you wanted to test out a new form of electronic payment, you’d want to be present in a setting where transactions take place.

When To Use It

When you have little time or budget, when you have access to relevant populations, when you only want to get quick feedback from a few people, when you can spend 20 minutes or less per participant, when you have a product related to a specific physical space (e.g., an art museum tour application).

How To Do It

Find a location, get permission if needed, create a script. I’ve previously written a detailed article on the specifics of recruiting participants in public.

Positive Aspects

Quick execution, the potential for multiple locations if you have the resources, small or large sample sizes, accessing relevant populations, compatible with multiple research methods.

Negative Aspects

Little ability for screening, approaching people takes practice and skill, potentially inclement weather if outside, a lot of standing around.

Incentives

I’d base the incentive on the amount of time and type of activity. For example, I might give a product discount code for something taking a minute or less, or a $5 gift card if you are taking a few minutes of their time.

Case Study

I worked on a project examining the use of technology in library settings. Specifically, we wanted to understand the usability of a system for finding and locating materials within the library. We wanted to work with people who use a library. We needed to test inside of the library because the last part of testing involved physically locating the material.

We sent two researchers to spend multiple days at the library while it was open for patrons. We stood with clipboards at the entrance of the library. We asked patrons if they would spend a few minutes with us participating in our study. We then observed them using the system to search for an item and asked them to locate the material based on where the system told them it should be located.

Our biggest challenge was long periods of time where there were no new patrons coming into the library. We wanted to complete 30 to 40 sessions using three different scenarios. We had budgeted to spend one week onsite to get this many responses. We had to extend our timeline for the following week to reach our goal.

We were able to suggest improvements in the interface, terminology, and an explanation of where materials were located. We would not have had similar findings if we hadn’t been on location at a library and we might not have had as valuable insights if we used people who were not library patrons.

Friends And Family (Low On Time And Budget)

Sometimes, you might have very little opportunity to engage in research. There are many reasons for this: time, budget, or working for a client who refuses to allow research as part of the project plan. The designers I’ve worked with still want to have some type of feedback to shape their thinking. You can still look to gather some meaningful data from those you have closest access to. Perhaps you are on a project where you are working on a product that is relevant to your coworkers or friends you have easy access to. You might ask a few of them to participate in interviews about the product.

Friends and family are the definition of a convenience sample, and should only be used when no other options exist. This is the most biased and least rigorous way of collecting data. However, you can still benefit from insights into experiences you might otherwise not get. You can use friends and family to participate in interviews or usability testing as a means of informing your design. I strongly recommend conducting additional research, using one of the other methods of finding participants, as your design progresses.

When To Use It

As a last resort: when you have no budget and little time, yet you want to know something about the context or users you are designing for; when you have access to relevant people to participate in the study; or when you only need background information on your participants.

How To Do It

Reach out to others you and your team know; you can use social media to distribute the call to participate. Schedule a time to speak, or send an email explaining what you’re asking participants to do (you can also distribute survey links this way).

Positive Aspects

You will get some feedback almost instantly, on a low budget.

Negative Aspects

The most limited pool of participants, possibly less reach, you’re relying on favors, less ability to screen for specific characteristics, and you introduce a larger bias due to familiarity with participants.

Incentives

I would incentivize based on time and budget. A $25 gift card is much less expensive than what you’d pay for a participant from a recruitment firm, but friends and family might find this amount acceptable for up to an hour of time.

Case Study

I was part of a project team responding to a (paid) request for proposals (RFP) from a major vacation industry company. We had two weeks to turn around our response, including design concepts to show our thinking. Most of our team had no experience in using the services from this specific industry. We needed to find out more information to help inform our response. We didn’t have the resources to undertake our typical research process of finding and interviewing stakeholders or representative end users. Instead, we reached out to friends and family members who stated they’d had experience in this vacation activity within the past three years.

We emailed our staff and asked if anyone had friends or family members with this qualification who’d be willing to engage in brief phone conversations about their experience. We conducted interviews with seven people over the course of the next two days. Our designers were able to use the insights we gained to better understand the types of needs users might have while vacationing. Our concepts attempted to address some of the issues our participants said they had experienced while vacationing.

Although we didn’t win the long-term work, our team was able to place among the top candidates. We credited the participation of friends and family in our research as part of what helped our design stand out in a positive way. We were later awarded separate work from the team we presented to for the initial RFP.

The table below provides a summary of key characteristics for each participant recruitment method I’ve covered in this article.

Method               Time   Cost   Ability to pre-screen participants   Ability to access participants
Stakeholder          Slow   Low    Easy                                 Easy
Client Recruits      Slow   Low    Difficult                            Difficult
Recruitment Firm     Slow   High   Easy                                 Varies – harder to reach specific populations
Guerilla Recruiting  Fast   Free   Difficult                            Easy
Friends & Family     Fast   Free   Moderate                             Easy depending on topic

Table 1: Characteristics of common research participant recruitment methods

Conclusion

We need to access users and potential users in order to effectively conduct research. I’ve covered a number of common ways you can find research participants. Each has certain strengths and weaknesses. You’ll want to become familiar with each of these and adapt your approach based on your product, budget, and timeline.

(cc, ra, il)

Are Mobile Pop-Ups Dying? Are They Even Worth Saving?

Smashing Magazine - Mon, 04/02/2018 - 6:35am
Are Mobile Pop-Ups Dying? Are They Even Worth Saving? Suzanna Scacca 2018-04-02T12:35:29+02:00

The pop-up has an interesting (and somewhat risqué) origin. Were you aware of this? The creator of the original pop-up ad, Ethan Zuckerman, explained how it came into being:

Specifically, we came up with it when a major car company freaked out that they’d bought a banner ad on a page that celebrated anal intercourse. I wrote the code to launch the window and run an ad in it. I’m sorry. Our intentions were good.

Basically, the client was dissatisfied with having their ad placed beside an article discussing this less-than-savory subject. Rather than lose the ad revenue or, worse, the client, Zuckerman and his team came up with a solution: The car company’s ad would still run on the website, but this time would pop out into a new window. Thus, the pop-up gave the advertiser an opportunity to share their offer without the risk of sitting next to a competitor or unsuitable blog content.

Origin story aside, does Zuckerman have anything to apologize for? Is the pop-up in its current state such a bad thing for the user experience? With a few simple searches around the web, you might very well begin to believe that.

For instance, a search of the term “pop-up ads” in Answer the Public comes up with this disheartening response:

Users clearly just want pop-ups to go away. (Image: Answer the Public) (View large version)


A search for “I hate pop-ups” on Google results in over 3 million pages and responses like this:

You can bet that a search for 'I love pop-ups' doesn’t have quite the same results. (Image: Google) (View large version)

With the seemingly abundant negative responses to pop-ups, does this mean the pop-up is dead? Google’s 2017 algorithmic update penalizing certain types of mobile pop-ups could very well spell their doom — though I’m not ready to throw in the towel yet.

So, today, I want to see what the research says.

Are mobile pop-ups dying? Or will they simply undergo another adaptation?

If they continue to remain effective, how should designers make use of them, especially in mobile web design?

Finally, are there alternatives web designers can start using now to prepare for Google’s vision of a more mobile-friendly digital world?

Is The Mobile Pop-Up Dead? What The Experts Say

Pop-ups have come a long way since their founding by Zuckerman in the ’90s.

For the most part, pop-ups don’t force users out of the browser, nor do they surprise them with a desktop cluttered with ads once the browser is closed altogether. It’s a neater and more controlled experience overall. And we’ve seen them in a variety of forms, too:

  • full-page interstitials,
  • partial modal pop-ups,
  • top- or bottom-aligned bars,
  • pop-out modules tucked in the corner of the page,
  • push notifications,
  • inline banners found within the actual content of the page.

Pop-ups can also now appear at various points throughout the journey, thanks in part to big data and AI:

  • appearing as soon as the web page loads;
  • appearing once the user scrolls down the page;
  • appearing once the user moves the cursor to the close button in the browser tab;
  • ever-present, sitting off to the side, waiting for engagement.

But this type of pop-up technology doesn’t work all that well with the mobile experience, does it?

Take Macy’s website. Upon entering it, you’ll encounter this pop-up ad within a few seconds:

Macy’s displays this offer within a few seconds of your arrival on the website. (Image: Macy’s) (View large version)

When you open the website on mobile, however, you won’t find any trace of that pop-up. Instead, you’ll see a small bar built into the space just below the navigation bar:

Macy’s ditches the pop-up on mobile and integrates it in the content. (Image: Macy’s) (View large version)

The offer is similar, but with no request for an email address and no pop-up functionality. This is likely because of the change to Google’s algorithm in 2017.

Which brings me to what the experts say about pop-ups. While most are focused on the life expectancy of pop-ups in general, Google has been leading the charge against mobile pop-ups (sort of) for almost a year now:

Google

Let’s start by looking at Google’s announcement regarding mobile-first indexing. This originally came to light in 2016, but it was just talk at the time. It is now over a year later, and Google has begun rolling out this indexing initiative.

Basically, what it does is change how Google’s bots crawl and index a website. Google no longer views the desktop version of a website as the primary experience for users. Going forward, the mobile website will be the primary version indexed.

With Google users increasingly starting on a mobile device instead of desktop, this move makes sense. It’s also why the algorithm change in 2017 that penalizes certain types of mobile pop-ups was another logical move in Google’s mission to make the web a more mobile-friendly place.

Google provides examples of the kinds of interstitial pop-ups to avoid. (Image: Google) (View large version)

In laying out the details of this change, Google explained that mobile pop-ups deemed disruptive to the user experience would result in ranking penalties for those websites. These kinds of pop-ups fall into three categories:

  • interstitial pop-ups that cover the entire screen upon entering the website and that require users to “X” out in order see the actual website;
  • pop-ups that cover the entire screen upon entering the website but that require users to know that scrolling past them is the way to bypass the pop-up and see the main content;
  • any pop-up that hides the majority of content on the page behind it.

In other words, Google doesn’t believe that traditional pop-ups have any place on mobile because the limited screen space would make the experience too disruptive. That’s likely the reason why you’re seeing popular websites like Macy’s do away with mobile pop-ups altogether. Though there are some traditional modal pop-ups Google doesn’t mind, it’s probably safer to avoid modals and interstitials on mobile in order to avoid the chance of a penalty.

As you can see, pop-ups for legal requirements are still OK, although most of the time you’re going to see publishers relegate them to small bars, as MailChimp has done here:

MailChimp adheres to Google’s new guidelines in providing a cookies disclaimer. (Image: MailChimp) (View large version)

Nielsen Norman Group

In 2017, Nielsen Norman Group conducted a survey on the most hated advertising techniques. This study encompassed all kinds of website advertising (including video ads, on-page banner ads, etc.), but there was a special mention of pop-up ads that makes the findings relevant here.

Out of a total score of 7, with 1 being “strongly like” and 7 being “strongly dislike,” respondents gave mobile ads a score of 5.45. Desktop ads weren’t far behind, with 5.09, although the survey results did consistently show that mobile ads were more despised than their desktop counterparts.

Users might despise ads, in general, but they really don’t like them on mobile. (Image: Nielsen Norman Group) (View large version)

Drilling down, Nielsen Norman Group also found modals (i.e. partially covering pop-ups) to be the most hated type of ad that mobile users encounter:

Oof! Users really don’t like modal pop-ups, do they? (Image: Nielsen Norman Group) (View large version)

Why does Nielsen Norman Group believe this to be the case? Well, there’s the aforementioned real estate issue. Mobile phones just don’t have enough room to accommodate modal pop-ups without overwhelming users. According to the authors, though, there may be another reason:

Additionally, the context of mobile use tends to be “on-the-go” — that is, users are more likely to be distracted by competing stimuli, and the need for efficiency is drastically increased.

Having reviewed Nielsen Norman Group’s research, I do agree that many users will very likely be put off upon encountering a pop-up on a mobile website. That being said, plenty of research provides a valid counter-argument.

While users might be likely to describe their annoyance with pop-ups as high when surveyed about it, some evidence suggests it is short-lived for many of them. As we’ll see in a moment, pop-ups are actually quite effective in driving conversions.

Sumo

Sumo declared in 2018 that pop-ups aren’t dead. While that opinion might be seen as biased, considering it’s in the business of creating and selling list-builder tools such as pop-ups, welcome mats and smart bars, it does have evidence to suggest that pop-ups are still worthwhile if generating leads and conversions is your top priority.

Sumo used data from nearly 2 billion customer pop-ups to make this argument. Sadly, the data doesn’t directly break out anything related to mobile pop-ups and their conversion rates, but I found this particular statistic to be relevant:

Of the top 10% of pop-ups, only 8% had pop-ups appear in the 0-4 second mark. And the majority of those 8% were on pages where the pop-up was expected to appear quickly — as in sending someone to a download page.

In other words, users don’t want to be rushed into seeing your pop-ups — which is one of the major points Google is trying to make with its algorithm update. (Tests conducted by Crazy Egg mirror this point about delaying pop-ups.) Mobile websites that jump the gun and present visitors with a pop-up message before giving them an opportunity to scroll through the website are just creating an unnecessary disruption.

Another point that Sumo stresses is that pop-ups need to be valuable and presented within context. This is especially important on mobile, where you can’t afford to test visitors’ patience with a video pop-up completely unrelated to the blog post they were trying to read beneath it.

In other words, always think about how a pop-up will add value to the experience that you are (partially) blocking.

Justinmind

Justinmind calls modal pop-ups “complicated,” and for good reason. Even though there was nearly an even split between how users felt about pop-ups (21% said they liked them, while 23% said they didn’t), the research shows that pop-ups have proven to be quite helpful in the conversion process.

That being said, what a lot of this comes down to is how a website uses the pop-up. The University of Alberta, for example, was able to get 12% to 15% more email subscribers by using a pop-up on its website. On the other hand, you have Search Engine Land claiming that the main reason people block websites is because of pop-up ads.

Another thing to think about, according to Justinmind, is the mobile UI. It suggests that even if you do everything else right — deliver a valuable, well-timed offer that takes up only an unobtrusive amount of space — there’s still the thumb zone to think about.

While it’s great that designers have built the ever-trusty “X” button into the top-right corner of pop-ups, that’s not the easiest stretch for the mobile user’s thumb. If you want to design ads for the mobile UX, consider another placement of that exit button.

30 Lines

Digital marketing agency 30 Lines claims:

Our clients who run targeted lead capture pop-ups on their websites typically convert anywhere from 75-250% more leads from their sites than clients who don’t.

Unlike other experts who have shied away from the subject of mobile pop-ups (because it might end in them admitting defeat), 30 Lines took on the topic head on. And this was the point they sought to make:

  • Google is not saying that mobile pop-ups are all bad.
  • Google, in fact, does want you to generate more conversions on your website — and it acknowledges that pop-ups might play a role in that.
  • It’s simply up to you to determine what will lead to the most unobtrusive experience for your visitors.

30 Lines gives a lot of great tips on how to adhere to Google’s principles without doing away with mobile pop-ups altogether. As we move on to discuss ways in which designers can use mobile pop-ups in the future, I’ll be sure to include them for consideration.

What Do Web Designers Do With Mobile Pop-Ups Now?

I’m not going to lie: This is a tough one, because while it would be so easy to just kill pop-ups on mobile websites altogether — and many consumers would be thrilled with that decision — they do still have incredible value in generating conversions. So, what do we do?

Clearly, this is a complicated matter, because you could equally argue both sides and are left choosing between two evils:

  • Do you want to run mobile pop-ups in the hope of gaining more subscribers (especially considering that mobile users tend to have lower conversion rates to begin with)?
  • Or do you want to put more resources into writing high-converting landing pages and on-page banners to sell and convert mobile visitors?

Do you even know which option mobile visitors would be more receptive to?

Below are questions to think about as you evaluate whether pop-ups make sense for your mobile website now and in the future.

Is It Necessary?

Ask yourself whether a particular message even needs to be in a pop-up format. If it could work just as well integrated in a page, then you might want to skip it entirely (as in the Macy’s example from earlier).

Fast Company uses pop-ups on its mobile website (shown below), but it also integrates its contact forms into on-page banners, like this one:

Fast Company inserts a subscriber form inline with the content. (Image: Fast Company) (View large version)

Different Designs

Create different pop-up designs for desktop and for mobile. So long as the message and offer are still relevant and valuable to mobile users, there’s no reason not to completely start from the ground up when building mobile pop-ups. Just be sure to think about the design, message and trigger rules when reshaping desktop pop-ups for mobile.
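To make the split concrete, here is a minimal TypeScript sketch of how a page might branch between the two designs at runtime. The element IDs ("#promo-modal", "#promo-bar") and the 767px breakpoint are assumptions for illustration, not anything prescribed by Google or the sites mentioned here.

// Show a roomier modal on larger screens and a compact bottom bar on small ones.
// "#promo-modal" and "#promo-bar" are hypothetical elements already present in the markup.
const smallScreen = window.matchMedia('(max-width: 767px)');

function showPromo(): void {
  const modal = document.querySelector<HTMLElement>('#promo-modal');
  const bar = document.querySelector<HTMLElement>('#promo-bar');
  if (!modal || !bar) {
    return;
  }
  modal.hidden = smallScreen.matches;  // hide the modal on small screens
  bar.hidden = !smallScreen.matches;   // show the bar only on small screens
}

showPromo();
// Re-evaluate if the viewport crosses the breakpoint (e.g. device rotation).
smallScreen.addEventListener('change', showPromo);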

Gap is a good example of this. You can see how its offer is displayed on desktop as an on-page banner with expanded details:

This is how Gap displays this offer on desktop.

Then, on mobile, it is shown as a bottom bar element:

This is how Gap displays this offer on mobile. (Image: Gap) (View large version)

Go Small

Keep pop-ups small on mobile. In general, it’s recommended they take up no more than 15% of the screen. This means staying away from full-page interstitials, even if you’re trying to sneak them in on a second or third page.
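As a rough development-time check of that 15% guideline, you could measure how much of the viewport a rendered pop-up actually covers. This is only a sketch in TypeScript; the "#promo-bar" selector is an assumption.

// Warn during development if a pop-up covers more than roughly 15% of the viewport.
function checkPopupFootprint(selector: string, maxShare = 0.15): void {
  const el = document.querySelector<HTMLElement>(selector);
  if (!el) {
    return;
  }
  const rect = el.getBoundingClientRect();
  const share = (rect.width * rect.height) / (window.innerWidth * window.innerHeight);
  if (share > maxShare) {
    console.warn(`Pop-up covers ${(share * 100).toFixed(1)}% of the viewport; consider shrinking it.`);
  }
}

checkPopupFootprint('#promo-bar');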

Inc has a small and succinct message for mobile users:

Inc keeps its pop-up message bold but brief. (Image: Inc) (View large version)

Target Mobile Context

Use mobile-targeted messaging. This means being very light on text and not including images or icons that force the pop-up to be larger than it needs to be. You can also create targeted messages for consumers who use your website for research while out and about or even while shopping in-store.

Stick To The Bottom

To play it safe, display pop-ups only at the very bottom of a page. This could mean one of two things. First, you could align the pop-up to the bottom of the mobile screen (this could be a traditional modal pop-up or a hello bar). Here’s an example of how Fast Company does it:

Fast Company doesn’t shy away from modals with this mobile pop-up example. (Image: Fast Company) (View large version)

The second option is to open the pop-up once the visitor has scrolled all the way to the bottom of the web page.
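One way to implement that second option is to watch a sentinel element at the end of the content with an IntersectionObserver. A minimal sketch in TypeScript; the "#page-end" sentinel and "#promo-bar" element are assumptions.

// Reveal the pop-up only once the reader has reached the end of the page.
const sentinel = document.querySelector('#page-end'); // e.g. an empty element placed after the content
const bottomBar = document.querySelector<HTMLElement>('#promo-bar');

if (sentinel && bottomBar && 'IntersectionObserver' in window) {
  const observer = new IntersectionObserver((entries) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      bottomBar.hidden = false;  // show the bottom bar
      observer.disconnect();     // trigger at most once per page view
    }
  });
  observer.observe(sentinel);
}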

Delay

Try not to show a pop-up on the first page a visitor sees. By this, I mean the first page that a user is directed to by search or a referral website (which is not necessarily the home page). Also, don’t forget about timing. In general, try not to load a pop-up within the first four seconds of a visitor arriving on a page.
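A simple way to respect both rules is to count page views in sessionStorage and to delay the pop-up with a timer. The storage key, the one-page threshold and the 4,000 ms delay below are assumptions you would tune to your own data.

// Skip the pop-up on the first page of a visit, then delay it by four seconds.
const PAGES_KEY = 'pagesViewed';
const pagesViewed = Number(sessionStorage.getItem(PAGES_KEY) ?? '0') + 1;
sessionStorage.setItem(PAGES_KEY, String(pagesViewed));

if (pagesViewed > 1) {
  window.setTimeout(() => {
    const bar = document.querySelector<HTMLElement>('#promo-bar');
    if (bar) {
      bar.hidden = false;  // shown only after the visitor has settled in
    }
  }, 4000);
}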

Intuit does this really well:

This Intuit pop-up only appears after you’ve navigated inwards on the website. (Image: Intuit) (View large version)

Visit the first page of the website and you won’t encounter any kind of pop-up messaging. Click through to learn more about pricing, and then you’ll see a relevant and value-adding message pop up at the bottom of the screen.

Easy Exit

If you still want to use a modal pop-up design, make sure it’s easy to exit out of. This means putting an “X” in the bottom-right corner or an exit message beneath the CTA.
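If you keep a modal, the dismiss control can be made large and placed at the bottom edge, where thumbs naturally rest. A hypothetical TypeScript sketch; the element IDs and the 44px minimum tap target are assumptions.

// Move the dismiss action into the thumb zone instead of a tiny top-right "X".
const promoModal = document.querySelector<HTMLElement>('#promo-modal');
const dismiss = document.querySelector<HTMLButtonElement>('#promo-dismiss');

if (promoModal && dismiss) {
  dismiss.style.display = 'block';
  dismiss.style.width = '100%';
  dismiss.style.minHeight = '44px';  // a commonly cited minimum touch-target size
  dismiss.addEventListener('click', () => {
    promoModal.hidden = true;
  });
}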

Or you could stick with the bottom bar design that many mobile web designers seem to favor right now, like Zumiez:

A bottom-aligned hello bar pop-up from Zumiez. (Image: Zumiez) (View large version)

The New Yorker also does this:

A bottom-aligned hello bar pop-up from The New Yorker. (Image: The New Yorker) (View large version)

Make It Optional

Create a special CTA or other interactive element on your website that, only when clicked, opens a pop-up. Basically, let mobile users decide whether and when they want to interrupt the on-site experience.
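In code, this can be as simple as binding the pop-up to a click on an on-page call to action rather than to a page-load or scroll event. The IDs below are assumptions for illustration.

// The pop-up opens only when the visitor explicitly asks for the offer.
const offerCta = document.querySelector<HTMLButtonElement>('#offer-cta');
const offerModal = document.querySelector<HTMLElement>('#promo-modal');

if (offerCta && offerModal) {
  offerCta.addEventListener('click', () => {
    offerModal.hidden = false;  // shown only on an explicit user action
  });
}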

Basic Outfitters does this after you’ve added your first item to the cart:

The Basic Outfitters pop-up shows only after the user actively triggers it on the website. (Image: Basic-Outfitters) (View large version)

Consider Alternatives

If you’re nervous about designing a traditional pop-up on your website, fear not. There are alternatives.

Consider push notifications and SMS notifications. They allow you to reach mobile users without intruding on the browser or the mobile device experience, and only with their express permission.
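For web push, browsers enforce exactly this: permission has to be requested, and it is best requested only after an explicit user gesture. A minimal sketch of that request, assuming a hypothetical "#notify-me" opt-in button; a full push setup (service worker, push subscription) and SMS opt-in are out of scope here.

// Ask for notification permission only after the visitor taps an opt-in button.
const notifyButton = document.querySelector<HTMLButtonElement>('#notify-me');

if (notifyButton && 'Notification' in window) {
  notifyButton.addEventListener('click', async () => {
    const permission = await Notification.requestPermission();
    if (permission === 'granted') {
      new Notification('Thanks! We will keep you posted about new offers.');
    }
  });
}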

Gated content is another way to collect leads on a mobile website without having to force users into a pop-up to submit their contact information.

Track Preference

You will more likely annoy a mobile user with a repeat pop-up ad than a desktop user. So, if you can use cookies to prevent mobile visitors from being interrupted by the same pop-up message after they’ve dismissed it, that would be ideal.
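Here is a small sketch of that idea, using localStorage instead of a cookie (a cookie with an expiry date would work the same way); the element IDs and storage key are assumptions.

// Remember a dismissal so the same pop-up does not return on later pages.
const DISMISSED_KEY = 'promoDismissed';
const promoBar = document.querySelector<HTMLElement>('#promo-bar');
const promoClose = document.querySelector<HTMLButtonElement>('#promo-close');

if (promoBar && promoClose) {
  if (localStorage.getItem(DISMISSED_KEY) === '1') {
    promoBar.hidden = true;  // dismissed on a previous page
  }
  promoClose.addEventListener('click', () => {
    promoBar.hidden = true;
    localStorage.setItem(DISMISSED_KEY, '1');
  });
}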

Remember: You’re not just playing by Google’s rules here. If mobile visitor numbers drop off and Google spots a change in your bounce rate and time-on-site statistics, then your website’s rank will suffer as a result, since Google now prioritizes the mobile website experience over desktop.

The Mobile Pop-Up Doesn’t Need To Die

For now, the best plan is to heed the experts. And what they’re saying is that mobile pop-ups aren’t dying. In fact, they can still play a vital role in signing up more email subscribers and converting more customers from mobile devices. But, as with anything else, you need to play by Google’s rules and always think about how your decisions will affect your users’ experience.

So, use your mobile pop-ups wisely.

(da, ra, yk, il, al)

Designing For The Tactile Experience

Smashing Magazine - Mon, 04/02/2018 - 5:00am
Designing For The Tactile Experience Lucia Kolesárová 2018-04-02T11:00:20+02:00

The focus of digital technology in the last few decades has neglected human hands and bodies to a large extent. Our thoughts and feelings are strongly connected to the gestures, postures, and actions we perform. I aim to push you — as a designer — to think outside of the zone of screens.

I’d also like to ask you to start thinking critically about current technologies; touch and motor skills need to be taken into consideration when designing your very next product. Allow me to explain why.

Less Haptic Stimuli, Less Experience

According to Finnish neurophysiologist Matti Bergström, quoted in a lecture by Sofia Svanteson:

“The density of nerve endings in our fingertips is enormous. Their discrimination is almost as good as that of our eyes. If we don’t use our fingers during childhood or youth, we become “fingerblind,” this rich network of nerves is impoverished — which represents a huge loss to the brain and thwarts the individual's development as a whole. Such damage may be likened to blindness itself. Perhaps worse, while a blind person may simply not be able to find this or that object, the fingerblind cannot understand its inner meaning and value”.

Hold, Push, Swipe, Tap

If you end up as a typical white-collar worker, you’ll probably spend a significant part of your day looking at your screen, without any possibility of physically touching the things you work with. How much time do you spend on your computer at work? How much time do you spend on your phone afterwards? What about during your spare time: What do you do during those hours? Hold, push, swipe, tap.

The word “touch” is in the word “touchscreen,” but tapping and swiping a cold, flat piece of matter basically neglects the sense of touch. During the long hours of manipulating touchscreens, you experience only a fraction of what your sense of touch allows.


What actions do you physically perform with your body? Perhaps you are not a very active person. What posture are you usually in? What kind of impact can sitting over the screen of a mobile phone or computer all day have on a person? Pablo Briñol, Richard E. Petty and Benjamin Wagner claim in their research article that your body posture can shape your mind.

“… We argue that any postures associated with confidence (e.g., pushing one’s chest out) should magnify the effect of anything that is currently available in people’s minds relative to postures associated with doubt (e.g., slouching forward with one’s back curved).”

As the theory of embodied cognition states, your body affects your behavior.

Tactile Feedback

Many tangible things are disappearing from our surroundings and reappearing in digital form. They are improved upon and enriched with new functions that would not be possible in the material world. A few examples are maps, calendars, notebooks and pens, printed photos, music players, calculators and compasses. However, with the loss of their material form comes also the loss of the sensations and experiences that only physical interaction with objects can give us. The “… disembodied brain could not experience the world in the same ways that we do, because our experience of the world is intimately tied to the ways in which we act in it,” writes Paul Dourish in his book Where the Action Is.

Fingers are able to sense the progress of a book. (Image: on Unsplash) (View large version)

Different Activities, Different Movements

Consider some actions we perform in the physical world:

I pay for a ticket. I pull my wallet out of my bag. I open it and take out banknotes. While holding the notes in one hand, I draw some coins with my other hand. I give the money to the salesperson.

I confess love. I sit or stand opposite to the person. I look into their eyes. I blush. I say, “You know, I love you.” I am kissed.

I look for a recipe. I choose a cookbook from the shelf. I take the book. I flip a few pages, forwards, backwards. I find a recipe.

Whereas in the world of screens:

I pay for a ticket. I fill text fields. I hit a button.

I confess love. I fill a text field. I hit a button.

I look for a recipe. I fill a text field. I hit a button. (Image: Jeremy Paige on Unsplash) (View large version)

The environment surrounding us, the activities we perform and the things we come into contact with help us to perceive situations more intensely and meaningfully. Phenomenologists such as Husserl, Schutz, Heidegger and Merleau-Ponty have already explored the relationship between embodied action and meaning. As Paul Dourish writes in the above-mentioned book: “For them, the source of meaning (and meaningfulness) is not a collection of abstract, idealized entities; instead, it is to be found in the world in which we act, and which acts upon us. This world is already filled with meaning. Its meaning is to be found in the way in which it reveals itself to us as being available for our actions. It is only through those actions, and the possibility for actions that the world affords us, that we can come to find the world, in both its physical and social manifestations, meaningful.”

Because so many different activities are being carried out in the same manner in the digital world, their value is becoming less clear. I believe that haptic sense has something to do, for instance, with the perception of paying by “real” or by virtual currency — that feeling of something tangible in your hand that you are giving to someone else, compared to just tapping a flat surface to confirm that the number on the screen will be deducted from your account.

Try a simple task. Suppose you want to remember something. Write it down and see how it affects your brain. Professor Anne Mangen, who studies the impact of digital technologies on reading and writing, has shown that writing helps your brain process information and remember it much better. Physical sensorimotor activities create a stronger connection to performed tasks. That’s probably one of the reasons why paper planners are seeing a rise in sales. Sales of paper books are also rising. Giving a digital book as a gift is much less impressive than giving its paper equivalent. This points to an interesting phenomenon: physical presents just “feel” much better. There is also a trend of returning to “tangible music”, which has caused an increase in vinyl sales. But are these returns to “old forms” enough? Or can we also build on the opportunities that current technology offers?

Designing For Touch

How can we create more material experiences in design? What are some tangible solutions, solutions that solve problems through our senses, through our contact with the physical, material world, solutions that let us act in our surroundings as much as possible without using our smartphones or any other flat screens? There are many possible ways to get back to the physical experience.

1. Interact With Digital Technology in a More Human Way.

Make digital information tangible. Interact with it by hand gestures and movements in the material world.

One of the most famous pioneering projects with that aim was SixthSense. Back in 2009, it linked digital devices and our interactions with the physical world. This kind of wearable technology consisted of a camera, a projector hanging on the user’s neck, and color markers stuck to their fingers. The user could dial a phone number using projected keys on their palm, while the camera would record their finger movements. They could read newspapers showing live video news, or draw a circle on their wrist to check the time. The whole principle was to project visuals into the world surrounding the user. With current technology, however, that principle has transformed. The outside world is no longer altered by some projection. The only altered thing is our vision. It’s enhanced by a new layer of augmented reality (AR), by special kinds of glasses, and there is a completely new reality created in virtual reality (VR) headsets.

Using a palm to dial a phone number. (Image: pranavmistry.com) (View large version)

A more modern example is Magic Leap, a secretive project that connects virtual reality and the “real” world in a mixed reality. You can see objects in your surroundings that are not part of your reality — for example, jellyfish flying in your room. This device is exceptional because it also enables hand tracking. You are able to shoot robots falling from your ceiling while holding a real plastic gun in your hand and controlling the interface with hand gestures. This is a big step forward from the mostly sequential interactions that screen interfaces allow. We are getting there.

Magic Leap connects ‘real’ and virtual. (Image: magic-leap.reality.news) (View large version)

Mixed, VR and AR projects could be the future. The good thing is that these technologies are built with a huge emphasis on human behavior, psychology, physics laws and ergonomics. The experience is lived, not just observed on a screen. They are not tearing you away from the natural (or virtual) environment and sticking you in a chair to stare into a flat square. You get involved in the action, immersed in doing things and feeling emotions. All of these technologies bring you experiences. Whether they’re real or not, you will remember them as things that happened to you.

Another advantage is that they make your body move — for example, by replacing your physical screens with virtual ones. They allow you to do your work practically everywhere, possibly on the move as well. Whether you are 3D painting with a virtual brush, throwing squares (a VR game) or organizing your desktop, you are using your fingers, your hands, your wrists and whole body movements. Technology is finally adapting to you.

2. Involve More Sensory Experiences In Your Design.

If sight sensors are already occupied by some functionality, don’t add more visual stimuli. Better to include some haptics, hearing or even olfactory stimuli — thus, creating so-called multi-sensorial design. As noted in their book Product Experience, Hendrik N. J. Schifferstein and Paul Hekkert state, “By now, many different studies have suggested that the greater the number of sensory modalities that are stimulated at any one time, the richer our experiences will be.”
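On the web itself, one small way to pair a visual event with a second modality is the Vibration API, which is supported in most Chromium-based mobile browsers but not in iOS Safari. This is only a sketch of the idea in TypeScript; the "#add-to-cart" button is an assumption.

// Add a brief haptic pulse to a visual confirmation where vibration is supported.
const addToCart = document.querySelector<HTMLButtonElement>('#add-to-cart');

if (addToCart) {
  addToCart.addEventListener('click', () => {
    if ('vibrate' in navigator) {
      navigator.vibrate(30);  // a short 30 ms pulse as tactile feedback
    }
  });
}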

Let’s discuss the topic of virtual reality further. Even though it doesn’t feel like virtual could satisfy the need for material or tangible experience, VR is a perfect example of connecting several senses together, not only sight and hearing, but also touch.

There are a couple of different ways to bring touch into VR:

  • The classic primitive controllers
    They give you the sense of being present, but much like a mouse, each controller is a single object with a single point of interaction. There are two of them, one for each hand, yet the full potential of your hands and ten fingers is still not being used.
Classic VR controllers. (Image credit) (View large version)
  • Haptic gloves
    These enable you to feel objects from VR in your hands. The sensors translate touch sensations into vibrations that enable you to perceive the shape of an apple or to experience rain. You can even feel the release of a virtual arrow. Obviously, these sensations don’t match real ones in their fidelity. But, like virtual reality as a whole, they pose a question: What does it mean to be real? What makes for a real touch experience — a real touched object made of realistic, tangible material or a real feeling transmitted by neurons to your brain? Is it enough to fool your brain, without even using your hands? This is perhaps the moment to ask: Are we just brains, or whole bodies?
Haptic VR controllers still look a bit utopian. (Image: dextarobotics.com) (View large version)
  • Combining haptic gloves with material objects
    Various games layer VR over a physical playground. One of them is The Void. As a player, you wear a vest with 22 haptic patches that vibrate and shake you at the right times. The idea is that you are playing the game in VR but all of your surroundings are tangible, so instead of seeing four empty walls, you see a large territory around you. A big stone would be perceived as a mountain, and a normal door could be transformed into a magic one. But opening the magic one would feel real because, in the end, it is. All such little gimmicks with sight, touch, hearing and even smell involve more sensory experience and make VR even more immersive.
The Void game (Image: thevoid.com) (View large version)

3. When Designing For The Screen, Think About How the Task Could Be Performed In The Physical World Instead.

How would people act in their most “natural” way?

Time tracking is not always pleasant, maybe because constantly checking the time or opening and closing your time-tracking app makes you feel like a robot. ZEI is a great example of tangible design. The developers found a way to get robots to do our job in the background so that we can act more like humans. This time-tracking device is an octahedron (eight sides). Each face is assigned one activity, so you can easily track time spent on different projects just by flipping it. It presents a very natural way to switch from task to task and to turn your attention from one thing to another.

ZEI moves screen tasks to tangible reality. (Image: timeular.com) (View large version)

When you’re designing a product, think of how users would perform without it. How do people track their work? Maybe they tend to take notes. How did people complete these tasks in the past? Did we stand up from our chair and stretch a bit? What if every accomplished task were followed by a small exercise, or at least standing up, to support users’ health? Many ridiculous ideas will probably appear in that kind of process, but you can get much closer to designing products for humans with such a human approach.

4. Transfer Your Digital Product To Tangible Experiences.

If you already have a product, program or app designed for the screen, think of whether there is some possibility to convert it to the physical world.

Computers made it possible to compose music by using various musical instruments that exist only in the digital world. But the dynamics of physical contact with the instrument cannot be replaced by using a computer mouse. Physically pushing keys on a piano or hitting drums with drumsticks, fast or softly, using mostly just your fingers and wrists, or blasting drums with your forearms and whole arms — these are experiences that seem to be non-transferable to computer programs.

Ableton, the well-known producer of software for music production, decided to create its own hardware, Ableton Push. The second edition of Ableton Push “puts everything you need to make music in one place — at your fingertips.” Push is basically a table with pads and controls that enable you to play drums or pitched instruments on one device. It offers a range of ways to play and manipulate samples, allowing you to capture ideas quickly. No technology stands in the way, and you can physically interact with music once again.

Ableton Push (Image: ableton.com) (View large version)

5. Think the Other Way Around: How Can You Upgrade Things That Already Exist With Some Digital Experience?

Classic toys, board games, paper books and notebooks, musical instruments — all of these have served us for decades and are beautiful, efficient and playful. However, many of them are disappearing because they are no longer attractive enough and are unable to compete with the digital experience. Sustain them. Upgrade them with some digital value and experience.

Playing with wooden toys is one of the best experiences for children. Their material and shape develop children’s building capacity and hand muscles. Their simplicity stimulates children’s imagination and creativity. We should not give up these benefits for a flat screen. Studio deFORM’s project KOSKI, a building block game, “connects the physical world and the digital gaming world together.” Physical, wooden toy blocks are mirrored in an iPad app and enhanced with imaginative worlds, characters and stories on the screen. The player physically alters the projected world on screen by manipulating the blocks in real time.

While we can argue about whether this game develops a child’s imagination, I find it to be a good alternative to current tablet games.

KOSKI (Image: koskigame.com) (View large version)

We’re already used to old-fashioned things. There’s no need to teach users new design patterns or ways of communication with hi-tech novelties. Everyone knows how to use a paper notebook. But often when I want to write with a pen on paper, I have to think twice about it. I know that, in the end, it will have to be rewritten in some digital form so that it can be easily shared and stored. This issue was tackled by Wacom with its notebook digitizer. Its solution was the SmartPad, which converts handwriting into digital files. It also offers the possibility to combine pages of notes and to edit them.

Even existing material can take on new qualities when enriched by the digital experience. Mixing together unexpected things can create very non-traditional objects. Consider FabricKeyboard, made by MIT Media Lab’s Responsive Environments Lab. As Meg Miller explains:

"This fabric made from textile sensors allows you to play the keys like one would on a normal keyboard, or you can create the sounds by manipulating the fabric itself — by pressing, pulling, twisting and even by waving your hands above the material. The e-fabric responds to touch, pressure, stretch, proximity and electric field." FabricKeyboard (Image: Irmandy Wicaksono on MIT Media Lab) (View large version)

Finally, let’s consider one more reason why we should think carefully before letting traditional objects vanish away. They’ve been created from years of experience. They’ve evolved into their current form, one that fits their purpose very well. Think of how usable, convenient and pleasurable many printed books are. The rules of layout and typography from this established medium have been transferred very quickly to ebooks and web design, which are struggling to meet the standards of their physical counterparts. Think also of the non-transferable qualities: the tactile sense of progress, their collectibility, the sensuous aspects.

Some old-school materials are worth keeping, and their development should continue even in the digital era.

Tangible Future

As Andrea Resmini and Luca Rosati write in their book Pervasive Information Architecture:

"We are swinging like a pendulum. Fifty years ago we were rooted in material world. When you wanted to know something, you asked some person or read a book. Then desktop computers became our interface of choice to access information, and now we are swinging back to the real world, but we are bringing computers along. Information is becoming pervasive."

One way to bring qualities of the real world to our daily used technologies is to learn from material things. Another way is to suss out the attributes we are missing in our interaction with screens. Let your senses lead you, and think about a solution that can replace a current discomfort. The classic human-centered approach still works. However, as advanced technologies improve and extend into multiple areas of our lives, we need to think more carefully about what it means to be human. Our bodies and senses are definitely a part of it.

(cc, ra, al, yk, il)

A Journey Through The World Of Music (April 2018 Desktop Wallpapers)

Smashing Magazine - Sat, 03/31/2018 - 3:30am
A Journey Through The World Of Music (April 2018 Desktop Wallpapers) Cosima Mielke 2018-03-31T09:30:00+02:00

A song can wake memories, give you an energy boost, or inspire you. It can help overcome a creative trough or make a beautiful moment even more beautiful. To pay tribute to the music you love, we announced the “Illustrate your favorite song” wallpapers challenge a few weeks ago. And, well, today, we’re happy to present the lucky winner.

The idea behind the challenge was to design a desktop wallpaper for April 2018 in which you tell us a little story about your favorite song. What is the song about? What images arise in your head when you listen to it? How does it make you feel? Is it bold and full of energy or calm and relaxing? Artists and designers from across the globe took on the challenge, and, well, the results are a colorful journey through the world of music — earworms guaranteed. So without further ado, let’s dive right in.

Bunnies ahead!

The fluffy little fellows are an unmistakable sign that Easter is here, and, well, we’ve got a wallpapers post dedicated entirely to them and their companions in crime, the small, yellow chicks. Happy Easter! →

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendar series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?

And The Winner Is … Nikes On My Feet

PopArt Web Design from Serbia designed a wallpaper based on the song ‘Nikes on My Feet’ by Mac Miller. A great reminder to put on your shoes, get outside, and embrace spring.

“We got inspired by the song ‘Nikes on My Feet’ by Mac Miller, which was perfect for the upcoming month of April and the warm weather that spring brings. As Mac Miller said, ‘All I really need is some shoes on my feet…’”

Download the wallpaper:

Congratulations, dear PopArt Web Design team! You won a ticket to one of our upcoming SmashingConfs. We’re already looking forward to meeting you there. In San Francisco or Toronto, maybe?

More Submissions

A big Thank You to everyone who participated. Keep up the brilliant work!

Wildest Dreams

“We love the art direction, story and overall cinematography of the ‘Wildest Dreams’ music video by Taylor Swift. It inspired us to create this illustration. Hope it will look good on your desktops.” — Designed by Kasra Design from Malaysia.

Yellow Submarine

“The Beatles — ‘Yellow Submarine’: This song is fun and at the same time there is a lot of interesting text that changes your thinking. Like everything that makes The Beatles.” — Designed by WebToffee from India.

Learning To Fly

“Man has always wanted to fly like a bird and his striving for freedom is remarkable. After watching this song from Pink Floyd, I wanted to jump off a cliff and become an eagle so many times as a kid.” — Designed by WPFloor from India.

Wonderful Life

“My favorite song is ‘Wonderful Life’ from Black from my childhood. This picture that was taken in a very beautiful dock in Belgrade evokes a calm feeling from that song, a peacefulness of soul and mind. Each of us has a gift, but what is truly wonderful is to embrace a flair toward life in small things because, no need to run and hide, it’s a wonderful, Wonderful Life. Cheers!” — Designed by Marija Zaric from Belgrade, Serbia.

Purple Rain

“I love Prince and I was very sad when he left. This song is pretty romantic and makes me dream…” — Designed by Purple from India.

An Autumn Night

“‘My autumn night vanishes into light, Who will I leave you with, my flute?’ The famous song written by Rabindranath Tagore, Nobel Prize winner in Literature 1913. The song is taken from his famous song collection ‘Gitabitan’.” — Designed by Suman Sil from India.

Dreadlock Rasta

“‘Buffalo Soldier’ is a nickname bestowed by the Native Americans to members of the U.S. 10th Cavalry Regiment of the United States Army, denoting their stubborn courage and toughness in battle. The song ‘Buffalo Soldier’ reflects on the courage and valour of these soldiers despite the racist, prejudicial system in which they operated.” — Designed by Sweans from London.

Buena Vista Social Club

“I have been dancing salsa and other Latin dances for 10 years. This makes me happy and makes me love life even more. This song, ‘Chan Chan’, is one of the first Latin songs I’ve ever fallen in love with.” — Designed by Material Admin from India.

Stairway To Heaven

“‘Stairway To Heaven’ by Led Zeppelin.” — Designed by Stellar from India.

Sounds Like Spring

“In spring you can hear all those beautiful sounds outside of the birds singing. Therefore I created this wallpaper of an old phonograph with lovely flowers coming out like music.” — Designed by Melissa Bogemans from Belgium.

Spring Wallpapers

Your favorite song wasn’t part of the collection? No worries, we’ve got some seasonal wallpapers for you to get your desktop ready for April, too, of course — no matter the weather. Please note that some of them are from our archives, and, thus, don’t come with a calendar.

Enjoy Easter

Designed by UIG Studio from Poland.

No Winter Lasts Forever

“Here comes spring, breathing new life into the world around us.” — Designed by Norjimm Pvt Ltd from India.

Bunny

“Easter is the time to celebrate new beginnings, welcoming spring while rejoicing the festival of abundance with your loved ones and your family.” — Designed by Vipin Nayar from India.

Brushes Are Flowers

“April is just around the corner of spring, so I added the flower, but it is also a gray month with all the rain that usually comes with it, so I added gray colors with a twist.” — Designed by Tiago Oliveira from Portugal.

Earth Yey!

“April has a very special day: Earth Day! So I decided to pay tribute to this amazing planet by creating this image with a happy universe.” — Designed by Sara Andreia Agostinho from Portugal.

Strength To Face the Challenges

“April is a month that most people look back to see how far they got with the new year’s resolutions. It’s always overwhelming to realize that it’s almost half way to one year and there’s still so much to be done. This calendar wallpaper is designed to remind you that, ‘you’ve got this!’” — Designed by Metrovista from Orlando, Florida.

Happy Easter

Designed by Tazi Design from Australia.

Relax!

“…and enjoy your Easter holidays with some good chocolate.” — Designed by Ricardo Gimenes from Brazil.

Clover Field

Designed by Nathalie Ouederni from France.

Fairytale

“A tribute to Hans Christian Andersen. Happy Birthday!” — Designed by Roxi Nastase from Romania.

Springtime Sage

“Spring and fresh herbs always feel like they complement each other. Keeping it light and fresh with this wallpaper welcomes a new season!” — Designed by Susan Chiang from the United States.

Fusion

Designed by Rio Creativo from Poland.

Be Happy Bee

“Smell of spring flowers, especially daisies and open landscapes, the joy of freedom.” — Designed by Kiraly Tamas from Romania.

Spring Infographics

“Spring comes for everyone, for big and for small. How is spring arranged? I suggest we explore this question.” — Designed by Ilya Denisenko from Russia.

Flying On A Rainy Day!

“April is the month of spring or autumn depending where you live on the globe! It’s also the second rainiest month of the year. I was inspired by one simple motif to illustrate rain, birds and flowers. So either you witness rainy days or colorful ones … Enjoy April!” — Designed by Rana Kadry from Egypt.

Without The Rain There Would Be No Rainbows

“I love April showers and the spring blooms they bring!” — Designed by Denise Johnson from Chicago.

Good Day

“Some pretty flowers and spring time always make for a good day.” — Designed by Amalia Van Bloom from the United States.

The Perpetual Circle

“The Black Forest, which is beginning right behind our office windows, so we can watch the perpetual circle of nature, when we take a look outside.” — Designed by Nils Kunath from Germany.

April Brings Spring

“With April comes spring, flowers and a fresh breath of warmth and creativity.” — Designed by Zack Aronson from New York, US.

Silly Sheep

Designed by Pietje Precies from The Netherlands.

Sakura

“Spring is finally here with its sweet sakura flowers, which remind me of my trip to Japan.” — Designed by Laurence Vagner from France.

A Time For Reflection

“‘We’re all equal before a wave.’ - Laird Hamilton.” — Designed by Shawna Armstrong from the United States.

Spring Rain

“Even the rain is beautiful during spring!” — Designed by Zlatina Petrova from Bulgaria.

Join In Next Month!

Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience throughout their works.

Join in next month!
