
Feed aggregator

Mixing Tangible And Intangible: Designing Multimodal Interfaces Using Adobe XD

Smashing Magazine - Tue, 12/18/2018 - 9:00am
By Nick Babich

(This article is kindly sponsored by Adobe.) User interfaces are evolving. Voice-enabled interfaces are challenging the long dominance of graphical user interfaces and are quickly becoming a common part of our daily lives. Significant progress in automatic speech recognition (ASR) and natural language processing (NLP), together with an impressive consumer base (millions of mobile devices with built-in voice assistants), have influenced the rapid development and adoption of voice-based interfaces.

Products that use voice as the primary interface are becoming more and more popular. In the US alone, 47.3 million adults have access to a smart speaker (that’s one-fifth of the US adult population), and the number is growing. But voice interfaces have a bright future beyond personal and home use. As people become accustomed to voice interfaces, they will come to expect them in a business context as well. Just imagine that soon you’ll be able to trigger a conference-room projector by saying something like, “Show my presentation”.

It’s evident that human-machine communication is rapidly expanding to encompass both written and spoken interaction. But does it mean that future interfaces will be voice-only? Despite some science-fiction portrayals, voice won’t completely replace graphical user interfaces. Instead, we’ll have a synergy of voice, visual and gesture in a new format of interface: a voice-enabled, multimodal interface.

In this article, we’ll:

  • explore the concept of a voice-enabled interface and review different types of voice-enabled interfaces;
  • find out why voice-enabled, multimodal user interfaces will be the preferred user experience;
  • see how you can build a multimodal UI using Adobe XD.
The State Of Voice User Interfaces (VUI)

Before diving into the details of voice user interfaces, we must define what voice input is. Voice input is a human-computer interaction in which a user speaks commands instead of writing them. The beauty of voice input is that it’s a more natural interaction for people — users are not restricted to a specific syntax when interacting with a system; they can structure their input in many different ways, just as they would do in human conversation.

Voice user interfaces bring the following benefits to their users:

  • Less interaction cost
    Although using a voice-enabled interface does involve an interaction cost, this cost is smaller (in theory) than that of learning a new GUI.
  • Hands-free control
    VUIs are great for when the user’s hands are busy — for example, while driving, cooking or exercising.
  • Speed
    Voice is excellent when asking a question is faster than typing it and reading through the results. For example, when using voice in a car, it is faster to say the destination to the navigation system than to type it on a touchscreen.
  • Emotion and personality
    Even when we hear a voice but don’t see an image of the speaker, we can picture the speaker in our head. This creates an opportunity to improve user engagement.
  • Accessibility
    Visually impaired users and users with a mobility impairment can use voice to interact with a system.
Three Types Of Voice-Enabled Interfaces

Depending on how voice is used, a voice-enabled interface falls into one of the following types.

Voice Agents In Screen-First Devices

Apple Siri and Google Assistant are prime examples of voice agents. For such systems, the voice acts more like an enhancement for the existing GUI. In many cases, the agent acts as the first step in the user’s journey: The user triggers the voice agent and provides a command via voice, while all other interactions are done using the touchscreen. For example, when you ask Siri a question, it will provide answers in the format of a list, and you need to interact with that list. As a result, the user experience becomes fragmented — we use voice to initiate the interaction and then shift to touch to continue it.

Siri executes a voice command to search for news, but then requires users to touch the screen in order to read the items.

Voice-Only Devices

These devices don’t have visual displays; users rely on audio for both input and output. Amazon Echo and Google Home smart speakers are prime examples of products in this category. The lack of a visual display is a significant constraint on the device’s ability to communicate information and options to the user. As a result, most people use these devices to complete simple tasks, such as playing music and getting answers to simple questions.

Amazon Echo Dot is a screen-less device.

Voice-First Devices

With voice-first systems, the device accepts user input primarily via voice commands, but also has an integrated screen display. It means that voice is the primary user interface, but not the only one. The old saying, “A picture is worth a thousand words” still applies to modern voice-enabled systems. The human brain has incredible image-processing abilities — we can understand complex information faster when we see it visually. Compared to voice-only devices, voice-first devices allow users to access a larger amount of information and make many tasks much easier.

The Amazon Echo Show is a prime example of a device that employs a voice-first system. Visual information is gradually incorporated as part of a holistic system — the screen is not loaded with app icons; rather, the system encourages users to try different voice commands (suggesting verbal commands such as, “Try ‘Alexa, show me the weather at 5:00 pm’”). The screen even makes common tasks such as checking a recipe while cooking much easier — users don’t need to listen carefully and keep all of the information in their heads; when they need the information, they simply look at the screen.

Amazon Echo Show is basically an Amazon Echo speaker with a screen.

Introducing Multimodal Interfaces

When it comes to using voice in UI design, don’t think of voice as something you can use alone. Devices such as Amazon Echo Show include a screen but employ voice as the primary input method, making for a more holistic user experience. This is the first step towards a new generation of user interfaces: multimodal interfaces.

A multimodal interface is an interface that blends voice, touch, audio and different types of visuals in a single, seamless UI. Amazon Echo Show is an excellent example of a device that takes full advantage of a voice-enabled multimodal interface. When users interact with Show, they make requests just as they would with a voice-only device; however, the response they receive will likely be multimodal, containing both voice and visual responses.

Multimodal products are more complex than products that rely only on visuals or only on voice. Why should anyone create a multimodal interface in the first place? To answer that question, we need to step back and see how people perceive the environment around them. People have five senses, and the combination of our senses working together is how we perceive things. For example, our senses work together when we are listening to music at a live concert. Remove one sense (for example, hearing), and the experience takes on an entirely different context.


For too long, we’ve thought about the user experience as exclusively either visual or gestural design. It’s time to change this thinking. Multimodal design is a way to think about and design for experiences that connect our sensory abilities together.

Multimodal interfaces feel like a more human way for user and machine to communicate. They open up new opportunities for deeper interactions. And today, it’s much easier to design multimodal interfaces because the technical limitations that in the past constrained interactions with products are being erased.

The Difference Between A GUI And Multimodal Interface

The key difference here is that multimodal interfaces like Amazon Echo Show sync voice and visual interfaces. As a result, when we’re designing the experience, the voice and visuals are no longer independent parts; they are integral parts of the experience that the system provides.

Visual And Voice Channel: When To Use Each

It’s important to think about voice and visuals as channels for input and output. Each channel has its own strengths and weaknesses.

Let’s start with the visuals. It’s clear that some information is just easier to understand when we see it, rather than when we hear it. Visuals work better when you need to provide:

  • a long list of options (listening to a long list takes a lot of time and is difficult to follow);
  • data-heavy information (such as diagrams and graphs);
  • product information (for example, products in online shops; most likely, you would want to see a product before buying) and product comparison (as with the long list of options, it would be hard to provide all of the information using only voice).

For some information, however, we can easily rely on verbal communication. Voice might be the right fit for the following cases:

  • user commands (voice is an efficient input modality, allowing users to give commands to the system quickly and bypassing complex navigation menus);
  • simple user instructions (for example, a routine check on a prescription);
  • warnings and notifications (for example, an audio warning paired with voice notifications while driving).

While these are a few typical cases of visual and voice combined, it’s important to know that we can’t separate the two from each other. We can create a better user experience only when both voice and visuals work together. For example, suppose we want to purchase a new pair of shoes. We could use voice to request from the system, “Show me New Balance shoes.” The system would process our request and visually provide product information (an easier way for us to compare shoes).

What You Need To Know To Design Voice-Enabled, Multimodal Interfaces

Voice is one of the most exciting challenges for UX designers. Despite its novelty, the fundamental rules for designing voice-enabled, multimodal interfaces are the same as those we use to create visual designs. Designers should care about their users: aim to reduce friction by solving users’ problems in efficient ways, and prioritize clarity to make the user’s choices obvious.

But there are some unique design principles for multimodal interfaces as well.

Make Sure You Solve The Right Problem

Design should solve problems. But it’s vital to solve the right problems; otherwise, you could spend a lot of time creating an experience that doesn’t bring much value to users. Thus, make sure you’re focused on solving the right problem. Voice interactions should make sense to the user; users should have a compelling reason to use voice over other methods of interaction (such as clicking or tapping). That’s why, when you create a new product — even before starting the design — it’s essential to conduct user research and determine whether voice would improve the UX.

Start with creating a user journey map. Analyze the journey map and find places where including voice as a channel would benefit the UX.

  • Find places in the journey where users might encounter friction and frustration. Would using voice reduce the friction?
  • Think about the context of the user. Would voice work for a particular context?
  • Think about what is uniquely enabled by voice. Remember the unique benefits of using voice, such as hands-free and eyes-free interaction. Could voice add value to the experience?
Create Conversational Flows

Ideally, the interfaces you design should require zero interaction cost: Users should be able to fulfill their needs without spending extra time on learning how to interact with the system. This happens only when voice interactions resemble a real conversation, not a system dialog wrapped in the format of voice commands. The fundamental rule of a good UI is simple: Computers should adapt to humans, not the other way around.

People rarely have flat, linear conversations (conversations that only last one turn). That’s why, to make interaction with a system feel like a live conversation, designers should focus on creating conversational flows. Each conversational flow consists of dialogs — the pathways that occur between the system and the user. Each dialog would include the system’s prompts and the user’s possible responses.

A conversational flow can be presented in the form of a flow diagram. Each flow should focus on one particular use case (for example, setting an alarm clock). For most dialogs in a flow, it’s vital to consider error paths — what happens when things go off the rails.
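
To make this concrete, here’s a minimal sketch (in TypeScript) of how one such flow might be modeled as data, including an error path for each dialog. The interface, prompts, and intent names are illustrative assumptions, not any platform’s actual schema:

    // A hypothetical model of one conversational flow: setting an alarm.
    // Every dialog step carries a prompt, the intents that move the
    // conversation forward, and a reprompt for the error path.
    interface DialogStep {
      prompt: string;                      // what the system says
      expectedIntents: string[];           // user responses that advance the flow
      errorReprompt: string;               // what to say when input isn't understood
      next?: Record<string, DialogStep>;   // following step, keyed by intent
    }

    const setAlarmFlow: DialogStep = {
      prompt: "What time should I set the alarm for?",
      expectedIntents: ["ProvideTime"],
      errorReprompt: "Sorry, I didn't catch a time. Try something like '8 am'.",
      next: {
        ProvideTime: {
          prompt: "Done - your alarm is set. Anything else?",
          expectedIntents: ["Yes", "No"],
          errorReprompt: "You can say 'yes' or 'no'.",
        },
      },
    };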

Each voice command of the user consists of three key elements: intent, utterance and slot.

  • Intent is the objective of the user’s interaction with a voice-enabled system.
    An intent is just a fancy way of defining the purpose behind a set of words. Each interaction with a system brings the user some utility — whether it’s information or an action, that utility is captured by the intent. Understanding the user’s intent is a crucial part of voice-enabled interfaces. When we design VUI, we don’t always know for sure what a user’s intent is, but we can guess it with high accuracy.
  • Utterance is how the user phrases their request.
    Usually, users have more than one way to formulate a voice command. For example, we can set an alarm clock by saying “Set alarm clock to 8 am”, or “Alarm clock 8 am tomorrow” or even “I need to wake up at 8 am.” Designers need to consider every possible variation of utterance.
  • Slots are variables that users use in a command. Sometimes users need to provide additional information in the request. In our example of the alarm clock, “8 am” is a slot. The sketch after this list shows one way to model all three elements.
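
As a rough TypeScript sketch (the type and sample values are assumptions for illustration, not a real platform’s schema), a parsed voice command carrying these three elements might look like this:

    // Hypothetical shape of a parsed voice command.
    interface VoiceCommand {
      intent: string;                  // the objective behind the words
      utterance: string;               // how the user actually phrased it
      slots: Record<string, string>;   // variables extracted from the phrase
    }

    // Three different utterances resolving to the same intent and slot:
    const variations: VoiceCommand[] = [
      { intent: "SetAlarm", utterance: "Set alarm clock to 8 am", slots: { time: "8 am" } },
      { intent: "SetAlarm", utterance: "Alarm clock 8 am tomorrow", slots: { time: "8 am", date: "tomorrow" } },
      { intent: "SetAlarm", utterance: "I need to wake up at 8 am", slots: { time: "8 am" } },
    ];
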
Don’t Put Words In The User’s Mouth

People know how to talk. Don’t try to teach them commands. Avoid phrases like, “To send a meeting appointment, you need to say ‘Calendar, meetings, create a new meeting’.” If you have to explain commands, you need to reconsider the way you’re designing the system. Always aim for natural language conversation, and try to accommodate diverse speaking styles.

Strive For Consistency

You need to achieve consistency in language and voice across contexts. Consistency will help to build familiarity in interactions.

Always Provide Feedback

Visibility of system status is one of the fundamental principles of good GUI design. The system should always keep users informed of what is going on through appropriate feedback within a reasonable time. The same rule applies to VUI design.

  • Make the user aware that the system is listening.
    Show visual indicators when the device is listening or processing the user’s request. Without feedback, the user can only guess whether the system is doing something. That’s why even voice-only devices such as Amazon Echo and Google Home give us nice visual feedback (flashing lights) when they are listening or searching for an answer.
  • Provide conversational markers.
    Conversational markers tell the user where they’re at in the conversation.
  • Confirm when a task is completed.
    For example, when users ask the voice-enabled smart home system “Turn off the lights in the garage”, the system should let the user know that the command has been successfully executed. Without confirmation, users will need to walk into the garage and check the lights. It defeats the purpose of the smart home system, which is to make the user’s life easier.
Avoid Long Sentences

When designing a voice-enabled system, consider the way you provide information to users. It’s relatively easy to overwhelm users with too much information when you use long sentences. First, users can’t retain a lot of information in their short-term memory, so they can easily forget some important information. Also, audio is a slow medium — most people can read much faster than they can listen.

Be respectful of your user’s time; don’t read out long audio monologues. When you’re designing a response, the fewer words you use, the better. But remember that you still need to provide enough information for the user to complete their task. Thus, if you cannot summarize an answer in a few words, display it on the screen instead.

Provide Next Steps Sequentially

Users can be overwhelmed not only by long sentences, but also by the number of options presented at one time. It’s vital to break down the process of interaction with a voice-enabled system into bite-sized chunks. Limit the number of choices the user has at any one time, and make sure they know what to do at every moment.

When designing a complex voice-enabled system with a lot of features, you can use the technique of progressive disclosure: Present only the options or information necessary to complete the task.

Have A Strong Error-Handling Strategy

Of course, the system should prevent errors from occurring in the first place. But no matter how good your voice-enabled system is, you should always design for the scenario in which the system doesn’t understand the user. Your responsibility is to design for such cases.

Here are a few practical tips for creating a strategy:

  • Don’t blame the user.
    In conversation, there are no errors. Try to avoid responses like, “Your answer is incorrect.”
  • Provide error-recovery flows.
    Provide an option for back-and-forths in a conversation, or even to exit the system, without losing important information. Save the user’s state in the journey, so that they can re-engage with the system right from where they left off.
  • Let users replay information.
    Provide an option to make the system repeat the question or answer. This might be helpful for complex questions or answers where it would be hard for the user to commit all of the information to their working memory.
  • Provide stop wording.
    In some cases, the user will not be interested in listening to an option and will want the system to stop talking about it. Stop wording should help them do just that.
  • Handle unexpected utterances gracefully.
    No matter how much you invest in the design of a system, there will be situations when the system doesn’t understand the user. It’s vital to handle such cases gracefully. Don’t be afraid to let the system admit a lack of understanding. The system should communicate what it has understood and provide helpful reprompts.
  • Use analytics to improve your error strategy.
    Analytics can help you identify wrong turns and misinterpretations.
Keep Track Of Context

Make sure the system understands the context of the user’s input. For example, when someone says that they want to book a flight to San Francisco next week, they might refer to “it” or “the city” during the conversational flow. The system should remember what was said and be able to match it to the newly received information.
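
A minimal sketch of this idea in TypeScript — real systems use far more sophisticated coreference resolution, and every name below is a hypothetical stand-in:

    // Keep entities mentioned earlier in the conversation so that
    // follow-up references ("it", "the city") can be resolved.
    interface ConversationContext {
      entities: Record<string, string>;
    }

    function resolveReference(phrase: string, ctx: ConversationContext): string {
      // Naive resolution: map known referring expressions to the stored entity.
      if ((phrase === "it" || phrase === "the city") && ctx.entities["city"]) {
        return ctx.entities["city"];
      }
      return phrase;
    }

    // Turn 1: "Book a flight to San Francisco next week."
    const ctx: ConversationContext = { entities: { city: "San Francisco" } };
    // Turn 2: "What's the weather like in the city?"
    console.log(resolveReference("the city", ctx)); // "San Francisco"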

Learn About Your Users To Create More Powerful Interactions

A voice-enabled system becomes more sophisticated when it uses additional information (such as user context or past behavior) to understand what the user wants. This technique is called intelligent interpretation, and it requires that the system actively learn about the user and be able to adjust its behavior accordingly. This knowledge will help the system to provide answers even to complex questions, such as “What gift should I buy for my wife’s birthday?”

Give Your VUI A Personality

Every voice-enabled system has an emotional impact on the user, whether you plan for it or not. People associate voice with humans rather than machines. According to Speak Easy Global Edition research, 74% of regular users of voice technology expect brands to have unique voices and personalities for their voice-enabled products. It’s possible to build empathy through personality and achieve a higher level of user engagement.

Try to reflect your unique brand and identity in the voice and tone you present. Construct a persona of your voice-enabled agent, and rely on this persona when creating dialogs.

Build Trust

When users don’t trust a system, they don’t have the motivation to use it. That’s why building trust is a requirement of product design. Two factors have a significant impact on the level of trust built: system capabilities and valid outcomes.

Building trust starts with setting user expectations. Traditional GUIs have a lot of visual details to help the user understand what the system is capable of. With a voice-enabled system, designers have fewer tools to rely on. Still, it’s vital to make the system naturally discoverable; the user should understand what is and isn’t possible with the system. That’s why a voice-enabled system might require user onboarding, where it talks about what the system can do or what it knows. When designing onboarding, try to offer meaningful examples to let people know what it can do (examples work better than instructions).

When it comes to valid outcomes, people know that voice-enabled systems are imperfect. When a system provides an answer, some users might doubt that the answer is correct. This happens because users don’t have any information about whether their request was correctly understood or what algorithm was used to find the answer. To prevent trust issues, use the screen for supporting evidence — display the original query on the screen — and provide some key information about the algorithm. For example, when a user asks, “Show me the top five movies of 2018”, the system can say, “Here are the top five movies of 2018 according to the box office in the US”.

Don’t Ignore Security And Data Privacy

Unlike mobile devices, which belong to the individual, voice devices tend to belong to a location, like a kitchen. And usually, there is more than one person in the same location. Just imagine that someone else can interact with a system that has access to all of your personal data. Some VUI systems, such as Amazon Alexa, Google Assistant and Apple Siri, can recognize individual voices, which adds a layer of security to the system. Still, it doesn’t guarantee that the system will be able to recognize users based on their unique voice signature in 100% of cases.

Voice recognition is continually improving, and it will be hard or nearly impossible to imitate a voice in the near future. However, in the current reality, it’s vital to provide an additional authentication layer to reassure the user that their data is safe. If you design an app that works with sensitive data, such as health information or banking details, you might want to include an extra authentication step, such as a password, fingerprint or face recognition.

Conduct Usability Testing

Usability testing is a mandatory requirement for any system. “Test early, test often” should be a fundamental rule of your design process. Gather user research data early on, and iterate your designs. But testing multimodal interfaces has its own specifics. Here are a few phases that should be taken into account:

  • Ideation phase
    Test drive your sample dialogs. Practice reading sample dialogs out loud. Once you have some conversational flows, record both sides of the conversation (the user’s utterances and the system’s responses), and listen to the recording to understand whether they sound natural.
  • Early stages of product development (testing with lo-fi prototypes)
    Wizard of Oz testing is well-suited to testing conversational interfaces. Wizard of Oz testing is a type of testing in which a participant interacts with a system that they believe is operated by a computer but is in fact operated by a human. The test participant formulates a query, and a real person responds on the other end. This method gets its name from the book The Wonderful Wizard of Oz by L. Frank Baum. In the book, an ordinary man hides behind a curtain, pretending to be a powerful wizard. This test allows you to map out every possible scenario of interaction and, as a result, create more natural interactions. Say Wizard is a great tool to help you run a Wizard of Oz voice-interface test on macOS.
  • Designing For Voice: The ‘Wizard Of Oz’ Method (Watch on Vimeo)
  • Later stages of product development (testing with hi-fi prototypes)
    In usability testing of graphical user interfaces, we often ask users to speak out loud when they interact with a system. For a voice-enabled system, that’s not always possible because the system would be listening to that narration. So, it might be better to observe the user’s interactions with the system, rather than ask them to speak out loud.
How To Create A Multimodal Interface Using Adobe XD

Now that you have a solid understanding of what a multimodal interface is and what rules to remember when designing them, we can discuss how to make a prototype of a multimodal interface.

Prototyping is a fundamental part of the design process. Being able to bring an idea to life and share it with others is extremely important. Until now, designers who wanted to incorporate voice in prototyping had few tools to rely on, the most powerful of which was a flowchart. Picturing how a user would interact with a system required a lot of imagination from someone looking at the flowchart. With Adobe XD, designers now have access to the medium of voice and can use it in their prototypes. XD seamlessly connects screen and voice prototyping in one app.

New Experiences, Same Process

Even though voice is a totally different medium than visual, the process of prototyping for voice in Adobe XD is pretty much the same as prototyping for a GUI. The Adobe XD team integrates voice in a way that will feel natural and intuitive for any designer. Designers can use voice triggers and speech playback to interact with prototypes:

  • Voice triggers start an interaction when a user says a particular word or phrase (utterance).
  • Speech playback gives designers access to a text-to-speech engine. XD will speak words and sentences defined by a designer. Speech playback can be used for many different purposes. For example, it can act as an acknowledgment (to reassure users) or as guidance (so users know what to do next).

The great thing about XD is that it doesn’t force you to learn the complexities of each voice platform.
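
To get a feel for what XD abstracts away, here’s a rough browser-based sketch of the same two building blocks using the Web Speech API (speech recognition is still vendor-prefixed in some browsers, and the command and response strings below are just examples):

    // "Speech playback": text-to-speech via the browser's built-in engine.
    function speak(text: string): void {
      window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
    }

    // "Voice trigger": run an action when the user says a given utterance.
    // SpeechRecognition is prefixed as webkitSpeechRecognition in Chromium.
    const Recognition =
      (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

    function listenFor(command: string, onTrigger: () => void): void {
      const recognition = new Recognition();
      recognition.lang = "en-US";
      recognition.onresult = (event: any) => {
        const transcript: string = event.results[0][0].transcript.toLowerCase();
        if (transcript.includes(command.toLowerCase())) {
          onTrigger();
        }
      };
      recognition.start();
    }

    // Example: a simple trigger/playback pair.
    listenFor("show me the weather", () =>
      speak("Here is the weather for five pm.")
    );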

Enough words — let’s see how it works in action. For all of the examples you’ll see below, I’ve used artboards created with the Adobe XD UI kit for Amazon Alexa. The kit contains all of the styles and components needed to create experiences for Amazon Alexa.

Suppose we have the following artboards:


Let’s go into prototyping mode to add in some voice interactions. We’ll start with voice triggers. Along with triggers such as tap and drag, we are now able to use voice as a trigger. We can use any layers for voice triggers as long as they have a handle leading to another artboard. Let’s connect the artboards together.

Connecting artboards together.

Once we do that, we’ll find a new “Voice” option under “Trigger”. When we select this option, we’ll see a “Command” field that we can use to enter an utterance — this is what XD will actually be listening for. Users will need to speak this command to activate the trigger.

Setting a voice trigger in Adobe XD.

That’s all! We’ve defined our first voice interaction. Now, users can say something, and a prototype will respond to it. But we can make this interaction much more powerful by adding speech playback. As I mentioned previously, speech playback allows a system to speak some words.

Select the entire second artboard, and click on the blue handle. Choose a “Time” trigger with a delay and set it to 0.2s. Under the action, you’ll find “Speech Playback”. Here, we write down what the virtual assistant will speak back to us.


We’re ready to test our prototype. Select the first artboard, and click the play button in the top right to launch a preview window. When interacting with a voice prototype, make sure your mic is on. Then, hold down the spacebar to speak the voice command. This input triggers the next action in the prototype.

Use Auto-Animate To Make The Experience More Dynamic

Animation brings a lot of benefits to UI design. It serves clear functional purposes, such as:

  • communicating the spatial relationships between objects (Where does the object come from? Are those objects related?);
  • communicating affordance (What can I do next?).

But functional purposes aren’t the only benefits of animation; animation also makes the experience more alive and dynamic. That’s why UI animations should be a natural part of multimodal interfaces.

With “Auto-Animate” available in Adobe XD, it becomes much easier to create prototypes with immersive animated transitions. Adobe XD does all the hard work for you, so you don’t need to worry about it. All you need to do to create an animated transition between two artboards is simply duplicate an artboard, modify the object properties in the clone (properties such as size, position and rotation), and apply an Auto-Animate action. XD will automatically animate the differences in properties between each artboard.

Let’s see how it works in our design. Suppose we have an existing shopping list in Amazon Echo Show and want to add a new object to the list using voice. Duplicate the following artboard:

Artboard: shopping list.

Let’s introduce some changes in the layout: Add a new object. We aren’t limited here, so we can easily modify any properties, such as text attributes, color, opacity or position of the object — whatever changes we make, XD will animate between them.

Two artboards: our original shopping list and its duplicate with a new item.

When you wire two artboards together in prototype mode using Auto-Animate in “Action”, XD will automatically animate the differences in properties between each artboard.


And here’s how the interaction will look to users:

One crucial thing worth mentioning: Keep the names of all of the layers the same; otherwise, Adobe XD won’t be able to apply the auto-animation.

Conclusion

We’re at the dawn of a user interface revolution. A new generation of interfaces — multimodal interfaces — not only will give users more power, but will also change the way users interact with systems. We will probably still have displays, but we won’t need keyboards to interact with the systems.

At the same time, the fundamental requirements for designing multimodal interfaces won’t be much different from those of designing modern interfaces. Designers will need to keep the interaction simple; focus on the user and their needs; design, prototype, test and iterate.

And the great thing is that you don’t need to wait to start designing for this new generation of interfaces. You can start today.

This article is part of the UX design series sponsored by Adobe. The Adobe XD tool is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

(ms, ra, al, yk, il)

Don’t Pay To Speak At Commercial Events

Smashing Magazine - Mon, 12/17/2018 - 8:15am
By Vitaly Friedman

Setting up a conference isn’t an easy undertaking. It takes time, effort, patience, and attention to all the little details that make up a truly memorable experience. It’s not something one can take lightly, and it’s often a major personal and financial commitment. After all, somebody has to build a good team and make all those arrangements: flights, catering, parties, badges, and everything in between.

The work that takes place behind the scenes often goes unnoticed and, to an extent, that’s an indication that the planning went well. There are hundreds of accessible and affordable meet-ups, community events, nonprofit events, and small local groups — all fueled by the incredible personal efforts of humble, kind, generous people who donate their weekends to create an environment for people to share and learn together. I love these events, I have utter respect and admiration for the work their organizers do, and I’d happily speak at and support them any day and night, with all the resources and energy I have. These are incredible people doing incredible work; their efforts deserve to be supported and applauded.

Unlike these events, commercial and corporate conferences usually target companies’ employees and organizations with training budgets to send their employees for continuing education. There is nothing wrong with commercial conferences per se and there is, of course, a wide spectrum of such events — ranging from single-day, single-track gatherings with a few speakers, all the way to week-long multi-track festivals with a bigger line-up of speakers. The latter tend to have a higher ticket price, and often a much broader scope. Depending on the size and the reputation of the event, some of them have more or less weight in the industry, so some are perceived to be more important to attend or more prestigious to speak at.

Both commercial and non-commercial events tend to have so-called Calls For Papers (CFPs), inviting speakers from all over the world to submit applications for speaking, with a chance of being selected to present at the event. CFPs are widely accepted and established in the industry; however, the idea is sometimes challenged and discussed, and not always seen in a positive light. While some organizers and speakers consider CFPs to lower the barrier to speaking for new talent, for others they are an easy way out for filling in speaking slots. The argument is that CFPs push diversity and inclusion to a review phase, rather than actively seeking it up front. As a result, accepted speakers might feel like they have been “chosen”, which nudges them into accepting low-value compensation.

The key to a fair, diverse and interesting line-up probably lies somewhere in the middle. It should be the organizer’s job to actively seek, review, and invite speakers that would fit the theme and the scope of the event. Admittedly, as an organizer, unless you are fine with the same speakers appearing at your event throughout the years, that’s much harder to do than just setting up a call for speakers and waiting for incoming emails to start showing up. Combining thorough curation with a phase of active CFP submissions probably works best, but it’s up to the organizer how the speakers are “distributed” among both. Luckily, many resources are highlighting new voices in the industry, such as WomenWhoDesign, which is a good starting point to move away from the “usual suspects” of the conference circuit.


Many events strongly and publicly commit to creating an inclusive and diverse environment for attendees and speakers with a Code of Conduct. The Code of Conduct explains the values and the principles of conference organizers as well as contact details in case any conflict or violation appears. The sheer presence of such a code on a conference website sends a signal to attendees, speakers, sponsors, and the team that some thought has been given to creating an inclusive, safe, and friendly environment for everybody at the event. However, too often at commercial events, the Code of Conduct is considered an unnecessary novelty and hence is either neglected or forgotten.

Now, there are wonderful, friendly, professional, well-designed and well-curated commercial events with a stellar reputation. These events are committed to diverse and inclusive line-ups, and they always at least cover speakers’ expenses, flights, and accommodation. The reason they’ve gained a reputation over the years is that organizers can afford to continuously put their heart and soul into running these events year after year — mostly because their time and efforts are remunerated by the profit the conference makes.

Many non-commercial events, fueled by great ideas and hard work, may succeed the first, second, and third time, but unfortunately, it’s not uncommon for them to fade away just a few years later. Mostly because setting up and maintaining the quality of such events takes a lot of personal time, effort and motivation beyond regular work hours, and it’s just really hard to keep it up without the backbone of a strong, stable team or company behind you.

Some conferences aren’t quite like that. In fact, I’d argue that some conferences are pretty much the exact opposite. It’s more likely for them to allocate resources to outstanding catering, lighting, and video production on site rather than to the core of the event: the quality of the content, as provided by speakers. What lurks behind the scenes of such events is a toxic, broken conference culture, despite the hefty ticket price. And more often than not, speakers bear the burden of all of their conference-related expenses, flights, and accommodation (to say nothing of the personal and professional time they are already donating to prepare, rehearse, and travel to and from the event) from their own pockets. This isn’t right, and it shouldn’t be acceptable in our industry.

The Broken State Of Commercial Conferences

Personally, I’ve been privileged to speak at many events over the years and, more often than not, there was a fundamental mismatch between how organizers see speaking engagements and how I perceive them. Don’t get me wrong: speaking at tech conferences has tremendous benefits, and it’s a rewarding experience, full of adventures, networking, traveling, and learning; but it also takes time and effort, usually away from your family, your friends, and your company. For a given talk, it might easily take over 80 hours of work just to get all the research and content prepared, not to mention rehearsal and traveling time. That’s a massive commitment and time investment.

But many conference organizers don’t see it this way. The size of the event, with hundreds and thousands of people attending the conference, is seen as a fair justification for the lack of speaker/training budgets or diversity/student scholarships. It’s remarkably painful to face the same conversations over and over and over again: the general expectation is that speakers should speak for free as they’ve been given a unique opportunity to speak and that neither flights nor expenses should be covered for the very same reason.

It’s sad to see invitation emails delicately avoiding or downplaying the topics of diversity, honorarium, and expenses. Instead, they tend to focus on the size of the event, the brands represented there, the big names that have spoken in the past, and the opportunities such a conference provides. In fact, a good number of CFPs gently avoid mentioning how the conference deals with expenses at all. As a result, an applicant who needs their costs to be covered is often discriminated against, because an applicant whose expenses will be covered by their company is preferred. Some events explicitly require unique content for the talk, while not covering any speaker expenses, essentially asking speakers to work for free.

Preparing for a talk is a massive commitment and time investment. Taking care of the fine details, such as the confidence monitor and countdown on stage, is one of those little things. (Image source: beyond tellerrand)

It’s disappointing (upon request) to receive quick-paced replies explaining that there isn’t really any budget for speakers, as employers are expected to cover flights and accommodation. Sometimes, as a sign of good faith, the organizers are happy to provide a free platinum pass which would grant exclusive access to all conference talks across all tracks (“worth $2500” or so). And sometimes it goes so far as to be exploitative when organizers offer a “generous” 50% discount off the regular ticket price, including access to the speakers’ lounge area where one could possibly meet “decision makers” with the opportunity and hope of creating unique and advantageous connections.

It’s both sad and frustrating to read that “most” speakers were “happy to settle for only a slot at the conference.” After all, they are getting an “incredible amount of exposure to decision makers.” Apparently, according to the track record of the conference, it “reliably” helped dozens of speakers in the past to find new work and connect with new C-level clients. Once organizers are asked again (in a slightly more serious tone), suddenly a speaker budget materializes. This basically means that the organizers are willing to pay an honorarium only to speakers that are actually confident enough to repeatedly ask for it.

And then, a few months later, it’s hurtful to see the same organizers who chose not to cover speaker expenses, publishing recordings of conference talks behind a paywall, further profiting from speakers’ work without any thought of reimbursing or subsidizing speakers’ content they are repackaging and reselling. It’s not uncommon to run it all under the premise of legal formalities, asking the speaker to sign a speaker’s contract upon arrival.

As an industry, we should and can be better than that. Such treatment of speakers shows a fundamental lack of respect for time, effort, and work done by knowledgeable and experienced experts in our industry. It’s also a sign of a very broken state of affairs that dominates many tech events. It’s not surprising, then, that web conferences don’t have a particularly good reputation, often criticized for being unfair, dull, a scam, full of sponsored sessions, lacking diversity or a waste of money.

Speakers, Make Organizers Want To Invite You

On a personal note, throughout all these years, I have rarely received consultancy projects from “exposure” on stage. More often than not, the time away from family and company costs much more than any honorarium provided. Neither did I meet many “decision-makers” in the speaker lounge as they tend to delicately avoid large gatherings and public spaces to avoid endless pitches and questions. One thing that large conferences do lead to is getting invitations to more conferences; however, expecting a big client from a speaking engagement at corporate events has proved to be quite unrealistic for me. In fact, I tend to get way more work from smaller events and meet-ups where you actually get a chance to have a conversation with people located in a smaller, intimate space.

Of course, everybody has their own experiences and decides for themselves what’s acceptable for them, yet my personal experience taught me to drastically lower my expectations. That’s why after a few years of speaking I started running workshops alongside the speaking engagements. With a large group of people attending a commercial event, full-day workshops can indeed bring a reasonable revenue, with a fair 50% / 50% profit split between the workshop coach and the conference organizer.

Admittedly, during the review of this article, I was approached by some speakers who have had very different experiences; they ended up with big projects and clients only after an active phase of speaking at large events. So your experience may vary, but the one thing I learned over the years is that it’s absolutely critical to keep showing up in industry conversations, so organizers will seize an opportunity to invite you to speak. For speakers, that’s a much better position to be in.

If you’re a new speaker, consider speaking for free at local meet-ups; it’s fair and honorable — and great training for larger events; the smaller group size and more informal setting allow you to seek valuable feedback about what the audience enjoyed and where you can improve. You can also gain visibility through articles, webinars, and open-source projects. An email to an organizer featuring an interesting topic alongside a recorded talk, articles and open-source projects can bring you and your work to their attention. Organizers are looking for knowledgeable and excited speakers who love and live what they are doing and can communicate that energy and expertise to the audience.

Of course, there may be times when it is reasonable to accept conditions to get an opportunity to reach potential clients, but this decision has to be carefully considered and measured in light of the effort and time investment it requires. After all, it’s you doing them a favor, not the other way around. When speaking at large commercial conferences without any remuneration, basically you are trading your name, your time and your money for the promise of gaining exposure while helping the conference sell tickets along the way.

Organizers, Allocate The Speaking Budget First

I don’t believe for a second that most organizers have bad intentions; nor do I believe that they mean to cut corners at all costs to maximize profit. From my conversations with organizers, I clearly see that they share the same goals that community events have, as they do their best to create a wonderful and memorable event for everybody involved, while also paying the bills for all the hard-working people who make the event happen. After all, the conference business isn’t an easy one, and you hardly ever know how ticket sales will go next year. Still, there seems to be a fundamental mismatch of priorities and expectations.

Setting up a conference is an honorable thought, but you need a comprehensive financial plan of what it costs and how much you can spend. As mentioned above, too many promising events fade away because they are powered by the motivation of a small group of people who also need to earn money with their regular job. Conference organizers deserve to get revenue to share across the team, as working on a voluntary basis is often not sustainable.

All organizers have the same goal: to create wonderful, memorable events for everybody involved. (Image source: ColdFront)

To get a better understanding of how to get there, I can only recommend the fantastic Conference Organizer’s Handbook by Peter-Paul Koch, which covers a general strategy for setting up a truly professional event, from planning to pricing to running it — without burning out. Bruce Lawson has also prepared a comprehensive list of questions that could be addressed in the welcome email to speakers. Plus, Lara Hogan has written an insightful book on Demystifying Public Speaking, which I highly encourage you to look at as well.

Yes, venues are expensive, and yes, so is catering, and yes, so is AV and technical setup. But before allocating thousands on food, roll-ups, t-shirts, and an open bar, allocate decent budgets for speakers first, especially for new voices in the industry — they are the ones who are likely to spend dozens or hundreds of hours preparing that one talk.

Jared Spool noted while reviewing this article:

“The speaking budget should come before the venue and the catering. After all, the attendees are paying to see the speakers. You can have a middling venue and mediocre catering, but if you have an excellent program, it’s a fabulous event. In contrast, you can have a great venue and fantastic food, but if the speakers are boring or off topic, the event will not be successful. Speaking budgets are an investment in the value of the program. Every penny invested is one that pays back in multiples. You certainly can’t say the same for venue or food.”

No fancy bells and whistles are required; speaker dinners or speaker gifts are a wonderful token of attention and appreciation, but they can’t be a replacement for covering expenses. It’s neither fair nor honest to push the costs over to speakers, and it’s simply not acceptable to expect them to cover these costs for exposure, especially if a conference charges attendees several hundred Euros (or Dollars) per ticket. By not covering expenses, you’re depriving the industry of hearing from those groups who can’t easily fund their own conference travel — people who care for children or other relatives, people with disabilities who can’t travel without their carer, or people from remote areas or low-income countries where a flight might cost a significant portion of several months’ income.

Jared continues:

“The formula is:

Break_Even = Fixed_Costs/(Ticket_Price – Variable_Costs)

Costs such as speakers and venue are the biggest factors in break-even numbers. Catering costs are mostly variable costs and should be calculated on a per-attendee basis, to then subtract them from the price. To calculate the speaker budget, determine what the ticket price and variable per-attendee costs are up front, then use the net margin from that to figure out how many speakers you can afford, by dividing net margin into the total speaker budget. That will tell you how many tickets you must sell to make a profit. (If you follow the same strategy for the venue, you’ll know your overall break-even and when you start making profit.) Consider paying a bonus to speakers who the audience rates as delivering the best value. Hence, you’re rewarding exactly what benefits the attendees.”

That’s a great framework to work within. Instead of leaving the speaker budget dependent on the ticket sales and variable costs, set the speaker budget first. What would be a fair honorarium for speakers? Well, there is no general rule of how to establish this. However, for smaller commercial events in Europe, it’s common to allocate the price of 3–5 tickets on each speaker. For a large conference with hundreds and thousands of attendees, three tickets should probably be a minimum, but it would also have to be distributed among simultaneous talks and hence depend on the number of tracks and how many attendees are expected per talk.
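
Plugged into Jared’s formula, a quick worked example looks like this (all figures below are hypothetical assumptions, not numbers from any real event):

    // Hypothetical numbers for a single-track commercial conference.
    const fixedCosts = 80_000;          // venue, speakers, AV, staff
    const ticketPrice = 500;
    const variableCostsPerSeat = 120;   // catering and other per-attendee costs

    // Break_Even = Fixed_Costs / (Ticket_Price - Variable_Costs)
    const breakEven = Math.ceil(fixedCosts / (ticketPrice - variableCostsPerSeat));
    console.log(breakEven); // 211 tickets to sell before the event turns a profit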

Dear organizers, options matter. Remember to label food (e.g. vegan, vegetarian, and so on). It’s the little details that matter most. (Image source: performance.now())

Provide an honorarium, even if it isn’t much. Also, ask speakers to collect all receipts so that you can cover them later, or provide a per diem (flat daily expenses coverage) to avoid the hassle with receipts. As a standard operating procedure, offer to buy the flight tickets for the speaker unless they’d rather do it on their own. Some speakers might not have the privilege of spending hundreds of dollars on a ticket and waiting months for reimbursement. Also, it’s a nice gesture to organize pre-paid transport from and to the airport, so that a driver with a sign will be waiting for the speaker at the arrival area. (There is nothing more frustrating than realizing that your cabbie accepts only local cash — especially after a frustrating flight delay, arriving late at night.)

Once all of these costs are covered, consider providing a mentor to help newcomers draft, refine, adjust, review and iterate the talk a few times, and set aside a separate time when they could visit the venue and run through their slides, just to get a feeling of what it’s going to be like on stage.

On the positive side, if you’ve ever wondered about a high speakers’ drop-out rate at your event, not covering expenses might be a good reason for it. If speakers are paying on their own, you shouldn’t expect them to treat the speaking engagement as a priority.

As Laurie Barth noted when reviewing this article:

“If you aren’t paid for your time, then you likely have less unpaid time to give to preparing your talk and/or have less incentive to prioritize the travel and time for the talk.”

The time, work, effort, and commitment of your speakers are what make the conference a success.

Organizer’s Checklist
  • Cover all speakers’ expenses by default, and outline what’s included from the very start (in invitation letters) and before someone invests their time in completing a CFP form;
  • Avoid hassle with receipts, and offer at least a flat per diem;
  • Suggest buying the flight tickets for the speaker rather than reimbursing later, and organize pre-paid transport pick-up if applicable;
  • Allocate budgets and honorarium for speakers, coaching and mentoring early on. Good content is expensive, and if your ticket prices can’t cover it, refine the conference format to make it viable;
  • Provide an option to donate an honorarium and expenses covered by companies towards a diversity/student scholarship;
  • As a principle, never accept voiding the honorarium. If the speaker can’t be paid or their expenses can’t be covered, dedicate the funds to the scholarship or a charity, and be public about it;
  • Be honest and sincere about your expectations, and explain which expenses you cover and which you don’t, up front in the CFP or in the speaking invitation.
Speakers, Ask Around Before Agreeing To Speak

Think twice before submitting a proposal to a conference that doesn’t cover at least your costs despite a high ticket price. It’s not acceptable to be asked to pay for your own travel and accommodation; if an event isn’t covering your expenses, then you are paying to speak at their event. It might seem not to matter much if your time and expenses are covered by your employer, but it puts freelancers and new speakers at a disadvantage. If your company is willing to pay for your speaking engagement, ask the organizers to donate the same amount to a charity of your choice, or to sponsor a diversity/student scholarship to enable newcomers to speak at the event.

Come up with a fair honorarium for your time given your interest and the opportunity, and if possible, make exceptions for nonprofits, community events, or whenever you see a unique value for yourself. Be very thorough and selective with conferences you speak at, and feel free to ask around about how other speakers have been treated in the past. Look at past editions of the event and ask speakers who attended or spoke there about their experience as well as about the reputation of the conference altogether.

If you are new to the industry, asking around could be quite uncomfortable, but it’s actually a common practice among speakers, so they should be receptive to the idea. I’m very confident that most speakers would be happy to help, and I know that our team — Rachel, Bruce, me and the entire Smashing Crew — would love to help, anytime.

Before committing to speak at a conference, ask questions. Ethan Marcotte has prepared a useful little template with questions about compensation and general treatment of speakers (thanks to Jared for the tip!). Ask about the capacity and expected attendance of the conference, and what the regular price of the ticket is. Ask what audience is expected, and what profile they have. Ask about conference accessibility, i.e. whether there will be talk captioning/transcripts available to the audience, or even sign language interpreters. Ask if there is a commitment to a diverse line-up of speakers. Ask if other speakers get paid, and if yes, how much. Ask if traveling and accommodation are covered for all speakers, by default. Ask if there is a way to increase honorarium by running a workshop, a review session or any other activities. Since you are dedicating your time, talents, and expertise to the event, think of it as your project, and value the time and effort you will spend preparing. Decide what’s acceptable to you and make exceptions when they matter.

Dear speakers, feel free to ask how other speakers have been treated in the past. It’s your right; don’t feel uncomfortable asking about what is important to you and what you want to know beforehand. (Image source: ColdFront)

As you expect fair treatment from organizers, treat organizers the same way. Respect organizers’ time and efforts. They are covering your expenses, but it doesn’t mean that it’s acceptable to spend a significant amount without asking for permission first. Obviously, unexpected costs might come up, and personal issues might appear, and most organizers will fully understand that. But don’t use the opportunity as a carte blanche for upscale cocktails or fancy meals — you probably won’t be invited again. Also, if you can’t come to speak due to unforeseen circumstances, suggest a speaker who could replace your session, and inform the organizer as soon as you are able to.

Speaker’s Checklist
  • Think twice before applying to a large commercial event that doesn’t cover your expenses;
  • If your company is covering expenses, consider asking organizers to donate the same amount to a charity of your choice, or sponsor a diversity/student scholarship;
  • Be very thorough and selective with conferences you speak at, and ask how other speakers have been treated in the past;
  • Prepare a little template of questions to ask an organizer before confirming a speaking engagement;
  • Support nonprofits and local events if you can dedicate your time to speak for free;
  • Choose a fair honorarium for a talk, and decide on case-by-case basis;
  • Ask whether videos will be publicly available;
  • Ask about conference accessibility, i.e. whether there will be talk captioning/transcripts, or sign language interpreters;
  • Treat organizers with respect when you have to cancel your engagement or modify your arrangements.
Our Industry Deserves Better

As an attendee, you always have a choice. Of course, you want to learn and get better, and you want to connect with wonderful like-minded people like yourself. However, be selective when choosing the conference to attend next. More often than not, all the incredible catering and free alcohol all night long might be carried on the shoulders of speakers who speak for free and pay their expenses out of their own pockets. Naturally, conferences that respect speakers’ time and professional skills compensate them and cover their expenses.

So support conferences that support and enable tech speakers. There are plenty of them out there — it just requires a bit more effort to explore and decide which event to attend next. Web conferences can be great, wonderful, inspirational, and friendly — regardless of whether they are large commercial conferences or small community-driven events — but first and foremost they have to be fair and respectful, and cover the very basics. Treating speakers well is one of these basics.

I’d like to kindly thank Rachel Andrew, Bruce Lawson, Jesse Hernandez, Amanda Annandale, Mariona Ciller, Sebastian Golasch, Jared Spool, Peter-Paul Koch, Artem Denysov, Markus Gebka, Stephen Hay, Matthias Meier, Samuel Snopko, Val Head, Rian Kinney, Jenny Shen, Luc Poupard, Toni Iordanova, Lea Verou, Niels Leenheer, Cristiano Rastelli, Sara Soueidan, Heydon Pickering, David Bates, Mariona C. Miller, Vadim Gorbachev, David Pich, Patima Tantiprasut, Laurie Barth, Nathan Curtis, Ujjwal Sharma, Benjamin Hong, Matthias Ott, Scott Gould, Charis Rooda, Zach Leatherman, Marcy Sutton, Bertrand Lirette, Roman Kuba, Eva Ferreira, Joe Leech, Yoav Weiss, Markus Seyfferth and Bastian Widmer for reviewing the article.

(ra, il)
Categories: Around The Web

Monthly Web Development Update 12/2018: WebP, The State Of UX, And A Low-Stress Experiment

Smashing Magazine - Fri, 12/14/2018 - 6:20am
Monthly Web Development Update 12/2018: WebP, The State Of UX, And A Low-Stress Experiment Anselm Hannemann 2018-12-14T12:20:03+01:00

It’s the last edition of this year, and I’m pretty stoked about what 2018 brought for us, what happened, and how the web evolved. Let’s recap that and remind ourselves of what each of us learned this year: What was the most useful feature, API, or library we used? And how have we personally changed?

For this month’s update, I’ve collected yet another bunch of articles for you. If that’s not enough reading material for you yet, you can always find more in the archive or the Evergreen list which contains the most important articles since the beginning of the Web Development Reading List. I hope your days until the end of the year won’t be too stressful and wish you all the best. See you next year!

News
  • Microsoft just announced that they’ll change their Edge strategy: They’re going to use Chromium as the new browser engine for Desktop instead of EdgeHTML and might even provide Microsoft Edge for macOS. They’ll also help with development on the Blink engine from now on.
  • Chrome 71 is out and brings relative time support via the Internationalization API (see the example after this list). Also new is that speech synthesis now requires user activation.
  • Safari Technology Preview 71 is out, bringing supported-color-schemes in CSS and adding Web Authentication as an experimental feature.
  • Firefox will soon offer users a browser setting to block all permission requests automatically. This will affect autoplaying videos, web notifications, geolocation requests, and camera and microphone access requests. The need to auto-block requests shows how badly developers misuse these techniques. Sad news for those who rely on such requests for their services — WebRTC calling services, for example.
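As a quick illustration of the relative time support mentioned above, here is a minimal sketch of the Intl.RelativeTimeFormat API that ships with Chrome 71 (exact output strings may vary with locale data):

const rtf = new Intl.RelativeTimeFormat('en');

// Negative values refer to the past, positive ones to the future.
rtf.format(-1, 'day');  // "1 day ago"
rtf.format(2, 'hour');  // "in 2 hours"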
General
  • We’ve finally come up with ways to access and use websites offline with amazing technology. But one thing we forgot about is that for the past thirty years we taught people that the web is online, so most people don’t know that offline usage even exists. A lesson in user experience design and the importance of reminding us of the history of the medium we’re building for.
UI/UX
  • Matthew Ström wrote about the importance of fixing things later and not trying to be perfect.
  • A somewhat satirical resource about the state of UX in 2019.
  • Erica Hall shows us examples of why most of ‘UX design’ is a myth, and why it isn’t design alone that makes a great product but also the right product strategy and business model. The best reason to read this is when Erica writes: “Virgin America. Rdio. Google Reader. Comcast. Which of these offered a good experience? Which of these still exists?” A truth you can’t ignore. Luckily, this is not a pessimistic but a very thought-provoking article with great tips on how we can use that knowledge to improve our products: with strategy, with design, with a business model that fits.
After curating and sharing 2,239 links with 264,016 designers around the world, the folks at UX Collective have isolated a few trends in what the UX industry is writing, talking, and thinking about. (Image credit; illustration by Camilla Rosa)
Tooling
HTML & SVG
Accessibility
CSS
JavaScript
  • Google is about to bring us yet another API: the Badging API allows Web Desktop Apps to indicate new notifications or similar. The spec is still in discussion, and they’d be happy to hear your thoughts about it.
  • Hidde de Vries explains how we can use modern JavaScript APIs to scroll an element into the center of the viewport (see the snippet after this list).
  • Available behind flags in Chrome 71, the new Background Fetch makes it possible to fetch resources that take a while to load — movies, for example — in the background.
  • Pete LePage explains how we can use the Web Share Target API to register a service as Share Target.
  • Is it still a good idea to use JavaScript for loading web fonts? Zach Leatherman shares why we should decide case by case and why it’s often best to use modern CSS and font-display: swap;.
  • Doka is a new standalone JavaScript image editor worth keeping in mind. While it’s not a free product, it features very handy methods for editing with a pleasant user experience, and by paying an annual fee, you ensure that you get bugfixes and support.
  • “The Power of Web Components” shares the basic concepts, how to start using them, and why using your own HTML elements instead of gluing together HTML, the related CSS classes, and a JavaScript trigger can simplify things so much.
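As a small illustration of the scrolling technique Hidde describes, here is a minimal sketch (the element selector is a hypothetical placeholder):

// Smoothly scroll the element into the vertical center of the viewport.
document.querySelector('#target').scrollIntoView({
  behavior: 'smooth',
  block: 'center'
});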
Security
Privacy
  • Do you have a husband or wife? Kids? Other relatives? Then this essential guide to protecting your family’s data is something you should read and turn into action. The internet is no safe place, and you want to ensure your relatives understand what they’re doing — and it’s you who can protect them by teaching them or setting up better default settings.
Web Performance
WebP offers both performance and features. Ire Aderinokun shares why and how to use it. (Image credit)
Work & Life
  • Shana Lynch tells us what makes someone an ethical business leader, which values are important, how to stand upright when things get tough, and how to prepare for uncomfortable situations upfront.
  • Ozoemena Nonso tries to explain why we often aren’t happy. The thief of our happiness is not comparing ourselves with others; it’s that we struggle to get the model of comparison right. An incredibly good piece of life advice if you compare yourself with others often and feel that your happiness suffers from it.
  • A rather uncommon piece of advice: Why forcing others to leave their comfort zone might be a bad idea.
  • Sandor Dargo on how he managed to avoid distractions during work time and do his job properly again.
  • Paul Robert Lloyd writes about Cennydd Bowles’ book “Future Ethics” and while explaining what it is about, he also points out the challenges of ethics with a simple example.
  • Jeffrey Silverstein is a teacher and struggled a lot with finding time for side projects while working full-time. Now he found a solution which he shares with us in this great article about “How to balance full-time work with creative projects.” An inspiring read that I can totally relate to.
  • Ben Werdmüller shares his thoughts on why lifestyle businesses are massively underrated. But what’s a lifestyle business? He defines them as non-venture-funded businesses that allow their owners to maintain a certain level of income but not more. As a fun side note, this article shows how crazy rental prices have become on the U.S. West Coast.
  • Jake Knapp shares how he survived six years with a distraction-free smartphone — no emails, no notifications. And he has some great tips for us and an exercise to try out. I recently moved all my apps into one folder on the second screen to ensure I need to search for the app which usually means I really want to open it and don’t just do it to distract myself.
  • Ryan Avent wrote about why we work so hard. This essay is well-researched and explains why we see work as crucial, why we fall in love with it, and why our lifestyle and society push us to work harder all the time.
Jake Knapp spent six years with a distraction-free phone: no email, no social media, no browser. Now he shares what he learned from it and how you can try your own low-stress experiment. (Image credit)
Going Beyond…
(cm)
Categories: Around The Web

How To Convert An Infographic Into A Gifographic Using Adobe Photoshop

Smashing Magazine - Thu, 12/13/2018 - 8:00am
How To Convert An Infographic Into A Gifographic Using Adobe Photoshop Manish Dudharejia 2018-12-13T14:00:11+01:00

Visuals have played a critical role in the marketing and advertising industry since their inception. For years, marketers have relied on images, videos, and infographics to better sell products and services. The importance of visual media has increased further with the rise of the Internet and consequently, of social media.

Lately, gifographics (animated infographics) have also joined the list of popular visual media formats. If you are a marketer, a designer, or even a consumer, you must have come across them. What you may not know, however, is how to make gifographics, and why you should try to add them to your marketing mix. This practical tutorial should give you answers to both questions.

In this tutorial, we’ll be taking a closer look at how a static infographic can be animated using Adobe Photoshop, so some Photoshop knowledge (at least the basics) is required.

What Is A Gifographic? Some History

The word gifographic is a combination of two words: GIF and infographic. The term gifographic was popularized by marketing experts (and among them, by Neil Patel) around 2014. Let’s dive a little bit into history.

CompuServe introduced the GIF (Graphics Interchange Format) on June 15, 1987, and the format became a hit almost instantly. Initially, the use of the format remained somewhat restricted owing to patent disputes in the early years (related to LZW, the compression algorithm used in GIF files), but later, when most GIF patents expired, the format’s wide support and portability led to a surge in popularity; “GIF” even became Word of the Year in 2012. Even today, GIFs are still very popular on the web and on social media (*).

The GIF is a bitmap image format. It supports up to 8 bits per pixel, so a single GIF can use a limited palette of up to 256 different colors (including — optionally — one transparent color). Lempel–Ziv–Welch (LZW) is a lossless data compression technique used to compress GIF images, which in turn reduces the file size without affecting their visual quality. What’s more interesting, though, is that the format also supports animations and allows a separate palette of up to 256 colors for each animation frame.

Tracing back in history as to when the first infographic was created is much more difficult, but the definition is easy — the word “infographic” comes from “information” and “graphics,” and, as the name implies, an infographic serves the main purpose of presenting information (data, knowledge, etc.) quickly and clearly, in a graphical way.

In his 1983 book The Visual Display of Quantitative Information, Edward Tufte gives a very detailed definition for “graphical displays” which many consider today to be one of the first definitions of what infographics are, and what they do: to condense large amounts of information into a form where it will be more easily absorbed by the reader.

A Note On GIFs Posted On The Web (*)

Animated GIF images posted to Twitter, Imgur, and other services most often end up as H.264 encoded video files (HTML5 video), and are technically not GIFs anymore when viewed online. The reason behind this is pretty obvious — animated GIFs are perhaps the worst possible format to store video, even for very short clips, as unlike actual video files, GIF cannot use any of the modern video compression techniques. (Also, you can check this article: “Improve Animated GIF Performance With HTML5 Video” which explains how with HTML5 video you can reduce the size of GIF content by up to 98% while still retaining the unique qualities of the GIF format.)
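For illustration, this is roughly what the common GIF-replacement pattern looks like in HTML5 video (a minimal sketch; the file name is a placeholder):

<video autoplay loop muted playsinline>
  <!-- muted and playsinline are required for autoplay on most mobile browsers. -->
  <source src="animation.mp4" type="video/mp4">
</video>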

On the other hand, it’s worth noting that gifographics most often remain in their original format (as animated GIF files), and are not encoded to video. While this leads to not-so-optimal file sizes (as an example, a single animated GIF on the popular “How engines work?” infographic page is between ~500 KB and 5 MB in size), on the plus side, gifographics remain very easy to share and embed, which is their primary purpose.

Why Use Animated Infographics In Your Digital Marketing Mix?

Infographics are visually compelling media. A well-designed infographic not only can help you present a complex subject in a simple and enticing way, but it can also be a very effective means of increasing your brand awareness as part of your digital marketing campaign.

Remember the popular saying, “A picture is worth a thousand words”? There is a lot of evidence that animated pictures can be even more effective, and so motion infographics have recently been growing in popularity owing to the element of animation.

From Boring To Beautiful

Animated infographics can breathe life into sheets of boring facts and mundane numbers with the help of animated charts and graphics. Motion infographics are also the right means to illustrate complex processes or systems with moving parts, making them more palatable and meaningful. Thus, you can easily turn boring topics into visually engaging treats. For example, we created the gifographic "The Most Important Google Search Algorithm Updates Of 2015", elaborating on the changes Google made to its search algorithm in 2015.

Cost-Effective

Gifographics are perhaps the most cost-effective alternative to video content. You don't need expensive cameras, video editing, sound mixing software, and a shooting crew to create animated infographics. All it takes is a designer who knows how to make animations by using Photoshop or similar graphic design tools.

Works For Just About Anything

You can use a gifographic to illustrate just about anything in bite-sized sequential chunks. From product explainer videos to numbers and stats, you can share anything through a GIF infographic. Animated infographics can also be interactive. For example, you can adjust a variable to see how it affects the data in an animated chart.

Note: An excellent example of an interactive infographic is “Building An Interactive Infographic With Vue.js” written by Krutie Patel. It was built with the help of Vue.js, SVG and GSAP (GreenSock Animation Platform).

SEO Boost

As a marketer, you are probably aware that infographics can provide a substantial boost to your SEO. People love visual media. As a result, they are more likely to share a gifographic if they liked it. The more your animated infographics are shared, the higher will be the boost in site traffic. Thus, gifographics can indirectly help improve your SEO and, therefore, your search engine rankings.

How To Create A Gifographic From An Infographic In Photoshop

Now that you know the importance of motion in infographics, let’s get practical and see how you can create your first gifographic in Photoshop. And if you already know how to make infographics in Photoshop, it will be even easier for you to convert your existing static infographic into an animated one.

Step 1: Select (Or Prepare) An Infographic

The first thing you need to do is choose the static infographic that you would like to transform into a gifographic. For learning purposes, you can animate any infographic, but I recommend picking an image with elements that are suitable for animation. Explainers, tutorials, and process overviews are easy to convert into motion infographics.

If you are going to start from scratch, make sure you have first finished the static infographic to the last detail before proceeding to the animation stage as this will save you a lot of time and resources — if the original infographic keeps changing you will also need to rework your gifographic.

Once you have finalized the infographic, the next step is to decide which parts you are going to animate.

Infographic finalization (Large preview)
Step 2: Decide What The Animation Story Will Be

You can include some — or all — parts of the infographic in the animation. However, as there are different ways to create animations, you must first decide on the elements you intend to animate, and how. In my opinion, sketching (outlining) various animation case scenarios on paper is the best way to pick your storyline. It will save you a lot of time and confusion down the road.

Start by deciding which “frames” you would like to include in the animation. At this stage, frames will be nothing else but rough sketches made on sheets of paper. The higher the number of frames, the better the quality of your gifographic will be.

You may need to divide the animated infographic into different sections. If so, be sure to choose an equal number of frames for all parts; if not, the gifographic will look uneven, with each GIF file moving at a different speed.

Deciding and picking your animation story (Large preview)
Step 3: Create The Frames In Photoshop

Open Adobe Photoshop to create the different frames for each section of the gifographic. You will need to cut, rotate, and move the images painstakingly, and keep track of the exact change you made to the last frame; Photoshop’s rulers can help you with that.

You will need to build your animation from Layers in Photoshop. But, in this case, you will be copying all Photoshop layers together and editing each layer individually.

You can check the frames one by one by hiding/showing different layers. Once you have finished creating all the frames, check them for possible errors.

Create Frames in Photoshop. (Large preview)

You can also create a short frame animation using just the first and the last frame. Select both frames by holding the Ctrl/Cmd key (Windows/Mac), then click on “Tween” and select the number of frames you want to add in between. Select the “First Frame” option if you want to add the new frames between the first and the last frames; selecting the “Previous Frame” option will add frames between your current selection and the one before it. Check the “All Layers” option to add all the layers from your selections.

Short frame animation (Large preview)
Step 4: Save PNG (Or JPG) Files Into A New Folder

The next step is to export each animation frame individually into PNG or JPG format. (Note: JPG is a lossy format, so PNG will usually be the better choice.)

You should save these PNG files in a separate folder for the sake of convenience. I always number the saved images as per their sequence in the animation. It’s easy for me to remember that “Image-1” will be the first image in the sequence followed by “Image-2,” “Image-3,” and so on. Of course, you can save them in a way suitable for you.

Saving JPG files in a new folder (Large preview)
Step 5: “Load Files Into Stack”

Next comes loading the saved PNG files into Photoshop.

Go to the Photoshop window and open File > Scripts > Load files into Stack...

A new dialog box will open. Click on the “Browse” button and open the folder where you saved the PNG files. You can select all files at once and click “OK.”

Note: You can check the “Attempt to Automatically Align Source Images” option to avoid alignment issues. However, if your source images are all the same size, this step is not needed. Furthermore, automatic alignment can also cause issues in some cases as Photoshop will move the layers around in an attempt to try to align them. So, use this option based on the specific situation — there is no “one size fits them all” recipe.

It may take a while to load the files, depending on their size and number. While Photoshop is busy loading these files, maybe you can grab a cup of coffee!

Load Files into Stack. (Large preview)
Step 6: Set The Frames

Once the loading is complete, go to Window > Layers (or you can press F7) and you will see all the layers in the Layers panel. The number of Layers should match the number of frames loaded into Photoshop.

Once you have verified this, go to Window > Timeline. You will see the Timeline Panel at the bottom (the default display option for this panel). Choose the “Create Frame Animation” option from the panel. Your first PNG file will appear on the Timeline.

Now select “Make Frames from Layers” from the right-side menu (Palette Option) of the Animation Panel.

Note: Sometimes the PNG files get loaded in reverse order, making your “Image-1” appear at the end of the sequence. If this happens, select “Reverse Layers” from the Animation Panel menu (Palette Option) to get the desired image sequence.
Set the Frames. (Large preview)
Step 7: Set The Animation Speed

The default display time for each image is 0.00 seconds. Adjusting this time determines the speed of your animation (or GIF file). If you select all the images, you can set the same display time for all of them. Alternatively, you can set a different display time for each image or frame.

I recommend going with the former option though as using the same animation time is relatively easy. Also, setting up different display times for each frame may lead to a not-so-smooth animation.

You can also set a custom display time if you don’t want to choose from the available presets. Click the “Other” option to set a customized animation speed.

You can also make the animation play in reverse. Copy the frames from the Timeline Palette and choose the “Reverse Layers” option. You can drag frames with the Ctrl key (on Windows) or the Cmd key (on Mac).

You can set the number of times the animation should loop. The default option is “Once.” However, you can set a custom loop value using the “Other” option. Use the “Forever” option to keep your animation going in a non-stop loop.

To preview your GIF animation, press the Enter key or the “Play” button at the bottom of the Timeline Panel.

Set the Animation Speed. (Large preview)
Step 8: Ready To Save/Export

If everything goes according to plan, the only thing left is to save (export) your GIF infographic.

To Export the animation as a GIF: Go to File > Export > Save for Web (Legacy)

  1. Select “GIF 128 Dithered” from the “Preset” menu.
  2. Select “256” from the “Colors” menu.
  3. If you will be using the GIF online or want to limit the file size of the animation, change Width and Height fields in the “Image Size” options accordingly.
  4. Select “Forever” from the “Looping Options” menu.

Click the “Preview” button in the lower left corner of the Export window to preview your GIF in a web browser. If you are happy with it, click “Save” and select a destination for your animated GIF file.

Note: There are lots of options that control the quality and file size of GIFs — number of colors, amount of dithering, etc. Feel free to experiment until you achieve the optimal GIF size and animation quality.

Your animated infographic is ready!

Saving your GIF infographic (Large preview)
Step 9 (Optional): Optimization

Gifsicle (a free command-line program for creating, editing, and optimizing animated GIFs) and other similar GIF post-processing tools can help reduce the exported GIF file size beyond Photoshop’s abilities.

ImageOptim is also worth mentioning — dragging files to ImageOptim will directly run Gifsicle on them. (Note: ImageOptim is Mac-only but there are quite a few alternative apps available as well.)
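To give a rough idea of what such post-processing looks like, here is a hedged example of a Gifsicle invocation (the file names are placeholders; check gifsicle --help for the options your version supports):

gifsicle -O3 --colors 128 input.gif -o output.gif

The -O3 flag applies Gifsicle’s heaviest optimization level, and --colors 128 reduces the palette, which often shrinks the file size considerably.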

Troubleshooting Tips

You are likely to run into trouble at two crucial stages.

Adding New Layers

Open the “Timeline Toolbar” drop-down menu and select the “New Layers Visible in all Frames” option. It will help tune your animation without any hiccups.

Adding new layers (Large preview)
Layers Positioning

Sometimes, you may end up putting layers in the wrong frames. To fix this, select the same layer in a fresh frame and choose the “Match Layer Across Frames” option.

Positioning layers (Large preview)
Gifographic Examples

Before wrapping this up, I would like to share a few good examples of gifographics. Hopefully, they will inspire you just as they did me.

  1. Google’s Biggest Search Algorithm Updates Of 2016
    This one is my personal favorite. Incorporating Google algorithm updates in a gifographic is difficult owing to its complexity. But, with the use of the right animations and some to-the-point text, you can turn a seemingly complicated subject into an engaging piece of content.
  2. Virtual Reality: A Fresh Perspective For Marketers
    This one turns a seemingly descriptive topic into a smashing gifographic. The gifographic breaks up the Virtual Reality topic into easy-to-understand numbers, graphs, and short paragraphs with perfect use of animation.
  3. How Google Works
    I enjoy reading blog posts by Neil Patel. Just like his posts, this gifographic is also comprehensive. The only difference is that Neil conveys the essential message through accurately placed GIFs instead of short paragraphs, using only the colors that make up Google’s logo.
  4. The Author Rank Building Machine
    This one lists different tips to help you become an authoritative writer. The animation is simple, with a moving backdrop of a content-creation factory. Everything else is broken down into static graphs, images, and short text paragraphs. But the simple design works, resulting in a lucid gifographic.
  5. How Car Engines Work
    Beautifully illustrated examples of how car engines work (petrol internal combustion engines and hybrid gas/electric engines). By the way, it’s worth noting that Wikipedia also uses animated GIFs in some of its articles for very similar purposes.
Wrapping Things Up

As you can see, turning your static infographic into an animated one is not very complicated. Armed with Adobe Photoshop and some creative ideas, you can create engaging and entertaining animations, even from scratch.

Of course, your gifographic can have multiple animated parts and you’ll need to work on them individually, which, in turn, will require more planning ahead and more time. (Again, a good example of a rather complex gifographic would be the one shown in “How Car Engines Work?” where different parts of the engine are explained in a series of connected animated images.) But if you plan well, sketch, create, and test, you will succeed and you will be able to make your own cool gifographics.

If you have any questions, ask me in the comments and I’ll be happy to help.

Further Resources
(mb, ra, yk, il)
Categories: Around The Web

The Importance Of Macro And Micro-Moment Design

Smashing Magazine - Thu, 12/13/2018 - 6:30am
The Importance Of Macro And Micro-Moment Design Susan Weinschenk 2018-12-13T12:30:10+01:00

(This article is kindly sponsored by Adobe.) When you design the information architecture, the navigation bars of an application, or the overall layout and visual design of a product, you are focusing on macro design. When you design one part of a page, one form, or one single task and interaction, you are focusing on micro-moment design.

In my experience, designers often spend a lot of time on macro design issues, and sometimes less so on critical micro-moment design issues. That might be a mistake.

Here’s an example of how critical micro-moment design can be.

I read a lot of books. We are talking over a hundred books a year. I don’t even know for sure how many books I read, and because I read so many books, I am a committed library patron. Mainly for reading fiction for fun (and even sometimes for reading non-fiction), I rely on my library to keep my Kindle full of interesting things to read.

Luckily for me, the library system in my county and in my state is pretty good in terms of having books available for my Kindle. Unluckily, this statewide library website and app need serious UX improvements.

I was thrilled when my library announced that instead of using a (poorly designed) website (that did not have a mobile responsive design), the library was rolling out a brand new mobile app, designed specifically to optimize the experience on a mobile phone. “Yay!” I thought. “This will be great!”

Perhaps I spoke too soon.

Let me walk you through the experience of signing into the app. First, I downloaded the app and then went to log in:

(Large preview)

I didn’t have my library card with me (I was traveling), and I wasn’t sure what “Sign in with OverDrive” was about, but I figured I could select my library from the list, so I pressed on the down arrow.

(Large preview)

“Great,” I thought. Now I can just scroll to get to my library. I know that my library is in Marathon County here in Wisconsin. In fact, I know from using the website that they call my library “Marathon County, Edgar Branch” or something similar, since I live in a village called Edgar. So I figured that would be what I should look for, especially since I could see that the list went from B (Brown County) to F (Fond du Lac Public Library) with no E for Edgar showing. So I proceeded to scroll.

I scrolled for a while, looking for M (in hope of finding Marathon).

(Large preview)

Hmmm. I see Lone Rock, and then the next one on the list is McCoy. I know that I am in Marathon County, and that in fact, there are several Marathon County libraries. Yet, we seem to have skipped Marathon in the list.

I keep scrolling.

(Large preview)

Uh oh. We got to the end of the list (to the W’s), but now we seem to be starting with A again. Well, then, perhaps Marathon will now appear if I keep scrolling.

Do you know how many libraries there are in Wisconsin and on this list? I know, because as I started to document this user experience, I decided to count the number of entries on the list (only a crazy UX professional would take the time to do this, I think).

There are 458 libraries on this list, and the list kept getting to the end of the alphabet and then for some reason starting over. I never did figure out why.

Finally, though, I did get to Marathon!

(Large preview)

And then I discovered I was really in trouble since several libraries start with “Marathon County Public Library”. Since the app only shows the first 27 or so characters, I don’t know which one is mine.

You know what I did at this point?

I decided to give up. And right after I decided that, I got this screen (as “icing on the cake” so to speak):

(Large preview)

Did you catch the “ID” that I’m supposed to reference if I contact support? Seriously?

This is a classic case of micro-moment design problems.

I can guess that by now some of you are thinking, “Well, that wouldn’t happen to (me, my team, an experienced UX person).” And you might be right. Especially this particular type of micro-moment design fail.

However, I can tell you that I see micro-moment design failures in all kinds of apps, software, digital products, websites, and from all kinds of companies and teams. I’ve seen micro-moment design failures from organizations with and without experienced UX teams, tech-savvy organizations, customer-centric organizations, large established companies and teams, and new start-ups.

Let’s pause for a moment and contrast micro-moment design with macro design.

Let’s say that you are hired to evaluate the user experience of a product. You gather data about the app, the users, the context, and then you start walking through the app. You notice a lot of issues that you want to raise with the team — some large, some small:

  • There are some inconsistencies from page-to-page/screen-to-screen in the app. You would like to see whether they have laid out pages on a grid and if that can be improved;
  • You have questions about whether the color scheme meets branding guidelines;
  • You suspect there are some information architecture issues. The organization of items in menus and the use of icons seems not quite intuitive;
  • One of the forms that users are supposed to fill out and submit is confusing, and you think people may not be able to complete the form and submit the information because it isn’t clear what the user is supposed to enter.

There are many ways to categorize user experience design factors, issues, and/or problems. Ask any UX professional and you will probably get a similar, but slightly different list. For example, UX people might think about the conceptual model, visual design, information architecture, navigation, content, typography, context of use, and more. Sometimes, though, it might be useful to think about UX factors, issues, and design in terms of just two main categories: macro design and micro-moment design.

In the example above, most of the factors on the list were macro design issues: inconsistencies in layout, color schemes, and information architecture. Some people talk about macro design issues as “high-level design” or “conceptual model design”. These are UX design elements that cross different screens and pages. These are UX design elements that give hints and cues about what the user can do with the app, and where to go next.

Macro design is critical if you want to design a product that people want to use. If the product doesn’t match the user’s mental model, if the product is not “intuitive” — these are often (not always, but often) macro design issues.

Which means, of course, that macro design is very important.

It’s not just micro-moment design problems that cause trouble. Macro design issues can result in massive UX problems, too. But macro design issues are more easily spotted by an experienced UX professional because they can be more obvious, and macro design usually gets time devoted to it relatively early in the design process.

If you want to make sure you don’t have macro design problems then do the following:

  • Do the UX research upfront that you need to do in order to have a good idea of the users’ mental models. What does the user expect to do with this product? What do they expect things to be called? Where do they expect to find information?
  • For each task that the user is going to do, make sure you have picked one or two “objects” and made them obvious. For instance, when the user opens an app for looking for apartments to rent the objects should be apartments, and the views of the objects should be what they expect: List, detail, photo, and map. If the user opens an app for paying an insurance bill, then the objects should be policy, bill, clinic visit, while the views should be a list, detail, history, and so on.
  • The reason you do all the UX-research-related things UXers do (such as personas, scenarios, task analyses, and so on) is so that you can design an effective, intuitive macro design experience.

It’s been my experience, however, that teams can get caught up in designing, evaluating, or fixing macro design problems, and not spend enough time on micro-moment design.

In the example earlier, the last issue is a micro-moment design issue:

  • One of the forms that users are supposed to fill out and submit is confusing, and you think people may not be able to complete the form and submit the information because it isn’t clear what the user is supposed to enter.

And the library example at the start of the article is also an example of micro-moment design gone awry.

Micro-moment design refers to problems with one very specific page/form/task that someone is trying to accomplish. It’s that “make-or-break” moment that decides not just whether someone wants to use the app, but whether they can even use the app at all, or whether they give up and abandon, or end up committing errors that are difficult to correct. Not being able to choose my library is a micro-moment design flaw. It means I can’t continue. I can’t use the app anymore. It’s a make-or-break moment for the app.

When we are designing a new product, we often focus on the macro design. We focus on the overall layout, information architecture, conceptual model, navigation model, and so on. That’s because we haven’t yet designed any micro-moments.

The danger is that we will forget to pay close attention to micro-moment design.

So, let’s go back to our library example, and your possible disbelief that such a micro-moment design fail could happen on your watch. It can. Micro-moment design failures can happen for many reasons.

Here are a few common ones I’ve seen:

  • A technical change (for example, how many characters can be displayed in a field) is made after a prototype has been reviewed and tested. So the prototype worked well and did not have a UX problem, but the technical change occurred later, thereby causing a UX problem without anyone noticing.
  • Patterns and standards that worked well in one form or app are re-used in a different context/form/app, and something about the particular field or form in the new context means there is a UX issue.
  • Features are added later by a different person or team who does not realize the impact that a particular feature, field, or form has on another micro-moment earlier or later in the process.
  • User testing is not done, or it’s done on only a small part of the app, or it’s done early and not re-done later when changes are made.

If you want to make sure you don’t have micro-moment design problems then do the following:

  • Decide what the critical make-or-break moments in the interface are.
  • At each of these moments, decide what exactly it is that the user wants to do.
  • At each of these moments, decide what exactly it is that the product owner wants users to do.
  • Figure out exactly what you can do with design to make sure both of the above can be satisfied.
  • Make that something the highest priority of the interface.
Takeaways

Both macro and micro-moment design are critical to the user experience success of a product. Make sure you have a process for designing both, and that you are giving equal time and resources to both.

Identify the critical make-or-break micro-moments as soon as they are designed, and do user testing on those as early as you can. Re-test when changes are made.

Try talking about micro-moment design and macro design with your team. You may find that this categorization of design issues makes sense to them, perhaps more than whichever categorization scheme you’ve been using.

This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

(cm, ms, il)
Categories: Around The Web

Protecting Your Site With Feature Policy

Smashing Magazine - Wed, 12/12/2018 - 4:30am
Protecting Your Site With Feature Policy Rachel Andrew 2018-12-12T11:30:30+02:00

One of the web platform features highlighted at the recent Chrome Dev Summit was Feature Policy, which aims to “allow site authors to selectively enable and disable use of various browser features and APIs.” In this article, I’ll take a look at what that means for web developers, with some practical examples.

In his introductory article on the Google Developers site, Eric Bidelman describes Feature Policy as the following:

“The feature policies themselves are little opt-in agreements between developer and browser that can help foster our goals of building (and maintaining) high-quality web apps.”

The specification has been developed at Google as part of the Web Platform Incubator Group activity. The aim of Feature Policy is for us, as web developers, to be able to state our usage of a web platform feature explicitly to the browser. By doing so, we make an agreement about our use, or non-use, of this particular feature. Based on this, the browser can act to block certain features, or report back to us that a feature it did not expect to see is being used.

Examples might include:

  1. I am embedding an iframe and I do not want the embedded site to be able to access the camera of my visitor;
  2. I want to catch situations where unoptimized images are deployed to my site via the CMS;
  3. There are many developers working on my project, and I would like to know if they use outdated APIs such as document.write.

All of these things can be tracked, blocked or reported on as part of Feature Policy.

How To Use Feature Policy

In order to use Feature Policy, the browser needs to know two things: which feature you are creating a policy for, and how you want that feature to be handled.

Feature-Policy: <directive> <allowlist>

The <directive> is the name of the feature that you are setting the policy on.

The current list of features (sourced from the presentation given at Chrome Dev Summit) are as follows:

  • accelerometer
  • ambient-light-sensor
  • autoplay
  • camera
  • document-write
  • encrypted-media
  • fullscreen
  • geolocation
  • gyroscope
  • layout-animations
  • lazyload
  • legacy-image-formats
  • magnetometer
  • midi
  • oversized-images
  • payment
  • picture-in-picture
  • speaker
  • sync-script
  • sync-xhr
  • unoptimized-images
  • unsized-media
  • usb
  • vertical-scroll
  • vr

The <allowlist> details how the feature can be used — if at all — and takes one or more of the following values.

  • *
    The most liberal policy, stating that the feature will be allowed in this document, and any iframes whether from this domain or elsewhere. May only be used as a single value as it makes no sense to enable everything and also pass in a list of domains, for example.
  • self
    The feature will be available in the document and in any iframes; however, the iframes must have the same origin.
  • src
    Only applicable when using an iframe allow attribute. This allows a feature as long as the document loaded into it comes from the same origin as the URL in the iframe’s src attribute.
  • none
    Disables the feature for the document and any nested iframes. May only be used as a single value.
  • <origin(s)>
    The feature is allowed for specific origins; this means that you can specify a list of domains where the feature is allowed. The list of domains is space separated.
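Putting the directive and allowlist together, a complete header might look like this (example.com stands in for a real origin you trust):

Feature-Policy: geolocation 'self' https://example.com

This would allow geolocation in the document itself, in same-origin iframes, and in iframes loaded from example.com, while blocking it everywhere else.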

There are two methods by which you can enable feature policies on your site: You can send an HTTP Header, or use the allow attribute on an iframe.

HTTP Header

Sending an HTTP Header means that you can enable a feature policy for the page or entire site setting that header, and also anything embedded in the site. Headers can be set for your entire site at the web server or can be sent from your application.

For example, if I wanted to prevent the use of the geolocation API and I was using the NGINX web server, I could edit the configuration files for my site in NGINX to add the following header, which would prevent any document in my site, and any iframe embedded in it, from using the geolocation API.

add_header Feature-Policy "geolocation none;";

Multiple policies can be set in a single header. To prevent geolocation and vibrate but allow unsized-media from the domain example.com, I could set the following:

add_header Feature-Policy "vibrate 'none'; geolocation 'none'; unsized-media http://example.com;";

The allow Attribute On iFrames

If we are primarily concerned with what happens with the content in an iframe, we can use Feature Policy on the iframe itself; this benefits from slightly better browser support at the time of writing with Chrome and Safari supporting this use.

If I am embedding a site and do not want that site to use geolocation, camera or microphone APIs then my iframe would look like the following example:

<iframe allow="geolocation 'none'; camera 'none'; microphone 'none'">

You may already be familiar with the individual attributes which control the content of iframes: allowfullscreen, allowpaymentrequest, and allowusermedia. These can be replaced by the Feature Policy allow attribute, and for browser compatibility reasons you can use both on an iframe. If you do use both attributes, then the most restrictive one will apply. The Google article shows an example of an iframe that uses allowfullscreen — meaning that the iframe is allowed to enter fullscreen — but then a conflicting Feature Policy of fullscreen 'none'. These conflict, so the most restrictive policy wins, and this iframe would not be allowed to enter fullscreen.

<iframe allowfullscreen allow="fullscreen 'none'" src="...">

The iframe element also has a sandbox attribute designed to manage support for many features. This feature was also added to Content Security Policy with a sandbox value which disables all sandbox features, which can then be opted back into selectively. There is some crossover between sandbox features and those controlled by Feature Policy, and Feature Policy does not seek to duplicate those values already covered by sandbox. It does, however, address some of the limitations of sandbox by taking a more fine-grained approach to managing these policies, rather than one of turning everything off globally as one large policy set.
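As a hedged sketch of how the two attributes can be combined on a single iframe, with sandbox handling the broad restrictions and allow the fine-grained ones (the src is a placeholder):

<iframe sandbox="allow-scripts" allow="camera 'none'; microphone 'none'" src="https://example.com/widget"></iframe>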

Feature Policy And Reporting API

Feature Policy violations can be reported via the Reporting API, which means that you could develop a comprehensive set of policies tracking feature usage across your site. This would be completely transparent to your users but give you a huge amount of information about how features were being used.
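To give a flavor of what this could look like on the client side, here is a minimal sketch using the ReportingObserver JavaScript API; note that the 'feature-policy-violation' report type reflects Chrome’s current behavior and may change while the specification evolves:

const observer = new ReportingObserver((reports) => {
  for (const report of reports) {
    // Each report describes a feature used in violation of the policy.
    console.log(report.type, report.url, report.body);
  }
}, { types: ['feature-policy-violation'], buffered: true });

observer.observe();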

Browser Support For Feature Policy

Currently, browser support for Feature Policy is limited to Chrome; however, in many cases where you are using Feature Policy during development and when previewing sites, this is not necessarily a problem.

Many of the use cases I will outline below are usable right now, without causing any impact to site visitors who are using browsers without support.

When To Use Feature Policy

I really like the idea of being able to use Feature Policy to help back up decisions made when developing the site. These decisions may well be written up in documents such as a performance budget or a GDPR audit, but they then become something we have to remember to preserve throughout the life of the site. This is not always easy when multiple people work on a site: people who perhaps weren’t involved in the initial decision-making, or who may simply be unaware of the requirements. We think a lot about third parties somehow managing to impact our site; however, sometimes our sites need protecting from ourselves!

Keeping An Eye On Third Parties

You could prevent a third-party site from accessing the camera or microphone using a feature policy on the iframe with the allow attribute. If the reason for embedding that site has nothing to do with those features, then disabling them means that the site can never start asking for them. This could then be linked with your processes for ensuring GDPR compliance. As you audit the privacy impact of your site, you can build in processes for locking down the access of third parties by way of feature policy — giving you and your visitors additional security and peace of mind.

This usage does rely on browser support for Feature Policy to block the usage. However, you could use Feature Policy reporting mode to inform you of usage of these APIs if the third party changed what they would be doing. This would give you a very quick heads-up — essentially as soon as the first person using Chrome hits the site.

Selectively Enabling Features

We also might want to selectively enable some features which are normally blocked. Perhaps we wish to allow an iframe loading content from another site to use the geolocation feature in the browser. Chrome by default blocks this, but if you are loading content from a trusted site you could enable the cross-origin request using Feature Policy. This means that you can safely turn on features when loading content from another domain that is under your control.
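Such an embed could look something like the following sketch (the URL is hypothetical):

<iframe src="https://trusted-maps.example.com/embed" allow="geolocation"></iframe>

Listing a feature in the allow attribute without an explicit allowlist grants it to the origin of the iframe’s src, which is exactly what we want here.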

Catching Use Of Outdated APIs And Poorly Performing Features

Feature Policy can be run in a report-only mode. It can then track usage of certain features and let you know when they are found on the site. This can be useful in many scenarios. If you have a very large site with a lot of legacy code, enabling Feature Policy would help you to track down the places that need attention. If you work with a large team (especially if developers often pull in some third party libraries of code), Feature Policy can catch things that you would rather not see on the site.

Dealing With Poorly Optimized Images

While most of the articles I’ve seen about Feature Policy concentrate on the security and privacy aspects, the features around image optimization really appealed to me, as someone who deals with a lot of content generated by technical and non-technical users. Feature Policy can be used to help protect the user experience as well as the performance of your site by preventing overly large — or unoptimized images — being downloaded by visitors.

In an ideal world, your CMS would deal with image management, ensuring that images were sensibly resized, optimized for the web and the context they will be displayed in. Real life is rarely that ideal world, however, and so sometimes the job of resizing and optimizing images is left to content editors to ensure they are not uploading huge images to the web. This is particularly an issue if you are using a static CMS with no content management layer on top of it. Even as a technical person, it is very easy to forget to resize that giant screenshot or camera image you popped into a folder as a placeholder.

Currently behind a flag in Chrome are features which can help. The idea behind these features is to highlight the problematic images so that they can be fixed — without completely breaking the site.

The unsized-media feature policy looks for images or video which do not have a size set in the HTML or CSS. When an unsized media element loads, it can cause the content on the page to reflow.

In order to prevent any unsized media being added to the site, set the following header. Media will then be displayed with a default size of 300×150 pixels. You will see your site loading with small media, and realize you have a problem to fix.

Feature-Policy: unsized-media 'none'

See a demo (needs Chrome Canary with Experimental Web Platform Features on).
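The fix on the content side is simply to give media explicit dimensions, along these lines (the file name and dimensions are, of course, placeholders):

<img src="team-photo.jpg" width="800" height="600" alt="The team at the office">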

The oversized-images feature policy checks that images are not much larger than their container. If they are, a placeholder will be shown instead. This policy is incredibly useful for checking that you are not sending huge desktop images to your mobile users.

Feature-Policy: oversized-images 'none'

See a demo (needs Chrome Canary with Experimental Web Platform Features on).

The unoptimized-images feature policy checks that the data size of an image in bytes is no more than 0.5x bigger than its rendering area in pixels. If this policy is enabled and an image violates it, a placeholder will be shown instead of the image.

Feature-Policy: unoptimized-images 'none'

See a demo (needs Chrome Canary with Experimental Web Platform Features on).

Testing And Reporting On Feature Policy

Chrome DevTools will display a message to inform you that certain features have been blocked or enabled by a Feature Policy. If you have enabled Feature Policy on your site, you can check that this is working.

Support for Feature Policy has also been added to the Security Headers site, which means you can check for these along with headers such as Content Security Policy on your site — or other sites on the web.

There is a Chrome DevTools Extension which lets you toggle on and off different Feature Policies (also a great way to check your pages without needing to configure any headers).

If you would like to get into integrating your Feature Policies with the Reporting API, further information on how to do this is available.

Further Reading And Resources

I have found a number of resources, many of which I used when researching this article. These should give you all that you need to begin implementing Feature Policy in your own applications. If you are already using Content Security Policy, this seems an additional logical step towards controlling the way your site works with the browser to help ensure the security and privacy of people using your site. You have the added bonus of being able to use Feature Policy to help you keep on top of performance-damaging elements being added to your site over time.

(il)
Categories: Around The Web

Introducing Float.com: A Better Alternative To Spreadsheets

Smashing Magazine - Tue, 12/11/2018 - 6:50am
Introducing Float.com: A Better Alternative To Spreadsheets Nick Babich 2018-12-11T12:50:00+01:00

(This is a sponsored post.) In today’s highly competitive market, it’s vital to move fast. After all, we all know how the famous saying goes: “Time is money.” The faster your product team moves when creating a product, the higher the chance it’ll succeed in the market. If you are a project manager, you need a tool that helps you get the most out of each of your team member’s time.

Though creative teams typically work on new and innovative products, many still use legacy tools to manage their work. Spreadsheets are one of the most common tools in the project manager’s toolbox. While this might be adequate for a team of two or three members, as a team grows, managing the team’s time becomes a demanding job.

Whenever project managers try to manage their team using spreadsheets alone, they usually face the following problems:

1. Lack Of Glanceability

Understanding what’s really happening on a project takes a lot of work. It’s hard to visually grasp who’s busy and who’s not. As a result, some team members might end up overloaded, while others will have too little to do. You also won’t get a clear breakdown of how much time is being devoted to particular work and particular clients, which is crucial not just for billing purposes but also to inform your agency’s future decisions, like who to hire next.

2. Hard To Report To Stakeholders

With spreadsheets as the home of project management, translating data about people and time into tangible insights is a challenge. Data visualization is also virtually impossible with the limited range of chart-building functions that spreadsheet tools provide. As a result, reporting with spreadsheets becomes a time-consuming task. The more people and activities a project has, the more of a project manager’s time will be consumed by reporting.

3. Spreadsheets Manage Tasks, Not People

Managing projects with individual spreadsheets is a disaster waiting to happen. Though a single spreadsheet may give a clear breakdown of a single project, it has no way of indicating overload and underload for particular team members across all projects.

4. Lack Of High-Level Overview Of A Project

It’s well known in the industry that many designers suffer from tunnel vision: Without the big picture of a project in mind, their focus turns to solving ongoing tasks. But the problem of tunnel vision isn’t limited to designers; it also exists in project management.

When a project manager uses a tool like a spreadsheet, it is usually hard (or even impossible) to create a high-level overview of a project. Even when the project manager invests a lot of time in creating this overview, the picture becomes outdated as soon as the company’s direction shifts.

5. The Risk Of Outdated Information

Product development moves quickly, making it a struggle for project managers to continually monitor and introduce all required changes into the spreadsheet. Not only is this time-consuming, but it’s also not much fun — and, as a result, often doesn’t get done.

How Technology Makes Our Life Better: Introducing Float

The purpose of technology has always been to reduce mechanical work in order to focus on innovation. In a creative or product agency, the ultimate goal is to automate as much of the product design and human management process as possible, so that the team can focus on the deep work of creativity and execution.

With this in mind, let’s explore Float, a resource management app for creative agencies. Float makes it easier to understand who’s working on what, becoming a single source of truth for your entire team.

Helping Product Managers Overcome The Challenges Of Time And People Management

Float facilitates team collaboration and makes work more effective. Here’s how:

1. Visual Team Planner: See Team And Tasks At A Glance

Float allows you to plan tasks visually using the “Schedule” tab. In this view, you can allocate and update project assignments. A drag-and-drop interface makes scheduling your team simple.

Here’s an example of the Schedule View:

The schedule is not only a bird’s-eye view of who’s working on what and when, but also a dynamic canvas. Click on any empty space to create a task. Each task can be easily modified, extended or split. (Large preview)

But that’s not all. You can do more:

  • Prioritize tasks
    You can do this by simply moving them on top of others.
  • Duplicate tasks
    Simply press “Shift” and drag a selected task to a new location.
  • Extend a task
    If someone on your team needs extra time to finish a task, you can extend the time in one click using the visual interface.
  • Split a task
    When it becomes evident that someone on your team needs help with a task, it’s easy to split the task into parts and assign each part to another member.
2. Built-In Reporting And Statistics

Float’s built-in reporting and statistics feature (“utilization” reports) can save project managers hours of manual work at the end of the week, month or quarter.

Float helps you keep track of all of a project’s hours. (Large preview)

It’s relatively easy to customize the format of the report. You can:

  • Choose the roles of team members (employees, contractors, all) in “People”;
  • Choose the type of tasks (tentative, confirmed, completed) in “Task”.

It’s vital to mention that Float allows you to define different roles for team members. For example, if someone works part-time, you can define this in the “Profile” settings and specify that their work is part-time only. This feature also comes in handy when you need to calculate a budget.


You can email team members their individual hours for the week.

3. Search And Filter: All The Information You Need, At Your Fingertips

All of your team’s information sits in one place. You don’t need to use different spreadsheets for different projects; to add a new project, simply click on “Projects” and add a new object.

You can also use the filtering tools to highlight specific projects. A simple way to see relevant tasks at a glance is to filter by tags, leaving only particular types of projects (for example, projects that have contractors).

4. Getting A High-Level Overview Of All Your Projects

With Float, you’ll always have a record of what happened and when.

  • Set milestones in a project’s settings, and keep an eye on upcoming milestones in your Activity Feed.

With Float, you can set milestones for your project. (Large preview)

  • Drill down into a user or a team and review their schedule. When you see that someone is overscheduled (red will tell you that), you can move other team members to that activity and regroup them.

You can view by day, week or month, making it easy to zoom out and see a big picture of your project.

Zoom in for a detailed day view, or zoom out to view a monthly forecast. (Large preview)

5. Reducing The Risk Of Outdated Information

You don’t need to switch between multiple tools to find out everything you need to know. This means your information will be up to date (it’s much easier to manage your project using a single tool, rather than many).

Float can be easily connected to all the services you use.

“Schedule”, “People” and “Projects Tools” work together to create context. (Large preview)

Conclusion

If your team’s project management leaves a lot to be desired, Float is a no-brainer. Float makes it easy to see everything you need to know about your team’s projects in a single place, giving project managers the information and functionality they need to handle the fast-paced world of digital design and development. Get started with Float today!

(ms, ra, al, il)
Categories: Around The Web

How To Build A Real-Time App With GraphQL Subscriptions On Postgres

Smashing Magazine - Mon, 12/10/2018 - 8:00am
How To Build A Real-Time App With GraphQL Subscriptions On Postgres How To Build A Real-Time App With GraphQL Subscriptions On Postgres Sandip Devarkonda 2018-12-10T14:00:23+01:00 2018-12-19T09:07:50+00:00

In this article, we’ll take a look at the challenges involved in building real-time applications and how emerging tooling is addressing them with elegant solutions that are easy to reason about. To do this, we’ll build a real-time polling app (like a Twitter poll with real-time overall stats) just by using Postgres, GraphQL, React and no backend code!

The primary focus will be on setting up the backend (deploying the ready-to-use tools, schema modeling) and on the aspects of frontend integration with GraphQL, and less on the UI/UX of the frontend (some knowledge of ReactJS will help). The tutorial section will take a paint-by-numbers approach, so we’ll just clone a GitHub repo for the schema modeling and the UI, and tweak it, instead of building the entire app from scratch.

All Things GraphQL

Do you know everything you need to know about GraphQL? If you have your doubts, Eric Baer has you covered with a detailed guide on its origins, its drawbacks and the basics of how to work with it. Read article →

Before you continue reading this article, I’d like to mention that a working knowledge of the following technologies (or substitutes) is beneficial:

  • ReactJS
    This can be replaced with any frontend framework, Android or iOS by following the client library documentation.
  • Postgres
    You can work with other databases (using different tools); the principles outlined in this post will still apply.

You can also adapt this tutorial context for other real-time apps very easily.

A demonstration of the features in the polling app that we’ll be building. (Large preview)

As illustrated by the accompanying GraphQL payload at the bottom, there are three major features that we need to implement:

  1. Fetch the poll question and a list of options (top left).
  2. Allow a user to vote for a given poll question (the “Vote” button).
  3. Fetch results of the poll in real-time and display them in a bar graph (top right; we can gloss over the feature to fetch a list of currently online users as it’s an exact replica of this use case).

Challenges With Building Real-Time Apps

Building real-time apps, especially as a frontend developer or someone who has recently transitioned to fullstack development, is a hard engineering problem to solve. This is generally how contemporary real-time apps work (in the context of our example app):

  1. The frontend updates a database with some information; in our case, a user’s vote is sent to the backend with the poll/option and user information (user_id, option_id).
  2. The first update triggers another service that aggregates the poll data to render an output that is relayed back to the app in real-time (every time a new vote is cast by anyone; if this is done efficiently, only the updated poll’s data is processed and only those clients that have subscribed to this poll are updated):
    • Vote data is first processed by a register_vote service (assume that some validation happens here), which triggers a poll_results service.
    • Real-time aggregated poll data is relayed by the poll_results service to the frontend for displaying overall statistics.
A poll app designed traditionally

This model is derived from a traditional API-building approach, and consequently has similar problems:

  1. Any of the sequential steps could go wrong, leaving the UX hanging and affecting other independent operations.
  2. The API layer requires a lot of effort, as it’s a single point of contact for the frontend app and interacts with multiple services. It also needs to implement a websockets-based real-time API — there is no universal standard for this, so tooling support for automation is limited.
  3. The frontend app is required to add the necessary plumbing to consume the real-time API and may also have to solve the data consistency problem typically seen in real-time apps (less important in our chosen example, but critical in ordering messages in a real-time chat app).
  4. Many implementations resort to using additional non-relational databases on the server-side (Firebase, etc.) for easy real-time API support.

Let’s take a look at how GraphQL and associated tooling address these challenges.

What Is GraphQL?

GraphQL is a specification for a query language for APIs, and a server-side runtime for executing queries. This specification was developed by Facebook to accelerate app development and provide a standardized, database-agnostic data access format. Any specification-compliant GraphQL server must support the following:

  1. Queries for reads
    A request type for requesting nested data from a data source (which can be either one or a combination of a database, a REST API or another GraphQL schema/server).
  2. Mutations for writes
    A request type for writing/relaying data into the aforementioned data sources.
  3. Subscriptions for live-queries
    A request type for clients to subscribe to real-time updates.

GraphQL also uses a typed schema. The ecosystem has plenty of tools that help you identify errors at dev/compile time which results in fewer runtime bugs.

Here’s why GraphQL is great for real-time apps:

  • Live-queries (subscriptions) are an implicit part of the GraphQL specification. Any GraphQL system has to have native real-time API capabilities.
  • A standard spec for real-time queries has consolidated community efforts around client-side tooling, resulting in a very intuitive way of integrating with GraphQL APIs.

GraphQL and a combination of open-source tooling for database events and serverless/cloud functions offer a great substrate for building cloud-native applications with asynchronous business logic and real-time features that are easy to build and manage. This new paradigm also results in great user and developer experience.

In the rest of this article, I will use open-source tools to build an app based on this architecture diagram:

A poll app designed with GraphQL

Building A Real-Time Poll/Voting App

With that introduction to GraphQL, let’s get back to building the polling app as described in the first section.

The three features (or stories) highlighted above have been chosen to demonstrate the different GraphQL request types that our app will make:

  1. Query
    Fetch the poll question and its options.
  2. Mutation
    Let a user cast a vote.
  3. Subscription
    Display a real-time dashboard for poll results.
GraphQL request types in the poll app (Large preview)

Prerequisites
  • A Heroku account (use the free tier, no credit card required)
    To deploy a GraphQL backend (see next point below) and a Postgres instance.
  • Hasura GraphQL Engine (free, open-source)
    A ready-to-use GraphQL server on Postgres.
  • Apollo Client (free, open-source SDK)
    For easily integrating client apps with a GraphQL server.
  • npm (free, open-source package manager)
    To run our React app.
Deploying The Database And A GraphQL Backend

We will deploy an instance each of Postgres and GraphQL Engine on Heroku’s free tier. We can use a nifty Heroku button to do this with a single click.

Heroku button

Note: You can also follow this link or search the documentation for “Hasura GraphQL deployment for Heroku” (or other platforms).

Deploying Postgres and GraphQL Engine to Heroku’s free tier (Large preview)

You will not need any additional configuration, and you can just click on the “Deploy app” button. Once the deployment is complete, make a note of the app URL:

<app-name>.herokuapp.com

For example, in the screenshot above, it would be:

hge-realtime-app-tutorial.herokuapp.com

What we’ve done so far is deploy an instance of Postgres (as an add-on in Heroku parlance) and an instance of GraphQL Engine that is configured to use this Postgres instance. As a result of doing so, we now have a ready-to-use GraphQL API but, since we don’t have any tables or data in our database, this is not useful yet. So, let’s address this immediately.

Modeling the database schema

The following schema diagram captures a simple relational database schema for our poll app:

Schema design for the poll app. (Large preview)

As you can see, the schema is a simple, normalized one that leverages foreign-key constraints. It is these constraints that are interpreted by the GraphQL Engine as 1:1 or 1:many relationships (e.g. poll:options is a 1:many relationship, since each poll can have more than one option, linked by the foreign-key constraint between the id column of the poll table and the poll_id column in the option table). Related data can be modelled as a graph and can thus power a GraphQL API. This is precisely what the GraphQL Engine does.

Based on the above, we’ll have to create the following tables and constraints to model our schema (a rough SQL sketch follows the list):

  1. Poll
    A table to capture the poll question.
  2. Option
    Options for each poll.
  3. Vote
    To record a user’s vote.
  4. Foreign-key constraint between the following fields (table : column):
    • option : poll_id → poll : id
    • vote : poll_id → poll : id
    • vote : created_by_user_id → user : id
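
As a rough sketch (not the repo’s actual migrations), the schema described above might translate to SQL like the following; column names beyond the ones mentioned are assumptions:

-- Hypothetical DDL for the poll app; the sample repo’s migrations may differ.
CREATE TABLE "user" (
  id uuid PRIMARY KEY,
  name text
);

CREATE TABLE poll (
  id uuid PRIMARY KEY,
  question text NOT NULL
);

CREATE TABLE option (
  id uuid PRIMARY KEY,
  poll_id uuid NOT NULL REFERENCES poll (id), -- option : poll_id → poll : id
  text text NOT NULL
);

CREATE TABLE vote (
  id uuid PRIMARY KEY,
  poll_id uuid NOT NULL REFERENCES poll (id), -- vote : poll_id → poll : id
  option_id uuid REFERENCES option (id),
  created_by_user_id uuid REFERENCES "user" (id) -- vote : created_by_user_id → user : id
);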

Now that we have our schema design, let’s implement it in our Postgres database. To instantly bring this schema up, here’s what we’ll do:

  1. Download the GraphQL Engine CLI.
  2. Clone this repo:
    $ git clone https://github.com/hasura/graphql-engine
    $ cd graphql-engine/community/examples/realtime-poll
  3. Go to hasura/ and edit config.yaml:
    endpoint: https://<app-name>.herokuapp.com
  4. Apply the migrations using the CLI, from inside the project directory (that you just downloaded by cloning):
    $ hasura migrate apply

That’s it for the backend. You can now open the GraphQL Engine console and check that all the tables are present (the console is available at https://<app-name>.herokuapp.com/console).

Note: You could also have used the console to implement the schema by creating individual tables and then adding constraints using a UI. Using the built-in support for migrations in GraphQL Engine is just a convenient option that was available because our sample repo has migrations for bringing up the required tables and configuring relationships/constraints (this is also highly recommended regardless of whether you are building a hobby project or a production-ready app).

Integrating The Frontend React App With The GraphQL Backend

The frontend in this tutorial is a simple app that shows the poll question, the option to vote, and the aggregated poll results in one place. As I mentioned earlier, we’ll first focus on running this app so you get the instant gratification of using our recently deployed GraphQL API, then see how the GraphQL concepts we looked at earlier in this article power the different use cases of such an app, and then explore how the GraphQL integration works under the hood.

NOTE: If you are new to ReactJS, you may want to check out some of these articles. We won’t be getting into the details of the React part of the app, and instead, will focus more on the GraphQL aspects of the app. You can refer to the source code in the repo for any details of how the React app has been built.

Configuring The Frontend App
  1. In the repo cloned in the previous section, edit HASURA_GRAPHQL_ENGINE_HOSTNAME in the src/apollo.js file (inside the /community/examples/realtime-poll folder) and set it to the Heroku app URL from above:
    export const HASURA_GRAPHQL_ENGINE_HOSTNAME = 'random-string-123.herokuapp.com';
  2. Go to the root of the repository/app-folder (/realtime-poll/) and use npm to install the prerequisite modules, then run the app:
    $ npm install
    $ npm start
Screenshot of the live poll app (Large preview)

You should be able to play around with the app now. Go ahead and vote as many times as you want; you’ll notice the results changing in real time. In fact, if you set up another instance of this UI and point it to the same backend, you’ll be able to see results aggregated across all the instances.

So, how does this app use GraphQL? Read on.

Behind The Scenes: GraphQL

In this section, we’ll explore the GraphQL features powering the app, followed by a demonstration of the ease of integration in the next one.

The Poll Component And The Aggregated Results Graph

The poll component on the top left fetches a poll with all of its options and captures a user’s vote in the database. Both of these operations are done using the GraphQL API. For fetching a poll’s details, we make a query (remember this from the GraphQL introduction?):

query {
  poll {
    id
    question
    options {
      id
      text
    }
  }
}

Using the Mutation component from react-apollo, we can wire up the mutation to an HTML form such that the mutation is executed using the variables optionId and userId when the form is submitted:

mutation vote($optionId: uuid!, $userId: uuid!) {
  insert_vote(objects: [{option_id: $optionId, created_by_user_id: $userId}]) {
    returning {
      id
    }
  }
}
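
To make that wiring concrete, here is a minimal sketch of such a component, assuming react-apollo v2’s render-prop API; the component name and props (VoteButton, optionId, userId) are illustrative, not the repo’s actual code:

import React from 'react';
import gql from 'graphql-tag';
import { Mutation } from 'react-apollo';

const VOTE_MUTATION = gql`
  mutation vote($optionId: uuid!, $userId: uuid!) {
    insert_vote(objects: [{ option_id: $optionId, created_by_user_id: $userId }]) {
      returning {
        id
      }
    }
  }
`;

// A hypothetical voting button: submits the selected option for the given user.
export const VoteButton = ({ optionId, userId }) => (
  <Mutation mutation={VOTE_MUTATION}>
    {(vote, { loading }) => (
      <button
        disabled={loading}
        onClick={() => vote({ variables: { optionId, userId } })}
      >
        {loading ? 'Submitting…' : 'Vote'}
      </button>
    )}
  </Mutation>
);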

To show the poll results, we need to derive the count of votes per option from the data in vote table. We can create a Postgres View and track it using GraphQL Engine to make this derived data available over GraphQL.

CREATE VIEW poll_results AS
SELECT poll.id AS poll_id,
       o.option_id,
       count(*) AS votes
FROM ((SELECT vote.option_id,
              option.poll_id,
              option.text
       FROM (vote
             LEFT JOIN public.option ON ((option.id = vote.option_id)))) o
      LEFT JOIN poll ON ((poll.id = o.poll_id)))
GROUP BY poll.question, o.option_id, poll.id;

The poll_results view joins data from the vote and poll tables to provide an aggregate count of votes per option.

Using GraphQL subscriptions over this view, react-google-charts, and the Subscription component from react-apollo, we can wire up a reactive chart that updates in real time whenever a new vote comes in from any client.

subscription getResult($pollId: uuid!) {
  poll_results(where: {poll_id: {_eq: $pollId}}) {
    option {
      id
      text
    }
    votes
  }
}

GraphQL API Integration

As I mentioned earlier, I used Apollo Client, an open-source SDK, to integrate a ReactJS app with the GraphQL backend. Apollo Client is analogous to any HTTP client library, like requests for Python, the standard http module for Node.js, and so on. It encapsulates the details of making an HTTP request (in this case, POST requests). It uses the configuration (specified in src/apollo.js) to make query/mutation/subscription requests (specified in src/GraphQL.jsx, with the option to use variables that can be dynamically substituted in the JavaScript code of your React app) to a GraphQL endpoint. It also leverages the typed schema behind the GraphQL endpoint to provide compile/dev-time validation for the aforementioned requests. Let’s see just how easy it is for a client app to make a live-query (subscription) request to the GraphQL API.

Configuring The SDK

The Apollo Client SDK needs to be pointed at a GraphQL server so that it can automatically handle the boilerplate code typically needed for such an integration. This is exactly what we did when we modified src/apollo.js while setting up the frontend app.
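
For reference, here is a minimal sketch of what a configuration like src/apollo.js might contain; the actual file in the repo may differ, and the exact WebSocket endpoint path depends on your GraphQL Engine version, so treat it as an assumption:

import { ApolloClient } from 'apollo-client';
import { WebSocketLink } from 'apollo-link-ws';
import { InMemoryCache } from 'apollo-cache-inmemory';

export const HASURA_GRAPHQL_ENGINE_HOSTNAME = 'random-string-123.herokuapp.com';

// A single WebSocket link can carry queries, mutations and subscriptions.
const wsLink = new WebSocketLink({
  uri: `wss://${HASURA_GRAPHQL_ENGINE_HOSTNAME}/v1alpha1/graphql`, // path is an assumption
  options: { reconnect: true },
});

export const client = new ApolloClient({
  link: wsLink,
  cache: new InMemoryCache(),
});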

Making A GraphQL Subscription Request (Live-Query)

Define the subscription we looked at in the previous section in the src/GraphQL.jsx file:

const SUBSCRIPTION_RESULT = `
  subscription getResult($pollId: uuid!) {
    poll_results (
      order_by: option_id_desc,
      where: { poll_id: {_eq: $pollId} }
    ) {
      option_id
      option {
        id
        text
      }
      votes
    }
  }`;

We’ll use this definition to wire up our React component:

export const Result = (pollId) => (
  <Subscription subscription={gql`${SUBSCRIPTION_RESULT}`} variables={pollId}>
    {({ loading, error, data }) => {
      if (loading) return <p>Loading...</p>;
      if (error) return <p>Error :(</p>;
      return (
        <div>
          <div>
            {renderChart(data)}
          </div>
        </div>
      );
    }}
  </Subscription>
)

One thing to note here is that the above subscription could also have been a query. Merely replacing one keyword with another gives us a “live query”, and that’s all it takes for the Apollo Client SDK to hook this real-time API up to your app. Every time there’s a new dataset from our live query, the SDK triggers a re-render of our chart with this updated data (using the renderChart(data) call). That’s it. It really is that simple!

Final Thoughts

In three simple steps (creating a GraphQL backend, modeling the app schema, and integrating the frontend with the GraphQL API), you can quickly wire up a fully-functional real-time app, without getting mired in unnecessary details such as setting up a websocket connection. That right there is the power of community tooling backing an abstraction like GraphQL.

If you’ve found this interesting and want to explore GraphQL further for your next side project or production app, here are some factors you may want to use for building your GraphQL toolchain:

  • Performance & Scalability
    GraphQL is meant to be consumed directly by frontend apps (used merely as an ORM in the backend, it’s no better than one; the real productivity benefits come from consuming it directly). So your tooling needs to be smart about efficiently using database connections and should be able to scale effortlessly.
  • Security
    It follows from above that a mature role-based access-control system is needed to authorize access to data.
  • Automation
    If you are new to the GraphQL ecosystem, handwriting a GraphQL schema and implementing a GraphQL server may seem like daunting tasks. Maximize the automation from your tooling so you can focus on the important stuff like building user-centric frontend features.
  • Architecture
    As trivial as the above efforts may seem, a production-grade app’s backend architecture may involve advanced GraphQL concepts like schema-stitching, etc. Moreover, the ability to easily generate/consume real-time APIs opens up the possibility of building asynchronous, reactive apps that are resilient and inherently scalable. Therefore, it’s critical to evaluate how GraphQL tooling can streamline your architecture.
Related Resources
  • You can check out a live version of the app over here.
  • The complete source code is available on GitHub.
  • If you’d like to explore the database schema and run test GraphQL queries, you can do so over here.
(rb, ra, yk, il)
Categories: Around The Web

What Can Be Learned From The Gutenberg Accessibility Situation?

Smashing Magazine - Fri, 12/07/2018 - 5:30am
What Can Be Learned From The Gutenberg Accessibility Situation? What Can Be Learned From The Gutenberg Accessibility Situation? Andy Bell 2018-12-07T11:30:08+01:00 2018-12-19T09:07:50+00:00

So far, Gutenberg has had a very mixed reception from the WordPress community and that reception has become increasingly negative since a hard deadline was set for the 5.0 release, even though many considered it to be incomplete. A hard release deadline in software is usually fine, but there is a glaring issue with this particular one: what will be the main editor for a platform that powers about 32% of the web isn’t fully accessible. This issue has been raised many times by the community, and it’s been effectively brushed under the carpet by Automattic’s leadership — at least it comes across that way.

Sounds like a messy situation, right? I’m going to dive into what’s happened and how this sort of situation might be avoided by others in the future.

Further Context

For those amongst us who haven’t been following along or don’t know much about WordPress, I’ll give you a bit of context. For those that know what’s gone on, you can skip straight to the main part of the article.

WordPress powers around 32% of the web with both the open-source, self-hosted CMS and the wordpress.com hosted blogs. Although WordPress, the CMS software, is open-source, it is heavily contributed to by Automattic, who run wordpress.com, amongst other products. Automattic’s CEO, Matt Mullenweg, is also the co-founder of the WordPress open-source project.

It’s important to understand that WordPress, the CMS, is not a commercial Automattic project — it is open source. Automattic do, however, make lots of decisions about the future of WordPress, including the brand new editor, Gutenberg. The editor has been available as a plugin while it’s been in development, so WordPress users can use it as their main editor and provide feedback — a lot of which has been negative. Gutenberg is shipping as the default editor in the 5.0 major release of WordPress, and it will be the forced default editor, with only the download of the Classic Editor plugin preventing it. This forced change has had a mixed response from the community, to say the least.

I’ve personally been very positive about Gutenberg with my writing, teaching and speaking, as I genuinely think it’ll be a positive step for WordPress in the long run. As the launch of WordPress 5.0 has come ever closer, though, my concerns about accessibility have been growing. The accessibility issues are being “fixed” as I write this, but the handling of the situation has been incredibly poor, from Automattic.

I invite you to read this excellent, ever-updating Twitter thread by Adrian Roselli. He’s done a very good job of collecting information and providing expert commentary. He’s covered all of the events in a very straightforward manner.

Right, you’re up to speed, so let’s crack on.

What Happened?

For as long as the Gutenberg plugin has been available to install, there have been accessibility issues. Even when I very excitedly installed it and started hacking away at custom blocks back in March, I could see there was a tonne of issues with the basics, such as focus management. I kept telling myself, “This editor is very early doors, so it’ll all get fixed before WordPress 5.” The problem is: it didn’t. (Well, mostly, anyway.)

This situation was bad enough as it was, but two key things happened that made it worse. The accessibility lead, Rian Rietveld, resigned in October, citing political and codebase issues. The second thing is that Automattic set a hard deadline for WordPress 5’s release, regardless of whether accessibility issues were fixed or not.

Let me just illustrate how bad this is. As cited in Rian’s article: after an accessibility test round in March, the results indicated so many accessibility issues that most testers refused to look at Gutenberg again. We know that the situation has gotten a lot better since then, but there are still a tonne of open issues, even now.

I’ve got to say it how I see it, too. There’s clearly a cultural issue at Automattic in terms of their attitude towards accessibility and how they apparently compensate people who are willing to fix the issues, with a strange culture of free work, even from “outsiders”. Frankly, the attitude of the company’s CEO, Matt Mullenweg, absolutely stinks — especially when he appears to be holding a potential professional engagement hostage over someone’s personal blog decision:

That's too bad was about to reach out to work with Deque on the audits.

— Matt Mullenweg (@photomatt) November 13, 2018

Allow me to double-down on the attitude towards accessibility for a moment. When a big company like Automattic decides to prioritize a deadline plucked out of thin air over enabling people with impairments to use the editor they will be forced to use, it is absolutely shocking. Even more shocking is the message it sends out: that accessibility compliance is not as important as flashy new features. Ironically, there are clearly commercial undertones to this decision for a hard deadline, but as always, free work is expected to sort it out. You’d expect a company like Automattic to fix the situation that they created with their own resource, right?

You’ll probably find it shocking that a crowdfunding campaign has been put together to get an accessibility audit done on Gutenberg. I know I certainly do. You heard me correctly, too: Gutenberg is a product of Automattic’s influence on WordPress, yet Automattic (a company valued at over $1 billion in 2014) was not paying for a much-needed accessibility audit. They were instead sitting back and waiting for everyone else to pay for it. Well, at least they were, until Matt Mullenweg finally committed to funding an audit on 29 November.

How Could This Mess Be Avoided?

Enough dragging people over the coals (for now); let us instead think about how this could have been avoided. Apart from the cultural issues that seem to de-prioritize accessibility at Automattic, I think the design process is mostly at fault in the context of the Gutenberg editor.

A lot of the issues are based around complexity and cognitive load. Creating blocks, editing the content, and maneuvering between blocks is a nightmare for visually impaired and/or keyboard users. Perhaps if accessibility had been considered at the very start of the project, the process of creating, editing and moving blocks would be a lot simpler and thus not a cognitive overload. The problem now is that accessibility is a fix rather than a core feature. The cognitive issues will continue to exist, albeit improved.

Another very obvious thing that could have been done differently would be to provide help and training on the JS-heavy codebase that was introduced. A lot of the accessibility fixing work seems to have been very difficult because the accessibility team had no React developers within it. There was clearly a big decision to utilize modern JavaScript because Mullenweg told everyone to “Learn JavaScript Deeply”. At that point, it would have made a lot of sense to help people who contribute a lot to WordPress for free to also learn JavaScript deeply so that they could have been involved way earlier in the process. I even saw this as an issue and made learning modern JavaScript and React a core focus in a tutorial series I co-authored with Lara Schenck.

I’m convinced that some foresight and investment in processes, planning, and people would have prevented a tonne of the accessibility issues from existing at all. Again, this points at issues with attitude from Automattic’s leader, in my opinion. He’s had the attitude that ignoring accessibility is fine because Gutenberg is a fantastic, empowering new editor. While this is true, it can’t be labeled as truly empowering if it prevents a huge number of users from managing content — in some cases, even doing their jobs. A responsible CEO in this position would probably write an incredibly apologetic statement that addressed the massive oversights. They would probably also postpone the hard deadline set until every accessibility issue was fixed. At the very least, they wouldn’t force the new editor on every single WordPress user.

Wrapping Up

I’ve got to add to this article that I am a massive WordPress fan and can see some unbelievably good opportunities for managing content that Gutenberg provides. It’s not just a new editor — it is a movement. It’s going to shape WordPress for years to come, and it should allow more designers and front-end developers into the ecosystem. This should be welcomed with open arms. Well, if and when it is fully accessible, anyway.

There are also a lot of incredible people working at Automattic and on the WordPress core team, who I have heaps of respect and love for. I know these people will help this situation come good in the end, and they welcome this sort of critique. I also know that lessons will be learned, and I have faith that a mess like this won’t happen again.

Use this situation as a warning, though. You simply can’t ignore accessibility, and you should study up and integrate it into the entire process of your projects as a priority.

(dm, ra, il)
Categories: Around The Web

Elements To Ditch Or Repurpose On Mobile

Smashing Magazine - Thu, 12/06/2018 - 8:00am
Elements To Ditch Or Repurpose On Mobile Elements To Ditch Or Repurpose On Mobile Suzanne Scacca 2018-12-06T14:00:02+01:00 2018-12-19T09:07:50+00:00

With the end of the year quickly approaching, everyone is chiming in with predictions for 2019 web design trends. For the most part, I think these predictions look quite similar to the ones made for 2018 — which is surprising.

As we move deeper into the mobile-first territory, we can’t adhere to the same predictions that made sense for websites viewed on desktop. We, of course, can’t forget about the desktop experience, but it needs to take a backseat to mobile. This is why I wish 2019 predictions (and beyond) would be more practical in nature.

We need to design websites primarily with mobile users in mind, which means having a more efficient system of content delivery. Rather than spend the next year or so adding more design techniques to our repertoire, maybe we should be taking some away?

As the abstract expressionist painter Hans Hofmann said:

“The ability to simplify means to eliminate the unnecessary so that the necessary may speak.”

So, today, I’m going to talk about the mobile design elements we’ve held onto for a little too long and what you should do about them going forward.

Why Do We Need To Get Rid Of Mobile Design Elements In 2019?

Although responsive design and minimalism have inched us closer to the desired effect of mobile first, I don’t think they’ve taken us as far as we can go. And part of that is because we’re reticent to let go of design elements that have been with us for a long time. They might seem essential, but I suspect that many of them can be removed from websites without harming the experience.

This is why: On desktop, there’s a lot of room to play with. Even if you don’t populate every inch of the screen with content, you find creative ways to use the space. With mobile, you’ve drastically reduced the real estate. One of the biggest side effects of this is the amount of scrolling that mobile visitors have to do.

Why does this matter?

A 2018 study from the Nielsen Norman Group on scrolling and attention demonstrates that many users (57%) don’t mind scrolling past the above-the-fold line. That said, 74% of all viewing time occurs within the first two screenfuls.

NNG stats on how much content is consumed on a page (Source: Nielsen Norman Group) (Large preview)

If you try to fit all of those extraneous design elements from the traditional desktop experience into the mobile one, there’s a good chance your visitors won’t ever encounter them.

Although a longer scroll on mobile might be easy enough to execute, you might also find your visitors suffer from scrolling fatigue. My suggestion is to delete design elements on mobile that create excessive scrolls and, consequently, test visitors’ patience.

4 Mobile Design Elements You Should Ditch In 2019

If we’re not going to drastically change web design trends from 2018 to 2019, then I think now is a great time to clean up the mobile web experience. If you’re looking to increase time spent on site as well as your conversion rates, creating a sleeker and more efficient experience would greatly improve your mobile web designs.

In order to explain which mobile design elements you should ditch this year, I’m going to pit the desktop and mobile experiences against one another. This way, you get a sense of why you need to say goodbye to each element on mobile.

1. Sidebars

A sidebar has been a handy web design element for blogs and other news authorities for a long time. However, with responsive and mobile-first design taking over, the sidebar now tends just to get shoved to the very bottom of blog posts. But is that the best place for it?

The Blonde Abroad is an example of one that puts most of the sidebar content into the bottom of a post.

Here is how a post appears on desktop:

The Blonde Abroad blog sidebar on desktop (Source: The Blonde Abroad) (Large preview)

Note that this isn’t the end of the sidebar either. There are a number of other widgets below the ones shown in this screenshot, which is why the mobile counterpart runs on way too long for this website:

The Blonde Abroad blog sidebar on mobile (Source: The Blonde Abroad) (Large preview)

What you’re seeing here isn’t a cool social media-centric page. This is what mobile users find after they scroll past:

  • Ads,
  • A promotion of her web store,
  • Recommended/related posts,
  • A subscriber form,
  • A comment form.

The Instagram feed then shows up, followed by the subscriber form once again! All in all, the actual content ends only about halfway down the page; the rest is filled with self-promotional material. It’s just way too much.

If Instagram is that prominent of a platform for her, she should have a link to it in the header. I would also suggest cutting back on the number of forms on the mobile web pages. Three forms (two of which are duplicates) is excessive. And I’d also probably recommend turning the recommended posts with images and titles into plain text links.

An example of an authority site that handles sidebars well is the MarketingSherpa blog. As you can see here, there is a fairly dense sidebar included in the desktop experience.

The MarketingSherpa desktop sidebar (Source: MarketingSherpa) (Large preview)

Turn your attention to mobile, however, and the sidebar disappears completely. Instead, you’ll encounter a super lightweight experience:

The MarketingSherpa mobile sidebar (Source: MarketingSherpa) (Large preview)

Below each post on the blog, you’ll find a succinct list of links recommended by the author. There is also a Previous/Next widget that enables readers to quickly move to the next published post. It’s a great way to keep readers moving through the site without having to make a mobile web page unnecessarily long.

2. Modal Pop-ups

I know that mobile pop-ups aren’t dying, at least so far as Google is concerned. But intrusive pop-ups aside, does the traditional pop-up have a place on mobile anymore? If we’re really thinking about ways to optimize the user experience, wouldn’t it make sense to do away with the modal altogether?

Here’s an example from Akamai that I’m shocked even exists:

The top of a mobile pop-up on the Akamai website (Source: Akamai) (Large preview)

While perusing one of the internal pages of the mobile site, this pop-up appeared on my screen. At first, I thought, “Oh, cool! A pop-up with a graphic and statistic.” But then I read it and realized it was a scrolling pop-up!

The bottom of a mobile pop-up on the Akamai website (Source: Akamai) (Large preview)

I’m honestly not sure I’ve seen one of these before, but I think it’s the perfect example of why modal mobile pop-ups are never a great idea. In addition to blocking the content of the site almost completely, the pop-up requires the visitor to do work in order to see the whole message.

I ran into another example of a bad pop-up. This one is on the Paul Mitchell website:

A Paul Mitchell pop-up matches the main header graphic (Source: Paul Mitchell) (Large preview)

I thought it was an odd choice to place the same promotion in both the pop-up and the scrolling hero image. This one, however, is easy enough to dismiss, since it’s clear which is the pop-up and which is the image.

On mobile, it’s not that easy to distinguish:

A confusing duplication of a mobile ad or pop-up on the Paul Mitchell site (Source: Paul Mitchell) (Large preview)

If I hadn’t seen the matching pop-up on desktop, I likely would’ve thought this web page had an error upon first seeing the duplication. It also doesn’t help that the hero banner now has an arrow icon in a black box, which could easily be confused for the “X” that closes out the matching pop-up.

It’s a very odd design choice and one I’d tell everyone else to stay away from. Not only does the pop-up appear instantly on the home page (which is a no-no), but it creates a confusing first impression. It might not be the traditional modal, but it still looks bad.

Switching gears, the Four Seasons website does a very nice job of handling its pop-ups. Here is the desktop pop-up widget:

An interactive pop-up offer for the Four Seasons (Source: Four Seasons) (Large preview)

Click on the pop-up, and it will open a full-screen pop-up offer. This is a nice touch as it gives the visitor full control over whether they want to see the pop-up or not.

An interactive pop-up offer is revealed for the Four Seasons (Source: Four Seasons) (Large preview)

The mobile pop-up counterpart does something similar:

An interactive pop-up offer appears on the Four Seasons mobile site (Source: Four Seasons) (Large preview)

The pop-up offer sits snug against the header, never intruding on the experience of the mobile site.

An interactive pop-up offer expands on the Four Seasons mobile site (Source: Four Seasons) (Large preview)

Even once the pop-up is clicked, it never blocks the mobile website from view. It only pushes the content down further on the page. It’s simply designed, easy to follow and gives all of the control over to the mobile user in terms of engagement. It’s a great design choice and one I’d like to see more mobile designers use when designing pop-up elements going forward.

3. Sticky Side Elements

I think a sticky navigation bar or bottom bar on a mobile website is a brilliant idea. As we’ve already seen, visitors are willing to scroll on a website. But visitors are more likely to scroll further down a page if they have an easy way to go somewhere else — to another page, to check out, to a special discount offer, etc.

That said, I’m not a fan of sticky elements on the side of mobile websites. On desktop, they work well. They’re typically tiny icons or widgets that stick to the side or bottom corner of the site. They’re boldly colored, easy to recognize and give visitors the choice of interacting when they’re ready.

On mobile, however, sticky side elements are a bad idea.

Let’s take a look at the Sofitel website, for example.

Feedback widget sticks to side of Sofitel desktop site (Source: Sofitel) (Large preview)

As you can see, there’s an orange “Feedback” button stuck to the left side of the screen. As you scroll down the page, it remains put, making it convenient for visitors to drop the developer a line if something goes wrong.

Here’s how that same button appears on mobile:

Feedback widget covers content on Sofitel mobile site (Source: Sofitel) (Large preview)

Although the “Feedback” button is not always blocking content, there are occasions where it overlaps an image or text as a user scrolls. It might seem like a minor inconvenience, but it could easily be what takes a visitor from feeling annoyed or frustrated with a website to feeling completely over it.

Wreaths Across America is another example of a sticky element getting in the way. On desktop, the blue live chat widget is well-placed.

Wreaths Across America includes live chat widget on every page (Source: Wreaths Across America) (Large preview)

Then, move it over to mobile, and the live chat continuously covers a decent amount of content residing in the bottom-right corner.

Wreaths Across America live chat covers mobile content (Source: Wreaths Across America) (Large preview)

If your visitors aren’t actively engaging with live chat or other sticky side elements on mobile (and your statistics should tell you this), don’t leave them there. Or, at the very least, present an easy way to dismiss them.

One way to get around the sticky overlap issue is the solution BuzzFeed has chosen.

In recent years, many websites utilized floating and sticky social media icons. It was a logical choice as you never knew how long it would take for readers to decide that they just had to share your web page or post with their social media connections.

BuzzFeed sticks social media and sharing icons to the bottom of the screen (Source: BuzzFeed) (Large preview)

As we’ve seen with the live chat and Feedback widgets above, elements that stick to the side of the screen just don’t work on mobile. Instead, we should look to what BuzzFeed has done here and make those icons stick flush with the bottom of the screen.

We already know that sticky navigation and bottom bars stay out of the way of content, so let’s use these key areas of the mobile device to place sticky elements we want people to engage with.

4. Content

It’s not just these extraneous design elements or outliers that you should think about removing on the mobile experience. I believe there are times when content itself doesn’t need to be there.

If you want to get visitors to the crux of your message in just a few scrolls, you can’t be afraid to cut out content that isn’t 100% necessary.

I think ads are one of the worst offenders of this. TechRepublic has a particularly nasty example of this — both for desktop and mobile.

An oversized banner ad on the TechRepublic desktop site (Source: TechRepublic) (Large preview)

This is what the TechRepublic desktop website looks like when you first visit it. That alone is horrendous. Why does anyone use ad banners above the header anymore? And why does this one have to be so large? Shouldn’t TechRepublic’s logo and navigation be the first things people see?

It was my hope that, upon visiting the mobile site, the ad would’ve gone away. Sadly, that wasn’t the case.

An oversized banner ad on the TechRepublic mobile site (Source: TechRepublic) (Large preview)

What we have here is a Best Buy ad that takes up roughly a third of the TechRepublic mobile home page. Sure, once a visitor scrolls down, it will go away. But where do you think visitors’ eyes will go to first? I’m willing to bet some of them will see the logo in the top left and wonder how the heck they ended up on the Best Buy website.

This is one of those times when it’s best to rethink your monetization strategy if it’s going to intrude and confuse the mobile user’s experience.

Now, let’s look at the good.

Kohl’s has a pretty standard product page for an e-commerce website:

Kohl’s product page on desktop (Source: Kohl’s) (Large preview)

When displayed on mobile, however, you’ll find that the product views disappear:

Kohl’s product page on mobile (Source: Kohl’s) (Large preview)

Instead of trying to make room for them, the different product views are hidden under a slider. This is a nice choice if you would prefer not to compromise on how much content is displayed — especially if it’s essential to selling the product.

Another great example of picking and choosing your battles when it comes to displaying content on mobile comes from The Blonde Abroad.

Readers of her blog can choose content based on the global destination, as shown here on the desktop website:

The Blonde Abroad includes a searchable map on desktop (Source: The Blonde Abroad) (Large preview)

It’s a pretty neat search function, especially since it places the content within the context of an actual map.

Rather than try to force a graphic like this to fit to mobile, The Blonde Abroad includes only the essentials needed to conduct a search:

The Blonde Abroad includes only the standard search on mobile (Source: The Blonde Abroad) (Large preview)

While mobile readers might miss out on the mapped content, this provides a much more streamlined experience. Mobile users don’t want to have to scroll left and right, up and down, in order to search for content from an oversized graphic. At its core, this section of the site is about search. And, on mobile, this clean presentation of search options is enough to impress readers and inspire them to read more.

Wrapping Up

In Stephen King’s guide to writing, On Writing, he says something to this effect:

“Create your content. Then, review it and delete 10% of what you created.”

Granted, this applies to writing a story, but I believe this same logic applies to the designing of a mobile website. In other words: Why test your visitors’ patience — or even worse — create too cumbersome of an experience that they miss the most important parts of it? Go ahead and translate the idea you had for the traditional desktop landscape into a mobile setting. Then, review it on mobile and gut all of the unnecessary bits of content or design elements.

(ra, yk, il)
Categories: Around The Web

What Is The Role Of Creativity In UX Design?

Smashing Magazine - Thu, 12/06/2018 - 4:00am
What Is The Role Of Creativity In UX Design? What Is The Role Of Creativity In UX Design? Susan Weinschenk 2018-12-06T10:00:41+01:00 2018-12-19T09:07:50+00:00

(This article is kindly sponsored by Adobe.) You are working on a project for your client, designing the interface for a new application. There have been lots of meetings about the new product, and now it’s time for you to start working on sketching and prototyping a design.

The screens, pages, and forms you are about to create have to fit within the desires and constraints of several players — the marketing department, the developers, the business owner. Some questions start to form as you work on the interface:

“How creative should I be/am I expected to be with this design? Is my role to implement the vision that someone else has come up with? Should I be taking the ideas and constraints and creating my own vision? How much can I stray from the ideas I’ve been given?”

You speak to your main contact on the project and she says:

“Go for it, be creative. Let’s see what you come up with.”

You’re excited to be given a free hand, but now you have to figure out what it means to be creative with UX design and how to go about “being creative”. Is creativity something you can just turn on? Is it a process you follow? Are some people just creative and others aren’t? And if so, which one are you?

Let’s explore.

What Is Creativity?

Creativity can mean a lot of things to a lot of people. The definition I find the most useful is:

“Creativity is a process that results in outcomes that are original and of value.”

This definition has several implications:

  • Process
    It’s not just the end result that defines whether someone is creative. In order to be creative the assumption is that you have followed a particular process. We’ll discuss the process a little later in this post.
  • Outcomes
    Although process is important, process alone is not enough. In order to claim creativity, you have to have something at the end of the process. You have to have an outcome.
  • Original and of value
    The outcome that you have at the end of the process has to be unique and be of some value to someone.

So what is this creative process? In order to know what process would result in creativity, you first have to understand how the brain works in terms of solving problems or coming up with new ideas.

The Brain Science Of Creativity

A popular idea about creativity is that creativity happens in the “right brain”. That’s actually not accurate. There is new and interesting research on what happens in the brain when people are being creative.

Both the left and right half of the brain are involved in creativity. In fact, there is no one area of the brain where creativity happens. Instead, there are three brain “networks” that are involved in creativity. A network is a collection of different parts of the brain that work together.

The Executive Attention Network

The Executive Attention Network refers to brain activity when you are focused on identifying and solving a problem, or deciding what you need to be creative about.

“What is the best way to design this form so that people who just want the default selections aren’t distracted, but also so that when someone needs one of the exceptions they can find what they need to fill out the form?”

When you stare at your screen and think of the above, you are using the Executive Attention Network. Once you have focused on the problem you want to solve, or the creative idea you want to work on, the next network that gets to work is called the Imagination Network.

The Imagination Network

The Imagination Network works in a mostly unconscious way. It reviews your knowledge and memories, and then runs simulations of possible ways to create whatever it is you set your intention to create with the Executive Attention Network.

The Salience Network

Lastly, there is the Salience Network. This is also a largely unconscious process. The Salience Network monitors the activity in the Imagination Network and decides what to pick out and bring to your conscious awareness. This is when you have an “Ah-ha!” moment and say to yourself, “Oh, I know, I could try...”

According to Scott Barry Kaufman, these three networks, all working together, are how our brains normally work to solve problems or create (art, music, screens, writing, and so on).

Putting Your Creativity To Work

Given the way the brain works, there are things you can do to help it be more creative. Here’s a list of four practical ideas that may sound like common sense but go further than that: they help you work with the three networks.

1. Clearly Identify Your Intention

Whether it is a problem you want to solve or a creative idea you want to work on, state and/or write down exactly what you are working on. For example:

  • How should I organize the data on this screen so that people can easily find what they need?
  • What would be a good color to use for the hover on this navigation bar?
  • Is there a container object I should consider so that the conceptual model of this design is clear?
  • Is there a way to show this data visually rather than just having it in a table?

By stating clearly what you are wanting to be creative about, you effectively engage the Executive Attention Network.

2. Take A Break

You’ve clearly stated your intention and your Imagination Network is ready to go to work. The next thing you should do is take some time off from that question/issue. In fact, you should take some time off from any intense mental activity.

If your Executive Attention Network is constantly engaged then it is hard for your Imagination Network to do its work. If possible, go do something that doesn’t require much thought. Go to lunch, go for a walk, return some phone calls; anything that will free up your brain is a good idea. Ideally, you would actually take a nap or go to sleep for the night.

3. Record Your Ah-ha! Ideas

Your Salience Network will eventually start sending ideas to your conscious brain, and inspiration may arrive at any time, so be ready. Keep a pen and paper handy everywhere you go, or keep your phone nearby with a voice recording app easy to access.

You can even buy waterproof paper and pencils to keep in your shower (I do). When you get an idea, write it down. Don’t assume that good ideas will come around again. Capture them when they appear.

4. Switch Between The High Level And The Detail

Research shows that one of the hallmarks of a creative person is their ability to zoom in and out, from the high level to the detail. I have found that to be true of UX Designers as well.

The best and most creative UX Designers that I know are able to think at a conceptual level about a design and then start sketching a detail screen level the next moment. Then they think about the high-level implication of what they just sketched, and then they swoop back down to the detail level.

If you are not used to doing this, then practice. Let’s say you are going to work on a design for an hour. Set a timer for ten minutes and start with some planning and thinking about the problem/solution at hand on a high level.

When the ten minutes are up, do some detail sketching of one detailed part of what you were thinking about. After ten more minutes, stop working on the details and pull back and think about how what you just worked on fits with the larger structure.

Go back and forth every ten minutes. If you are not used to working this way, it may be hard at first, but try it and see if you find that it actually helps your creativity.

I’ve talked to some UX designers who worry that this means they are disorganized, or that they haven’t done enough high-level work upfront. Upfront high-level work can be very important, but you need to allow yourself the freedom to zoom from high level to detail throughout the creative design process.

Some designers think that the “right” way to design is to go through the design process in a step-by-step way, going from the large picture (macro-design) to the micro-level. But the best designs are the ones that allow you to go back and forth between macro and micro as you need to.

Practice moving back and forth from a high-level macro view to a low-level micro view. The practice will help you to see the relationships between macro and micro and will also help you improve as a designer.

Encouraging A Creative Mindset

Two more ideas to encourage a creative mindset:

  1. Don’t be fooled into thinking that following a process is not being creative.
    A good process helps you be creative. A good process takes care of details, helps you think things through, and helps you set your intention. Most creative UX people follow a process.
  2. Don’t worry about constraints.
    Some people think that having constraints means they can’t be creative. The research shows that people are more creative when there are constraints. There is a lot of research on this topic, but an example study is one by Brent Rosso who concludes from the research that:
“Teams experiencing the right kinds of constraints in the right environments, and which saw opportunity in constraints, benefitted creatively from them. The results of this research challenge the assumption that constraints kill creativity, demonstrating instead that for teams able to accept and embrace them, there is freedom in constraint.”

Source: Creativity and Constraints: Exploring the Role of Constraints in the Creative Processes of Research and Development Teams (Organization Studies, 2014, Vol. 35(4) 551–585)

Of course, being too tightly constrained can stifle creativity, but most of the time the constraints we have (colors we can or can’t use, standards and guidelines we need to follow, fonts we have to use, technology considerations we have to align with, deadlines about when we have to have the prototype done) can actually help us be more creative because they engage the brain networks.

The best thing to do with constraints is to be very clear with them when you are setting your intention with the Executive Attention Network. For example, let’s say you have to come up with a prototype for an app, but you only have three days. When you set your intention with the Executive Attention Network, don’t forget to explicitly remind yourself you only have three days. Other constraints might have to do with the technology you can or can’t use, the size of the screen, the number of pages or screens, and so on.

I hope these ideas have sparked some creativity for you. The next time you start on a new project, set an intention for the next part of your project, then go for a walk, and bring a small pad of paper and a pen with you. You will be amazed at what happens next.

This article is part of the UX design series sponsored by Adobe. Adobe XD is made for a fast and fluid UX design process, as it lets you go from idea to prototype faster. Design, prototype and share — all in one app. You can check out more inspiring projects created with Adobe XD on Behance, and also sign up for the Adobe experience design newsletter to stay updated and informed on the latest trends and insights for UX/UI design.

(cm, ms, yk, il)
Categories: Around The Web

Caching Smartly In The Age Of Gutenberg

Smashing Magazine - Wed, 12/05/2018 - 7:00am
Caching Smartly In The Age Of Gutenberg Leonardo Losoviz 2018-12-05T13:00:15+01:00

Caching is needed for speeding up a site: instead of having the server dynamically create the HTML output for each request, it can create the HTML only after it is requested the first time, cache it, and serve the cached version from then on. Caching delivers a faster response and frees up resources on the server. When optimizing the speed of our sites from the server side, caching ranks among the most critical tasks to get right.

When generating the HTML output for the page, if it contains code with user state, such as printing a welcome message “Hello {{User name}}!” for the logged in user, then the page cannot be cached. Otherwise, if Peter visits the site first, and the HTML output is cached, all users would then be welcomed with “Hello Peter!”

Hence, caching plugins, such as those available for WordPress, will generally offer to disable caching when the user is logged in, as shown below for the WP Super Cache plugin:

WP Super Cache recommends disabling caching for logged in users.

Disabling caching for logged in users is undesirable and should be avoided: even if the amount of HTML code with user state is minimal compared to the static content on the page, nothing will be cached. The reason is that the entity being cached is the page, not the particular pieces of HTML code within it, so a single line of code which cannot be cached prevents the whole page from being cached. It is an all-or-nothing situation.

To address this, we can architect our application to avoid rendering HTML code with user state on the server-side, and render it on the client-side only, after fetching its required data through an API (often based on REST or GraphQL). By removing user state from code rendered on the server, that page can then be cached, even if the user is logged in.

In this article, we will explore the following issues:

  • How do we identify those sections of code that require user state, isolate them from the page, and have them rendered on the client-side only?
  • How can it be implemented for WordPress sites through Gutenberg?

Gutenberg Is Bringing Components To WordPress

As I explained in my previous article, Implications of thinking in blocks instead of blobs, Gutenberg is a JavaScript-based editor for WordPress (more specifically, a React-based editor, encapsulating the React libraries behind the global wp object), slated for release in either November 2018 or January 2019. Through its drag-and-drop interface, Gutenberg will utterly transform the experience of creating content for WordPress and, at some later stage, the process of building sites. Instead of creating a page through templates (header.php, index.php, sidebar.php, footer.php) and the content of the page through a single blob of HTML code, we will create components to be placed anywhere on the page, which can control their own logic, load their own data, and self-render.

To appreciate the upcoming change visually, WordPress is moving from this:

Currently pages are built through PHP templates.

To this:

In the near future, pages will be built by placing self-rendering components in them.

Even though Gutenberg as a site builder is not ready yet, we can already think in terms of components when designing the architecture of our site. As for the topic of this article, architecting our application using components as the unit for building the page can help implement an enhanced caching strategy, as we shall see below.

Evaluating The Relationship Between Pages And Components

As mentioned earlier, the entity being cached is the page. Hence, we need to evaluate how components will be placed on the page so as to maximize the page’s cacheability. Based on their dependence on user state, we can broadly categorize pages into the following three groups:

  1. Pages without any user state, such as a “Who we are” page.
  2. Pages with bits and pieces of user state, such as the homepage when welcoming the user (“Welcome Peter!”), or an archive page with a list of posts, showing a “Like” button under each post which is painted blue if the logged in user has liked that post.
  3. Pages naturally with user state, in which the content depends directly on the logged in user, such as the “My posts” or “Edit my profile” pages.

Components, on the other hand, can simply be categorized as requiring user state or not. Because the architecture considers the component as the unit for building the page, the component has the faculty of knowing whether it requires user state. Hence, a <WelcomeUser /> component, which renders “Welcome Peter!”, knows it requires user state, while a <WhoWeAre /> component knows that it does not.

Next, we need to place components on the page and, depending on whether the page and each component require user state or not, we can establish a proper strategy for caching the page and for rendering content to the user as soon as possible. We have the following cases:

1. Pages Without Any User State

These can be cached with no issues.

  • Page is cached => It can’t access user state.
  • Components, none of them requiring user state, are rendered in the server.
A page without user state can only contain components without user state.

2. Pages With Bits And Pieces Of User State

We could make the page either require user state or not. If we make the page require user state, then it cannot be cached, which is a wasted opportunity when most of the content on the page is static. Hence, we’d rather make the page not require user state, and have those components on the page that do require user state, such as <WelcomeUser /> on the homepage, lazy-load: the server renders an empty shell, and the component is rendered on the client-side instead, after getting its data from an API.

Following this approach, all static content in the page will be rendered immediately through server-side rendering (SSR), and those bits and pieces with user state after some delay through client-side rendering (CSR).
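
To make this concrete, here is a minimal, React-flavored sketch of such a lazy-loaded component. The /api/me endpoint and the component internals are hypothetical, not taken from any specific library; the point is that the server renders an empty shell, and the user-specific markup only appears after the data has been fetched in the browser:

import React, { useEffect, useState } from 'react';

function WelcomeUser() {
  const [userName, setUserName] = useState(null);

  useEffect(() => {
    // Runs only in the browser, after the cached page has been served.
    fetch('/api/me')
      .then((response) => response.json())
      .then((user) => setUserName(user.name));
  }, []);

  // On the server (and on the first client render) this is an empty
  // shell, so the page around it remains cacheable.
  if (!userName) {
    return <span />;
  }
  return <span>Welcome {userName}!</span>;
}

export default WelcomeUser;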

  • Page is cached => It can’t access user state.
  • Components not requiring user state are rendered in the server.
  • Components requiring user state are rendered in the client.
A page with bits of user state contains CSR components with user state, and SSR components without user state.

3. Pages Naturally With User State

If the library or framework only enables client-side rendering, then we must follow the same approach as with #2: do not make the page require user state, and add a component, such as <MyPosts />, to self-render in the client.

However, since the main objective of the page is to show user content, making the user wait for this content to be loaded in a second stage is not ideal. Let’s see this with an example: a user who has not logged in yet accesses the “Edit my profile” page. If the site renders the content on the server, then since the user is not logged in, the server will immediately redirect to the login page. Instead, if the content is rendered on the client through an API, the user will first be presented with a loading message, and only after the response from the API is back will the user be redirected to the login page, making the experience slower.

Hence, we are better off using a library or framework that supports server-side rendering, and we make the page require user state (making it non-cacheable):

  • Page is not cached => It can access user state.
  • Components, both requiring and not requiring user state, are rendered in the server.
A page with user state contains SSR components both with and without user state.

From this strategy and all the combinations it produces, deciding whether a component must be rendered server- or client-side boils down to the following pseudo-code:

if (component requires user state and page can’t access user state) {
  render component in client
} else {
  render component in server
}

This strategy allows us to attain our objective: implemented for all pages on the site and for all components placed on each page, and with the site configured not to cache pages which access user state, we can avoid disabling caching of every page whenever the user is logged in.

Rendering Components Client/Server-Side Through Gutenberg

In Gutenberg, those components which can be embedded on the page are called “blocks” (or also Gutenblocks). Gutenberg supports two types of blocks, static and dynamic:

  • Static blocks produce their HTML code already in the client (when the user is interacting with the editor) and save it inside the post content. Hence, they are client-side JavaScript-based blocks.
  • Dynamic blocks, on the other hand, are those which can change their content dynamically, such as a latest posts block, so they cannot save the HTML output inside the post content. Hence, in addition to creating their HTML code on the client-side, they must also produce it from the server at runtime, through a PHP function (defined under the render_callback parameter when registering the block in the backend through the register_block_type function), as sketched below.
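
As a rough sketch of the client-side half of a dynamic block (the block name myplugin/latest-posts is hypothetical, and JSX is assumed to be transpiled in the build step), note how save() returns null: no HTML is stored inside the post content, and the rendering is left to the PHP render_callback on the server.

const { registerBlockType } = wp.blocks;

registerBlockType('myplugin/latest-posts', {
  title: 'Latest Posts',
  icon: 'list-view',
  category: 'widgets',

  // What editors see while editing the post.
  edit: () => <p>The latest posts will be rendered here.</p>,

  // Dynamic blocks save no markup inside the post content;
  // the server produces the output at runtime instead.
  save: () => null,
});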

Because HTML code with user state cannot be saved in the post’s content, a block dealing with user state will necessarily be a dynamic block. In summary, through dynamic blocks we can produce the HTML for a component both on the server- and the client-side, enabling us to implement our optimized caching strategy. The previous pseudo-code, when using Gutenberg, will look like this:

if (block requires user state and page can’t access user state) {
  render block in client through JavaScript
} else {
  render (dynamic) block in server through PHP code
}

Unfortunately, implementing the dual client/server-side functionality doesn’t come without hardship: Gutenberg’s SSR is not isomorphic, i.e. it does not allow a single codebase to produce the output for both the client- and server-side code. Hence, developers would need to maintain two codebases, one in PHP and one in JavaScript, which is far from optimal.

Gutenberg also implements a <ServerSideRender /> component; however, the documentation advises against using it: this component was not designed to improve the speed of the site and render an immediate response to the user, but to provide compatibility with legacy code, such as shortcodes.

As it is explained in the documentation:

“ServerSideRender should be regarded as a fallback or legacy mechanism, it is not appropriate for developing new features against.

“New blocks should be built in conjunction with any necessary REST API endpoints, so that JavaScript can be used for rendering client-side in the edit function. This gives the best user experience, instead of relying on using the PHP render_callback. The logic necessary for rendering should be included in the endpoint, so that both the client-side JavaScript and server-side PHP logic should require a minimal amount of differences.”

As a result, when building our sites, we will need to decide whether to implement SSR, which boosts the site’s speed by enabling an optimal caching experience and by providing an immediate response to the user when loading the page, but which comes at the cost of maintaining two codebases. Depending on the context, it may or may not be worth it.

Configuring What Pages Require User State

Pages requiring (or accessing) user state will be made non-cacheable, while all other pages will be cacheable. Hence, we need to identify which pages require user state. Please notice that this applies only to pages, and not to REST endpoints, since the goal is to render the component on the server already when accessing the page, whereas calling the WP REST API’s endpoints implies getting the data for rendering the component on the client. Hence, from the perspective of our caching strategy, we can assume all REST endpoints will require user state, and so they don’t need to be cached.

To identify which pages require user state, we simply create a function get_pages_with_user_state, like this:

function get_pages_with_user_state() {
  return apply_filters('get_pages_with_user_state', array());
}

We then implement hooks adding the corresponding pages, like this:

// IDs of the pages, retrieved from the WordPress admin
define('MYPOSTS_PAGEID', 5);
define('ADDPOST_PAGEID', 8);

add_filter('get_pages_with_user_state', 'get_pages_with_user_state_impl');
function get_pages_with_user_state_impl($pages) {
  $pages[] = MYPOSTS_PAGEID;

  // "Add Post" may not require user state!
  // $pages[] = ADDPOST_PAGEID;

  return $pages;
}

Please notice how we may not need to add user state for the “Add Post” page (making this page cacheable), even though this page requires validating that the user is logged in when submitting a form to create content on the site. This is because the “Add Post” page may simply display an empty form, requiring no user state whatsoever. Submitting the form will then be a POST operation, which cannot be cached in any case (only GET requests are cached).

Disabling Caching Of Pages With User State In WP Super Cache

Finally, we configure our application to disable caching for those pages which require user state (and cache everything else). We will do this for the WP Super Cache plugin, by blacklisting the URIs of those pages in the plugin settings page:

We can disable caching URLs containing specific strings in WP Super Cache.

What we need to do is create a script that obtains the paths for all pages with user state and saves them in the corresponding input field. This script can then be invoked manually, or automatically as part of the application’s deployment process.

First we obtain all the URIs for the pages with user state:

function get_rejected_strings() {
  $rejected_strings = array();
  $pages_with_user_state = get_pages_with_user_state();
  foreach ($pages_with_user_state as $page) {
    // Add the URI for that page to the list of rejected strings
    $path = substr(get_permalink($page), strlen(home_url()));
    $rejected_strings[] = $path;
  }
  return $rejected_strings;
}

And then, we must add the rejected strings into WP Super Cache’s configuration file, located in wp-content/wp-cache-config.php, updating the value of entry $cache_rejected_uri with our list of paths:

function set_rejected_strings_in_wp_super_cache() {
  if ($rejected_strings = get_rejected_strings()) {
    // Keep the original default values in place
    $rejected_strings = array_merge(
      array('wp-.*\\\\.php', 'index\\\\.php'),
      $rejected_strings
    );
    global $wp_cache_config_file;
    $cache_rejected_uri = "array('".implode("', '", $rejected_strings)."')";
    wp_cache_replace_line(
      '^ *\$cache_rejected_uri',
      "\$cache_rejected_uri = " . $cache_rejected_uri . ";",
      $wp_cache_config_file
    );
  }
}

Upon execution of the set_rejected_strings_in_wp_super_cache function, the settings will be updated with the new rejected strings:

Blacklisting the paths from pages accessing user state in WP Super Cache.

Finally, because we are now able to disable caching for the specific pages that require user state, there is no need to disable caching for logged in users anymore:

No need to disable caching for logged in users anymore!

That’s it!

Conclusion

In this article, we explored a way to enhance our site’s caching — mainly aimed at enabling caching on the site even when users are logged in. The strategy relies on disabling caching only for those pages which require user state, and on using components that can decide whether to be rendered on the client- or the server-side, depending on whether the page accesses user state.

As a general concept, the strategy can be implemented on any architecture that supports server-side rendering of components. In particular, we analyzed how it can be implemented for WordPress sites through Gutenberg, advising you to assess whether it is worth the trouble of maintaining two codebases, one in PHP for the server-side code and one in JavaScript for the client-side code.

Finally, we explained that the solution can be integrated into the caching plugin through a custom script that automatically produces the list of pages to avoid caching, and produced the code for the WP Super Cache plugin.

After implementing this strategy to my site, it doesn’t matter anymore if visitors are logged in or not. They will always access a cached version of the homepage, providing a faster response and a better user experience.

(rb, ra, yk, il)
Categories: Around The Web

Web Design And Development Advent Roundup For 2018

Smashing Magazine - Mon, 12/03/2018 - 9:30am
Web Design And Development Advent Roundup For 2018 Rachel Andrew 2018-12-03T16:30:30+02:00

In the run-up to Christmas, there is a tradition across the web design and development community to produce advent calendars, typically with a new article or resource for each day of December. In this article, I have rounded up all those that I have found to be running this year, along with RSS feeds where they can be located, and Twitter accounts to make it easier to follow along.

Q: What’s 14 years old and not at all in a last minute panic?

— 24 ways (@24ways) November 27, 2018

It is a lot of work to publish an article every day — as we well know here at Smashing — and we have a whole team here to help. However, the majority of the calendars published here are true community efforts, often with the bulk of the work falling to an individual or tiny team, with no budget to pay authors and editors. So, please join us in supporting these efforts, share the articles that you enjoyed reading, and join the discussions respectfully. Whether you celebrate Christmas or not, you can certainly learn a lot of new things over the next 24 days.

Perl Advent

Perl Advent has been running since 2000 and is the longest running web advent calendar that I know of. Now old enough to enjoy a festive sherry (in Europe at least), they are back for 2018 for 24 merry days of Perl.

24 Ways

I have to admit a little bias in this subject as 24 Ways is the calendar started by my husband, business partner and fellow Smashing Magazine editor, Drew McLellan. 24 Ways has been running since 2005, when Drew had the idea to post a new article each day of December until Christmas. I don't think either of us expected that on the 30th November 2018, Drew would still be staying up until midnight to check that the first article went live!

PerfPlanet Calendar

Ever since 2009, the Performance Calendar has been helping us to speed up the web. It is up again for the 10th year in a row and is maintained by Sergey Chernyshev.

24 Jours De Web

Next on my list is the French language calendar, 24 Jours De Web. Their first year of publication was 2012, and they are supporting the charity L'Auberge des Migrants, asking readers to donate to help refugees in Calais and the North of France, providing material and food aid, support and advocacy.

Christmas XP

The Christmas XP calendar is a little different. Rather than an article each day, they bring us a WebGL experiment. They have been bringing us experiments in WebGL since 2012.

AWS Advent

If you would like to brush up on your AWS skills then the AWS Advent calendar promises “a yearly exploration of AWS in 24 parts”. You’ll find a wide range of articles on security, deployment strategy and general tips and techniques to be aware of when using Amazon Web Services.

24 Days In December

If PHP is your language of choice, then you can look forward to 24 days of PHP articles over at 24 Days in December. They began publishing in 2015, when Andreas Heigl realized that he missed Web Advent, which had stopped publishing in 2012.

In his wrap-up post after the initial season Andreas wrote:

“I came to realize that I missed the phpadvent/webadvent the last years. And wouldn’t it be great to have a read about something PHP-related for the first 24 Days in December? Even for those people that are not Christian, the time before Christmas has to be a special time. And if it’s only due to the commercials and ads what to present.

So I would wait and see whether someone would arrange something.

But wait! When everyone thinks someone should do something, no-one will do anything.”

I’m happy that the community effort started by Andreas, when he realized that he would need to be the person who did something, is back for another year.

24 Pull Requests

The 24 Pull Requests calendar is a little different. It began in 2012 by asking contributors to “send 24 pull requests between December 1st and December 24th”, so that each contributor would give 24 gifts to the open-source community. Since the project began, 19,859 people have contributed (at the time of writing).

It’s a brilliant idea, and I really love their updated policy for 2018. Recognizing that making Pull Requests is only one way that people contribute to open source, the site has widened the scope of contributions. You can make a Pull Request as before if this is how you contribute. However, you can also complete a form to track your non-PR contribution. Perhaps you wrote a tutorial, ran a meetup, mentored someone — these things are all important contributions too. Read the updated contribution policy for 2018 and join in!

UXmas

UXmas have been publishing their Advent Calendar for UX folk since 2012. I am very impressed that they already have their call for submissions for 2019 up and running. So, you can read some useful articles this year and submit your idea for next year at the same time.

24 Accessibility

24 Accessibility are back for a second year of accessibility posts in the run-up to Christmas. The project was started by Dennis Deacon, an Accessibility Engineer with The Paciello Group, and will be well worth following this year.

Fronteers Advent Calendar

Fronteers are running 24 posts in Dutch on their blog. A lovely touch is that each writer chooses a charity, and the Fronteers organization will donate 75 euros on their behalf.

Advent Of Code

If you would prefer a puzzle over an article, take a look at Advent of Code. Created by Eric Wastl, this is an Advent calendar of small programming puzzles for a variety of skill sets and skill levels that can be solved in any programming language you like.

24 Days In Umbraco

24 Days In Umbraco is a calendar of articles relating to the Umbraco CMS. They have been publishing every December since 2012. Their About page says that the calendar started when:

“... we asked a bunch of Umbraco people if they had a favorite feature, a story or something else that they’d be willing to write a short article about. Then we'd post a new one here every day through December.”

Advent Speaker Tips

New this year is a daily public speaking tip on the Notist blog.

Data-Driven Advent Calendar

The Data-Driven Advent Calendar will contain an article about data journalism each day of Advent. The project is from Journocode who are computer scientists and journalists working in newsrooms across Germany.

A Computer Of One’s Own

Posting to Medium, A Computer Of One’s Own will highlight the lives of the pioneers of the computer age. The project is by Alvaro Videla, with illustrations by Sebastian Navas.

React, JavaScript, Security And Elm Christmas

A set of calendars sponsored by Norwegian company Bekk. They are supporting React Christmas, JavaScript Christmas, Security Christmas and Elm Christmas. That’s a whole lot of articles to curate!

#devAdvent

Not a website but instead a hashtag: Sarah Drasner tweeted that she would be highlighting a person and project every day of Advent using the hashtag #devAdvent. Follow along to get to know some new folks and the work they do.

Tomorrow I’m gonna start a dev advent calendar. Every day from the 1st to the 25th, I’ll highlight a new person and a project of theirs that I’m into and think people would benefit from knowing about. ❤️

— Sarah Drasner (@sarah_edo) November 30, 2018

Share The Ones I Missed!

If you know of a calendar related to web design and development that I haven’t mentioned here, post a comment. Enjoy your seasonal reading!

(il)
Categories: Around The Web

It’s Beginning To Look A Lot Like… December (2018 Wallpapers Edition)

Smashing Magazine - Fri, 11/30/2018 - 3:01am
It’s Beginning To Look A Lot Like… December (2018 Wallpapers Edition) Cosima Mielke 2018-11-30T09:01:00+01:00

What are you looking forward to in December? Spending time with family and friends during the holidays, watching the birds gather in your snowy backyard, or celebrating “Bathtub Party Day” maybe? These are just some of the things that inspired artists and designers to create their desktop wallpapers this month.

All wallpapers in this post come in versions with and without a calendar for December 2018 and can be downloaded for free — as has been our monthly tradition for more than nine years. To cater for an extra bit of December joy, we also collected some wallpaper favorites from past years at the end of the post. Happy December and happy holidays!

Please note that:

  • All images can be clicked on and lead to the preview of the wallpaper,
  • You can feature your work in our magazine by taking part in our Desktop Wallpaper Calendar series. We are regularly looking for creative designers and artists to be featured on Smashing Magazine. Are you one of them?

Christmas Wreath

“Everyone is in the mood for Christmas when December starts. Therefore I made this Christmas wreath inspired wallpaper. Enjoy December and Merry Christmas to all!” — Designed by Melissa Bogemans from Belgium.

Cardinals In Snowfall

“During Christmas season, in the cold, colorless days of winter, Cardinal birds are seen as symbols of faith and warmth! In the part of America I live in, there is snowfall every December. While the snow is falling, I can see gorgeous Cardinals flying in and out of my patio. The intriguing color palette of the bright red of the Cardinals, the white of the flurries and the brown/black of dry twigs and fallen leaves on the snow-laden ground, fascinates me a lot, and inspired me to create this quaint and sweet, hand-illustrated surface pattern design as I wait for the December 2018 snowfall in my town!” — Designed by Gyaneshwari Dave from the United States.

Cozy

“December is all about coziness and warmth. Days are getting darker, shorter and colder. So a nice cup of hot cocoa just warms me up.” — Designed by Hazuki Sato from Belgium.

Sweet Snowy Tenderness

“You know that warm feeling when you get to spend cold winter days in a snug, homey, relaxed atmosphere? Oh, yes, we love it too! It is the sentiment we set our hearts on for the holiday season, and this sweet snowy tenderness is for all of us who adore watching the snowfall from our windows. Isn’t it romantic?” — Designed by PopArt Studio from Serbia.

’Tis The Season Of Snow

“The tiny flakes of snow have just begun to shower and we know it’s the start of the merry hour! Someone is all set to cram his sleigh with boxes of love as kids wait for their dear Santa to show up! Rightly said, ’tis the season of snow, surprise and lots and lots of fun! Merry Christmas!” — Designed by Sweans Technologies from London.

Bathtub Party Day

“December 5th is also known as Bathtub Party Day, which is why I wanted to visualize what celebrating this day could look like.” — Designed by Jonas Vanhamme from Belgium.

Cold Days, Warm Feelings

“Everything that reminds me of the cold days of December. I’ve tried to put everything in one illustration, the snow, hot coffee, mountains, snowman. Also my illustration is blue, it’s a cold color, so this give the illustration more of a winter effect.” — Designed by Dennis van den Heuvel from Belgium.

Oh Deer, It’s Cold!

“December brings more than Christmas only. It brings Winter. It brings the cold.” — Designed by Ellen Theuwen from Belgium.

Portland Snow Globe

Designed by Mad Fish Digital from the USA.

Another Christmas

“‘Christmas waves a magic wand over this world, and behold, everything is softer and more beautiful.’ (Norman Vincent Peale)” — Designed by Suman Sil from India.

A December To Remember

“Of all the months of the year, there is not a month so welcome to the young or so full of happy associations as this last month of the year. A month of giving, celebrations, and holidays. Christmas month is here. Make this last month of the year special for you and the ones around you.” — Designed by Procurement Software from India.

December Music

“Have you ever noticed how people have characteristic (or weird) poses playing instruments? It was my inspiration for drawing very simple and funny stick-figure musicians. Over the years I have drawn everything from violinists to pipa players (Chinese instrument) and from electric guitarists to tubaists. I never get bored of drawing new instrumentalists, ensembles or, in this case, a Christmas band. I wish you a very happy December with lots of music!” — Designed by Franke Margrete from The Netherlands.

The Mountains Shout Freedom

“December is that time of the year where snows starts to fall. It’s from this moment that we can go skiing and snowboarding again. It’s the best time of the year.” — Designed by Jasper Bogaert from Belgium.

Meeeh

“December is when winter begins, so I decided to go for some nice, cold, pastel colors and a wintery scenario. The ram is a family-related symbol and it’s cute, so I named it Meeeh.” — Designed by Ana Matos from Portugal.

Snow & Flake

“December always reminds me of snow and being with other people. That’s why I created two snowflakes Snow & Flake who are best buddies and love being with each other during winter time.” — Designed by Ian De Lantsheer from Belgium.

Midnight Aurora

“I was inspired by beautiful images of the Aurora that I saw on the internet.” — Designed by Wannes Verboven from Belgium.

Enlightened By The Christmas Spirit

“Christmas is the most wonderful time of the year! Once we’ve had our fill of turkey and welcomed the holiday season, we’re constantly encouraged to get into the spirit of the festive season.” — Designed by Mobile App Development from India.

All Of Them Lights

“I created this design in honour of the 9th of December, the day of lights.” — Designed by Mathias Geerts from Belgium.

Brrrr...!

Designed by Oumayma Jamali from Belgium.

Christmas House

Designed by Antun Hiršman from Croatia.

Christmas December

Designed by Think 360 Studio from India.

Separate Holidays

“My parents are divorced so I don’t really like the holidays because it feels like I always have to choose between my mum and dad.” — Designed by Micheline Van Looveren from Belgium.

Human Rights Month

“December is Human Rights Month, so I decided to design a wallpaper for this special month.” — Designed by Jonas Vanhamme from Belgium.

Winter Morning

“Early walks in the fields when the owls still sit on the fences and stare funny at you.” — Designed by Bo Dockx from Belgium.

Homeless Christmas

“December automatically brings to mind the Christmas spirit, the smell of delicious food, and the joy of opening beautiful presents. A couple of years ago I volunteered in a homeless shelter for a while. I even spent New Years’ Eve at the shelter. And ever since, Christmas also reminds me that a lot of others are much less fortunate than me…” — Designed by Kim Haesen from Belgium.

Christmas Feelings

Designed by Lieselotte Philips from Belgium.

Merry Christmas

“‘Christmas gives us the opportunity to pause and reflect on the important things around us.’ (David Cameron)” — Designed by Pinki Ghosh Dastidar from India.

International Tea Day

“December 15 is International Tea Day, so I thought to design a cup of tea, which also represents the cold weather during the winter.” — Designed by Hannah De Wachter from Belgium.

Money Doesn’t Grow On Trees

“I wanted to emphasize people who do not have enough money to celebrate Christmas like everyone else in the world.” — Designed by Angelique Buijzen from Belgium.

Explore The World

“‘We must go beyond textbooks, go out into the bypaths and untrodden depths of the wilderness and travel and explore and tell the world the glories of our journey.’ (John Hope Franklin)” — Designed by Dipanjan Karmakar from India.

Oldies But Goodies

Ready for a trip back in time? Here’s a collection of December goodies from past years that are too good to be forgotten. Please note that these wallpapers don’t come with a calendar.

’Tis The Season To Be Happy

Designed by Tazi from Australia.

Christmas Cookies

“Christmas is coming and a great way to share our love is by baking cookies.” — Designed by Maria Keller from Mexico.

The House On The River Drina

“Since we often yearn for a peaceful and quiet place to work, we have found inspiration in the famous house on the River Drina in Bajina Bašta, Serbia. Wouldn’t it be great being in nature, away from the civilization, swaying in the wind and listening to the waves of the river smashing your house, having no neighbors to bother you? Not sure about the Internet, though…” — Designed by PopArt Studio from Serbia.

Christmas Woodland

Designed by Mel Armstrong from Australia.

Getting Hygge

“There’s no more special time for a fire than in the winter. Cozy blankets, warm beverages, and good company can make all the difference when the sun goes down. We’re all looking forward to generating some hygge this winter, so snuggle up and make some memories.” — Designed by The Hannon Group from Washington D.C.

Joy To The World

“Joy to the world, all the boys and girls now, joy to the fishes in the deep blue sea, joy to you and me.” — Designed by Morgan Newnham from Boulder, Colorado.

Winter Wonderland

“‘Winter is the time for comfort, for good food and warmth, for the touch of a friendly hand and for a talk beside the fire: it is the time for home.’ (Edith Sitwell)” — Designed by Dipanjan Karmakar from India.

December Through Different Eyes

“As a Belgian, December reminds me of snow, cosiness, winter, lights and so on. However, in the Southern Hemisphere it is summer at this time. With my illustration I wanted to show the different perspectives on December. I wish you all a Merry Christmas and Happy New Year!” — Designed by Jo Smets from Belgium.

’Tis The Season (To Drink Eggnog)

“There’s nothing better than a tall glass of Golden Eggnog while sitting by the Christmas tree. Let’s celebrate the only time of year this nectar of the gods graces our lips.” — Designed by Jonathan Shears from Connecticut, USA.

Gifts Lover

Designed by Elise Vanoorbeek from Belgium.

The Southern Hemisphere Is Calling

“Santa’s tired of winter (and the North Pole) and is flying to the South part of the globe to relax a little bit. He deserves a little vacation, don’t you think?” — Designed by Ricardo Gimenes from Sweden.

Have A Minimal Christmas

“My brother-in-law has been on a design buzzword kick where he calls everything minimal, to the point where he wishes people, “Have a minimal day!” I made this graphic as a poster for him.” — Designed by Danny Gugger from Madison, Wisconsin, USA.

Christmas Time!

Designed by Sofie Keirsmaekers from Belgium.

Happy Holidays

Designed by Bogdan Negrescu from the United States.

It’s In The Little Things

Designed by Thaïs Lenglez from Belgium.

Join In Next Month!

Please note that we respect and carefully consider the ideas and motivation behind each and every artist’s work. This is why we give all artists the full freedom to explore their creativity and express emotions and experience through their works. This is also why the themes of the wallpapers weren’t in any way influenced by us, but rather designed from scratch by the artists themselves.

Thank you to all designers for their participation. Join in next month!

Categories: Around The Web

Strategies For Headless Projects With Structured Content Management Systems

Smashing Magazine - Thu, 11/29/2018 - 7:35am
Strategies For Headless Projects With Structured Content Management Systems Knut Melvær 2018-11-29T13:35:41+01:00

This is the guide I wish I had a couple of years ago when running projects with headless Content Management Systems (CMSs). I’ve been a developer, a user-experience and technology consultant, a project manager, an information architect, and an author. The different hats have made me realize that even though we’ve had so-called “headless” CMSs for a while now, there’s still a way to go in thinking about how to use them best.

We are now at a place where many of us rely on JavaScript frameworks for frontend work, using design systems made of components and compositions, rather than just implementing flat page layouts. There’s a lot of traction towards the JAMstack and isomorphic/universal apps that run both on the server and the client. The final piece of the puzzle then is how we manage all the content.

Traditional CMSs are adding APIs to serve content through network requests and the JSON format. In addition, “headless” CMSs have emerged to exclusively serve content through APIs. My argument in this article, though, is that we should spend less time talking about “headless”, and more about “structured content”, because that is the essential quality of these systems. These systems have lots of implications for our craft, and we still have a way to go in figuring out good patterns for how we should deal with these technologies.

Coming to technology consulting from a background in humanities, I have learned a lot about how to organize and work with web projects that take a content-centric approach — both with the newer API-based as well as the traditional CMSs. I have come to appreciate how getting started early with actual live content from a CMS, in a cross-disciplinary setting, not only makes it possible to uncover complexities at an earlier stage, but also lends agency to everyone involved, and gives opportunities to reflect on the challenges and possibilities of technology and design in its broadest sense.

In this article, I’ll suggest some overarching strategies, with some concrete, real-world examples on how to think about working with structured content. At the time of writing, I have just started working for a SaaS company that provides such a content management service, for hosting content delivered over APIs. I will make references to it, both because of my past experience with it in projects I was involved in as a consultant, but also because I think it aptly illustrates the points I want to make. So consider this a disclaimer of sorts.

That being said, I have been thinking about writing this article for a couple of years, and I have strived to make it applicable to whatever platform you choose to go with. So without further ado, let’s jump twenty years back in time in order to understand a bit more where we are today.

First Moves With Web Standards

In the early 2000s, the Web Standards movement inspired the field to change its ways of working. From a “layout-first” approach, it directed our attention towards how content on a page should be marked up semantically using HTML: a website’s menu isn’t a <table>, it’s a <nav>; a heading is not a <b>, it’s an <h1>. It was a significant step towards thinking about the different roles web content plays in order to help users find, identify and take it in.

The Web Standards movement introduced the argument that semantic markup improved accessibility, which also improved its ranking in the Google search results. It also marked a shift in how we thought about web content. Your website was no longer the only place your content was represented. You also had to think about how your web pages were presented in other visual contexts, like in search results or screen readers. This was later fueled by social media and embedded previews of shared links. The mindset shifted from how the content should look, to what it should mean. This also happens to be the key to working with structured content.

With the adoption of pocket-size devices connected to the Internet, the web suddenly got a serious contender in apps. The competition, however, was mostly for the eyeballs of the end user. Many organizations still needed to distribute information about their products and services in both their apps and their different web presences. Concurrently, the web matured, and JavaScript and AJAX made it easier to connect different sources of content through APIs. Today, we have GraphQL and tooling that make content fetching and state management simpler. And so the bits of the technological puzzle begin to fall into place.

“Create Once, Publish Everywhere”

Though it’s mostly described as a “technological shift”, the embedding of content in JSON payloads (traveling along HTTP tubes) will have an outsized impact on how we think about digital content and the workflows around it. In some ways, it already has. Almost ten years ago, National Public Radio’s (NPR) Daniel Jacobson guest blogged at programmableweb.com about their approach, summed up in the acronym COPE, which stands for “Create Once, Publish Everywhere”. In the article, he introduces a content management system providing content to multiple digital interfaces through an API — not through an HTML rendering machine — as most CMSs at the time (and arguably now) did.

Illustration of NPR’s COPE system. Published originally on programmableweb.com (Oct 13, 2009)

NPR’s COPE “data management layer” is what would become the notion of “a headless CMS”. In the early days of COPE, it was achieved by structuring the content in XML. Today, JSON has become the dominant data format for transferring data over APIs, including internet of things devices, and other systems outside the web. If you want to exchange content with chatbots, voice interfaces, and even software for visual prototyping, you very often talk HTTP with a JSON accent.

“Uncoining” The Term “Headless CMS”

According to Google Trends, searches for “headless CMS” gained in popularity as late as 2015, i.e. six years after NPR’s COPE article. The term “headless” (at least in relation to digital technology and not late 18th-century French aristocracy) has been used a good while longer to talk about systems that run without a graphical user interface.

Note: One could argue that a command line interface is indeed “graphical” such as software on servers or testing environments (but let’s save that for another article).

I’m of two minds about calling these new CMSs “headless”. We could as well call them “polycephalic” — that which has many heads. They are the Hydras and Cerberuses of CMSs. “Headless” also defines these systems by the capability they lack (i.e., a template engine for rendering web pages), instead of defining them by their true strength: making it possible to structure content without the constraints of the web. That being said, as of today, many of the solutions in this category could also be called “Nearly Headless Nick”, because the editing interface is still tightly coupled to the system. Their “headlessness” arises from their lack of a templating engine, that is, the machinery producing markup from content.

Note: I would almost definitely use a CMS called “Mimsy-Porpington” (known from the Harry Potter universe) though.

Instead, they make content available through an API, hence giving you more flexibility for how, what, and where you want to display and use this content. This makes them perfect companions to popular JavaScript frontend frameworks such as React, Angular, and Vue. And despite the claim of being able to deliver content to “websites, apps, and devices”, most of them are still limited by how web content works. This is most noticeable in the way most handle rich text — storing it either as HTML or Markdown.
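
To see why that matters, compare the same sentence stored as an HTML string and as a hypothetical structured-text representation (the format below is invented for illustration; actual structured-text specifications vary between vendors). The structured form carries no markup assumptions, so a consumer can render it as HTML for the web, as SSML for a voice assistant, or as plain text for a chatbot:

// The same sentence stored as an HTML string...
const asHtml = '<p>Our <strong>new</strong> product is out.</p>';

// ...and as a hypothetical structured representation, free of markup
// assumptions, so each channel can render it in its own way.
const asStructuredText = [
  {
    type: 'block',
    style: 'normal',
    children: [
      { type: 'span', text: 'Our ' },
      { type: 'span', text: 'new', marks: ['strong'] },
      { type: 'span', text: ' product is out.' },
    ],
  },
];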

Traditional CMSs have also started adding somewhat generic APIs in addition to their template rendering systems and call this “decoupled”, as a way to distinguish themselves from their fresh competitors. “All this, and APIs, too!” is the claim. Some of these CMSs are also pretty agnostic when it comes to content modeling. For example, Craft CMS makes almost no assumptions about your content model when you first install it. WordPress is also moving towards using APIs for content delivery. I suspect the gap between the old players in the CMS field and the new ones will get narrower as we go along.

Nonetheless, putting content management behind APIs (instead of an HTML renderer) is an important step towards more sophisticated ways of working in an age where an organization’s text, images, videos, and media are digitized and exposed to internal and external users and customers. It’s time, though, to move away from defining these systems by the frontend rendering capabilities they lack, and towards what they really can do for us: give us a way to work with structured content. So, should we be calling them “Structured Content Management Systems”? As in, “No Bob, this isn’t your usual CMS. This is an SCMS, trust me, it’s going to be a thing.”

It’s Not About The Heads, It’s About Structured Content

The most radical change that the Structured Content Management Systems (SCMS) imposes is a move away from arranging content according to a page hierarchy to where you are free to structure content for whatever purpose you see fit. Avoiding duplicate content is a clear advantage because it increases reliability and decreases administrative burden (you don’t have to cope with duplicated content across multiple channels). In other words: Create Once, Publish Everywhere. If you just have to update your product description once — in one system — and it updates wherever your product is exposed to the user, that’s clearly an advantage.

While SCMS vendors frequently use “your website and an app” to justify thinking differently about page structure, you don’t have to cross the river to draw benefits from structured content. With the popularity of JavaScript frameworks, it’s more and more common to build websites as a composition of individual components that can be “filled” with different content depending on state and context. You may have a product card that appears in many different contexts throughout your web application. We’re seeing that modern web development moves away from setting documents and pages to composing components according to a mixture of user input, algorithms, and customization.

These trends for how design systems are made, and how we are encouraged to work in teams through processes of testing, learning, and iteration, makes the field of content management ripe for some new ways of thinking. Some patterns have emerged, but we still have many ways to go. Therefore, based on my experience from working in teams and projects that have put content front and center, and as now part of a team that builds a service for it (and I urge you to be aware of any bias here), I want to put forth some strategies that I believe can be helpful and create points for further discussion.

1. Approach Content In Multi-Disciplinary Teams

I believe it is a thing of the past that a graphic designer can hand over static, pixel-perfect pages to a frontend developer whose responsibility is to “implement” the design. We now make design systems consisting of smaller components, laid out in compositions that come with multiple possible states out of the box. More often than not, these components have to be resilient to user-generated input, which means that the sooner you introduce live content into the process, the better. A frontend developer’s responsibility isn’t to reproduce a graphic designer’s vision; it’s to maneuver a complex field of how browsers render HTML, CSS, and JavaScript, making sure that the user interfaces are responsive, accessible and performant.

When working as a technology consultant at Netlife (a consultancy specialized in user experience), I saw great steps being made towards collaboration between developers, designers, and user researchers. Even though our content editors were always involved in the project from the get-go, their contributions didn’t enter the design workflow, mainly because of technical friction.

The bottleneck was often a legacy CMS we couldn’t touch, or the time it took to build the content structure because it depended on the design layout. This often resulted in doubled work: We made an HTML prototype, often based on content parsed from Markdown files, which had to be re-implemented in the CMS stack once user testing was done and everyone was pixel-perfect happy. This was an expensive process, as limitations in the CMS were discovered late. It also put pressure on all parties to “get it right the first time” and left less space for the kind of experimentation you would want in a design project.

Multi-Disciplinary Work Requires Nimble Systems

Moving to an SCMS in which it took minutes to code up a content model (where fields and an API were ready instantly) turned our process upside down — and for the better. I remember sitting with the content editor of the new u4.no in the project’s first days, talking through how they worked and would like to work with their content. Rather quickly, we translated our conclusions into simple JavaScript objects that were instantly transformed into an editing environment in the browser, figuring out helpful titles and descriptions for the fields as we went. We talked about how they wanted text snippets they could reuse across different pages and contexts, which they called “nuggets” in-house, and we created them then and there.
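To give a flavor of what that looked like, here is a minimal sketch of such a schema object in Sanity’s format; the field names and descriptions are invented for illustration:

export default {
  name: 'nugget',
  title: 'Nugget',
  type: 'document',
  fields: [
    {
      name: 'title',
      title: 'Internal title',
      type: 'string',
      description: 'Helps editors find this nugget again later'
    },
    {
      name: 'body',
      title: 'Body text',
      type: 'array',
      of: [{ type: 'block' }] // rich text stored as portable text blocks
    }
  ]
}

A plain object like this is all it takes to get editing fields and API access for a new content type, which is what made the then-and-there exploration possible.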

Allowing for this kind of exploration early in the project development — a content editor and a developer talking together while the interface was being made in front of us — felt powerful: knowing that we could continue designing the frontend in React while she and her colleagues began working with the content, and not worrying about painting ourselves into a corner, as we often did with CMSs in which the structure was tightly coupled to how you had to code up the frontend part of it.

Example from u4.no’s custom editor environment in Sanity, with its style guide carefully and contextually integrated with the fields. (Large preview)

A Content System Should Allow For Experimentation And Iteration

Creative redesign projects aside, a system for structured content should also allow you to keep improving, testing, and iterating on your content as part of your whole design system. UX designers should be able to quickly prototype with real content using tools like Sketch or Framer X. You should be able to augment content management with quantitative measurements, be it readability scales or how the content performs where it’s used.

Note: I used the term “UX designers” above despite having the opinion that we all should — in some way — relate to the process of making good user experiences. We’re all UX designers in our different strands of design.

Example of quantitative readability analysis in a rich text editor. (Large preview)

Working with structured content requires a bit of getting used to if you’re used to just WYSIWYG-ing content directly on your web page layout. Yet, it lends itself to a conversation that is more in line with how the digital design field is moving. Structured content lets a team of designers, developers, content editors, user researchers, and project managers collectively think about how a system should work to support users’ needs and strategic goals. It also requires you to think differently about how content is structured, which takes us to the next strategy.

2. You Might Not Need A Pecking Order

One of the most notable changes for many is that systems for structured content are geared towards collections and lists of documents, and not the folder-like hierarchies that reflect website navigation structures. These structures stop making sense as soon as some of the content is to be used in other contexts — be it chatbots, print media or other websites. Traditional CMSs have tried to mitigate this by allowing for reusable content blocks, but these still need to be placed on page layouts and are cumbersome to reason with through APIs.

Folder-based content management in Episerver. This screenshot isn’t old, by the way. Published on episerver.com. (Large preview)

Each Page To Its Own

As laid out in The Core Model, when one of your main referrers is either Google or sharing on social media, you should consider every page a landing page. And if you look at the distribution of page views, you will notice that some of your pages are way more popular than others. Unless you run a news website, those tend not to be the news items, but the pages that let users achieve whatever they hoped to achieve on your website. They are where business actually happens.

Your digital content should be in service of the intersection between your own strategic goals and the individual goals of your users. When the digital agency Bengler (sanity.io’s predecessor) made the new website for oma.eu, they didn’t structure the content after an elaborate hierarchy of pages. They made content types that reflected the organization’s everyday reality, i.e. projects, persons, and publications. In fact, the OMA website is almost completely flat in terms of a content hierarchy, and the front page is generated from a mix of algorithmic and editorial rules.

How sanity.io structures their content (Large preview)

So, how to go about it? I believe in a mix of thinking about your content as a reflection of your organization’s mental model, and considering what it needs to be in order to be useful for whatever your users need it for.

Here’s a basic example: When building a page of employees, you should probably start with a content type called person. A person can have a name, contact info, an image, different organizational roles, and a short biography. A person document can be reused in contact lists, article author bylines, chat support interfaces, and building access badges. Perhaps you already have an in-house system that knows who these people are and that comes with an API? Great, then synchronize with that.
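As a sketch of what such a reusable document might look like when it comes out of the API (the field names here are invented for illustration):

{
  "_type": "person",
  "name": "Ada Lovelace",
  "email": "ada@example.com",
  "roles": ["author", "advisor"],
  "bio": "A short biography that can be reused in bylines, contact lists, and badges."
}

The same document can feed the staff page, the chat support widget, and the access badge printer, because nothing about it assumes a particular layout.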

Don’t Get Lost In An Ontological Rabbit Hole

It’s useful to return to Google’s way of indexing web pages and how they’re trying to index the world’s information. That’s why they are expending time and effort on linked data (RDFa, microformats, JSON-LD). If you annotate your web pages with JSON-LD elements, you will appear more prominently in search results. It’s also relevant when your information should be spoken by voice assistants and displayed in an assistant UI. If your content is already structured and easily available in an API, it will be relatively easy for you to implement these microformats.
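As an illustration, a person annotated with the schema.org vocabulary in JSON-LD could look something like this (the values are, of course, placeholders):

<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Doe",
  "jobTitle": "Research Director",
  "url": "https://example.com/people/jane-doe"
}
</script>

If you already have a structured person type in your content API, generating this markup is mostly a matter of mapping fields.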

I’m not sure I would recommend going all in on the ontologies of schema.org and various linked data resources though, at least not for editor purposes. You can quickly get lost in a rabbit hole of trying to make perfect platonic structures where it all fits.

Newsflash: It never will, because the world is a messy place, and because people think about stuff differently.

It’s more important to structure your content in a system that makes intuitive sense and lends itself to adaptation as needs change. This is why it’s important to start with content modelling early in the design and development process — you need to learn how the content needs to be used.

Abstract From Reality, Not From CMS Conventions

It can be tempting to just follow whatever conventions your CMS comes with. Remember how WordPress gives you “Posts” and “Pages”, and suddenly everything needs to be fitted into those boxes? A WYSIWYG rich text field is flexible in that it allows you to put in whatever you want, but the content will not be structured and easily adaptable — it’s only flexible once. But you need some place to begin mapping your content model. My suggestion is to begin by talking to people, i.e. the authors and readers.

How do people talk about the content internally? What do people call different things? You could run a free-listing exercise, a method used by ethnographers to map folk-taxonomies. For example, you could ask:

“Name the different types of content in our organization.”

Or, on a more specific level:

“Can you name the different types of reports we have in this organization?”

The point of this exercise is to tease out the internalized taxonomies people carry, and not their opinions or feelings about things (something that often tends to derail design processes). You don’t have to ask many people before you have a pretty exhaustive list to work from. You’ll probably find that parts of your list come from conventions in your current CMS (that’s good to know if you are doing some remodelling). Now you should talk with your editors and try to pin down what they need the content to do.

Some questions you can ask could be the following:

  • Do you need to use this content in more than one place? Where?
  • What are the different relationships between the content types?
  • Where do we need the content to be displayed today, and tomorrow?
  • In which ways do we need content to be sorted? Can the ordering be done algorithmically, by the user, or does it have to be done manually?
  • Are there systems or databases in other systems that we can synchronize with in order to prevent duplication?
  • Where do we want the canonical content to live? Should the SCMS be the source for it, or just augment existing content, e.g. marketing copy for products living in a product management system?

This doesn’t mean that you have to throw the traditional information architecture out with the now-lukewarm bathwater. It still makes sense to have articles as a content type, if articles are a part of your organization’s content reality. But perhaps you don’t really need the abstract convention of categories, because these articles have references to the types of services or products they mention. This relation allows for querying these articles in circumstances where it makes sense, without requiring someone to have “article category management” as part of their job description.

Articles are also what make it hard to decouple content completely from the presentation layer. We are so used to thinking about the layout and styling of an article, but in an age where you are expected to host your own content on your own domain and then syndicate it to platforms such as medium.com, you have already given up control over visual presentation. This takes us to the next strategy.

3. Presentation Contexts Are Also Content Types

Be Redesign Ready

You want to be able to adapt and quickly change the navigation structure of your website, without having to either rebuild your whole content architecture or fight against a stringent folder-like interface. You also want to be able to have some content hierarchy, because it sometimes makes sense, and sometimes it goes deeper than two levels, where most interfaces in the department of API-first CMSs fail to deliver much help.

Interface for arranging content in a hierarchy (called “Structure”) in Craft CMS. Content defined by their place in one hierarchy may make sense in some cases, but it’s a legacy from menu navigation that stops making sense when the content is reused across channels or placed by software like targeting algorithms. Published on craftcms.com (Large preview)

Interestingly, content management systems for chatbots tend to use similar hierarchical structures for arranging intent trees and dialog flows. This goes to show that content hierarchies play different roles in different channels, but often they provide ways of navigating through content. A way to approach this is to make dedicated types for navigation, where you arrange content by references, and build either routes for web pages, menus, or paths for conversational interfaces.
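A minimal sketch of such a navigation type, again in Sanity’s schema format with hypothetical names, could be a route document that pairs a URL slug with a reference to the content it should present:

export default {
  name: 'route',
  title: 'Route',
  type: 'document',
  fields: [
    { name: 'slug', title: 'Path', type: 'slug' },
    { name: 'page', title: 'Page', type: 'reference', to: [{ type: 'page' }] }
  ]
}

The navigation structure then lives in its own documents and can be rearranged without touching the content it points to.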

Relationship Advice

References (or relationships) are what make a system for structured content possible, and they’re really at the core of everything we deal with when it comes to content on the web (it’s the reason it’s metaphorically called the web in the first place). Being able to make references between bits of content is a very powerful thing, but it can also be costly in terms of how back-ends are able to write and retrieve such data. So you may have to think differently if you have multitudes of documents, since scale seldom comes for free.

It’s also worth considering that you don’t always need an explicit reference to join data; most often it can be done by criteria that have to do with the content, e.g. “give me all persons and all buildings within this geolocation”. The buildings and persons don’t need an explicit reference to each other, as long as the relation is implied by a location field on both content types.
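A sketch of this kind of implicit join, here in plain JavaScript over data fetched from a content API (the field names are made up):

// Both content types carry a location field, so no explicit reference is needed.
const persons = [{ name: 'Kari', location: 'oslo' }, { name: 'Ola', location: 'bergen' }]
const buildings = [{ name: 'Town Hall', location: 'oslo' }]

const inLocation = (items, location) => items.filter(item => item.location === location)

console.log(inLocation(persons, 'oslo'))   // persons in Oslo
console.log(inLocation(buildings, 'oslo')) // buildings in Oslo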

Example of a simple routing type for sanity.io. Notice that we have a “page” type, too. (Large preview)

The page type is just a series of web page specific compositions where it’s possible to reuse other content types. (Large preview)

References between presentation types and other content types are useful when you can’t leave it to an algorithm in the presentation layer to join data. It may seem a bit cumbersome to explicitly draw up these presentation types and make compositions of referenced content, but it’s a solution to a problem you’ll often meet with SCMSs: it’s hard to know where content is being used. By including navigation types, you explicitly tie content to presentation, but not to just one presentation. This makes it possible to work with navigational structures independently of the content they lead to.

For example, in the screenshots we have tied Google Experiments to the routes type, allowing us to add multiple pages composed of references to content, which means that we can run A/B tests with next to no content duplication. Since we also get a warning if we try to delete content that is referenced by other documents, this way of structuring keeps us from deleting something we shouldn’t.

Relationships across content types are a double-edged sword. They increase sustainability and are key to avoiding duplication. On the other hand, you can easily cut yourself, because you create dependencies between content which (if not made transparent) can lead to unintended changes across the channels where your data is displayed. It would, for example, be bad if we could remove a “page” used by a “route” without warning.

This leads us to the next strategy, which (granted!) is partly beyond the power of the normal user as of today since it has to do with how different systems are architected. Still, it’s worth thinking about.

4. Don’t Put Rich Text In A Corner

Rich Text Is More Than HTML

I can understand why HTML is given such prevalence in digital content, but it’s worth remembering where it comes from: it’s a subset of SGML, a generalized way of structuring machine-readable documents. As Claire L. Evans points out in the wonderful book “Broad Band: The Untold Story of the Women Who Made the Internet” (2018), there was already a vibrant community of people thinking about linked documents when HTML was introduced. Tim Berners-Lee’s proposal was a lot simpler than many of the other systems at the time, but that’s probably why it caught on and made the — as of now — open, free web possible.

When you’re in a browser on the world wide web, HTML is great. If you’re a writer who wants to publish something that ends up in simple HTML, Markdown is great. But if you want your rich text content to be easily integrated into something that isn’t a browser, or into a popular JavaScript framework that lets you augment HTML with JavaScript in complex components (yes, we’re talking about React and Vue.js), having HTML in your API responses begins to be a bit of a hassle — especially if you need to parse it.

Almost everyone does it though, even the new kids on the block: I went through all the vendors on headlesscms.org, browsed through the documentation, and signed up for those who didn’t mention it. With two exceptions, they all stored rich text either as HTML or Markdown. That’s fine if all you do is use Jekyll to render a website, or if you enjoy using dangerouslySetInnerHTML in React. But what if you want to reuse your content in interfaces that aren’t on the web? Or if you want more control and functionality in your rich text editor? Or just want it to be easier to render your rich text in one of the popular frontend frameworks, and have your components take care of different parts of your rich text content? Well, you’ll either have to find a smart way to parse that Markdown or HTML into what you need, or, more conveniently, just have it stored more sensibly in the first place.

For example, what if you want to output your rich text to a voice interface? We know that voice assistants are increasing in popularity. The most popular platforms for these assistants can receive the text for spoken content through APIs. Then you want to take advantage of something like Speech Synthesis Markup Language (SSML). A system for portable text takes a more agnostic approach to rich text, which lets you adapt the same content for different kinds of interfaces.
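To give an idea, a short SSML fragment (hand-written here for illustration) that a voice assistant could render might look like this:

<speak>
  Your order has shipped.
  <break time="300ms"/>
  It should arrive <emphasis level="moderate">tomorrow</emphasis>.
</speak>

Getting from HTML-flavored rich text to markup like this requires parsing; getting there from a structured representation is mostly serialization.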

Example of a rich text editor with speech synthesis capabilities (compatible with, but not restricted to, SSML). (Large preview)

Recommended reading: Experimenting With The SpeechSynthesis Interface

Portable Text As An Agnostic Rich Text Model

Portable text is also useful when you’re primarily producing content for the web. What if you want the possibility to nest and augment your text with data structures, such as a rich text footnote, an inline editorial comment, or an alternative phrase or wording for A/B-testing cases? Markdown and HTML quickly fall short, and you’ll have to rely on adding something like special shortcode tags, just as WordPress has solved it. With portable text, you have an agnostic representation of content structures, without having to marry a certain implementation. Your content ends up being more sustainable and flexible for new redesigns and implementations.
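For reference, a minimal portable text value is just an array of typed blocks; this sketch follows the shape described in the specification:

[
  {
    "_type": "block",
    "style": "normal",
    "markDefs": [],
    "children": [
      { "_type": "span", "text": "Structured content is ", "marks": [] },
      { "_type": "span", "text": "portable", "marks": ["em"] }
    ]
  }
]

Because every span and mark is addressable data, a serializer can turn the same value into HTML, SSML, or whatever the target channel needs.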

There are also other advantages to portable text, especially if you want to be able to edit content collaboratively and in real time (as you do in Google Docs); for that, you need to store rich text in a structure other than HTML. If you do, you’ll also be able to take advantage of microservices and bots, such as spaCy, in order to annotate and augment your content without locking the document.

As for now, portable text isn’t widely adopted, but we’re seeing movements towards it. The specification isn’t very complex and can be explored at portabletext.org.

5. Make Sure Your SCMS Is In Service Of Your Editors, And Not The Other Way Around

Digital content isn’t just used for your organization’s online web page leaflets anymore. For most of us, it encapsulates and defines how our organization is understood by the world, both from those within it and those outside: from product copy and microcopy to blog posts, chatbot responses, and strategy documents. Millions of us have to log into some CMS every day and navigate interfaces that were imagined twenty years ago, built on the assumptions of people who never made much effort to user test or challenge their interfaces. Countless hours have been wasted trying to fit a modern frontend experience into a page layout machine. Fortunately, this will soon be a thing of the past.

As a technology consultant, I had to read through pages of technical specifications whenever someone thought it was time to acquire a new CMS. There were demands about which server architecture it should run on (Windows servers, of course) and about its ability to render “carousels” and support “editing web pages in place”, despite the same document requesting a “modular redesign”. When editors had been allowed to contribute to these specifications, their requests were often dated to what they had gotten used to. They seemed unaware that they could demand better user experiences, because enterprise software has to be big, lumpy and boring.

This is partly the fault of those of us making these systems. We tend to communicate technology features and specifications, and less what the everyday situation of working with these systems looks like. Sure, for a frontend designer, support for GraphQL is shorthand for how conveniently she is able to work against the back-end, but on a higher level, it’s about the system’s ability to accommodate emerging workflows, where a content model could survive visual redesigns and design systems should be resilient to changes in their content.

Questions To Ask Of Your (S)CMS

If we are to embrace design processes, we can’t know, prior to solving the problem, whether the users’ tasks are best solved by making carousels (newsflash: most probably not), or whether A/B-testing makes sense for your case, even though it sounds cool.

Instead, ask questions like this:

  • Is it possible, and how exactly will multi-disciplinary teams work with this system?
  • How easy is it to change and migrate the content model?
  • How does it deal with file and image assets?
  • Has the editorial interface been user tested?
  • To what extent can the system be configured and customized to special workflows and needs of the editorial team?
  • How easy is it to export the content in a moveable format?
  • How does the system accommodate collaboration?
  • Can content models be version controlled?
  • How easy is it to integrate the system with a larger ecosystem of flowing information?

The goal of these questions is to explore to what degree a content management system allows a cross-disciplinary team to work effortlessly together, without too many bottlenecks or long deployment cycles. They also push the focus towards what the content should be doing, and away from how things should look in a given context. Leave that for the design process, where user testing will probably challenge the assumptions one may have when looking into getting a new content system.

There are, of course, many additional factors that have to be taken into consideration. The easiest thing to assess is the fiscal cost of software licenses and API-related costs if you are on a hosted service. The invisible cost (in time and attention spent by the team working with the system) is harder to estimate. From my experience, many of the SCMSs, in combination with one of the popular frontend frameworks, can significantly cut development time and allow for an agile (there’s my coin for the swear jar) design process, with the caveat that your team is prepared to solve some of the problems that traditional CMSs solve out of the box.

Towards Structured Content

The ways we work with digital content have changed dramatically since the World Wide Web made working with interconnected documents mainstream. Organizations, businesses, and corporations have amassed gigabytes of this content, which is now stuck in rigid page hierarchies, HTML markup, and clunky user interfaces.

Using a Structured Content Management System can be a great way to free your content from a paradigm that is beginning to feel its age. But it isn’t a trivial exercise, and success comes from being able to work multi-disciplinarily and put your content model to the test. You need to let go of some conventions you have grown used to from CMSs designed to output hierarchical websites. That means you need to think differently about ordering content, make presentation types in order to orchestrate content across multiple channels more easily, and consider how you structure rich text so that it can be used outside of HTML contexts.

This article deals with some of the high-level concerns of working with SCMSs. There are, of course, loads of exciting challenges when you start working with this in your team. You have to rethink things we’ve taken for granted for many years, but that’s probably a good thing, because we are forced to evaluate our content not only by its place on a digital page, but by its role in a larger system that works for whatever goals your organization and your users may have.

I believe that we can achieve content models that are more meaningful and easier to sustain in the long run, which means saving time and expense. It means more flexibility in terms of inventing new outputs and services, and less lock-in with software vendors, because a well-made Structured Content Management System will make it easy for you to take your content and go elsewhere. And that makes for some interesting competition. Hopefully, all in favor of the users.

(dm, ra, il)
Categories: Around The Web

A Complete Guide To Routing In Angular

Smashing Magazine - Wed, 11/28/2018 - 9:00am
A Complete Guide To Routing In Angular A Complete Guide To Routing In Angular Ahmed Bouchefra 2018-11-28T15:00:49+01:00 2018-12-18T10:20:11+00:00

In case you’re still not quite familiar with Angular 7, I’d like to bring you closer to everything this impressive front-end framework has to offer. I’ll walk you through an Angular demo app that shows different concepts related to the Router, such as:

  • The router outlet,
  • Routes and paths,
  • Navigation.

I’ll also show you how to use Angular CLI v7 to generate a demo project where we’ll use the Angular router to implement routing and navigation. But first, allow me to introduce you to Angular and go over some of the important new features in its latest version.

Introducing Angular 7

Angular is one of the most popular front-end frameworks for building client-side web applications for the mobile and desktop web. It follows a component-based architecture where each component is an isolated and re-usable piece of code that controls a part of the app’s UI.

A component in Angular is a TypeScript class decorated with the @Component decorator. It has an attached template and CSS stylesheets that form the component’s view.

Angular 7, the latest version of Angular, has recently been released with new features, particularly in CLI tooling and performance, such as:

  • CLI Prompts: Common commands like ng add and ng new can now prompt the user to choose the functionality to add to a project, such as routing and the stylesheet format.
  • Adding scrolling to Angular Material CDK (Component DevKit).
  • Adding drag and drop support to Angular Material CDK.
  • Projects now also default to using bundle budgets, which warn developers when their apps exceed size limits. By default, warnings are thrown when the bundle size exceeds 2MB and errors at 5MB. You can change these limits in your angular.json file (see the sketch below).
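For instance, the production configuration in a generated angular.json contains a budgets section along these lines (the thresholds shown are the defaults, adjust them to taste):

"budgets": [
  {
    "type": "initial",
    "maximumWarning": "2mb",
    "maximumError": "5mb"
  }
]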

Introducing Angular Router

Angular Router is a powerful JavaScript router built and maintained by the Angular core team that can be installed from the @angular/router package. It provides a complete routing library with the possibility to have multiple router outlets, different path matching strategies, easy access to route parameters and route guards to protect components from unauthorized access.

The Angular router is a core part of the Angular platform. It enables developers to build Single Page Applications with multiple views and allows navigation between these views.

Let’s now see the essential Router concepts in more detail.

The Router-Outlet

The Router-Outlet is a directive, available from the router library, that marks where the Router inserts the component matched by the current browser URL. You can add multiple outlets in your Angular application, which enables you to implement advanced routing scenarios.

<router-outlet></router-outlet>

Any component that gets matched by the Router is rendered as a sibling of the router outlet.

Routes And Paths

Routes are definitions (objects) comprising at least path and component (or redirectTo) attributes. The path refers to the part of the URL that determines a unique view that should be displayed, and component refers to the Angular component that needs to be associated with the path. Based on the route definitions that we provide (via the static RouterModule.forRoot(routes) method), the Router is able to navigate the user to a specific view.

Each Route maps a URL path to a component.

The path can be empty, which denotes the default path of the application; it’s usually the start of the application.

The path can take a wildcard string (**). The router will select this route if the requested URL doesn’t match any paths for the defined routes. This can be used for displaying a “Not Found” view or redirecting to a specific view if no match is found.
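As a sketch, a wildcard route pointing to a hypothetical NotFoundComponent would look like this (it should be the last entry in the routes array, since the router matches routes in order):

{ path: '**', component: NotFoundComponent }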

This is an example of a route:

{ path: 'contacts', component: ContactListComponent}

If this route definition is provided to the Router configuration, the router will render ContactListComponent when the browser URL for the web application becomes /contacts.

Route Matching Strategies

The Angular Router provides different route matching strategies. The default strategy simply checks whether the current browser URL is prefixed with the path.

For example our previous route:

{ path: 'contacts', component: ContactListComponent}

Could be also written as:

{ path: 'contacts', pathMatch: 'prefix', component: ContactListComponent }

The pathMatch attribute specifies the matching strategy. In this case, it’s prefix, which is the default.

The second matching strategy is full. When it’s specified for a route, the router will check if the path is exactly equal to the path of the current browser’s URL:

{ path: 'contacts', pathMatch: 'full', component: ContactListComponent }

Route Params

Creating routes with parameters is a common feature in web apps. Angular Router allows you to access parameters in different ways:

You can create a route parameter using the colon syntax. This is an example route with an id parameter:

{ path: 'contacts/:id', component: ContactDetailComponent }

Route Guards

A route guard is a feature of the Angular Router that allows developers to run some logic when a route is requested, and based on that logic, it allows or denies the user access to the route. It’s commonly used to check if a user is logged in and has the authorization before they can access a page.

You can add a route guard by implementing the CanActivate interface, available from the @angular/router package, and implementing the canActivate() method, which holds the logic to allow or deny access to the route. For example, the following guard will always allow access to a route:

class MyGuard implements CanActivate {
  canActivate() {
    return true;
  }
}

You can then protect a route with the guard using the canActivate attribute:

{ path: 'contacts/:id', canActivate: [MyGuard], component: ContactDetailComponent }

Navigation Directive

The Angular Router provides the routerLink directive to create navigation links. This directive takes the path associated with the component to navigate to. For example:

<a [routerLink]="'/contacts'">Contacts</a>

Multiple Outlets And Auxiliary Routes

Angular Router supports multiple outlets in the same application.

A component has one associated primary route and can have auxiliary routes. Auxiliary routes enable developers to navigate multiple routes at the same time.

To create an auxiliary route, you’ll need a named router outlet where the component associated with the auxiliary route will be displayed.

<router-outlet></router-outlet>
<router-outlet name="outlet1"></router-outlet>

  • The outlet with no name is the primary outlet.
  • All outlets should have a name except for the primary outlet.

You can then specify the outlet where you want to render your component using the outlet attribute:

{ path: "contacts", component: ContactListComponent, outlet: "outlet1" } Creating An Angular 7 Demo Project

In this section, we’ll see a practical example of how to set up and work with the Angular Router. You can see the live demo we’ll be creating and the GitHub repository for the project.

Installing Angular CLI v7

Angular CLI requires Node 8.9+, with NPM 5.5.1+. You need to make sure you have these requirements installed on your system then run the following command to install the latest version of Angular CLI:

$ npm install -g @angular/cli

This will install the Angular CLI globally.

Installing Angular CLI v7 (Large preview)

Note: You may want to use sudo to install packages globally, depending on your npm configuration.

Creating An Angular 7 Project

Creating a new project is one command away; you simply need to run the following command:

$ ng new angular7-router-demo

The CLI will ask you whether you would like to add routing (type N for No, because we’ll see how to add routing manually) and which stylesheet format you would like to use (choose CSS, the first option), then hit Enter. The CLI will create a folder structure with the necessary files and install the project’s required dependencies.

Creating A Fake Back-End Service

Since we don’t have a real back-end to interact with, we’ll create a fake back-end using the angular-in-memory-web-api library which is an in-memory web API for Angular demos and tests that emulates CRUD operations over a REST API.

It works by intercepting the HttpClient requests sent to the remote server and redirects them to a local in-memory data store that we need to create.

To create a fake back-end, we need to follow these steps:

  1. First, we install the angular-in-memory-web-api module,
  2. Next, we create a service which returns fake data,
  3. Finally, configure the application to use the fake back-end.

In your terminal run the following command to install the angular-in-memory-web-api module from npm:

$ npm install --save angular-in-memory-web-api

Next, generate a back-end service using:

$ ng g s backend

Open the src/app/backend.service.ts file and import InMemoryDbService from the angular-in-memory-web-api module:

import {InMemoryDbService} from 'angular-in-memory-web-api'

The service class needs to implement InMemoryDbService and then override the createDb() method:

@Injectable({
  providedIn: 'root'
})
export class BackendService implements InMemoryDbService {
  constructor() { }
  createDb() {
    let contacts = [
      { id: 1, name: 'Contact 1', email: 'contact1@email.com' },
      { id: 2, name: 'Contact 2', email: 'contact2@email.com' },
      { id: 3, name: 'Contact 3', email: 'contact3@email.com' },
      { id: 4, name: 'Contact 4', email: 'contact4@email.com' }
    ];
    return { contacts };
  }
}

We simply create an array of contacts and return them. Each contact should have an id.

Finally, we simply need to import InMemoryWebApiModule into the app.module.ts file, and provide our fake back-end service.

import { InMemoryWebApiModule } from 'angular-in-memory-web-api';
import { BackendService } from './backend.service';
/* ... */

@NgModule({
  declarations: [ /* ... */ ],
  imports: [
    /* ... */
    InMemoryWebApiModule.forRoot(BackendService)
  ],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

Next, create a ContactService, which encapsulates the code for working with contacts:

$ ng g s contact

Open the src/app/contact.service.ts file and update it to look similar to the following code:

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Injectable({
  providedIn: 'root'
})
export class ContactService {
  API_URL: string = '/api/';
  constructor(private http: HttpClient) { }
  getContacts() {
    return this.http.get(this.API_URL + 'contacts');
  }
  getContact(contactId) {
    return this.http.get(`${this.API_URL + 'contacts'}/${contactId}`);
  }
}

We added two methods:

  • getContacts()
    For getting all contacts.
  • getContact()
    For getting a contact by id.

You can set the API_URL to whatever URL you like, since we are not going to use a real back-end. All requests will be intercepted and sent to the in-memory back-end.

Creating Our Angular Components

Before we can see how to use the different Router features, let’s first create a bunch of components in our project.

Head over to your terminal and run the following commands:

$ ng g c contact-list
$ ng g c contact-detail

This will generate the two components, ContactListComponent and ContactDetailComponent, and add them to the main app module.

Setting Up Routing

In most cases, you’ll use the Angular CLI to create projects with routing set up, but in this case, we’ll add it manually so that we can get a better idea of how routing works in Angular.

Adding The Routing Module

We need to add an AppRoutingModule, which will contain our application routes, and a router outlet where Angular will insert the currently matched component, depending on the browser’s current URL.

We’ll see:

  • How to create an Angular Module for routing and import it;
  • How to add routes to different components;
  • How to add the router outlet.

First, let’s start by creating a routing module in an app-routing.module.ts file. Inside src/app, create the file using:

$ cd angular7-router-demo/src/app
$ touch app-routing.module.ts

Open the file and add the following code:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule { }

We start by importing the NgModule from the @angular/core package which is a TypeScript decorator used to create an Angular module.

We also import the RouterModule and Routes classes from the @angular/router package. RouterModule provides static methods like RouterModule.forRoot() for passing a configuration object to the Router.

Next, we define a constant routes array of type Routes which will be used to hold information for each route.

Finally, we create and export a module called AppRoutingModule (you can call it whatever you want), which is simply a TypeScript class decorated with the @NgModule decorator that takes some meta information. In the imports attribute of this object, we call the static RouterModule.forRoot(routes) method with the routes array as a parameter. In the exports array, we add the RouterModule.

Importing The Routing Module

Next, we need to import this routing module into the main app module, which lives in the src/app/app.module.ts file:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { AppRoutingModule } from './app-routing.module';
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, AppRoutingModule],
  providers: [],
  bootstrap: [AppComponent]
})
export class AppModule { }

We import the AppRoutingModule from ./app-routing.module and add it to the imports array of the main module.

Adding The Router Outlet

Finally, we need to add the router outlet. Open the src/app/app.component.html file which contains the main app template and add the <router-outlet> component:

<router-outlet></router-outlet>

This is where the Angular Router will render the component that corresponds to the current browser path.

Those are all the steps we need to follow in order to manually set up routing inside an Angular project.

Creating Routes

Now, let’s add routes to our two components. Open the src/app/app-routing.module.ts file and add the following routes to the routes array:

const routes: Routes = [
  { path: 'contacts', component: ContactListComponent },
  { path: 'contact/:id', component: ContactDetailComponent }
];

Make sure to import the two components in the routing module:

import { ContactListComponent } from './contact-list/contact-list.component';
import { ContactDetailComponent } from './contact-detail/contact-detail.component';

Now we can access the two components from the /contacts and /contact/:id paths.

Adding Navigation Links

Next, let’s add navigation links to our app template using the routerLink directive. Open src/app/app.component.html and add the following code on top of the router outlet:

<h2><a [routerLink] = "'/contacts'">Contacts</a></h2>

Next, we need to display the list of contacts in ContactListComponent. Open the src/app/contact-list/contact-list.component.ts file, then add the following code:

import { Component, OnInit } from '@angular/core';
import { ContactService } from '../contact.service';

@Component({
  selector: 'app-contact-list',
  templateUrl: './contact-list.component.html',
  styleUrls: ['./contact-list.component.css']
})
export class ContactListComponent implements OnInit {
  contacts: any[] = [];
  constructor(private contactService: ContactService) { }
  ngOnInit() {
    this.contactService.getContacts().subscribe((data: any[]) => {
      console.log(data);
      this.contacts = data;
    });
  }
}

We create a contacts array to hold the contacts. Next, we inject ContactService and we call the getContacts() method of the instance (on the ngOnInit life-cycle event) to get contacts and assign them to the contacts array.

Next open the src/app/contact-list/contact-list.component.html file and add:

<table style="width:100%">
  <tr>
    <th>Name</th>
    <th>Email</th>
    <th>Actions</th>
  </tr>
  <tr *ngFor="let contact of contacts">
    <td>{{ contact.name }}</td>
    <td>{{ contact.email }}</td>
    <td>
      <a [routerLink]="['/contact', contact.id]">Go to details</a>
    </td>
  </tr>
</table>

We loop through the contacts and display each contact’s name and email. We also create a link to each contact’s details component using the routerLink directive.

This is a screen shot of the component:

Contact list (Large preview)

When we click on the Go to details link, it will take us to ContactDetailComponent. The route has an id parameter; let’s see how we can access it from our component.

Open the src/app/contact-detail/contact-detail.component.ts file and change the code to look similar to the following code:

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute } from '@angular/router';
import { ContactService } from '../contact.service';

@Component({
  selector: 'app-contact-detail',
  templateUrl: './contact-detail.component.html',
  styleUrls: ['./contact-detail.component.css']
})
export class ContactDetailComponent implements OnInit {
  contact: any;
  constructor(private contactService: ContactService, private route: ActivatedRoute) { }
  ngOnInit() {
    this.route.paramMap.subscribe(params => {
      console.log(params.get('id'));
      this.contactService.getContact(params.get('id')).subscribe(c => {
        console.log(c);
        this.contact = c;
      });
    });
  }
}

We inject ContactService and ActivatedRoute into the component. In the ngOnInit() life-cycle event, we retrieve the id parameter passed from the route and use it to get the contact’s details, which we assign to a contact object.

Open the src/app/contact-detail/contact-detail.component.html file and add:

<h1>Contact #{{ contact?.id }}</h1>
<p>Name: {{ contact?.name }}</p>
<p>Email: {{ contact?.email }}</p>

(The safe navigation operator ?. keeps the template from throwing while the contact is still being fetched.)

Contact details (Large preview)

When we first visit our application from 127.0.0.1:4200/, the outlet doesn’t render any component so let’s redirect the empty path to the contacts path by adding the following route to the routes array:

{path: '', pathMatch: 'full', redirectTo: 'contacts'}

We want to match the exact empty path, that’s why we specify the full match strategy.

Conclusion

In this tutorial, we’ve seen how to use the Angular Router to add routing and navigation into our application. We’ve seen different concepts like the Router outlet, routes, and paths and we created a demo to practically show the different concepts. You can access the code from this repository.

(dm, ra, yk, il)
Categories: Around The Web

An Extensive Guide To Progressive Web Applications

Smashing Magazine - Tue, 11/27/2018 - 8:00am
An Extensive Guide To Progressive Web Applications An Extensive Guide To Progressive Web Applications Ankita Masand 2018-11-27T14:00:29+01:00 2018-12-17T10:57:50+00:00

It was my dad’s birthday, and I wanted to order a chocolate cake and a shirt for him. I headed over to Google to search for chocolate cakes and clicked on the first link in the search results. There was a blank screen for a few seconds; I didn’t understand what was happening. After a few seconds of staring patiently, my mobile screen filled with delicious-looking cakes. As soon as I clicked on one of them to check its details, I got an ugly fat popup, asking me to install an Android application so that I could get a silky smooth experience while ordering a cake.

That was disappointing. My conscience didn’t allow me to click on the “Install” button. All I wanted to do was order a small cake and be on my way.

I clicked on the cross icon at the very right of the popup to get out of it as soon as I could. But then the installation popup sat at the bottom of the screen, occupying one-fourth of the space. And with the flaky UI, scrolling down was a challenge. I somehow managed to order a Dutch cake.

After this terrible experience, my next challenge was to order a shirt for my dad. As before, I searched Google for shirts. I clicked on the first link, and in a blink, the entire content was right in front of me. Scrolling was smooth. No installation banner. I felt as if I was browsing a native application. There was a moment when my terrible internet connection gave up, but I was still able to see the content instead of a dinosaur game. Even with my janky internet, I managed to order a shirt and jeans for my dad. Most surprising of all, I was getting notifications about my order.

I would call this a silky smooth experience. These people were doing something right. Every website should do it for their users. It’s called a progressive web app.

As Alex Russell states in one of his blog posts:

“It happens on the web from time to time that powerful technologies come to exist without the benefit of marketing departments or slick packaging. They linger and grow at the peripheries, becoming old-hat to a tiny group while remaining nearly invisible to everyone else. Until someone names them.”

A Silky Smooth Experience On The Web, Sometimes Known As A Progressive Web Application

Progressive web applications (PWAs) are more of a methodology that involves a combination of technologies to make powerful web applications. With an improved user experience, people will spend more time on websites and see more advertisements. They tend to buy more, and with notification updates, they are more likely to visit often. The Financial Times abandoned its native apps in 2011 and built a web app using the best technologies available at the time. Now, the product has grown into a full-fledged PWA.

But why, after all this time, would you build a web app when a native app does the job well enough?

Let’s look into some of the metrics shared in Google IO 17.


Five billion devices are connected to the web, making the web the biggest platform in the history of computing. On the mobile web, 11.4 million monthly unique visitors go to the top 1000 web properties, and 4 million go to the top thousand apps. The mobile web garners around four times as many users as native applications. But this number drops sharply when it comes to engagement.

A user spends an average of 188.6 minutes in native apps and only 9.3 minutes on the mobile web. Native applications leverage the power of operating systems to send push notifications to give users important updates. They deliver a better user experience and boot more quickly than websites in a browser. Instead of typing a URL in the web browser, users just have to tap an app’s icon on the home screen.

Most visitors on the web are unlikely to come back, so developers came up with the workaround of showing them banners to install native applications, in an attempt to keep them deeply engaged. But then users would have to go through the tiresome procedure of installing the binary of a native application. Forcing users to install an application is annoying and further reduces the chance that they will install it in the first place. The opportunity for the web is clear.

Recommended reading: Native And PWA: Choices, Not Challengers!

If web applications come with a rich user experience, push notifications, offline support and instant loading, they can conquer the world. This is what a progressive web application does.

A PWA delivers a rich user experience because it has several strengths:

  • Fast
    The UI is not flaky. Scrolling is smooth. And the app responds quickly to user interaction.

  • Reliable
    A normal website forces users to wait, doing nothing, while it is busy making rides to the server. A PWA, meanwhile, loads data instantaneously from the cache. A PWA works seamlessly, even on a 2G connection. Every network request to fetch an asset or piece of data goes through a service worker (more on that later), which first verifies whether the response for a particular request is already in the cache. When users get real content almost instantly, even on a poor connection, they trust the app more and view it as more reliable.

  • Engaging
    A PWA can earn a place on the user’s home screen. It offers a native app-like experience by providing a full-screen work area. It makes use of push notifications to keep users engaged.

Now that we know what PWAs bring to the table, let’s get into the details of what gives PWAs an edge over native applications. PWAs are built with technologies such as service workers, web app manifests, push notifications and IndexedDB/local data structure for caching. Let’s look into each in detail.

Service Workers

A service worker is a JavaScript file that runs in the background without interfering with the user’s interactions. All GET requests to the server go through a service worker. It acts like a client-side proxy. By intercepting network requests, it takes complete control over the response being sent back to the client. A PWA loads instantly because service workers eliminate the dependency on the network by responding with data from the cache.

A service worker can only intercept a network request that is in its scope. For example, a root-scoped service worker can intercept all of the fetch requests coming from a web page. A service worker operates as an event-driven system. It goes into a dormant state when it is not needed, thereby conserving memory. To use a service worker in a web application, we first have to register it on the page with JavaScript.

(function main () {
  /* navigator is a web API that allows scripts to register themselves
     and carry out their activities. */
  if ('serviceWorker' in navigator) {
    console.log('Service Worker is supported in your browser')
    /* The register method takes in the path of the service worker file
       and returns a promise that resolves to the registration object. */
    navigator.serviceWorker.register('./service-worker.js').then(registration => {
      console.log('Service Worker is registered!')
    })
  } else {
    console.log('Service Worker is not supported in your browser')
  }
})()

We first check whether the browser supports service workers. To register a service worker in a web application, we provide its URL as a parameter to the register function, available in navigator.serviceWorker (navigator is a web API that allows scripts to register themselves and carry out their activities). A service worker is registered only once. Registration does not happen on every page load. The browser downloads the service worker file (./service-worker.js) only if there is a byte difference between the existing activated service worker and the newer one or if its URL has changed.

The above service worker will intercept all requests coming from the root (/). To limit the scope of a service worker, we would pass an optional parameter with one of the keys as the scope.

if ('serviceWorker' in navigator) {
  /* The register method takes an optional second parameter as an object.
     To restrict the scope of a service worker, the scope should be provided.
     scope: '/books' will intercept requests with '/books' in the URL. */
  navigator.serviceWorker.register('./service-worker.js', { scope: '/books' }).then(registration => {
    console.log('Service Worker for scope /books is registered', registration)
  })
}

The service worker above will intercept requests that have /books in the URL. For example, it will not intercept a request with /products, but it could very well intercept requests with /books/products.

As mentioned, a service worker operates as an event-driven system. It listens for events (install, activate, fetch, push) and accordingly calls the respective event handler. Some of these events are a part of the life cycle of a service worker, which goes through these events in sequence to get activated.

Installation

Once a service worker has been registered successfully, an installation event is fired. This is a good place to do the initialization work, like setting up the cache or creating object stores in IndexedDB. (IndexedDB will make more sense to you once we get into its details. For now, we can just say that it’s a key-value pair structure.)

self.addEventListener('install', (event) => {
  let CACHE_NAME = 'xyz-cache'
  let urlsToCache = [
    '/',
    '/styles/main.css',
    '/scripts/bundle.js'
  ]
  event.waitUntil(
    /* The open method available on caches takes the name of the cache as
       its first parameter and returns a promise that resolves to the cache
       instance. All the URLs above can be added to the cache using the
       addAll method. */
    caches.open(CACHE_NAME)
      .then(cache => cache.addAll(urlsToCache))
  )
})

Here, we’re caching some of the files so that the next load is instant. self refers to the service worker instance. event.waitUntil makes the service worker wait until all of the code inside it has finished execution.

Activation

Once a service worker has been installed, it cannot yet listen for fetch requests. Rather, an activate event is fired. If no active service worker is operating on the website in the same scope, then the installed service worker gets activated immediately. However, if a website already has an active service worker, then the activation of a new service worker is delayed until all of the tabs operating on the old service worker are closed. This makes sense because the old service worker might be using the instance of the cache that is now modified in the newer one. So, the activation step is a good place to get rid of old caches.

self.addEventListener('activate', (event) => {
  let cacheWhitelist = ['products-v2'] // products-v2 is the name of the new cache
  event.waitUntil(
    caches.keys().then(cacheNames => {
      return Promise.all(
        cacheNames.map(cacheName => {
          /* Deleting all the caches except the ones in the cacheWhitelist array */
          if (cacheWhitelist.indexOf(cacheName) === -1) {
            return caches.delete(cacheName)
          }
        })
      )
    })
  )
})

In the code above, we’re deleting the old cache. If the name of a cache doesn’t match the cacheWhitelist, it is deleted. To skip the waiting phase and immediately activate the service worker, we use self.skipWaiting().

self.addEventListener('activate', (event) => {
  self.skipWaiting()
  // The usual stuff
})

Once a service worker is activated, it can listen for fetch requests and push events.

Fetch Event Handler

Whenever a web page fires a fetch request for a resource over the network, the fetch event from the service worker gets called. The fetch event handler first looks for the requested resource in the cache. If it is present in the cache, then it returns the response with the cached resource. Otherwise, it initiates a fetch request to the server, and when the server sends back the response with the requested resource, it puts it to the cache for subsequent requests.

/* Fetch event handler for responding to GET requests with the cached assets */
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open('products-v2')
      .then(cache => {
        /* Checking if the request is already present in the cache.
           If it is present, sending it directly to the client. */
        return cache.match(event.request).then(response => {
          if (response) {
            console.log('Cache hit! Fetching response from cache', event.request.url)
            return response
          }
          /* If the request is not present in the cache, we fetch it from the
             server and then put it in the cache for subsequent requests.
             Note the return here: without it, the promise chain would resolve
             to undefined and the client would get an empty response. */
          return fetch(event.request).then(response => {
            cache.put(event.request, response.clone())
            return response
          })
        })
      })
  )
})

event.respondWith lets the service worker send a customized response to the client.

Offline-first is now a thing. For any non-critical request, we should serve the response from the cache, instead of making a round trip to the server. If an asset is not present in the cache, we get it from the server and then cache it for subsequent requests.

Service workers only work on HTTPS websites because they have the power to manipulate the response of any fetch request. Someone with malicious intent might tamper with the response for a request on an HTTP website. So, hosting a PWA on HTTPS is mandatory. Service workers do not interrupt the normal functioning of the DOM. They cannot communicate directly with the web page; to send a message to a web page, a service worker makes use of postMessage.
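As a minimal sketch of this messaging (the 'CACHE_UPDATED' message type here is just an illustrative convention, not part of any API), the service worker can broadcast to every page it controls, and each page listens on navigator.serviceWorker:

// Inside the service worker: broadcast a message to all controlled pages
self.clients.matchAll().then(clients => {
  clients.forEach(client => client.postMessage({ type: 'CACHE_UPDATED' }))
})

// Inside the web page: listen for messages coming from the service worker
navigator.serviceWorker.addEventListener('message', (event) => {
  console.log('Message from service worker:', event.data)
})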

Web Push Notifications

Let’s suppose you’re busy playing a game on your mobile phone, and a notification pops up telling you of a 30% discount on your favorite brand. Without further ado, you click on the notification and shop to your heart’s content. Getting live updates on, say, a cricket or football match, or getting important emails and reminders as notifications, is a big deal when it comes to engaging users with a product. This feature was only available in native applications until PWAs came along. A PWA makes use of web push notifications to compete with this powerful feature that native apps provide out of the box. A user will still receive a web push notification even if the PWA is not open in any browser tab, and even if the browser is not open.

A web application has to ask permission of the user to send them push notifications.

Browser Prompt for asking permission for Web Push notifications. (Large preview)

Once the user confirms by clicking the “Allow” button, a unique subscription token is generated by the browser. This token is unique for this device. The format of the subscription token generated by Chrome is as follows:

{
  "endpoint": "https://fcm.googleapis.com/fcm/send/c7Veb8VpyM0:APA91bGnMFx8GIxf__UVy6vJ-n9i728CUJSR1UHBPAKOCE_SrwgyP2N8jL4MBXf8NxIqW6NCCBg01u8c5fcY0kIZvxpDjSBA75sVz64OocQ-DisAWoW7PpTge3SwvQAx5zl_45aAXuvS",
  "expirationTime": null,
  "keys": {
    "p256dh": "BJsj63kz8RPZe8Lv1uu-6VSzT12RjxtWyWCzfa18RZ0-8sc5j80pmSF1YXAj0HnnrkyIimRgLo8ohhkzNA7lX4w",
    "auth": "TJXqKozSJxcWvtQasEUZpQ"
  }
}

The endpoint contained in the token above will be unique for every subscription. On an average website, thousands of users would agree to receive push notifications, and for each of them, this endpoint would be unique. So, with the help of this endpoint, the application is able to target these users in the future by sending them push notifications. The expirationTime is the amount of time that the subscription is valid for a particular device. If the expirationTime is 20 days, it means that the push subscription of the user will expire after 20 days and the user won’t be able to receive push notifications on the older subscription. In this case, the browser will generate a new subscription token for that device. The auth and p256dh keys are used for encryption.

Now, to send push notifications to these thousands of users in the future, we first have to save their respective subscription tokens. It’s the job of the application server (the back-end server, maybe a Node.js script) to send push notifications to these users. This might sound as simple as making a POST request to the endpoint URL with the notification data in the request payload. However, it should be noted that if a user is not online when a push notification intended for them is triggered by the server, they should still get that notification once they come back online. The server would have to take care of such scenarios, along with sending thousands of requests to the users. A server keeping track of the user’s connection sounds complicated. So, something in the middle is responsible for routing web push notifications from the server to the client. This is called a push service, and every browser has its own implementation of a push service. To send any notification, the application server has to tell the push service the following information:

  1. The time to live
    This is how long a message should be queued, in case it is not delivered to the user. Once this time has elapsed, the message will be removed from the queue.
  2. Urgency of the message
    This is so that the push service preserves the user’s battery by sending only high-priority messages.

The push service routes the messages to the client. Because push has to be received by the client even if its respective web application is not open in the browser, push events have to be listened to by something that continuously monitors in the background. You guessed it: That’s the job of the service worker. The service worker listens for push events and does the job of showing notifications to the user.

So, now we know that the browser, push service, service worker and application server work in harmony to send push notifications to the user. Let’s look into the implementation details.

Web Push Client

Asking permission of the user is a one-time thing. If a user has already granted permission to receive push notifications, we shouldn’t ask again. The permission value is saved in Notification.permission.

/* Notification.permission can have one of these three values: default, granted or denied. */
if (Notification.permission === 'default') {
  /* The Notification.requestPermission() method shows a notification permission
     prompt to the user. It returns a promise that resolves to the value of the permission. */
  Notification.requestPermission().then(result => {
    if (result === 'denied') {
      console.log('Permission denied')
      return
    }
    if (result === 'granted') {
      console.log('Permission granted')
      /* This means the user has clicked the Allow button. We’re to get the
         subscription token generated by the browser and store it in our database.

         The subscription token can be fetched using the getSubscription method
         available on the pushManager of the serviceWorkerRegistration object.
         If a subscription is not available, we subscribe using the subscribe
         method available on the pushManager. The subscribe method takes in an object. */
      serviceWorkerRegistration.pushManager.getSubscription()
        .then(subscription => {
          if (!subscription) {
            const applicationServerKey = ''
            serviceWorkerRegistration.pushManager.subscribe({
              userVisibleOnly: true, // All push notifications from the server should be displayed to the user
              applicationServerKey // VAPID public key
            })
            .then(newSubscription => saveSubscriptionInDB(newSubscription, userId)) // Save the newly created subscription token as well
          } else {
            saveSubscriptionInDB(subscription, userId) // A method to save the subscription token in the database
          }
        })
    }
  })
}

In the subscribe method above, we’re passing userVisibleOnly and applicationServerKey to generate a subscription token. The userVisibleOnly property should always be true because it tells the browser that any push notification sent by the server will be shown to the client. To understand the purpose of applicationServerKey, let’s consider a scenario.

If some person gets ahold of thousands of your subscription tokens, they could very well send notifications to the endpoints contained in these subscriptions, because nothing ties an endpoint to your application’s identity. To give the subscription tokens generated on your web application a unique identity, we make use of the VAPID protocol. With VAPID, the application server voluntarily identifies itself to the push service while sending push notifications. We generate two keys like so:

const webpush = require('web-push')
const vapidKeys = webpush.generateVAPIDKeys()

web-push is an npm module. vapidKeys will have one public key and one private key. The application server key used above is the public key.
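As a sketch of how these keys are typically wired up (generate once, store the output securely, then configure web-push with the stored values; setVapidDetails is an alternative to passing vapidDetails on every call, as shown in the next section):

const webpush = require('web-push')

// Run once, e.g. in a setup script, and keep the printed keys in configuration
const vapidKeys = webpush.generateVAPIDKeys()
console.log('Public key:', vapidKeys.publicKey)
console.log('Private key:', vapidKeys.privateKey)

// Configure web-push globally; the subject identifies the sender to the push service
webpush.setVapidDetails('mailto:email@example.com', vapidKeys.publicKey, vapidKeys.privateKey)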

Web Push Server

The job of the web push server (application server) is straightforward. It sends a notification payload to the subscription tokens.

const options = {
  TTL: 24 * 60 * 60, // TTL is the time to live, the time that the notification will be queued in the push service
  vapidDetails: {
    subject: 'email@example.com',
    publicKey: '',
    privateKey: ''
  }
}
const data = JSON.stringify({ // web-push expects the payload as a string or a Buffer
  title: 'Update',
  body: 'Notification sent by the server'
})
webpush.sendNotification(subscription, data, options)

This uses the sendNotification method from the web-push library.
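To make the whole flow concrete, here is a minimal sketch of the application server, assuming Express, an in-memory array standing in for the database, and VAPID keys already configured via setVapidDetails as sketched earlier (the route names and port are illustrative; the client-side saveSubscriptionInDB function would POST its token to the /subscribe route):

const express = require('express')
const webpush = require('web-push')

const app = express()
app.use(express.json())

const subscriptions = [] // In-memory stand-in for a database table of subscription tokens

// The client posts its subscription token here after subscribing
app.post('/subscribe', (req, res) => {
  subscriptions.push(req.body)
  res.status(201).end()
})

// Trigger a notification to every stored subscription
app.post('/notify-all', async (req, res) => {
  const data = JSON.stringify({ title: 'Update', body: 'Notification sent by the server' })
  for (const subscription of subscriptions) {
    try {
      await webpush.sendNotification(subscription, data, { TTL: 24 * 60 * 60 })
    } catch (err) {
      console.log('Failed to notify an endpoint', err.statusCode) // e.g. 410 when a subscription has expired
    }
  }
  res.json({ sent: subscriptions.length })
})

app.listen(3000)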

Service Workers

The service worker shows the notification to the user like so:

self.addEventListener('push', (event) => {
  /* event.data is a PushMessageData object; its json() method parses the payload
     that the server serialized with JSON.stringify. */
  const data = event.data.json()
  let options = {
    body: data.body,
    icon: 'images/example.png',
  }
  event.waitUntil(
    /* The showNotification method is available on the registration object of the
       service worker. The first parameter to the showNotification method is the
       title of the notification, and the second parameter is an object. */
    self.registration.showNotification(data.title, options)
  )
})

So far, we’ve seen how a service worker makes use of the cache to store requests and make a PWA fast and reliable, and how web push notifications keep users engaged.

To store a bunch of data on the client side for offline support, we need a giant data structure. Let’s look into the Financial Times PWA. You’ve got to witness the power of this data structure for yourself. Load the URL in your browser, and then switch off your internet connection. Reload the page. Gah! Is it still working? It is. (Like I said, offline is the new black.) Data is not coming from the wires. It is being served from the house. Head over to the “Application” tab of Chrome Developer Tools. Under “Storage”, you’ll find “IndexedDB”.

IndexedDB on Financial Times PWA. (Large preview)

Check out the “Articles” object store, and expand any of the items to see the magic for yourself. The Financial Times has stored this data for offline support. This data structure that lets us store a massive amount of data is called IndexedDB. IndexedDB is a JavaScript-based object-oriented database for storing structured data. We can create different object stores in this database for various purposes. For example, as we can see in the image above, “Resources”, “ArticleImages” and “Articles” are object stores. Each record in an object store is uniquely identified with a key. IndexedDB can even be used to store files and blobs.

Let’s try to understand IndexedDB by creating a database for storing books.

let openIdbRequest = window.indexedDB.open('booksdb', 1)

If the database booksdb doesn’t already exist, the code above will create it. The second parameter to the open method is the version of the database. Specifying a version takes care of the schema-related changes that might happen in the future. For example, booksdb now has only one object store, but when the application grows, we might intend to add two more. To make sure our database is in sync with the updated schema, we’ll specify a higher version than the previous one.

Calling the open method doesn’t open the database right away. It’s an asynchronous request that returns an IDBOpenDBRequest object. This object has onsuccess and onerror event-handler properties; we’ll have to write appropriate handlers for them to manage the state of our connection.

let dbInstance

openIdbRequest.onsuccess = (event) => {
  dbInstance = event.target.result
  console.log('booksdb is opened successfully')
}

openIdbRequest.onerror = (event) => {
  console.log('There was an error in opening booksdb database')
}

openIdbRequest.onupgradeneeded = (event) => {
  let db = event.target.result
  let objectstore = db.createObjectStore('books', { keyPath: 'id' })
}

To manage the creation or modification of object stores (object stores are analogous to SQL-based tables — they have a key-value structure), the onupgradeneeded handler is called on the openIdbRequest object. It will be invoked whenever the version changes. In the code snippet above, we’re creating a books object store with the id property as its unique key.

Let’s say that, after deploying this piece of code, we have to create one more object store, called users. So, the version of our database will now be 2.

let openIdbRequest = window.indexedDB.open('booksdb', 2) // New version - 2

/* Success and error event handlers remain the same.
   The onupgradeneeded handler gets called when the version of the database changes. */
openIdbRequest.onupgradeneeded = (event) => {
  let db = event.target.result
  if (!db.objectStoreNames.contains('books')) {
    let objectstore = db.createObjectStore('books', { keyPath: 'id' })
  }

  let oldVersion = event.oldVersion
  let newVersion = event.newVersion

  /* The users object store is added in version 2. If the existing version is 1,
     it will be upgraded to 2, and the users object store will be created. */
  if (oldVersion === 1) {
    db.createObjectStore('users', { keyPath: 'id' })
  }
}

We’ve cached dbInstance in the success event handler of the open request. To retrieve or add data in IndexedDB, we’ll make use of dbInstance. Let’s add some book records to our books object store.

let transaction = dbInstance.transaction('books', 'readwrite') // A readwrite transaction is required for adding records
let objectstore = transaction.objectStore('books')

let bookRecord = {
  id: '1',
  name: 'The Alchemist',
  author: 'Paulo Coelho'
}

let addBookRequest = objectstore.add(bookRecord)
addBookRequest.onsuccess = (event) => {
  console.log('Book record added successfully')
}
addBookRequest.onerror = (event) => {
  console.log('There was an error in adding book record')
}

We make use of transactions, especially while writing records on object stores. A transaction is simply a wrapper around an operation to ensure data integrity. If any of the actions in a transaction fails, then no action is performed on the database.
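A small sketch of this behavior, reusing the books object store from above with illustrative records: both add calls below belong to the same transaction, so when the second one fails (a duplicate id, in this case), the first write is rolled back as well.

let transaction = dbInstance.transaction('books', 'readwrite')
let objectstore = transaction.objectStore('books')

// Two writes in one transaction: they are committed or discarded together
objectstore.add({ id: '2', name: 'Sapiens', author: 'Yuval Noah Harari' })
objectstore.add({ id: '2', name: 'Duplicate', author: 'Nobody' }) // Fails: key '2' is already used above

transaction.oncomplete = (event) => {
  console.log('All writes committed')
}
transaction.onabort = (event) => {
  console.log('Transaction aborted, no writes were persisted')
}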

Let’s modify a book record with the put method:

let modifyBookRequest = objectstore.put(bookRecord) // The put method takes in an object as the parameter
modifyBookRequest.onsuccess = (event) => {
  console.log('Book record updated successfully')
}

Let’s retrieve a book record with the get method:

let transaction = dbInstance.transaction('books')
let objectstore = transaction.objectStore('books')

/* The get method takes in the key of the record (here, the id) */
let getBookRequest = objectstore.get('1')
getBookRequest.onsuccess = (event) => {
  /* event.target.result contains the matched record */
  console.log('Book record', event.target.result)
}
getBookRequest.onerror = (event) => {
  console.log('Error while retrieving the book record.')
}

Adding Icon On Home Screen

Now that there is hardly any distinction between a PWA and a native application, it makes sense to offer the PWA a prime position. If your website fulfills the basic criteria of a PWA (hosted on HTTPS, integrating a service worker and a manifest.json), and after the user has spent some time on the page, the browser will show a prompt at the bottom, asking the user to add the app to their home screen, as shown below:

Prompt to add Financial Times PWA on home screen. (Large preview)

When a user clicks on “Add FT to Home screen”, the PWA gets to set its foot on the home screen, as well as in the app drawer. When a user searches for any application on their phone, any PWAs that match the search query will be listed. They will also be seen in the system settings, which makes it easy for users to manage them. In this sense, a PWA behaves like a native application.

PWAs make use of manifest.json to provide this feature. Let’s look into a simple manifest.json file.

{
  "name": "Demo PWA",
  "short_name": "Demo",
  "start_url": "/?standalone",
  "background_color": "#9F0C3F",
  "theme_color": "#fff1e0",
  "display": "standalone",
  "icons": [{
    "src": "/lib/img/icons/xxhdpi.png?v2",
    "sizes": "192x192"
  }]
}

The short_name appears on the user’s home screen and in the system settings. The name appears in the chrome prompt and on the splash screen. The splash screen is what the user sees when the app is getting ready to launch. The start_url is the main screen of your app. It’s what users get when they tap an icon on the home screen. The background_color is used on the splash screen. The theme_color sets the color of the toolbar. The standalone value for display mode says that the app is to be operated in full-screen mode (hiding the browser’s toolbar). When a user installs a PWA, its size is merely in kilobytes, rather than the megabytes of native applications.
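For the browser to discover the manifest, it has to be linked from the HTML of every page; a minimal sketch (the file name and path are conventions, not requirements):

<!-- In the head of every page of the app -->
<link rel="manifest" href="/manifest.json">

<!-- Browsers can also read the toolbar color from a meta tag -->
<meta name="theme-color" content="#fff1e0">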

Service workers, web push notifications, IndexedDB, and the home-screen position together provide offline support, reliability, and engagement. It should be noted that a service worker doesn’t come to life and start doing its work on the very first load. The first load will still be slow until all of the static assets and other resources have been cached. We can implement some strategies to optimize the first load.

Bundling Assets

All of the resources, including the HTML, style sheets, images and JavaScript, are to be fetched from the server. The more files, the more HTTP requests needed to fetch them. We can use bundlers like webpack to bundle our static assets, hence reducing the number of HTTP requests to the server. webpack does a great job of further optimizing the bundle by using techniques such as code-splitting (i.e. bundling only those files that are required for the current page load, instead of bundling all of them together) and tree shaking (i.e. removing code that is imported but never used).
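A minimal sketch of such a configuration, under the assumption of a single src/index.js entry point (the file names are illustrative; splitChunks and usedExports are webpack’s built-in switches for code-splitting and tree shaking):

// webpack.config.js
const path = require('path')

module.exports = {
  mode: 'production',
  entry: './src/index.js',
  output: {
    filename: '[name].[contenthash].js', // Content hashing keeps unchanged bundles cacheable
    path: path.resolve(__dirname, 'dist')
  },
  optimization: {
    splitChunks: { chunks: 'all' }, // Split shared dependencies into separate chunks
    usedExports: true // Mark unused exports so the minifier can drop them (tree shaking)
  }
}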

Reducing Round Trips

One of the main reasons for slowness on the web is network latency. The time it takes for a byte to travel from A to B varies with the network connection. For example, a particular round trip might take 50 milliseconds over Wi-Fi, 500 milliseconds on a 3G connection, and 2500 milliseconds on a 2G connection. With HTTP/1.1, while a particular connection is being used for a request, it cannot be used for any other requests until the response of the previous request is served, and a website can make only about six HTTP requests at a time, because only six connections are available to it. An average website makes roughly 100 requests; so, with a maximum of six connections, each connection handles around 100/6 ≈ 17 round trips, which at 50 milliseconds per round trip amounts to roughly 833 milliseconds spent on network latency alone. With HTTP/2 in place, the turnaround time is drastically reduced: HTTP/2 multiplexes many requests over a single connection, so requests no longer block one another.

Most HTTP responses contain Last-Modified and ETag headers. The Last-Modified header is the date when the file was last modified, and an ETag is a unique value based on the contents of the file; it changes only when the contents of the file change. Both of these headers can be used to avoid downloading the file again if a cached version is locally available. If the browser has a version of the file locally, it can send the corresponding conditional header (If-Modified-Since or If-None-Match) in the request, like so:

ETag and Last-Modified Headers. (Large preview)

The server can check whether the contents of the file have changed. If the contents of the file have not changed, then it responds with a status code of 304 (not modified).

If-None-Match Header. (Large preview)

This tells the browser to use the locally available cached version of the file. By doing all of this, we’ve avoided downloading the file again.
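A sketch of this negotiation from the server’s side, using Node’s built-in http module (illustrative only; web servers, frameworks and CDNs usually handle conditional requests for you):

const http = require('http')
const crypto = require('crypto')

const body = 'body { color: #333; }' // Stand-in for the contents of a CSS file
const etag = crypto.createHash('md5').update(body).digest('hex')

http.createServer((req, res) => {
  // If the client sent back the same ETag, the file hasn't changed: reply 304 with no body
  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304)
    res.end()
    return
  }
  res.writeHead(200, { 'Content-Type': 'text/css', 'ETag': etag })
  res.end(body)
}).listen(8080)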

Faster responses are now in place, but our job is not done yet. We still have to parse the HTML, load the style sheets and make the web page interactive. It makes sense to show some empty boxes with a loader to the user, instead of a blank screen. While the HTML document is being parsed, when it comes across <script src='asset.js'></script>, it will make a synchronous HTTP request to the server to fetch asset.js, and the whole parsing process will be paused until the response comes back. Imagine having a dozen synchronous static asset references. These could very well be managed just by making use of the async keyword in the script references, like <script src='asset.js' async></script>. With the async keyword, the browser will make an asynchronous request to fetch asset.js without hindering the parsing of the HTML. If a script file is required only at a later stage, we can defer downloading it until the entire HTML has been parsed, using the defer keyword: <script src='asset.js' defer></script>.

Conclusion

We’ve learned a lot of new things that make for a cool web application. Here’s a summary of all of the things we’ve explored in this article:

  1. How service workers make use of the cache to speed up the loading of assets.
  2. How web push notifications work under the hood.
  3. How IndexedDB lets us store a massive amount of data on the client.
  4. Optimizations for a fast first load, like using HTTP/2 and adding headers such as ETag, Last-Modified and If-None-Match to avoid re-downloading valid cached assets.

That’s all, folks!

(rb, ra, al, yk, il)
Categories: Around The Web

Avoiding The Pitfalls Of Automatically Inlined Code

Smashing Magazine - Mon, 11/26/2018 - 8:00am
Avoiding The Pitfalls Of Automatically Inlined Code Avoiding The Pitfalls Of Automatically Inlined Code Leonardo Losoviz 2018-11-26T14:00:08+01:00 2018-12-14T08:31:52+00:00

Inlining is the process of including the contents of files directly in the HTML document: CSS files can be inlined inside a style element, and JavaScript files can be inlined inside a script element:

<style>
  /* CSS contents here */
</style>

<script>
  /* JS contents here */
</script>

By printing the code already in the HTML output, inlining avoids render-blocking requests and executes the code before the page is rendered. As such, it is useful for improving the perceived performance of the site (i.e. the time it takes for a page to become usable). For instance, we can use the buffer of data delivered immediately when loading the site (around 14kb) to inline the critical styles, including styles of above-the-fold content (as had been done on the previous Smashing Magazine site), and font sizes and layout widths and heights, to avoid a jumpy layout re-rendering when the rest of the data is delivered.

However, when overdone, inlining code can also have negative effects on the performance of the site: Because the code is not cacheable, the same content is sent to the client repeatedly, and it can’t be pre-cached through service workers, or cached and accessed from a content delivery network. In addition, inline scripts are considered not safe when implementing a Content Security Policy (CSP). A sensible strategy, then, is to inline those critical portions of CSS and JS that make the site load faster, but to avoid inlining as much as possible otherwise.

With the objective of avoiding inlining, in this article we will explore how to convert inline code to static assets: Instead of printing the code in the HTML output, we save it to disk (effectively creating a static file) and add the corresponding <script> or <link> tag to load the file.

Let’s get started!

Recommended reading: WordPress Security As A Process

When To Avoid Inlining

There is no magic recipe to establish whether some code must be inlined or not. However, it can be pretty evident when code must not be inlined: when it involves a big chunk of code, and when it is not needed immediately.

As an example, WordPress sites inline the JavaScript templates to render the Media Manager (accessible in the Media Library page under /wp-admin/upload.php), printing a sizable amount of code:

JavaScript templates inlined by the WordPress Media Manager.

Occupying a full 43kb, the size of this piece of code is not negligible, and since it sits at the bottom of the page, it is not needed immediately. Hence, it would make plenty of sense to serve this code through static assets instead of printing it inside the HTML output.

Let’s see next how to transform inline code into static assets.

Triggering The Creation Of Static Files

If the contents (the ones to be inlined) come from a static file, then there is not much to do other than simply request that static file instead of inlining the code.

For dynamic code, though, we must plan how/when to generate the static file with its contents. For instance, if the site offers configuration options (such as changing the color scheme or the background image), when should the file containing the new values be generated? We have the following opportunities for creating the static files from the dynamic code:

  1. On request
    When a user accesses the content for the first time.
  2. On change
    When the source for the dynamic code (e.g. a configuration value) has changed.

Let’s consider on request first. The first time a user accesses the site, let’s say through /index.html, the static file (e.g. header-colors.css) doesn’t exist yet, so it must be generated then. The sequence of events is the following:

  1. The user requests /index.html;
  2. When processing the request, the server checks if the file header-colors.css exists. Since it does not, it obtains the source code and generates the file on disk;
  3. It returns a response to the client, including tag <link rel="stylesheet" type="text/css" href="/staticfiles/header-colors.css">
  4. The browser fetches all the resources included in the page, including header-colors.css;
  5. By then this file exists, so it is served.

However, the sequence of events could also be different, leading to an unsatisfactory outcome. For instance:

  1. The user requests /index.html;
  2. This file is already cached by the browser (or some other proxy, or through Service Workers), so the request is never sent to the server;
  3. The browser fetches all the resources included in the page, including header-colors.css. This file is, however, not cached in the browser, so the request is sent to the server;
  4. The server hasn’t generated header-colors.css yet (e.g. it was just restarted);
  5. It will return a 404.

Alternatively, we could generate header-colors.css not when requesting /index.html, but when requesting /header-colors.css itself. However, since this file initially doesn’t exist, the request is already treated as a 404. Even though we could hack our way around it, altering the headers to change the status code to a 200 and returning the content of the file, this is a terrible way of doing things, so we will not entertain this possibility (we are much better than this!)

That leaves only one option: generating the static file after its source has changed.

Creating The Static File When The Source Changes

Please notice that we can create dynamic code from both user-dependent and site-dependent sources. For instance, if the theme enables changing the site’s background image and that option is configured by the site’s admin, then the static file can be generated as part of the deployment process. On the other hand, if the site allows its users to change the background image for their profiles, then the static file must be generated at runtime.

In a nutshell, we have these two cases:

  1. User Configuration
    The process must be triggered when the user updates a configuration.
  2. Site Configuration
    The process must be triggered when the admin updates a configuration for the site, or before deploying the site.

If we considered the two cases independently, for #2 we could design the process on any technology stack we wanted. However, we don’t want to implement two different solutions, but a unique solution that can tackle both cases. And because in #1 the process to generate the static file must be triggered on the running site, it is compelling to design this process around the same technology stack the site runs on.

When designing the process, our code will need to handle the specific circumstances of both #1 and #2:

  • Versioning
    The static file must be accessed with a “version” parameter, in order to invalidate the previous file upon the creation of a new static file. While #2 could simply have the same versioning as the site, #1 needs to use a dynamic version for each user, possibly saved in the database.
  • Location of the generated file
    #2 generates a unique static file for the whole site (e.g. /staticfiles/header-colors.css), while #1 creates a static file for each user (e.g. /staticfiles/users/leo/header-colors.css).
  • Triggering event
    While for #1 the static file must be executed on runtime, for #2 it can also be executed as part of a build process in our staging environment.
  • Deployment and distribution
    Static files in #2 can be seamlessly integrated inside the site’s deployment bundle, presenting no challenges; static files in #1, however, cannot, so the process must handle additional concerns, such as multiple servers behind a load balancer (will the static files be created in 1 server only, or in all of them, and how?).

Let’s design and implement the process next. For each static file to be generated we must create an object containing the file’s metadata, calculate its content from the dynamic sources, and finally save the static file to disk. As a use case to guide the explanations below, we will generate the following static files:

  1. header-colors.css, with some style from values saved in the database
  2. welcomeuser-data.js, containing a JSON object with user data under some variable: window.welcomeUserData = {name: "Leo"};.

Below, I will describe the process to generate the static files for WordPress, for which we must base the stack on PHP and WordPress functions. The function to generate the static files before deployment can be triggered by loading a special page executing shortcode [create_static_files] as I have described in a previous article.

Further recommended reading: Making A Service Worker: A Case Study

Representing The File As An Object

We must model a file as a PHP object with all corresponding properties, so we can both save the file on disk on a specific location (e.g. either under /staticfiles/ or /staticfiles/users/leo/), and know how to request the file consequently. For this, we create an interface Resource returning both the file’s metadata (filename, dir, type: “css” or “js”, version, and dependencies on other resources) and its content.

interface Resource {
  function get_filename();
  function get_dir();
  function get_type();
  function get_version();
  function get_dependencies();
  function get_content();
}

In order to make the code maintainable and reusable we follow the SOLID principles, for which we set an object inheritance scheme for resources to gradually add properties, starting from the abstract class ResourceBase from which all our Resource implementations will inherit:

abstract class ResourceBase implements Resource {
  function get_dependencies() {
    // By default, a file has no dependencies
    return array();
  }
}

Following SOLID, we create subclasses whenever properties differ. As stated earlier, the location of the generated static file, and the versioning to request it will be different depending on the file being about the user or site configuration:

abstract class UserResourceBase extends ResourceBase {
  function get_dir() {
    // A different file and folder for each user
    $user = wp_get_current_user();
    return "/staticfiles/users/{$user->user_login}/";
  }

  function get_version() {
    // Save the resource version for the user under her metadata.
    // When the file is regenerated, `update_user_meta` must be executed to increase the version number
    $user_id = get_current_user_id();
    $meta_key = "resource_version_".$this->get_filename();
    return get_user_meta($user_id, $meta_key, true);
  }
}

abstract class SiteResourceBase extends ResourceBase {
  function get_dir() {
    // All files are placed in the same folder
    return "/staticfiles/";
  }

  function get_version() {
    // Same versioning as the site, assumed defined under a constant
    return SITE_VERSION;
  }
}

Finally, at the last level, we implement the objects for the files we want to generate, adding the filename, the type of file, and the dynamic code through function get_content:

class HeaderColorsSiteResource extends SiteResourceBase {
  function get_filename() {
    return "header-colors";
  }

  function get_type() {
    return "css";
  }

  function get_content() {
    return sprintf(
      "
        .site-title a {
          color: #%s;
        }
      ",
      esc_attr(get_header_textcolor())
    );
  }
}

class WelcomeUserDataUserResource extends UserResourceBase {
  function get_filename() {
    return "welcomeuser-data";
  }

  function get_type() {
    return "js";
  }

  function get_content() {
    $user = wp_get_current_user();
    return sprintf(
      "window.welcomeUserData = %s;",
      json_encode(
        array(
          "name" => $user->display_name
        )
      )
    );
  }
}

With this, we have modeled the file as a PHP object. Next, we need to save it to disk.

Saving The Static File To Disk

Saving a file to disk can be easily accomplished through the native functions provided by the language. In the case of PHP, this is accomplished through the function fwrite. In addition, we create a utility class ResourceUtils with functions providing the absolute path to the file on disk, and also its path relative to the site’s root:

class ResourceUtils {
  protected static function get_file_relative_path($fileObject) {
    return $fileObject->get_dir().$fileObject->get_filename().".".$fileObject->get_type();
  }

  static function get_file_path($fileObject) {
    // Notice that we must add constant WP_CONTENT_DIR to make the path absolute when saving the file
    return WP_CONTENT_DIR.self::get_file_relative_path($fileObject);
  }
}

class ResourceGenerator {
  static function save($fileObject) {
    $file_path = ResourceUtils::get_file_path($fileObject);
    $handle = fopen($file_path, "wb");
    $numbytes = fwrite($handle, $fileObject->get_content());
    fclose($handle);
  }
}

Then, whenever the source changes and the static file needs to be regenerated, we execute ResourceGenerator::save passing the object representing the file as a parameter. The code below regenerates, and saves on disk, files “header-colors.css” and “welcomeuser-data.js”:

// When we need to regenerate header-colors.css, execute:
ResourceGenerator::save(new HeaderColorsSiteResource());

// When we need to regenerate welcomeuser-data.js, execute:
ResourceGenerator::save(new WelcomeUserDataUserResource());

Once they exist, we can enqueue files to be loaded through the <script> and <link> tags.

Enqueuing The Static Files

Enqueuing the static files is no different than enqueuing any resource in WordPress: through functions wp_enqueue_script and wp_enqueue_style. Then, we simply iterate all the object instances and use one hook or the other depending on their get_type() value being either "js" or "css".

We first add utility functions to provide the file’s URL, and to tell the type being either JS or CSS:

class ResourceUtils {

  // Continued from above...

  static function get_file_url($fileObject) {
    // Add the site URL before the file path
    return get_site_url().self::get_file_relative_path($fileObject);
  }

  static function is_css($fileObject) {
    return $fileObject->get_type() == "css";
  }

  static function is_js($fileObject) {
    return $fileObject->get_type() == "js";
  }
}

An instance of class ResourceEnqueuer will contain all the files that must be loaded; when invoked, its functions enqueue_scripts and enqueue_styles will do the enqueuing, by executing the corresponding WordPress functions (wp_enqueue_script and wp_enqueue_style respectively):

class ResourceEnqueuer {
  protected $fileObjects;

  function __construct($fileObjects) {
    $this->fileObjects = $fileObjects;
  }

  protected function get_file_properties($fileObject) {
    $handle = $fileObject->get_filename();
    $url = ResourceUtils::get_file_url($fileObject);
    $dependencies = $fileObject->get_dependencies();
    $version = $fileObject->get_version();

    return array($handle, $url, $dependencies, $version);
  }

  function enqueue_scripts() {
    // Note: array_filter (not array_map) is needed to keep only the JS resources
    $jsFileObjects = array_filter($this->fileObjects, array(ResourceUtils::class, 'is_js'));
    foreach ($jsFileObjects as $fileObject) {
      list($handle, $url, $dependencies, $version) = $this->get_file_properties($fileObject);
      wp_register_script($handle, $url, $dependencies, $version);
      wp_enqueue_script($handle);
    }
  }

  function enqueue_styles() {
    $cssFileObjects = array_filter($this->fileObjects, array(ResourceUtils::class, 'is_css'));
    foreach ($cssFileObjects as $fileObject) {
      list($handle, $url, $dependencies, $version) = $this->get_file_properties($fileObject);
      wp_register_style($handle, $url, $dependencies, $version);
      wp_enqueue_style($handle);
    }
  }
}

Finally, we instantiate an object of class ResourceEnqueuer with a list of the PHP objects representing each file, and add a WordPress hook to execute the enqueuing:

// Initialize with the corresponding object instances for each file to enqueue
$fileEnqueuer = new ResourceEnqueuer(
  array(
    new HeaderColorsSiteResource(),
    new WelcomeUserDataUserResource()
  )
);

// Add the WordPress hooks to enqueue the resources
add_action('wp_enqueue_scripts', array($fileEnqueuer, 'enqueue_scripts'));
add_action('wp_print_styles', array($fileEnqueuer, 'enqueue_styles'));

That’s it: Having been enqueued, the static files will be requested when loading the site in the client. We have succeeded in avoiding printing inline code and loading static resources instead.

Next, we can apply several improvements for additional performance gains.

Recommended reading: An Introduction To Automated Testing Of WordPress Plugins With PHPUnit

Bundling Files Together

Even though HTTP/2 has reduced the need for bundling files, it still makes the site faster, because the compression of files (e.g. through GZip) will be more effective, and because browsers (such as Chrome) have a bigger overhead processing many resources.

By now, we have modeled a file as a PHP object, which allows us to treat this object as an input to other processes. In particular, we can repeat the same process above to bundle all files from the same type together and serve the bundled version instead of all the independent files. For this, we create a function get_content which simply extracts the content from every resource under $fileObjects, and prints it again, producing the aggregation of all content from all resources:

abstract class SiteBundleBase extends SiteResourceBase {
  protected $fileObjects;

  function __construct($fileObjects) {
    $this->fileObjects = $fileObjects;
  }

  function get_content() {
    $content = "";
    foreach ($this->fileObjects as $fileObject) {
      $content .= $fileObject->get_content().PHP_EOL;
    }

    return $content;
  }
}

We can bundle all files together into the file bundled-styles.css by creating a class for this file:

class StylesSiteBundle extends SiteBundleBase {
  function get_filename() {
    return "bundled-styles";
  }

  function get_type() {
    return "css";
  }
}

Finally, we simply enqueue these bundled files, as before, instead of all the independent resources. For CSS, we create a bundle containing files header-colors.css, background-image.css and font-sizes.css, for which we simply instantiate StylesSiteBundle with the PHP object for each of these files (and likewise we can create the JS bundle file):

$fileObjects = array(
  // CSS
  new HeaderColorsSiteResource(),
  new BackgroundImageSiteResource(),
  new FontSizesSiteResource(),
  // JS
  new WelcomeUserDataUserResource(),
  new UserShoppingItemsUserResource()
);
// As above, array_filter (not array_map) keeps only the resources of each type
$cssFileObjects = array_filter($fileObjects, array(ResourceUtils::class, 'is_css'));
$jsFileObjects = array_filter($fileObjects, array(ResourceUtils::class, 'is_js'));

// Use this definition of $fileEnqueuer instead of the previous one
$fileEnqueuer = new ResourceEnqueuer(
  array(
    new StylesSiteBundle($cssFileObjects),
    new ScriptsSiteBundle($jsFileObjects)
  )
);

That’s it. Now we will be requesting only one JS file and one CSS file instead of many.

A final improvement for perceived performance involves prioritizing assets, by delaying loading those assets which are not needed immediately. Let’s tackle this next.

async/defer Attributes For JS Resources

We can add attributes async and defer to the <script> tag, to alter when the JavaScript file is downloaded, parsed and executed, as to prioritize critical JavaScript and push everything non-critical for as late as possible, thus decreasing the site’s apparent loading time.

To implement this feature, following the SOLID principles, we could create a new interface JSResource (which inherits from Resource) containing functions is_async and is_defer. However, this would close the door to <link> tags eventually supporting attributes of their own too. So, with adaptability in mind, we take a more open-ended approach: we simply add a generic method get_attributes to interface Resource, keeping it flexible enough to add any attribute (either already existing ones or yet to be invented) to both <script> and <link> tags:

interface Resource {

  // Continued from above...

  function get_attributes();
}

abstract class ResourceBase implements Resource {

  // Continued from above...

  function get_attributes() {
    // By default, no extra attributes
    return '';
  }
}

WordPress doesn’t offer an easy way to add extra attributes to the enqueued resources, so we do it in a rather hacky way, adding a hook that replaces a string inside the tag through function add_script_tag_attributes:

class ResourceEnqueuerUtils {
  protected static $tag_attributes = array();

  static function add_tag_attributes($handle, $attributes) {
    self::$tag_attributes[$handle] = $attributes;
  }

  static function add_script_tag_attributes($tag, $handle, $src) {
    if ($attributes = self::$tag_attributes[$handle]) {
      $tag = str_replace(
        " src='${src}'>",
        " src='${src}' ".$attributes.">",
        $tag
      );
    }

    return $tag;
  }
}

// Initialize by connecting to the WordPress hook
add_filter(
  'script_loader_tag',
  array(ResourceEnqueuerUtils::class, 'add_script_tag_attributes'),
  PHP_INT_MAX,
  3
);

We add the attributes for a resource when creating the corresponding object instance:

abstract class ResourceBase implements Resource {

  // Continued from above...

  function __construct() {
    ResourceEnqueuerUtils::add_tag_attributes($this->get_filename(), $this->get_attributes());
  }
}

Finally, if resource welcomeuser-data.js doesn’t need to be executed immediately, we can then set it as defer:

class WelcomeUserDataUserResource extends UserResourceBase {

  // Continued from above...

  function get_attributes() {
    return "defer='defer'";
  }
}

Because it is deferred, the script will be executed later, bringing forward the point in time at which the user can interact with the site. Concerning performance gains, we are all set now!

There is one issue left to resolve before we can relax: what happens when the site is hosted on multiple servers?

Dealing With Multiple Servers Behind A Load Balancer

If our site is hosted on several servers behind a load balancer, and a file that depends on user configuration is regenerated, the server handling the request must, somehow, upload the regenerated static file to all of the other servers; otherwise, the other servers will serve a stale version of that file from that moment on. How do we do this? Having the servers communicate with each other is not just complex, but may ultimately prove unfeasible: What happens if the site runs on hundreds of servers, across different regions? Clearly, this is not an option.

The solution I came up with is to add a level of indirection: instead of requesting the static files from the site URL, they are requested from a location in the cloud, such as from an AWS S3 bucket. Then, upon regenerating the file, the server will immediately upload the new file to S3 and serve it from there. The implementation of this solution is explained in my previous article Sharing Data Among Multiple Servers Through AWS S3.

Conclusion

In this article, we have considered that inlining JS and CSS code is not always ideal, because the code must be sent repeatedly to the client, which can hurt performance if the amount of code is significant. We saw, as an example, how WordPress loads 43kb of scripts to print the Media Manager, which are pure JavaScript templates and could perfectly well be loaded as static resources.

Hence, we have devised a way to make the website faster by transforming the dynamic JS and CSS inline code into static resources. Static resources can be cached at several levels (in the client, through service workers, on a CDN). They allow us to bundle all files together into a single JS/CSS resource, improving the compression ratio of the output (such as through GZip) and avoiding the overhead browsers incur when processing several resources concurrently (such as in Chrome). And they additionally allow us to add the async or defer attributes to the <script> tag to speed up user interactivity, thus improving the site’s apparent loading time.

As a beneficial side effect, splitting the code into static resources also allows the code to be more legible, dealing with units of code instead of big blobs of HTML, which can lead to a better maintenance of the project.

The solution we developed was done in PHP and includes a few bits of code specific to WordPress. However, the code itself is extremely simple: barely a few interfaces defining properties, objects implementing those properties following the SOLID principles, and a function to save a file to disk. That’s pretty much it. The end result is clean and compact, straightforward to recreate for any other language and platform, and not difficult to introduce to an existing project, providing easy performance gains.

(rb, ra, yk, il)
Categories: Around The Web

Monthly Web Development Update 11/2018: Just-In-Time Design And Variable Font Fallbacks

Smashing Magazine - Fri, 11/23/2018 - 8:00am
Monthly Web Development Update 11/2018: Just-In-Time Design And Variable Font Fallbacks Monthly Web Development Update 11/2018: Just-In-Time Design And Variable Font Fallbacks Anselm Hannemann 2018-11-23T14:00:24+01:00 2018-12-13T10:58:14+00:00

How much does design affect the perception of our products and the users who interact with them? To me, it’s getting clearer that design makes all the difference and that unifying designs to a standard model like the Google Material Design Kit doesn’t work well. By using it, you’ll get a decent design that works from a technical perspective, of course. But you won’t create a unique experience with it, an experience that lasts or that reaches people on a personal level.

Now think about which websites you visit and if you enjoy being there, reading or even contributing content to the service. In my opinion, that’s something that Instagram manages to do very well. Good design fits your company’s purpose and adjusts to what visitors expect, making them feel comfortable where they are and enabling them to connect with the product. Standard solutions, however, might be nice and convenient, but they’ll always have that anonymous feel to them which prevents people from really caring for your product. It’s in our hands to shape a better experience.

News
  • Yes, Firefox 63 is here, but what does it bring? Web Components support, including Custom Elements with built-in extends and Shadow DOM. The prefers-reduced-motion media query is supported now, too, the Developer Tools have gotten a font editor to make playing with web typography easier, and the accessibility inspector is enabled by default. The img element now supports the decoding attribute, which accepts sync, async, or auto values to hint the preferred decoding timing to the browser. Flexbox got some improvements as well, now supporting the gap (row-gap, column-gap) properties. And last but not least, the Media Capabilities API, the Async Clipboard API, and the SecurityPolicyViolationEvent interface (which allows us to report CSP violations) have also been added. Wow, what a release!
  • React 16.6 is out — that doesn’t sound like big news, does it? Well, this minor update brings React.lazy(), a method you can use to do code-splitting by wrapping a dynamic import in a call to React.lazy(). A huge step for better performance. There are also a couple of other useful new things in the update.
  • The latest Safari Tech Preview 68 brings <input type="color"> support and changes the default behavior of links that have target="_blank" to get the rel="noopener" as implied attribute. It also includes the new prefers-color-scheme media query which allows developers to adapt websites to the light or dark mode settings of macOS.
  • From now on, PageSpeed Insights, likely still the most commonly used performance analysis tool by Google, is now powered by project Lighthouse which many of you have already used additionally. A nice iteration of their tool that makes it way more accurate than before.

General
  • Explore structured learning paths to discover everything you need to know about building for the modern web. web.dev is the new resource by the Google Web team for developers.
  • No matter how you feel about Apple Maps (I guess most of us have experienced moments of frustration with it), but this comparison about the maps data they used until now and the data they currently gather for their revamped Maps is fascinating. I’m sure that the increased level of detail will help a lot of people around the world. Imagine how landscape architects could make use of this or how rescue helpers could profit from that level of detail after an earthquake, for example.
From fast load times to accessibility — web.dev helps you make your site better.

HTML & SVG
  • Andrea Giammarchi wrote a polyfill library for Custom Elements that allows us to extend built-in elements in Safari. This is super nice as it allows us to extend native elements with our own custom features — something that works in Chrome and Firefox already, and now there’s this little polyfill for other browsers as well.
  • Custom elements are still very new and browser support varies. That’s why this html-parsed-element project is useful as it provides a base custom element class with a reliable parsedCallback method.
JavaScript

UI/UX
  • How do you build a color palette? Steve Schoger from RefactoringUI shares a great approach that meets real-life needs.
  • Matthew Ström’s article “Just-in-time Design” mentions a solution to minimize the disconnection between product design and product engineering. It’s about adopting the Just-in-time method for design. Something that my current team was very excited about and I’m happy to give it a try.
  • HolaBrief looks promising. It’s a tool that improves how we create design briefs, keeping everyone on the same page during the process.
  • Mental models are explanations of how we see the world. Teresa Man wrote about how we can apply mental models to product design and why it matters.
  • Shelby Rogers shares how we can build better 404 error pages.
Steve Schoger looks into color palettes that really work. (Image credit)

Tooling
  • The color palette generator Palx lets you enter a base hex value and generates a full color palette based on it.
Security
  • This neat Python tool is a great XSS detection utility.
  • Svetlin Nakov wrote a book about Practical Cryptography for Developers which is available for free. If you ever wanted to understand or know more about how private/public keys, hashing, ciphers, or signatures work, this is a great place to start.
  • Facebook claimed that they’d reveal who pays for political ads. Now VICE researched this new feature and posed as every single one of the current 100 U.S. senators to run ads ‘paid by them’. Pretty scary to see how one security failure that gives users more power than intended can change world politics.
Privacy
  • I don’t like linking to paid, restricted articles, but this one made me think, and you don’t need the full story to follow me. When Tesla announced that they’d ramp up Model 3 production to 24⁄7, a lot of people wanted to verify this, and a company that makes money by providing geolocation data captured smartphone location data from the workers around the Tesla factories to confirm whether this could be true. Another sad story of how easy it is to track someone without consent, even though this is more a case of mass surveillance than individual tracking.
Web Performance
  • Addy Osmani shares a performance case study of Netflix to improve Time-to-Interactive of the streaming service. This includes switching from React and other libraries to plain JavaScript, prefetching HTML, CSS, and (React) JavaScript and the usage of React.js on the server side. Quite interesting to see so many unconventional approaches and their benefits. But remember that what works for others doesn’t need to be the perfect approach for your project, so take it more as inspiration than blindly copying it.
  • Harry Roberts explains all the details that are important to know about CSS and Network Performance. A comprehensive collection that also provides some very interesting tips for when you have async scripts in your code.
  • I love the tiny ImageOptim app for batch optimizing my images for web distribution. But now there’s an impressive web app called “Squoosh” that lets you optimize images perfectly in your web browser and, as a bonus, you can also resize the image and choose which compression to use, including mozJPEG and WebP. Made by the Google Chrome team.
CSS

How to design for dark mode while maintaining accessibility, readability, and a consistent feel for your brand? Andy Clarke shares some valuable tips. (Image credit)

Work & Life

Going Beyond…
  • Neil Stevenson on Steve Jobs, creativity and death and why this is a good story for life. Although copying Steve Jobs is likely not a good idea, Neil provides some different angles on how we might want to work, what to do with our lives, and why purpose matters for many of us.
  • Ryan Broderick reflects on what we did by inventing the internet. He concludes that all that radicalism in the world, those weird political views are all due to the invention of social media, chat software and the (not so sub-) culture of promoting and embracing all the bad things happening in our society. Remember 4chan, Reddit, and similar services, but also Facebook et al? They contribute and embrace not only good ideas but often stupid or even harmful ones. “This is how we radicalized the world” is a sad story to read but well-written and with a lot of inspiring thoughts about how we shape society through technology.
  • I’m sorry, this is another link about Bitcoin’s energy consumption, but it shows that Bitcoin mining alone could raise global temperatures above the critical limit (2°C) by 2033. It’s time to abandon this inefficient type of cryptocurrency. Now.
  • Wilderness is something special. And our planet has less and less of it, as this article describes. The map reveals that only very few countries have a lot of wilderness these days, giving rare animals and species a place to live, giving humans a way to explore nature, to relax, to go on adventures.
  • We definitely live in exciting times, but it makes me sad when I read that in the last forty years, wildlife population declined by 60%. That’s a pretty massive scale, and if this continues, the world will be another place when I’m old. Yes, when I am old, a lot of animals I knew and saw in nature will not exist anymore by then, and the next generation of humans will not be able to see them other than in a museum. It’s not entirely clear what the reasons are, but climate change might be one thing, and the ever-growing expansion of humans into wildlife areas probably contributes a lot to it, too.
(cm, il)
Categories: Around The Web

Happy First Anniversary, Smashing Members!

Smashing Magazine - Wed, 11/21/2018 - 8:00am
Happy First Anniversary, Smashing Members! Happy First Anniversary, Smashing Members! Bruce Lawson 2018-11-21T14:00:59+01:00 2018-12-13T10:58:14+00:00

Doesn’t time fly? And don’t ships sail? A year ago, we launched our Smashing Membership programme so that members of the Smashing readership could support us for a small amount of money (most people pay $5 or $9 a month, and can cancel at any time). In return, members get access to our ebooks, members-only webinars, discounts on printed books and conferences, and other benefits.

We did this because we wanted to reduce advertising on the site; ad revenues were declining, and the tech-savvy Smashing audience was becoming increasingly aware of the security and privacy implications of ads. And we were inspired by the example of The Guardian, a British newspaper that decided to keep its content outside a paywall but ask readers for support. Just last week, the Guardian’s editor-in-chief revealed that they have the financial support of 1 million people.

Welcome aboard — we’re celebrating! It’s the first year of Smashing Membership (or Smashing Members’ Ship... get it?)!

Into Year Two

We recently welcomed Bruce Lawson to the team as our Membership Commissioning Editor. Bruce is well known for his work on accessibility and web standards, as well as his fashion blog and world-class jokes.

So now that the team is larger, we’ll be bringing you more content — going up to three webinars a month. The price stays the same. And, of course, we’d love your input on subjects or speakers — let us know on Slack.

When we set up Membership, we promised that it would be an inclusive place where lesser-heard voices (in addition to big names) would be beamed straight to your living room/ home office/ sauna over Smashing TV. Next month, for example, Bruce is pleased to host a webinar by Eka, Jing, and Sophia from Indonesia, Singapore, and the Philippines to tell us about the state of the web in South East Asia. Perhaps you’d like to join us?

Please consider becoming a Smashing Member. Your support allows us to bring you great content, pay all our contributors fairly, and reduce advertising on the site.

Thank you so much to all who have helped to make it happen! We sincerely appreciate it.

(bl, sw, il)

Categories: Around The Web