Planet Drupal

Simple Website Approach Using a Headless CMS: Part 1

I strongly believe that the path to innovation requires a mix of experimentation, sweat, and failure. Without experimenting with new solutions, new technologies, and new tools, we limit our ability to improve, arresting our potential to be better and faster, and sadly ensuring that we stay rooted in systems, processes and...


The Long Road to Drupal 9 (by catch, Fri, 05/25/2018 - 05:33)

One spring Saturday morning, we brought together experienced and beginning developers to talk about Drupal, share how to get to DrupalCon, and charge each other with inspiration.

Check out what Drupal Cafe was like

 

Mediacurrent team members will be heading to the mountains of North Carolina for a two-day, Drupal-filled event for all levels of Drupal skill. The Asheville Drupal User Group is a small but dedicated community of Drupalers who will host their 8th annual Asheville Drupal Camp on July 13-15 at UNC Asheville. Mediacurrent is sponsoring the event and will have six team members presenting sessions; Mediacurrent Lead Drupal Architect April Sides is even one of the event's organizers. From technical Drupal development to making friends in a remote workplace, check out what Mediacurrent has in store for Asheville Drupal Camp 2018:

Demystifying Decoupled Drupal with Contenta CMS

Speakers: Mark Shropshire, Open Source Security Lead at Mediacurrent and Bayo Fodeke, Senior Drupal Developer at Mediacurrent

Contenta is an open source, API-first Drupal distribution that makes decoupled Drupal accessible out of the box. This session will demonstrate installing Contenta, working with its included features, using the demo content and consumers, and working with the Contenta community.

Takeaways:

  • Install Contenta
  • Know how to contribute back to Contenta
  • Know how to connect a frontend application to a Contenta backend
Pivoting in a Project: Strategies for adjusting to scope changes

Speakers: Brian Manning, Project Manager at Mediacurrent and Kelly Dassing, Senior Project Manager at Mediacurrent

Pivots come in a variety of shapes and sizes. They can be a minor change that’s quickly integrated into scope, or a major departure that alters the entire course of the project. When you encounter these shifts, it’s vital you strategize, communicate, and continue to capture the vision of the client so the final product is a solid foundation for your client’s goals and KPIs—not a point of resentment.

Key points:

  • Kicking off the project with an organized team and plan of attack
  • Communicating with your whole team and the client
  • Being ready to PIVOT
  • Keeping your team grounded in the delivery
  • Conducting a retrospective and additional planning—not a postmortem
GatsbyJS: A Powerful FE Tool for Decoupled Devs

Speaker: Grayson Hicks, Front End Developer at Mediacurrent

GatsbyJS is an exciting way of thinking about building sites for the modern web. Is it a framework? Is it a static site generator? This session will cover the benefits of using GatsbyJS, including its best (and not-so-great) use cases.

Key points:

  • What Gatsby's GraphQL data layer is and how and why to embrace it
  • Gatsby's internal API for building a Gatsby starter to fit your team
  • Looking at Gatsby's plugin/source/transformer system for taking Gatsby from a blog generator to a site generator
Common Accessibility Mistakes and How to Avoid Them

Speaker: Ben Robertson, Front End Developer at Mediacurrent

Accessible web design really boils down to a few basic principles, and when you treat them as first principles at the start of a project, you can save yourself, your team, and your clients hours of headaches around accessibility testing.

This presentation will describe a few basic principles to keep in mind when building accessible web experiences, provide explanations and examples of these principles in code, and identify common accessibility pitfalls and how to avoid them.

Topics covered:

  • Simple JavaScript techniques for ensuring accessible components
  • CSS properties that affect accessibility
  • How to use modern CSS (flexbox, grid) without compromising accessibility
Only load what you need with Drupal 8 Libraries

Speaker: Zack Hawkins, Director of Front End Development at Mediacurrent

Gone are the days of having one massive JavaScript or CSS file. This session will explore the use of libraries to conditionally load assets and resolve dependencies (a short sketch of conditional attachment follows the topic list below).

Key topics:

  • Introduction to libraries in Drupal 8.
  • Library options and configuration.
  • What a component based workflow looks like with libraries.
  • Code splitting with Webpack and libraries.
  • Library gotchas and things to be aware of.
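For readers new to Drupal 8 libraries, here is a minimal sketch, not taken from the session itself, of conditionally attaching a library from a preprocess hook so that its CSS and JavaScript only load where they are needed. The theme name "mytheme" and library name "slideshow" are hypothetical.

/**
 * Implements hook_preprocess_node() for a hypothetical "mytheme" theme.
 * The "mytheme/slideshow" library would be defined in mytheme.libraries.yml
 * with its CSS/JS entries and any dependencies (e.g. core/drupal).
 */
function mytheme_preprocess_node(array &$variables) {
  // Attach the slideshow assets only when rendering article nodes, rather
  // than loading them globally on every page.
  if ($variables['node']->bundle() === 'article') {
    $variables['#attached']['library'][] = 'mytheme/slideshow';
  }
}

Drupal bubbles the #attached libraries from preprocess variables into the page's render array, so the assets (and anything the library depends on) are only delivered on pages where the library was attached.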
Friends Inside My Computer: Making Connections in a Remote Workplace

Speakers: Kelly Dassing, Senior Project Manager at Mediacurrent, Chris Manning, Director of QA at Mediacurrent, and Sam Seide, Drupal Developer at Mediacurrent

Hear the story of the real-life friendship that blossomed between these three Mediacurrent team members from different departments and how it helps them in their day-to-day work.

This session will be best appreciated by anyone who is a remote worker, whether employed by a small company or larger corporation.
 

Additional Resources

"Shrop" Talk at Drupal Camp Asheville 2016
Drupal Camp Asheville 2016
Mediacurrent to Present 7 Sessions at Drupalcon Nashville

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Last weekend, over 29 million people watched the Royal Wedding of Prince Harry and Meghan Markle. While there is a tremendous amount of excitement surrounding the newlyweds, I was personally excited to learn that the royal family's website is built with Drupal! Royal.uk is the official website of the British royal family, and is visited by an average of 12 million people each year. Check it out at https://www.royal.uk!


This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Since the release of Drupal 8.0.0 in November 2015, the Drupal 8 core committers have been discussing when and how we'll release Drupal 9. Nat Catchpole, one of Drupal 8's core committers, shared some excellent thoughts about what goes into making that decision.

The driving factor in that discussion is security support for Drupal 8's third-party dependencies (e.g., Symfony, Twig, Guzzle, and jQuery). Our top priority is to ensure that all Drupal users are using supported versions of these components so that all Drupal sites remain secure.

In his blog, Nat uses Symfony as an example. The Symfony project announced that it will stop supporting Symfony 3 in November 2021, which means that Symfony 3 won't receive security updates after that date. Consequently, by November 2021, we need to prepare all Drupal sites to use Symfony 4 or later.

Nothing has been decided yet, but the current thinking is that we have to move Drupal to Symfony 4 or later, release that as Drupal 9, and allow enough time for everyone to upgrade to Drupal 9 by November 2021. Keep in mind that this is just looking at Symfony, and none of the other components.

This proposal builds on top of work we've already done in the context of making Drupal upgrades easy, so upgrades from Drupal 8 to Drupal 9 should be smooth and much simpler than previous upgrades.

If you're interested in the topic, check out Nat's post. He goes into more detail about potential release timelines, including how this impacts our thinking about Drupal 7, Drupal 8 and even Drupal 10. It's a complicated topic, but the goal of Nat's post is to raise awareness and to solicit input from the broader community before we decide our official timeline and release dates on Drupal.org.

Search is an important facet of any large website these days. We’d talked previously about why you want to take full control of your site search. Bombarding your users with a mess of links won’t do anyone any favors. One of our favorite solutions for this problem is Apache Solr and recently we had the opportunity to set it up on Drupal 8. Let’s take a moment to go through a bit of what that solution looked like and some thoughts along the way.

With Europe threatening $25,000,000 fines and Facebook losing $80,000,000,000 of stock value, are you paying attention to data privacy yet? If millions and billions of dollars in news headlines never grabbed you, maybe you've noticed the dozens of e-mails from services you'd forgotten ever signing up for, declaring how much they respect your right to control your data. These e-mails are silly and possibly illegal, but they nonetheless welcome us to a better world of greater privacy rights and people's control of their own data that we web developers should embrace.

The huge potential fines (for large companies, the sky's the limit at four percent of global revenue) come from the European Union's General Data Protection Regulation, and they signal that the GDPR is more than a suggestion. If you're not a European-based company, the European Union does not intend to discriminate: You're still liable when citizens of member states use your services or are monitored by you.

Don't lose sleep for Facebook's wealthy stockholders. That sizeable dip in Facebook stock was not due to the impending GDPR enforcement, but came in the wake of the Cambridge Analytica scandal. Since then, the privacy-invading monopoly so many rich people are betting on regained its market cap and then some. (GDPR-related lawsuits are just starting.)

There are a lot of good resources for GDPR-proofing existing sites (see the bottom of this article); the work ranges from trivial for most sites to monumental for web developers who, fortunately for me, aren't me (and who have finished their labor, I hope, as GDPR enforcement took effect today).

The fun and exciting part starts when we get to build new sites or new features on existing sites and from the beginning put privacy by design into practice (which also is in the law). And yes, I'm referring to complying with a continental government's regulations as fun and exciting.

This goes well beyond an organization's web site, of course. Web developers may be the ones to introduce it to organizations, though, so we should be prepared. Here's the gist.

Organizations must request any personal data in clear and plain language describing the specific pieces of information and how it will be used, such that consent can be given freely and unambiguously through an affirmative action.

This means you need to always think about why you are collecting information, avoid collecting information you don't need, and delete any personal information you no longer need. You can collect nearly anything if you get clear consent, but if you have a legitimate business interest in the data you collect, you'll have even fewer requirements, and the people who use your site or service will have a smoother experience.

You further need to allow people to export their personal data, to rectify inaccurate data, and to challenge decisions you make on the basis of their personal data. If you don't have a legitimate business interest for the data (or it's overridden by people's rights), then you must also provide a mechanism for people to erase their data.

If your business interests involve spying, lying, or trying to manipulate people into bad financial, personal, and political decisions, maybe rethink your business. At the very least, try to avoid becoming part of the infrastructure for a police state.

It's GDPR day, a wonderful opportunity to think ethically, and explore another way to put your customers, clients, or constituents first!

Resources

From most thorough to most practical.

Earlier this year Lullabot was engaged to do a redesign of Pantheon’s homepage and several landing pages. If you’re not familiar with Pantheon, they’re one of the leading hosting companies in the Drupal and WordPress space. The timeline for this project was very ambitious—a rollout needed to happen before Drupalcon Nashville. Being longtime partners in the Drupal community, we were excited to take this on.

The front-end development team joined while our design team was still hard at work. Our tasks were to 1) get familiar with the current Drupal 7 codebase, and 2) make performance improvements where possible. All of this came with the caveat that when the designs landed, we were expected to move on them immediately.

Jumping into Pantheon’s codebase was interesting. It’s a situation that we’ve seen hundreds of times: the codebase starts off modularly, but as time progresses and multiple developers work on it, the codebase grows larger, slower, and more unwieldy. This problem isn’t specific to Drupal – it’s a common pandemic across many technologies.

Defining website speed

Website speed is a combination of back-end speed and browser rendering speed, which is determined by the front-end code. Pantheon’s back-end speed is pretty impressive. In addition to being built atop Pantheon’s infrastructure, they have integrated Fastly CDN with HTTP/2.

80-90% of the end-user response time is spent on the front-end. Start there. – Steve Souders

In a normal situation, the front-end accounts for 80% of the time it takes for the browser to render a website. In our situation, it was more like 90%.

What this means is that there are a lot of optimizations available to make the website even faster. Exciting!

Identifying the metrics we’ll use

For this article, we’re going to concentrate on four major website speed metrics:

  1. Time to First Byte – This is the time it takes for the server to get the first byte of the HTML to you. In our case, Pantheon’s infrastructure has this handled extremely well.
  2. Time to First Paint – This is when the browser has finished the initial layout of the DOM and begins to paint the screen.
  3. Time to Last Hero Paint – This is the time it takes for the browser to finish painting the hero in the initial viewport.
  4. Time to First Interactive – This is the most important metric. This is when the website is painted and becomes usable (buttons clickable, JavaScript working, etc.).

We’re going to be looking at Chrome Developer Tools’ Performance Panel throughout this article. This profile was taken on the pre-design version of the site and identifies these metrics.

[Screenshot: Chrome Developer Tools Performance panel profile, annotated with the four metrics above]

Front-end Performance with H2

HTTP/2 (aka H2) is a new-ish technology on the web that governs how servers transfer data back and forth with the browser. With the previous version of HTTP (1.1), browsers could only fetch a limited number of resources in parallel, and each new connection required its own TCP handshake, introducing additional latency. To mitigate this, front-end developers would aggregate resources into as few files as possible. For example, we would combine multiple CSS files into one monolithic file, or combine multiple small images into one “sprite” image and display only parts of it at a time. These “hacks” cut down on the number of HTTP requests.

With H2, these concerns are largely mitigated by the protocol itself, leaving the front-end developer free to split resources into their own logical files: H2 multiplexes multiple files over a single TCP connection. Because Pantheon has H2 integrated with its global CDN, we were able to take advantage of these new optimizations.

Identifying the problem(s)

The frontend was not optimized. Let’s take a look at the network tab in Chrome Developer Tools:

[Screenshot: Chrome Developer Tools Network tab for the homepage]

The first thing to notice is that the aggregated CSS bundle is 1.4MB decompressed. This is extremely large for a CSS file, and merited further investigation.

The second thing to observe: the site is loading some JavaScript files that may not be in use on the homepage.

Let's optimize the CSS bundle first

A CSS bundle of 1.4MB is mongongous and hurts our Time to First Paint metric, as well as all metrics that occur afterward. In addition to having to download a larger file, the browser must also parse and interpret it.

Looking deeper into this CSS bundle, we found many base64-encoded and URL-encoded SVG images embedded in it. Under HTTP/1.1 this saves a round-trip request to the server for each resource, but it means the browser downloads every image regardless of whether it's needed on that page. Furthermore, we discovered many of these images belonged to components that weren't in use anymore.

Similarly, various landing pages were only using a small portion of the monolithic CSS file. To decrease the CSS bundle size, we initially took a two-pronged approach:

  1. Extract the embedded images, and reference them via a standard CSS background-image property. Because Pantheon comes with integrated HTTP/2, there’s virtually no performance penalty to load these as separate files.
  2. Split the monolithic CSS file into multiple files, and load those smaller files as needed, similar to the approach taken by modern “code splitting” tools.

Removing the embedded images dropped the bundle to about 700KB—a big improvement, but still a very large CSS file. The next step was “code splitting” the CSS bundle into multiple files. Pantheon’s codebase makes use of various Sass partials that could have made this relatively simple. Unfortunately, the Sass codebase had very large partials and limited in-code documentation.

It’s hard to write maintainable Sass. CSS by its very nature “cascades” down the DOM tree into various components. It’s also next-to-impossible to know where exactly in the HTML codebase a specific CSS selector exists. Pantheon’s Sass codebase was a perfect example of these common problems. When moving code around, we couldn't anticipate all the potential visual modifications.

Writing maintainable Sass

Writing maintainable Sass is hard, but it’s not impossible. To do it effectively, you need to make your code easy to delete. You can accomplish this through a combination of methods:

  • Keep your Sass partials small and modular. Start thinking about splitting them up when they reach over 300 lines.
  • Write detailed comments for each component describing what it is, what it looks like, and when it's used.
  • Use a naming convention like BEM—and stick to it.
  • Add inline comments explaining anything that's not standard, such as !important declarations, magic numbers, hacks, etc.
Refactoring the Sass codebase

Refactoring a tangled Sass codebase takes a bit of trial and error, but it wouldn’t have been possible without a visual regression system built into the continuous integration (CI) system.

Fortunately, Pantheon had BackstopJS integrated within their CI pipeline. This system looks at a list of representative URLs at multiple viewport widths. When a pull request is submitted, it uses headless Chrome to reach out to the reference site, and compare it with a Pantheon Multidev environment that contains the code from the pull request. If it detects a difference, it will flag this and show the differences in pink.


Through blood, sweat, and tears, we were able to significantly refactor the Sass codebase into 13 separate files. Then, in Drupal, we wrote a custom preprocess function that loads only the CSS needed for each content type.
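As a rough sketch of that idea (the hook body and file names are illustrative, not Pantheon's actual code; "zeus" is the theme name visible in the asset paths later in this article), a Drupal 7 preprocess function can add a shared base stylesheet everywhere and a per-content-type stylesheet only where it applies:

/**
 * Implements hook_preprocess_html() (Drupal 7 sketch with illustrative file names).
 */
function zeus_preprocess_html(&$variables) {
  $theme_path = drupal_get_path('theme', 'zeus');
  // Styles shared by every page.
  drupal_add_css($theme_path . '/css/base.css', array('group' => CSS_THEME, 'every_page' => TRUE));
  // Add the stylesheet for the content type being viewed, if one exists.
  if ($node = menu_get_object()) {
    $file = $theme_path . '/css/node--' . $node->type . '.css';
    if (file_exists($file)) {
      drupal_add_css($file, array('group' => CSS_THEME));
    }
  }
}

Each page then downloads only the base file plus the one partial it actually needs.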

From this, we were able to bring the primary CSS file down to a manageable 400KB (and only 70KB over the wire). While still massive, it’s no longer technically mongongous. There are still some optimizations that can be made, but through these methods, we decreased the size of this file to a third of its original size.

Optimizing the JavaScript stack

Pantheon’s theme was created in 2015 and made heavy use of jQuery, which was a common practice at the time. While we didn’t have the time or budget to refactor jQuery out of the site, we did have time to make some simple, yet significant, optimizations.

JavaScript files take much more effort for the browser to interpret than a similarly sized image or media file. Each JavaScript file has to be compiled on the fly and then executed, which ties up the main thread and delays the Time to First Interactive metric. This can cause the browser to become unresponsive and lock up, and it is especially noticeable on low-powered devices such as mobile phones.

JavaScript files included in the header will also block rendering of the layout, which delays the Time to First Paint metric.


The easiest way to avoid delaying these metrics is simple: remove unneeded JavaScript. To do this, we inventoried the libraries currently in use and made a list of the selectors that instantiate their various methods. Next, we scraped the entire HTML of the website using Wget and cross-referenced the scraped HTML against those selectors.

# Recursively scrape the site, ignore specific extensions, and append the .html extension.
wget http://panther.local -r -R gif,jpg,pdf,css,js,svg,png --adjust-extension

Once we had a list of where and when we needed each library, we modified Drupal’s preprocess layer to load them only when necessary. Through this, we reduced the JavaScript bundle size by several hundred kilobytes! In addition, we moved as many JavaScript files into the footer as possible (thus improving Time to First Paint).
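A hedged sketch of what that conditional loading can look like in a Drupal 7 theme follows; the file names and the front-page check are illustrative, not the project's actual rules.

/**
 * Implements hook_preprocess_page() (Drupal 7 sketch; paths are illustrative).
 */
function zeus_preprocess_page(&$variables) {
  $theme_path = drupal_get_path('theme', 'zeus');
  // Only the homepage hero instantiates the carousel, so attach it there
  // instead of shipping it on every page, and scope it to the footer so it
  // never blocks the initial render.
  if (drupal_is_front_page()) {
    drupal_add_js($theme_path . '/js/vendor/carousel.min.js', array('scope' => 'footer'));
    drupal_add_js($theme_path . '/js/hero-carousel.js', array('scope' => 'footer'));
  }
}

The 'footer' scope is what keeps the scripts from blocking layout, which is the Time to First Paint improvement mentioned above.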

Additional front-end performance wins

After the new designs were rolled out, we took an additional look at the site from a performance standpoint.

[Screenshot: Chrome Developer Tools Performance panel showing a late layout operation]

You can see in the image above that there’s an additional layout operation happening late in the rendering process. If we look closely, Chrome Developer Tools allows us to track this down.

The cause of this late layout operation is a combination of factors. First, we're using the font-display: swap; declaration. This declaration tells the browser to render the page using system fonts first, then re-render it once the webfonts have downloaded. This is good practice because it can dramatically decrease the Time to First Paint metric on slow connections.

Here, the primary cause of the late layout operation is that the webfonts were downloaded late. The browser has to download and parse the CSS bundle and then reconcile it with the DOM before it can even begin downloading the webfonts. This eats up vital milliseconds.

[Screenshot: Network tab showing low-priority images downloading before the high-priority webfonts]

In the image above, note the low-priority images being downloaded before the high-priority webfonts. In this case, the first webfont was the 52nd resource to be downloaded!

Preloading webfonts via resource hints

What we really need to do is get our webfonts loaded earlier. Fortunately, we can preload our webfonts with resource hints!

<link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic/360074_3_0.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic/360074_2_0.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic/360074_4_0.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic/360074_1_0.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/sites/all/themes/zeus/fonts/tablet_gothic_condensed/360074_5_0.woff2" as="font" type="font/woff2" crossorigin>

Yes! With resource hints, the browser knows about the font files immediately and will download them as soon as it is able to. This eliminates the re-layout and the associated flash of unstyled text (FOUT). It also improves the Time to Last Hero Paint metric.
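If you would rather not hard-code those tags in a template, one possible Drupal 7 approach (a sketch, not necessarily how Pantheon's theme does it) is to emit them with drupal_add_html_head_link(), for example from the preprocess implementation sketched earlier:

/**
 * Illustrative helper: adds preload hints for the webfonts shown above.
 */
function zeus_preload_webfonts() {
  $fonts = array(
    '/sites/all/themes/zeus/fonts/tablet_gothic/360074_3_0.woff2',
    '/sites/all/themes/zeus/fonts/tablet_gothic/360074_2_0.woff2',
    '/sites/all/themes/zeus/fonts/tablet_gothic/360074_4_0.woff2',
    '/sites/all/themes/zeus/fonts/tablet_gothic/360074_1_0.woff2',
    '/sites/all/themes/zeus/fonts/tablet_gothic_condensed/360074_5_0.woff2',
  );
  foreach ($fonts as $href) {
    // Renders a <link> element in the document <head>.
    drupal_add_html_head_link(array(
      'rel' => 'preload',
      'href' => $href,
      'as' => 'font',
      'type' => 'font/woff2',
      // Font preloads must be requested in anonymous CORS mode.
      'crossorigin' => 'anonymous',
    ));
  }
}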

Preloading responsive images via resource hints

Let’s look at the Film Strip view within Chrome Developer Tools’ Network tab. To open it, click the toolbar icon that looks like a camera. When we refresh, we can see that everything loads quickly—except for the hero’s background image. This affects the Time to Last Hero Paint metric.


If we hop over to the Network tab, we see that the hero image was the 65th resource to be downloaded. This is because the image is referenced via the CSS background-image property: to download the image, the browser must first download and interpret the CSS bundle and then cross-reference that with the DOM.

We need a way to apply resource hinting to this image. However, to save bandwidth, we load different versions of the background image depending on the screen width. To make sure we download the correct version, we include the media attribute on each link element.

We need to make sure we don’t load multiple versions of the same background image, and we certainly do not want to fetch the unneeded resources.

  <link rel="preload" href="/<?php print $zeus_theme_path; ?>/images/new-design/homepage/hero-image-primary--small.jpg" as="image" media="(max-width: 640px)">
  <link rel="preload" href="/<?php print $zeus_theme_path; ?>/images/new-design/homepage/hero-image-primary--med.jpg" as="image" media="(min-width: 640px) and (max-width: 980px)">
  <link rel="preload" href="/<?php print $zeus_theme_path; ?>/images/new-design/homepage/hero-image-primary--med-large.jpg" as="image" media="(min-width: 980px) and (max-width: 1200px)">
  <link rel="preload" href="/<?php print $zeus_theme_path; ?>/images/new-design/homepage/hero-image-primary.jpg" as="image" media="(min-width: 1200px)">

At this point, the correct background image for the viewport is downloaded immediately after the fonts, which completely removes the FOUT—at least on fast connections.

Conclusion

Front-end website performance is a constantly moving target, but it is critical to the overall speed of your site. Best practices evolve, and modern browsers keep adding performance features, along with the tools needed to identify problems and optimize rendering. These optimizations don’t have to be difficult, and they can typically be done in a matter of hours.

Special thanks

Special thanks to my kickass coworkers on this project: Marc Drummond, Jerad Bitner, Nate Lampton, and Helena McCabe. And to Lullabot’s awesome designers Maggie Griner, Marissa Epstein, and Jared Ponchot.

Also special thanks to our amazing counterparts at Pantheon: Matt Stodolnic, Sarah Fruy, Michelle Buggy, Nikita Tselovalnikov, and Steve Persch.
