Test Driven Development (TDD) facilitates clean and stable code. Drupal 8 has embraced this paradigm with a suite of testing tools that allow a developer to write unit tests, functional tests, and functional JavaScript tests for their custom code. Unfortunately, there is no JavaScript unit testing framework readily available in Drupal core, but don’t fret. This article will show you how to implement JavaScript unit testing.

Why unit test your JavaScript code?

Testing units of code is a great practice, and it also helps guarantee that a future developer doesn’t commit a regression to your logic. Adding unit coverage for JavaScript code is helpful for testing specific logical blocks quickly and efficiently, without the development-time and test-run-time overhead of functional tests.

An example of JavaScript code that would benefit from unit testing would be an input field validator. For demonstration purposes, let’s say you have a field label that permits certain characters, but you want to let the user know immediately if they entered something incorrectly, maybe with a warning message.

Here’s a crude example of a validator that checks an input field for changes. If the user enters a value that is not permitted, they are met with an error alert.

(($, Drupal) => {
  Drupal.behaviors.labelValidator = {
    attach(context) {
      const fieldName = "form.form-class input[name=label]";
      const $field = $(fieldName);
      $field.on("change", () => {
        const currentValue = $field.val();
        if (currentValue.length > 0 && !/^[a-zA-Z0-9-]+$/.test(currentValue)) {
          alert("The value you entered is incorrect!");
        }
      });
    }
  };
})(jQuery, Drupal);

JavaScript

We only allow letters, numbers, and hyphens in this sample validator. We now have a good idea of test data we can create for our test.
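As a quick illustration of that rule (the pattern below is copied from the validator above), here is how a few sample values fare against it:

```javascript
// The pattern from the validator: letters, numbers, and hyphens only.
const LABEL_PATTERN = /^[a-zA-Z0-9-]+$/;

const samples = ["ab-3-cd", "123ABVf123", "test test", "(123)"];

samples.forEach(value => {
  // Values containing spaces or punctuation fail the test.
  console.log(`${value}: ${LABEL_PATTERN.test(value) ? "valid" : "invalid"}`);
});
```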

Setting up JS Unit Testing

In the world of JavaScript unit testing, Jest has a well-defined feature set, a large community, and is the most popular choice among developers. To begin using Jest, add jest as a development dependency with your favorite package manager. Then create a Jest config file and add the directories you want tested. I recommend enabling lcov, a test coverage reporter that converts test results into local HTML pages.
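For reference, a minimal jest.config.js along these lines might look like the sketch below. The "js" directory is a placeholder; point roots at your own script folders.

```javascript
// jest.config.js -- a minimal configuration sketch. The "js" directory
// is a placeholder; adjust `roots` to match your project layout.
const config = {
  // Directories Jest scans for test files.
  roots: ["<rootDir>/js"],
  // Collect coverage and report it with lcov, which produces browsable
  // HTML pages (under coverage/lcov-report/ by default), plus a text
  // summary in the terminal.
  collectCoverage: true,
  coverageReporters: ["lcov", "text"],
};

module.exports = config;
```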

Writing a Test

We want to test our Drupal behavior, but we need jQuery and the global Drupal object. Have no fear! We can mock all of this. For simplicity’s sake, we can mock both jQuery and Drupal to test the code we want. The point here is to collect the validation logic and run it on our test cases.

There are a couple of different techniques we can use to meet our requirements. You could create a test DOM using a library like JSDOM and require the jQuery library, which would let you simulate HTML and DOM events. That approach is fine, but our goal is to test our custom validation logic, not to exercise third-party libraries or simulate the DOM. Just as we mock classes and methods in PHPUnit, we can mock them with Jest.

Our testing environment is Node, so we can leverage the global object to mock Drupal, jQuery, and even the alert function. (See Node’s global variable documentation for more information on this object.) We can do this in Jest’s beforeAll setup hook:

beforeAll(() => {
  global.alert = jest.fn();
  global.Drupal = {
    behaviors: {}
  };
  global.jQuery = jest.fn(selector => ({
    on(event, callback) {
      validator = callback;
    },
    val() {
      return fieldValue;
    }
  }));
  const behavior = require("./label-validator.es6.js");
  Drupal.behaviors.labelValidator.attach();
});

JavaScript

This makes our behavior available to the global Drupal object. We also have mocked jQuery, so we can collect the callback on which we want to run the tests. We run the attach method on the behavior to collect the callback. You may have noticed that we never declared the validator or fieldValue variables; we do this at the top of our test so we have them available in our tests.

// The validation logic we collect from the `change` event.
let validator = () => "";

// The value of the input we set in our tests.
let fieldValue = "";

JavaScript

To clean up after ourselves, we want to unset all of the global objects once our tests have run. In our case, the globals we are mocking do not exist in Node, so it is safe to set them to null. If we were mocking values that are actually defined, we would save a backup of each global before mocking it, and restore the backup once testing is done. There are also many other techniques for mocking globals and even core Node libraries; for examples, check out the documentation on the Jest website.
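As a sketch of that backup-and-restore technique, here is how you might temporarily mock a global that really exists, using console.warn purely as a stand-in example:

```javascript
// Save a backup of the real implementation before mocking.
const originalWarn = console.warn;
const captured = [];

// Replace the global with a recording stub for the duration of the test.
console.warn = message => {
  captured.push(message);
};

console.warn("deprecated!"); // recorded, not printed

// Restore the backup once testing is done.
console.warn = originalWarn;
```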

Here is our tear-down logic, using Jest’s afterAll function:

afterAll(() => {
  global.Drupal = null;
  global.jQuery = null;
  global.alert = null;
});

JavaScript

We need to create an array of values that we know should pass validation and fail validation. We will call them validLabels and invalidLabels, respectively:

/**
 * List of valid labels for the input.
 *
 * @type {string[]}
 */
const validLabels = [
  "123ABVf123",
  "123",
  "AB",
  "1",
  "",
  "abcdefghijklmnop12345678910",
  "ab-3-cd"
];

/**
 * List of invalid labels for the input.
 *
 * @type {string[]}
 */
const invalidLabels = [
  "!@#fff",
  "test test",
  "(123)",
  "ABCDEF123!",
  "^Y1",
  " ",
  "'12346'",
];

JavaScript

Finally, we are ready to start writing our tests. We can use Jest’s provided test function, or we can use the “describe it” pattern. I prefer the “describe it” pattern because it lets you provide detailed information on what you are testing while keeping everything in the same test scope.

First, we want to test our valid data, which should never trigger an alert. We will call the validator on each test value and expect the alert function never to be called. Before writing the test, though, we want to make sure to clear all our mocks between tests to prevent mock pollution. We can achieve this with beforeEach:

beforeEach(() => {
  jest.clearAllMocks();
});

JavaScript
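To see what clearing guards against, here is a minimal, hand-rolled recording mock (our own illustration, not a Jest API) showing how call counts accumulate unless they are reset between tests:

```javascript
// A tiny stand-in for jest.fn(): records every call so it can be counted.
function makeMock() {
  const mock = (...args) => {
    mock.calls.push(args);
  };
  mock.calls = [];
  // The equivalent of jest.clearAllMocks() for this single mock.
  mock.clear = () => {
    mock.calls = [];
  };
  return mock;
}

const alertMock = makeMock();
alertMock("first test");
console.log(alertMock.calls.length); // 1

// Without clearing, a second test would also see the first test's call.
alertMock.clear();
alertMock("second test");
console.log(alertMock.calls.length); // 1 again, thanks to the reset
```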

After writing our valid data test, we will write our invalid data test. This test should expect an alert for each invalid value sent. Putting it all together we have:

describe("Tests label validation logic", () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });
  it("valid label test", () => {
    validLabels.forEach(value => {
      fieldValue = value;
      validator();
    });
    expect(global.alert.mock.calls.length).toBe(0);
  });
  it("invalid label test", () => {
    invalidLabels.forEach(value => {
      fieldValue = value;
      validator();
    });
    expect(global.alert.mock.calls.length).toBe(invalidLabels.length);
  });
});

JavaScript

After writing our tests, we can check our coverage and see we have hit 100%!

Visual output of Jest using the lcov test reporter

Jest is extremely flexible and has a large ecosystem. There are many different ways we could have achieved the above results; hopefully this gives you some useful ideas on how to unit test your JavaScript code.


The entire sample Jest test:

/* global test expect beforeEach afterAll beforeAll describe jest it */
// The validation logic we collect from the `change` event.
let validator = () => "";
// The value of the input we set in our tests.
let fieldValue = "";
// the setup function where we set our globals.
beforeAll(() => {
  global.alert = jest.fn();
  global.Drupal = {
    behaviors: {}
  };
  global.jQuery = jest.fn(selector => ({
    on(event, callback) {
      validator = callback;
    },
    val() {
      return fieldValue;
    }
  }));
  const behavior = require("./label-validator.es6.js");
  Drupal.behaviors.labelValidator.attach();
});
// Global tear down function we use to remove our mocks.
afterAll(() => {
  global.Drupal = null;
  global.jQuery = null;
  global.alert = null;
});
/**
 * List of valid labels for the input.
 *
 * @type {string[]}
 */
const validLabels = [
  "123ABVf123",
  "123",
  "AB",
  "1",
  "",
  "abcdefghijklmnop12345678910",
  "ab-3-cd"
];
/**
 * List of invalid labels for the input.
 *
 * @type {string[]}
 */
const invalidLabels = [
  "!@#fff",
  "test test",
  "(123)",
  "ABCDEF123!",
  "^Y1",
  " ",
  "'12346'",
];
// The tests.
describe("Tests label validation logic", () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });
  it("valid label test", () => {
    validLabels.forEach(value => {
      fieldValue = value;
      validator();
    });
    expect(global.alert.mock.calls.length).toBe(0);
  });
  it("invalid label test", () => {
    invalidLabels.forEach(value => {
      fieldValue = value;
      validator();
    });
    expect(global.alert.mock.calls.length).toBe(invalidLabels.length);
  });
});

JavaScript

Continuous integration and delivery (CI/CD) is an important part of any modern software development cycle. It ensures code quality remains high, helps keep applications secure, and bridges the gap between everyday work and your visitors’ experience.

Nowadays it’s a given that a CI/CD pipeline will be part of a workflow, but choosing a provider and/or platform can be difficult. Oomph has made use of a number of CI/CD tools over the years: DeployBot, Jenkins, and Travis CI have all made appearances. Most of our projects in the last few years have used Travis, but more recently we’ve found it to be unreliable. Just as we began searching for a new provider, full CI/CD support was announced for GitHub Actions.

We immediately added Actions to the list of providers we were interested in, and after some comparison, we began migrating projects to it. Overall we’ve found it to be beneficial — the syntax is well-designed, workflows are extensible and modular, the platform is reliable and performant, and we’ve experienced no major trouble.

There are already plenty of good guides and articles on how to use GitHub Actions; we won’t repeat that here. Instead, we’ll look at a few gotchas and issues that we’ve encountered while using the platform, to give an accurate picture of things you may come across while implementing GitHub Actions.

Considerations

The team behind GitHub Actions knew what they were doing, and it’s clear they learned from and improved on previous CI/CD implementations. This is most obvious in the clear structure of the syntax, the straightforward pricing model, and the useful feature set. However, Actions’ in-progress state is apparent in some areas.

Artifact Storage and Billing

GitHub provides a generous amount of free build time for all repositories and organizations. Storage, though, is much more limited — only 2GB is included for GitHub Teams organizations. If you want to store build artifacts for all of your CI/CD jobs (a good idea for testing and repeatability) you may need to configure a “spending limit” — i.e. a maximum amount you’re willing to spend each month on storage. GitHub charges $0.25/GB for storage beyond the included 2GB.
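As a back-of-the-envelope sketch of that pricing model (the function is ours; the figures are the ones quoted above):

```javascript
// Monthly artifact-storage cost: 2 GB included, $0.25 per additional GB.
function monthlyStorageCost(usedGb, includedGb = 2, ratePerGb = 0.25) {
  const overageGb = Math.max(0, usedGb - includedGb);
  return overageGb * ratePerGb;
}

console.log(monthlyStorageCost(10)); // 8 GB of overage -> $2.00/month
console.log(monthlyStorageCost(1));  // under the included 2 GB -> $0
```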

Artifact storage is still rudimentary. Jobs can upload artifacts for download by other jobs later in the workflow, but the lifetime of those artifacts cannot be configured; they will expire after 90 days and the only way to delete them beforehand is manual. Manual deletions will also take some time to free up storage space.

We also experienced an issue where our reported usage for Actions storage was greatly (~500%) exaggerated, putting us far past our spending limit and breaking builds. When we reached out to GitHub’s support, though, they responded quickly to let us know this was a system-wide issue and they were working on it; the issue was resolved some days later and we were not charged for the extra storage. We were able to work around it in the meantime by extending our spending limit.

Restarting and Debugging Jobs

If a workflow fails or is canceled, it can be restarted from the workflow page. However, it’s not yet possible to restart certain jobs; the entire workflow has to be run again. GitHub is working on support for job-specific restarts.

Debugging job failures also is not yet officially supported, but various community projects make this possible. We’ve used Max Schmitt’s action-tmate to debug our builds, and that does the job. In fact, I prefer this approach to the Travis method; with this we can specify the point of the workflow where we want to start debugging, whereas Travis always starts debugging at the beginning of the build.

Log Output

GitHub Actions has an excellent layout for viewing the output of jobs. Each job in a workflow can be viewed and within that each step can be expanded on its own. The output from the current step can also be seen in near-real-time. Unfortunately, this last bit has been somewhat unreliable for us, lagging behind by a bit or failing to show the output for short steps. (To be fair to GitHub, I have never used a CI/CD platform where the live output worked flawlessly.) Viewing the logs after completion has never been a problem.

Configuring Variables/Outputs

GitHub Actions allows you to configure outputs for an action, so a later step can use some value or outcome from an earlier step. However, this only applies to packaged actions that are included with the uses key.

To do something similar with a free-form step is more convoluted. First, the step must use some odd syntax to set an output parameter, e.g.:

- name: Build
  id: build
  run: |
    ./scripts/build.sh
    echo "::set-output name=appsize::$(du -csh --block-size=1G  build/ | tail -n1 | cut -d$'\t' -f1)"

YAML

Then a later step can reference this parameter with the steps context:

- name: Provision server
  run: terraform apply -var "app_ebs_volume_size=${{ steps.build.outputs.appsize }}"

YAML

However, the scope of the above is limited to the job it takes place inside of. To reference values across jobs you must also set the values within the outputs map in the jobs context, e.g.:

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      appsize: ${{ steps.build.outputs.appsize }}
    steps:
    - name: Build
      id: build
      run: |
        ./scripts/build.sh
        echo "::set-output name=appsize::$(du -csh --block-size=1G  build/ | tail -n1 | cut -d$'\t' -f1)"
  infra:
    runs-on: ubuntu-latest
    needs: build
    steps:
    - run: terraform apply -var "app_ebs_volume_size=${{ needs.build.outputs.appsize }}"

YAML

Importantly, the outputs map from a previous job is only made available to jobs that require it with the needs directive.

While this setup is workable, the syntax feels a little weird, and the lack of documentation on it makes it difficult to be certain of what you’re doing. This is evolving, as well; the jobs.<job_id>.outputs context was only released in early April. Before that was added, persisting data across jobs required the use of build artifacts, which was clunky and precluded its use for sensitive values.

Self-hosted Runners

Sometimes security or access requirements prohibit a cloud-hosted CI/CD runner from reaching into an environment to deploy code or provision resources, or some sensitive data needs to be secured. For these scenarios, GitHub provides the ability to self-host Actions runners. Self-hosted runners can instead run the CI/CD process from an arbitrary VM or container within the secured network or environment. You can use them alongside cloud-hosted runners; as an example, in some situations we use cloud-hosted runners to test and validate builds before having the self-hosted runners deploy those builds to an environment.

This feature is currently in beta, but it has proven reliable and extremely useful in the places we’ve needed it.

Reliability and Performance

Overall GitHub Actions has been very reliable for us. There have been periods of trouble here and there but GitHub is open about the issues and generally addresses them in short order. We have not (yet) been seriously impeded by any outages or degradation, which is a significant improvement over our previous CI/CD situation.

Overall Experience

In general, the switch to GitHub Actions has been a positive experience. We have made significant improvements to our CI/CD workflows by switching to Actions; the platform has some great features and it has certainly been beneficial for our development lifecycle. While Actions may have a few quirks or small issues here and there we wouldn’t hesitate to recommend it as a CI/CD platform.

The first stable release for Drupal 9 shipped right on schedule — June 3, 2020. The Drupal 8.9.0 release was available the same day, and that means end-of-life for 8.7.x.

Since we all have migrated our sites from Drupal 7 to 8.9.x already (right??), it should be a fairly straightforward process to port everything from 8 to 9 when the time comes. This article covers what is involved with the 8 to 9 migration, sharing some of the gotchas we encountered in the hopes that you can have a smooth transition.


Are you familiar with what is coming in Drupal 9? How can you assess what is needed? How do you know what code needs to be updated? What other steps are involved?


This will help prepare you when it comes time to make the leap and to reassure you that this should be a straightforward and painless process.

Drupal 9

Drupal 9 is not being built in a different codebase than Drupal 8, so all new features will be backward-compatible. That is a significant departure if you recently went through a Drupal 6 to 7, or Drupal 7 to 8 migration. You won’t have to map content types and fields using migration modules or custom migration plugins and you won’t have to restructure your custom modules from scratch. This is really good news for companies and organizations who want to port sites before Drupal 8 end of life in November 2021 and who want to avoid or minimize the disruption that can come with a complicated migration.

In terms of what the code looks like, Drupal 9 will be the same as the last Drupal 8 minor release (which is set to be 8.9), with deprecated code removed and third-party dependencies updated. Upgrading to Drupal 9 should be like any other minor upgrade, so long as you have removed or replaced all deprecated code.

The Drupal.org documentation visualizes the differences between Drupal 8.9 and 9 with this image:

Drupal 9.0 API = Drupal 8.9 API minus deprecated parts plus third party dependencies updated

Upgrades

Symfony 3 -> 4.4

The biggest change for third party dependencies is the use of Symfony 4.4 for Drupal 9. Drupal 8 relies on Symfony 3, and to ensure security support, Symfony will have to be updated for Drupal 9.

Twig 1 -> 2

Drupal 9 will use Twig 2 instead of Twig 1 (Drupal 8). CKEditor 5 is planned to be used for a future version of Drupal 9; this issue references 9.1.x for the transition. Drupal 9 will still depend on jQuery, but most components of jQuery UI will be removed from core.

PHPUnit 6 -> 7

For testing, PHPUnit 7 will be used instead of version 6. The Simpletest API will be deprecated in Drupal 9 and PHPUnit is recommended in its place. If you have an existing test suite using PHPUnit, you might have to replace a lot of deprecated code, just as you will do for custom modules.

6 Month release schedule

Along the lines of how Drupal 8 releases worked, Drupal 9.1.0, 9.2.0, and so on, will each contain new backwards-compatible features for Drupal 9 every six months after the initial Drupal 9.0 release. The list of Strategic Initiatives gives a detailed overview of major undertakings that have been completed for Drupal 8 or are proposed and underway for Drupal 9. We might see automatic updates for 9.1, or drush included in core.

How can you assess what is needed to upgrade?

There are some comprehensive guides available on Drupal.org that highlight the steps needed for Drupal 9 readiness. A lot of functions, constants, and classes in Drupal core have been deprecated in Drupal 9.

Some deprecations call for easy swap-outs, like the example below:

Call to deprecated method url() of class Drupal\file\Entity\File. Deprecated in drupal:8.0.0 and is removed from drupal:9.0.0. Please use toUrl() instead.

You can see a patch that straightforwardly swaps out url() for toUrl():

-  $menuItem['thumbnail_url'] = file_url_transform_relative($imageFile->Url());
+  $menuItem['thumbnail_url'] = file_url_transform_relative($imageFile->toUrl()->toString());

Some deprecations are more involved and do require some code rewrites if your custom modules are relying on the outdated code.

Example:

Call to deprecated function pager_default_initialize() in drupal:8.8.0 and is removed from drupal:9.0.0. Use \Drupal\Core\Pager\PagerManagerInterface->defaultInitialize() instead.

There is an active issue in the Drupal core issue queue for this deprecation. Rewriting outdated code sometimes requires going through issue queue comments and doing some research to figure out how the core module has been reconfigured. Often it is easiest to look at the core code itself, or to grep for that function in other core modules to see how they have handled the deprecation.

This is how I ended up replacing the deprecated pager_default_initialize() function in the limit() method of our custom module:

use Drupal\Core\Database\Query\PagerSelectExtender;
+ use Drupal\Core\Pager\PagerManagerInterface;

class CountingPagerSelectExtender extends PagerSelectExtender {

  /**
   * {@inheritdoc}
   */
  public function limit($limit = 10) {
    parent::limit($limit);

+   /** @var \Drupal\Core\Pager\PagerManagerInterface $pager_manager */
+   $pager_manager = \Drupal::service('pager.manager');

    if (empty($this->limit)) {
      return $this;
    }

    $this
      ->ensureElement();
    $total_items = $this
      ->getCountQuery()
      ->execute()
      ->fetchField();
-   $current_page = pager_default_initialize($total_items, $this->limit, $this->element);
+   $pager = $pager_manager->createPager($total_items, $this->limit, $this->element);
+   $current_page = $pager->getCurrentPage();
    $this
      ->range($current_page * $this->limit, $this->limit);
    return $this;
  }

}

How do you know what code needs to be updated?

Fortunately, as is usually the case with Drupal, there is a module for that: Upgrade Status.

This contributed module allows you to scan all the code of installed modules. Sometimes a scan can take a while, so it might make sense to scan custom modules one by one if you want to step through your project. Upgrade Status generates reports on the deprecated code that must be replaced and can be exported in HTML format to share with others on your team.

If you are using a composer-based workflow, install Upgrade Status using the following command:

composer require 'drupal/upgrade_status:^2.0'

Shell

You might also need the Git Deploy contributed module as a dependency. Our projects did.

The Upgrade Status module relies on a lot of internals from the Drupal Check package. If you want a quicker terminal tool for finding code deprecations in a codebase, and you don’t need visual reporting or the additional checks offered by Upgrade Status, you can install Drupal Check with Composer and run it directly.

Tools such as Upgrade Status and Drupal Check are extremely useful in helping to pinpoint which code will no longer be supported once you upgrade your project to Drupal 9. The full list of deprecated code was finalized with the Drupal 8.8.0 release in December 2019. There could be some future additions but only if absolutely necessary. The Drupal Core Deprecation Policy page goes into a lot more detail behind the justification for and mechanics of phasing out methods, services, hooks, and more.

@deprecated in drupal:8.3.0 and is removed from drupal:9.0.0.  
Use \Drupal\Foo\Bar::baz() instead.  
@see http://drupal.org/node/the-change-notice-nid

The deprecation policy page explains how the PHPDoc tags indicate deprecated code.

For the most part, all deprecated APIs are documented at: api.drupal.org/api/drupal/deprecated

(There are a lot of pages.)

Since so many maintainers are currently in the process of preparing their projects for Drupal 9, there is a lot of good example code out there for the kinds of errors that you will most likely see in your reports.

Check out the issues on Drupal.org with Issue Tag “Drupal 9 compatibility”, and if you have a few thousand spare hours to wade through the queues, feel free to help contributed module maintainers work towards Drupal 9 readiness!

Upgrade Status note

My experience was that I went through several rounds of addressing the errors in the Upgrade Status report. For several custom modules, after I cleared out one error, re-scanning surfaced a bunch more. My first pass was like painting a wall with a roller. The second and third passes entailed further requirements and touch-ups to achieve a polished result.

What about previous Drupal releases?

Drupal 8 will continue to be supported until November 2021, since it is dependent on Symfony 3, which has an end-of-life at the same time.

Drupal 7 will also continue to be supported by the community until November 2021, with vendor extended support offered at least until 2024.

Now is a good time to get started on preparing for Drupal 9!

This post will assume you have already completed the base setup of enabling Layout Builder and added the ability to manage layouts to one of your content types. If you are not at that point yet, check out Drupal.org’s documentation on Layout Builder, or this article by Tyler Fahey, which goes over setup and some popular contrib module enhancements.

As we mentioned in part 1 of this series, you should expect a little DIY with Layout Builder. So far the best way we have found to theme Layout Builder is by creating a custom module to provide our own custom layouts and settings. By defining custom layouts in a custom module we get the ability to control each layout’s markup as well as the ability to add/remove classes based on the settings we define.

Writing the custom layout module

Setup the module

Start by creating your custom module and providing the required .info.yml file.

demo_layout.info.yml:

name: Demo Layout
description: Custom layout builder functionality for our theme.
type: module
core: 8.x
package: Demo

dependencies:
  - drupal:layout_builder

YAML

Remove default core layouts

Layout Builder comes with some standard layouts by default. There’s nothing wrong with these, but generally for our clients, we want them only using our layouts. This hook removes those core layouts, leaving only the layouts that we will later define:

demo_layout.module

/**
 * Implements hook_plugin_filter_TYPE__CONSUMER_alter().
 */
function demo_layout_plugin_filter_layout__layout_builder_alter(array &$definitions): void {
  // Remove all non-demo layouts from Layout Builder.
  foreach ($definitions as $id => $definition) {
    if (!preg_match('/^demo_layout__/', $id)) {
      unset($definitions[$id]);
    }
  }
}

PHP

Register custom layouts and their regions

The next step is to register the custom layouts and their respective regions. This process is well documented in the following drupal.org documentation: https://www.drupal.org/docs/8/api/layout-api/how-to-register-layouts

For this particular demo module we are going to define a one column and a two column layout. These columns will be able to be sized later with the settings we provide.

demo_layout.layouts.yml

demo_layout__one_column:
  label: 'One Column'
  path: layouts/one-column
  template: layout--one-column
  class: Drupal\demo_layout\Plugin\Layout\OneColumnLayout
  category: 'Columns: 1'
  default_region: first
  icon_map:
    - [first]
  regions:
    first:
      label: First

demo_layout__two_column:
  label: 'Two Column'
  path: layouts/two-column
  template: layout--two-column
  class: Drupal\demo_layout\Plugin\Layout\TwoColumnLayout
  category: 'Columns: 2'
  default_region: first
  icon_map:
    - [first, second]
  regions:
    first:
      label: First
    second:
      label: Second

YAML

Pay close attention to the path, template, and class declarations. These determine where the Twig templates and their respective layout classes must be placed.

Creating the base layout class

Now that we have registered our layouts, it’s time to write a base class that all of the custom layouts will inherit from. For this demo we will be providing a handful of settings, starting with a Column Width option.

However, there is a lot of PHP to make this happen. Thankfully for the most part it follows a general pattern. To make it easier to digest, we will break down each section for the Column Width setting only and then provide the entire module at the end which has all of the settings.

src/Plugin/Layout/LayoutBase.php

<?php
  declare(strict_types = 1);

  namespace Drupal\demo_layout\Plugin\Layout;

  use Drupal\demo_layout\DemoLayout;
  use Drupal\Core\Form\FormStateInterface;
  use Drupal\Core\Layout\LayoutDefault;

  /**
   * Provides a layout base for custom layouts.
   */
  abstract class LayoutBase extends LayoutDefault {

  }

PHP

Above is the layout class declaration. There isn’t a whole lot to cover here other than to mention use Drupal\demo_layout\DemoLayout;. This class isn’t necessary but it does provide a nice one-stop place to set all of your constant values. An example is shown below:

src/DemoLayout.php

<?php

declare(strict_types = 1);

namespace Drupal\demo_layout;

/**
 * Provides constants for the Demo Layout module.
 */
final class DemoLayout {

  public const ROW_WIDTH_100 = '100';

  public const ROW_WIDTH_75 = '75';

  public const ROW_WIDTH_50 = '50';

  public const ROW_WIDTH_25 = '25';

  public const ROW_WIDTH_25_75 = '25-75';

  public const ROW_WIDTH_50_50 = '50-50';

  public const ROW_WIDTH_75_25 = '75-25';

}

PHP

The bulk of the base class logic is setting up a custom settings form using the Form API. This form will allow us to formulate a string of classes that get placed on the section or to modify the markup depending on the form values. We are not going to dive into a whole lot of detail as all of this is general Form API work that is well documented in other resources.

Setup the form:

/**
   * {@inheritdoc}
   */
  public function buildConfigurationForm(array $form, FormStateInterface $form_state): array {

    $columnWidths = $this->getColumnWidths();

    if (!empty($columnWidths)) {
      $form['layout'] = [
        '#type' => 'details',
        '#title' => $this->t('Layout'),
        '#open' => TRUE,
        '#weight' => 30,
      ];

      $form['layout']['column_width'] = [
        '#type' => 'radios',
        '#title' => $this->t('Column Width'),
        '#options' => $columnWidths,
        '#default_value' => $this->configuration['column_width'],
        '#required' => TRUE,
      ];
    }

    $form['#attached']['library'][] = 'demo_layout/layout_builder';

    return $form;
  }

 /**
   * {@inheritdoc}
   */
  public function validateConfigurationForm(array &$form, FormStateInterface $form_state) {
  }

  /**
   * {@inheritdoc}
   */
  public function submitConfigurationForm(array &$form, FormStateInterface $form_state) {
    $values = $form_state->getValues();
    $this->configuration['column_width'] = $values['layout']['column_width'];
  }

 /**
   * Get the column widths.
   *
   * @return array
   *   The column widths.
   */
  abstract protected function getColumnWidths(): array;

PHP

Finally, we add the build function and pass the column width class:

/**
   * {@inheritdoc}
   */
  public function build(array $regions): array {
    $build = parent::build($regions);

    $columnWidth = $this->configuration['column_width'];
    if ($columnWidth) {
      $build['#attributes']['class'][] = 'demo-layout__row-width--' . $columnWidth;
    }

    return $build;
  }

PHP

Write the column classes

Now that the base class is written, we can write column-specific classes that extend it. These classes are very minimal since most of the logic is contained in the base class. All that is necessary is to provide the width options for each individual class.

src/Plugin/Layout/OneColumnLayout.php

<?php

declare(strict_types = 1);

namespace Drupal\demo_layout\Plugin\Layout;

use Drupal\demo_layout\DemoLayout;

/**
 * Provides a plugin class for one column layouts.
 */
final class OneColumnLayout extends LayoutBase {

  /**
   * {@inheritdoc}
   */
  protected function getColumnWidths(): array {
    return [
      DemoLayout::ROW_WIDTH_25 => $this->t('25%'),
      DemoLayout::ROW_WIDTH_50 => $this->t('50%'),
      DemoLayout::ROW_WIDTH_75 => $this->t('75%'),
      DemoLayout::ROW_WIDTH_100 => $this->t('100%'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  protected function getDefaultColumnWidth(): string {
    return DemoLayout::ROW_WIDTH_100;
  }
}

PHP

src/Plugin/Layout/TwoColumnLayout.php

<?php

declare(strict_types = 1);

namespace Drupal\demo_layout\Plugin\Layout;

use Drupal\demo_layout\DemoLayout;

/**
 * Provides a plugin class for two column layouts.
 */
final class TwoColumnLayout extends LayoutBase {

  /**
   * {@inheritdoc}
   */
  protected function getColumnWidths(): array {
    return [
      DemoLayout::ROW_WIDTH_25_75 => $this->t('25% / 75%'),
      DemoLayout::ROW_WIDTH_50_50 => $this->t('50% / 50%'),
      DemoLayout::ROW_WIDTH_75_25 => $this->t('75% / 25%'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  protected function getDefaultColumnWidth(): string {
    return DemoLayout::ROW_WIDTH_50_50;
  }
}

PHP

We can now check out the admin interface and see our custom form in action.

One column options:

Two column options:

Add twig templates

The last step is to provide the twig templates that were declared earlier in the demo_layout.layouts.yml file. The variables to be aware of are content, attributes, region_attributes, and settings, each documented in the template docblocks below.

src/layouts/one-column/layout--one-column.html.twig

{#
/**
 * @file
 * Default theme implementation to display a one-column layout.
 *
 * Available variables:
 * - content: The content for this layout.
 * - attributes: HTML attributes for the layout <div>.
 * - settings: The custom form settings for the layout.
 *
 * @ingroup themeable
 */
#}

{%
  set row_classes = [
    'row',
    'demo-layout__row',
    'demo-layout__row--one-column'
  ]
%}

{% if content %}
  <div{{ attributes.addClass( row_classes|join(' ') ) }}>
    <div {{ region_attributes.first.addClass('column', 'column--first') }}>
      {{ content.first }}
    </div>
  </div>
{% endif %}

Twig

src/layouts/two-column/layout--two-column.html.twig

{#
/**
 * @file
 * Default theme implementation to display a two-column layout.
 *
 * Available variables:
 * - content: The content for this layout.
 * - attributes: HTML attributes for the layout <div>.
 * - settings: The custom form settings for the layout.
 *
 * @ingroup themeable
 */
#}

{# Get the column widths #}
{% set column_widths = settings.column_width|split('-') %}

{%
  set row_classes = [
    'row',
    'demo-layout__row',
    'demo-layout__row--two-column'
  ]
%}

{% if content %}
  <div{{ attributes.addClass( row_classes|join(' ') ) }}>

    {% if content.first %}
      <div {{ region_attributes.first.addClass('column', 'column--' ~ column_widths.0, 'column--first') }}>
        {{ content.first }}
      </div>
    {% endif %}

    {% if content.second %}
      <div {{ region_attributes.second.addClass('column', 'column--' ~ column_widths.1, 'column--second') }}>
        {{ content.second }}
      </div>
    {% endif %}

  </div>
{% endif %}

Twig

Notice settings.column_width was passed with a string: 75-25. We need to split it and place each value on our column which results in the following output.

<div class="demo-layout__row-width--75-25 row demo-layout__row demo-layout__row--two-column">
  <div class="column column--75 column--first"></div>
  <div class="column column--25 column--second"></div>
</div>

HTML

Since these are custom classes, and we haven’t written any CSS, these columns do not have any styling. Depending on your preference, you can implement your own custom column styles or wire up a grid framework such as Bootstrap in order to get the columns to properly size themselves.

Wrapping it up

You should be at a point where you have an idea of how to create custom settings in order to theme layout builder sections. You can take this method and extend it however you need to for your particular project. There’s no definitive best way to do anything in the world of web development, and Layout Builder is no exception to that rule. It’s a great addition to Drupal’s core functionality, but for larger sites, it likely won’t be and shouldn’t be the only way you handle layout. Much like Drupal itself though, as more and more people use it, Layout Builder will only become stronger, more robust, more fully-featured, and better documented. If it doesn’t seem like a good fit for you right now, it may become a better fit as it grows. If it does seem like a good fit, be ready to get your hands dirty!

The full demo layouts module with all of the custom settings is available here: https://github.com/oomphinc/layout-builder-demo/tree/master/moduleexamples/demolayout


THE BRIEF

The RISD Museum publishes a document for every exhibition in the museum. Most of them are scholarly essays about the historical context around a body of work. Some of them are interviews with the artist or a peek into the process behind the art. Until very recently, they have not had a web component.

The time, energy, and investment in creating a print publication was becoming unsustainable. The limitations of the printed page in a media-driven culture are a large drawback as well. For the last printed exhibition publication, the Museum created a one-off web experience — but that was not scalable.

The Museum was ready for a modern publishing platform that could be a visually-driven experience, not one that would require coding knowledge. They needed an authoring tool that emphasized time-based media — audio and video — to immediately set it apart from printed publications of their past. They needed a visual framework that could scale and produce a publication with 4 objects or one with 400.

A sample of printed publications that were used for inspiration for variation and approach.

THE APPROACH

A Flexible Design System

Ziggurat was born of two parents — Oomph provided the design system architecture and the programmatic visual options while RISD provided creative inspiration. Each team influenced the other to make a very flexible system that would allow any story to work within its boundaries. Multimedia was part of the core experience — sound and video are integral to expressing some of these stories.

The process of talking, architecting, designing, then building, then using the tool, then tweaking the tool pushed and pulled both teams into interesting places. As architects, we started to get very excited by what we saw their team doing with the tool. The original design ideas that provided the inspiration got so much better once they became animated and interactive.

Design/content options include:

  • Multiple responsive column patterns inside row containers
  • Additionally, text fields have the ability to display as multiple columns
  • “Hero” rows where an image is the primary design driver, and text/headline is secondary. Video heroes are possible
  • Up to 10 colors to be used as row backgrounds or text colors
  • Choose typefaces from Google Fonts for injection publication-wide or override on a page-by-page basis
  • Rich text options for heading, pull-quotes, and text colors
  • Video, audio, image, and gallery support inside any size container
  • Video and audio player controls in a light or dark theme
  • Autoplaying videos (where browsers allow) while muted
  • Images optionally have the ability to Zoom in place (hover or touch the image to see the image scale by 200%) or open more

There are 8 chapters total in RAID the Icebox Now and four supporting pages. For those who know library systems and scholarly publications, note the citations and credits for each chapter. A few chapters liberally use the footnote system. Each page in this publication is rich with content, both written and visual.


RAPID RESPONSE

An Unexpected Solution to a New Problem

The story does not end with the first successful online museum publication. In March of 2020, COVID-19 gripped the nation and colleges cut their semesters short or moved classes online. Students who would normally have an in-person end-of-year exhibition in the museum no longer had the opportunity.

Spurred on by the Museum, the university invested in upgrades to the Publication platform that could support 300+ new authors in the system (students) and specialized permissions to limit access only to their own content. A few new features were fast-tracked, and an innovative ability for some authors to add custom JavaScript to Department landing pages opened the platform up for experimentation. The result was two online exhibitions that went live 6 weeks after the concepts were approved — one for 270+ graduate students and one for 450+ undergraduates.

With the release of Drupal 8.7 in May of 2019 came the rollout of the much-anticipated Layout Builder core module. According to Drupal.org, Layout Builder allows content editors and site builders to easily and quickly create visual layouts for displaying content by providing the ability to drag and drop site-wide blocks and content fields into regions within a given layout. Drupalists were excited about it, and so were we.

For a long time, we developed and came to heavily rely on our own extension of the Paragraphs module to give content managers the power to build and modify flexible layouts. When we heard that there would now be an equivalent option built right into core, we thought, “could this be the end of Paragraphs?” Well, the only way to find out is to dig in and start using it in some real-world scenarios.

Layout Builder is still new enough that many how-to guides only focus on installing it and enabling it on a content type or two and overviews of options that are available right out of the box. That’s probably fine if your use-case is something akin to Umami, Drupal.org’s example recipe site. But if you want to use it in a significant way on a larger site, it probably won’t be long before you want to customize it to fit your situation. Once you get to that point, documentation becomes scant. If you’ve already got some experience rolling your own extension of a module or at least writing preprocesses, you’re more likely to get better mileage out of your experience with Layout Builder.

First, let’s take a look at some of the pros and cons of using Layout Builder. If there are any deal-breakers for you, it’s better to identify them sooner than later.

Layout Builder Pros:

1. All core code

Yes, the fact that Layout Builder is a core initiative that will continue to get attention and updates is fantastic, no matter how stable similar contributed-module initiatives might be. And because it is in core, you get solid integration with language translation.

2. Block-based, but supports fields as well

Blocks are a familiar core Drupal content paradigm. They can be used as one-off content containers or as repeatable containers for content that appears in multiple places but should be edited from a single location. Fields can also be placed as content into a layout, which makes building custom templates that can continue to leverage fields very flexible.

3. Better WYSIWYG authoring experience

End users will be much happier with the (not quite) WYSIWYG editing experience. While it is not one-to-one with the front end, it is much better than what we have seen with Paragraphs, which is a very “Drupal” admin experience. Previously, in many ways, you needed Preview just to know what kind of design your content was creating.

4. Supports complex design systems with many visual options

Clients can get quite a bit of design control and can see the effects of their decisions very quickly. It makes building special landing pages very powerful.

5. Plays nice with Clone module

While custom pages described in Pro #4 are possible, they are time-consuming to create. The Clone module is a great way to make copies of complex layouts to modify instead of starting from scratch each time.

6. “Locked” Layouts are the default experience

While complex custom pages are possible, they are not the default. Layout Builder might have been intended to replace custom template development, because by default when it is applied to a content type, the option to override the template on a node-by-node basis is not turned on. A site builder needs to decide to turn this feature on. When you do, proceed with caution.

Layout Builder Cons

1. Lack of Documentation

Since LB is so relatively new, there is not an abundance of documentation in the wild. People are using it, but it is sort of still the Wild Wild West. There are no established best practices on how to use it yet. Use it with Paragraphs? Maybe. Use it for the entire page, including header and footer? You can. Nest it in menus? We’ve done it. Should we have done it? Time will tell.

2. More time is required to do it well

Because of Con #1, it’s going to take more time. More time to configure options, more time to break designs down into repeatable components, and more time to test all the variations and options that design systems present.

3. Admin interface can conflict with front-end styles

While Pro #3 is a great reason to use LB, it should be known that some extra time will be needed to style the admin mode of your content. There is some bleeding together of admin and non-admin front-end styles that could cause your theme build to take longer.

An example: We created a site where Layout Builder custom options could control the animation of blocks. Great for the front-end, but very annoying for the backend author when blocks would animate while they were trying to edit.

4. Admin editing experience still in its infancy

Again, while Pro #3 is significant, the current admin editing experience is not the best. We know it is being worked on, and there are modules that help, but it is something that could tip the scales depending on the project and the admin audience.

5. Doesn’t play nice with other template methods

Which is to say that you can’t easily have a page that is partially LB and partially a templated View or something else. You can create a View that can be placed into a Block that is placed via Layout Builder, but you can’t demarcate LB to one section of a page and have a View or the body field in the other.

6. Content blocks do not export with configuration

As blocks go, the configuration of a block is exportable, but the content isn’t. Same with the blocks that Layout Builder uses, which can make keeping staging/production environments in sync frustrating. Just like with Paragraphs or custom blocks, you’ll have to find a reliable way of dealing with this.

7. Overriding a default layout has consequences

We have seen this ourselves first-hand. The design team and client want a content type to be completely flexible with Layout Builder, so the ability for an author to override the default template is turned on. That means the node is now decoupled from the default template. Any changes to the default will not propagate to those nodes that have been decoupled and modified. For some projects, it’s not a big deal, but for others, it might become a nightmare.

8. The possibility of multiple design options has a dark side

Too many options can be a bad thing. It can be more complex for authors than necessary. It can add a lot more time to theming and testing the theme when options create exponential possibilities. And it can be very hard to maintain.


With great power comes great responsibility. Layout Builder certainly gives the Drupal community great power. Are we ready for the caveats that come with that?

Ready to tackle customizing Layout Builder? Watch for Part Two, where we’ll dive into defining our own layouts and more.

As everyone is aware, the world is in the grips of a crushing global health crisis. Our day-to-day lives have changed dramatically. Our children are learning from computers at home, some of us are without work, and others are working from home for the first time. Events and social gatherings have been canceled or are going digital. Without a doubt, the global business climate has changed. This is no different for non-profit organizations like the Drupal Association (DA).

At the end of March, Drupal Association Executive Director Heather Rocker posted on the DA blog — Drupal Association Statement re: Financial Effects of COVID-19. This post outlines the DA’s financial impact if the Association could not host DrupalCon this year. With the rapid changes and stay-at-home orders, the Association is potentially on the hook for event fees whether or not attendees showed up — this is all dependent on force majeure being activated. She calls for support from the community to help us close this gap so we may continue to support Drupal, thrive and serve you. A second post from Drupal Project Founder Dries Buytaert titled Sustaining The Drupal Association in Uncertain Times highlighted the need for the community to step up and help.

Dries and his wife Vanessa pledged to match individual contributions up to $100,000. And last week Oomph and nearly thirty other businesses in the Drupal community stepped up with a pledge to triple match individual donations. Listen to Chris Murray, CEO of Oomph and Matt Westgate of Lullabot discuss this fundraising effort on Talking Drupal #245.

At Oomph we feel it’s our responsibility to answer this call from the Drupal Association and support a community that has supported our work through the years. This support will be in addition to our previously committed community support efforts. We will still be the event sponsor of the New England Drupal Camp, sending Oomphers to attend and speak at conferences and camps, committing patches and fixes to issues on Drupal.org, and continuing to help in any way we can.

If you are feeling inspired by this news, please join us in supporting the Drupal Association. Visit the #DrupalCares page on drupal.org for more information on ways to give.

Our hope is that you (as we do) will feel it is your duty to support the Drupal Association. We all benefit from this great open source community and we pay nothing to be part of it. Dries reminds us in his post how “Drupal has weathered many storms.” Drupal and the Drupal Association will come out of this stronger, and that will be in large part thanks to the community of individuals and organizations helping to support this effort.

Join with Oomph in its support of this community! After all, we come for the code and stay for the community!


THE BRIEF

Transform the Experience

The core Earthwatch experience happens outdoors in the form of an expedition — usually for about a week and far away from technology in locations like the Amazon Basin, Uganda, or the Great Barrier Reef. But before this in-person experience happens, an expedition volunteer encounters a dizzying array of digital touchpoints that can sow confusion and lead to distrust. Earthwatch needed “Experience Transformation.”

SURVEY THE LANDSCAPE

Starting with a deep strategy and research engagement, Oomph left no stone unturned in cataloging users and their journeys through a decade’s worth of websites and custom applications. We were able to conduct multiple interview sessions with engaged advocates of the organization. Through these interviews, the Earthwatch staff learned how to conduct more interviews themselves and listen to their constituents to internalize what they find wonderful about the experience as well as what they find daunting.

CREATE THE MAP

With a high-level service blueprint in place, Oomph then set out to transform the digital experiences most essential to the organization: the discovery and booking journey for individuals and the discovery, research, and inquiry journey for corporate sustainability programs.

The solution took shape as an overhaul and consolidation of Earthwatch’s public-facing websites.


THE RESULTS

The Journey Before the Journey

A fresh design approach that introduces new colors, beautiful illustrations, and captivating photography.

Expedition discovery, research, and booking was transformed into a modern e-commerce shopping experience.

Corporate social responsibility content architecture was overhauled with trust-building case studies and testimonials to drive an increase in inquiries.


IN THEIR WORDS

The Oomph team far surpassed our (already high!) expectations. As a nonprofit, we had a tight budget and knew it would be a massive undertaking to overhaul our 7-year-old site while simultaneously launching an organizational rebrand. Oomph helped to guide us through the entire process, providing the right level of objective, data-driven expertise to ensure we were implementing user experience and design best practices. They listened closely to our needs and helped to make the website highly visual and engaging while streamlining the user journey. Thanks to their meticulous project management and time tracking, we successfully launched the site on time and exactly on budget.

ALIX MORRIS MHS, MS, Director of Communications, Earthwatch

Drupal 8 is amazing and the cache improvements it provides are top-notch. However, what happens when you need to display a cached page that shows the same entity with personalized content to different users?

Why would you need to do this? Perhaps you need to show user statistics on a dashboard. Maybe a control panel needs to show information from a 3rd party system. Maybe you need to keep track of a user’s progress as they work through an online learning course. Anytime you want to reuse the UI/layout of an entity, but also want to display dynamic/personalized information alongside that entity, this could work for you.

The Challenge

In a recent project, we needed to create a view of taxonomy terms showing courses to which a user was enrolled. The taxonomy terms needed to show the user’s current progress in each course and this status would be different for each user. Taking that a step further, each course had lesson nodes that referenced it and each of those lesson nodes needed to display a different status based on the user. To add a little more complexity, the status on the lesson nodes would show different information depending on the user’s permissions. 😱

The challenge was how to display this highly personalized information to different users while still maintaining Drupal’s internal and dynamic page caching.

The Solution

Computed fields

First, we relied on computed fields that would allow us to dynamically get information for individual entities and output those fields in the render array of the entity.

To create a computed field for the course taxonomy term you first need to:

1. Generate a computed field item list in /modules/custom/mymodule/src/Plugin/Field/TermStatusItemList.php:

<?php
  namespace Drupal\mymodule\Plugin\Field;

  use Drupal\Core\Field\FieldItemList;
  use Drupal\Core\Field\FieldItemListInterface;
  use Drupal\Core\TypedData\ComputedItemListTrait;

  /**
   * TermStatusItemList class to generate a computed field.
   */
  class TermStatusItemList extends FieldItemList implements FieldItemListInterface {
    use ComputedItemListTrait;

    /**
     * {@inheritdoc}
     */
    protected function computeValue() {
      $entity = $this->getEntity();

      // This is a placeholder for the computed field.
      $this->list[0] = $this->createItem(0, $entity->id());
    }
  }

PHP

All Drupal fields can potentially have unlimited cardinality, and therefore need a class extending FieldItemList to provide the list of values stored in the field. The above creates the item list for our computed field and uses the ComputedItemListTrait to do the heavy lifting of the requirements for this field.

2. Next, generate a custom field formatter for the computed field:

<?php
  namespace Drupal\mymodule\Plugin\Field\FieldFormatter;

  use Drupal\Core\Field\FormatterBase;
  use Drupal\Core\Field\FieldItemListInterface;

  /**
   * Plugin implementation of the mymodule_term_status formatter.
   *
   * @FieldFormatter(
   *   id = "mymodule_term_status",
   *   module = "mymodule",
   *   label = @Translation("Display a personalized field"),
   *   field_types = {
   *     "integer"
   *   }
   * )
   */
  class TermStatusFormatter extends FormatterBase {

    /**
     * {@inheritdoc}
     */
    public function viewElements(FieldItemListInterface $items, $langcode) {
      $elements = [];

      foreach ($items as $delta => $item) {
        $entity_id = $item->getValue();
        if (is_array($entity_id)) {
          $entity_id = array_shift($entity_id);
        }
        // Show the request time for now.
        $elements[] = [
          '#markup' => \Drupal::time()->getRequestTime(),
        ];
      }

      return $elements;
    }
  }

PHP

The formatter handles the render array that is needed to display the field. Here we are looping through the items that were provided in the computeValue method from earlier and generating a render array for each value. We are using the getRequestTime() method to provide a dynamic value for this example.

3. Let Drupal know about our new computed field with hook_entity_base_field_info:

<?php

use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Implements hook_entity_base_field_info().
 */
function mymodule_entity_base_field_info(EntityTypeInterface $entity_type) {
  $fields = [];
  if ($entity_type->id() === 'taxonomy_term') {
    $fields['mymodule_term_status'] = BaseFieldDefinition::create('integer')
      ->setName('mymodule_term_status')
      ->setLabel(t('My Module Computed Status Field'))
      ->setComputed(TRUE)
      ->setClass('\Drupal\mymodule\Plugin\Field\TermStatusItemList')
      ->setDisplayConfigurable('view', TRUE);
  }
  return $fields;
}

PHP

Now that we have the field and formatter defined, we need to attach it to the appropriate entity. The above uses hook_entity_base_field_info to add our field to all entities of type taxonomy_term. Here we give the field a machine name and a label for display. We also set which class to use and whether a user can manage display through the UI.

4. Next you need to define a display mode and add the new computed field to the entity’s display:

Since this example uses an integer BaseFieldDefinition, the default formatter is the integer formatter. Change it to use the new formatter type:

In this example screen, the Drupal admin Manage Display screen is shown with the “My Module Computed Status Field” format set to “Display a personalized field”
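
If you prefer to configure the display programmatically instead of through the UI (for example, in an update hook), the entity view display API can do the same thing. A hedged sketch, assuming a vocabulary with the machine name tags and the default view mode:

use Drupal\Core\Entity\Entity\EntityViewDisplay;

// Render our computed field with the custom formatter on the
// (hypothetical) 'tags' vocabulary's default display.
$display = EntityViewDisplay::load('taxonomy_term.tags.default');
$display->setComponent('mymodule_term_status', [
  'type' => 'mymodule_term_status',
])->save();

PHP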

Now, when you view this term you will see the request time that the entity was first displayed.

In this rendered output example screen, “My Module Computed Status Field” shows a value of “1582727613”, which is a Unix timestamp value

Great… but, since the dynamic page cache is enabled, every user who views this page will see the same request time for the entity, which is not what we want. We can get different results for different users by adding the user cache context to the #markup array like this:

$elements[] = [
  '#markup' => \Drupal::time()->getRequestTime(),
  '#cache' => [
    'contexts' => [
      'user',
    ],
  ],
];

PHP

This gets us closer, but we will still see the original value every time a user refreshes this page. How do we get this field to change with every page load or view of this entity?

Lazy Builder

Lazy builder allows Drupal to cache an entity or a page by replacing the highly dynamic portions with a placeholder that will get replaced very late in the render process.

Modifying the code from above, let’s convert the request time to use the lazy builder. To do this, first we update the field formatter to return a lazy builder render array instead of the #markup that we used before.

1. Convert the #markup from earlier to use the lazy_builder render type:

/**
 * {@inheritdoc}
 */
public function viewElements(FieldItemListInterface $items, $langcode) {
  $elements = [];

  foreach ($items as $delta => $item) {
    $entity_id = $item->getValue();
    if (is_array($entity_id)) {
      $entity_id = array_shift($entity_id);
    }
    $elements[] = [
      '#lazy_builder' => [
        'myservice:getTermStatusLink',
        [$entity_id],
      ],
      '#create_placeholder' => TRUE,
    ];
  }
  return $elements;
}

PHP

Notice that the #lazy_builder type accepts two parameters in the array. The first is a method in a service and the second is an array of parameters to pass to the method. In the above, we are calling the getTermStatusLink method in the (yet to be created) myservice service.

2. Now, let’s create our service and getTermStatusLink method. Create the file src/MyService.php:

<?php
  namespace Drupal\mymodule;

  class MyService {

    /**
     * @param int $term_id
     *
     * @return array
     */
    public function getTermStatusLink(int $term_id): array {
      return ['#markup' => \Drupal::time()->getRequestTime()];
    }
  }

PHP
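
One caveat worth noting: as of Drupal 8.8, lazy builder callbacks must be flagged as trusted, and Drupal 9 rejects untrusted callbacks outright. If you are on 8.8 or later, the service should implement TrustedCallbackInterface; a sketch of the addition to the same class:

use Drupal\Core\Security\TrustedCallbackInterface;

class MyService implements TrustedCallbackInterface {

  /**
   * {@inheritdoc}
   */
  public static function trustedCallbacks() {
    return ['getTermStatusLink'];
  }

  // ... getTermStatusLink() as shown above ...

}

PHP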

3. You’ll also need to define your service in mymodule.services.yml

services:
  myservice:
    class: Drupal\mymodule\MyService

YAML

After clearing your cache, you should see a new timestamp every time you refresh your page, no matter the user. Success!?… Not quite 😞

Cache Contexts

This is a simple example that shows how to set up a computed field and a simple lazy builder callback. But what about more complex return values?

In our original use case we needed to show four different statuses for these entities that could change depending on the user that was viewing the entity. An administrator would see different information than an authenticated user. In this instance, we were using a view that had a user id as the contextual filter like this /user/%user/progress. In order to accommodate this, we had to ensure we added the correct cache contexts to the computed field lazy_builder array.

$elements[] = [
  '#lazy_builder' => [
    'myservice:getTermStatusLink',
    [
      $entity_id,
      $user_from_route,
    ],
  ],
  '#create_placeholder' => TRUE,
  '#cache' => [
    'contexts' => [
      'user',
      'url',
    ],
  ],
];

PHP

Now we update the lazy builder callback function to show different information based on the user’s permissions.

/**
 * @param int $term_id
 *
 * @return array
 */
public function getTermStatusLink(int $term_id): array {
  $markup = [
    'admin' => [
      '#type' => 'html_tag',
      '#tag' => 'h2',
      '#access' => TRUE,
      '#value' => $this->t('Administrator only information %request_time', ['%request_time' => \Drupal::time()->getRequestTime()]),
    ],
    'user' => [
      '#type' => 'html_tag',
      '#tag' => 'h2',
      '#access' => TRUE,
      '#value' => $this->t('User only information %request_time', ['%request_time' => \Drupal::time()->getRequestTime()]),
    ],
  ];

  if (\Drupal::currentUser()->hasPermission('administer users')) {
    $markup['user']['#access'] = FALSE;
  }
  else {
    $markup['admin']['#access'] = FALSE;
  }

  return $markup;
}

PHP

The callback function will now check the current user’s permissions and show the appropriate field based on those permissions.

In this first example rendered screen, the “My Module Computed Status Field” has a value of “Administrator only information 1582734877”
while in the second rendered screen, the value for a user is “User only information 1582734892”

There you have it, personalized content for entities while still allowing Drupal’s cache system to be enabled. 🎉

Final Notes

Depending on the content in the render array that is returned from the lazy builder callback, you’ll want to ensure the appropriate cache tags are applied to that array as well. In our case, we were using custom entities in the callback so we had to ensure the custom entities cache tags were included in the callback’s render array. Without those tags, we were seeing inconsistent results, especially once the site was on a server using Varnish.
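
As a hedged illustration, if the callback loads an entity, that entity’s cache tags can be merged into the returned render array so that saving the entity invalidates the cached placeholder (the term load and the rendered value here are assumptions for the example):

use Drupal\taxonomy\Entity\Term;

$term = Term::load($term_id);
return [
  '#markup' => $term->label(),
  '#cache' => [
    // Invalidate this placeholder whenever the term is saved.
    'tags' => $term->getCacheTags(),
    'contexts' => ['user'],
  ],
];

PHP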

Download the source for the above code: github.com/pfrilling/personalized-content-demo

Thanks for reading and we hope this helped someone figure out how to work towards a personalized but performant digital product.

Note: This blog post is NOT about Voting in Iowa and an App that failed to do its job. But if anyone wants an opinion on how that App should have been built and how much it should have cost, drop us a line 🙂

The great thing about the open source community around Drupal is the range of complex features that already exist. Sometimes, though, the documentation on how to use those pre-built pieces of functionality is lacking. That’s a situation we found ourselves in recently with Drupal’s Voting API.

We found a great tutorial for implementing the Voting API without any customization. Drupalize.Me also has a helpful instructional video and text. But if you want to extend the module programmatically, there is very little material online to help guide the way.

The Problem to Solve

Oomph needed to launch a national crowd-sourcing contest for a long-time client. We had to provide visitors with a chance to submit content, and we needed to enable a panel of judges to vote on that content and determine a winner. But behind the scenes, we had to add functionality to enable voting according to specific criteria that our client wanted to enforce. For moderation, there would be an admin page that displays all entries, the scores from all judges, and the ability to search and sort.

Oh, and we had to turn it around in three months — maybe we should have led with that requirement. 😊

Architecting the Solution

Drupal 8 was a natural fit for rendering forms to the user and collecting the input for judging. A few contributed modules got us closer to the functionality we wanted — webform, webform_content_creator, fivestar, votingapi, and votingapi_widgets.

The robust framework of the Voting API has been around since Drupal 4, so it was a great foundation for this project. It made no sense to build our own voting system when one with a stable history existed. Customizing the way that the API worked put up some roadblocks for our engineering team, however, and the lack of documentation around the API did not help. We hope that by sharing what we learned along the way, we can support the community that maintains the Voting API.

The Lifecycle of the Contest

Submission

The submission form we created is, at its base, a multi-step Webform. It is divided into three pages of questions and includes input for text, images, and video. The form wizard advances users from page to page, with the final submittal kicking off additional processes. A visitor can save an incomplete submission and return to it later. The Webform module contains all of these features, and it saved our team a lot of work.

Example of a multi-step webform

After pressing Submit, there is some custom pre-processing that happens. In this case, we needed to execute a database lookup for one of the submitted fields. Below is an example of code that can be added to a custom module to look up a value and save it to the recently submitted webform. This is a basic code template and won’t include sanitizing user input or all the checks you might need to do on a production-level implementation.

The code snippet and other examples from this blog post are available on Github as a Gist: gist.github.com/bookworm2000/cd9806579da354d2dd116a44bb22b04c.

use Drupal\webform\Entity\WebformSubmission;

/**
 * Implements hook_ENTITY_TYPE_presave().
 */
function mymodule_webform_submission_presave(WebformSubmission $submission) {
  // Retrieve the user input from the webform submission.
  $submitted_data = $submission->getData();
  $zipcode = $submitted_data['zipcode'];

  // Retrieve the state name from the database custom table.
  // This is calling a service included in the mymodule.services.yml file.
  $state_lookup = \Drupal::service('mymodule.state_lookup');
  $state_name = $state_lookup->findState($zipcode);

  if ($state_name) {
    // Update the webform submission state field with the state name.
    $submitted_data['state'] = $state_name;
    $submission->setData($submitted_data);
  }
}

PHP
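For reference, the service called above would be declared in mymodule.services.yml along these lines — the class name matches the StateLookup.php file shown later in the module’s file structure, while the @database argument is an assumption based on a typical database-backed lookup:

```yaml
services:
  mymodule.state_lookup:
    class: Drupal\mymodule\StateLookup
    arguments: ['@database']
```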

Content Entry

The Webform Content Creator module allows you to map fields from a submission to a specified content type’s fields. In our case, we could direct all the values submitted by the contestant to be mapped to our custom Submission content type — every time a user submitted an entry, a new node of content type Submission was created, with all the values from the webform populated.

id: dog_contest_submission_content_creator
title: 'Dog Contest Submission Content Creator'
webform: mycustomform
content_type: dog_contest_submission
field_title: 'Dog Contest - [webform_submission:values:name]'
use_encrypt: false
encryption_profile: ''
elements:
  field_email:
    type: false
    webform_field: email
    custom_check: false
    custom_value: ''
  field_dog_breed:
    type: false
    webform_field: dog_breed
    custom_check: false
    custom_value: ''
  field_dog_name:
    type: false
    webform_field: dog_name
    custom_check: false
    custom_value: ''
  field_dog_story:
    type: false
    webform_field: the_funniest_thing_my_dog_ever_did
    custom_check: false
    custom_value: ''

YAML

Voting

The main technical obstacle was that there were no voting widgets that fit the project requirements exactly:

The Fivestar contributed module is the one most often used on Drupal sites to implement the Voting API. It is extensible enough to allow developers or designers to add custom icons and customized CSS. Unfortunately, the basic structure of the widget is always the same, and it was not suitable for our contest. The available widget options are default stars, small stars, small circles, hearts, flames, or “Craft,” as shown:

The Fivestar module voting options

We enabled the Fivestar module and opted to override the base widget in order to implement a new one.

Technical Deep Dive

After enabling the Voting API Widgets module, we could now add our voting field to the submission content type. (Add a field of type “Fivestar Rating” if you want to try out the Fivestar module’s custom field.)

Note: we used the Voting API Widgets 8.x-1.0-alpha3 release. There is now an alpha5 release that introduces some structural (breaking!) changes to the base entity form that conflict with our custom solution. Job security!

Select a new field of type “Voting API”

In the field settings for the new Voting API field, the only plugin choices available are plugins from the Fivestar module.

Select a vote type and plugin for the Score

We could not use those plugins for our use case, so we went ahead and built a custom plugin to extend the VotingApiWidgetBase class. For the purpose of this example, let’s call it “SpecialWidget.”

Additionally, we needed to have the ability to set defaults for the voting widget. For that, we had to add a custom form that extends the BaseRatingForm class. The example class here is called “TenRatingForm,” since the voting widget will display a 1-to-10 dropdown list for the judges.
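As a rough illustration of what those defaults could look like, here is a plain-PHP sketch. In the real TenRatingForm, buildForm() would call parent::buildForm() on BaseRatingForm and then apply changes like these to the rating element — the element structure here is an assumption against the alpha3 release, not the module’s documented API:

```php
<?php

/**
 * Illustrative helper only: the kind of defaults TenRatingForm would apply
 * to the rating element so judges see a 1-to-10 dropdown. The element
 * structure is an assumption, not the module's documented API.
 */
function mymodule_ten_rating_defaults(array $element): array {
  // Render the rating as a plain select list instead of stars.
  $element['#type'] = 'select';
  // Keys and labels both run 1..10, mirroring the widget annotation values.
  $element['#options'] = array_combine(range(1, 10), range(1, 10));
  return $element;
}
```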

The file structure of the custom module looks like this:

modules
- contrib
- custom
-- mymodule
--- src
---- Form
----- TenRatingForm.php
---- Plugin
----- votingapi_widget
------ SpecialWidget.php
---- StateLookup.php
--- mymodule.info.yml
--- mymodule.module
--- mymodule.services.yml

Let’s look at the SpecialWidget.php file in more detail. It is fairly straightforward, composed of a namespace, a class declaration, and two inherited methods.

The namespace references the custom module. The annotation sets the widget values list, so you can easily adjust it to add your own content. It is critical to include a “use” statement to incorporate the VotingApiWidgetBase class, or else the buildForm() and getStyles() methods from the parent class will not be found, and angry error messages will show up when you try to use your new custom voting widget.

namespace Drupal\mymodule\Plugin\votingapi_widget;

use Drupal\votingapi_widgets\Plugin\VotingApiWidgetBase;

/**
 * Custom widget for voting.
 *
 * @VotingApiWidget(
 *   id = "special",
 *   label = @Translation("Special rating"),
 *   values = {
 *     1 = @Translation("1"),
 *     2 = @Translation("2"),
 *     3 = @Translation("3"),
 *     4 = @Translation("4"),
 *     5 = @Translation("5"),
 *     6 = @Translation("6"),
 *     7 = @Translation("7"),
 *     8 = @Translation("8"),
 *     9 = @Translation("9"),
 *     10 = @Translation("10"),
 *   },
 * )
 */

PHP

The other important section to point out is defining the class and the buildForm() method. The SpecialWidget class will now inherit the methods from VotingApiWidgetBase, so you do not need to copy all of them over.

class SpecialWidget extends VotingApiWidgetBase {
  /**
   * Vote form.
   */
  public function buildForm($entity_type, $entity_bundle, $entity_id, $vote_type, $field_name, $style, $show_results, $read_only = FALSE): array {
    $form = $this->getForm($entity_type, $entity_bundle, $entity_id, $vote_type, $field_name, $style, $show_results, $read_only);
    $build = [
      'rating' => [
        '#theme' => 'container',
        '#attributes' => [
          'class' => [
            'votingapi-widgets',
            'special',
            ($read_only) ? 'read_only' : '',
          ],
        ],
        '#children' => [
          'form' => $form,
        ],
      ],
    ];
    return $build;
  }
}

PHP

One additional crucial step is overriding the order of operations with which the entity builds occur. The Voting API Widgets module takes precedence over custom modules, so it is necessary to strong-arm your module to the front to be able to see changes. Ensure that the custom plugin is being called by the Voting API Widgets module, and then also ensure that the mymodule_entity_type_build() in the custom module takes precedence over the votingapi_widgets_entity_type_build() call. These functions go in the mymodule.module file.

/**
 * Implements hook_entity_type_build().
 */
function mymodule_entity_type_build(array &$entity_types) {
  $plugins = \Drupal::service('plugin.manager.voting_api_widget.processor')->getDefinitions();
  foreach ($plugins as $plugin_id => $definition) {
    // Override the votingapi_widgets form class for the custom widgets.
    if ($plugin_id === 'special') {
      $entity_types['vote']->setFormClass('votingapi_' . $plugin_id,
        'Drupal\mymodule\Form\TenRatingForm');
    }
  }
}

/**
 * Implements hook_module_implements_alter().
 */
function mymodule_module_implements_alter(array &$implementations, string $hook) {
  if ($hook === 'entity_type_build') {
    // Re-add mymodule's implementation at the end so it runs last.
    $group = $implementations['mymodule'];
    unset($implementations['mymodule']);
    $implementations['mymodule'] = $group;
  }
}

PHP

After adding the custom plugin to the custom module, the option to select the widget will be available for use by the Voting API field. (After clearing all caches, of course.) The field here is called Score.

Select a vote type and plugin for the Score

Adjusting the style of the widget can be done in the “Manage Display” section for the content type:

Choose a display option for the VotingAPI Formatter in Manage Display

And here is how it looks when the voting field has been added to the Submission content type:

The last piece of the puzzle is permissions. As with most custom features, your existing user roles and permissions will need to be configured to allow users to vote, change votes, clear votes, and so on — all the features that the Voting API Widgets module provides. Unless the voting will be done by authenticated users, most of the boxes should be checked for anonymous users — the general public.

The Permissions screen and the VotingAPI options

The judges can now vote on the node. A judge can vote as many times as they want, according to our client’s specs. Each vote will be saved by the Voting API. Depending on how you want to use the voting data, you can opt to display the most recent vote, the average of all votes, or even the sum of votes.

All voting data will be entered into the votingapi_vote table of your Drupal database by the Voting API module.
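As a plain-PHP sketch of those three options — not the Voting API’s own result functions, just an illustration of the arithmetic over rows like those stored in votingapi_vote:

```php
<?php

/**
 * Illustrative sketch only: summarizing one node's vote rows three ways.
 * Each row mimics a votingapi_vote record with 'value' and 'timestamp'.
 */
function mymodule_summarize_votes(array $votes): array {
  // Order by timestamp so the last element is the most recent vote.
  usort($votes, fn(array $a, array $b) => $a['timestamp'] <=> $b['timestamp']);
  $values = array_column($votes, 'value');
  return [
    'latest' => end($values),
    'average' => array_sum($values) / count($values),
    'sum' => array_sum($values),
  ];
}
```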

SQL Create statement for the Voting API showing how the data is stored

Success!

To Wrap it All Up

We hope you can appreciate the benefit of leveraging open source software within the Drupal ecosystem to power your projects and campaigns. Between the Voting API and Voting API Widgets modules alone, there were over 5,000 lines of code that our engineers did not have to write. Extending an existing codebase that has been designed with OOP principles in mind is a major strength of Drupal 8.

While not formally decoupled, we were able to separate the webform and submission theming structurally from the voting functionality so that our designers and engineers could work side-by-side to deliver this project. Credit to our team members Phil Frilling and Ben Holt for technical assists. The client rated us 10 out of 10 for satisfaction after voting was formally opened in a live environment!


THE BRIEF

The American Veterinary Medical Association (AVMA) advocates on behalf of 91,000+ members — mostly doctors but some veterinary support staff as well. With roots as far back as 1863, their mission is to advance the science and practice of veterinary medicine and improve animal and human health. They are the most widely recognized member organization in the field.

Make the Brand Shine

The AVMA website is the main communications vehicle for the organization. But the framework was very out of date — the site was not mobile-friendly, and some pages were downright broken. The brand was strong, but the delivery on screen was weak, and the tools reflected poorly on the organization.

Our goals were to:

IMPROVE THE SITE MAP

Content bloat over the years created a site tree that was in bad need of pruning.

IMPROVE SEARCH

When a site has so much content to offer, search can be the quickest way for a motivated user to find relevant information. The goal was to make search more powerful while maintaining clarity of use.

COMMUNICATE THE VALUE OF MEMBERSHIP

Resources and benefits that come with membership were not clearly illustrated, and while members were renewing regularly, they were not using the site as a resource as often as they could.

STRENGTHEN THE BRAND

If the site was easier to navigate and search, if it had a clear value proposition for existing and prospective members, and if the visual design were modern and device-friendly, the brand would be stronger.


THE APPROACH

Put Members First

Oomph embarked on an extensive research and discovery phase which included:

  • A competitor analysis of 5 groups in direct competition and 5 similar membership-driven organizations
  • An online survey for the existing audience
  • Content and SEO audits
  • Several in-person workshops with stakeholder groups, including attendance at their annual convention to conduct on-the-spot surveys
  • Phone interviews with volunteers, members, and additional stakeholders

With a deep bed of research and personal anecdotes, we began to architect the new site. Communication remained constant as well, with numerous marketing, communications, and IT team check-ins along the way:

  • An extensive card sort exercise for information architecture improvements — 200+ cards sorted by 6 groups from throughout the organization
  • A new information architecture with audience testing
  • Content modeling and content wireframe exercises
  • A brand color accessibility audit
  • Over a dozen wireframes
  • Three style tiles (mood boards) with revisions and refinements
  • Wireframe user testing
  • A set of deep-dive technical audits
  • Several full design mockups with flexible component architecture

Several rounds of style tiles explored a new set of typefaces to support a modern refresh of the brand. Our ideas included darkening colored typography to meet WCAG thresholds, adding more colored tints for design variability, and designing a set of components that could be used to create marketing pages using Drupal’s Layout Builder system.


THE RESULTS

The design update brought the main brand vehicle fully into the modern web. Large headlines and images, chunks of color, and a clearer hierarchy of information make each page’s purpose shine. A mega-menu system breaks complex navigation into digestible parts, with icons and color to help differentiate important sections. The important yearly convention pages got a facelift as well, with their own sub-navigation system.

BUILD DETAILS

  • Drupal 8 CMS
  • Layout Builder for flexible page building
  • Aptify member-management
  • Single Sign-On (SSO) integration with Drupal and Aptify
  • Content migration from SharePoint, WordPress, and CSV files
  • Hosted with Acquia

FINAL THOUGHTS

Supporting Animals & Humans Alike

Membership to the AVMA for a working veterinary doctor is an important way to keep in touch with the wider community while also learning about the latest policy changes, health updates, and events. The general public can more easily find information about common pet health problems, topical issues around animal well-being during natural disasters, and food and toy recalls. The goal of supporting members first while more broadly providing value to prospective members and non-members alike has coalesced into this updated digital property.

We look forward to supporting animal health and human safety as we continue to support and improve the site over the next year.