Updating a Drupal website is of paramount importance for security. While the update process itself may be simple – and a backup taken before the update can quickly restore a previous stable state – having tests in place and ready to roll helps you find the not-so-obvious issues.
How Test Automations Can Save the Day
Do you have a Drupal-based web platform that uses more than 30 core and contributed modules?
Is your web platform more than a year old?
Are you a developer who prefers to keep the code base up-to-date with the latest Drupal core?
If your answer to any of these questions is “yes,” then there will always be a need to stay on top of the regular Drupal core updates, contributed module updates, and the security patches.
NOTE: In this article, updates refer to installations of new, minor versions of core and contributed modules.
With every introduction of new code, there is a risk of affecting the existing functionality of your web application, not to mention the additional overhead of doing a complete regression test of the web application functionality. Although this is more important for applications in continuous development mode, even stable and mature applications which have an active user base will require you to think, re-think, and eventually plan the update.
The Drupal Security Advisories page on Drupal.org announces updates to Drupal core, contributed modules, and the latest security patches. (To check the updates available for currently installed modules, visit the site’s /admin/reports/updates page for a report.) The biggest trigger for the update process is the release of a new version of Drupal core, followed by “Critical” security patches to one or more of the modules used by the current system. If either of these happens, the planning process should start. The following steps can help plan, prepare for, and manage the update process:
What to update: Review the Security Advisories page and the /admin/reports/updates report to create a list of the specific items that need to be updated.
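If Drush is available, the same report can be pulled from the command line. A sketch, assuming Drush 8-era commands and that it is run from the Drupal docroot:

```shell
# List pending core and contributed module updates via Drush.
if command -v drush >/dev/null 2>&1; then
  drush pm-updatestatus                  # full report of available updates
  drush pm-updatestatus --security-only  # security releases only
else
  echo "Drush not found; use the /admin/reports/updates page instead."
fi
```

The `--security-only` flag narrows the list to the items that should jump to the top of the priority queue.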
How to prioritize: As mentioned above, the highest priority would be core, followed by critical security patches and regular updates. If the site hasn’t been through an update for a while and has over 100 modules installed, the list of updates would be long. In our experience with a somewhat complex Drupal platform (over 200 modules in place), in spite of doing updates every month, we still routinely have a backlog of 4-5 updates waiting in the queue.
If the list is long, the updates will have to be spread across phases, with each phase including a mix of module updates and security patches. When all the planned updates are finally completed, sure enough, there will be new items that need to be updated. Regular updates will lead to a better system and a better experience for users of the system.
Getting started: Begin the update process by setting up a separate test environment. Deploying an independent code branch (with the latest code) on this test environment will allow you to carry out all post-update tests without disturbing the continuous delivery mode of the development and quality assurance teams. On this environment, the planned updates can be done in phases, and a quick check of impacted code and functional areas can be tested by those handling the updates. But what about the less-obvious impact areas that one may not be aware of?
Here come the post-update tests! If no automated regression suite has been built for your system yet, have no fear: there are steps you can take to create one quickly. It is a one-time effort, and the resulting suite can be re-used after any kind of update activity with little or no modification. Here are the ingredients:
- Install Java.
- Download the Apache JMeter binaries.
- Create a CSV file with all accessible URLs in the application. Use only relative paths in the CSV file – for example, /user/login instead of example.com/user/login. If possible, create separate CSV files with URLs for anonymous users and for authenticated users.
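As a sketch, the two files might be created like this. The paths listed are only examples, and the valid-URL filename (Anonymous_ValidURL.csv) is an assumed counterpart to the Anonymous_InvalidURL.csv file used later in this article:

```shell
# Example CSV of URLs reachable by anonymous users (relative paths only).
cat > Anonymous_ValidURL.csv <<'EOF'
/
/node
/contact
/user/login
EOF

# Example CSV of URLs that should be blocked for anonymous users.
cat > Anonymous_InvalidURL.csv <<'EOF'
/admin
/admin/reports/updates
/node/add
EOF

# The line counts feed the "Loop count" fields set in a later step.
wc -l < Anonymous_ValidURL.csv
wc -l < Anonymous_InvalidURL.csv
```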
Write the tests: Here’s a surprise: you don’t have to actually write the tests. You can simply download the template script. Some modifications will be required: open the script in JMeter and expand the tree to see the complete script. Here are the things to change:
- Under “User defined variables” set the value of the host variable.
- Change the protocol to http or https, depending on what is being used by the current application.
- Disable the thread group for “User Type X”: right-click it and choose Disable, or press Ctrl-T.
- Disable the thread group for “Site Admin” to start with.
- Rename the CSV file with URLs for anonymous users to the valid-URL filename the script expects (the counterpart of the invalid-URL file below).
- Rename the logged-in users’ CSV file as Anonymous_InvalidURL.csv. Make sure that all URLs in this file are accessible only to logged-in users. This CSV acts as an access-control check, confirming that anonymous users cannot reach any unauthorized content.
- Count the URLs in both files. Go to the “Thread group for anonymous users” and click on “1st Loop controller”. In the “Loop count” field, enter the number of URLs in the file. Repeat for the second “Loop controller” and enter the count of invalid URLs for Anonymous users.
- Place the two CSV files in the same directory as the downloaded .jmx file. You can place them elsewhere too; in that case, prefix the filenames in “User Defined Variables” with the full path, using the forward slash ‘/’ or backslash ‘\’ as appropriate for the operating system.
- For a valid URL, JMeter asserts that the response code returned is neither 404 nor 403, and that text like “error” or “access denied” is not present in the page body. Similarly, for invalid URLs, it looks for the text anonymous users see when trying to access a restricted URL. Change the text assertions to match what your web application returns, by expanding the loop controller and the nested requests. You can add as many assertions as needed; they will apply to each URL in the CSV file.
- And now you are ready to roll! Run the script by clicking on the Run button. (Duh.)
- Click on the “View Results tree” section to view the results. You should see all greens.
- Extend the test to all roles in the system by creating two CSV files for each role. The above JMeter script has two identical thread groups to illustrate this as “Site Admin” and “User role X”.
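The GUI Run button is convenient while building the script, but for repeated runs (and later, for Jenkins), JMeter’s non-GUI mode is the usual choice. A sketch, assuming the script has been saved as drupal_regression.jmx (a hypothetical filename) and that the jmeter launcher is on the PATH:

```shell
# Run the test plan headlessly (-n), loading the plan (-t) and
# writing per-request results to a JTL log (-l) for later analysis.
if command -v jmeter >/dev/null 2>&1; then
  jmeter -n -t drupal_regression.jmx -l results.jtl
else
  echo "jmeter is not on the PATH; add <jmeter-dir>/bin to PATH first."
fi
```

The results.jtl log can then be opened in the “View Results Tree” or “Summary Report” listeners for inspection.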
Debug the test: If you do not see all greens yet, it is possible that the assertions are failing. Look at all the assertions, and add or remove the relevant assertions for your web application. Once everything is running fine, you can extend the test to cover site admin URLs or other roles, as explained in the steps above. You will also need to add the credentials for the role in the “Enter Credentials & Login” step in the script. It is recommended that you add URLs for only one role at a time. Test, and then add another role. The greatest benefits can be achieved only by covering all possible pages in the application.
Finally, a look at the reports: After all the desired roles and URLs have been added to the JMeter test, you can view and analyze the Summary report for collective average response times on various valid URLs. This test script can also be used to generate a uniform load on the web application, and you can monitor server resources to identify performance bottlenecks as well.
Run this test before and after every update: Once the regression suite is complete and correct, running it performs a full regression of all GET requests on the site for all roles, along with the negative tests.
NOTE: The GET requests can cover navigation to listing pages, landing pages, and add and edit content pages. However, dynamic components that are loaded using Ajax or jQuery or form submissions are not covered by these tests.
Now, a quick check of the critical form submissions (i.e., POST requests) is all that remains to be done. Regression issues due to updates will be quickly identified. The automated tests will at least point out if there are bigger problems caused by regression. The effectiveness of these tests depends on the amount of coverage they provide. You can increase coverage simply by adding more URLs to your CSV files.
Future Steps With Continuous Integration
Reducing any kind of manual intervention is the next step here; continuous integration is the answer. Set up Jenkins and create a free-style project that can be scheduled to run at regular intervals, or on demand. To run the same JMeter script through Jenkins, you will need to create a Mavenized project structure and reference the JMeter-Maven plugin in the pom.xml.
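A minimal build section might look like the following sketch. The plugin coordinates are real; the version shown is only an example, so check for the current release. By convention, the plugin picks up .jmx files from src/test/jmeter/:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>com.lazerycode.jmeter</groupId>
      <artifactId>jmeter-maven-plugin</artifactId>
      <version>2.1.0</version> <!-- example version; use the latest release -->
      <executions>
        <execution>
          <id>jmeter-tests</id>
          <goals>
            <goal>jmeter</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With this in place, `mvn verify` runs the suite, which is the command a Jenkins free-style project would invoke on its schedule.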
To include verification of dynamic components on the application and different kinds of form submissions, additional “HTTP Request” samplers need to be created separately under a Thread Group. This will take a little more time, to correlate dynamic form parameters in Drupal, like form_token, for every request. A tool like Fiddler can be used to inspect the parameters in each request, or the BlazeMeter plugin can help record the requests.
The web application or web platform we have been building and enhancing for over two years now – with 244 core and contributed modules – has these JMeter tests in place, which run every day through Jenkins. The reports are sent to everyone on the team, and updates are also posted on Slack when the tests pass or fail.
Challenge: Until four months ago, only Drupal core was being updated, and we had a backlog of over 70 contributed module updates. It seemed like an uphill task to update them all in one go.
Solution: The modules to be updated were classified into Low-, Medium-, and High-risk categories. The updates were then divided into multiple sprints, with each sprint having progressively more low-risk updates. With this approach, we were able to clear the initial backlog of updates and regression tests in three months. By the time we struck off old items from the backlog, there was already a new bunch of updates, but the list is much smaller now, and we consistently address all core and module updates in every sprint.
The automated regression tests are also used to check performance “deltas” when key areas of the application change, by comparing before-and-after reports. They also act as a smoke test when a developer wants to quickly verify that there has been no unintended impact. Recently, we used the tests after updating PHP from 5.3 to 5.6 to observe the response-time trend across requests before and after the change.
With a dedicated and passionate Drupal Community, Drupal core and its contributed modules go through constant updates. This makes it critical for webmasters and administrators to ensure timely application of these updates to prevent security vulnerabilities, enhance stability, and enable new features. Updates need to be structured without impacting current functionality, continuous development, delivery, and maintenance cycles.
To keep sites up-to-date, a few simple regression tests can cover your entire web application with minimum investment of time and effort; modules or other environment-related updates become a lot easier and faster.
Image: "Infinity" by Juan Salmoral is licensed under CC BY-NC-ND 2.0.
You could also use visual testing to ensure that your updates haven’t broken anything. For example, a service such as Backtrac (http://backtrac.io) can create snapshots of your site before and after a release and compare them; it can also compare environments, if you deploy your updates to a staging environment first.