Test Case Writing Primer

Writing a clear, concise manual test case is an important part of quality assurance. There are a number of things you can do to make sure a manual tester has easy steps to follow when they run a test case in Moztrap (moztrap.mozilla.org), Mozilla's manual testing database. Also, be aware that a good chunk of the tests we write cover "functional testing", meaning the emphasis is on how the feature works. Part of manual testing is confirming that an existing feature works as expected. As we continue development on our products, we may occasionally break certain functionality, and the purpose of the manual tests in Moztrap is to provide a "gut check" that everything still works as expected. We want our users to be happy!

Understanding Moztrap test cases

In Moztrap, we currently have three main types of testing. Each of them contains several sub-types:

Community testing

  1. [Community] 30min Smoke Test
  2. [Community] 1h Smoke Test
  3. [Community] 2h Smoke Test
  4. [Community] Basic Functionality Tests

The main difference between these tests and those in other categories is that they have been created especially to be run by the community each day. While the community can run any tests in Moztrap, these particular tests are run only by the community.

Basic Functionality Tests cover the minimum basic functionality of Firefox. A basic functional test of the Firefox toolbar would cover the most common operations, without drilling down to test every level of functionality.

Smoke Tests are a much smaller subset of the basic functionality, but cover many of the common operations that users perform while browsing. A smoke test of the Firefox toolbar would probably cover customizing the toolbar at one level and then making sure that Firefox honors those settings after a restart (again, this reflects the fact that smoke tests are more focused on catching regressions). The 30min Smoke Test covers the main browser functionality and should be runnable in about 30 minutes. The 1h Smoke Test covers a little more, since the tester has an extra half hour, while the 2h Smoke Test should cover most of the main functionality of the browser.

Regression testing

  1. Firefox X Regression Tests
  2. [Regression] Flash Player
  3. [Regression] GPU Drivers

Regression tests are created and run to ensure that existing functionality has not been broken by recent changes in the code. The Firefox X Regression Tests are created taking into account all the changes made in version X of Firefox; running them should reveal whether any of those changes broke existing functionality.

The Flash Player regression tests are meant to be run each time a new Flash version is released. They check whether the new Flash Player version reveals any unknown Firefox issues, or whether the new version itself has regressions bad enough that it needs to be blacklisted.

The GPU Drivers regression tests are run every time new GPU beta drivers are released. Just as with Flash, these tests check whether the new drivers reveal unknown Firefox issues, or whether the drivers themselves are buggy enough that they need to be blacklisted.

Feature testing

Each time a new feature lands in Firefox, QA creates a suite of tests for it. These tests are run several times before the feature is released, to ensure it doesn't ship with any major issues. The corresponding tests and test runs then remain in Moztrap for a while so the community can run them whenever they want. How long a run stays active is decided by the QA member who created it, taking into account the changes planned for the feature in the near future.

Q: Where should my test case go once it has been created?

A: Don't worry too much about which group a test case should belong to; the Mozilla QA team can help you put your test case in the right bucket. Just ask for help on the #qa IRC channel. If you are creating multiple tests for the same purpose, you might create your own test suite/run.

Understanding terminology used in test cases

There is certain terminology associated with many areas of the browser and the email client. An example is the browser chrome, which is another name for the graphical framework and elements of the web browser window. The browser chrome includes things such as the title bar, toolbar buttons, scrollbars, status bar and window frame.

It is probably best not to include terminology such as "chrome" in Moztrap test cases, since a wide variety of testers come to Moztrap and may not know what the "browser chrome" is. A better approach is to call out specifically the elements of the chrome, such as the window frame, status bar or scrollbars. Even if you don't know the technical name for things, be as descriptive as possible. A Mozilla QA person can always go back in and assist you in cleaning up any unclear areas.

There are some standard items in the Browser User Interface that we like to refer to, and these include:

  1. Menu Bar: includes the items at the top of the browser, such as File | Edit | View | History | Bookmarks | Tools | Help.
  2. Location Bar: also known as the URL bar. Either term is fine when writing Moztrap test cases.
  3. Search Bar: the area on the right-hand side of the browser with a small logo in it, used for searches.
  4. Toolbar: includes a set of icons that allow you to navigate.
  5. Bookmarks Toolbar: located below the three other bars (items 2-4); it's where you can add individual bookmarks, bookmark folders and RSS feeds.
  6. Context Menu: the menu you get when you right-click on the content area, a browser element, etc.

Bottom Line: Don't get too hung up on terminology. A Mozilla QA person can help you navigate these waters.

Overall caveats for writing test cases for Moztrap

  1. Be concise with your wording.
  2. Make the steps easy to follow.
  3. Be descriptive if necessary (See Example 2 for how being descriptive can help).
  4. Don't get hung up on terminology.
  5. Have a clear set of expected results.
  6. Write your test cases so someone unfamiliar with the tool(s) or product(s) could understand them.
  7. When writing, keep your audience in mind.

Writing test cases for Moztrap

The set of steps that you create when writing a test case is similar to the "Steps to Reproduce" that you might use when filing a bug (some of you may be familiar with this if you have used Bugzilla). Here are some helpful guidelines to use when writing a test case:

Test case title

When creating a title for a test case, try to drill down and be as concise as possible.

Good examples would be:

  • Adding a Bookmark
  • Open Search Using Keyboard Shortcuts
  • Restore Home Page to default

Bad Examples would be:

  • Home Page
  • Help
  • Fonts

The bad examples don't give enough information about what part of Help, the home page, or fonts will be tested. Better titles might be:

  • Setting a Home Page
  • Opening Mozilla Firefox Help
  • Increasing Font Size Using Keyboard Commands

Test case steps

  • The minimal set of steps necessary to test the feature. Try not to be too wordy here - emphasis is on being concise!
  • When writing, think about the step by step process that will get you to the end game.
  • When I write test case steps, I often script out the basic steps first and then do several iterations to firm them up.
  • When writing a test case for a new feature, it is best to test all aspects of it first and get an understanding of how it works before writing the test cases. So make sure to launch all windows and click all radio buttons to understand what is supposed to happen.
  • You can storyboard the screens if that is helpful. One way is to take screenshots and then write your steps below them.
  • If there are special setup steps, make sure to include them at the top of the test case. An example of this would be: If you are asking a tester to test a feature that requires a fresh profile, make sure to state that at the beginning of the test case. Otherwise the tester may test using a profile that already exists.
  • Remember that Firefox and Thunderbird products run on three platforms (Windows, Mac and Linux), and there may be slight differences in behavior between these platforms. If you have the ability to check the other platforms when writing test cases, that would be great. Otherwise, when you submit your test case, note that it has only been written for Windows and a QA team member can tweak it so the verbiage works for the other platforms.
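
To pull these guidelines together, here is a minimal sketch, in Python, of how a test case could be modeled as structured data. The TestCase class and its fields are hypothetical illustrations (Moztrap has its own schema); the point is simply that setup comes first, steps form an ordered list, expected results are explicit, and platform coverage is called out.

    from dataclasses import dataclass, field

    @dataclass
    class TestCase:
        """Hypothetical model of a manual test case (not Moztrap's actual schema)."""
        title: str                 # concise and action-oriented, e.g. "Adding a Bookmark"
        steps: list                # the minimal ordered steps a tester follows
        expected_results: list     # concrete, observable outcomes
        setup: list = field(default_factory=list)       # special setup, stated up front
        platforms: tuple = ("Windows", "Mac", "Linux")  # platforms the case applies to

    # Example instance mirroring the "good title" examples above.
    add_bookmark = TestCase(
        title="Adding a Bookmark",
        setup=["Start Firefox with a fresh profile."],
        steps=[
            "Open any web page.",
            "Press Ctrl+D (Cmd+D on Mac) to bookmark the page.",
            "Click 'Done' in the bookmark panel.",
        ],
        expected_results=["The page appears in the Bookmarks menu."],
    )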

Expected results

  • What the application or feature should do when it is invoked by the user.
  • Try to avoid using terms like "Expected behavior should be observed." It is better to phrase the expected results so the person testing will understand what is supposed to happen.
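
As a toy illustration of that advice, the Python sketch below flags the kind of vague boilerplate to avoid in an expected result; the phrase list is made up for this example.

    # Toy checker that flags vague expected-results wording (phrases are illustrative).
    VAGUE_PHRASES = (
        "expected behavior should be observed",
        "works as expected",
        "behaves correctly",
    )

    def is_vague(expected_result):
        """Return True if the expected result leans on boilerplate instead of
        describing what the tester should actually see."""
        text = expected_result.lower()
        return any(phrase in text for phrase in VAGUE_PHRASES)

    assert is_vague("Expected behavior should be observed.")
    assert not is_vague("The Print dialog box opens.")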

Let's now look at some sample test cases.

Sample test cases

EXAMPLE 1

You are asked to write a test case to test the functionality of adding an icon to the Firefox toolbar.

When writing this test case, you will first need to think about the series of steps needed to add a toolbar icon. When fleshing this out, remember that there are various ways to invoke the "Customize Toolbar" user interface, and you might need to consider that fact when writing the test case.

First, let's start with a minimal set of steps and expected results to get us to the addition of a toolbar icon.

Title: Add an icon to the Firefox toolbar

Steps:

  1. Right-click on the toolbar and select "Customize" from the dropdown menu.
  2. A "Customize Toolbar" dialog box containing icons/spaces should be shown.
  3. Using your mouse, select an individual icon.
  4. Drag the icon to the toolbar and place it to the right of the URL bar (between the end of the URL bar and the beginning of the search box).
  5. Click "Done" to close the Customize Toolbar dialog box.

Expected Results:

The icon should be added to the toolbar in the place that you specified.

This is an example of a basic test case. But there are a number of assumptions that are not explicitly called out, such as:

  • There are other ways to get to the Customize Toolbar besides a right-click (it is important to test these as well - menu and keyboard commands). It is not always necessary to include these in the test case, but it is helpful because it widens the swath of what is tested; users may come at the feature from different angles.
  • The test case stands on its own as a test of whether you can add an icon, but what about the functionality of the actual icon once it has been added (for example, when you click the Print icon, does it actually launch the Print dialog)? It is often useful to write a test case that "kills two birds with one stone", and simply adding a line that says "Click on the toolbar icon you added and confirm that it launches the expected functionality" makes the test case more solid. Note that this won't apply to every test case, but it just so happens that in this example there is great benefit from adding that extra line. Think about this kind of thing when you're crafting your test cases. You don't want to include everything in one test case, but, at the same time, it behooves you to include something that will make the test case useful in more ways than one.

Here is the test case, rewritten to include testing the functionality of the Print icon.

  1. Right-click on the toolbar and select "Customize" from the dropdown menu (or go to View > Toolbars > Customize... from the main menu).
  2. A "Customize Toolbar" dialog box containing icons/spaces should be shown.
  3. Select the Print icon using your mouse.
  4. Drag the Print icon to the toolbar and place it to the right of the URL bar (between the end of the URL bar and the beginning of the search box).
  5. Click "Done" to close the Customize Toolbar dialog box.
  6. Click the Print icon you just added.

Expected Results:

  • The Print icon should be added to the toolbar in the place that you specified.
  • Clicking the Print icon should launch the Print dialog box.

EXAMPLE 2

Title: Drag/Select a Web Page

Steps:

  1. View any web page.
  2. Drag-select the page. (Specifically, while holding down the mouse button, drag the mouse pointer downwards from any point in the browser's content region to the bottom of the browser's content region.)

Expected Results:

  • The window should scroll downwards. Scrolled content should be selected.

How this test case could be improved:

This test case is a good first cut, but could still stand some improvement. For example, in the expected results, will a tester definitively be able to identify how content is selected? Usually when content is selected, it is highlighted in a color; that would probably be useful to include. Also, this test case requires a page that can be scrolled, so it would be useful to provide a URL so the tester can navigate quickly to a page with enough content to scroll.

Creating a Variation of this test case

If I wanted to create a variation of this test case, I might do something like this:

Title: Drag/Select a Web Page

Steps:

  1. View a web page that has enough content to need scrolling.
  2. Drag-select the page (while holding down the mouse button, drag the mouse pointer downwards from any point in the browser's content region to the bottom of the browser's content region).

Expected Results:

The window should scroll downwards and all the scrolled content should be selected. Selected content is highlighted in a color that may vary depending on your system settings.

Hopefully this has given you a basic framework to use when writing a test case. As noted earlier, the Mozilla QA team is ready, willing and able to help out in our IRC channel should you need assistance.

Original document information

  • Author(s): Rebecca Billings
  • Date last modified: May 20, 2014 at 8:52 am PST

 
