
The browser chrome test suite is an automated testing framework designed to allow testing of application chrome windows using JavaScript. It currently allows you to run JavaScript code in the same scope as the main Firefox browser window and report results using the same functions as the Mochitest test framework. As with all tests, they won't work in a build with tests disabled (--disable-tests).

Running the browser chrome tests

To run the browser chrome tests, first build Mozilla with your changes, then run:

./mach mochitest-browser

This will launch your build and open a "browser chrome tests" window, and report the results in the UI and to stdout.

It is possible to run specific groups of tests. As with Mochitest, the path given as an argument is the path to a test or directory within the Mozilla source tree. If the path points to a directory, then the tests in that directory and all of its subdirectories will be run.

For example, to run the tests in browser/base/content/test the command would be:

./mach mochitest-browser browser/base/content/test/

or, without mach:

TEST_PATH=<path_to_the_tests> make -C <objdir> mochitest-browser-chrome

To run tests in a debugger, the following should work:

./mach mochitest-browser --debugger gdb browser/base/content/test/

Run ./mach help mochitest-browser for more options.

Writing browser chrome tests

Browser chrome tests are snippets of JavaScript code that run in the browser window's global scope. A simple test would look like this:

 function test() {
   ok(gBrowser, "gBrowser exists");
   is(gBrowser, getBrowser(), "gBrowser and getBrowser() are the same");
 }

The test() function is invoked by the test harness when the test is run. The test file can contain other functions; they will be ignored unless invoked from test().

gBrowser is set in browser.js and is a tabbrowser element (the tabbrowser with id="content" in browser.xul).

Note: Be careful when naming your functions and variables. Since the test files are executed in the same scope as the browser window, conflicting variable names could cause trouble while running the tests. You should attempt to reduce the side effects of the testing code and "clean up" after yourself, to avoid influencing other tests.

The comparison functions are identical to those supported by Mochitest; see the Mochitest documentation for details on how the comparison functions work. The EventUtils helper functions are available on the EventUtils object defined in the global scope.
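
For example, a mouse click on a chrome element can be synthesized with EventUtils. This is a minimal sketch; the element id ("identity-box") and the assertion messages are illustrative assumptions, not part of the original example:

 function test() {
   // "identity-box" is assumed to exist in browser.xul; use whichever chrome element your test cares about.
   var identityBox = document.getElementById("identity-box");
   ok(identityBox, "the identity box exists in the browser window");
   // Synthesize a real click in the middle of the element, in the chrome window.
   EventUtils.synthesizeMouseAtCenter(identityBox, {}, window);
 }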

The test file name must be prefixed with "browser_", and must have a file extension of ".js". Files that don't match this pattern will be ignored by the test harness. Using a descriptive file name is strongly encouraged instead of just using a bug number.

You can collect common utilities and helpers in a file called head.js, which must live in the same folder as the browser chrome tests. This file will be injected into the test scope for each test in the same folder. Note that any code in head.js's main scope runs before each test's test() function.
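
For instance, a head.js might look like the following. This is a hypothetical sketch; the helper name and the preference it touches are made up for illustration:

 // head.js -- shared by every browser_*.js test in this folder.
 // Hypothetical helper; the preference name is illustrative only.
 function setDummyPref(value) {
   Services.prefs.setBoolPref("browser.test.dummy_pref", value);
 }

 // Code in head.js's main scope runs before each test's test() function.
 registerCleanupFunction(function() {
   Services.prefs.clearUserPref("browser.test.dummy_pref");
 });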

Asynchronous tests

When writing async tests, you can use the add_task method with Promises. See the xpcshell documentation for more information about this.
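
For example, a task-based test might look like this. This is a minimal sketch; it assumes the BrowserTestUtils helpers are available, and newer trees also accept async functions instead of generators. When add_task() is used, the harness drives the tasks itself, so you do not define test() or call waitForExplicitFinish()/finish():

 add_task(function* () {
   // Open a tab, check it, then clean up.
   let tab = yield BrowserTestUtils.openNewForegroundTab(gBrowser, "about:blank");
   is(gBrowser.selectedTab, tab, "the new tab is selected");
   yield BrowserTestUtils.removeTab(tab);
 });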

The test suite also supports asynchronous tests, using the same function names as Mochitest. Call waitForExplicitFinish() from test() if you want to delay reporting a result until after test() has returned. Call finish() once the test is complete. Be aware that the test harness will mark tests that take too long to complete as FAILED (the current timeout is 30 seconds).

 function test() {
   waitForExplicitFinish();
   setTimeout(completeTest, 1000);
 }
 
 function completeTest() {
   ok(true, "Timeout ran");
   finish();
 }

If your test is randomly timing out and you think that's just because it takes too long, you can extend the timeout. Be aware that this is not a real solution; you should investigate why your test is taking so long, since that is most likely due to a bad test design or a performance problem. If you can rewrite the test to make it shorter, split it into smaller tests, or find out why it's taking so long, you should really do that instead!

 function test() {
   // requestLongerTimeout accepts an integer factor that is a multiplier for the default 30 second timeout.
   // So a factor of 2 means: "Wait for at least 60s (2*30s)".
   requestLongerTimeout(2);
   waitForExplicitFinish();
   
   setTimeout(completeTest, 40000);
 }
 
 function completeTest() {
   ok(true, "Timeout did not run");
   finish();
 }

Exceptions in tests

Any exceptions thrown during test() will be caught and reported in the test output as a failure. Exceptions thrown outside of test() (e.g. in a timeout or event handler) will not be caught, but will result in a timed-out test if they prevent finish() from being called.
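
As a contrived illustration (not from the original text), the following test never fails cleanly; it times out instead, because the exception is thrown outside of test() and finish() is never reached:

 function test() {
   waitForExplicitFinish();
   setTimeout(function() {
     // Thrown outside test(): the harness does not catch this exception,
     // and because finish() is never called the test eventually times out.
     throw new Error("oops");
   }, 100);
 }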

Cleaning up after yourself

If you need to do special clean up after running your test, you can register a cleanup function that is guaranteed to run after your test finishes. You can call registerCleanupFunction() at any point in your test, even in head.js if you need to register a cleanup function for all tests in that folder. Note that you can register as many cleanup functions as you need. Cleanup functions are also guaranteed to be called if your test times out, so you can ensure that a timeout won't pollute the tests that run next and cause them to fail.

registerCleanupFunction(function() {
  // Clean up test related stuff here.
});

function test() {
  // Add some test related stuff.
}

When writing tests, design for failure. It is much better to call registerCleanupFunction() than to do the cleanup at the end of a successful test run, because the cleanup functions are always called, no matter what. For instance, if you change a preference, you want to make sure that the preference is always reset so that it doesn't impact the tests that run after yours.
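
For example, a test that flips a preference might look like this (a minimal sketch; the preference name is an arbitrary illustration):

registerCleanupFunction(function() {
  // Always restore the preference, whether the test passes, fails, or times out.
  Services.prefs.clearUserPref("browser.tabs.warnOnClose");
});

function test() {
  Services.prefs.setBoolPref("browser.tabs.warnOnClose", false);
  is(Services.prefs.getBoolPref("browser.tabs.warnOnClose"), false,
     "warnOnClose is disabled for this test");
}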

Adding a new browser chrome test to the tree

To add a new browser chrome test to the tree, add it to the browser.ini file in the same folder as the test. Also remember that the test file's name must begin with "browser_" for the test to be recognized as a browser chrome test. If you are adding the first tests in a directory, make sure to also include any head.js you added to support-files.
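
A minimal browser.ini might look like this (the file names are illustrative assumptions):

[DEFAULT]
support-files =
  head.js
  test_page.html

[browser_example.js]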

Support-files

Once added to the support-files section of browser.ini, support files may be referenced as https://example.com/browser/[path_to_file] or chrome://mochitests/content/browser/[path_to_file].
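
For example, assuming a manifest in browser/base/content/test/ lists test_page.html as a support file, a test could load it in a tab like this (a hedged sketch using the classic callback style):

function test() {
  waitForExplicitFinish();
  var url = "https://example.com/browser/browser/base/content/test/test_page.html";
  var tab = gBrowser.selectedTab = gBrowser.addTab(url);
  tab.linkedBrowser.addEventListener("load", function onLoad() {
    tab.linkedBrowser.removeEventListener("load", onLoad, true);
    ok(true, "the support file loaded");
    gBrowser.removeTab(tab);
    finish();
  }, true);
}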