
Gaia Integration tests

This document covers running and writing integration tests for Gaia apps — written in JavaScript and run via Marionette — and provides a detailed explanation of how the integration tests are run.

Running tests

This section looks at setting up your environment correctly to run the existing integration test suite. The prerequisite is a working clone of the gaia repository. Note that tests live alongside the rest of each app's code (in apps/my_app/test/marionette, for example), and test files always end in _test.js. Shared code for tests lives under shared/test/integration.

Setup

Shortcut: If you don't want to mess with your own environment, you can try MozITP — this toolset will automatically set up a pre-configured Ubuntu VM and Gaia integration test environment, allowing you to start testing on Mulet or a real device in one click.

As of late November 2015, to set up your environment for running tests on the master branch you need:

Running all the tests

The following command runs all the integration tests for all the apps. It also installs the marionette-js-runner and associated Node.js modules if they are not already present.

make test-integration

Important: If you get the error:

npm ERR! 404 'gaia-marionette' is not in the npm registry.

Make sure you really have npm 2.0.

You will get to watch the tests pop up on your screen, and you will probably also hear various sounds as they run. This can be distracting or possibly maddening. To run specific tests, set the TEST_FILES environment variable. To do so in the bash shell (the default on Mac OS X), use a command of the form:

TEST_FILES=<test> make test-integration

For example, you could run the day_view_test.js test in the Calendar app with the command below.

TEST_FILES=./apps/calendar/test/marionette/day_view_test.js make test-integration

If you would like to run more than one test, use a space-separated list of files within the TEST_FILES variable:

TEST_FILES="./apps/calendar/test/marionette/day_view_test.js ./apps/calendar/test/marionette/today_test.js" make test-integration

Invoking tests for a specific app

To run just the tests for a specific app, you can use the following form:

make test-integration APP=<APP>

For example, you could run all tests for the calendar app with

make test-integration APP=calendar

Running the tests in the background, quietly

You might want to run the tests in the background, meaning that you won't have to see the tests run, worry about them stealing your focus, or risk impacting the running tests. One solution is to use Xvfb:

xvfb-run make test-integration

You can also run Xephyr:

# In one terminal
Xephyr :1 -screen 500x700

# in another terminal
DISPLAY=:1 make test-integration

If you are using PulseAudio and want to keep the tests quiet, then you can specify PULSE_SERVER=":" to force an invalid server so no sound is output:

PULSE_SERVER=":" make test-integration

You can of course combine both:

PULSE_SERVER=":" xvfb-run make test-integration

Running tests without building a profile

If you would like to run tests without building a profile, use make test-integration-test:

PROFILE_FOLDER=profile-test make # generate the profile directory the first time
make test-integration-test

Running tests with a custom B2G Desktop build

By default, the test harness downloads a prebuilt version of B2G Desktop and runs the tests with that.

Sometimes, you want to run the tests with a custom build of B2G Desktop instead (for example, to test a local Gecko patch). To do this, first build B2G Desktop as described here, and then run:

RUNTIME=<path to objdir of b2g desktop build>/dist/bin/b2g make test-integration

On OS X, you need to run the B2G Desktop app from within its bundle, so run:

RUNTIME=<path to objdir of b2g desktop build>/dist/B2G.app/Contents/MacOS/b2g make test-integration

Running tests on device

You can run tests on a device by plugging in your phone and adding BUILDAPP=device to the make command:

BUILDAPP=device make test-integration

Skipping a test file

You can skip certain test files with the following command:

SKIP_TEST_FILES="/abs/path/to/skipped_test.js /abs/path/to/other/skipped_test.js" make test-integration

Debugging Tests

To view gecko log output from a test, use the following:

HOST_LOG=stdout make test-integration  # Pipes logs from gecko process to your node process
VERBOSE=1 make test-integration        # Proxies console.* api calls to your node process
NODE_DEBUG=* make test-integration     # Enables verbose debugging logs from internal node modules

Troubleshooting

This section goes over some problems you may encounter while trying to run the integration tests.

Running on Linux

The test harness seems to assume that the "node" command resolves to the Node.js executable. On some Linux distributions, this is not the case, even when Node.js is installed. For example, on Debian Jessie, the "node" command is provided by the "Amateur Packet Radio Node program" package, and Node.js is available under the "nodejs" command instead.

Bug 1136831 is on file for this problem. In the meantime, a workaround is to symlink "node" to point to "nodejs". On Debian you can install the package nodejs-legacy to fix this:

apt-get install nodejs-legacy
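
Alternatively, you can create the symlink yourself. A minimal sketch, assuming Node.js is installed at /usr/bin/nodejs:

sudo ln -s /usr/bin/nodejs /usr/local/bin/node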

If Marionette-JS Starts To Throw Errors

There are times when your test runs may leave B2G in a state that causes errors to be thrown the next time you try to run tests. In that case, you need to get B2G back to a clean state:

make really-clean          # This will remove all downloaded parts of B2G
make reset-gaia            # This will reset the device and reinstall Firefox OS

On an Aries device, you may also have to run the following to reset it:

adb root                   # Allows running root-level commands on your device

Troubleshooting Mac OS X El Capitan

If you already have Firefox Nightly installed on your system, the tests may not run: on El Capitan the tests run with Firefox Nightly, and pointing them at a custom version of the B2G simulator as described in the sections above will fail. Instead, run the tests pointing at your installation of Firefox Nightly on your Mac:

RUNTIME=/Applications/FirefoxNightly.app/Contents/MacOS/firefox make test-integration

Use cases and design patterns

This section explores some use cases and design patterns that can help make the JavaScript Marionette tests we write more maintainable.

What are common things for UI tests to do?

Marionette tests simulate users interacting with apps.

In the abstract, your marionette tests will test your app by interacting with it as a real user would. What do Firefox OS app users do? Here are some examples:

  • Launch an app
  • Click or swipe inside of an app
  • Read text in an area of an app
  • Wait for the UI to update with some new content or a new view
  • Type text into input fields
  • Switch between apps
  • Close apps

Amazingly, you can simulate all of these user actions using Marionette! You can read Marionette's detailed, code-level documentation to get to grips with the basic tasks. For more advanced interactions, marionette JS has a rich collection of plugins including ones for app management, form completion, injecting scripts, reading and writing to device storage, editing settings, and anything else your heart desires!
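
Here is a minimal sketch of a few of these actions, using the synchronous client and the apps plugin that gaia's shared setup registers; the app origin and selectors are hypothetical examples:

marionette('basic interactions', function() {
  var client = marionette.client();

  test('launch an app, tap, type, and read', function() {
    // Launch an app and switch the client into its frame.
    client.apps.launch('app://clock.gaiamobile.org');
    client.apps.switchToApp('app://clock.gaiamobile.org');

    // Tap a button, then type text into an input field.
    client.findElement('#new-alarm').tap();
    client.findElement('#alarm-name').sendKeys('Wake up');

    // Read text from an area of the app.
    var heading = client.findElement('#alarm-title').text();
  });
});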

Marionette is faster than my application!

An extremely common issue is that marionette performs UI actions much faster than the average user. Imagine, for instance, that we had an application with two simple views, each with a button to navigate to the other one. We might expect the following snippet of pseudocode to work:

Important: This is an example of what not to do. Most marionette tests that fail intermittently exhibit the following behavior.

view1.navigateToOther();
view2.navigateToOther();
assert.ok(view1.isDisplayed());

This simple test tries to navigate from one view to the other, navigate back, and then make sure that the original view is displayed. This test will fail intermittently since there's a race condition between the application code — which renders the two views — and the test code — which makes the implicit (and incorrect) assumption that the UI is responding synchronously. Instead, we need to program our test to behave like a user. Users don't try to click things or read things before they are actually visible in the UI. Users "poll" the UI by looking at it until the thing they're waiting for shows up. Here's a better version of our test:

// Poll until view1 is actually visible before interacting with it.
client.waitFor(view1.isDisplayed.bind(view1));
view1.navigateToOther();
// Wait for the navigation to complete before touching view2.
client.waitFor(view2.isDisplayed.bind(view2));
view2.navigateToOther();
client.waitFor(view1.isDisplayed.bind(view1));

Note: Marionette.Client#waitFor is a utility that blocks your test execution until the parameter function returns true.

How can tests switch back to the system app?

Sometimes, your tests will need to switch between interacting with another app and the system app. Examples include:

  • Selecting a webapp to fulfill a web activity
  • Dismissing an alert box
  • Interacting with the value selector

Both Client#switchToFrame and Apps#switchToApp help us jump between different iframes.

Note: Marionette.Client#switchToFrame will switch to the default (top-level) context when called with no arguments. This behavior is documented in the webdriver spec.
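
Here is a minimal sketch of the pattern, assuming the apps plugin is registered; the app origin and the system dialog selector are hypothetical examples:

// Work inside an app's frame, then jump to the system app and back.
client.apps.switchToApp('app://calendar.gaiamobile.org');
// ... do something that triggers a system alert ...

// With no arguments, switchToFrame returns to the top-level system app.
client.switchToFrame();
client.findElement('.modal-dialog-alert-ok').tap();

// Switch back into the app's frame to keep testing.
client.apps.switchToApp('app://calendar.gaiamobile.org');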

Regarding all of the many windows and contexts

As you probably already know, Gecko makes a distinction between content and chrome JavaScript contexts. If you would like to jump into the chrome context (to mock web APIs, access the dev tools, or do any number of other things you can't do in content windows), you need Marionette.Client#setContext. If you want to expose methods or properties from chrome to code running in content, you need to use __exposedProps__, as we do in this Alarms example. Another important thing to know: to persist data in JavaScript between calls to Marionette.Client#executeScript, you will need to put things on window.wrappedJSObject, like we do in objectcache.js inside marionette-apps.
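
Here is a minimal sketch of both patterns; the variable names are hypothetical examples:

// Jump into the chrome context to run privileged script, then back out.
client.setContext('chrome');
client.executeScript(function() {
  // Privileged code runs here (e.g. mocking a Web API).
});
client.setContext('content');

// Persist state between executeScript calls by stashing it on
// window.wrappedJSObject.
client.executeScript(function() {
  window.wrappedJSObject.__testCache = { launches: 1 };
});
var launches = client.executeScript(function() {
  return window.wrappedJSObject.__testCache.launches;
});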

How should I structure my UI test libraries?

Most of the test code you write won't be general enough to warrant abstracting into a general-purpose plugin. For app-specific code, we recommend having a separate file/class/module for each view. We demonstrate this pattern in gaia's calendar test libraries: the views folder has a unique class for each Calendar app view, and the general calendar object has methods to navigate between the different views. In each of the views, we "scope" methods to search for elements in the DOM contained within the view's root element.

Often the views will contain getter methods to read an element or data from the view and setter methods to write data to an <input> or <select> element. A good example is creating a calendar event, which involves calling setter methods on the edit event view and saving. It's also idiomatic to abstract multiple UI actions that form a single "logical" action into view methods. For instance, setting a collection of reminders for a calendar event involves tapping a select option for each reminder.
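
A minimal sketch of such a view class, in the spirit of the calendar libraries; the selectors and method names are hypothetical examples:

function EditEventView(client) {
  this.client = client;
}

EditEventView.prototype = {
  // "Scope" lookups to the view's root element.
  get rootElement() {
    return this.client.findElement('#edit-event-view');
  },

  // Setter method: write data to an <input> in the view.
  set title(value) {
    this.rootElement.findElement('input[name="title"]').sendKeys(value);
  },

  // Abstract several UI actions into one "logical" view method.
  save: function() {
    this.rootElement.findElement('button.save').tap();
  }
};

module.exports = EditEventView;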

How the test runner works

This section provides a detailed review of how the test runner works, for those that are interested.

Note: All of the various ways the Marionette JavaScript integration tests are run go through the same make test-integration path; only the parameters change.

What triggers the tests?

On Travis:

In automation:

  • The build automation runs the following:
    make test-integration NPM_REGISTRY=https://npm-mirror.pub.build.mozilla.org REPORTER=mocha-tbpl-reporter TEST_MANIFEST=./shared/test/integration/tbpl-manifest.json
  • The custom NPM_REGISTRY may or may not continue to exist; our node_modules are now sourced from our gaia-node-modules repo.
  • The custom reporter mocha-tbpl-reporter just prints out the TEST-START / TEST-PASS / TEST-UNEXPECTED-FAIL / TEST-END lines that the parsers consuming Buildbot output expect, instead of all of those cool checkmarks you get when using the (default) spec reporter we use locally and on Travis. Note that the time-stamps from the Buildbot output are much more realistic than what the spec reporter claims for test durations. Do not believe the spec reporter: it doesn't include your setup times!
  • The tbpl-manifest.json manifest lists tests specifically blacklisted in automation.

Locally:

  • Whatever you want, you got it.

Okay, what does make test-integration actually do?

  • The Makefile invokes bin/gaia-marionette, passing MARIONETTE_RUNNER_HOST, TEST_MANIFEST, and REPORTER as command-line arguments pulled out of the make environment.
  • bin/gaia-marionette is a wrapper around marionette-js-runner's bin/marionette-mocha and generally appears to be a place to add a few more defaults, duplicate a bunch of work already done in the Makefile, and generally be a place to cram stuff if you don't understand Makefiles / want to make things take longer to run by avoiding parallelization. The good news is that because of all this you can invoke it directly. The notable bits in here are:
    • It uses "find" to locate all potential tests.  If APP is in the environment, it only looks for tests under a given app.  It will find blacklisted test files, but these are filtered out by marionette-mocha.
    • gaia/shared/test/integration/setup.js is specified as the first script to run when we spin up the mocha runtime.  This currently is just a place to require and add plugins for use across all app tests.
    • gaia/shared/test/integration/profile.js is specified as the base configuration for all marionette integration tests' Gecko profiles.  Adding things to this file will cause all tests to have certain prefs set, settings set, and/or apps installed.  You almost certainly should not add things to this file.
    • gaia/shared/test/integration/profile_builder.js is specified as the profile builder. It uses the Gaia Makefile and mozilla-profile-builder, in conjunction with profile.js above and the per-test-file settings you specify in your marionette.client({ ... }) call, to configure your profile. We discuss the actual steps, and when they happen at runtime, below.
  • marionette-js-runner's bin/marionette-mocha applies its own defaults that don't matter to us, applies the manifest rules in lib/applymanifest.js to filter out blacklisted tests, and spins up a ParentRunner instance.
  • The ParentRunner instantiates a ChildRunner.  The ParentRunner is boring and its name is somewhat misleading.  Both the ParentRunner and ChildRunner live entirely in the same "runner" process.  There is currently only ever one ChildRunner.

Now things get somewhat complicated, so let's take a second to get an overview of all the processes that can be active and how they work:

  • The runner process. This is the bin/marionette-mocha process with the ParentRunner and ChildRunner.
    • It is a node.js environment.
    • None of your test code runs in this process!
    • It forks the mocha / bin/_mocha node.js script to be the "mocha test" (sub-)process.
      • This is done using the node.js child_process.fork mechanism.  This allows the "runner" process to send messages to the child and receive message events in return.
    • It gets bossed around by the "mocha test" process. Literally: the messages are received and converted into lookups on the ChildRunner object, apply() is then called, and a callback is generated that marshals the result back down to the client. This means that the runner process can be and is told to:
      • Start/stop/restart "hosts".  Hosts are your B2G Desktop/Firefox/B2G devices.
  • The mocha test process.  This is the mocha / bin/_mocha node.js script.
    • It is a node.js environment and this is the stock mocha test runner.
    • This is where your test code runs!
    • The mocha runner gets told about the following files, in the following order:
    • See below for details on the execution model and how a test run actually works.
  • The "host" process(es).  These are B2G Desktop/Firefox/(magic adb control wrapper) instances.
    • These are Gecko processes, potentially a hierarchy of processes now that OOP is in play.
    • They are communicated with via the Marionette protocol.
  • "server" processes: These are test/fake servers spun up by various tests.  These currently all seem to be spun up by the "mocha test" processes directly, but it's conceivable the "runner" process could also spin some up.  Some known examples:
    • A test web-server.  gaia/shared/test/integration/server.js forks off a subprocess that runs gaia/shared/test/integration/server_child.js and communicates via the same child_process message-passing mechanism that the "runner" and "mocha test" processes use.
      • There are duplicates of this implementation in browser and homescreen for some reason, likely historical.
    • The e-mail app has fake IMAP/POP3/SMTP servers that it shares with its back-end implementation.  These live in the mail-fakeservers repo.  The fake-servers actually run in an XPCShell Gecko-style instance that initially communicates via a low-level IPC interface using json-wire-protocol to bootstrap equivalence with the back-end tests' mechanism and then afterwards with a synchronous HTTP REST interface for all the e-mail domain work.  It's due for some cleanup.

What is in my global scope in my test file?

  • Marionette: this comes from lib/runtime.js.  It is a function that's a wrapper around mocha's TDD suite() method.  Source is in lib/runtime/marionette.js.  It has the following notable properties annotated onto it:
    • client(stuffToAddToTheProfile, driver): the driver defaults to the synchronous Marionette API. Bound to HostManager.createHost from lib/runtime/hostmanager.js.
    • plugin(name, module): This is a bound function that invokes HostManager.addPlugin from lib/runtime/hostmanager.js.  It adds the plugin for the duration of your test suite.
  • The TDD interface pokes setup/teardown/suiteSetup/suiteTeardown/suite/test into the global namespace. A skeleton test file using these globals is sketched below.
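
A minimal sketch of a test file built on these globals, assuming the default synchronous driver; the pref and selector are hypothetical examples, and the exact shape of the options object is defined by the profile builder:

marionette('day view', function() {
  // Per-suite profile tweaks get handed to the profile builder.
  var client = marionette.client({
    profile: {
      prefs: { 'devtools.debugger.forbid-certified-apps': false }
    }
  });

  setup(function() {
    // Runs before each test(); the client is already connected here.
  });

  test('the day view eventually appears', function() {
    client.waitFor(function() {
      return client.findElement('#day-view').displayed();
    });
  });
});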

What are marionette plugins?

  • Marionette plugins are node.js modules which extend the native abilities of the js marionette client.
  • They conform to the following simple plugin API:
MarionettePlugin = {
  /**
   * @param {Marionette.Client} client the marionette client to extend.
   * @param {Object} options plugin-specific configuration.
   * @return {MarionettePlugin} appropriately configured plugin.
   */
  setup: function(client, options) {}
};
  • They get exposed through the client (i.e. client.apps).
  • A number of plugins already exist, including the ones mentioned above for app management, form completion, injecting scripts, device storage, and settings. A registration sketch follows below.
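
A minimal sketch of registering and using a plugin in a test file, assuming the marionette-apps module is on the require path (as it is in gaia):

// Register the plugin; it is exposed on the client under this name.
marionette.plugin('apps', require('marionette-apps'));

marionette('plugin example', function() {
  var client = marionette.client();

  test('launch an app via the plugin', function() {
    client.apps.launch('app://clock.gaiamobile.org');
  });
});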

What is mocha's TDD execution model?

  • The mocha runner require()s all of the files it is told about in sequence in Mocha.prototype.loadFiles().  This consists of:
    • synchronously emitting a pre-require event.
      • The TDD interface pokes setup/teardown/suiteSetup/suiteTeardown/suite/test into the global namespace.
    • synchronously requiring the file in question.
      • This means the top level of your file is executed.
      • Any marionette() calls to define a suite (since Marionette is just a wrapper around suite()) will in turn have their defining function executed synchronously.
      • The test() calls inside the marionette() calls are NOT executed.  Those don't happen until the test is actually run.
      • This does mean you have to be very careful not to do anything foolish at the top level of your file or of your marionette() suite-defining functions.
    • synchronously emitting a 'require' event passing the module from the file in question.
    • synchronously emitting a 'post-require' event.
  • The Runner runs the suites and tests in sequence (the sketch after this list illustrates which hooks map to which events).
    • emits start.
    • runSuite loops over all the suites, for each one:
      • emits suite.
      • emits beforeAll, which is what suiteSetup maps to.
      • calls runTests, which loops over all the tests; for each one:
        • emits test.
        • emits beforeEach, which is what setup maps to.
        • calls runTest, which runs your test function.
        • emits test end.
          • This runs checkGlobals() to make sure you didn't clobber anything unexpected into the global state.  It fails your test if you did.
        • emits afterEach, which is what teardown maps to.
      • emits afterAll, which is what suiteTeardown maps to.
      • emits suite end.
    • emits end.
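
For example, here is a sketch of a TDD-style suite and where each hook fires in the sequence above (the names are hypothetical):

suite('outer suite', function() {
  suiteSetup(function() {
    // Fires once, on the beforeAll event.
  });

  setup(function() {
    // Fires before every test, on the beforeEach event.
  });

  test('first', function() { /* runTest runs this */ });
  test('second', function() { /* ...and then this */ });

  teardown(function() {
    // Fires after every test, on the afterEach event.
  });

  suiteTeardown(function() {
    // Fires once, on the afterAll event.
  });
});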

What is the life-cycle of the Gecko processes?

For your test suite (aka each top-level marionette('description', function() {...}) in your file), a new profile is created.

The host gets restarted after each test (aka each test('description', function() {...}) inside your marionette('description', ...)).
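
Sketched out, with hypothetical suite names:

marionette('suite A', function() {  // a fresh profile is built here
  var client = marionette.client();
  test('first', function() { /* ... */ });   // host restarts after this
  test('second', function() { /* ... */ });  // ...and again after this
});

marionette('suite B', function() {  // another new profile for this suite
  var client = marionette.client();
  test('third', function() { /* ... */ });
});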

The nitty gritty of this is that your call to marionette.client() invokes HostManager.createHost() in lib/runtime/hostmanager.js, which uses the mocha TDD infrastructure to decorate your suite with the following:

  • suiteSetup():
    • Host.create() in lib/runtime/host.js gets called, which causes createHost() in ChildRunner to get invoked.
      • This builds a profile from scratch if it doesn't exist, otherwise the existing profile is reused but settings/etc. are clobbered to be how we want them.
      • The "runner" process waits for the host to start-up.  It connects to the host with the async API and starts a session, then deletes it, and only generates the callback notification that will allow the "mocha test" process to know the host is ready.
  • setup():
    • Causes the driver to connect to the host at startup.
      • driver.resetWithDriver() gets called: this sounds scary, but it just resets internal state.  See marionette-js-client's lib/marionette/client.js.
      • All the plugins on record get registered with the client.
      • client.startSession() gets called, presumably doing something.
  • teardown() #1:
    • client.deleteSession() gets called, presumably doing something.
  • teardown() #2:
    • Host.restart() in lib/runtime/host.js gets called, which causes restartHost() in ChildRunner to get invoked.
      • This calls stopHost under the hood and then reuses much of the createHost() logic except the existing remotes handle is reused in the call to _buildRemote.
  • suiteTeardown():

Writing integration tests

marionette-js-runner has some simple overviews that are worth checking out.  You can find js marionette client documentation here!

File naming

Integration tests are located in the test/integration/ directory.
