

This article provides information about running performance tests on Gaia, as well as how to create new tests.

NOTE: test-perf and Datazilla have been deprecated. For up-to-date resources on performance testing in Gaia, please refer to Raptor.

Running the tests

The tests are run on a regular basis on Datazilla; however, you can also run them yourself. To do so, you'll need an engineering build with Marionette enabled and remote debugging disabled. See Gaia Build System Primer, Customizing the preferences for more information on how to do this.

Test Requirements

Since bug 915156 landed on December 6th 2013, make test-perf requires Node.js on the host to run the tests. The relevant modules should be installed automatically with npm.

Prior to running the tests, you need to configure a runner host. The runner host is a module that runs the tests either in B2G Desktop or on a device (real or virtual, such as an emulator). By default the tests run in B2G Desktop, which is not very representative for performance measurements. To configure the runner, edit the file local.mk in the Gaia top-level directory (create it if it doesn't already exist) and add the following line:

MARIONETTE_RUNNER_HOST=marionette-device-host

This will use the device host runner. The default value is marionette-b2gdesktop-host.

Alternatively, you can set the variable on the command line for a single run:

MARIONETTE_RUNNER_HOST=marionette-device-host make test-perf 

Note: If you have more than one device connected, you need to set the ANDROID_SERIAL environment variable; run adb devices to find the value to use. Make sure the device is running an up-to-date Gaia version.
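For example (the serial number here is a placeholder; use one listed by adb devices):

ANDROID_SERIAL=0123456789ABCDEF make test-perf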

Output

By default the tests output their data in JSON format to stdout, where it can get mixed up with error messages from other commands such as npm. This is not ideal for automation, so you can redirect the JSON output to a file instead. Just define MOZPERFOUT for the host runner, either on the command line or in the local.mk file, as shown above:

MOZPERFOUT=myfile.json
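This also works directly on the command line:

MOZPERFOUT=myfile.json make test-perf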

There is a "spec" reporter that reports the output in a more human-readable format. To use it, set the environment variable as follows:

REPORTER=ConsoleMozPerf
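For example, to run the whole suite with the human-readable reporter:

REPORTER=ConsoleMozPerf make test-perf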

This makes the test output easier for humans to read, but not easier to parse: there is no formal syntax.

For now, any other value falls back to the JSON reporter.

Note: MOZPERFOUT will be honoured whichever reporter you select.

Running all tests

In general you can run these tests against 1.4 and upwards from Gaia master; 1.3 might no longer be able to handle the test runs. There is an exception for 1.3t (Tarako): since bug 1006064 landed, if you want to run the tests against Tarako (1.3t), you should run them from the Gaia 1.3t tree. From 2.0 onwards, you should run the tests from the same Gaia tree.

To run all the tests, use the following command:

make test-perf

Note: Since early August 2014 (currently only on master) the b2g process is restarted after each test, not after each test run, to improve the reliability of the tests (see bug 1027232). If you want to disable this, set the RESTART_B2G environment variable to "0" when running the tests.
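For example, to disable the restart behavior:

RESTART_B2G=0 make test-perf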

Running tests for a specific app

This is done by naming the app you want to run the tests for in the APP variable, for example:

APP=browser make test-perf

Running tests for a set of apps

You can also run the tests against a specific set of apps by listing them in the APPS variable:

APPS="browser communications/contacts" make test-perf

Setting the number of runs

By default, each test is run five times. You can change that by setting the value of RUNS before running the tests. For example, to run each test three times you'd use this option:

RUNS=3 make test-perf

Known issues

When running the tests on a Buri/Hamachi (Alcatel One Touch Fire), you get:

Not enough fields given the number of keys.

You can safely ignore this warning. It just means that b2g-info on the device is too old: it comes from 1.2, and only Gecko and Gaia are updated on these devices.

Writing new tests

With the details of running the test suite out of the way, let's now look at how you can write your own performance tests for Gaia.

Startup event tests

We have set up a standard for app startup events. If you want to test app startup, please follow the responsiveness guidelines; the startup_event_test.js test will drive it. Make sure to whitelist your app in /tests/performance/config.json by adding it to the list specified by mozLaunch, as shown below.
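A hypothetical excerpt of /tests/performance/config.json to illustrate the idea (the exact structure may differ; check the actual file in your Gaia tree):

{
  "mozLaunch": ["browser", "communications/contacts", "myapp"]
}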

Note: This is only implemented in v2.0 and later. If your code uses startup-path-done events then it is using the deprecated style and should be updated.

If you want to measure intermediate launch stages that are not part of the responsiveness standard, you can dispatch these using the method described below. Dispatching the performance events is all you need to do; they will be collected automatically.

Other event based tests

If you want to test specific features in your app, you can do so by sending events. The test consists of two parts: the instrumentation part, which lives in the app itself, and the control part, which uses Marionette to drive the app to perform actions.

Instrumentation

To record the events, all you have to do is dispatch them.

First, include our helper in your app:

<script src="/shared/js/performance_testing_helper.js"></script>

Note: Using a module loader like RequireJS or Alameda is perfectly acceptable, provided the helper is loaded before any performance events are triggered.

You need to be careful to adjust the unit tests so that the PerformanceTestingHelper is either loaded or shimmed. A simple shim is to put this in the unit test source file:

var PerformanceTestingHelper = {
  dispatch: function() { }
};

The Travis CI jobs we run out of GitHub will error if you don't do this properly.

Having done that, you can use the helper to dispatch events when it seems appropriate to do so. First you should dispatch a start event. This is important because the default 'start' event is sent when the listeners are registered, which for your feature is likely much too early. So choose the point where the feature starts and add the proper event dispatch there:

PerformanceTestingHelper.dispatch('my-feature-start');

When you're ready to stop collecting data and report the numbers, you need to send the my-feature-done event, also called the last event, to tell the helper to finish:

PerformanceTestingHelper.dispatch('my-feature-done');

You can also send intermediate events as appropriate.
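For example, a hypothetical intermediate event marking the point where the feature's data has loaded:

PerformanceTestingHelper.dispatch('my-feature-data-loaded');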

Note: Here we use "my-feature-" as a prefix for the performance event. This is just an example. Please use an obvious name and try to use it consistently.

Controlling the app

The second part is writing JavaScript for the test framework to perform the test. The file name must end with _test.js and live in apps/<myapp>/test/performance/.

It is a lot like a Marionette integration test (based on mocha), but with a few twists: in the setup() function you must inject the helper atom that is used to collect the performance events.

PerformanceHelper.injectHelperAtom(client);

You must pass a lastEvent parameter to the PerformanceHelper constructor. This is the last event the helper will wait for when testing your feature.

When calling performanceHelper.reportRunDurations() towards the end, you must pass the name of the start event you dispatched; otherwise the measurement starts from the very beginning, i.e. when the helper atom is injected. An easy way to spot this error is seeing the start event in the results; in that case you'll see the startup events as well, since these are dispatched too.
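Putting the pieces together, here is a minimal sketch of what such a test file could look like. Only injectHelperAtom(), the lastEvent option, and reportRunDurations() are taken from this article; the requireGaia() paths, the App wrapper, and the repeatWithDelay()/finish() calls are assumptions modeled on the existing tests, so check those for the exact API:

'use strict';

var PerformanceHelper =
  requireGaia('/tests/performance/performance_helper.js'); // assumed path
var App = requireGaia('/tests/performance/app.js');        // assumed wrapper

marionette('my-feature performance', function() {
  var client = marionette.client();
  var app;

  setup(function() {
    app = new App(client, 'myapp'); // hypothetical app name
    // Inject the helper atom that collects the performance events.
    PerformanceHelper.injectHelperAtom(client);
  });

  test('measure my-feature', function() {
    var performanceHelper = new PerformanceHelper({
      app: app,
      lastEvent: 'my-feature-done' // the last event to wait for
    });

    // Drive the app so that the instrumented feature runs (assumed API).
    performanceHelper.repeatWithDelay(function(app, next) {
      app.launch();
    });

    // Pass the start event so that the measurement starts at
    // 'my-feature-start' rather than at helper atom injection time.
    performanceHelper.reportRunDurations('my-feature-start');
    performanceHelper.finish();
  });
});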

Note: You should study the existing tests to become more familiar with the process.

Collecting memory statistics

You can collect the memory usage of both the b2g process and the current app. Just call:

var memUsage = performanceHelper.getMemoryUsage(app);

app is the application object. memUsage will contain several objects enumerating the memory statistics.
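For example, a trivial way to inspect what was collected (purely illustrative; the exact fields depend on what b2g-info reports):

var memUsage = performanceHelper.getMemoryUsage(app);
// Dump the raw statistics so you can pick out the fields you need.
console.log(JSON.stringify(memUsage, null, 2));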

Running tests from a non-engineering device

If you don't have an engineering build on your phone, you'll have to perform some additional steps:

  1. Clone B2G, and build with ./config.sh DEVICE-NAME (e.g. ./config.sh keon)
  2. Build the Gecko part via ./build.sh gecko
  3. Connect the phone and flash gecko via ./flash.sh gecko
  4. Clone Gaia, and create a file build/custom-prefs.js with content user_pref("marionette.defaultPrefs.enabled", true);
  5. Enable Remote Debugging on the phone and run make reset-gaia to reset the phone (or make install-gaia if you trust yourself)
  6. Disable Remote Debugging and verify that everything is OK by running adb devices. The device should show up.
  7. Now running a perf test should work. Verify via RUNS=1 APP=browser make test-perf
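Put together, the whole sequence looks roughly like this (a sketch assuming a Keon device and sibling B2G and Gaia checkouts; adjust the device name and paths to your setup):

cd B2G
./config.sh keon          # replace keon with your device name
./build.sh gecko
./flash.sh gecko          # with the phone connected

cd ../gaia
echo 'user_pref("marionette.defaultPrefs.enabled", true);' > build/custom-prefs.js
make reset-gaia           # Remote Debugging enabled; this wipes the phone

# Then disable Remote Debugging and verify:
adb devices
RUNS=1 APP=browser make test-perf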

Filing bugs

Please file bugs in Bugzilla, product "Firefox OS", component "Gaia::PerformanceTest".

