This article describes how to run performance tests on Gaia, and how to create new tests.
Running the tests
The tests are run on a regular basis on Datazilla; however, you can also run them yourself. To do so, you'll need an engineering build with Marionette enabled and remote debugging disabled. See Gaia Build System Primer, Customizing the preferences for more information on how to do this.
Test requirements
Since bug 915156 landed on December 6th 2013, make test-perf requires Node.js on the host to run the tests. The relevant modules should be installed automatically with npm.
Prior to running the tests, you need to configure a runner host. The runner host is a module that runs the tests either in B2G desktop or on a device (real or virtual, like an emulator). By default the tests run in B2G desktop, which is not very relevant for performance. To configure the runner, edit the file local.mk in the Gaia top-level directory (create it if it doesn't already exist) and add the following line:
MARIONETTE_RUNNER_HOST=marionette-device-host
This will use the device host runner. The default value is marionette-b2gdesktop-host.
The alternative is to set the variable directly on the command line:
MARIONETTE_RUNNER_HOST=marionette-device-host make test-perf
Note: only one device at a time is supported, either an emulator or a real device. Make sure you have an up-to-date Gaia version running on it.
Output
By default the tests output their data in JSON format. The data is written to stdout, where it might be mixed with error messages from other commands like npm; this is not ideal for automation. You can instead redirect the JSON output to a file by defining MOZPERFOUT for the host runner, either on the command line as an option or in the local.mk file, as shown above:
MOZPERFOUT=myfile.json
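For example, to use it on the command line for a single run:

MOZPERFOUT=myfile.json make test-perf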
There is a "spec" reporter that allow reporting the output in a more human readable format. To use it, set the environment as follow:
REPORTER=ConsoleMozPerf
This will make the tests output something easier to read, though not easier to parse, as it follows no real syntax.
For now, any other value will use the JSON reporter.
Note: MOZPERFOUT will be honoured whichever reporter you select.
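For example, a local.mk that both selects the device runner and redirects the JSON output to a file might contain:

MARIONETTE_RUNNER_HOST=marionette-device-host
MOZPERFOUT=myfile.json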
Running the tests for all apps
In general you can run these tests on 1.4 and up from Gaia master; the tests might no longer run on 1.3. There is an exception for 1.3t (Tarako): since bug 1006064 landed, if you want to run the tests against Tarako (1.3t), you should run them from the Gaia 1.3t branch.
From 2.0 onwards, you should run the tests from the same Gaia tree.
make test-perf
Running the tests for a specific app
APP=browser make test-perf
Running the tests for multiple apps
APPS="browser communications/contacts" make test-perf
Setting the number of runs
By default, each test is run five times. You can change that by setting the value of RUNS before running the tests. For example, to run each test three times you'd use this option:
RUNS=3 make test-perf
Known issues
When running tests on a Buri/Hamachi (Alcatel One Touch Fire), you get:
Not enough fields given the number of keys.
You can safely ignore this warning. It just means that b2g-info on the device is too old, as it comes from 1.2 and we only update Gecko and Gaia on these devices.
Writing new tests
With the details of running the test suite out of the way, let's now look at how you can write your own performance tests for Gaia.
Startup event tests
We have set up a standard for app startup events. If you want to test app startup, please follow the responsiveness guidelines; the startup_event_test.js test will drive it. Make sure to whitelist your app in /tests/performance/startup_events_test.js by adding it to the list specified by whitelistedApps. Also, as a transition measure, you should add it to whitelistedUnifiedApps, which lists the apps that use the new method. Once all the apps have been migrated, this list will disappear; if you can't find it, that means it is no longer needed.
Note: this is only implemented in 2.0 and later. If your code uses startup-path-done events, it is using the old style and should be updated.
If you want to measure intermediate launch stages that are not part of the responsiveness standard, you can dispatch them using the method described below. Dispatching the performance events is all you need to do; they will be collected automatically.
Other event-based tests
Now if you want to test specific features in your app, you can do so by sending events. The test comes in two parts: the instrumentation part, which lives in the app itself, and the control part, which uses Marionette to drive the app to perform actions.
Instrumentation
To record the events, all you have to do is dispatch them.
First, include our helper in your app:
<script defer src='/shared/js/performance_testing_helper.js'></script>
Note: If you use a module loader like RequireJS or Alameda, you might prefer to load the helper that way, which is perfectly fine.
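For instance, with a RequireJS-style loader, loading the shared helper might look something like this; the module path is an assumption, so adjust it to however shared files are mapped in your app:

// Hypothetical module path; performance_testing_helper.js just defines
// the PerformanceTestingHelper global, it does not return a module value.
require(['shared/js/performance_testing_helper'], function() {
  // PerformanceTestingHelper is now available as a global.
});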
You need to be careful to adjust the unit tests so that PerformanceTestingHelper is either loaded or shimmed. A simple shim is to put this in the unit test source file:
var PerformanceTestingHelper = { dispatch: function() { } };
The Travis CI jobs we run from GitHub will fail if you don't do that properly.
Having done that, you can use the helper to dispatch events where appropriate. First you should dispatch a start event. This is important because the default 'start' event is sent when we register the listeners, which is likely much earlier than your feature; so choose where your feature starts and add the proper event dispatch there:
PerformanceTestingHelper.dispatch('my-feature-start');
When you're ready to stop collecting data and report the numbers, you need to send the my-feature-done event, also called the last event, to tell the helper to finish:
PerformanceTestingHelper.dispatch('my-feature-done');
You might also want to send intermediate events as appropriate.
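For example, an intermediate event (the name is purely illustrative) is dispatched the same way:

PerformanceTestingHelper.dispatch('my-feature-data-loaded');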
Note: here we use "my-feature-" as a prefix for the performance events. This is just an example; please use an obvious name and use it consistently.
Controlling the app
The second part is writing a JavaScript test for the test framework to run. The filename must end with _test.js and it must live in apps/<myapp>/test/performance/.
It is a lot like a Marionette integration test (based on mocha), but with a few twists: in the setup() function you must inject the helper atom that is used to collect the performance events.
PerformanceHelper.injectHelperAtom(client);
You must pass a lastEvent parameter to the PerformanceHelper constructor. This is the last event the helper will wait for when testing your feature.
When calling performanceHelper.reportRunDurations() toward the end, you must pass the name of the start event you dispatched; otherwise the measurement will be taken from the very start, i.e. from when we inject the helper atom. An easy way to spot this mistake is if you see the start event in the results; in that case you'll see the startup events as well, as these will have been dispatched too.
Note: you may want to look at existing tests to get a better idea.
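As a rough sketch, a feature test might be structured as follows. Only injectHelperAtom(), the lastEvent parameter and reportRunDurations() are taken from the description above; the other names (requireGaia, App, repeatWithDelay, waitForPerfEvent, finish) are modeled on existing Gaia tests and may differ in your tree, so treat this as an outline rather than something to copy verbatim:

var PerformanceHelper = requireGaia('/tests/performance/performance_helper.js');
var App = requireGaia('/tests/performance/app.js');

marionette('my feature', function() {
  var client = marionette.client();
  var app = new App(client, 'browser');

  setup(function() {
    // Inject the helper atom that collects the performance events.
    PerformanceHelper.injectHelperAtom(client);
  });

  test('my-feature duration', function() {
    var performanceHelper = new PerformanceHelper({
      app: app,
      lastEvent: 'my-feature-done' // the last event dispatched by the app
    });

    performanceHelper.repeatWithDelay(function(app, next) {
      app.launch();
      // ... drive the UI with Marionette so the feature runs ...
      performanceHelper.waitForPerfEvent(function(runResults, error) {
        if (error) {
          app.close();
          throw error;
        }
        // Pass the start event name so the measurement starts at
        // 'my-feature-start' rather than at helper atom injection.
        performanceHelper.reportRunDurations(runResults, 'my-feature-start');
        app.close();
        next();
      });
    });

    performanceHelper.finish();
  });
});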
Collecting memory statistics
You can collect the memory usage of both the b2g process and the current app. Just do:
var memUsage = performanceHelper.getMemoryUsage(app);
app is the application object. memUsage will contain several objects enumerating the memory statistics.
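For instance, you could dump the returned statistics from your test for inspection; the exact fields depend on the b2g-info version on the device, so treat this as illustrative:

var memUsage = performanceHelper.getMemoryUsage(app);
// Log whatever statistics b2g-info reported on this run.
console.log(JSON.stringify(memUsage, null, 2));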
Running tests from a non-engineering device
If you don't have an engineering build on your phone you'll have to do some additional steps:
- Clone B2G, and build with ./config.sh DEVICE-NAME (e.g. ./config.sh keon)
- Build the Gecko part via ./build.sh gecko
- Connect the phone and flash Gecko via ./flash.sh gecko
- Clone Gaia, and create a file build/custom-prefs.js with the content user_pref("marionette.defaultPrefs.enabled", true);
- Enable Remote Debugging on the phone and run make reset-gaia to reset the phone (or make install-gaia if you trust yourself)
- Disable Remote Debugging and verify that everything is OK by running adb devices; the device should show up
- Now running a perf test should work. Verify via RUNS=1 APP=browser make test-perf
Filing bugs
Please file bugs in Bugzilla, product "Firefox OS", component "Gaia::PerformanceTest".