This article provides information about running Gaia's performance tests, along with how to create new tests.
Note: The use of test-perf and Datazilla is deprecated. For the latest resources on performance testing for Gaia, see Raptor.
Running the tests
The tests are run on a regular basis, with the results reported to Datazilla; however, you can also run them yourself. To do this you need an engineering build with Marionette enabled and remote debugging disabled. For more information on how to do this, see Customizing the preferences in the Gaia build system primer.
Test requirements
Since bug 915156 landed on 2013/12/06, make test-perf requires Node.js on the host to run the tests. The relevant modules will be installed automatically with npm.
Before running the tests you need to adjust the runner host. The runner host is the module that runs the tests, either in B2G Desktop or on a device (real or virtual, like the emulator). By default the tests run inside B2G Desktop, which is not very relevant for performance. To adjust the runner host, simply edit the local.mk file in the top-level Gaia directory (create it if it doesn't exist) and add the following line:
MARIONETTE_RUNNER_HOST=marionette-device-host
This makes the tests use the device runner host. The default value is marionette-b2gdesktop-host.
Alternatively, you can set it on the command line, like this:
MARIONETTE_RUNNER_HOST=marionette-device-host make test-perf
Note: If more than one device is connected, you must set the ANDROID_SERIAL environment variable. To find out which value to use, look at the output of adb devices. Also make sure the device is running an up-to-date version of Gaia.
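For example, if adb devices lists a device with the serial number 1234567890abcdef (a made-up value for this sketch), you could run:
ANDROID_SERIAL=1234567890abcdef make test-perf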
Output
By default the tests output the data in JSON format. It is written to stdout, where it might get mixed up with error messages from other commands like npm; this is not very good for automation. You can therefore redirect the JSON output to a file: just define MOZPERFOUT for the host runner, either on the command line or in the local.mk file as shown above:
MOZPERFOUT=myfile.json
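For example, to write the results of a run to a file directly from the command line (myfile.json is just an example name):
MOZPERFOUT=myfile.json make test-perf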
There is a "spec" reporter that allow reporting the output in a more human readable format. To use it, set the environment as follow:
REPORTER=ConsoleMozPerf
This will make the tests output something easier to read, though not easier to parse, as there is no real syntax. For now, any other value will select the JSON reporter.
Note: MOZPERFOUT
will be honoured whichever reporter you select.
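For example, the following invocation (perf.log being an arbitrary file name) uses the human-readable reporter while still writing to a file:
MOZPERFOUT=perf.log REPORTER=ConsoleMozPerf make test-perf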
Running all the tests
In general you can run these tests on 1.4 and upwards from Gaia master; 1.3 might no longer be able to handle the test runs. There is an exception for 1.3t (Tarako): since bug 1006064 landed, if you want to run the tests against Tarako (1.3t), you should run them from the Gaia 1.3t tree. From 2.0 onwards, you should run the tests from the same Gaia tree.
To run all the tests, use the following command:
make test-perf
Note: Since early August 2014 (currently only on master) the b2g process is restarted after each test, not after each test run, to improve the reliability of the tests (see bug 1027232). If you want to disable this, set the RESTART_B2G environment variable to "0" when running the tests.
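For example, to run all the tests without restarting b2g between them:
RESTART_B2G=0 make test-perf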
Running tests for a specific app
This is done by naming the app you want to run the tests for in the APP variable, for example:
APP=browser make test-perf
Running tests for a set of apps
You can also specify a set of apps in the APPS variable to run the tests against just those apps:
APPS="browser communications/contacts" make test-perf
Setting the number of runs
By default, each test is run five times. You can change that by setting the value of RUNS
before running the tests. For example, to run each test three times you'd use this option:
RUNS=3 make test-perf
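These variables can be combined. For example, the following sketch runs the browser and contacts tests three times each on a device (assuming the device runner host described earlier):
RUNS=3 APPS="browser communications/contacts" MARIONETTE_RUNNER_HOST=marionette-device-host make test-perf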
Known issues
When running tests on a Buri/Hamachi (Alcatel One Touch Fire) device, you get:
Not enough fields given the number of keys.
You can safely ignore this warning. It just means that b2g-info on the device is too old, as it comes from 1.2 and we only change Gecko and Gaia on these devices.
Writing new tests
With the details of running the test suite out of the way, let's now look at how you can write your own performance tests for Gaia.
Testing startup events
We have set up a standard for app startup events. If you want to test app startup, please follow the responsiveness guidelines; the startup_event_test.js test will drive it. Make sure to whitelist your app in /tests/performance/config.json by adding it to the list specified by mozLaunch.
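As a rough sketch, the whitelist might look like the following; only the mozLaunch list itself is described above, so check the actual config.json for the exact structure:
{
  "mozLaunch": [
    "browser",
    "communications/contacts"
  ]
}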
Note: This is only implemented in v2.0 and later. If your code uses startup-path-done
events then it is using the deprecated style and should be updated.
If you want to measure intermediate launch stages that are not part of the responsiveness standard, you can dispatch these using the method described below. Dispatching performance events is all you need; they will be collected automatically.
Tests based on other events
Now if you want to test specific features in your app you can do so by sending events. The test will be in two parts: the instrumentation part, which lives in the app itself, and the control part, which uses Marionette to drive the app to perform actions.
Instrumentation
To record the events, all you have to do is dispatch them.
First, include our helper in your app:
<script src="/shared/js/performance_testing_helper.js"></script>
Note: The use of a module loader like RequireJS or Alameda is perfectly acceptable, provided the helper is loaded before any performance events are triggered.
You need to be cautious and make sure you adjust the unit tests so that the PerformanceTestingHelper is either loaded or shimmed. A simple shim is to put this in the unit test source file:
var PerformanceTestingHelper = { dispatch: function() { } };
The Travis CI jobs we run out of GitHub will error if you don't do that properly.
Having done that, you can use the helper to dispatch events when it seems appropriate to do so. First you should dispatch a start event. This matters because the default 'start' event is sent when we register the listeners, which for your feature is likely far too early. So choose where the feature starts and add the proper event dispatch:
PerformanceTestingHelper.dispatch('my-feature-start');
When you're ready to stop collecting data and to report the numbers, you need to send the my-feature-done
event, also called the last event, to tell the helper to finish:
PerformanceTestingHelper.dispatch('my-feature-done');
You might also want to send intermediate events as appropriate.
Note: Here we use "my-feature-" as a prefix for the performance event. This is just an example. Please use an obvious name and try to use it consistently.
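For instance, an intermediate event could mark a milestone within the feature (the event name below is hypothetical):
PerformanceTestingHelper.dispatch('my-feature-data-loaded');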
Controlling the app
The second part is writing JavaScript against the test framework to perform the test. The file name must end with _test.js and it must live in apps/<myapp>/test/performance/.
It is a lot like a Marionette integration test (based on mocha), but with a few twists: in the setup() function you must inject the helper atom that is used to collect the performance events.
PerformanceHelper.injectHelperAtom(client);
You must pass a lastEvent parameter to the PerformanceHelper constructor. This is the last event the helper will wait for when testing your feature.
When calling performanceHelper.reportRunDurations() toward the end, you must pass the name of the start event you dispatched; otherwise the measurement will run from the very start, i.e. from when we inject the helper atom. An easy way to spot this mistake is if you see the start event in the results; in that case you will see the startup events as well, since these will have been dispatched too.
Note: You should study existing tests to become more familiar with the process.
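To give an idea of the overall shape, here is a minimal sketch of such a control-side test. The require path, the way the app object is obtained, and the event names are assumptions for illustration; copy the real details from an existing test:

'use strict';

// Assumed import path; check an existing test under
// apps/<myapp>/test/performance/ for the real one.
var PerformanceHelper =
  requireGaia('/tests/performance/performance_helper.js');

marionette('my-feature performance', function() {
  var client = marionette.client();
  var app; // your application object, obtained as in existing tests
  var performanceHelper;

  setup(function() {
    // Inject the helper atom that collects the performance events.
    PerformanceHelper.injectHelperAtom(client);
    performanceHelper = new PerformanceHelper({
      app: app,
      lastEvent: 'my-feature-done' // the last event to wait for
    });
  });

  test('my-feature duration', function() {
    // Drive the app with Marionette here so that 'my-feature-start'
    // and 'my-feature-done' are dispatched by the instrumentation.

    // Pass the start event so measurement begins there, not at
    // helper atom injection time.
    performanceHelper.reportRunDurations('my-feature-start');
  });
});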
Collecting memory statistics
You can collect the memory usage of both the b2g process and the current app. Just do the following:
var memUsage = performanceHelper.getMemoryUsage(app);
app is the application object. memUsage will contain several objects enumerating the memory statistics.
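While developing a test, a simple way to see which statistics are available is to serialize the whole structure, since the exact fields may vary:
var memUsage = performanceHelper.getMemoryUsage(app);
// Dump everything the helper collected, for inspection.
console.log(JSON.stringify(memUsage, null, 2));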
Running tests on non-engineering devices
If you don't have an engineering build on your phone you'll have to do some additional steps:
- Clone B2G, and build with ./config.sh DEVICE-NAME (e.g. ./config.sh keon)
- Build the Gecko part via ./build.sh gecko
- Connect the phone and flash Gecko via ./flash.sh gecko
- Clone Gaia, and create a file build/custom-prefs.js with the content user_pref("marionette.defaultPrefs.enabled", true);
- Enable Remote Debugging on the phone and run make reset-gaia to reset the phone (or make install-gaia if you trust yourself)
- Disable Remote Debugging and verify that everything is OK by running adb devices. The device should show up.
- Now running a perf test should work. Verify via RUNS=1 APP=browser make test-perf
Filing bugs
Please file bugs in Bugzilla, product "Firefox OS", component "Gaia::PerformanceTest".