

There are two large SpiderMonkey test suites: js/src/tests and js/src/jit-test. See Running Automated JavaScript Tests for details.

Most new tests could go in either suite. The main differences are:

  1. jstests run in both the shell and the browser, whereas jit-tests run only in the shell.
  2. jstests automatically load js/src/tests/shell.js before they run, which defines many helper functions (such as reportCompare, described below).

To add a new jit-test, make a new file in js/src/jit-test/tests/basic or one of the other subdirectories of jit-test/tests.
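
For example, a minimal jit-test is just an ordinary JavaScript file with no required boilerplate; an uncaught exception (such as one thrown by a failing assertEq, described below) marks the test as failed. A sketch, with a hypothetical file name and an illustrative feature:

// js/src/jit-test/tests/basic/testSquare.js (hypothetical name)
function square(x) {
    return x * x;
}
// Loop enough times that the JITs get a chance to compile the function.
for (var i = 0; i < 1000; i++)
    assertEq(square(i), i * i);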

To add a new jstest, put the new code in one of these three directories:

  • js/src/tests/ecma_5 - New tests for behavior required by Edition 5 of the ECMAScript standard belong in the appropriate subdirectory here.
  • js/src/tests/js1_8_5/extensions - New tests that cover SpiderMonkey-specific extensions can go here.
  • js/src/tests/js1_8_5/regress - All other new regression tests can go here.

Other js/src/tests subdirectories exist, but most of them contain older tests.

Creating the test case file

Have a look at the existing files and follow what they do.

jstests have one special requirement:

  • The call to reportCompare in every jstest is required by the test harness. Except in old tests or super strange new tests, it should be the last line of the test.

All tests can use the assertEq function.

assertEq(v1, v2[, message])

Check that v1 and v2 are the same value. If they're not, throw an exception (which will cause the test to fail).
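
For example (the array behavior tested here is only illustrative):

var a = [1, 2, 3];
assertEq(a.length, 3, 'length should reflect the number of elements');
assertEq(a.indexOf(2), 1, 'indexOf should return the index of the element');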

Handling shell- or browser-specific features

jstests run both in the browser and in the JavaScript shell.

If your test needs to use browser-specific features, either:

  • make the test silently pass if those features aren't present; or
  • write a mochitest instead (preferred); or
  • at the top of the test, add the comment // skip-if(xulRuntime.shell), so that it only runs in the browser.

If your test needs to use shell-specific features, like gc(), either:

  • make the test silently pass if those features aren't present; or
  • make it a jit-test (so that it never runs in the browser); or
  • at the top of the test, add the comment // skip-if(!xulRuntime.shell), so that it only runs in the shell (see the sketch just after this list).
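
A minimal sketch of that last option, using the comment form described above (the call to gc() is just an illustrative shell-only feature):

// skip-if(!xulRuntime.shell)
// The harness reads the comment above and skips this test in the browser,
// so shell-only functions such as gc() can be called unconditionally.
gc();
reportCompare(0, 0, 'ok');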

It is easy to make a test silently pass; anyone who has written JS code for the Web has written this kind of if-statement:

if (typeof gc === 'function') {
    var arr = [];
    arr[10000] = 'item';
    gc();
    assertEq(arr[10000], 'item', 'gc must not wipe out sparse array elements');
} else {
    print('Test skipped: no gc function');
}
reportCompare(0, 0, 'ok');

Choosing the comparison function

Every jstest loads all the code in js/src/tests/shell.js, which includes a few extra functions.

reportCompare

reportCompare(expected, actual, description) is somewhat like assertEq(actual, expected, description) except that the first two arguments are swapped, failures are reported via stdout rather than by throwing exceptions, and the matching is fuzzy in an unspecified way. For example, reportCompare sometimes considers numbers to be the same if they are "close enough" to each other, even if the == operator would return false.

expected = 3;
actual   = 1 + 2;
reportCompare(expected, actual, '3==1+2');

reportMatch

reportMatch(expectedRegExp, actual, description) tests whether an actual value is matched by an expected regular expression. It is useful when the actual value may vary within a set pattern, when a test must run against both the C implementation of the JavaScript engine (SpiderMonkey) and the Java implementation (Rhino), which differ in their error messages, or when an error message has changed between branches. For example, a test which recurses to death reports Internal Error: too much recursion on the 1.8 branch, but InternalError: script stack space quota is exhausted on the 1.9 branch. To handle this you might write:

actual   = 'No Error';
expected = /InternalError: (script stack space quota is exhausted|too much recursion)/;
try {
  f = function() { f(); };
  f(); // without this call no exception is thrown and the test would fail
}
catch(ex) {
  actual = ex + '';
  print('Caught exception ' + ex);
}
reportMatch(expected, actual, 'recursion to death');

compareSource

compareSource(expected, actual, description) is used to test if the decompilation of a JavaScript object (conversion to source code) matches an expected value. Note that tests which use compareSource should be located in the decompilation sub-suite of a suite. For example, to test the decompilation of a simple function you could write:

var f  = (function () { return 1; });
expect = 'function () { return 1; }';
actual = f + '';
compareSource(expect, actual, 'decompile simple function');

Handling abnormal test terminations

Some tests can terminate abnormally even though the test has technically passed. Earlier we discussed the deprecated approach of using the -n naming scheme to identify tests whose PASSED/FAILED status is flipped by the post-test processing code in jsDriver.pl and post-process-logs.pl. A different approach is to use the expectExitCode(exitcode) function, which outputs a string:

--- NOTE: IN THIS TESTCASE, WE EXPECT EXIT CODE <exitcode> ---

that tells the post-processing scripts jsDriver.pl or post-process-logs.pl that the test passes if the shell or browser terminates with that exit code. Multiple calls to expectExitCode will tell the post-processing scripts that the test actually passed if any of the exit codes are found when the test terminates.
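
For example, a sketch of a shell-oriented test that passes by terminating with the shell's uncaught-exception exit code (3, as discussed below):

// Tell the post-processing scripts that exit code 3 means this test passed.
expectExitCode(3);
// Deliberately throw without catching; the shell terminates with exit code 3.
throw new Error('intentional uncaught error');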

This approach has limited use, however. In the JavaScript shell, an uncaught exception or out of memory error will terminate the shell with an exit code of 3. However, an uncaught error or exception will not cause the browser to terminate with a non-zero exit code. To make the situation even more complex, newer C++ compilers will abort the browser with a typical exit code of 5 by throwing a C++ exception when an out of memory error occurs. Simply testing the exit code does not allow you to distinguish the variety of causes a particular abnormal exit may have.

In addition, some tests pass if they do not crash; however, they may not terminate unless killed by the test driver.

A modification will soon be made to the JavaScript tests to allow an arbitrary string to be output which will be used to post process the test logs to better determine if a test has passed regardless of its exit code.

Performance testing

Do not attempt to test the performance of engine features in the test suite. 

Please keep in mind that the JavaScript test suite is run on a wide variety of wildly varying hardware platforms, from phones all the way up to servers. Even tests that check for polynomial time complexity will start to fail in a few years, when hardware has sped up enough that the test runs faster than the granularity of the OS scheduler, or when run on platforms with higher latencies than your development workstation. These tests will also show up as infrequent oranges on our heavily loaded test machines, lowering the value of our test suite for everyone. Just don't do it; it's never worth it.

Do not add performance tests to the test suite.

It is not generally even possible to tell if the speed of any particular feature is going to be important in the real world without running a real-world benchmark. It is very hard to write a good real-world benchmark. For this reason, the best place to find out if a change is performance sensitive is on arewefastyet.com.

Focus on writing fast, light tests that cover a single feature. There is basically no cost to adding a new test, so add as many feature tests as needed to cover each feature orthogonally. Remember that whenever a test fails, someone -- probably you -- is going to have to figure out what went wrong.

Testing your test

Run your new test locally before checking it in (or posting it for review). Nobody likes patches that include failing tests!

It's also a good sanity check to run each new test against an unpatched shell or browser. If the test is working properly, it should fail there.

Checking in completed tests

Tests are usually reviewed and pushed just like any other code change. Just include the test in your patch.

Security-sensitive tests should not be committed until the corresponding bug has been made public. Instead, ask a SpiderMonkey peer how to proceed.

It is OK under certain circumstances to push new tests to certain repositories without a code review. Don't do this unless you know what you're doing. Ask a SpiderMonkey peer for details.
