The reftest harness compares the display of two Web pages. If the bitmaps resulting from displaying the two files in an 800x1000 window are identical, the test passes. If they differ, the test fails.  Or, alternatively, the conditions can be reversed (a != test rather than an == test). The power of the tool comes from the fact that there is more than one way to achieve any given visual effect in a browser. So, if the effect of complex markup is being tested, put that complex markup into a page and create another page that uses simple markup to achieve the same visual effect. Reftest will then compare them and verify whether they produce the same bitmap.
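
For example (an illustrative sketch, not a test in the tree), a page that underlines text with a style attribute should render identically to a reference page that uses the <u> element, so a manifest line of "== test.html reference.html" should pass:

test.html:
<html><head><title>underline via CSS</title></head>
<body><span style="text-decoration: underline">Hello!</span></body></html>

reference.html:
<html><head><title>underline via markup</title></head>
<body><u>Hello!</u></body></html>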

This idea can seem odd when first encountered. Automated testing usually compares output against an invariant, a "gold standard" that is determined to be correct. If one has software that multiplies numbers, one wants a regression test to show that 2 * 2 continues to be calculated as 4, not something similar to but not quite exactly 4. But an operating system does change with time; it is not invariant. And a browser may change the visual effect produced by a tag while still complying with the relevant standards. For example, the HTML 4.01 specification at the W3C specifies that text inside a <blockquote> will be indented, but it does not specify the number of pixels of indentation. If a browser changes the depth of the indentation and the visual construct is tested against an invariant, the test would appear to fail. But the test should not fail unless the <blockquote> element did not cause any indentation at all. A regression test harness that produces false failures is not trustworthy, and an untrustworthy harness will not be used.

Running reftest-based unit tests

To run all the reftests, go to the directory where you save Firefox's source code and run:

./mach reftest

If you want to run a particular set of reftests, pass the path as an argument:

./mach reftest path/from/sourcedir/reftest.list
Note: mach is a Python 2 script. If Python 3 is your default, you must edit the first line of mach.
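
For example, to run only the reftest sanity checks that already exist in the tree:

./mach reftest layout/reftests/reftest-sanity/reftest.list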

Running IPC reftests

Reftests can also be run in a separate process, which can differ from same-process rendering in significant ways. Currently, IPC reftests are only being run on Linux. To run:

MOZ_LAYERS_FORCE_SHMEM_SURFACES=1 make -C $(OBJDIR) reftest-ipc 
Note: Automation currently only runs layout/reftests/reftest-sanity/reftest.list! If you try to run the full suite, you may experience stalls or other issues.

Creating reftest-based unit tests 

Your first reftest

It is a silly example, but it will step you through creating your first reftest.

Step 1
For now you must check out and build the browser in order to run the tests. See the Build Documentation for details on doing that. Sorry about this, but the released builds and the nightly builds are built with the "--disable-tests" option and reftest will not work - see bug 369809.
Step 2
Open a terminal window. Create a directory (inside Firefox's source code tree) and make that your current directory (i.e. move to that directory).
Step 3
Create a file named foo.html with the following:
<html><head><title>reftest0001</title></head>
<body><strong>Hello!</strong></body>
</html>
Step 4
Create a file named bar.html with the following:
<html><head><title>reftest0001</title></head>
<body><b>Hello!</b></body>
</html>
Step 5
Create a file named reftest.list with the following:
== foo.html bar.html
You are now ready to run the test (but first you must go back to the root of Firefox's source code tree):
$ ./mach reftest path/to/reftest.list 2>&1 | grep REFTEST
REFTEST PASS: file:///Users/ray/mozilla-central/path/to/foo.html
$

Congratulations! You have just created your first reftest!

The redirect and the grep reduce the amount of excess output from the browser. If you built a debug version of the browser, there can be a lot of extra console output. The reftest.list file can be named whatever you want, not necessarily reftest.list (but it has to end with .list).

More to do

Create more reftests. New tests can be added to the reftest.list file, which can contain any number of tests. The file can include other things, but it does not get very complicated. Keep in mind that new tests should fit in a 600x600 window so we can test on mobile platforms more easily. Here is an example.

include ../other.list
== foo.html bar.html
!= aaa.html bbb.html

The first line, as one might expect, includes another manifest. The second line should look familiar. It says that foo.html and bar.html should produce visually identical output. The third line says that aaa.html and bbb.html should NOT produce visually identical output. More information on the manifest format can be found in the reftest README.txt in the source tree (layout/tools/reftest/README.txt), which was written by the creator of the reftest tool.
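
Manifest lines can also carry annotations beyond plain == and != comparisons; the README documents the full set. As a hedged sketch (the file names are hypothetical), a few commonly used forms look like this:

fails == known-bug.html known-bug-ref.html
random == intermittent.html intermittent-ref.html
skip == crashes-the-browser.html crashes-ref.html

Here "fails" marks a comparison that is currently expected to fail, "random" marks one whose result is unpredictable, and "skip" prevents the pair from being loaded at all.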

There is one thing about automated tests that may not be obvious. There is really no way to construct a test that is too small. If you want to check something, and it seems trivial, that is ok. The cost of adding a new test to an automated suite is very, very low. For tests that are run manually, this is not true. The cost of thinking about and managing the execution of a manual test is fairly high. This is why manual tests tend to get longer, include more steps and ultimately become a long list that actually tests a lot of things.

So, create small tests. For example, it occurs to me that we assume that spaces between an element name and an attribute name have no effect, but do we know this is true? Who checks this? It is completely trivial, but so what. I can create 50 or 100 test files that have spaces between the element name and an attribute for a bunch of different elements, add those to the list of tests to be run, and it causes no problems for anyone. Maybe it will actually take 500 test files to check this behavior. It really does not matter.
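
For instance (the file names are hypothetical), one such pair might look like this, with a manifest line of "== attr-space1.html attr-space2.html":

attr-space1.html:
<html><head><title>attr-space1</title></head>
<body><p style="color: green">Hello!</p></body></html>

attr-space2.html:
<html><head><title>attr-space2</title></head>
<body><p     style="color: green">Hello!</p></body></html>

If the extra spaces broke attribute parsing, the text in the second page would not be green and the comparison would fail.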

So, my point is, if you have an idea then create a test. Really. It would be better to have more tests than we need than too few.

Your second and third reftest

For these tests create the following files:

spaces1.html:

<html><head><title>spaces1</title></head>
<body>
X X
</body></html>

spaces2.html:

<html><head><title>spaces2</title></head>
<body>
X&nbsp;X
</body></html>

spaces3.html:

<html><head><title>spaces3</title></head>
<body>
X&nbsp;&nbsp;X
</body></html>

spaces4.html:

<html><head><title>spaces4</title></head>
<body>
X  X
</body></html>

reftest.list:

== spaces1.html spaces2.html
!= spaces3.html spaces4.html

The first two files, spaces1.html and spaces2.html, confirm only that a space (the character 0x20 in ASCII) creates the same visual construct as the HTML entity for a non-breaking space. The second pair of files, spaces3.html and spaces4.html, confirms that two regular spaces do NOT produce the same visual construct as two non-breaking spaces.

When we run them, we see:

$ ./mach reftest path/to/reftest.list 2>&1 | grep REFTEST
REFTEST PASS: file:///Users/ray/mo/spaces1.html
REFTEST PASS: (!=) file:///Users/ray/mo/spaces3.html
$ 

Fabulous!

Other comparisons

Note that it should also be possible to create a reftest that tests markup against an actual graphic image of the visual construct that should result. This is probably going to be a more fragile test, for the reasons described above. But it may be necessary.

For example, say that certain markup should produce a certain Sanskrit glyph. How does one test that? One should be able to "snap a picture" of the glyph as it should be displayed and then the reference page would include that graphic in an <img> element.
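
For example (an illustrative sketch with a hypothetical image file), the reference page could be nothing more than:

<html><head><title>glyph reference</title></head>
<body><img src="expected-glyph.png"></body></html>

where expected-glyph.png is a captured image of the correct rendering.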

More investigation into whether this will work is definitely warranted. Experiments with this have shown it to be not as easy as one would hope. We'll see.

There is a list of reftest opportunities: files in the source that have been checked in to be tested. Presumably, one was supposed to open the pages in a browser, look at them, and see if they look right. It is hard to tell how many of these will be usable as reftests. If a file is associated with a bug, the bug should be examined. I have seen a case where the HTML file in the bug had a problem, but the checked-in version was "cleaned up" and not valid for testing.

Mozilla has used HTML generation tools in the past. The htmlgen tool is an example of this. Tools like this may be more useful now that we have reftest to exercise the files. It would also be useful to generate files that combine HTML and CSS in interesting or unusual ways. 

Reftests and elevated privileges

Reftests that you intend to check in must not rely on behaviour that requires elevated privileges. The build process runs reftests in a profile that does not automatically grant elevated privileges: requesting them will cause the standard security alert to be displayed, which will likely cause the machine running the test to hang or time out.

Any tests that require such privileges to work correctly should be rewritten as Mochitests. The helper functions snapshotWindow and compareSnapshots are available in testing/mochitest/tests/SimpleTest/WindowSnapshot.js.
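
As a rough sketch of how those helpers are typically used (the file layout and signatures shown here are assumptions and should be checked against WindowSnapshot.js; the test content is hypothetical), a plain mochitest might look like this:

<!DOCTYPE HTML>
<html>
<head>
  <title>Snapshot comparison sketch</title>
  <script src="/tests/SimpleTest/SimpleTest.js"></script>
  <script src="/tests/SimpleTest/WindowSnapshot.js"></script>
  <link rel="stylesheet" type="text/css" href="/tests/SimpleTest/test.css"/>
</head>
<body>
<div id="target">Hello!</div>
<script>
SimpleTest.waitForExplicitFinish();

function runTest() {
  // Snapshot the window, change the page, then snapshot it again.
  var before = snapshotWindow(window);
  document.getElementById("target").style.color = "green";
  var after = snapshotWindow(window);

  // Assumed usage: the third argument says whether the two snapshots are
  // expected to be equal, and the first element of the returned array says
  // whether that expectation was met.
  var results = compareSnapshots(before, after, false);
  ok(results[0], "changing the text color should change the rendering");

  SimpleTest.finish();
}

addLoadEvent(runTest);
</script>
</body>
</html>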

Testing invalidation

Testing that a document displays correctly once it has loaded is only one part of testing rendering. Another part is testing invalidation: testing that when a document is changed after it has finished loading and displaying, the browser correctly "invalidates" the parts of the screen that should change, so that the screen displays the correct output the next time it is repainted. Invalidation tests check both that the internal state of the document has been updated correctly and that the browser then correctly invalidates and repaints the appropriate parts of the screen.

In order to test invalidation, it is important that invalidation tests let the document completely finish loading and displaying before making the changes for which invalidation and repainting are to be tested. Making the changes before the document has completely finished loading and painting would mean that the test may not actually test the browser's invalidation logic, since the changed parts of the document may end up displaying correctly purely due to a pending post-load paint.

Writing an invalidation reftest requires three extra steps. First, you need to add class="reftest-wait" to the root element of the test to tell the reftest framework not to check the rendering and move on to the next test as soon as the test finishes loading. Next, you need to add a listener for the 'MozReftestInvalidate' event, and only make the changes you want to test invalidation for after that event has fired. Third, you need to remove 'reftest-wait' from the root element's 'class' attribute to tell the reftest framework that the test is now ready to have its rendering checked.

The reason for using the 'MozReftestInvalidate' event is that a document's initial painting is not typically finished when the 'load' event fires. It would be possible to try to wait for the initial rendering to finish using a setTimeout, but that would be unreliable and, just as bad, it can increase the time it takes to run a test many times over (which, when you are running thousands of tests, can really slow things down). The 'MozReftestInvalidate' event is designed to fire as soon after the initial rendering of the document is finished as possible, but never before. The reftest framework fires one MozReftestInvalidate event at the document root element of a reftest-wait test when it is safe to make changes that should test invalidation. The event bubbles up to the document and window, so you can set listeners there too.
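
As an illustrative sketch (the file name and content are hypothetical), a test page following these steps might look like the one below; its reference would simply be a page whose text is already green.

invalidation-test.html:

<html class="reftest-wait">
<head><title>invalidation test</title></head>
<body>
<div id="target" style="color: red;">Hello!</div>
<script>
document.addEventListener("MozReftestInvalidate", function () {
  // Make the change whose invalidation is being tested.
  document.getElementById("target").style.color = "green";
  // Tell the reftest framework that the rendering is ready to be checked.
  document.documentElement.removeAttribute("class");
}, false);
</script>
</body>
</html>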
