Coverity is a static analysis tool that can detect many kinds of defects.
Getting Access
Coverity results for Firefox are available at https://scan.coverity.com/projects/firefox/ or https://scan.coverity.com/projects/firefox-mobile/. These pages show high-level statistics about the number of issues in each project. To view individual issues, you will need to request permission.
Viewing Issues
There are numerous ways to view the issues found by Coverity. Click on the "hamburger" menu at the top left to see them.
The "COMPONENTS - All In Project" link is a good place to start. It shows issues grouped by component, where components are defined by directories. This lets you focus first on parts of the code that you know well.
Note that loading of issues can be slow.
Once you select a component, the window will show three sections:
- Top left: a list of issues.
- Bottom left: a source code window containing an explanation of the selected issue.
- Right: additional information about the selected issue.
Navigation through the list of issues is straightforward.
Dealing With Issues
Each issue is initially given the "Unclassified" classification and an "Undecided" action. Issues fall into four common categories.
- False positive. These most often occur when Coverity incorrectly thinks that a code path is possible when it actually isn't due to some condition or constraint.
- Intentional. Coverity's analysis is correct, but what it detects is not a problem. For example, it often complains about the use of the rand() function because it is not a source of high-quality random numbers, but many parts of the code do not need high-quality random numbers. Also, it often complains about uninitialized scalar values in constructors, but there are many cases where this is reasonable, especially for classes that have an Init() function that is always called on newly-constructed objects (see the sketch after this list).
- Bug. Coverity's analysis is correct, and what it detects is a genuine defect.
- Unknown. This is when you can't decide which of the above three categories applies. Issues can be subtle, and not knowing is common, especially when looking at issues in parts of the code you don't know well. If you are uncertain, it is better to leave an issue alone than to wrongly mark it as ignorable.
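Here is a minimal sketch of the Init() pattern just described; the class and member names are hypothetical, not taken from the Firefox tree:

  #include <cstdint>

  // Coverity flags mValue as uninitialized in the constructor, but the
  // class invariant is that Init() is always called before any other
  // method, so the report is "Intentional" rather than a genuine bug.
  class ChunkCache {
   public:
    ChunkCache() {}  // mValue is deliberately left uninitialized here.

    void Init(uint32_t aValue) { mValue = aValue; }

    uint32_t Value() const { return mValue; }  // Only valid after Init().

   private:
    uint32_t mValue;
  };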
If you decide an issue is not worth addressing, take the following steps.
- Change the "Classification" to "False Positive" or "Intentional".
- Change the "Action" to "Ignore".
- Put your email address in the "Owner" field.
- Write a brief explanation in the comment box.
If you decide an issue is worth addressing, take the following steps.
- File a bug report in Bugzilla.
- Mark it as blocking bug 1230156, which is the Coverity meta-bug. (The bug nickname is "coverity-analysis".)
- Put the CID number(s) in the whiteboard field.
- Set the "coverity" keyword.
- Attach a fix if you are able to write one. Otherwise, make a needinfo request of an appropriate person.
- Change the "Classification" to "Bug". (Usually, though "False Positive" or "Intentional" is possible if you decide the code should be rewritten for clarity even if it is not buggy per se.)
- Change the "Action" to "Fix Required" or "Fix Submitted", depending on whether you have written a fix.
- Put a link to the bug in the "" field.
- Put your email address in the "Owner" field.
The following screenshot shows the Triage pane for an issue that has been filed in Bugzilla and fixed.
We generally do not use the "Severity" field.
Odds and Ends
How are new analysis runs triggered? How often does that happen?
Because the code base is huge, Coverity limits the number of analysis runs. Analyses are triggered every Monday for Firefox Desktop and every 4 days for Fennec (Java code only).
This is managed by release management using a Jenkins instance.
How to use modeling to decrease the false-positive rate
The best way to decrease the false-positive rate is to provide a model, which Coverity uses to determine the intended behavior of an implementation. The model specifies the behavior of interfaces that are linked in from code but are not compiled or analyzed.
For example, if we want to sanitize the tainted data computed by the following code:
  uint32_t
  HeaderParser::ChunkHeader::ChunkSize() const
  {
    return ((mRaw[7] << 24) | (mRaw[6] << 16) | (mRaw[5] << 8) | (mRaw[4]));
  }
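Here mRaw is a raw byte buffer, presumably filled from untrusted file input, which is why Coverity treats the chunk size computed from it as tainted data.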
The model would look like this:
  uint32_t
  HeaderParser::ChunkHeader::ChunkSize() const
  {
    __coverity_tainted_data_sanitize__(&mRaw[4]);
    __coverity_tainted_data_sanitize__(&mRaw[5]);
    __coverity_tainted_data_sanitize__(&mRaw[6]);
    __coverity_tainted_data_sanitize__(&mRaw[7]);
    return ((mRaw[7] << 24) | (mRaw[6] << 16) | (mRaw[5] << 8) | (mRaw[4]));
  }
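During analysis, Coverity uses this model in place of the compiled implementation: the __coverity_tainted_data_sanitize__() calls mark each byte that contributes to the result as sanitized, so values derived from the returned chunk size are no longer flagged as tainted.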
A simpler and more common scenario is when we want to halt execution when a condition is false, just like an assertion:
  void
  AssertExample(const bool expr)
  {
    if (!expr) {
      __coverity_panic__();
    }
  }
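As a sketch of why such a model helps, consider a hypothetical caller (UseBuffer() below is illustrative, not from the Firefox tree) that dereferences a pointer after checking it with the assertion helper:

  #include <cstdint>

  // Assumes the real code base contains an AssertExample() that aborts
  // when the condition is false, matching the model above.
  void AssertExample(bool expr);

  void UseBuffer(const uint8_t* aBuf)
  {
    AssertExample(aBuf != nullptr);
    // Without the model, Coverity may report a possible null dereference
    // here, because it cannot see that AssertExample() halts execution
    // when aBuf is null. With the __coverity_panic__() model, the null
    // path is treated as unreachable.
    uint8_t first = aBuf[0];
    (void)first;  // Placeholder use of the loaded value.
  }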
To access the modeling file, open the "Analysis Settings" tab from the Project Overview and scroll down to the "Modeling File" category at the bottom of the page. There you can view the current model or upload a new one. Uploading a new modeling file retriggers the scan.