ties in C/C++ programs. It uses a global context-sensitive,
control-flow-insensitive analysis in the first phase and an
inter-procedural, context-sensitive dataflow analysis in the
second phase. ESP is between SLAM/BLAST and MOPS
in scalability and precision.
A key contribution to the usability of MOPS is its ability to report only one error trace for each error cause in the program, which significantly reduces the number of error traces that the programmer has to review. Among the tools discussed above, only SLAM has documented a similar ability [2]. Compared to MOPS’s approach, SLAM’s is less precise because it may overlook error causes or report spurious ones (see Section 5.1 for a detailed discussion).
Despite important technical differences, we believe these diverse approaches to software model checking share significant common ground, and we expect that much of our implementation experience would transfer to other tools (such as Metal, ESP, SLAM, or BLAST). This makes our results all the more meaningful.
7. Conclusion
We have shown how to use software model checking to analyze the security of large legacy applications. We reported on our implementation experience, which shows that model checking is both feasible and useful for real programs. After performing extensive case studies and analyzing over one million lines of source code, we discovered over a dozen security bugs in mature, widely deployed software. These bugs had escaped notice until now, despite many earlier manual security audits, because the security properties involved were subtle and non-trivial to check by hand. This demonstrates that MOPS can detect non-obvious security violations efficiently and automatically. We conclude that model checking is an effective means of improving the quality of security-critical software at scale.
Acknowledgments
We are grateful to David Schultz, who built a first prototype of the automated build process and implemented a first version of the stderr property, and to Geoff Morrison, who helped improve the automated build process and implemented the MOPS user interface for reviewing error traces in HTML format. Ben Schwarz, Jacob West, Jeremy Lin, and Geoff Morrison helped verify many results in this paper after its initial submission. We also thank Peter Neumann, Jim Larus, and the anonymous reviewers for their comments on an earlier draft of this paper.
References
[1] K. Ashcraft and D. Engler. Using programmer-written compiler extensions to catch security holes. In Proceedings of the 2002 IEEE Symposium on Security and Privacy, 2002.
[2] T. Ball, M. Naik, and S. Rajamani. From symptom to
cause: Localizing errors in counterexample traces. In
POPL ’03: Proceedings of the ACM SIGPLAN-SIGACT
Conference on Principles of Programming Languages,
2003.
[3] T. Ball and S. K. Rajamani. Automatically validating temporal safety properties of interfaces. In SPIN 2001, Workshop on Model Checking of Software, 2001.
[4] T. Ball and S. K. Rajamani. The SLAM project: Debugging system software via static analysis. In POPL ’02: Proceedings of the ACM SIGPLAN-SIGACT Conference on Principles of Programming Languages, 2002.
[5] M. Bishop and M. Dilger. Checking for race conditions in
file access. Computing Systems, 9(2):131–152, 1996.
[6] H. Chen and D. Wagner. MOPS: an infrastructure for examining security properties of software. In Proceedings of the 9th ACM Conference on Computer and Communications Security (CCS), Washington, DC, 2002.
[7] H. Chen, D. Wagner, and D. Dean. Setuid demystified. In Proceedings of the 11th USENIX Security Symposium, San Francisco, CA, 2002.
[8] M. Das, S. Lerner, and M. Seigle. ESP: Path-sensitive program verification in polynomial time. In PLDI ’02: Proceedings of the ACM SIGPLAN 2002 Conference on Programming Language Design and Implementation, Berlin, Germany, June 2002.
[9] D. Engler, B. Chelf, A. Chou, and S. Hallem. Checking
system rules using system-specific, programmer-written
compiler extensions. In OSDI, 2000.
[10] D. Engler and M. Musuvathi. Static analysis versus software model checking for bug finding. In Proceedings of the 5th International Conference on Verification, Model Checking and Abstract Interpretation (VMCAI ’04), 2004.
[11] J. Foster, M. Fähndrich, and A. Aiken. A theory of type qualifiers. In ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI ’99), May 1999.
[12] T. A. Henzinger, R. Jhala, R. Majumdar, and G. Sutre.
Software verification with BLAST. In Proceedings of the
10th SPIN Workshop on Model Checking Software, 2003.
[13] D. Jackson and J. Wing. Lightweight formal methods.
IEEE Computer, pages 21–22, April 1996.
[14] L. Koved, M. Pistoia, and A. Kershenbaum. Access rights
analysis for Java. In Proceedings of the 17th Annual ACM
Conference on Object-Oriented Programming, Systems,
Languages, and Applications, 2002.
[15] RATS. http://www.securesoftware.com/rats.php.
[16] H. Rogers. Theory of Recursive Functions and Effective
Computability. MIT Press, 1987.
[17] CERT Coordination Center. CERT advisory CA-1999-15: Buffer overflows in SSH daemon and RSAREF2 library. http://www.cert.org/advisories/CA-1999-15.htm.