
Interprocedural Analysis of Asynchronous Programs ∗

Ranjit Jhala UC San Diego

jhala@cs.ucsd.edu

Rupak Majumdar UC Los Angeles

rupak@cs.ucla.edu

Abstract

An asynchronous program is one that contains procedure calls which are not immediately executed from the callsite, but stored and “dispatched” in a non-deterministic order by an external scheduler at a later point. We formalize the problem of interprocedural dataflow analysis for asynchronous programs as AIFDS problems, a generalization of the IFDS problems for interprocedural dataflow analysis. We give an algorithm for computing the precise meet-over-valid-paths solution for any AIFDS instance, as well as a demand-driven algorithm for solving the corresponding demand AIFDS instances. Our algorithm can be easily implemented on top of any existing interprocedural dataflow analysis framework. We have implemented the algorithm on top of BLAST, thereby obtaining the first safety verification tool for unbounded asynchronous programs. Though the problem of solving AIFDS instances is EXPSPACE-hard, we find that in practice our technique can efficiently analyze programs by exploiting standard optimizations of interprocedural dataflow analyses.

Categories and Subject Descriptors: D.2.4 [Software Engineering]: Software/Program Verification.

General Terms: Languages, Verification, Reliability.

Keywords: asynchronous (event-driven) programming, dataflow analysis.

∗ This research was sponsored in part by the research grants NSF-CCF-0427202, NSF-CNS-0541606, and NSF-CCF-0546170.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. POPL’07 January 17–19, 2007, Nice, France. Copyright © 2007 ACM 1-59593-575-4/07/0001. . . $5.00.

1. Introduction

Asynchronous programming is a popular and efficient programming idiom for managing concurrent interactions with the environment. In addition to the usual, or synchronous, function calls where the caller waits at the callsite until the callee returns, asynchronous programs have asynchronous procedure calls which, instead of being executed from the callsite, are stored in a task queue for later execution. An application-level dispatcher chooses a call from the task queue, executes it to completion (which might lead to further additions to the task queue), and repeats on the remaining pending calls.
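The post-then-dispatch idiom described above can be made concrete with a minimal Python sketch; the names (`post`, `dispatch`, `task_queue`) are invented for illustration and do not correspond to any particular library:

```python
import random

# Minimal sketch of the asynchronous-call idiom: a posted call is stored
# in a task queue rather than executed; a dispatcher later picks pending
# calls in a nondeterministic order and runs each one to completion.

task_queue = []

def post(f, *args):
    """Asynchronous call: store the call instead of executing it."""
    task_queue.append((f, args))

def dispatch():
    """Application-level dispatcher: run pending calls to completion.
    A running call may itself post further calls onto the queue."""
    results = []
    while task_queue:
        i = random.randrange(len(task_queue))  # nondeterministic choice
        f, args = task_queue.pop(i)
        results.append(f(*args))               # run to completion
    return results
```

Note that each dispatched call runs atomically to completion here; the nondeterminism lies only in the order in which pending calls are chosen.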

Asynchronous calls permit the interleaving of several logical units of work, and can be used to hide the latency of I/O-intensive tasks by deferring their execution to a point where the system is not otherwise busy. They form the basis of event-driven programming, where the asynchronous calls correspond to callbacks that may be triggered in response to external events. Further, if mechanisms such as synchronization [24] or transactions [15, 29] are used to ensure that asynchronous calls execute atomically, then the scheduler can be multithreaded, running different asynchronous calls concurrently on different threads or processors [32]. There have been a variety of recent proposals for adding asynchronous calls to existing languages via libraries, such as LIBASYNC [20], LIBEVENT [21], and LIBEEL [6, 5]. These libraries have been used to build efficient and robust systems software such as network routers [19] and web servers [25]. Further, several recent languages such as NESC [12], a language for networked embedded systems, and MACE [23], a language to build distributed systems, provide explicit support for asynchronous calls.

The flexibility and efficiency of asynchronous programs come at a price. The loose coupling between asynchronously executed methods makes the control and data dependencies in the program difficult to follow, making it harder to write correct programs. As asynchronous programs are typically written to provide a reliable, high-performance infrastructure, there is a critical need for techniques to analyze such programs to find bugs early or to discover opportunities for optimization.

For programs that exclusively use synchronous function calls, interprocedural dataflow analysis [31, 28] provides a general framework for program analysis. In the setting of [28], the interprocedural dataflow problem is formulated as a context-free reachability problem on the program graph, i.e., a reachability problem where the admissible paths in the graph form a context-free language of nested calls and returns. Unfortunately, this approach does not immediately generalize to asynchronous programs, for example, by treating asynchronous calls as synchronous. In fact, such an analysis yields unsound results, because the facts that hold at the point where the asynchronous call is made may no longer hold at the point where the stored call is finally dispatched. Though the values passed as parameters in the asynchronous call remain unaltered until the dispatch, the operations executed between the asynchronous call and its dispatch may completely alter the values of the global variables. Further, the pairing of asynchronous calls with their actual dispatches makes the language of valid program executions non-context-free, and a simple reduction to context-free reachability seems unlikely.
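The unsoundness of treating asynchronous calls as synchronous can be seen in a small hypothetical example (all names invented): a fact about a global variable that holds at the post site is destroyed before the dispatch.

```python
# A fact that holds when an asynchronous call is posted need not hold
# when the stored call is finally dispatched: operations between the
# post and the dispatch may alter the global state.

pending = []   # task queue of pending asynchronous calls
g = 0          # global variable

def async_call(f, *args):
    pending.append((f, args))   # stored, not executed

def handler(x):
    # Treating the async call as synchronous, an analysis would conclude
    # g == 0 here (it held at the post site) -- but g == 5 at dispatch.
    return g + x

def main():
    global g
    async_call(handler, 1)  # the fact "g == 0" holds at this point
    g = 5                   # mutates the global before dispatch
    # dispatcher: run the pending calls
    return [f(*a) for (f, a) in pending]
```

Here `main()` yields `[6]`, not the `[1]` a synchronous treatment of the call would predict. Note that the parameter `x` is unaffected: only the global changes between post and dispatch, exactly as the text above states.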

This paper formalizes the problem of dataflow analysis for asynchronous programs as Asynchronous Interprocedural Finite Distributive Subset (AIFDS) problems, a generalization of the IFDS problems of Reps, Horwitz and Sagiv [28] to programs that additionally contain asynchronous procedure calls. The key challenge in devising algorithms to solve AIFDS problems precisely, that is, to compute the meet over all valid paths (MVP) solutions for such problems, lies in finding a way to handle the unbounded set of pending asynchronous calls, in addition to the unbounded call stack. We surmount this challenge through three observations.

1. Reduction We can reduce an AIFDS instance into a standard, synchronous dataflow analysis problem where the set of dataflow facts is the product of the original set with a set of counters which track, for each of finitely many kinds of pending calls, the exact number of instances of the call that are pending. Though the reduced instance has the same solution as the AIFDS instance, we cannot use standard dataflow analyses to compute the solution, as the lattice of dataflow facts is now unbounded: the counters can grow unboundedly to track the number of pending asynchronous calls.
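The product construction in the reduction can be sketched as follows; this is a simplified, assumed encoding for illustration (the names `post_effect` and `dispatch_effect` are invented), pairing an original dataflow fact with one exact counter per kind of asynchronous call:

```python
from collections import Counter

# Sketch of the reduced state space: a fact of the reduced synchronous
# instance is (original fact, counter per kind of pending call). The
# counters are exact, so the lattice of facts is unbounded.

def post_effect(state, call_kind):
    """Transfer function for an asynchronous call: increment the counter
    for that kind of call."""
    fact, counts = state
    new_counts = Counter(counts)
    new_counts[call_kind] += 1
    return (fact, new_counts)

def dispatch_effect(state, call_kind):
    """Transfer function for a dispatch: only enabled if an instance of
    that kind of call is actually pending; decrements the counter."""
    fact, counts = state
    if counts[call_kind] == 0:
        return None                # dispatch not enabled in this state
    new_counts = Counter(counts)
    new_counts[call_kind] -= 1
    return (fact, new_counts)
```

Since nothing bounds the counter values, a standard fixpoint over these states need not terminate, which is exactly the obstacle the next two observations address.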

2. Approximation Given any fixed parameter k ∈ N, we can compute approximations of the meet-over-valid-paths solutions in the following way. We compute an under-approximation of the infinite reduced instance using a counter that counts up to k, dropping any asynchronous call if there are already k pending instances for that call. We call this problem the k-reduced IFDS problem. We compute an over-approximation of the infinite reduced instance using a counter that counts up to k, and bumps up to infinity as soon as the value exceeds k. This has the effect of tracking up to k pending calls precisely, and then supposing that an unbounded number of calls are pending if an additional asynchronous call is performed. We call this problem the k∞-reduced IFDS problem. For each k, both the over- and the under-approximations are instances of standard interprocedural dataflow analysis, as the abstraction of the counters makes the set of dataflow facts finite. Thus, we can compute over- and under-approximations of the precise solution of the AIFDS instance by running standard interprocedural dataflow analysis algorithms [28].
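The two counter abstractions differ only in how a post behaves once the count reaches k; a minimal sketch (function names invented, operating on a single counter value) makes the contrast explicit:

```python
# Two abstractions of an exact pending-call counter, for a fixed k.
# k-reduced (under-approximation): extra posts beyond k are dropped.
# k-infinity-reduced (over-approximation): exceeding k bumps the
# counter to infinity, i.e., "unboundedly many calls are pending".

INF = float("inf")

def post_k(c, k):
    """Under-approximation: drop the post if k instances are pending."""
    return c if c >= k else c + 1

def post_kinf(c, k):
    """Over-approximation: posting beyond k saturates to infinity."""
    if c == INF:
        return INF
    return c + 1 if c < k else INF

def dispatch_count(c):
    """Dispatch decrements an exact count; infinity stays infinity."""
    return INF if c == INF else c - 1
```

Both abstractions keep the counter domain finite ({0, …, k} or {0, …, k, ∞}), which is what makes each reduced problem an ordinary IFDS instance.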

3. Convergence In a crucial step, we prove that for each AIFDS instance, there always exists a k for which the solutions of the over-approximate IFDS instance and the under-approximate IFDS instance coincide, thereby yielding the precise solution for the AIFDS instance. Thus, our simple algorithm for computing the meet-over-valid-paths solutions for AIFDS instances is to run an off-the-shelf interprocedural analysis on the k- and k∞-reduced IFDS instances for increasingly larger values of k, until the two solutions converge upon the precise AIFDS solution.
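The top-level loop implied by the three observations can be sketched as follows, where `solve_k` and `solve_kinf` stand in for an off-the-shelf interprocedural dataflow solver run on the k- and k∞-reduced instances (both names are placeholders, not part of any actual API):

```python
# Top-level algorithm sketch: increase k until the under-approximate
# (k-reduced) and over-approximate (k-infinity-reduced) IFDS solutions
# coincide; the common value is then the precise AIFDS solution.

def solve_aifds(solve_k, solve_kinf, max_k=64):
    k = 1
    while k <= max_k:
        under = solve_k(k)      # under-approximation of the MVP solution
        over = solve_kinf(k)    # over-approximation of the MVP solution
        if under == over:
            return under        # converged: precise by the squeeze
        k += 1
    raise RuntimeError("no convergence within the bound tried")
```

The convergence theorem guarantees that some finite k suffices for every instance, so in principle no `max_k` cutoff is needed; the bound here is only a practical safeguard for the sketch.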

The proof of the third observation, and therefore that our algorithm is complete, proceeds in two steps. First, we demonstrate the existence of a finite representation of the backward or inverse MVP solution of the infinite reduced instance. To do so, we design a backward version of the algorithm of Reps, Horwitz and Sagiv [28] and prove that it terminates with the finite upward-closed backwards solution by using properties of well quasi-orderings [1, 10]. Second, we prove that if the backward solution is the upward closure of some finite set, then there exists a k at which the solutions of the finite k- and k∞-reduced IFDS instances converge. Though the correctness proof uses some technical machinery, its details are entirely hidden from an implementer, who need only know how to instantiate a standard interprocedural dataflow analysis framework.

We have implemented this algorithm on top of the BLAST interprocedural reachability analysis, which is a lazy version of the summary-based interprocedural reachability analysis of [28]. The result is an automatic safety verifier for recursive programs with unboundedly many asynchronous procedure calls. Our reduc-