There has been considerable methodological research on response-adaptive designs for clinical trials, but they have seldom been used in practice. The many reasons for this are summarized in an article by Rosenberger and Lachin; the two main reasons generally cited are logistical difficulties and the potential for bias due to selection effects, "drift" in patient characteristics or risk factors over time, and other sources. Jennison and Turnbull consider a group sequential, response-adaptive design for continuous outcome variables that partially addresses these concerns while still allowing for early stopping. The key advantage of a group sequential approach in which randomization probabilities are held constant within sequential groups is that a stratified analysis eliminates bias due to drift.

In this article we consider binary outcomes and an algorithm that alters the allocation ratio according to the strength of the accumulated evidence. Patients are enrolled in groups of size nAk, nBk, k = 1, 2, ..., K, where nAk and nBk are the sample sizes in treatment arms A and B in sequential group k. Patients are initially allocated in a 1:1 ratio. After the kth interim analysis, if the z-value comparing outcomes in the two treatment arms is less than 1.0 in absolute value, the ratio remains 1:1; if the z-value exceeds 1.0 (but not 1.5), the next sequential group is allocated in the ratio R1 favoring the currently better-performing treatment; if it exceeds 1.5 (but not 2.0), the allocation ratio is R2; and if it exceeds 2.0, the allocation ratio is R3. If the O'Brien-Fleming monitoring boundary is crossed, the trial is terminated. Group sample sizes are adjusted upward to maintain equal increments of information when allocation ratios exceed one. The z-statistic is derived from a weighted log-odds ratio stratified by sequential group.
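The threshold rule and the sample-size inflation step can be sketched as follows. The ratio values R1, R2, R3 and the boundary value are illustrative placeholders (the article leaves them as design parameters), and the inflation factor is the standard (1 + R)^2 / (4R) variance-inflation approximation for unequal allocation, assumed here rather than taken from the article.

```python
import math

def next_allocation(z, r1=1.5, r2=2.0, r3=3.0, obf_bound=2.5):
    """Return the allocation ratio (better arm : worse arm) for the next
    sequential group, or None if the monitoring boundary is crossed.
    r1, r2, r3 and obf_bound are hypothetical placeholder values."""
    az = abs(z)
    if az >= obf_bound:   # O'Brien-Fleming boundary exceeded: stop the trial
        return None
    if az > 2.0:
        return r3
    if az > 1.5:
        return r2
    if az > 1.0:
        return r1
    return 1.0            # |z| <= 1.0: keep 1:1 randomization

def inflated_group_size(n_equal, ratio):
    """Group size needed to keep the information increment equal to that of
    a 1:1 group of size n_equal, using the usual inflation factor
    (1 + R)^2 / (4R) for R:1 allocation."""
    return math.ceil(n_equal * (1 + ratio) ** 2 / (4 * ratio))
```

With these placeholder values, a z of 1.2 after an interim analysis would shift the next group to 1.5:1 allocation and inflate its size by about 4 percent to preserve equal information increments.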
Simulation studies and theoretical calculations were performed under a variety of scenarios and allocation rules. Results indicate that the method maintains the nominal type I error rate even when there is substantial drift in the patient population. When a true treatment difference exists, a modest reduction in the number of patients assigned to the inferior treatment arm can be achieved at the expense of a small increase in total sample size relative to a nonadaptive design. Limitations, such as the impact of delays in observing outcomes, are discussed, as well as areas for further research. We conclude that response-adaptive designs may be useful for some purposes, particularly in the presence of large treatment effects, although allowing early stopping diminishes the benefits. If such a design is undertaken, the randomization and analysis should be stratified to avoid bias due to time trends.
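As a concrete illustration of the stratified analysis recommended above, here is a minimal sketch of an inverse-variance weighted log-odds-ratio z-statistic computed across sequential groups. It is a generic stratified estimator, assumed for illustration rather than taken verbatim from the article; each stratum is a 2x2 table of successes and failures on the two arms.

```python
import math

def stratified_log_odds_z(strata):
    """Inverse-variance weighted log-odds-ratio z-statistic, stratified by
    sequential group so that time trends cancel within strata.  Each stratum
    is a tuple (a, b, c, d): successes/failures on arm A, then on arm B.
    A 0.5 continuity correction guards against zero cells.  This is a
    generic sketch, not necessarily the article's exact estimator."""
    num = den = 0.0
    for a, b, c, d in strata:
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
        lor = math.log((a * d) / (b * c))   # per-stratum log odds ratio
        var = 1/a + 1/b + 1/c + 1/d         # its approximate variance
        num += lor / var                    # inverse-variance weighting
        den += 1 / var
    return num / math.sqrt(den)
```

Because each stratum contributes only a within-group comparison, a shift in baseline risk between sequential groups changes both arms of that group's table together and does not bias the combined statistic, which is the sense in which stratification protects against drift.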