Foreword

Historians of behavioral science reviewing progress in the provision of human services at some point in the future will have to confront a curious issue. They will note that the twentieth century witnessed the development of a science of human behavior. They will also note that from mid-century on, clinicians treating behavioral and emotional disorders began relying more heavily on the systematic application of theories and facts emanating from this science to emotional and behavioral problems. They will make observations on various false starts in the development of our therapeutic techniques, and offer reasons for the initial acceptance of these "false starts," in which clinicians or practitioners would apply exactly the same intervention or style of intervention to every problem that came before them. But in the last analysis, historians will applaud the slow but systematic development of ever more powerful specific procedures and techniques devised to deal successfully with the variety of specific emotional and behavioral problems. This will be one of the success stories of the twentieth century.

Historians will also note a curious paradox that they will be hard pressed to explain. They will write that well into the 1990s, few practitioners or clinicians evaluated the effects of their new treatments in any systematic way.
Rather, whatever the behavioral or emotional problem, they would simply ask clients from time to time how they were feeling or how they were doing. Sometimes this would be followed by reports in an official chart or record duly noting clients' replies. If families or married couples were involved, a report from only one member of the interpersonal system would often suffice. Occasionally, these attempts at "evaluation" would reach peaks of quantifiable objectivity by presenting the questions in somewhat different ways, such as "How are you feeling or doing compared to a year ago when you first came to see me?"

Historians will point out wryly that this practice would be analogous to physicians periodically asking patients with blood infections or fractures "How are you feeling?" without bothering to analyze blood samples or take X-rays. "How could this have been?" they will ask. In searching for answers, they will examine records of clinical practice in the late twentieth century and find that the most usual response from clinicians was that they were simply too busy to evaluate what they were doing. But the real reason, astute historians will note, is that they never learned how.

Our government regulatory agencies, and other institutions, have anticipated these turn-of-the-century historians with the implementation of procedures requiring practitioners to evaluate what they do.
This practice, most often subsumed under the rubric of "accountability," will very soon have a broad and deep hold on the practice of countless human service providers. But more important than the rise of new regulations will be the full realization on the part of all practitioners of the ultimate logic and wisdom of evaluating what they do. In response to this need, a number of books have appeared of late dealing with methods to help practitioners evaluate what they do. Some books even suggest that this will enable clinicians to make direct contributions to our science. The teaching of these methods, which combine strategies of repeated measurement of emotional and behavioral problems with sophisticated case study procedures and single case experimental designs, is increasing rapidly in our graduate and professional schools. But at the heart of this process is measurement, and the sine qua non of successful measurement is the availability of realistic and practical measures of change. Only through wide dissemination of realistic, practical, and accurate measures of change will practitioners be able to fulfill the requirements of accountability as well as their own growing sense of per.