Author: Amy Batchelor
Paperback: 191 pages
Publisher: Columbia University Press, New York
Language: English
ISBN: 9780231193276
Review author: Ian Harris
Batchelor presents a concise and approachable introduction to statistics relevant to social workers. She uses a systematic chapter structure that opens with a brief overview of the material to be covered, followed by a list of specific learning objectives. The material is presented as a series of responses to practical questions and summaries of key concepts, making it easy for readers to identify content relevant to their particular interests. The discussion is frequently illustrated with examples showing the arithmetic working and with specific cases related to practice issues. Each chapter closes with a key-takeaway summary and short questions that readers can use to test themselves.
The topics covered begin with basic concepts such as ‘variable’, ‘level of measurement’ and ‘population’ before progressing to descriptive measures of central tendency, dispersion and relatedness. These are followed by an examination of experimental methods, hypothesis testing and inferential statistics, including some work on the analysis of results from multiple groups using F-tests and ANOVA. The appendices comprise a glossary of more than seventy key terms, answers to the end-of-chapter review questions and a cheat sheet for descriptive techniques.
Detailed instructions on calculating descriptive statistics are supplied, but the more extended calculations for inference are not. Instead, there is guidance on reading the results of such calculations as they are usually reported in the research literature, e.g. test values for Student’s t, ANOVA and χ²; degrees of freedom; and alpha or ‘p’ values. Curiously, given that it is relatively easy to collect ordinal-level data, there is no discussion of Wilcoxon-Mann-Whitney-type tests based on the summing of ranks. Also, the otherwise very useful decision tree for selecting the appropriate test neglects the possibility of drawing inferences from ordinal-level data and indicates only χ² as an alternative to tests requiring ratio- or interval-level data. It is, of course, possible to treat ordinal-level data as categorical and so count the frequencies needed for χ², but this is not discussed either.
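The point about treating ordinal responses as categorical counts can be sketched in a few lines of Python. This is purely illustrative and not drawn from the book; the groups, ratings and function name are invented, and the p-value lookup (which would normally follow) is omitted since the book's own emphasis is on reading reported test values and degrees of freedom:

```python
# Hypothetical ordinal ratings (1 = poor ... 5 = excellent) from two
# invented groups of survey respondents.
group_a = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
group_b = [2, 3, 3, 4, 4, 4, 5, 5, 5, 5]

def chi_square_statistic(rows):
    """Chi-square test statistic and degrees of freedom for a
    contingency table supplied as a list of rows of observed counts."""
    row_totals = [sum(r) for r in rows]
    col_totals = [sum(c) for c in zip(*rows)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(rows):
        for j, observed in enumerate(row):
            # Expected count under independence of group and category.
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    df = (len(rows) - 1) * (len(col_totals) - 1)
    return stat, df

# Treat each ordinal level as a category (discarding the ordering) and
# count frequencies per group to build the contingency table.
categories = sorted(set(group_a) | set(group_b))
table = [
    [group_a.count(c) for c in categories],
    [group_b.count(c) for c in categories],
]
stat, df = chi_square_statistic(table)
print(f"chi-square = {stat:.2f} on {df} degrees of freedom")
```

Note that collapsing the scale to categories throws away the ordering that a rank-sum test such as Wilcoxon-Mann-Whitney would exploit, which is why the omission of the latter is worth remarking on.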
The discussion of experimental design and of selecting the appropriate statistics when testing hypotheses is brief and not as extensively illustrated as the earlier content on descriptive statistics. However, it could be a useful source of support for readers trying to think critically about methodological issues in research reports. This is consistent with the book's larger ambition of equipping the practitioner with the critical insights necessary to recognise biases and limitations otherwise obscured by complex models and the appearance of scientific neutrality. To this end there are frequent illustrations of sources of invalidity, including sampling biases, neglect of error margins and related distortions of confidence.
This goes some way towards counterbalancing claims made about the power of statistical procedures to reliably indicate what works and to focus interventions and activities to greatest effect. These claims rest on a commitment to Evidence-Based Practice (EBP) in the sense given by the NASW definition[i]. It is hard to ignore the extent to which EBP has become associated with a particular approach to welfare policy and practice[ii]. It is something of a truism to claim that EBP’s positivism originates in narratives that privilege a technical-rational conception of science and a preference for the bureaucratic-legal organisation of the state. These narratives and values are arguably deeply rooted in the Enlightenment traditions of the West and are consequently open to criticism of potential cultural bias. The relatively unexplored assertion of the validity of EBP therefore appears somewhat inconsistent with the book's advocacy of critical exploration of biases obscured by complexity and the appearance of neutrality. Clearly, it would be outside the scope of an introductory instructional manual to rehearse all such considerations, but some acknowledgement of the issue might have been valuable.
That said, the intent to equip readers for, and to promote, critical engagement with the often obscure reporting of research evidence is laudable and timely. If you are looking for a source that will help you overcome a sense of statistics as alien and inaccessible, this book provides a good starting point.
[i]See the American National Association of Social Workers website at https://www.socialworkers.org/News/Research-Data/Social-Work-Policy-Research/Evidence-Based-Practice
[ii] See Witkin, S., & Harrison, W. (2001). Editorial: Whose evidence and for what purpose? Social Work, 46(4), 293-296, for an early overview of the range of critical and supportive perspectives taken on EBP. For more recent contributions see Betts Adams, K., Matto, H., & Winston LeCroy, C. (2009). Limitations of evidence-based practice for social work education: Unpacking the complexity. Journal of Social Work Education, 45(2), 165-186; and Bergmark, A., & Lundström, T. (2011). Guided or independent? Social workers, central bureaucracy and evidence-based practice. European Journal of Social Work, 14(3), 323-337.
Review author: Ian Harris is a lecturer in social work and teaches at the University of Essex. He specialises in curriculum design and professional education, particularly in the practice of assessing risks, vulnerabilities and exploitation. Ian has a long-standing interest in the relationships between vocation and knowledge, which he explores from critical perspectives in the study of culture.