Evaluation
Evaluation is part and parcel of educating, yet it can be experienced as a burden and an unnecessary intrusion. We explore the theory and practice of evaluation and some of the key issues for informal educators. In particular, we examine educators as connoisseurs and critics, and the way in which they can deepen their theory base and become researchers in practice.
A
lot is written about evaluation in education - a great deal of which
is misleading and confused. Many informal educators are suspicious of
evaluation because they see it as something that is imposed from outside.
It is a thing that we are asked to do; or that people impose on us.
As Gitlin and Smyth (1989) comment, from its Latin origin meaning 'to
strengthen' or to empower, the term evaluation has taken a numerical
turn - it is now largely about the measurement of things - and in the
process can easily slip into becoming an end rather than a means. In
this discussion of evaluation we will be focusing on how we can bring
questions of value (rather than numerical worth) back into the centre
of the process.

Evaluation is part and parcel of educating. As informal educators we are constantly called upon to make judgements, to make theory, and to discern whether what is happening is for the good. We have, in Elliot W. Eisner's words, to be connoisseurs and critics. In this piece we explore some important dimensions of this process: the theories involved;
the significance of viewing ourselves as action researchers; and some
issues and possibilities around evaluation in informal education. However,
first we need to spend a little bit of time on the notion of evaluation
itself.

On evaluation

To help make sense of evaluation I want to explore three key dimensions or distinctions.

Programme or practice evaluation?

First, it is helpful to make a distinction between programme and project evaluation, and practice evaluation.

Programme and project evaluation. This form of evaluation is typically concerned with making judgements about the effectiveness, efficiency and sustainability of pieces of work. Here evaluation is essentially a management tool. Judgements are made in order to reward the agency or the workers, and/or to provide feedback so that future work can be improved or altered. The former may well be linked to some form of 'payment by results', such as the giving of bonuses for successful activities, the invoking of penalty clauses where a project is deemed not to have met the objectives set for it, and decisions about further funding. The latter is important and necessary for the development of the work.

Practice evaluation. This form of evaluation is directed at the enhancement of work undertaken with particular individuals and groups, and at the development of participants (including the informal educator). It tends to be an integral part of the working process. In order to respond to a situation workers have to make sense of what is going on, and how they can best intervene (or not intervene). Similarly, other participants may also be encouraged, or take it upon themselves, to make judgements about the situation. In other words, they evaluate the situation and their part in it. Such evaluation is sometimes described as 'educative' or 'pedagogical' as it seeks to foster learning. But this is only part of the process. The learning involved is oriented to future or further action. It is also informed by certain values and commitments (informal educators need to have an appreciation of what might make for human flourishing and what is good).
For this reason we can say the approach is concerned with praxis: action that is informed and committed. These two forms of evaluation will tend to pull in different directions. Both are necessary, but just how they are experienced will depend on the next two dimensions.

Summative or formative evaluation?

Evaluations can be summative or formative. Evaluation can be primarily directed at one of two ends:

- making judgements about the worth or success of a programme or piece of practice, usually once it is completed (summative evaluation); or
- providing feedback so that a programme or piece of practice can be developed and improved while it is under way (formative evaluation).

Either can be applied to a programme or to the work of an individual. Our experience of evaluation is likely to be different according to the underlying purpose. If it is to provide feedback so that programmes or practice can be developed we are less likely, for example, to be defensive about our activities. Such evaluation isn't necessarily a comfortable exercise, and we may well experience it as punishing, especially if it is imposed on us (see below). Often a lot more is riding on a summative evaluation. It can mean the difference between having work and being unemployed!

Banking or dialogical evaluation?

Last, it is necessary to explore the extent to which evaluation is dialogical. As we have already seen, much evaluation is imposed or required by people external to the situation. The nature of the relationship between those requiring evaluation and those being evaluated is thus of fundamental importance. Here we can usefully contrast the dominant or traditional model, which tends to see the people involved in a project as objects, with an alternative, dialogical approach that views all those involved as subjects. This division has many affinities with Freire's (1972) split between 'banking' and dialogical models of education.
We can see in these contrasting
models important questions about power and control, and about the way in which
those directly involved in programmes and projects are viewed. Dialogical
evaluation places the responsibility for evaluation squarely on the
educators and the other participants in the setting (Jeffs and Smith 1999: 74).

On being connoisseurs and critics

Informal education involves more than gaining and exercising technical knowledge and skills. It depends on us also cultivating a kind of artistry. In this sense, educators are not engineers applying their skills to carry out a plan or drawing; they are artists who are able to improvise and devise new ways of looking at things. We have to work within a personal but shared idea of the good: an appreciation of what might make for human flourishing and well-being (see Jeffs and Smith 1990). What is more, there is little that is routine or predictable in our work. As a result, central to what we do as educators is the ability to 'think on our feet'. Informal education is driven by conversation and by certain values and commitments (Jeffs and Smith 1999: 65).

Describing informal education as an art does sound a bit pretentious. It may also appear twee. But there is a serious point here. When we listen to other educators, for example in team meetings, or have the chance to observe them in action, we inevitably form judgements about their ability. At one level, for example, we might be impressed by someone's knowledge of the income support system or of the effects of different drugs. However, such knowledge is of little use unless it can be deployed well. We may be informed and be able to draw on a range of techniques, yet the thing that makes us special is the way in which we are able to combine these and improvise in response to the particular situation. It is this quality that we are describing as artistry. For Donald Schön (1987: 13) artistry is an exercise of intelligence, a kind of knowing. Through engaging with our experiences we are able to develop maxims about, for example, group work or working with an individual. In other words, we learn to appreciate - to be aware and to understand - what we have experienced. We become what Eisner (1985; 1998) describes as 'connoisseurs'.
This involves very different qualities to those required by dominant models of evaluation.
The word connoisseurship comes from the Latin cognoscere, to know (Eisner 1998: 6). It involves the ability to see, not merely to look. To do this we have to develop the ability to name and appreciate the different dimensions of situations and experiences, and the way they relate one to another. We have to be able to draw upon, and make use of, a wide array of information. We also have to be able to place our experiences and understandings in a wider context, and connect them with our values and commitments. Connoisseurship is something that needs to be worked at, but it is not a technical exercise. The bringing together of the different elements into a whole involves artistry.

However, educators need to become something more than connoisseurs. We need to become critics. Criticism can be approached as the process of enabling others to see the qualities of something. As Eisner (1998: 6) puts it, effective criticism 'functions as the midwife to perception. It helps it come into being, then later refines it and helps it to become more acute.' The significance of this for those who want to be educators is, thus, clear. Educators also need to develop the ability to work with others so that they may discover the truth in situations, experiences and phenomena.
Educators as action researchers

Schön (1987) talks about professionals being researchers in the practice context. As Bogdan and Biklen (1992: 223) put it, research is a frame of mind - a perspective people take towards objects and activities. For them, and for us here, it is something that we can all undertake. It isn't confined to people with long and specialist training. It involves (Stringer 1999: 5):

- a problem to be investigated;
- a process of enquiry; and
- explanations that enable people to understand the nature of the problem.

Within the action research tradition there have been two basic orientations. The British tradition - especially that linked to education - tends to view action research as research oriented toward the enhancement of direct practice. For example, Carr and Kemmis provide a classic definition: 'Action research is simply a form of self-reflective enquiry undertaken by participants in social situations in order to improve the rationality and justice of their own practices, their understanding of these practices, and the situations in which the practices are carried out' (Carr and Kemmis 1986: 162). The second tradition, perhaps more common within the social welfare field - and most certainly the broader understanding in the USA - is of action research as 'the systematic collection of information that is designed to bring about social change' (Bogdan and Biklen 1992: 223). Bogdan and Biklen continue by saying that its practitioners marshal evidence or data to expose unjust practices or environmental dangers, and recommend actions for change. It has been linked into traditions of citizen action and community organizing, but in more recent years has been adopted by workers in very different fields. In many respects, this distinction mirrors the one we have already been using between programme evaluation and practice evaluation. In the case of practice evaluation, we may well set out to explore a particular piece of work.
We may think of it as a case study - a detailed examination of one setting, or a single subject, a single depository of documents, or one particular event (Merriam 1988). We can explore what we did as educators: what were our aims and concerns; how did we act; what were we thinking and feeling; and so on. We can look at what may have been going on for other participants; the conversations and interactions that took place; and what people may have learnt and how this may have affected their behaviour. Through doing this we can develop our abilities as connoisseurs and critics. We can enhance what we are able to take into future encounters. When evaluating a programme or project we may ask other participants to join with us to explore and judge the processes they have been involved in (especially if we are concerned with a more dialogical approach to evaluation). Our concern is to collect information, to reflect upon it, and to make some judgements as to the worth of the project or programme, and how it may be improved. This takes us into the realm of what a number of writers have called community-based action research. We have set out one example of this below.
We could contrast this with a more traditional, 'banking', style of research, in which an outsider (or the educators working on their own) collects information, organizes it, and comes to some conclusions as to the success or otherwise of the work.

In recent years informal educators have been put under great pressure to provide output indicators, qualitative criteria, objective success measures and adequate assessment criteria. Those working with young people have been encouraged to show how young people have developed personally and socially through participation. We face a number of problems when asked to approach our work in such ways. As we have already seen, our way of working as informal educators places us within a more dialogical framework. Evaluating our work in a more bureaucratic and less inclusive fashion may well compromise or cut across our work. There are also some basic practical problems. Here we explore four particular issues identified by Jeffs and Smith (1999: 75-6) with respect to programme or project evaluations.

The problem of multiple influences. The different things that influence the way people behave can't easily be disentangled. For example, an informal educator working with a project to reduce teen crime on two estates might notice that the one with a youth club open every weekday evening has less crime than the estate without such provision. But what will this variation, if it even exists, prove? It could be explained, as research has shown, by differences in the ethos of local schools, policing practices, housing, unemployment rates, and the willingness of people to report offences.

The problem of indirect impact. Those who may have been affected by the work of informal educators are often not easily identified. It may be possible to list those who have been worked with directly over a period of time. However, much contact is sporadic and may even take the form of a single encounter. The indirect impact is just about impossible to quantify. Our efforts may result in significant changes in the lives of people we do not work with. This can happen as those we work with directly develop.
Consider, for example, how we reflect on conversations that others recount to us, or ideas that we acquire second- or third-hand. Good informal education aims to achieve a ripple effect. We hope to encourage learning through conversation and example, and can only have a limited idea of what the true impact might be.

The problem of evidence. Change can rarely be monitored, even on an individual basis. For example, informal educators who focus on alcohol abuse within a particular group can face an insurmountable problem if challenged to provide evidence of success. They will not be able to measure use levels prior to intervention, during contact or subsequent to the completion of their work. In the end all the educator will be able to offer, at best, is vague evidence relating to contact or anecdotal material.

The problem of timescale. Change of the sort with which informal educators are concerned does not happen overnight. Changes in values, and in the ways that people come to appreciate themselves and others, are notoriously hard to identify - especially as they are happening. What may seem ordinary at the time can, with hindsight, be recognized as special.

There are two classic routes around such practical problems. We can use both as informal educators. The first is to undertake the sort of participatory action research we have been discussing here. When setting up and running programmes and projects we can build in participatory research and evaluation from the start. We make it part of our way of working. Participants are routinely invited and involved in evaluation. We encourage them to think about the processes they have been participating in, the way in which they have changed, and so on. This can be done in ways that fit in with the general run of things that we do as informal educators. The second route is to make linkages between our own activities as informal educators and the general research literature. An example here is group or club membership.
We may find it very hard to identify the concrete benefits for individuals of being a member of a particular group such as a football team or social club. What we can do, however, is to look to the general research on such matters. We know, for example, that involvement in such groups builds social capital (Putnam 2000), and we can draw on that broader evidence when making the case for our work.

This approach can work where there is some
freedom in the way that we can respond to funders and others with regard to evaluation. Where we are forced to fill in forms that require answers to certain set questions, we can still use the evaluations that we have undertaken in a participatory manner - and there may even be room to bring in some references to the broader literature. The key here is to remember that we are educators, and that we have a responsibility to foster learning, not only among those we work with in a project or programme, but also among funders, managers and policymakers. We need to view their requests for information as opportunities to work at deepening their appreciation and understanding of informal education and the issues and questions with which we work.

We can now turn to the sorts of questions that we could be asking about our practice and the pieces of work we undertake. Here we can look at some of the key questions identified by Jeffs and Smith (1999).
When
exploring these questions we need to be mindful of our values and
commitments as informal educators. In particular, we need to invite
those we are working with to explore such questions. The purpose of evaluation, as Everitt et al (1992: 129) argue, is to reflect critically on the effectiveness of personal and professional practice. It is to contribute to the development of 'good' rather than 'correct' practice.

Missing from the instrumental and technicist ways of evaluating teaching are the kinds of educative relationships that permit the asking of moral, ethical and political questions about the rightness of actions. When based upon educative (as distinct from managerial) relations, evaluative practices become concerned with breaking down structured silences and narrow prejudices. (Gitlin and Smyth 1989: 161)

Evaluation
is not primarily about the counting and measuring of things. It
entails valuing and to do this we have to develop as connoisseurs
and critics. We have also to ensure that this process of looking,
thinking and acting is participative.

Further reading

For the moment I have listed some guides to evaluation. At a later date I will be adding in some more contextual material concerning evaluation in informal education.

Berk, R. A. and Rossi, P. H. (1990) Thinking About Program Evaluation, Newbury Park: Sage. 128 pages. Clear introduction with chapters on key concepts in evaluation research; designing programmes; and examining programmes (using a chronological perspective). Useful US annotated bibliography.

Eisner, E. W. (1985) The Art of Educational Evaluation. A personal view, Barcombe: Falmer. 272 + viii pages. Wonderful collection of material around scientific curriculum making and its alternatives. Good chapters on Eisner's championship of educational connoisseurship and criticism. Not a cookbook, rather a way of orienting oneself.

Eisner, E. W. (1998) The Enlightened Eye. Qualitative inquiry and the enhancement of educational practice, Upper Saddle River, NJ: Prentice Hall. 264 + viii pages. Re-issue of a 1990 classic in which Eisner plays with the ideas of educational connoisseurship and educational criticism. Chapters explore these ideas, questions of validity, method and evaluation. An introductory chapter explores qualitative thought and human understanding, and final chapters turn to ethical tensions, controversies and dilemmas; and the preparation of qualitative researchers.

Everitt, A. and Hardiker, P. (1996) Evaluating for Good Practice, London: Macmillan. 223 + x pages. Excellent introduction that takes care to avoid technicist solutions and approaches. Chapters examine purposes; facts, truth and values; measuring performance; a critical approach to evaluation; designing critical evaluation; generating evidence; and making judgements and effecting change.

Patton, M. Q. (1997) Utilization-Focused Evaluation. The new century text 3e, Thousand Oaks, CA: Sage. 452 pages. Claimed to be the most comprehensive review and integration of the literature on evaluation.
Sections focus on evaluation use; focusing evaluations; appropriate methods; and the realities and practicalities of utilization-focused evaluation.

Rossi, P. H. and Freeman, H. (1993) Evaluation. A systematic approach 5e, Newbury Park, CA: Sage. 488 pages. Practical guidance from diagnosing problems through to measuring and analysing programmes. Includes material on formative evaluation procedures, practical ethics, and cost-benefits.

Stringer, E. T. (1999) Action Research 2e, Thousand Oaks, CA: Sage. 229 + xxv pages. Useful discussion of community-based action research directed at practitioners.

References

Bogdan, R. and Biklen, S. K. (1992) Qualitative Research for Education, Boston: Allyn and Bacon.

Carr, W. and Kemmis, S. (1986) Becoming Critical. Education, knowledge and action research, Lewes: Falmer.

Freire, P. (1972) Pedagogy of the Oppressed, London: Penguin.

Gauthier, A. H. and Furstenberg, F. F. (2001) 'Inequalities in the use of time by teenagers and young adults' in K. Vleminckx and T. M. Smeeding (eds.) Child Well-being, Child Poverty and Child Policy in Modern Nations, Bristol: Policy Press.

Gitlin, A. and Smyth, J. (1989) Teacher Evaluation. Critical education and transformative alternatives, Lewes: Falmer Press.

Jeffs, T. and Smith, M. (eds.) (1990) Using Informal Education, Buckingham: Open University Press.

Jeffs, T. and Smith, M. K. (1999) Informal Education. Conversation, democracy and learning, Ticknall: Education Now Books.

Larson, R. W. and Vera, A. (1999) 'How children and adolescents spend time across the world: work, play and developmental opportunities', Psychological Bulletin 125(6).

Merriam, S. B. (1988) Case Study Research in Education, San Francisco: Jossey-Bass.

Putnam, R. D. (2000) Bowling Alone: The collapse and revival of American community, New York: Simon and Schuster.

Rubin, F. (1995) A Basic Guide to Evaluation for Development Workers, Oxford: Oxfam.

Schön, D. A. (1983) The Reflective Practitioner. How professionals think in action, London: Temple Smith.

Schön, D. A. (1987) Educating the Reflective Practitioner, San Francisco: Jossey-Bass.