Educators as action researchers
Schön (1987) talks about professionals being researchers in
the practice context. As Bogdan and Biklen (1992: 223)
put it, research is 'a frame of mind - a perspective
people take towards objects and activities'. For them,
and for us here, it is something that we can all undertake.
It isn't confined to people with long and specialist training.
It involves (Stringer 1999: 5):
A problem to be investigated.
A process of enquiry.
Explanations that enable people to understand
the nature of the problem.
Within the action research tradition there have been two basic orientations.
The British tradition - especially that linked to education
- tends to view action research as research oriented toward
the enhancement of direct practice. For example, Carr and Kemmis
provide a classic definition:
'Action research is simply a form of self-reflective enquiry undertaken
by participants in social situations in order to improve the
rationality and justice of their own practices, their understanding
of these practices, and the situations in which the practices
are carried out' (Carr and Kemmis 1986: 162).
The second tradition,
perhaps more widely approached within the social welfare field
- and most certainly the broader understanding in the USA -
is of action research as 'the systematic collection of information
that is designed to bring about social change' (Bogdan and Biklen
1992: 223). Bogdan and Biklen continue by saying that its practitioners
marshal evidence or data to expose unjust practices or environmental
dangers and recommend actions for change. It has been linked
into traditions of citizens' action and community organizing,
but in more recent years has been adopted by workers in many
other fields. In many respects, this distinction mirrors one we have already
been using between programme evaluation and practice
evaluation. In the latter, we may well set out to explore a
particular piece of work. We may think of it as a case study
- a detailed examination of one setting, or a single subject,
a single depository of documents, or one particular event (Merriam
1988). We can explore what we did as educators: what were our
aims and concerns; how did we act; what were we thinking and
feeling and so on? We can look at what may have been going on
for other participants; the conversations and interactions that
took place; and what people may have learnt and how this may
have affected their behaviour. Through doing this we can develop
our abilities as connoisseurs and critics. We can enhance what
we are able to take into future encounters.
When evaluating a programme or project we may ask other participants
to join with us to explore and judge the processes they have
been involved in (especially if we are concerned with a more
dialogical approach to evaluation). Our concern is to collect
information, to reflect upon it, and to make some judgements
as to the worth of the project or programme, and how it may
be improved. This takes us into the realm of what a number of
writers have called community-based action research. We have
set out one example of this below.
Stringer on community-based action research
A fundamental premise of community-based action research
is that it commences with an interest in the problems
of a group, a community, or an organization. Its purpose
is to assist people in extending their understanding
of their situation and thus resolving problems that
confront them. Community-based action research is always enacted through an explicit
set of social values. In modern, democratic social contexts,
it is seen as a process of inquiry that has the following
characteristics:
It is democratic, enabling the participation
of all people.
It is equitable, acknowledging people's
equality of worth.
It is liberating, providing freedom from oppressive,
debilitating conditions.
It is life enhancing, enabling the expression
of people's full human potential.
Action research works through three basic phases:
Look - building a picture and gathering information.
When evaluating we define and describe the problem to
be investigated and the context in which it is set.
We also describe what all the participants (educators,
group members, managers etc.) have been doing.
Think - interpreting and explaining. When evaluating
we analyse and interpret the situation. We reflect on
what participants have been doing. We look at areas
of success and any deficiencies, issues or problems.
Act - resolving issues and problems. In evaluation
we judge the worth, effectiveness, appropriateness,
and outcomes of those activities. We act to formulate
solutions to any problems.
(Stringer 1999: 18, 43-44, 160)
This could be contrasted with a more traditional, 'banking', style of research
in which an outsider (or just the educators working on their
own) collect information, organize it, and come to some conclusions
as to the success or otherwise of the work.
Issues when evaluating informal education
In recent years informal educators have been put under great pressure
to provide 'output indicators', 'qualitative
criteria', 'objective success measures' and 'adequate
assessment criteria'. Those working with young people have
been encouraged to show how young people have developed personally
and socially through participation. We face a number of
problems when asked to approach our work in such ways. As we
have already seen, our way of working as informal educators
places us within a more dialogical framework. Evaluating our
work in a more bureaucratic and less inclusive fashion may well
compromise or cut across our work.
There are also some basic practical problems. Here we explore four
particular issues identified by Jeffs and Smith (1999: 75-6)
with respect to programme or project evaluations.
The problem of multiple influences. The different things that
influence the way people behave can't be easily broken
down. For example, an informal educator working with a project
to reduce teen crime on two estates might notice that the one
with a youth club open every weekday evening has less crime
than the estate without such provision. But what will this variation,
if it even exists, prove? It could be explained, as research
has shown, by differences in the ethos of local schools, policing
practices, housing, unemployment rates, and the willingness
of people to report offences.
The problem of indirect impact. Those who may have
been affected by the work of informal educators are often not
easily identified. It may be possible to list those who have
been worked with directly over a period of time. However, much
contact is sporadic and may even take the form of a single encounter.
The indirect impact is just about impossible to quantify. Our
efforts may result in significant changes in the lives of people
we do not work with. This can happen as those we work with directly
develop. Consider, for example, how we reflect on conversations
that others recount to us, or ideas that we acquire second-
or third-hand. Good informal education aims to achieve a ripple
effect. We hope to encourage learning through conversation and
example and can only have a limited idea of what the true impact is.
The problem of evidence. Change can rarely be monitored even
on an individual basis. For example, informal educators who
focus on alcohol abuse within a particular group can face an
insurmountable problem if challenged to provide evidence of
success. They will not be able to measure use levels prior to
intervention, during contact or subsequent to the completion
of their work. In the end all the educator will be able to offer,
at best, is vague evidence relating to contact or anecdotal material.
The problem of timescale. Change of the sort with which informal
educators are concerned does not happen overnight. Changes in
values, and the ways that people come to appreciate themselves
and others, are notoriously hard to identify - especially as
they are happening. What may seem ordinary at the time can,
with hindsight, be recognized as special.
There are two classic routes around such practical problems. We can
use both as informal educators.
The first is to undertake the sort of participatory action research
we have been discussing here. When setting up and running programmes
and projects we can build in participatory research and evaluation
from the start. We make it part of our way of working. Participants
are routinely invited and involved in evaluation. We encourage
them to think about the processes they have been participating
in, the way in which they have changed and so on. This can be
done in ways that fit in with the general run of things that
we do as informal educators.
The second route is to make linkages between our own activities
as informal educators and the general research literature. An
example here is group or club membership. We may find it very
hard to identify the concrete benefits for individuals from
being a member of a particular group such as a football team or
social club. What we can do, however, is to look to the general
research on such matters. We know, for example, that involvement
in such groups builds social capital.
We have evidence that:
In those countries where the state invested most in cultural
and sporting facilities, young people responded by investing
more of their own time in such activities (Gauthier and Furstenberg 2001).
The more involved people are in structured leisure activities,
good social contacts with friends, and participation in the
arts, cultural activities and sport, the more likely they
are to do well educationally, and the less likely they are
to be involved even in low-level delinquency (Larson and Verma 1999).
There appears to be a strong relationship
between the possession of social capital
and better health. 'As a rough rule of thumb, if you
belong to no groups but decide to join one, you cut your risk
of dying over the next year in half. If you smoke and
belong to no groups, it's a toss-up statistically whether
you should stop smoking or start joining' (Putnam 2000: 331).
'Regular club attendance, volunteering, entertaining, or church
attendance is the happiness equivalent of getting a college
degree or more than doubling your income. Civic connections
rival marriage and affluence as predictors of life happiness'
(Putnam 2000: 333).
This approach can work where there is some
freedom in the way that you can respond to funders and others
with regard to evaluation. Where we are forced to fill in forms
that require answers to certain set questions, we can still
use the evaluations that we have undertaken in a participatory
manner and there may even be room to bring in some references
to the broader literature. The key here is to remember that
we are educators and that we have a responsibility to foster
learning, not only among those we work with in a project
or programme, but also among funders, managers and policymakers.
We need to view their requests for information as opportunities
to work at deepening their appreciation and understanding of
informal education and the issues and questions with which we are concerned.
A model for evaluative practice
We can now turn to the sorts of questions that we could be asking
about our practice and the pieces of work we undertake. Here
we can look at some of the key questions identified by Jeffs and
Smith (1999) on evaluating informal education.
By considering the following dimensions - and how they
relate to each other - we can begin to judge or value
events and experiences. We do this by looking to our
understanding of what makes for human flourishing and
our role. We then have some basis upon which to make
decisions about our next step or to plan strategies.
Interactions. What are the
characteristics of these? What purposes did they serve?
What initiated them? To what extent were they educative?
Are they sustained? Do they reflect the sort of values
we are seeking to encourage?
Focus. What issues and topics
form the focus for conversation? Which of these are
initiated by us, and which by others? What are the most
common subjects or concerns?
Setting. Where is the work
undertaken? What physical settings best stimulate conversation?
What is the impact of the setting upon subject matter,
the nature of those worked with, and the quality of the interactions?
Aims. What were we as educators
aiming to achieve? What were the aims of others? Were
there conflicts between the two?
Strategies. How did we, as
educators, plan to achieve our aims? Who set these?
What moves did we make? How, if at all, were they altered
and who influenced this? What strategies did others
have? How did they change?
Outcomes. Were outcomes set,
and if so by whom? What appeared to be the outcome for
different participants? What did we learn from our engagement?
Are there issues and questions we need to address? Who
needs to know about this?
(Jeffs and Smith 1999: 77)
In exploring these questions we need to be mindful of our values
and commitments as informal educators. In particular, we need
to invite those we are working with to explore such questions.
The purpose of evaluation, as Everitt et al. (1992: 129) put it, is to reflect
critically on the effectiveness of personal and professional
practice. It is to contribute to the development of 'good'
rather than 'correct' practice.
Noticeably absent from the instrumental and technicist ways of evaluating teaching
are the kinds of educative relationships that permit the asking
of moral, ethical and political questions about the rightness
of actions. When based upon educative (as distinct from managerial)
relations, evaluative practices become concerned with breaking
down structured silences and narrow prejudices. (Gitlin and
Smyth 1989: 161)
Evaluation is not primarily about the counting and measuring of things.
It entails valuing, and to do this we have to develop
as connoisseurs and critics. We have also to ensure that this
process of looking, thinking and acting is participative.
Further reading and references
For the moment I
have listed some guides to evaluation. At a later date I will
be adding in some more contextual material concerning evaluation
in informal education.
Berk, R. A. and
Rossi, P. H. (1990) Thinking About Program Evaluation, Newbury
Park: Sage. 128 pages. Clear introduction with chapters on key
concepts in evaluation research; designing programmes; examining
programmes (using a chronological perspective). Useful US annotated bibliography.
Eisner, E. W. (1985)
The Art of Educational Evaluation. A personal view, Barcombe:
Falmer. 272 + viii pages. Wonderful collection of material around
scientific curriculum making and its alternatives. Good chapters
on Eisner's championing of educational connoisseurship and
criticism. Not a cookbook, rather a way of orienting oneself.
Eisner, E. W. (1998)
The Enlightened Eye. Qualitative inquiry and the enhancement
of educational practice, Upper Saddle River, NJ: Prentice Hall.
264 + viii pages. Re-issue of a 1990 classic in which Eisner
plays with the ideas of educational connoisseurship and educational
criticism. Chapters explore these ideas, questions of validity,
method and evaluation. An introductory chapter explores qualitative
thought and human understanding and final chapters turn to ethical
tensions, controversies and dilemmas; and the preparation of qualitative researchers.
Everitt, A. and
Hardiker, P. (1996) Evaluating for Good Practice, London: Macmillan.
223 + x pages. Excellent introduction that takes care to avoid
technicist solutions and approaches. Chapters examine purposes;
facts, truth and values; measuring performance; a critical approach
to evaluation; designing critical evaluation; generating evidence;
and making judgements and effecting change.
Patton, M. Q. (1997)
Utilization-Focused Evaluation. The new century text 3e, Thousand
Oaks, Ca.: Sage. 452 pages. Claimed to be the most comprehensive
review and integration of the literature on evaluation. Sections
focus on evaluation use; focusing evaluations; appropriate methods;
and the realities and practicalities of utilization-focused evaluation.
Rossi, P. H. and
Freeman, H. (1993) Evaluation. A systematic approach 5e, Newbury
Park, Ca.: Sage. 488 pages. Practical guidance from diagnosing
problems through to measuring and analysing programmes. Includes
material on formative evaluation procedures and practical ethics.
Stringer, E. T. (1999) Action Research 2e, Thousand Oaks, CA:
Sage. 229 + xxv pages. Useful discussion of community-based
action research directed at practitioners.
Bogdan, R. and Biklen, S. K. (1992) Qualitative Research For Education,
Boston: Allyn and Bacon.
Carr, W. and Kemmis, S. (1986) Becoming Critical. Education, knowledge
and action research, Lewes: Falmer.
Freire, P. (1972) Pedagogy of the Oppressed, Harmondsworth: Penguin.
Gauthier, A. H. and Furstenberg, F. F. (2001) 'Inequalities in the
use of time by teenagers and young adults' in K. Vleminckx
and T. M. Smeeding (eds.) Child Well-being, Child Poverty and
Child Policy in Modern Nations, Bristol: Policy Press.
Gitlin, A. and Smyth, J. (1989) Teacher
Evaluation. Critical education and transformative alternatives,
Lewes: Falmer Press.
Jeffs, T. and Smith, M. (eds.) (1990) Using
Informal Education, Buckingham: Open University Press.
Jeffs, T. and Smith, M. K. (1999) Informal Education. Conversation, democracy
and learning, Ticknall: Education Now Books.
Larson, R. W. and Verma, S. (1999) 'How children and adolescents
spend time across the world: work, play and developmental opportunities',
Psychological Bulletin 125(6).
Merriam, S. B. (1988) Case Study Research
in Education, San Francisco: Jossey-Bass.
Putnam, R. D. (2000) Bowling Alone: The collapse and revival of American
community, New York: Simon and Schuster.
Rubin, F. (1995) A Basic Guide to Evaluation
for Development Workers, Oxford: Oxfam.
Schön, D. A. (1983) The Reflective Practitioner.
How professionals think in action, London: Temple Smith.