Key Learning Objectives
- Understand the role of evaluation in the contemporary public sector
- Understand the needs of the evaluation sponsor
- Establish the focus, scope, and purpose of an evaluation
- Develop a suitable and manageable research design
- Implement and manage an evaluation
- Organise interviews, surveys, and other data collection methods
- Manage and combine primary and secondary data
- Be familiar with a range of quantitative and qualitative methods for data analysis
- Build organisational learning into the evaluation process
- Understand the uses of evaluation results
About the Course
How well are government programs working? Whom do they affect, and what are their effects? Could they be improved?
More than ever before, politicians, advisors, policy analysts, public sector managers, funding partners, and many other interested parties need quality information on the success or otherwise of specific programs.
This course offers an introduction to program evaluation techniques and how they can be used to generate quality information about program performance and its enhancement.
Drawing on state-of-the-art methods for evaluation research, the course gives participants the knowledge and tools needed to effectively commission and conduct program evaluations. It also recognises the constraints that many public sector managers and analysts work under, and how those constraints limit the scale of evaluation work.
Who Will Benefit
All those involved in planning for and producing program evaluations at the local, state, and federal levels, including public sector managers involved in:
- Managing government programs
- Monitoring, assessing, and reviewing programs
- Implementing policies, projects, and programs
- Policy development and analysis
Testimonials
“The discussions between group members, using relevant examples, were enjoyable and educational. Michael was very knowledgeable and willing to pursue things of interest to the group”
Co-ordinator, Defence
Course Outline
Scoping and Planning Program Evaluations
Program evaluation: An overview
- Definition of evaluation
- The place of evaluation in the policy process
- Outcome and process evaluation
- Evaluation and other research initiatives
- Linking evaluation to evidence-based policy development, best practice interventions, creating organisational report cards, and organisational learning
Understanding a program
Evaluators must balance the quest for objectivity with the need to gain intimate knowledge of why a program was established, how stakeholders make sense of the program, how the program works, and what it delivers for the target population.
- Characterising the program
- Getting to know the program
- Identifying program goals
- Program theory and implementation theory
- Building a program theory
- Comparing program theory to actual developments
Identifying issues and formulating questions
- Assessing the need for a program
- Good evaluation questions
- Devising specific questions for the evaluation
- Prioritising questions to be explored
Assessing and monitoring the program process
- Setting criteria for judging program process
- Common forms of program process evaluation
- Assessing service utilisation
- Assessing organisational functions
Assessing and monitoring program outcomes
The question ‘Is this program working?’ can be most readily answered by assessing and monitoring program outcomes. Various approaches to identifying and assessing program outcomes are reviewed.
- Identifying relevant outcomes
- Establishing baseline and current outcomes
- Considering unintended outcomes
Assessing program impact: The classic experimental design
The classic experimental design is often upheld as the ‘gold standard’ of evaluation designs (a brief illustrative sketch follows the list below):
- Random assignment
- Planned variation
- The classic experimental design reviewed
- Analysing experimental results
- Coping with difficulties and limitations that arise
- Conditions that make randomised experiments difficult
- Criticisms
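The short Python sketch below illustrates the core logic of the classic experimental design: when participants are randomly assigned, the program effect can be estimated by comparing mean outcomes for treated and control groups. The data, effect size, and variable names are simulated assumptions for demonstration only and are not drawn from the course materials.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000

# Random assignment: each participant has an equal chance of receiving the program.
treated = rng.integers(0, 2, size=n)

# Simulated outcome: a baseline level plus a hypothetical program effect of 2.0.
outcome = 10 + 2.0 * treated + rng.normal(0, 3, size=n)

# Because assignment is random, a simple difference in group means estimates
# the average treatment effect.
ate = outcome[treated == 1].mean() - outcome[treated == 0].mean()

# Unequal-variance standard error of the difference in means.
se = np.sqrt(
    outcome[treated == 1].var(ddof=1) / (treated == 1).sum()
    + outcome[treated == 0].var(ddof=1) / (treated == 0).sum()
)

print(f"Estimated program effect: {ate:.2f} (standard error {se:.2f})")
```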
Alternative strategies for program evaluation
Acknowledging the practical limitations of the classic experimental design, researchers have developed alternative strategies for program evaluation such as the following (a simple worked example appears after the list):
- Quasi-experimental research designs
- Qualitative methods
- Cost-benefit and cost-effectiveness analysis
- Meta-analysis
- Triangulation and replication
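As one example of an alternative strategy, the sketch below shows a simple cost-effectiveness comparison between two hypothetical program options. All figures and option names are invented purely for illustration.

```python
# Hypothetical cost and outcome figures for two program options (invented data).
program_options = {
    "Option A": {"total_cost": 1_200_000, "outcome_units": 400},  # e.g. clients placed in work
    "Option B": {"total_cost": 900_000, "outcome_units": 250},
}

for name, figures in program_options.items():
    # Cost-effectiveness ratio: dollars spent per unit of outcome achieved.
    ratio = figures["total_cost"] / figures["outcome_units"]
    print(f"{name}: ${ratio:,.0f} per outcome unit")
```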
Managing Program Evaluations
Working with program stakeholders
Program evaluation always takes place in a politicised environment, so stakeholders need to be considered at every stage.
- Identifying key individuals and groups associated with the program
- Developing a communication plan
- Making effective use of an advisory committee
- Gathering advice, acting upon it, and reporting back
Measurement issues
Program evaluators seek to make strong claims about the nature and impacts of the programs they study. Approaches for transforming concepts into collectable, good-quality data are discussed.
- Dependent and independent variables
- Desirable characteristics of variables
- Concepts and measures
- Measuring variables
- Using multiple measures
- Markers of progress
- Program inputs, resources, and environments
Techniques for data collection
- Sources of data
- Sampling
- Interviewing
- Coding responses
- Utilising existing statistical data
- Merging primary and secondary data
Techniques for data analysis and interpretation
Evaluators must select data analysis techniques that can yield insights into program effectiveness, such as the following (a short regression sketch appears after the list):
- Regression analysis
- Working with limited dependent variables
- Working with censored data
- Time series analysis
- Choosing case studies based on analysis of quantitative evidence
- Working with qualitative data
- Good practice in data presentation and interpretation
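As a simple illustration of regression analysis in this context, the sketch below fits an ordinary least squares model with a treatment indicator and one control variable to simulated data. The variable names, data, and effect sizes are assumptions made for demonstration, not results from any actual evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data: a baseline score (control variable) and a participation indicator.
prior_score = rng.normal(50, 10, size=n)
participated = rng.integers(0, 2, size=n)

# Simulated outcome with a hypothetical program effect of 5.0 points.
outcome = 20 + 0.6 * prior_score + 5.0 * participated + rng.normal(0, 5, size=n)

# Design matrix: intercept, treatment indicator, control variable.
X = np.column_stack([np.ones(n), participated, prior_score])
coefficients, *_ = np.linalg.lstsq(X, outcome, rcond=None)

intercept, program_effect, control_effect = coefficients
print(f"Estimated program effect, holding prior score constant: {program_effect:.2f}")
```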
Presenting and utilising evaluation findings – increasing the odds that they will have an impact on decision-making and inform broader policy conversations
- Meeting expectations of the immediate client
- Supporting change processes
- Serving multiple audiences
- Exploring opportunities for broader dissemination
Aspects of ethical practice
- Understanding the program and its place
- Maintaining high technical quality
- Using balance and judgment
- Adopting a utilisation focus from the outset
- Behaving with integrity when working with others
- Surviving under difficult circumstances
Continuing to build your evaluation research capabilities
- Inviting feedback and constructive criticism
- Lesson drawing from other evaluation work
- Joining professional associations
- Improving your professional reading habits
- Integrating an evaluation mindset into your everyday work
Course review and evaluation
An interactive discussion focusing on topics covered in the course, issues that have arisen as the course has progressed, what participants have gained from the course, how participants will integrate what they have learned into their work practices, and how the course could be improved or extended in the future.
On-site & in-house training
Deliver this course how you want, where you want, when you want – and save up to 40%! 8+ employees seeking training on the same topic?
Talk to us about an on-site/in-house & customised solution.
Still have a question?
Sushil Kunwar
Training Consultant
+61 (0)2 9080 4395
training@informa.com.au