Another variation is useful when pre-program data collection, such as administering a test of knowledge or attitudes, might "teach" youth the questions that will be asked after the program to measure change, and thus distort the measurement of program impact.
If the experiment loses track of substantially more control group members than treatment group members, the evaluation data will not only be incomplete; it will also provide distorted and therefore misleading information on the program's impacts.
Experimental designs are the most powerful and produce the strongest evidence.

All this makes the task of differentiating services integration from service comprehensiveness very difficult.

The remaining group received no special services for teenage parents. Mothers in the alternative school with the nursery program had completed more schooling and were more likely to still be enrolled in school than the other mothers.

Monetization of benefits to individuals requires assumptions about three matters, all of which are frequently controversial.

A critical, and often difficult, problem is determining which services were actually received. Other potential problems may be solvable through the use of special statistical techniques.
Another approach is to include a comprehensive measure of "quality of life" as an outcome measure.

The programme logic model focuses on the logic or rationale that links the services with a sequence of outcome objectives.

And even these may be inadequate to remove the biases thus introduced.

This unit of measurement is usually monetary. It can also be used with performance indicators to assess the cost or cost-efficiency of achieving program goals.

A second threat is that of "history"--the risk that unrelated events may affect outcomes.

The agencies collaborating in a services integration effort may differ in their vision of the program's goals, key strategies, and how youth needs will be evaluated and problems addressed.

Discussions are tape-recorded, but the primary record is the detailed notes taken by the researcher who acts as recorder.

Public opinion may resist treating similar children differently on the basis of a coin flip, or may view random assignment as exploiting vulnerable populations and powerless people.

For example, crime reduction can be assessed by comparing neighborhood rates of calls for police services, household victimization rates, or youth delinquency surveys.
To what extent does evaluation information already exist somewhere on the same or a closely related intervention?
The referral process educates clients on the options and assistance potentially available. A more expensive form of data collection for non-experimental evaluations is a survey of participants some time after the end of the program.
It is important to know the point where, for example, increasing the level of activity results in a large increase in costs because of the need for additional fixed or semi-fixed components.
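This step-cost behavior can be sketched in a few lines; the caseload capacity and dollar figures below are invented purely for illustration, not drawn from any actual program budget:

```python
import math

def total_cost(participants, counselor_salary=40_000,
               caseload_capacity=30, variable_cost_per_youth=500):
    """Illustrative step-cost model: counselors are a semi-fixed cost,
    added in whole units whenever caseload capacity is exceeded."""
    counselors = math.ceil(participants / caseload_capacity)
    return counselors * counselor_salary + participants * variable_cost_per_youth

# Serving one youth beyond capacity triggers a large jump in total cost:
print(total_cost(30))  # 1 counselor: 40,000 + 15,000 = 55,000
print(total_cost(31))  # 2 counselors: 80,000 + 15,500 = 95,500
```

The jump between 30 and 31 participants (here, a full counselor salary plus one unit of variable cost) is exactly the kind of threshold a cost analysis should locate before recommending changes in program scale.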
Each of these has a stake in the results. Knowledge of the expectations and value systems of various stakeholders, coupled with a systematic review of the programme logic model and process flow chart, greatly facilitates the identification of key questions and issues that should be addressed in the evaluation.
Experimental designs are considered the "gold standard" in impact evaluation.

Are there unique characteristics of the projects to be evaluated that might not apply to most other projects?

Although some are brief, user-friendly, and have demonstrated reliability and validity,42,43 their cross-cultural applicability is unknown in some instances.

This is often achieved through the use of a logic model, which shows the logical connections between programme activities and the outcomes to be achieved.27

The findings were used to estimate the success rate required to show a positive rate of return for programs with different cost differentials, given the delinquency risk of the population served.
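The break-even logic behind such an estimate can be sketched as follows; the function name and dollar figures are hypothetical, chosen only to show the arithmetic:

```python
def breakeven_success_rate(cost_differential, benefit_per_success):
    """Success rate at which the monetized benefit of each success
    just offsets a program's extra cost per participant."""
    return cost_differential / benefit_per_success

# Hypothetical figures: if a program costs $3,000 more per youth and each
# "success" (e.g., an averted delinquent outcome) is valued at $30,000,
# the program shows a positive rate of return above a 10% success rate.
print(breakeven_success_rate(3_000, 30_000))  # 0.1
```

Any actual estimate of this kind inherits the controversial monetization assumptions noted above, since the benefit per success must itself be assigned a dollar value.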
Antecedent variables include the levels and types of neighborhood, family, peer group, and personal risk factors for participants as well as their demographic characteristics.