Empowerment evaluation
Empowerment evaluation (EE) is an evaluation approach designed to help communities monitor and evaluate their own performance. It is used in comprehensive community initiatives as well as small-scale settings and is designed to help groups accomplish their goals. According to David Fetterman, "Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination".[1] An expanded definition is: "Empowerment evaluation is an evaluation approach that aims to increase the likelihood that programs will achieve results by increasing the capacity of program stakeholders to plan, implement, and evaluate their own programs."[2]
Scope
Empowerment evaluation has been used in programs ranging from a $15 million Hewlett-Packard corporate philanthropy effort[3] to accreditation in higher education[4] and from the NASA Jet Propulsion Laboratory’s Mars Rover project[5] to battered women's shelters.[6] Empowerment evaluation has been used by government, foundations, businesses, and non-profits, as well as on Native American reservations. It is a global phenomenon, with projects and workshops around the world including Australia, Brazil, Canada, Ethiopia, Finland, Israel, Japan, Mexico, Nepal, New Zealand, South Africa, Spain, Thailand, the United Kingdom, and the United States. A sample of sponsors and clients includes Casey Family Programs, Centers for Disease Control and Prevention, Family & Children Services, Health Trust, Knight Foundation, Poynter, Stanford University, State of Arkansas, UNICEF and Volunteers of America.[7]
History and publications
Empowerment evaluation was introduced in 1993 by David Fetterman during his presidential address at the American Evaluation Association’s (AEA) annual meeting.[1]
The approach was initially well received by some researchers who commented on the complementary relationship between EE and community psychology, social work, community development and adult education. They highlighted how it inverted traditional definitions of evaluation, shifting power from the evaluator to program staff and participants. Early supporters positively noted the focus on social justice and self-determination. One colleague compared the writings on the approach to Martin Luther's 95 Theses.[8][9][10]
Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability,[11] the first empowerment evaluation book, provided an introduction to theory and practice. It highlighted EE's scope, ranging from its use in a national educational reform movement to its endorsement by the W. K. Kellogg Foundation’s Director of Evaluation. The book presented examples in various contexts, including federal, state, and local government; HIV prevention and related health initiatives; African American communities; and battered women’s shelters. This first volume also provided various theoretical and philosophical frameworks as well as workshop and technical assistance tools.
Foundations of Empowerment Evaluation[12] was the second EE book. The book provided steps and cases and highlighted the role of the Internet in facilitating and disseminating the approach.
The third book was titled Empowerment Evaluation Principles in Practice.[13] It emphasized greater conceptual clarity by making explicit EE's underlying principles, ranging from improvement and inclusion to capacity building and social justice. In addition, it highlighted EE's commitment to accountability and outcomes by stating them as an explicit principle and presenting substantive outcome examples. Cases described include educational reform, youth development programs and child abuse prevention programs.[14]
Theories
The primary theories guiding empowerment evaluation are process use and theories of use and action.[15][16][17]
Process use represents much of the rationale or logic underlying EE in practice, because it cultivates ownership by placing the approach in community and staff members’ hands.
The alignment of theories of use and action explains how empowerment evaluation helps people produce desired results.[18][19][20][21][22][23]
Process use
Empowerment evaluation is designed to be used by people. It places evaluation in the hands of community and staff members. The more that people are engaged in conducting their own evaluations, the more likely they are to believe in them, because the evaluation findings are theirs. In addition, a byproduct of this experience is that they learn to think evaluatively. This makes them more likely to make decisions and take actions based on their evaluation data. This way of thinking is at the heart of process use.[24]
Principles
Empowerment evaluation is guided by 10 principles.[25] These principles help evaluators and community members align decisions with the larger purpose or goals associated with capacity building and self-determination.
- Improvement – help people improve program performance
- Community ownership – value and facilitate community control
- Inclusion – invite involvement, participation, and diversity
- Democratic participation – open participation and fair decision making
- Social justice – address social inequities in society
- Community knowledge – respect and value community knowledge
- Evidence-based strategies – respect and use both community and scholarly knowledge
- Capacity building – enhance stakeholder ability to evaluate and improve planning and implementation
- Organizational learning – apply data to evaluate and implement practices and inform decision making
- Accountability – emphasize outcomes and accountability
Concepts
Key concepts include: critical friends, cultures of evidence, cycles of reflection and action, communities of learners, and reflective practitioners.[26] A critical friend, for example, is an evaluator who provides constructive feedback[27] and helps to ensure that the evaluation remains organized, rigorous and honest.
Steps
In EE's three-step approach, participants:[28][12]
- establish their mission;
- review their current status; and
- plan for the future.
This approach is popular in part due to its simplicity, effectiveness and transparency.
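As a purely illustrative sketch, the three steps could be recorded in a simple script. The data structure and the 1–10 rating scheme below are assumptions made for illustration, not part of the published approach:

```python
# Hypothetical record of EE's three-step approach for one program.
# The structure and the 1-10 rating scheme are illustrative assumptions only.
from statistics import mean

# Step 1: establish the mission.
mission = "Reduce youth tobacco use in the county"

# Step 2: review current status -- participants rate key activities
# (assumed scale: 1-10, where 10 = working well).
current_status = {
    "Community outreach": [7, 8, 6],
    "School workshops": [4, 5, 3],
    "Quit-line referrals": [6, 6, 7],
}

# Step 3: plan for the future -- prioritize the lowest-rated activities.
priorities = sorted(current_status, key=lambda a: mean(current_status[a]))
print("Mission:", mission)
for activity in priorities:
    print(f"  {activity}: average rating {mean(current_status[activity]):.1f}")
print("Plan for the future: focus first on", priorities[0])
```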
A second approach is the 10-step Getting to Outcomes (GTO).[citation needed] GTO helps participants answer 10 questions using relevant literature, methods and tools. The 10 accountability questions and literature to address them are:
- What are the needs and resources? (Needs assessment; resource assessment)
- What are the goals, target population and desired outcomes? (Goal setting)
- How does the intervention incorporate knowledge of science and best practices in this area? (Science and best practices)
- How does the intervention fit with existing programs? (Collaboration; cultural competence)
- What capacities do you need to implement a quality program? (Capacity building)
- How will this intervention be carried out? (Planning)
- How will the quality of implementation be assessed? (Process evaluation)
- How well did the intervention work? (Outcome and impact evaluation)
- How will quality improvement strategies be incorporated? (Total quality management; continuous quality improvement)
- If the intervention is (or components are) successful, how will the intervention be sustained? (Sustainability and institutionalization)
A manual with worksheets addresses how to answer the questions.[29] While GTO has been used primarily in substance abuse prevention, customized GTOs have been developed for preventing underage drinking[citation needed] and promoting positive youth development.[citation needed] Several of these books are available for download. In addition, EE can employ photojournalism, online surveys, virtual conferencing and self-assessments.[citation needed]
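The ten GTO questions lend themselves to a simple self-assessment checklist. The sketch below is illustrative only and is not part of any published GTO toolkit; the data structure and the example answers are hypothetical:

```python
# Hypothetical GTO-style self-assessment record.
# The ten accountability questions paraphrase the list above; the
# data structure and reporting logic are illustrative only.
GTO_QUESTIONS = [
    "Needs and resources",
    "Goals, target population, desired outcomes",
    "Science and best practices",
    "Fit with existing programs",
    "Capacities needed",
    "Plan",
    "Process evaluation",
    "Outcome and impact evaluation",
    "Continuous quality improvement",
    "Sustainability",
]

def unanswered(answers: dict[str, str]) -> list[str]:
    """Return the GTO questions the team has not yet answered."""
    return [q for q in GTO_QUESTIONS if not answers.get(q, "").strip()]

# Example: a team part-way through its self-assessment (invented answers).
answers = {
    "Needs and resources": "Community survey completed in March.",
    "Goals, target population, desired outcomes": "Reduce youth smoking by 10%.",
}
print(f"{len(GTO_QUESTIONS) - len(unanswered(answers))}/10 questions answered")
for q in unanswered(answers):
    print("Still to address:", q)
```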
Monitoring
Conventional and innovative evaluation tools are used to monitor outcomes, including online surveys, focus groups and interviews, as well as quasi-experimental designs. In addition, program-specific metrics are developed, using baselines, benchmarks, goals and actual performance. For example, a minority tobacco prevention program in Arkansas established:
- Baselines (the number of tobacco users)
- Goals (the yearly number of subjects helped)
- Benchmarks (the monthly number of subjects helped)
- Performance (the number of subjects who stop smoking)
These metrics help the community monitor implementation by comparing performance with benchmarks. They also enable the community to make mid-course corrections.
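As an illustration of that comparison only, the sketch below tracks invented monthly figures against a benchmark; the numbers are hypothetical and are not the actual Arkansas program data:

```python
# Illustrative sketch of benchmark-versus-performance monitoring.
# All numbers are invented for demonstration purposes.
monthly_benchmark = 40               # subjects the program aims to help each month
yearly_goal = 480                    # 12 x monthly benchmark
actual_by_month = [35, 42, 38, 51]   # subjects helped in months 1-4 (hypothetical)

for month, actual in enumerate(actual_by_month, start=1):
    gap = actual - monthly_benchmark
    status = "on track" if gap >= 0 else "behind benchmark"
    print(f"Month {month}: helped {actual} ({status}, {gap:+d} vs benchmark)")

year_to_date = sum(actual_by_month)
print(f"Year to date: {year_to_date}/{yearly_goal} "
      f"({year_to_date / yearly_goal:.0%} of annual goal)")
```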
Selected case examples
Stanford University School of Medicine applied the technique to curricular decision making.[26] EE contributed to improvements in course and clerkship (clinical rotation) ratings. For example, the average student ratings for required courses improved significantly (P = .04; Student's one-sample t test).
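For readers unfamiliar with the reported statistic, the sketch below shows how a one-sample t test of rating changes against zero could be computed with SciPy. It is analogous in form to the test cited above, but the rating values are invented and are not the Stanford data:

```python
# Hypothetical illustration of a one-sample t test like the one cited above.
# The rating changes are invented; they are NOT the Stanford course data.
from scipy import stats

# Change in average rating for each required course (positive = improvement).
rating_changes = [0.3, 0.1, 0.4, 0.2, 0.5, -0.1, 0.3, 0.2]

# Test whether the mean change differs from zero.
t_stat, p_value = stats.ttest_1samp(rating_changes, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```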
EE guided Hewlett-Packard's $15 million Digital Village Initiative. The initiative was designed to help bridge the digital divide in communities of color. Outcomes ranged from Native American communities building one of the largest unlicensed wireless systems in the country to the creation of a high-resolution digital printing press.[citation needed]
The State of Arkansas used EE in academically distressed schools and tobacco prevention. Outcomes included improved test scores, upgraded school-level performance, and the prevention and reduction of tobacco consumption.[30]
A school district in South Carolina invested millions of their own dollars to provide each student with a personalized computing device as an educational tool. EE was used to support large scale implementation of the initiative and monitor outcomes associated with teacher and student behavior change.[31]
Rationale
[edit]Response to critique
EE is conducted by an internal group, not an external individual. Programs are dynamic, not static, and thus require more fluid, responsive, and continual assessment. The evaluator becomes a coach, rather than the expert. Investigating worth and merit is not sufficient; the focus should also be on program improvement. Empowerment evaluation, as a group activity, builds in self-checks on bias. Internal and external forms of evaluation are compatible and reinforcing. When the Joint Committee's standards were applied, empowerment evaluation was found to be consistent with the spirit of the standards. Empowerment evaluation is not a threat to traditional evaluation. It may instead help to revitalize it.[32]
Empowerment evaluation is part of an emancipatory research stream. Its unique contribution is its focus on fostering self-determination and building capacity. Empowerment evaluation is guided by process use. Additional effort could be made to further distinguish empowerment from collaborative, participatory, stakeholder, and utilization forms of evaluation. Empowerment evaluation should be limited or focused on the disenfranchised and issues of liberation. Empowerment evaluation has become a part of the evaluation landscape.[16]
Empowerment evaluation is part of a worldwide movement. It is now part of the evaluation field. However, empowerment evaluation needs to focus on the consumer, rather than staff members. In addition, the definition of empowerment evaluation has changed. Bias in evaluation can be removed by distancing oneself from the group or program being assessed. Internal and external forms of evaluation are needed. Empowerment evaluators serve as evaluation consultants.[33]
The definition of empowerment evaluation is the same as when the approach was first defined and introduced to the field. However, it has been expanded to further clarify the purpose of the approach. Fetterman and Wandersman agree that empowerment evaluation is part of an emancipatory stream of research. It also relies on process use to guide it. They also believe that greater effort is needed to further distinguish empowerment from other forms of stakeholder involved approaches. However, empowerment evaluation can be viewed along a continuum from less empowering to more empowering in nature. Empowerment evaluation is designed to help the disenfranchised. However, the boundaries are much broader and inclusive. Everyone can benefit from self-assessment and becoming more self-determined.
Fetterman advocated that evaluation be shared with a broader population.[1][34]
Debates and controversy
Empowerment evaluation challenged the status quo concerning who is in control of an evaluation and what it means to be an evaluator. Conventionally, evaluations are conducted by a specialist. In EE, the group or community performs the evaluation, guided by an empowerment evaluator or “critical friend.”
First wave of criticism
Stufflebeam claimed that evaluation should be left in the hands of professionals who objectively investigate the worth or merit of an object and that EE violates the (as yet unadopted) Joint Committee's Program Evaluation Standards.[35][36]
Fetterman and Scriven agreed on the value of both internal and external evaluations. They also agreed on the importance of a focus on the consumer, although staff members, sponsors, and policy makers also have important roles to play in evaluation. Scriven, however, claimed that the evaluator must maintain distance from program participants to avoid bias.[37][38]
Chelimsky re-framed the discussion between Fetterman, Patton and Scriven, explaining that evaluations serve multiple purposes: (1) accountability; (2) development; and (3) knowledge. Scriven, and to a lesser extent Patton, focused on accountability, while Fetterman focused on development.[39]
Second wave
The second wave of debate and discussion emerged between 2005 and 2007. The primary critiques focused on conceptual and methodological clarity:
Cousins attempted to differentiate between similar approaches, e.g. collaborative, participatory, and empowerment evaluation. He asked whether EE is practical (focusing on decision making) or transformative (focusing on self-determination), and viewed self-evaluation as more likely to have a self-serving bias. He also noted the variability in attempts at empowerment evaluation.[40]
Miller and Campbell conducted a systematic literature review of empowerment evaluation. They highlighted types or modes of EE, as well as settings, reasons for use, selection processes and degrees of participation. They also highlighted practice variants depending on the size of the evaluation. They suggested that clients were selecting it for appropriate reasons, such as capacity building, self-determination, accountability, cultivating ownership and institutionalization of evaluation. However, they also found that approximately 25% of the reviewed evaluations were empowerment evaluations in name only. In addition, they argued for additional conceptual clarity.[41]
Patton accepted EE as part of the evaluation field and proposed that, given its established status, additional clarity distinguishing collaborative, participatory, utilization and empowerment evaluation would be fruitful. He acknowledged improvements ranging from refined definitions to the addition of the 10 principles, but was concerned that self-determination was not on the list. Patton applauded and recommended process use for empowerment evaluation, and accepted the contributors' commitment to forthrightly describing problems. He proposed greater emphasis on outcomes or results in EE.[42]
Scriven argued that self-evaluation is flawed because it is inherently self-serving, and rejected its use for professional development.[43] He questioned the ability of EE to actually empower people and recommended a neutral evaluator role. He suggested that internal and external evaluations are not compatible, and that empowerment, as well as randomized controls, are merely forms of ideology.[44]
Response to critique
Fetterman and Wandersman responded by attempting to enhance conceptual clarity, provide greater methodological specificity and highlight EE's commitment to accountability and outcomes. They acknowledged and applauded Miller and Campbell's systematic review of EE projects, while noting neglected or omitted case examples and questioning some of their methodology.
They claimed that the 10 principles contributed to conceptual clarity and that people empower themselves. They asserted that evaluations are inherently subjective and are shaped by culture and political context, and that EE is committed to honesty and rigor. EE is more inclusive than traditional evaluations, placing cross-checks on data and decisions. Participants often know more about problems than outsiders and have a vested interest in making their programs work. They claimed that internal and external evaluations can operate together effectively as additional cross-checks.
While the similarities among collaborative, participatory and empowerment evaluation were described in the first and second empowerment evaluation books, they recommended Cousins' tool to highlight the differences, focusing on depth of participation and control of technical decision making in the evaluation.[citation needed]
The most significant response to the critiques focused on outcomes. Fetterman and Wandersman argued that outcomes and results were important to EE. They highlighted specific project outcomes, including:
Outcomes
- The CDC funded a study using a quasi-experimental design that demonstrated improved outcomes as a result of empowerment evaluation.[45]
- Empowerment evaluation used in Arkansas's academically distressed schools increased standardized test scores.
- Native American communities built a wireless system and a digital printing press with support from empowerment evaluation.
- Stanford University's School of Medicine used EE to prepare for an accreditation site visit. Increases in student course ratings were statistically significant.[26]
- Arkansas saved millions in excess medical costs by applying empowerment evaluation to tobacco prevention programs. This resulted in legislation creating the Arkansas Evaluation Center.[46][30]
Scriven's assessment
Scriven agreed that external evaluators sometimes miss problems that are obvious to program staff members. He also stated that external evaluators have less credibility with staff than an internal evaluator does. As a result, he concluded, their recommendations are less likely to be implemented.[47]
Scriven agreed that EE contributed to improvements in internal staff program evaluations and that empowerment evaluation could make a contribution to evaluation if combined with third-party evaluation.[48]
Professional association affiliation and awards
Empowerment evaluation was a catalyst for the creation of the American Evaluation Association's Collaborative, Participatory, and Empowerment Evaluation topical interest group. Approximately 20% of the American Evaluation Association membership is affiliated with the topical interest group.[49] SAGE Publications, a social science textbook publisher, cited an empowerment evaluation book as one of their "classic titles in research methods".[50] Four empowerment evaluators received honors from the association: Margret Dugan, David Fetterman, Shakeh Kaftarian, and Abraham Wandersman.[51]
Notes
- ^ a b c Fetterman 1994.
- ^ Wandersman et al. 2005.
- ^ Fetterman 2012, pp. 98–107.
- ^ Fetterman 2011.
- ^ Fetterman & Bowman 2002.
- ^ Andrews 1996.
- ^ "videos". Archived from the original on 2011-09-27. Retrieved 2012-01-10.
- ^ Altman 1997.
- ^ Brown 1997.
- ^ Wild 1997.
- ^ Fetterman, Kaftarian & Wandersman 1996.
- ^ a b Fetterman 2001b.
- ^ Fetterman & Wandersman 2004.
- ^ See Donaldson, 2005 review of Empowerment evaluation principles in practice
- ^ Argyris & Schon 1978.
- ^ a b Patton 1997a.
- ^ Patton 1997b.
- ^ Dunst, Trivette & LaPointe 1992.
- ^ Zimmerman 2000.
- ^ Zimmerman et al. 1992.
- ^ Zimmerman & Rappaport 1988 See Bandura, 1982 concerning self-efficacy.
- ^ Alkin & Christie 2004.
- ^ Christie 2003.
- ^ Patton 1997b, p. 189.
- ^ Fetterman & Wandersman 2004, pp. 1–2, 27–41, 42–72.
- ^ a b c Fetterman, Deitz & Gesundheit 2010.
- ^ Fetterman 2009.
- ^ Fetterman 2001a.
- ^ Chinman, Imm & Wandersman 2004.
- ^ a b Fetterman & Wandersman 2007.
- ^ Lamont, A.; Wright, A.; Wandersman, A.; Hamm, D. (2014). "An empowerment evaluation approach to implementing with quality at scale". In Fetterman, Kaftarian, & Wandersman (eds.), Empowerment evaluation: Knowledge and tools for self-assessment, evaluation capacity building, & accountability (2nd ed.).
- ^ Fetterman 1995.
- ^ Scriven 1997.
- ^ David M. Fetterman (2002-07-03). "Empowerment evaluation". Evaluation Practice. 15: 1–15. doi:10.1016/0886-1633(94)90055-8.
- ^ "Program Evaluation Standards Statements « Joint Committee on Standards for Educational Evaluation". Jcsee.org. 2010-10-27. Archived from the original on 2013-01-15. Retrieved 2013-01-27.
- ^ Stufflebeam 1994.
- ^ Fetterman 2010.
- ^ Debate between Fetterman, Patton and Scriven is available online in text form from the Journal of MultiDisciplinary Evaluation, archived 2012-07-15 at archive.today. It was also recorded and is available in Claremont's virtual library.
- ^ Fetterman 1997.
- ^ Cousins 2004.
- ^ Miller & Campbell 2006.
- ^ Patton 2005.
- ^ Scriven 2005.
- ^ Smith 2007.
- ^ Chinman et al. 2008.
- ^ David Fetterman. "Arkansas Evaluation Center". Arkansasevaluationcenter.blogspot.com. Retrieved 2013-01-27.
- ^ Scriven 1997, p. 12.
- ^ Scriven 1997, p. 174.
- ^ Rodríguez-Campos 2012.
- ^ How SAGE has shaped Research Methods, p. 12. SAGE Publications
- ^ Patton 1997a, p. 148; American Evaluation Association Award Recipients, archived 2012-01-14 at the Wayback Machine.
References
- Alkin, M.; Christie, C. (2004). "An evaluation theory tree". In Alkin, M. (ed.). Thousand Oaks, CA: Sage. pp. 381–392.
- Altman, D. (1997). "Review of the book "Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability"". Community Psychologist. 30 (4): 16–17. Archived from the original on 2016-03-05. Retrieved 2012-01-09.
- Andrews, A. (1996). "Realizing empowerment in the evaluation of nonprofit women's services organizations: notes from the front line.". In Fetterman, D. M.; Kaftarian, S.; Wandersman, A. (eds.). Empowerment evaluation: knowledge and tools for self-assessment and accountability. Thousand Oaks, CA: Sage.
- Argyris, C.; Schon, D. A. (1978). Organizational learning: A theory of action perspective. Reading: MS: Addison-Wesley. ISBN 9780201001747.
- Bandura, A. (1982). "Self-efficacy mechanism in human agency" (PDF). American Psychologist. 37 (2): 122–147. doi:10.1037/0003-066X.37.2.122. S2CID 3377361. Archived from the original (PDF) on 2020-05-10.
- Brown, J. (1997). "Review of the book "Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability"". Health Education & Behavior. 24 (3): 388–391. doi:10.1177/109019819702400310. hdl:2027.42/68144. S2CID 208365233.
- Chelimsky, E. (1997). "The coming transformations in evaluation". In Chelimsky, E.; Shadish, W. (eds.). Thousand Oaks, CA: Sage.
- Chinman, M.; Hunter, S.B.; Ebener, P.; Paddock, S.; Stillman, L.; Imm, P.; Wandersman, A. (2008). "The Getting to Outcomes Demonstration and evaluation: An illustration of the prevention support system". American Journal of Community Psychology. 41 (3–4): 206–224. doi:10.1007/s10464-008-9163-2. PMC 2964843. PMID 18278551.
- Chinman, M.; Imm, P.; Wandersman, A. (2004). Getting To Outcomes: Promoting Accountability Through Methods and Tools for Planning, Implementation and Evaluation. Santa Monica, CA: RAND Corporation.
- Christie, C. A. (2003). "What guides evaluation? A study of how evaluation practice maps onto evaluation theory.". In Christie, C. A. (ed.). The practice-theory relationship in evaluation. San Francisco: Jossey-Bass.
- Cousins, B. (2004). "Will the real empowerment evaluation please stand up? A critical friend perspective.". In Fetterman, D. M.; Wandersman, A. (eds.). Empowerment evaluation principles in practice. New York: Guilford. pp. 183–208. ISBN 978-1593851156.
- Donaldson, S. (2005). Book review of Empowerment Evaluation Principles in Practice. ASIN 1593851146.
- Dunst, C. J.; Trivette, C. M.; LaPointe, N. (1992). "Toward clarification of the meaning and key elements of empowerment". Family Science Review. 5 (1–2): 111–130.
- Fetterman, D.M. (1994). "Empowerment evaluation". Evaluation Practice. 15 (1): 1–15. doi:10.1016/0886-1633(94)90055-8.
- Fetterman, D.M. (1995). "In Response to Dr. Daniel Stufflebeam's Empowerment evaluation, objectivist evaluation, and evaluation standards: where the future of evaluation should not go, where it needs to go, October, 1994, 321-338" (PDF). American Journal of Evaluation. 16 (2): 179–199. doi:10.1016/0886-1633(95)90026-8. Archived from the original (PDF) on 2016-03-06. Retrieved 2012-01-09.
- Fetterman, D. M. (1997). "Empowerment evaluation: a response to Patton and Scriven" (PDF). American Journal of Evaluation. 18 (3): 253–266. doi:10.1016/s0886-1633(97)90033-7. Archived from the original (PDF) on 2016-03-04. Retrieved 2012-01-09.
- Fetterman, D.M. (2001a). "A High Stakes Case Example: Documenting the utility, credibility and rigor of empowerment evaluation in a high stakes arena – accreditation". Foundations of Empowerment Evaluation. Thousand Oaks, CA: Sage.
- Fetterman, D. M. (2001b). Foundations of empowerment evaluation. Thousand Oaks, CA: Sage.
- Fetterman, D. M. (2009). "Empowerment evaluation at the Stanford University School of Medicine: using a critical friend to improve the clerkship experience". Ensaio: Avaliação e Políticas Públicas em Educação. 17 (63): 197–204. doi:10.1590/S0104-40362009000200002.
- Fetterman, D. M. (2010). Ethnography: Step by Step (3 ed.). Thousand Oaks, CA: Sage.
- Fetterman, D. M. (2011). Secolsky, C. (ed.). Empowerment evaluation and accreditation case examples: California Institute of Integral Studies and Stanford University. London: Routledge.
- Fetterman, D. M. (2012). Empowerment Evaluation in the Digital Villages: Hewlett-Packard's $15 Million Race Toward Social Justice. Stanford, CA: Stanford University Press. ISBN 978-0-8047-8425-2.
- Fetterman, D.; Bowman, C. (2002). "Experiential education and empowerment evaluation: Mars rover educational program case example" (PDF). Journal of Experiential Education. 25 (2): 286–295. doi:10.1177/105382590202500207. S2CID 145745339. Archived from the original (PDF) on 2020-02-18.
- Fetterman, D. M.; Deitz, J.; Gesundheit, N. (2010). "Empowerment evaluation: a collaborative approach to evaluating and transforming a medical school curriculum". Academic Medicine. 85 (5): 813–820. doi:10.1097/acm.0b013e3181d74269. PMID 20520033.
- Fetterman, D. M.; Kaftarian, S. J.; Wandersman, A. H., eds. (1996). Empowerment evaluation: Knowledge and tools for self-assessment and accountability. Thousand Oaks, CA: Sage.
- Fetterman, D. M.; Wandersman, A., eds. (2004). Empowerment evaluation principles in practice. New York, NY: Guilford. ISBN 978-1-59385-115-6.
- Fetterman, D. M.; Wandersman, A. (2007). "Empowerment evaluation: yesterday, today, and tomorrow". American Journal of Evaluation. 28 (2): 179–198. doi:10.1177/1098214007301350. S2CID 144677476.
- Miller, Robin Lin; Campbell, Rebecca (September 2006). "Taking Stock of Empowerment Evaluation: An Empirical Review". American Journal of Evaluation. 27 (3): 296–319. doi:10.1177/109821400602700303. ISSN 1098-2140. S2CID 46116945.
- Patton, M. Q. (1997a). "Toward distinguishing empowerment evaluation and placing it in a larger context". Evaluation Practice. 15 (3): 311–320. doi:10.1016/S0886-1633(97)90019-2.
- Patton, M. Q. (1997b). Utilization-focused evaluation: The new century text. Thousand Oaks, CA: Sage.
- Patton, M. Q. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA: Sage. ISBN 9780761919711.
- Patton, M. Q. (2005). "Toward distinguishing empowerment evaluation and placing it in a larger context: Take two". American Journal of Evaluation. 26 (3): 408–414. doi:10.1177/1098214005277353. S2CID 144167870.
- Preskill, H.; Torres, R. (1999). Evaluative inquiry for learning in organizations. Thousand Oaks, CA: Sage.
- Rodríguez-Campos, L. (2012). "Stakeholder Involvement in Evaluation: Three Decades of the American Journal of Evaluation". Journal of Multidisciplinary Evaluation. 8 (17): 69.
- Scriven, M. (1997). "Empowerment evaluation examined" (PDF). Evaluation Practice. 18 (2): 165–175. doi:10.1016/s0886-1633(97)90020-9. Archived from the original (PDF) on 2014-05-03. Retrieved 2012-01-09.
- Scriven, M. (2005). "Review of the book: "Empowerment Evaluation Principles in Practice"". American Journal of Evaluation. 26 (3): 415–417. doi:10.1177/1098214005276491. S2CID 145186515.
- Smith, Nick L. (June 2007). "Empowerment Evaluation as Evaluation Ideology". American Journal of Evaluation. 28 (2): 169–178. doi:10.1177/1098214006294722. ISSN 1098-2140. S2CID 145319825.
- Senge, P. (1990). The fifth discipline: the art and practice of organizational learning. New York: Doubleday.
- Stufflebeam, D. (1994). "Empowerment evaluation, objectivist evaluation, and evaluation standards: Where the future of evaluation should not go and where it needs to go" (PDF). Evaluation Practice. 15 (3): 321–338. doi:10.1016/0886-1633(94)90027-2. Archived from the original (PDF) on 2012-09-04. Retrieved 2012-01-09.
- Wandersman, A.; Snell-Johns, J.; Lentz, B.; Fetterman, D. M.; Keener, D.C.; Livet, M.; Imm, P.S.; Flaspohler, P. (2005). "The Principles of Empowerment Evaluation.". In Fetterman, D. M.; Wandersman, A. (eds.). Empowerment evaluation principles in practice. New York: Guilford Publications. p. 27. ISBN 978-1593851156.
- Wild, T. (1997). "Review of Empowerment evaluation: Knowledge and tools for self-assessment and accountability". Canadian Journal of Program Evaluation. 11 (2): 170–172. Archived from the original on 2016-03-06. Retrieved 2012-01-09.
- Zimmerman, M. A. (2000). "Empowerment theory: Psychological, organizational, and community levels of analysis". In Rappaport, J.; Seidman, E. (eds.). New York: Kluwer Academic/Plenum. pp. 2–45.
- Zimmerman, M. A.; Israel, B. A.; Schulz, A.; Checkoway, B. (1992). "Further explorations in empowerment theory: An empirical analysis of psychological empowerment" (PDF). American Journal of Community Psychology. 20 (6): 707–727. doi:10.1007/bf01312604. hdl:2027.42/116954. S2CID 34208013.
- Zimmerman, M. A.; Rappaport, J. (1988). "Citizen participation, perceived control, and psychological empowerment". American Journal of Community Psychology. 16 (5): 725–750. doi:10.1007/bf00930023. PMID 3218639. S2CID 7314471.