AEA

Evaluation 2024 Workshops

AEA is excited to present dozens of pre-conference workshops at Evaluation 2024: Amplifying and Empowering Voices in Evaluation. Register here to secure your spot; these workshops are offered at an additional cost.

Seats are limited for these workshops. 

Portland will be on Pacific Time during the conference.

Monday, October 21

Workshop 1001: Breaking Boundaries and Building Community: Increasing Equity in Evaluation Through Radical Recruitment, Hiring and Staff Retention Strategies

Speakers: Monique Liston; Sojourner White; Linetta Alexander Islam; Ebony Kirkendoll

9:00 a.m. - 4:00 p.m.

In today's rapidly changing world, the field of evaluation faces a significant hurdle: the lack of involvement from new and emerging perspectives. This not only jeopardizes diversity and sustainability but also hinders the advancement of evaluation itself. Evaluation 2024 is committed to tackling this challenge by elevating the voices of new evaluators and urging professionals at every career stage to embrace innovative ideas and practices in evaluation. Our professional development workshop provides a platform to explore and develop hiring, recruitment, and retention strategies that embrace and support new perspectives in evaluation positions.

Drawing from our organization's experience of completing four hiring cycles in the past five years, we have adopted a radical approach to hiring, recruitment, and retention, propelling us to become one of the fastest-growing organizations in our state. Gone are the days of rigid resumes and standardized applications. Instead, we invite prospective candidates to reveal their authentic selves through an open and inclusive application process. By prioritizing personal narratives and individual experiences over formal credentials, our innovative approach aims to uncover the unique talents and perspectives of each candidate.

During the workshop, we will delve into the principles and practices that underpin our organization's radical hiring philosophy. From valuing passion and potential over traditional qualifications to fostering transparency and trust, attendees will gain insights into cultivating a more human-centric approach to talent acquisition. We will discuss key components of the process, including developing job descriptions, crafting benefits packages, designing applications, and conducting interviews. By emphasizing the diverse skill sets and backgrounds sought for each role, attendees will understand how radical hiring can facilitate the formation of high-performing, multidisciplinary teams.

Furthermore, our organization prioritizes employee well-being and work-life balance, setting a new standard for organizational culture and employee satisfaction among evaluators. Ultimately, this workshop will challenge attendees to reassess their own hiring practices and consider adopting a radical approach to talent acquisition. By embracing authenticity, diversity, and inclusivity, organizations can attract top talent and foster a sense of belonging and community among their employees.
 

Workshop 1002: Principles-Focused Developmental Evaluation - SOLD OUT

Speaker: Michael Patton

9:00 a.m. - 4:00 p.m.

Principles-Focused Developmental Evaluation guides adaptive action, innovative initiatives, and systems change in complex dynamic environments. The essentials of this approach will be presented, examined, and applied. Participants will learn to distinguish Principles-Focused Developmental Evaluation from other evaluation approaches. The course will include special attention to the relevance and implications of evaluation adaptability and agility in times of uncertainty and turbulence, as in the global pandemic and accelerating climate emergency. The course will cover the niche and nature of principles-focused developmental evaluation; purposes and applications at local and international levels; the particular challenges, strengths, and weaknesses of this approach; essential principles and practices for designing and conducting principles-focused developmental evaluations; case examples; and new and emergent directions.

Tuesday, October 22

Workshop 1003: AI-Enabled Evaluation Methods - SOLD OUT

Speakers: Zach Tilton; Linda Raftree

9:00 a.m. - 4:00 p.m.

The workshop will consist of 5 mini-modules introducing participants to methods for integrating generative AI in their evaluation practice. Modules include 1) a primer on generative AI and its use in evaluation; 2) ethical and responsible principles for GenAI-enabled evaluation practice; 3) prompt engineering basics; 4) chatbots for theory-based evaluation; 5) AI-assisted multi-method analysis. Sessions will include lectures, practical demonstrations, interactive activities, and large-group discussions.

Artificial intelligence has augmented, and will continue to augment, the landscape of knowledge work—including program evaluation. This workshop equips participants with entry-level knowledge and practical skills to conduct evaluations in the age of AI and to practice AI-enabled evaluation. As AI technology advances, evaluators must develop fundamental AI literacy and expand their evaluation toolbox to remain relevant and competitive in the evaluation marketplace. Further, AI has the potential to translate into efficiency and effectiveness gains in evaluation processes and products—if integrated responsibly and thoughtfully. This workshop will provide participants with basic premises and principles for a baseline level of AI-enabled evaluation capacity. This workshop is for evaluation practitioners, managers, commissioners, and other MERL practitioners who have been integrating, or would like to integrate, various AI tools, techniques, and tips into their evaluation practice.
 

Workshop 1004: Airtable is Awesome!

Speaker: Kristin Cowart

9:00 a.m. - 4:00 p.m.

Are you looking for new ways to visualize data and effectively communicate meaningful results? Join this session focused on exploring the data collection, analysis, and intuitive dashboarding capabilities of Airtable, a dynamic cloud-based platform trusted by industry leaders such as Netflix, the American Red Cross, and Nike. This workshop will demonstrate how Airtable's cost-effective, user-friendly, and versatile platform can serve as a powerful tool for program evaluation. Bring your laptop and get ready to learn through real-world examples and hands-on activities, including live data collection and visualization, survey building, and dashboard creation, and see how Airtable can be a valuable addition to your evaluation toolbox. (See a test dashboard example here: https://tinyurl.com/AirtableBrightHorizon - signing up for Airtable is quick, easy, and free.)

 

Workshop 1005: Amplifying Equity in Evaluation: Transforming Compensation Practices

Speakers: Komani Lundquist Cedano; Corey Newhouse

9:00 a.m. - 4:00 p.m.

In the realm of professional evaluation, many commonly accepted compensation practices inadvertently reinforce unequal treatment for staff and contractors, creating a dissonance with the equity-centered values that organizations strive to uphold. As we aim to amplify and empower diverse voices in evaluation, it is imperative to critically examine and transform these practices to foster a more inclusive and equitable environment in our organizations. In this professional development workshop, we will delve into the intricacies of employee and subcontractor compensation, challenging longstanding practices and exploring equitable alternatives. Drawing from a comprehensive compensation study conducted by the presenters, we will provide attendees with the tools to detect and address inequitable compensation practices. Through interactive discussions and practical exercises, participants will learn how to implement "bias interrupters" to counteract ingrained habits and promote fairness. Attendees will engage in a reflective exercise to develop an inventory of their current compensation practices. Guided by the principles of equity and inclusion, participants will identify actionable steps they can take in the near and mid-term to align their compensation practices with their values, thereby amplifying and empowering voices in evaluation. By the end of this session, participants will be equipped with the knowledge and strategies to foster a more equitable and inclusive evaluation community, aligning their practices with the broader theme of the conference, and contributing to the advancement of the field.

 

Workshop 1006: Arts-Based Evaluation 101: Methodological Foundations

Speakers: Maya Lefkowich; Jennica Nichols

9:00 a.m. - 4:00 p.m.

Evaluators need better tools that honour diverse and emerging perspectives. Mainstream Western data collection techniques - like surveys and recycled interview questions - often fail to adequately engage and represent diverse ways of knowing and being. Our insights, experiences, and ways of expressing ourselves rarely fit into tidy boxes, discrete categories, and linear thinking structures. And, with low participation rates and research fatigue, no one wants to do another survey! Arts-based methods offer evaluators unique opportunities to elicit feedback in ways that are stimulating, joyful, and meaningful. But, arts-based evaluation comes with its own ethical dilemmas, skills, and considerations - requiring evaluators to have a solid theoretical and practical foundation. This fun and timely workshop offers a comprehensive introduction to using arts-based methods for data collection, evaluation planning, and reflective practice, with a deep-dive into poetry as one illustrative example. Participants will develop a solid methodological foundation for arts-based evaluation with an engaging blend of theory (short dynamic lectures), practice (applied case studies and activities), and interpersonal experiences (small group and partner exercises). And, content on "imaginative interviewing" will help evaluators understand how to use art and art-making to answer evaluation questions. Original content is designed to bolster evaluator competencies by exploring what arts-based methods are (and are not), how and with whom they work (and don't), ethical considerations (consent, ownership, risks), spicy debates and conundrums, real-world examples of successful projects and epic mistakes, and tips for beginning an arts-based evaluation practice. No artistic skill or experience with arts-based methods is needed.

 

Workshop 1007: Beyond Intent: Transformative Strategies for Effectively Engaging Communities

Speakers: Jordyn Wartts; Shavanna Spratt

9:00 a.m. - 4:00 p.m.

Community engagement is an essential component of impactful social change, yet many efforts fall short due to a lack of understanding, misalignment of goals, and unaddressed barriers that hinder meaningful interaction and collaboration. Recognizing these challenges, our workshop is structured to provide a comprehensive framework for participants to critically examine their own engagement practices, understand the nuanced needs of communities, and develop strategies that are both responsive and respectful of those needs.

"Beyond Intent" is not just another community engagement workshop. It stands out by addressing the heart of why most community engagement efforts are challenging and often ineffective. Through a blend of interactive exercises, reflective practices, and evidence-based strategies, this workshop goes beyond the superficial layers of engagement to uncover the root causes of disconnection and misunderstanding. It challenges participants to think critically about their intentions and the impact of their actions, pushing the boundaries of traditional engagement methodologies.

Upon completion, participants will possess a deep understanding of the principles underpinning successful community engagement, including transparency, trust, and mutual benefit. They will be equipped to conduct self-assessments and organizational reflections to recognize and mitigate biases and privileges that may impact their engagement efforts. Moreover, attendees will learn to identify and navigate barriers to engagement, utilizing practical tools such as ecosystem maps and decision trees to foster empathy and inclusivity and to measure readiness and accessibility. Participants will gain insights from facilitators who double as active members of their communities and subject-matter experts, integrating these perspectives into practical advice for early evaluators, community organizations, and anyone struggling to introduce themselves to communities. Finally, attendees will be left with tools to develop comprehensive engagement strategies, allowing them to prioritize community feedback and engagement in evaluations and development.

Participants will engage in a variety of exercises and activities suited to various learning styles, including small and large group discussions about the barriers that impede effective engagement, fostering an environment of learning and growth. Content will challenge attendees to conceptualize an engagement process that starts with self-assessment and leads to a prioritization of community needs and voices. By focusing on the challenges that make community engagement difficult, "Beyond Intent" offers a path toward more impactful, sustainable, and meaningful connections.

 

Workshop 1008: Consulting 101: An introductory workshop for evaluators who want to start consulting practices

Speakers: Matt Feldmann; Laura Keene

9:00 a.m. - 4:00 p.m.

Independent consulting. Side hustle. Entrepreneur. Small business owner. If these terms resonate with you and your goals, then this workshop is for you. More than 20% of AEA members are independent consultants or have independent consulting side jobs. This workshop will provide you with the key understandings needed to initiate an independent consulting practice, including niche identification, marketing approaches, organizational structures, and finances. Laura Keene and Matt Feldmann both have thriving consulting practices and will share their insights on how you can develop your practice through valuable samples, worksheets, and insider tips. This lively workshop will help set you up to take your next steps in developing an independent consulting practice.

 

Workshop 1009: Cost-Inclusive Evaluation: You Can Do It, I'm Doing It, and Here's How

Speaker: Brian Yates

9:00 a.m. - 4:00 p.m.

This workshop teaches cost-inclusive evaluation (CIE) with examples from the presenter's 50 years of experience conducting CIE for apprentice-training programs in health care, emergency assistance programs for LGBTQI+ and other human rights defenders in international settings, and consumer-centered programs for suicide prevention, substance use disorders, and mental health services for youth and adults. Illustrations of problems and solutions in CIE are drawn from the presenter's work in international assistance and in treatment and prevention programs for drug abuse, depression, weight control, and adolescent behavioral health. Qualitative and quantitative methods covered include evaluation of costs from multiple interest group perspectives, cost studies, cost-effectiveness analysis, and cost-benefit analysis. Social Return on Investment is viewed critically, along with traditional economic evaluation. The workshop includes examples of the potential for CIE to reveal hidden inequities, to inadvertently maintain or exacerbate those inequities, and to reduce and remove those inequities. After each topic is taught and illustrated, workshop participants apply their new knowledge to a cost-inclusive evaluation of their own choosing. Spreadsheets and tables to structure CIE are provided via a micro website developed by the presenter.
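For orientation only (this is the standard textbook framing of these methods, not material from the workshop itself): cost-effectiveness analysis relates cost to a natural unit of outcome, while cost-benefit analysis monetizes both sides:

\[ \text{CER} = \frac{C}{E}, \qquad \text{net benefit} = B - C \]

where \(C\) is total program cost, \(E\) is the units of effect achieved (e.g., cases prevented), and \(B\) is the monetized benefits.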

 

Workshop 1010: Hands-on Introduction to Outcome Harvesting - SOLD OUT

Speaker: Goele Scheers

9:00 a.m. - 4:00 p.m.

Outcome Harvesting (OH) is a participatory monitoring and evaluation method that is used to identify, formulate, analyse and interpret outcomes. OH starts by collecting (harvesting) evidence of what has changed (outcomes) and then works backwards to determine whether and how an intervention has contributed to these changes. This course explores the OH steps through hands-on exercises. It is designed for those new to OH or those aiming to refresh their understanding. At the end of the training, participants will be able to:

  • Explain the main concepts behind Outcome Harvesting, its steps and principles.
  • Formulate specific, verifiable and plausible outcome statements.
  • Apply the fundamentals of analysing and interpreting outcomes.

The training will be participatory and engaging. The trainer will give short interactive presentations to introduce each topic and share examples. These introductions will set the stage for practical, hands-on exercises where participants will apply the concepts in groups. Based on a case study and participants' own material, participants will engage in a role play exercise to simulate the harvesting of outcomes and practice how to analyse and interpret outcomes. The training will be enriched with group discussions and plenary sessions covering various facets of Outcome Harvesting, with the trainer providing in-depth insights and expert guidance throughout the experience.

 

Workshop 1011: Pathways to Healing: Equipping Evaluators for Truth and Reconciliation

Speakers: Kim van der Woerd; Sofia Vitalis

9:00 a.m. - 4:00 p.m.

This one-day workshop is designed to enhance evaluators' understanding of evaluation in the context of Truth and Reconciliation with Indigenous nations. The workshop aims to equip evaluators with the necessary knowledge, reflection tools, and ethical frameworks to evaluate effectively through a lens of Truth and Reconciliation, while fostering healing, accountability, and social transformation.

The workshop begins with an exploration of the historical and theoretical foundations of the impetus for Truth and Reconciliation in Canada, providing participants with a comprehensive understanding of the underlying narratives, beliefs, and political will that brought the culmination of colonial practices into place. Drawing on case studies and real-world examples, participants will examine the diverse forms and approaches to applying truth and reconciliation principles in evaluation, learning about truth commissions and the role of data sovereignty.

Through interactive discussions, group activities, and role-playing exercises, participants will explore key concepts and methodologies relevant to evaluation with a Truth and Reconciliation lens. This includes examining strategies for engaging diverse rightsholders, collecting and analyzing qualitative data, collaboratively determining definitions of success with communities to assess impact and outcomes, and navigating ethical dilemmas and power dynamics inherent in evaluation practice.

Special attention will be given to the unique considerations and challenges of practicing evaluation with a Truth and Reconciliation lens in Indigenous and underserved communities. Participants will reflect on their own practices and learn from a culturally responsive and trauma-informed approach to evaluation, grounded in principles of equity, inclusion, and social justice.

Throughout the workshop, participants will be encouraged to reflect on their own identities, biases, and positionalities as evaluators, and consider how these factors may influence their practice. Through reflective exercises and group discussions, participants will explore strategies for cultivating reflexivity, humility, and cultural humility in their work, and recognize the importance of ongoing learning and growth in the pursuit of social justice and reconciliation.

This one-day workshop offers evaluators a unique opportunity to deepen their understanding of truth and reconciliation processes and enhance their evaluation practice in this complex and challenging context. The workshop aims to contribute to the promotion of healing, accountability, and social transformation in communities affected by historical injustices and systemic violence.

Learning objectives:

  • Increase understanding of the historical and theoretical underpinnings of Truth and Reconciliation in Canada, including the narratives, beliefs, and political context that led to the establishment of reconciliation processes.
  • Reflect on one's own identities, world views, assumptions and biases, and positionalities as an evaluator, and explore strategies for cultivating reflexivity in one's work.
  • Increase understanding of the legacy of research in Indigenous communities, recognizing the importance of ongoing learning and growth in the pursuit of social justice and reconciliation.
     

Workshop 1012: Qualitative Inquiry in Evaluation: An Introduction to Core Concepts and Data Collection Methods

Speaker: Jennifer Jewiss

9:00 a.m. - 4:00 p.m.

This workshop introduces core concepts that provide an important foundation for the use of qualitative methods in evaluation. Three primary data collection methods are featured: individual interviewing, participant observation, and document review. Partner and small group activities are woven throughout the session to develop participants’ knowledge and skills in gathering data via these methods. Group discussions explore essential ethical and methodological considerations, including the practice of reflexivity to examine one’s positionality and subjectivity and to foster cultural humility and inclusivity. In addition, the workshop presents a practitioner-friendly conceptual model that illuminates five processes for enhancing the quality of qualitative evaluations and can serve as a valuable touchstone for future evaluation efforts. (Please note that due to the inherent constraints of a six-hour introductory workshop and the scope of the featured topics, data analysis is not covered in this session.)

 

Workshop 1014: Systemic Design Thinking for Evaluation of Social Innovations

Speakers: Mary McEathron; Janice Noga

9:00 a.m. - 4:00 p.m.

Historically, social innovation has based its design foundations in notions of predictability, control, and linearity. Evaluation followed with designs and methodologies grounded in linear assumptions. Over time, notions of problems as simple, complicated, or complex began to influence the design of programs and the evaluations that assessed them, but programs and evaluation continued to pretend that the world was linear. Yet, in truth, there is no non-systems world. More and more, the distinction between simple, complicated, and complex has proven to not be all that useful. In the real world, programs and evaluators work within complex situations. Always. Add to that the rapidly accelerating pace of change and we have programs seeking to address problems that are massively interconnected and under increasing onslaught from human, political, economic, and environmental stressors. Standing in the present, looking toward a future where change seems so extreme and so rapid, it can feel as if the ground is eroding below us. Even for programs – by the time they have implemented their work and obtained evaluation findings, conditions may have already shifted. They are building the bicycle while already riding it. Yet, funding streams, programs, and evaluation keep adhering to processes, designs, and timelines that, too often, are intended for circumstances and a time that no longer exist.

Systems thinkers interrogate the system they are studying. They listen to what the system has to say using methods that look much like the ones we already use to evaluate programs. But the intent, timing, and "who" are different. Systemic design thinkers do the same to determine what is going on, what needs to be put into play to address problems, how it should look, and who it should serve.

This workshop will cover the basics of systemic design thinking, systems thinking, and complexity. Group discussion and hands-on activities will focus on discovering the potential for synergy between design thinking and systems thinking with the intent of finding the "spark" or hybrid design space that can result from marrying these in evaluation practice. We will engage in integrating systemic design thinking, systems thinking, and evaluation in ways that empower us as evaluators to better attend to whole-systems ecologies and complexity in program design and outcomes. Participants will engage with frameworks and tools to enhance their understanding of systemic design thinking and evaluation as an instrument for listening and discovery, interrogating programs from a context of reflective practice performed in service of design and systems thinking.

The day will be structured to encourage ongoing reflection on what it also means to engage with the many diverse voices present in the system through the following questions:

  • In what ways do you need to prepare yourself to become an actor in the system? What role are you going to play?
  • What voices do you engage and listen to? How?
  • When do evaluators have a voice in program design? Can we help programs get to better outcomes if we work with them to engage in systems-oriented design thinking?


Workshop 1015: The Survey Design Studio

Speaker: JoAnna Hillman, MPH

9:00 a.m. - 4:00 p.m.

Welcome to the Survey Design Studio, where we create ONLY GREAT SURVEYS. This workshop is a deep dive into the artistic and creative world of outstanding survey design for program evaluation. Explore the art of survey design by composing evaluation purpose statements, crafting key evaluation questions, immersing yourself in the rich palette of question types, and sculpting surveys that capture the essence of program evaluations with flair and finesse. We’ll use logic and intuition to create surveys that resonate deeply with our audiences, illustrate creative approaches for elevating our survey artistic ability, and discuss considerations for special contexts. Prepare to unleash your survey design creativity as you dive into hands-on activities, transforming theoretical concepts into magnificent works of survey art. You’ll engage in collaborative critique sessions, where you'll refine your craft alongside fellow survey artists. Step into a vibrant world where we’ll transform surveys into masterpieces that unlock quality data for decision making. Get ready to unleash your survey creativity!

 

Workshop 1016: Thinking with your hands: Using LEGO® to build logic models

Speakers: Dana Benjamin-Allen; Danelle Marable

9:00 a.m. - 4:00 p.m.

How can evaluators make creating and operationalizing logic models fun, inclusive, interactive, and equitable? In this workshop, participants will learn a new, participatory way to develop logic models, and techniques learned can be applied across the entire evaluation cycle. They will be encouraged to think creatively to ensure that organizations build logic models to mitigate power dynamics, that all voices are heard, and that final products are used. Using the LEGO® Serious Play® (LSP) method, participants will build outcomes and activities using LEGO® bricks and connect them in an "if, then" framework that results in a three-dimensional relational diagram. Participants will learn new techniques to engage stakeholders in meaningful dialogue, stimulate creativity and innovation, foster reflection and sensemaking, build consensus, and enhance the validity and reliability of evaluation data. By incorporating LSP into their toolkit, evaluators can unlock new possibilities for conducting participatory, inclusive, and impactful evaluations.

Wednesday, October 23

Workshop 1017: Agile Data Management: Empowering Voices Through Automation

Speaker: Marcel Chiranov

8:00 a.m. - 10:45 a.m.

In this interactive workshop, participants will explore how to leverage agile principles and automated tools to enhance data management practices. We will focus on empowering diverse voices in evaluation by streamlining data processes and ensuring inclusivity. In today's world, fast access to timely, quality data can mean the difference between success and failure in many fields, and evaluation and project management are no exception. Volatile economic, social, and political environments affect projects' work in ways that are difficult to estimate without relevant data. We will present what an ideal online, automated data management system could look like. Basic data collection features include sending automated reminders to team members to complete their periodic reports, emailing project partners to ask for their feedback, and collecting data on mobile devices in online or offline mode for later upload to the cloud. Different people on a team have different responsibilities for their own or their team's data; sandboxing lets each person access and manipulate the relevant data and create tailored charts and dashboards. Being able to share tailored charts and dashboards in a secure way is an efficient means of communicating and promoting transparency.
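As a purely illustrative sketch (not the presenter's actual system), the automated-reminder feature described above might look like the following in Python, with a hypothetical team roster and an assumed mail server (all names and addresses are invented):

    import smtplib
    from email.message import EmailMessage

    # Hypothetical roster: whether each team member has filed this period's report.
    TEAM_REPORTS = {
        "ana@example.org": True,
        "bo@example.org": False,   # missing report -> gets a reminder
    }

    SMTP_HOST = "smtp.example.org"     # assumed mail server
    SENDER = "mel-system@example.org"  # assumed system address

    def send_reminders() -> None:
        """Email everyone whose periodic report has not been received."""
        with smtplib.SMTP(SMTP_HOST) as smtp:
            for address, submitted in TEAM_REPORTS.items():
                if submitted:
                    continue
                msg = EmailMessage()
                msg["From"] = SENDER
                msg["To"] = address
                msg["Subject"] = "Reminder: your periodic report is due"
                msg.set_content("Please submit your periodic report in the data system.")
                smtp.send_message(msg)

    if __name__ == "__main__":
        send_reminders()

In a real system the roster would come from the database of submitted reports rather than a hard-coded dictionary, and the job would run on a schedule.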

We will present how using an automated online data management system for Agile project management helped to obtain tangible outcomes in a project whose expected results were unclear at the beginning. Multiple iterations, trial and error, learning, and collecting feedback were possible with such a data management system; without it, they would have been too costly, too lengthy, and too resource-consuming.

We will present principles for selecting such a data management system and a few tricks for operating it. We will then make the connection with artificial intelligence: many people would like to use it, but with what data sets? Having your own data sets, collected with a known methodology under known conditions, makes them relevant to the organization's purposes and suitable for efficient AI usage.

Main benefits of using such a system:

1. Reduced Human Errors - Manual data collection can be prone to errors such as mistyping, duplication, or missing data. Automation eliminates these issues.

2. Improved Data Quality - By minimizing errors, the overall quality of the dataset improves. High-quality data is crucial for robust evaluation outcomes.

3. Time and Cost Savings - Manual data collection is time-consuming and labor-intensive, especially for diverse data requirements. Automation frees up human resources, allowing them to focus on higher-value tasks, and automation tools reduce maintenance costs.

4. Scalability - Automated platforms can handle large-scale data collection without straining resources. Whether you need to collect data from a few sources or thousands, automation ensures scalability.

5. Efficiency and Productivity - Automation streamlines repetitive tasks. This leads to improved productivity and faster data processing.

6. Technology-Driven and User-Friendly - Modern data platforms are user-friendly and require no face-to-face interaction with respondents.

7. Faster Decision-Making - Online data collection provides quicker access to insights, enabling faster decision-making based on real-time data.

 

Workshop 1018: An Evaluation Journey: Seven Stops to a More Transformative, Equitable, and Harm-Reduction Evaluation Process

Speakers: Julie Poncelet; Jude Kallick

8:00 a.m. - 10:45 a.m.

Have you been trying to figure out the best ways to build trusting, equitable relationships in evaluation with diverse communities and foster an inclusive environment where all voices are valued? Have you felt your approaches to evaluation have gone under-recognized or under-valued?

So, which approach should you use and when? Anti-oppression and Decolonizing Evaluation; Collaborative or Participatory Evaluation; Culturally Responsive and Equitable Evaluation; Culturally Responsive and Racially Equitable Evaluation; Equitable Evaluation; Feminist Evaluation; Gender Responsive Evaluation; Indigenous Evaluation; Transformative Evaluation; Trauma-Sensitive Evaluation; Systems Thinking and Evaluation; or Youth-Focused Evaluation.

“[Long-standing] structural and systemic racism and oppression of marginalized communities cause significant barriers to establishing and implementing research and evaluation in partnership with diverse communities. The [disregard] of diverse communities’ values and beliefs at social, cultural, religious, and spiritual levels, for example, inhibits the building of these partnerships. [This] and other experiences with racism, bias, and/or exploitation contribute to the communities’ mistrust, especially if historically they have actually been harmed or have not equitably benefited from their participation in research/evaluation studies.” (Adedoyin, A. C., et al. (2024). Culturally Responsive and Equitable Evaluation: Visions and Voices of Emerging Scholars. Cognella Academic Publishing, p. xix)

Whether you are a beginner, an expert, or anywhere in between, this workshop offers a dynamic and practical look at the core concepts that will help you to engage in a more transformative, equitable, and harm-reduction evaluation process. Together, we will explore where these approaches intersect throughout an evaluation process and the vital lessons and practices that can be applied to inform ourselves, our teams, and our efforts at each stage.

Jude and Julie will share information and engage colleagues using an interactive presentation, digital and analog polling to capture group experiences and consensus, group conversations, and hands-on individual and small group activities. They will begin the workshop with a brief overview of the diverse (aforementioned) approaches to evaluation and their intersecting values and practices. Next, they will present their uniquely designed “Evaluation Journey” with thought-provoking questions that will inspire a more transformative, equitable, and harm-reduction process at each of its seven stops: (1) Preliminary Understanding of Project Needs, (2) Defining and Contextualizing the Evaluation Project, (3) Collaboratively Planning, (4) Gathering Information, (5) Making Meaning of the Information Gathered, (6) Taking Action, and (7) Post Evaluation Project Reflections and Assessment.

Jude and Julie will then guide colleagues in individual and small group hands-on activities to explore two of the seven stops: (2) Defining and Contextualizing the Evaluation Project and (5) Making Meaning of the Information Gathered. The exercises are designed to help colleagues: consider and reflect on how the selected phases should respond to and engage diverse people, communities, and contexts equitably and without causing harm; and identify and recruit the experiences, expertise, and skills necessary to realize this. 

Colleagues will leave the workshop with user-friendly materials for practical application of the “Evaluation Journey” and a list of resources from emerging and underrepresented voices to support their equity and harm-reduction approach to evaluation across all stops.

 

Workshop 1019: Applying Equitable Systems Change (ESC) Learning and Evaluation Methods to Support Systems Improvement, Reform, and Transformation

Speakers: Margaret Hargreaves; Brandon Coffee-Borden

8:00 a.m. - 10:45 a.m.

Over the past decade, government, non-profit, and philanthropic entities have been working to address increasingly complex community, regional, and global problems, such as accelerating climate change, food insecurity, homelessness, dislocation, structural racism, and other inequities in access to resources, services, and opportunities. Such initiatives require organizational and collective leadership capacity, equitable change strategies, diverse partnerships, and equitable systems change learning and evaluation methods (Co-Impact, 2022: “Philanthropy for Systems Change Handbook”). In 2022, a team of NORC researchers (the presenters of this workshop) developed an Equitable Systems Change (ESC) learning and evaluation approach to advance the practice of evaluating equity-centered systemic and structural change initiatives. These ESC methods have been showcased in AEA sessions and professional development workshops in 2022 and 2023.

This workshop is the second in a series of two half-day professional development workshops. The first is a beginner-level workshop that introduces the ESC learning and evaluation framework and principles. This intermediate-level workshop describes the seven core elements of the ESC learning and evaluation approach and offers real-life examples of how the approach has been applied in different contexts. ESC elements include assessing and mapping the system of interest; articulating and clarifying the initiative's theory of systemic change; co-designing the evaluation with leaders and communities with lived experience; creating culturally and contextually appropriate methods and tools; and collecting, analyzing, interpreting, and using the data. This workshop provides mini-lectures, real-life case studies, and interactive exercises to help participants learn how to apply ESC methods to their own work.

 

Workshop 1021: Communities of Practice (CoP) to Advance Evaluation Engagement, Participation, Growth and Leadership

Speakers: Leah Neubauer; Thomas Archibald

8:00 a.m. - 10:45 a.m.

This interactive skill-building workshop will introduce Communities of Practice (CoPs) and demonstrate their application as a methodology for interrogating one's evaluation practice, advancing thinking about the evaluation profession, and considering evaluators' diverse, complex roles in society. Increasingly, evaluators are called to evaluate and participate in CoPs in in-person or virtual global settings. Grounded in critical adult education and transformative learning, this session will focus on CoPs, which engage learners in a process of knowledge construction and unlearning/relearning around common interests, ideas, passions, and goals. Participants will develop a CoP framework that includes the three core CoP elements (domain, community, practice) and processes for generating a shared, accessible repertoire of knowledge and resources. The three core elements and framework will provide a larger foundation to discuss monitoring, evaluation, and learning (MEL) and evaluative thinking. Co-facilitators will highlight examples of CoP implementation in MEL from across the globe in development, education, and community health through lenses of transformation. Participants will engage in a series of hands-on, inquiry-oriented techniques, analyzing how CoPs can be operationalized in their evaluation practice.

 

Workshop 1022: Cost Analysis - How to Do It and How to Use It

Speakers: Kristen Schubert; Ananda Young

8:00 a.m. - 10:45 a.m.

Understanding cost-economy analysis and cost-effectiveness analysis is crucial in the landscape of international development. Demand for these tools has surged due to USAID’s mandatory Automated Directives System (ADS) 201 requirement to use cost analysis with impact evaluations, alongside the emphasis placed by USAID’s chief economist on cost-effectiveness. This recognition among donors and practitioners underscores the importance of understanding cost-effectiveness for optimizing resource allocation and program effectiveness. However, there is a gap between this momentum and broader awareness of these methods, leading to confusion among many stakeholders regarding their purpose, results, and limitations.

To address this gap, we have developed a professional development lecture to introduce and build awareness among new and experienced evaluators and practitioners. This session will offer an introduction to cost-economy and cost-effectiveness analysis, emphasizing their practical applications.

Participants will explore the critical role of cost analysis in advancing sustainability and cost-effectiveness goals, gaining insight into industry-recommended processes and procedures for conducting thorough cost analyses. Through a blend of theoretical insights and practical exercises, attendees will develop an understanding of the tools, underlying data, and effective application of cost analysis processes. While participants will not emerge from the course as methodological experts in these tools, they will leave equipped as informed consumers of cost analyses.

Drawing on years of experience teaching costing methods at USAID with government officials, implementing partners, and graduate-level students, we have designed a course that aligns with best practices and is hands-on and discussion-oriented for an engaging learning experience.

Upon completion of this training, participants will walk away with a solid understanding of cost analysis principles and practical techniques. With this knowledge, they will be better equipped to contribute to evidence-based decisions around the use of scarce resources in international development.

 

Workshop 1023: Data Quality Management

Speaker: Anne Coghlan

8:00 a.m. - 10:45 a.m.

A major purpose of many program evaluations is to generate data for decision making. However, how can we be sure that our data are of good enough quality to make well informed decisions? While evaluators may receive training in aspects of data quality, overarching ways to enhance and manage data quality are rarely addressed. In this workshop, evaluators will be introduced to a comprehensive data quality management system for quantitative data, first developed by the Global Fund and several international development agencies, which consists of specific data quality assessment criteria and standard operating procedures. Through large and small group discussions, participants will first identify their own data quality issues. Participants will then review and relate their own experiences to certain assessment criteria and identify procedures for strengthening the quality of their data. Lastly, participants will review the basic components of a Data Quality Management Plan.

 

Workshop 1025: Developing All Types of Evaluation Budgets Using Checklists

Speaker: Guili Zhang

8:00 a.m. - 10:45 a.m.

This professional development workshop will teach participants to design the full range of sound evaluation budgets. The interactive, hands-on workshop will school participants in the six factors to consider in developing an evaluation budget; the ethical imperatives in budgeting an evaluation; evaluation budget line items (personnel, travel, consultants, supplies, equipment, services, and indirect costs); and developing different types of evaluation budgets (fixed-price budgets, budgeting under grants, cost-reimbursable budgets, cost plus a fee, cost plus a grant, cost plus a profit, budgeting under cooperative agreements, and modular budgets). The workshop will engage participants in using illustrative RFPs to apply the checklists in designing the full range of evaluation budgets, and will provide participants with relevant follow-up materials on how to obtain additional information and assistance related to designing evaluation budgets.
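To make the line-item arithmetic concrete, here is a minimal, purely illustrative Python sketch; all figures and the 25% indirect-cost rate are invented for this example and are not taken from the workshop's checklists:

    # Hypothetical line items for a small evaluation (all figures invented).
    DIRECT_COSTS = {
        "personnel": 42_000,
        "travel": 3_500,
        "consultants": 8_000,
        "supplies": 1_200,
        "equipment": 2_300,
        "services": 4_000,
    }

    INDIRECT_RATE = 0.25  # assumed indirect-cost rate applied to direct costs

    direct_total = sum(DIRECT_COSTS.values())
    indirect = direct_total * INDIRECT_RATE
    print(f"Direct costs:   ${direct_total:,.2f}")
    print(f"Indirect (25%): ${indirect:,.2f}")
    print(f"Total budget:   ${direct_total + indirect:,.2f}")

Roughly speaking, fixed-price, cost-reimbursable, and cost-plus budgets differ mainly in how such a total is negotiated and billed, not in this underlying arithmetic.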

 

Workshop 1026: Embodiment as a Qualitative Evaluation Methodology

Speaker: Scarlett Kingsley

8:00 a.m. - 10:45 a.m.

This workshop is designed for evaluators seeking to enhance their qualitative research skills through embodied methodologies. As a methodology, embodiment recognizes that human experiences are not solely cognitive but are also deeply intertwined with sensory perceptions, emotions, and physical sensations. This approach emphasizes the importance of understanding and incorporating the body's role in knowledge production, challenging traditional notions of detached observation, and emphasizing the researcher's active engagement with lived experiences.

Implementing embodied methodologies in an evaluative practice allows us to access richer, more nuanced data that better captures the complexities of human behavior, attitudes, and social interactions. Traditional evaluation methods often rely on verbal or written responses, which may limit the depth of understanding, especially in contexts where non-verbal cues and embodied expressions play a significant role. By incorporating embodied approaches, evaluators can gather more holistic and contextually rich data, leading to more comprehensive and insightful evaluations.

Understanding embodiment as a methodology also enables evaluators to navigate issues of power, privilege, and marginalization more effectively within their evaluation frameworks. Embodied methodologies encourage reflexivity, prompting evaluators to critically examine their own embodied experiences, biases, and positionalities that may influence the evaluation process and outcomes. This self-awareness enhances the ethical integrity of evaluations and promotes greater inclusivity and sensitivity to diverse perspectives.

Through interactive sessions and practical exercises, this workshop will equip participants with the knowledge and skills to integrate embodied methodologies into every step of the evaluation process, including design, data collection and analysis, and reporting.

 

Workshop 1027: Empowering Next-Generation Evaluators with a Step-by-Step Terms of Reference Checklist: A Workshop with Practical Exercises on How to Commission and Respond to Evaluation Proposals

Speakers: Thomas Scialfa; Ala'a Issa

8:00 a.m. - 10:45 a.m.

The relationship between a good Terms of Reference (ToR) and a high-quality evaluation has long been recognized; a strong ToR is crucial to evaluations being well designed, budgeted, and managed, and to meeting stakeholder needs. Development of Mercy Corps' Final and Mid-Term Evaluation ToR Checklists included a review of dozens of evaluation ToR guidance documents from a range of implementing, donor, and academic agencies. About half of these resources were just slight variations of the same content and so were not used. Among the 27 retained, 16 did not include checklists; they lacked the precision needed for entry- and mid-level monitoring and evaluation staff – and for those involved in commissioning evaluations – to produce a best-practice ToR. Eleven included a checklist; all of these were thoroughly reviewed and found to be an improvement over guidance alone, but they lacked some needed content and did not include clearly defined steps and guidance. Most were based on agency-specific policies and procedures and used agency-specific terms and language, making them less flexible for use by other agencies. Mercy Corps' mid-term and final evaluation checklists have now gone through many months of testing and improvement. They provide the ToR content and step-by-step tips needed for less-experienced evaluators – and non-evaluators such as program managers and finance/procurement department staff – to develop high-quality ToRs. These checklists are flexible and so can be used as they are or easily adapted for any organization's use. Mercy Corps has found these checklists to be good capacity-building tools. This half-day workshop aims to empower next-generation and mid-level evaluators to write a best-practice ToR. It is also relevant for non-evaluators who commission and/or use evaluations. After briefly presenting Mercy Corps' ToR checklists for mid-term and final evaluations, the workshop will facilitate group work to identify how sample ToRs can be improved using the checklists.

 

Workshop 1028: Empowerment Evaluation (self-assessment to facilitate desired outcomes)

Speaker: David Fetterman

8:00 a.m. - 10:45 a.m.

Empowerment evaluation is a stakeholder involvement approach to evaluation. It is aimed at learning and improvement. Empowerment evaluations help people learn how to help themselves and become more self-determined, by learning how to evaluate their programs and initiatives. Key concepts include a critical friend (evaluator helping to guide community evaluations), cycles of reflection and action, and a community of learners. Principles guiding empowerment evaluation range from improvement to capacity building and accountability. The basic steps of empowerment evaluation include: 1) mission: establishing a unifying purpose; 2) taking stock: measuring growth and improvement; and 3) planning for the future: establishing goals and strategies to achieve objectives, as well as credible evidence to monitor change. An evaluation dashboard is used to compare actual performance with quarterly milestones and annual goals. The role of the evaluator is that of a coach or facilitator in an empowerment evaluation since the group is in charge of the evaluation itself. The workshop is open to colleagues new to evaluation as well as seasoned evaluators. It highlights how empowerment evaluation produces measurable outcomes with social justice-oriented case examples ranging from eliminating tuberculosis in India to fighting for food justice throughout the United States. Additional examples include empowerment evaluations conducted with high-tech companies such as Google and Hewlett-Packard as well as work conducted in rural Arkansas and squatter settlements in South Africa. Employing lectures, activities, demonstrations, and discussions, the workshop will introduce the theory, concepts, principles, steps of empowerment evaluation, and technological tools of the trade. (See TED Talk about empowerment evaluation for more details.)

 

Workshop 1029: Engaging in Systemic Design Thinking for Evaluation: A Basic Toolbox

Speakers: Janice Noga; Mary McEathron

8:00 a.m. - 10:45 a.m.

Increasingly, programs find themselves addressing societal problems emerging from multiple root causes involving interconnected relationships and stakeholders. As a result, it is becoming critical that programs learn to embrace thinking about program design in a way that integrates systems principles with systemic design thinking, an approach to problem solving that embraces ambiguity, emergence, and uncertainty. And as program design shifts to embrace complexity, interconnectedness, non-linearity, and the importance of context, so, too, must evaluation. Applying a systemic lens to complex problems can help identify the dynamics of the surrounding system, explore the ways in which the relationships between system components affect its functioning, and ascertain which interventions can lead to better results. Systems thinking helps to demonstrate how systems are structured and how they operate, what lies between the different parts, the relationships that exist, and how patterns of systemic behavior become program outcomes.

Systemic design thinkers interrogate the system to determine what is going on, what needs to be put into play to address problems, how it should look, and who it should serve. They interrogate and listen to what the system has to say using methods that look much like the ones we already use, but the intent, timing, and "who" are different. Ultimately, the purpose of such listening is to understand the goals, perspectives, needs, and drives of system actors, stakeholders, and participants and to identify systemic forces and issues driving current challenges. Program stakeholders and evaluators are thus able to see the interconnectedness of component parts in a coordinated manner that emphasizes balance and "fit" of the interdependent components.

The challenge for evaluators is how to do this – first, in the role of evaluation designer and second, in the role of evaluator of program outcomes as a function of program design. How do evaluators tap into systemic design thinking principles to design evaluations? How do they then implement these principles to evaluate the degree to which programs themselves utilized systemic design principles in planning and implementation?

This workshop is for those of you who are interested in diving more deeply into the application of systemic design thinking in evaluation. Participants will engage with a set of frameworks and tools grounded in systemic design thinking. Through facilitated activities and case analysis, this workshop will provide you with hands-on practice in the application of systemic design thinking to evaluation practice. In addition to practical work, discussion throughout the workshop will encourage ongoing reflection on the following questions:

  • Consider the role of program design in this process. When do evaluators have a voice in program design?
  • How do we evaluate programs' use of systemic design and systems thinking? Can we help programs get to better outcomes if we work with them to engage in systemic design thinking?
  • How does systemic design thinking interact with systems thinking to inform evaluation design? Is there a hybrid design space within which we can work?

 

Workshop 1030: Engaging with Community Members for Equitable Evaluation: A Hands-On, How-To Workshop

Speakers: Susan Wolfe; Ann Price

8:00 a.m. - 10:45 a.m.

Engaging and collaborating with community members is more than conducting interviews and focus groups. Authentic community engagement happens when community members' voices are centered and amplified and when community members drive the design, implementation, and evaluation of solutions to community needs and asset-based activities to create stronger, more resilient communities.

This interactive workshop will use fun, engaging methods to teach evaluators to 1) assess how equitable and inclusive their current practices are; 2) authentically engage with natural (non-"professional") community leaders; and 3) use strengths-based participatory evaluation methods to increase collaboration. This hands-on, experiential workshop will engage participants in co-creating knowledge and skills they can use to apply a participative, equitable, and community-empowered approach to evaluation.

The facilitators will engage participants with Technology of Participation (ToP)© methods and employ a variety of teaching/learning strategies to engage adult learners. Each participant will receive a workbook with workshop content, exercises, templates, and resources. In groups of 8 to 10 individuals, participants will work alone, in pairs, and in small group activities in order to accommodate participants’ varying needs and abilities. Accommodations will be made for participants who are not comfortable engaging in interactive exercises as we have encountered this during past workshops. Workshop facilitators will use multiple engagement strategies including:

  • Think, Pair, Share small group discussions.
  • Interactive, liberatory exercises including simulations of exercises participants may use with communities. 
  • Individual exercises that allow for personal reflection.
  • Facilitated large group discussions using the Technology of Participation© methods.

 

Workshop 1031: Expand your system transformation quest: Two potent tactics based on your worldview

Speakers: Beverly Parsons; Katie Winters

8:00 a.m. - 10:45 a.m.

More and more, today's problems are framed as "structural" or "systemic". The evaluation field is awash with methods and tools to apply in practice, yet many are grounded in the dominant, mechanistic worldview that's at the heart of the problems we're facing. During this workshop, the presenters will offer two simple yet potent tactics to avoid this trap. We will begin by orienting participants to two basic, intertwined paradigms that underlie human-natural systems and are connected to today's poly-crises. By describing their historical roots and then contrasting them with regard to purpose, structure, and processes, we will show participants how to differentiate and strategically use them in the situations they are seeking to transform. Next, we will walk through example professional development pathways for evaluators to evolve into experienced system transformation practitioners. Framed as a system design quest, these pathways evolve through catalyzing events that spark insight and propel nonlinear jumps in system transformation capacity. Examples of catalyzing events drawn from projects spanning education, social welfare, public health, and environmental sustainability will introduce participants to four clusters of system science theories relevant to evaluation. We will touch on key aspects of general systems theory, cybernetics, and transitional system science theories (e.g., complexity); however, emphasis will be on elucidating a cluster of system science theories underrepresented in evaluation. These eco-relational system science theories will be discussed in greater detail, with emphasis on their importance to systems transformation in support of living systems that provide a foundation for moving through today's poly-crises. Participants will receive a reference list with recommended websites and publications to support their continued exploration of these ideas.

 

Workshop 1032: Facilitation Matters: Foundational Techniques for Evaluators

Speaker: Rachel Scott

8:00 a.m. - 10:45 a.m.

At times, evaluation can be overwhelming. What data should we collect? (program leaders might ask themselves). Now that we have data, what do we do with it? (another might ask). Embedding time for reflection and conversation is key to forging the link between doing an evaluation and using an evaluation. In this session, Dr. Rachel Scott from the Center for Research Evaluation at the University of Mississippi will share a set of facilitation protocols designed to help social programs collect data, interpret data, and contemplate how to use data to make positive change. Session attendees will actively participate in a series of facilitation protocols and leave with new tools to encourage storytelling and dialogue and to help collect and use data.

 

Workshop 1039: Learning to Love Your Logic Model: Better Planning, Implementation, and Evaluation through Program Roadmaps

Speaker: Tom J. Chapel, MA, MBA

11:30 a.m. - 2:15 p.m.

The bad rap on logic models in some quarters is well-deserved. What should be a flexible and practical tool often deteriorates into overly bureaucratic mandatory templates mired in terminology that puzzles both users and all but the most experienced evaluators. This course aims to recapture the original spirit and utility of logic modeling by emphasizing function over form. While we will cover the "usual suspect" components of the traditional logic model—activities and outcomes, inputs and outputs, mediators and moderators—we’ll introduce concepts step by step and, at each point, show how insights from that step contribute (or not) to a more thorough understanding of your program. More importantly, we’ll show how logic models—customarily a tool in program evaluation—are even more useful in setting, assessing, and course-correcting strategy and implementation, even before the first iota of data is collected. These “process use” applications, while not denying the importance of logic models in setting an evaluation focus, excite planners and implementers and make the evaluator a welcome participant even at the earliest stages of program formation.
 

Workshop 1043: Systems Informed Empowerment Evaluation (SIEE): An Integrative, Theory-Based Approach to Adapting Evaluation Practice to Context

Speakers: Gregory Phillips; Erik Glenn; Esrea Perez-Bill

11:30 a.m. - 2:15 p.m.

To propel the power of evaluation to focus on the story of the evaluand, we need to challenge evaluators to go beyond their evaluative work and commit to equity and justice. Theoretical frameworks are crucial for guiding the evaluation process; however, extant theories have been developed for specific contexts and are rarely revised in response to scientific, political, and structural change. Thus, it is essential to empower evaluators to critically review existing frameworks and to tailor theories that increase an evaluator’s ability to provide culturally responsive, rigorous, and actionable recommendations.

Systems-Informed Empowerment Evaluation (SIEE) is a theory-based, ecological framework that centers the priorities of community-stakeholders in the pursuit of social, economic, and environmental justice. As an approach to multi-site program evaluation, SIEE situates “the program” within a landscape of intersecting systemic dynamics. This ensures a broad, but relevant, understanding of a program’s context that enables the evaluator to remain responsive.

This workshop will highlight the need for adaptive evaluative methods and community inclusion in evaluation. This work draws on the SIEE framework and its key elements from Empowerment Evaluation, Systems Evaluation, and Transformative Evaluation as a pathway to knowing that leads to positive social change. As a result, participants will understand how SIEE can complement any evaluation approach by nurturing the aspects of self-determination necessary to propel community-stakeholder empowerment, embracing iteration throughout the project cycle to ensure utility, and centering the lived experiences of the minoritized as drivers of knowing.

This workshop is designed for individuals new to conducting evaluations responsive to the needs of communities, interested in learning the basics of community-engaged evaluation and related theories, or interested in participating in a collaborative environment designed to advance community-stakeholder empowerment within the field of evaluation as a whole.

 

Workshop 1049: Applying Futures Thinking and Methods to Evaluation

Speaker: Annette Gardner

8:00 a.m. - 10:45 a.m.

The past few years have demonstrated that our economy, climate, politics, and social order can change much faster than in past decades. Our present, and certainly our future, will continue to be volatile, uncertain, complex, and ambiguous (VUCA). For our evaluation clients, it is no longer enough to reflect on the past and the present in program development. Evaluators must actively anticipate what may happen in the future and feed that information back into decision-making and evaluation planning. Futures studies and foresight, the ability to use futures methods to inform strategy and decision-making, provide a rigorous and proven set of tools to perceive, make sense of, and act upon ideas about the future. In this professional development session, we present and demonstrate methods that can be readily adapted to evaluation practice: the Futures Wheel and Wind Tunneling with alternative scenarios. Participants will also explore the macro forces for change commonly referred to as ‘trends’ and the ‘weak signals’ that are just emerging. As a result, participants will learn a new way of thinking that will strengthen evaluative thinking, helping them anticipate and integrate future opportunities and challenges in their evaluation plans.

The pandemic was an important wake-up call for evaluators to be forward-thinking and to anticipate (and prepare for) the unexpected. Society was caught off guard and the consequences were devastating. Evaluators were no exception. In a rapidly changing world, evaluators need to work with their clients to think about possible futures and how they might be enabled, avoided, mitigated, or exploited. They need to develop nimble, forward-thinking designs that can navigate change as well as future-proof their theories, findings, and recommendations. Similar to cultivating their ‘evaluative thinking’ skills, evaluators need to engage in a new way of thinking about and using the future, or ‘futures literacy’ (Miller, 2018).

At the field level, this requires a shift in orientation away from ‘rear-view mirror’ thinking and assessing a program or policy ex post with limited thought to how to increase the program’s resilience in the face of great change. As Carden (2023) argues, evaluators are almost by definition trapped in the past, developing evaluation findings and recommendations that are an extension of the past and that often fail to incorporate novelty and changing context. In practice, evaluators can no longer assume projects will continue as is; they must replace their aspirational sense of reality with one that is significantly “post-normal” and often complex and chaotic (Schwandt, 2019). This means that simple theories of change and methods characterized by linear ‘if-then’ thinking preclude consideration of the larger macro forces for change commonly referred to as ‘trends’ and a wider awareness of the ‘weak signals’ that are just emerging.

An easy first step to opening up the evaluator’s ‘gaze’ to include the future in evaluation thinking and design is to adopt foresight methods that can support specific activities in the evaluation process. The Futures Wheel, a method that identifies the primary, secondary, and tertiary consequences of a specific trend or event, can be used to broaden participants’ understanding of change when developing a program theory of change. Alternative scenarios of the future in 2040 can be used to test the resilience and fit of specific evaluation recommendations. Additionally, these methods provide a systems perspective that characterizes relationships and impacts.

 

Workshop 1051: Evaluating Advocacy and Policy Change Initiatives: Concepts, Design Strategies, and Methods

Speaker: Jared Raynor

8:00 a.m. - 10:45 a.m.

How would you go about assessing whether the work of an organization contributed to a policy change, or whether its advocacy is making a difference? There is no cookie-cutter approach, but there are a lot of good ideas and a continuously growing bank of useful practices. Evaluation in the domain of policy advocacy continues to grow in importance and sophistication. It requires understanding the nuances of policy advocacy work, adeptness in navigating ever-changing circumstances, knowledge of appropriate methods, and awareness of one's personal biases. This workshop is designed to help evaluators who are familiar with basic evaluation principles apply them effectively in the policy advocacy environment. Using the most recent research, practice, and case studies, workshop participants will build their knowledge and practice of key concepts, definitions, designs, and tools. They will leave with knowledge and tools they can begin to apply to advocacy evaluation right away.

Several factors have fueled the need for skilled evaluators who can design appropriate advocacy and policy change (APC) evaluations to meet diverse stakeholder needs: ongoing foundation interest in supporting APC initiatives to achieve systems change and equity goals; evaluation of democracy-building initiatives worldwide; and the diffusion of advocacy capacity beyond the traditional advocacy community (such as nonprofits and service providers). Evaluators have met these needs with great success, building a new field of evaluation practice, adapting and creating evaluation concepts and methods, and shaping advocate, funder, and evaluator thinking on advocacy and policy change in all its diverse manifestations.

 

Workshop 1052: Honoring the Voices of Latine Communities through Culturally Responsive and Equitable Evaluation

Speaker: Lisa Aponte-Soto, PhD, MHA

11:30 a.m. - 2:15 p.m.

Enacting culturally responsive and equitable evaluation (CREE; Frierson et al., 2010) with diverse multinational Latine communities calls for evaluators to honor culture and context by centering love, authenticity, and healing practices. This workshop is structured in three main components. Part I will provide an overview of social justice evaluation theories and foundational principles of culturally responsive and equitable evaluation (CREE) with an emphasis on Latino Critical Race Theory and contemporary indigenous praxis-oriented paradigms for working with Latine communities. Part II will focus on self-reflection exercises to assess the evaluators’ positionality as CREE agents (Symonette, 2008). Part III will guide participants through applied case study exercises in small groups.

 

Workshop 1053: Impact: The Fundamentals No One Taught You (and Every Evaluator Needs to Know)

Speaker: John Gargani, PhD

8:00 a.m. - 10:45 a.m.

What is impact? It’s probably not what you think it is. In this workshop, participants will learn that impact has multiple definitions that fit together into a single framework that subsumes logic models, systems thinking, and experiments. They will learn how to use the framework to judge the credibility of impact claims and choose a “good enough” strategy for measuring impact. This is not a course on experimental design; rather, it will equip social investors, program designers, managers, and evaluators with the tools they need to reason correctly about impact. The workshop is divided into three sections: language traps and misconceptions, the concept of impact, and strategies for measuring impact. Participants will make light-hearted use of metaphors—time machines, washing of brains, the multiverse, the garden of forking paths, and all-seeing Zeus—to apply abstract concepts to practical purposes. Activities and games will reinforce what is learned. Students will have fun. By the end of the workshop, participants will have a more systematic understanding of what may be the most important and misunderstood concept in their professional lives. They will be able to (1) use impact vocabulary correctly, (2) distinguish between the concept of impact and impact measurement, (3) recognize there are different conceptions of impact and organize them in a single framework, (4) judge the credibility of impact claims and identify “impact washing,” and (5) make appropriate choices among the “big five” strategies for measuring impact. Participants will be provided with a workbook and access to free resources to continue their learning. Social investors, program designers, managers, and evaluators with an intermediate or advanced understanding of evaluation are welcome.
 

Workshop 1055: Scaling Impact: New Ways to Plan, Manage, and Evaluate Scaling

Speaker: John Gargani, PhD

11:30 a.m. - 2:15 p.m.

In this workshop, participants will learn a new approach to scaling the social and environmental impacts of programs, policies, products, and investments. The approach is based on the book Scaling Impact: Innovation for the Public Good, written by Robert McLean and John Gargani, and is grounded in their collaborations with social innovators in the Global South. The workshop goes beyond the book, reflecting the authors’ most recent thinking, and challenges participants to adopt a new scaling mindset. Participants will first be introduced to the core concepts of the book. Then, after each concept, participants will have a chance to practice what they learned by engaging in small-group, hands-on exercises drawn from their own professional settings. The workshop is intended as an introduction, and participants will be provided with free resources to continue their learning.

Participants should have a basic understanding of evaluation, either as a practitioner or user. They should understand what a logic model is and recognize that programs, policies, and products create impacts in complex environments. Participants may come from any field, sector, or functional role. Program designers, managers, and evaluators are welcome.

By the end of the workshop, participants will be able to define impact, scaling, operational scale, and scaling impact; use the four principles of scaling; address scaling risks; and apply the dynamic evaluation systems model.
 

Workshop 1056: The Interconnectedness of Place and Practice: An Experiential Workshop

Speakers: Jennifer Billman; Eric Einspruch; Mark Parman

8:00 a.m. - 10:45 a.m.

AEA 2024 calls us to consider how our work amplifies and empowers voices in evaluation through uplifting emerging, underrepresented, and typically unheard voices. Here we posit that one voice left out of this call, yet intricately impacting all evaluation work, is the voice of Nature. The neglect of Nature’s voice in evaluative thinking reflects the field’s anthropocentric tendencies. We need look no further than the U.N.’s Sustainable Development Goals to see the global concerns regarding societal and ecological welfare. Yet, too often in practice, the interconnectedness of the two gets overlooked. While it may seem obvious that Life Below Water (SDG 14) and Life on Land (SDG 15) impact Good Health and Well-being (SDG 3), drawing this interconnectedness into evaluation practice can prove challenging. So challenging that Patton (2021) decried the disturbing absence of evaluators’ attentiveness to the sustainability of our planet and humanity (p. 173). Yet, calls for recognizing the interconnectedness of people and place in practice have been around for decades. In his classic essay, Should Trees Have Standing?, Stone (1972) challenged the Western legal system’s notion that only matter in human form is a possessor of rights (p. 452). This notion of all of Nature (inclusive of humans) as a possessor of rights (Yurok Tribe, 2019) aligns with Indigenous science and philosophy yet is often neglected in Western evaluative thinking. Given that how one views Nature, as separate from or intimately interconnected with humans, is reflected in one’s values system, greater attention to Nature’s voice is needed. Indeed, Patton (2021) points out that “evaluators care about certain things and what they care about influences how they conduct evaluations” (p. 176). Simply put, if we care about the planet and all its inhabitants (living and nonliving), then our evaluation practice will reflect this care. But what is simply stated is not always simple to put into practice. This experiential workshop is designed to provide evaluators a creative space for intentional engagement with the Nature-Culture connection and evaluation. During the workshop, participants will move physically through natural spaces as they are guided through a series of contemplative activities designed to unite lessons from Nature with evaluative thinking. Participants will travel to the Hoyt Arboretum (or a similar outdoor area), where they will individually and collectively reflect on the role of place in their evaluative ontologies, epistemologies, axiologies, methodologies, and theories. Traversing the Arboretum, participants will stop at five locations where they will listen to Indigenous stories reflecting the Nature-Culture connection, read selected writings from the Nature canon and the evaluation literature, reflect on these writings in the context of place, and then collectively share insights gained. Through direct, guided engagement with Nature, participants will be able to recognize the Nature-Culture connection and take account of this interconnectedness throughout all aspects of their evaluative thinking.
 

Workshop 1020: Collaborative Evaluation

Speaker: Rita O'Sullivan

11:30 a.m. - 2:15 p.m.

In the 20 years since Practicing Evaluation: A Collaborative Approach was published (O’Sullivan, 2004), collaborative evaluation has gained ground in the field of evaluation while new techniques have emerged to further enhance its practice. This workshop will review the elements of collaborative evaluation and then interactively use examples from past evaluations to highlight and refine ways that evaluators can use the approach. It will particularly emphasize how the collaborative approach lends itself well to culturally responsive evaluation. Collaborative evaluation engages key program stakeholders actively in the evaluation process. Unlike distanced evaluation, where evaluators have little or no contact with program staff, collaborative evaluation deliberately seeks involvement from program stakeholders during all stages of the evaluation. A collaborative stance can strengthen evaluation results and increase utilization of evaluation findings. Additionally, programs participating in collaborative evaluations develop an enhanced capacity to consume and conduct evaluations, while evaluators gain a better understanding of the program. Further, the approach assumes that evaluation expertise within programs is developmental; thus, the degree of collaboration must vary with the nature and readiness of the program. Evaluations completed with this collaborative approach have yielded improved data quality, report writing, and evaluation use across a variety of programs spanning education, agricultural extension, environment, family support, food systems, and health. Moreover, the approach increases the resources available to the evaluation. Both emerging and experienced evaluators can use collaborative evaluation approaches and techniques to enhance the suite of tools by which they provide clients with quality evidence from their programs in ways that will lead toward enhanced outcomes.

 

Workshop 1024: Design and Conduct Sound Evaluations Using the CIPP Evaluation Model

Speaker: Guili Zhang

11:30 a.m. - 2:15 p.m.

This dynamic professional development workshop offers a comprehensive introduction to designing and implementing effective evaluations using the latest CIPP Evaluation Model. Through engaging, hands-on sessions, participants will gain the skills to plan, budget, contract, conduct, report, and assess program evaluations in alignment with both the CIPP Model standards and broader professional criteria for rigorous evaluations. Led by a distinguished co-author of the seminal text "The CIPP Model: How to Evaluate for Improvement and Accountability," this workshop provides an unparalleled opportunity to dive deep into the most current practices of the CIPP Model. Attendees will become familiar with essential checklists from the book, covering aspects such as design, budgeting, contracting, reporting, and metaevaluation. Participants will actively apply these concepts through group activities, including the use of a sample RFP to craft and evaluate a plan for context, input, process, or product evaluation based on the CIPP Model’s standards. Additionally, the workshop will offer valuable follow-up resources to support the continued application of these skills. This workshop is an essential experience for professionals looking to elevate their evaluation practices to new heights of effectiveness and accountability.

 

Workshop 1033: Facilitation Matters: Advanced Techniques for Evaluators

Speaker: Rachel Scott

11:30 a.m. - 2:15 p.m.

At times, evaluation can be overwhelming. What data should we collect? (program leaders might ask themselves). Now that we have data, what do we do with it? (another might ask). Embedding time for reflection and conversation is key to forging the link between doing an evaluation and using an evaluation. In this session, Dr. Rachel Scott from the Center for Research Evaluation at the University of Mississippi will share a second set of facilitation protocols designed to help social programs collect data, interpret data, and contemplate how to use data to make positive change. Building on workshops previously offered at the American Evaluation Association conference (2022 & 2023), we plan to engage with attendees to enhance their skills in designing facilitation protocols and to identify areas for continued growth and development of facilitation skills. Session attendees will actively participate in a series of facilitation protocols and leave with new tools to amplify and empower voices, promote dialogue, and help collect and use data.

 

Workshop 1034: Facilitation Skills for Evaluators

Speaker: Amy Griffin

11:30 a.m. - 2:15 p.m.

Evaluators wear many hats. A critical role is that of the facilitator. Facilitation is needed at every phase of the evaluation process, including planning meetings, participatory logic model development, development of data collection methods, qualitative data techniques (e.g., focus groups), and results dissemination. Quality facilitation builds trust between the evaluator and community partners. Convening effective and productive meetings sends a message of respect and can create a positive and energetic working atmosphere.

During this session, we will discuss how to prepare for various types of facilitation experiences and will provide considerations for selecting facilitation modes and methods depending on the facilitation goals and group dynamics. Participants will also engage in a self-assessment process of their facilitation skills. This is an interactive session. Participants will be provided the opportunity to plan and present.

 

Workshop 1035: Findings as fuel: Power your organization’s learning with evaluation data

Speakers: Amanda McWhorter; Elizabeth Jarpe-Ratner

11:30 a.m. - 2:15 p.m.

How can we transform our organizations into learning powerhouses, fueled by our evaluation findings? Evaluation data, findings, and recommendations should do more than sit on a shelf. They can drive program improvement, inform decision-making, and boost accountability to funders and stakeholders. Effective use of evaluation findings remains an elusive milestone for many organizations. In this workshop, participants will go deep into thinking about how to ensure their evaluation findings are used. The course presents a set of evidence-based factors known to influence organizations’ use of their evaluation findings. Participants will learn about the factors in four categories (instrumental, conceptual, enlightenment, and process) and explore examples of effective approaches. Participants will then apply the concepts to their own settings, reflect upon their own past and possible successes and challenges, learn from each other through facilitated discussions, and leave with a plan to foster improved use of evaluation findings within their own settings.

 

Workshop 1037: Introduction to Culturally Responsive and Equitable Systems Change Evaluation

Speakers: Brandon Coffee-Borden; Chandria Jones; Margaret Hargreaves

11:30 a.m. - 2:15 p.m.

Culturally responsive and equitable evaluation (CREE) and systems-thinking and complexity-science (STCS) informed evaluation are relatively new evaluation approaches that have received increased attention within the evaluation field. Linking these approaches into a coherent whole can increase the rigor, validity, reliability, and value of evaluation practice. Each approach provides unique but complementary ways of understanding how people, programs, and initiatives are influenced by cultural factors and power relations within complex institutional, community, and political systems. In so doing, they provide a platform to more accurately capture the “story” of complex initiatives and opportunities to increase their effectiveness and impact. This workshop will provide an introduction to these two approaches and their integration to advance the practice of evaluation with an equity-focused and systems-informed lens.

 

Workshop 1038: Launching Your Solo Evaluation Practice: Finding Your First Clients as a Freelancer - CANCELLED

Speaker: Patricia Steele

11:30 a.m. - 2:15 p.m.

This half-day workshop is designed for evaluators interested in starting their own solo practice and seeking practical guidance on finding their first clients as freelancers. Led by experienced evaluators who have successfully launched their solo practices, this workshop will provide participants with actionable strategies and tools to kickstart their entrepreneurial journey. The workshop will begin with an exploration of the current landscape for independent evaluators, including market trends and client sectors. Participants will learn how to identify their unique value proposition and target market, setting the foundation for attracting the right clients. Through interactive exercises and discussions, participants will develop their marketing and networking skills to effectively reach potential clients. They will learn how to create a compelling elevator pitch, leverage online platforms and social media, and build strategic partnerships to expand their client base. The workshop will also cover practical aspects of client engagement, such as conducting initial consultations, negotiating contracts, and managing client expectations. Participants will leave with a personalized action plan outlining concrete steps to find and secure their first clients. By the end of the workshop, participants will have the confidence and tools necessary to launch their solo evaluation practice and start building a successful business. Join us for this empowering workshop and take the first step towards realizing your entrepreneurial goals as an evaluator.

 

Workshop 1040: Let’s Slay those adolescent and youth-focused program evaluations! Crucial concepts in planning evaluations and integrating meaningful youth engagement approaches and methods

Speakers: Susan Igras; Rugyatou Kane; Soukeyna OUEDRAOGO; Mamadou Ndiaye

11:30 a.m. - 2:15 p.m.

We know that young people are the future citizens of their countries and the planet. Their passage from adolescence to adulthood marks a crucial moment for consolidating the attitudes, values, and behaviors that will equip them for a rapidly changing society as adults. While significant policies and programs are reaching and supporting the growth and social well-being of adolescents and youth, the evaluation of these policies and programs is surprisingly limited. Hence, opportunities are lost to provide evidence from youth, adult, and systems perspectives of what works and doesn’t work to improve the relevance and effectiveness of policies and programs for the substantial cohort of young people today. Participants in this highly interactive workshop will explore crucial concepts and principles of evaluating adolescent and youth-focused programs and strategies to meaningfully include and engage young people in evaluation and research, drawing from initiatives and innovative examples in North American and African contexts. It will focus on evaluation methods and tools that enable adolescents and young people to participate by offering alternatives that shift ways of communicating and, in many ways, enhance the quality of the information gathered while considering their levels of development and life contexts. Case studies, simulations, and role plays of data collection with participatory methods will highlight good adolescent and youth-focused evaluation practices and raise issues and methodological challenges from evaluation design through dissemination of findings. We aim to facilitate exchanges and debate between evaluators on good examples and possibly on less good ones, as well as on methods and approaches to encourage the authentic participation of adolescents and young people in evaluation. Adaptable to many contexts and disciplines, evaluators can use these tools across the spectrum of M&E activities, from needs assessment and program monitoring and evaluation to interpreting and communicating results with stakeholders.

 

Workshop 1041: Letting the Data Speak: Effectively integrating and synthesizing your data to tell the story

Speakers: Ghazia Aslam; Jonathan Jones

11:30 a.m. - 2:15 p.m.

Over the years, we have developed an approach, the Data Analysis, Integration, and Synthesis (DAIS) process, that efficiently and rigorously integrates and interprets data, aligning with the 2024 conference theme of fostering innovative practices in evaluation. This participatory approach allows us to integrate complex data streams, often collected and analyzed by multiple evaluators, ensuring that diverse data are brought together to tell a coherent story. The DAIS process is interactive and iterative. During a DAIS process, we display analyzed data visually to the evaluation team (using cards or sticky notes), and we work together to integrate it into initial findings. We have found that this process can overcome the challenge of moving from analysis to report writing and can help lead to a well-structured report with strong findings, a clear narrative reflected in the conclusions, and recommendations grounded in data.

In this workshop, we present the DAIS process and invite participants to practice each step, providing them with an opportunity to experience participatory strategies that enable effective data synthesis. We begin by presenting the approach with a brief lecture and discussion on organizing a DAIS workshop: What is it? What are the benefits? Who needs to be there? How will it be formatted? We will explain how to develop data analysis summaries, which comprise early emerging theme statements with supporting evidence and data sources. These data analysis summaries are the key inputs to a DAIS.

We will present a mock study with four key evaluation questions and data. Participants will work in small groups to practice developing emerging theme statements for each question. We will then practice the workshop steps, focusing mostly on how an evaluation team can affinity-map emerging themes into higher-level findings during a DAIS workshop. We will discuss the difference between findings, conclusions, and recommendations and practice developing strong conclusions and recommendations from the emerging findings. Finally, we will discuss good practices and strategies for moving from the DAIS to drafting a cohesive narrative in an evaluation report. Thus, through experiential learning, participants will experience how DAIS processes produce traceable, contextualized, and supportable evidence-based findings, conclusions, and recommendations appropriate for the intended audience, with a confident team standing behind the story the data tell. Post-workshop, attendees will receive access to digital resources and tools to implement DAIS in their evaluations, ensuring continuous learning and application of the workshop insights.

 

Workshop 1042: PARADISE BY THE DASHBOARD LIGHT: A Crash Course in Power BI

Speaker: Joe Travers

11:30 a.m. - 2:15 p.m.

Curious about Microsoft's Power BI data dashboard software but not sure how to learn it easily and use it with your evaluation data? Power BI is extremely powerful, but can be difficult to learn when you’re first starting out.

This workshop takes you from complete novice to complete confidence in knowing how to turn data into a dashboard: connecting to data, cleaning data (when needed), making charts and visuals, and building a dashboard that immediately answers the questions your report audience has about the data.

This workshop is 100% hands-on! We'll all make a simple (and beautiful) dashboard with some evaluation survey data together.

Participants MUST have a laptop with a recent version of Power BI Desktop installed. Power BI is a Windows-only program. It can be downloaded for free from https://powerbi.microsoft.com/en-us/desktop/ or the Microsoft Store. Workshop data will be provided to participants before the conference (if possible) or at the workshop itself.

Workshop info and material also available at https://aea24.substack.com/

 

Workshop 1044: The Future is Now: Some Adjustments in the Classic Logic Model are Needed

Speaker: Jack Phillips

11:30 a.m. - 2:15 p.m.

The classic logic model is the basis for many evaluation systems and models. It has been in use for more than a century. However, the demands for data and accountability in projects require adjustment to this classic approach to evaluation. For example, the issue of attribution is not always built into logic models, but it is necessary for credibility. When any project or program has improved an impact measure, other influences are in play, and a technique must always be in place to sort out the effects of the program we are evaluating.

Cost-benefit analysis is also becoming more of an issue as governments and NGOs struggle for funding. Some programs need to deliver more benefits than they cost or, at the least, break even. The classic logic model does not necessarily include this level of analysis but should include it as a possibility.

This session will explore these and other adjustments needed in the classic logic model. It will introduce a model that meets these challenges and has been used successfully by many agencies, governments, nonprofits, universities, and businesses. Using examples, case studies, interactive discussions, and activities, this session will engage the audience in understanding the need for change and explore how these changes can be implemented. This session does not suggest that your current evaluation approach should be replaced; rather, it will discuss the adjustments that should be made to meet current needs and challenges.

 

Workshop 1045: The Path to Inclusive Transformation: Reimagining Theories of Change and Learning to Drive Systems Change

Speakers: Michael Moses; Amanda Stek

11:30 a.m. - 2:15 p.m.

The social sector is more aware than ever that achieving sustainable impact is not as simple as implementing a discrete project or program. Rather, impact comes about at the systems level, through iterative, collaborative, and dynamic portfolio-level initiatives that bring together diverse sets of actors (funders, practitioners, researchers, and advocates) to strategize, learn, and adapt together over time. Dynamic theories of change – a fresh take on an old tool – when combined with emerging learning approaches, can help changemakers work together to integrate diverse perspectives and engage in the collective measurement, learning, and action needed to achieve impact in complex systems.

In this interactive workshop, attendees will be introduced to different measurement tools (such as sentinel indicators, progress markers, and causal link monitoring) and have the opportunity to incorporate them into inclusive learning approaches (before and after action reviews, learning logs, and strategy testing) as they practice mapping out and adapting systems-level theories of change. In doing so, they will develop their ability to elevate fresh perspectives in strategic learning efforts, such that the evaluation process integrates the insights, lessons, and priorities of different partners and allies. By the end of the workshop, participants will be equipped to bring a variety of approaches to bear in their efforts to lead inclusive, participatory evaluations that help changemakers work together to learn about and, over time, shape complex systems in a way that enables and motivates others to pursue their own impact trajectories.

 

Workshop 1046: The work starts with us: Shifting and sustaining our daily evaluation practices to be more equitable, transformative, and full of soul

Speakers: Chantal Hoff; Min Ma

11:30 a.m. - 2:15 p.m.

How often have you downloaded a checklist but never used it, or had a light bulb moment from a blog post but struggled to put it into practice?

The field of evaluation is undergoing important shifts as it examines equity within the practice and its role in the social change ecosystem. These shifts are being guided by numerous approaches and methods (e.g., Equitable Evaluation Framework, Trust-Based Philanthropy, Culturally Responsive and Equitable Evaluation, Data Equity Framework) that challenge evaluators to examine WHAT aspects of traditional research and evaluation practice need to change to be more culturally responsive and in better service of equity.

Even when we know WHAT to do, many evaluators face challenges in figuring out HOW to put it into practice. Perhaps we’ve made equity an explicit value or we’ve cited an equity-centered framework in our project scope or evaluation design. Perhaps we’ve even recognized that uplifting diverse perspectives within our organizations and our communities is an essential piece of that puzzle. How are we actually doing that in practice? How are we sustaining that over time?

During this highly interactive workshop, we will focus on the how. Specifically, we will work together to answer the question: How can our daily actions be in better service of more equitable, transformative relationships with 1) communities and people most impacted by the work, 2) clients and primary evaluation users, and 3) our own evaluation teams?

In the first half of the workshop, participants will engage in self-reflection to examine their personal equitable evaluation practices and how those practices shape relationships within their teams, organizations, and/or with collaborators or clients. Participants will continue to reflect, share, and identify new ideas for practice through small group discussions. In the second half of the workshop, participants will learn how to move from reflection to action through real-world examples. The facilitators will share case studies and examples from our team on how we’ve used equity-centered tools to transform the way we work with each other and ultimately the way we make a difference for our clients and the communities they serve.

This workshop is grounded in foundational principles of data equity and draws inspiration from transformative evaluation, culturally responsive and equitable evaluation approaches, arts-based methods, and appreciative inquiry. Participants will leave this workshop newly inspired and equipped with tools and tangible skills on how to incorporate and sustain these approaches within their evaluation practice.

 

Workshop 1047: Theory, Practice, and Praxis for Liberatory LGBTQ+ Evaluation

Speakers: Gregory Phillips; Esrea Perez-Bill; Erik Glenn

8:00 a.m. - 10:45 a.m.

This advanced, project-based workshop will provide a theoretical, practical, and justice-focused approach to what we describe as LGBTQ+ Evaluation. LGBTQ+ Evaluation is not merely a set of content knowledge, but a unique framework and body of theory through which to approach our practice. This workshop is designed to provide attendees with hands-on, collaborative, collective opportunities to reflect and build upon LGBTQ+ liberation in their own practices. We take an inquiry-based approach to teaching LGBTQ+ Evaluation by encouraging attendees to thoughtfully and critically interrogate what it means–in theory, practice, and praxis–to conduct evaluation with LGBTQ+ communities in a way that is culturally responsive, equitable, and transformative. Liberatory adult education theories and texts, such as Pedagogy of the Oppressed, will guide both curriculum and teaching throughout the session. Participants will be encouraged to practice creativity, experiment with anti-oppressive practices, and engage in meaningful and experiential storytelling.

Workshop 1048: Transformative Mixed Methods Design in Evaluation: Increasing Justice

Speaker: Donna Mertens

11:30 a.m. - 2:15 p.m.

Evaluators can consciously address inequities in the world by the way they design their evaluations. Transformative mixed methods designs are explicitly constructed to serve this purpose. This workshop is designed to teach the use of mixed methods for transformative purposes to better address the needs of members of marginalized communities, such as women, people with disabilities, members of the LGBTQ community, people living in poverty, racial/ethnic minorities, and religious minorities. Participants will learn how to use a transformative lens to identify those aspects of culture and societal structures that support continued oppression and how to apply mixed methods designs to contribute to social transformation. Interactive learning strategies will be used, including whole-group discussion and working in small groups to apply the design of a transformative mixed methods evaluation to a case study.

 
