

Thank you for attending the 2019 MAE Conference.

We are excited you are here to join us. Use the links below to navigate our online agenda. If you have any questions, please ask an MAE board member or go to the registration desk.

Session Descriptions

Sponsors & Partners


Session Descriptions

Breakout Sessions

Ignite+ Sessions

Poster Sessions


Keynote

Topic: Having Fun with Evaluation! | Wendy Tackett | Slides

  • How can evaluators use more interesting processes that provide credibility to the work while also involving clients strategically to ensure that the work is meaningful, useful, and used to guide decision-making and improve outcomes?
  • How can evaluation recipients have an active voice in demanding that evaluation be more than just a text-laden report provided at the end of a fiscal year?
  • How can evaluation be fun?

Breakout Sessions

Embedding FUN to Facilitate Evaluation Use | Wendy Tackett | Slides

Sometimes evaluators get so focused on the evaluation contract, or on performing the work agreed upon in that contract, that we forget the ultimate purpose of our evaluation work: to provide an evaluation process that is useful and findings that are used. How, as evaluators, can we bridge that gap and create an environment where evaluation is transparent, understood, and embraced? By incorporating fun activities and a fun attitude into the evaluation work from the very beginning, you can strategically create an atmosphere where clients view evaluation as meaningful and useful and draw on it to guide their decision-making and improve outcomes.

Through this session, we will talk about why embedding fun helps people think creatively and facilitates the use of evaluation findings. Participants will also take part in an activity they can use to introduce evaluation to their clients, students, an advisory group, or any group of people who will be involved in an evaluation. Participants will leave with the understanding, directions, and experience to implement this activity on their own with any group.

Cost Analysis 101 | Tomoko Wakabayashi, Jill Claxton, Beth Hardin 

In this session, we will introduce the basic steps involved in conducting a cost analysis. These include identification of 1) an evaluation problem; 2) alternatives; 3) the audience; and 4) the type of cost analysis to use (Levin, McEwan, Belfield, Bowden, & Shand, 2018). We will also introduce audiences to the four major approaches to cost analysis, their purposes, and when to use them. The four approaches are: 1) cost-effectiveness (CE); 2) cost-benefit (CB); 3) cost-utility (CU); and 4) cost-feasibility (CF). Using a mock sample, we will demonstrate the ingredients method, which is used to collect all potential expenditures associated with an intervention, such as personnel, facilities, equipment, materials, and training. We will introduce our audiences to internet resources, including those on the Center for Benefit-Cost Studies of Education (CBCSE) website, such as the CBCSE interview protocols and the free CostOut tool. Lastly, we will facilitate discussion and reflection on the strengths and limitations of cost analysis.
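
To make the ingredients-method arithmetic concrete, here is a minimal sketch (not part of the presenters' materials; all ingredient names, costs, and effect figures are invented for illustration) of tallying mock ingredient costs and computing a simple cost-effectiveness ratio.

```python
# Minimal illustrative sketch of the ingredients method (hypothetical figures only).
# Each ingredient is a category of resources the intervention consumes.
ingredients = {
    "personnel": 85000.0,   # assumed annual staff cost
    "facilities": 12000.0,  # assumed space rental
    "equipment": 4500.0,
    "materials": 2300.0,
    "training": 6000.0,
}

total_cost = sum(ingredients.values())

# Hypothetical effectiveness measure: number of students reaching a benchmark.
students_reaching_benchmark = 140

# Cost-effectiveness (CE) ratio: total cost per unit of effect.
ce_ratio = total_cost / students_reaching_benchmark

print(f"Total intervention cost: ${total_cost:,.0f}")
print(f"Cost per student reaching the benchmark: ${ce_ratio:,.0f}")
```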

Where to Find Best Practices? Lessons from the Field | Ignacio D Acevedo, Taylor Crisman, K. Malulani Castro | Slides

Evaluation often requires comparing a target (e.g., program, organization) against appropriate best practices and/or standards. In some cases, identifying such practices or standards is straightforward (e.g., national standards). However, evaluators will frequently encounter instances in which identifying best practices is complicated by the novelty of an evaluation target and/or the context in which the evaluation target occurs. Based on several applied case studies, the goal of this session is to provide participants with guidance on how to identify and/or develop appropriate standards or best practices against which to compare an evaluation target. We will specifically address approaches that are useful in situations where the evaluation target is novel and/or innovative, and in situations where considering culture and/or context is of paramount importance.

A Crash Course in Developing Survey Questions | Lyssa Wilson Becho, Lori Wingate | Slides

Surveys are a valuable source of evaluation data. However, obtaining quality data relies heavily on well-crafted survey items. Although writing survey questions may seem simple on the surface, anyone who has struggled to find the right response scale or wrestled with poor-quality data produced by poorly written survey questions knows how critical and complex survey development really is. Writing good survey questions requires more than just common sense.

In this hands-on workshop, participants will learn the art and science of developing and improving questionnaire items to maximize the accuracy and utility of survey data. Adult learners gain practical skills best through application rather than listening to a lecture. Therefore, this workshop begins by asking participants to critique an example survey. From previous workshops, we have found that participants benefit more from identifying what is wrong with a poor survey than from starting by crafting their own questions. After working individually, the whole group will walk through each question and translate participants’ critiques into key considerations for developing survey questions. To help participants apply what they learn in this workshop, we will provide them with a handout of survey do’s and don’ts to use in their future survey development work.

The materials for this workshop have been previously tested during a 3-hour workshop at the 2018 Advanced Technological Education Conference, as well as a 30-minute presentation at the same conference. Both presenters have extensive experience giving interactive presentations as workshops, lectures, and webinars.

Automated Excel Template to Make the Evaluation Budgeting Process Easy | Janet Ray, Maria Schmieder | Slides, Sample Budget, Agenda

This session, “Evaluation budgets made easy using an automated template tool,” will give attendees a hands-on opportunity to create a budget. Participants should bring their laptops and be ready to create a budget during the demonstration. Attendees will learn key budget elements, find out what a reasonable evaluation budget looks like, and be able to create a budget in half the time. Whether you are a funder, a nonprofit, or a consultant, you need fast and simple tools to estimate the cost of an evaluation project.

Qualitative Data Analysis: Using the Right Tools for the Job | Jeffery Hillman, Ouen Hunter | Slides, Handout

Qualitative inquiry was, in times past, often considered a lesser counterpart to quantitative methods. Guba and Lincoln (1989) kicked the door open on qualitative evaluation through the lens of responsive constructivist evaluation. This framework of evaluative thinking has helped bring understanding to the value of systematic qualitative research, with the intent of creating personalized, rich descriptions of programmatic effects from the viewpoint of stakeholder groups (Patton, 2015). Quantitative methods dominated the field for decades and required a specific skill set. Equally, qualitative inquiry necessitates a specific skill set for data analysis.

This workshop-style session will focus on qualitative methods for data analysis using thematic identification and categorization. First, a short introduction will cover the key differences between creating inductive and deductive themes, along with why one might select one type over another. Second, readily available tools for coding electronically will be discussed. Last, a brief description of transcription styles and pitfalls will be given, followed promptly by a practice session using a transcribed section of an interview. Participants (individually and as a group) will be asked to code the interview deductively while looking for emergent themes. The session will end with participants reporting their completed work and debriefing the positives and challenges encountered in the process. The focus will be on encouraging open sharing in a learning environment.
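
As a rough illustration of what coding electronically with a deductive codebook can look like (this sketch is not from the session materials; the themes, keywords, and transcript lines are hypothetical), a few lines of Python can tag transcript segments against predefined themes and flag unmatched segments for review as possible emergent themes.

```python
# Hypothetical deductive codebook: theme -> keywords that signal it.
codebook = {
    "program_satisfaction": ["enjoyed", "liked", "helpful"],
    "barriers": ["difficult", "couldn't", "barrier", "hard to"],
    "staff_support": ["staff", "mentor", "teacher"],
}

# Hypothetical transcript segments (one item per interview response).
segments = [
    "I really enjoyed the workshops and found the staff helpful.",
    "It was difficult to attend because of my work schedule.",
    "My little brother came with me sometimes.",
]

for segment in segments:
    text = segment.lower()
    codes = [theme for theme, keywords in codebook.items()
             if any(keyword in text for keyword in keywords)]
    # Segments matching no deductive code are candidates for emergent themes.
    print(codes if codes else ["<review for emergent theme>"], "-", segment)
```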

Using Cognitive Interviewing to Improve Survey Questions | Adrienne Adams, Jodi Peterson

Evaluators commonly use surveys to collect evaluation data. When there are no appropriate existing measures, evaluators write new survey questions. The wording of those questions has implications for the accuracy and usefulness of the resulting data. Cognitive interviewing is a tool to help ensure that participants understand the survey content to mean what you intend it to mean. It is also helpful for ensuring that survey design choices enhance rather than hinder data quality. In this workshop, we will explain the purpose and methods of cognitive interviewing, share examples from our work, and give attendees the opportunity to practice key cognitive interviewing techniques. Attendees will leave the workshop with a basic understanding of cognitive interviewing methods and resources to use the technique in future projects.

Management and Communication Strategies: How to Keep Your Clients In the Know | Kelly Robertson, Lyssa Wilson Becho | Slides, Handout

Evaluation can be a messy process. Clients often struggle to understand what evaluation is, let alone keep the details of the evaluation process straight. Even on the evaluators’ side, it is easy for the lines between evaluation projects to blur for busy evaluators switching among them. Miscommunication is all too common given the complex details that need to be communicated between evaluator and client. This interactive lecture will help to remedy these issues. Presenters will share practical tips and tricks on how to improve evaluation management and communication with clients throughout the evaluation process, with plenty of time for questions and conversation. Participants will walk away with practical resources to help them apply what they learn.

New and experienced evaluators can benefit from sharing useful strategies to guide the evaluation planning process. The presenters will share their practices of setting up budgets in Excel at the task level (to ensure budgets are realistic and sufficient), offering clients an evaluation menu (allowing clients to choose from evaluative activities that fit their needs and budgets), providing basic templates for what to include in a scope of work, and using a list of questions to ask in the first client meeting (to help efficiently collect the information needed to draft a scope of work and provide a foundation for future communication and collaboration). Throughout the planning process, presenters will emphasize approaches that encourage authentic and active engagement from clients, rather than having clients blindly agree to whatever the evaluator proposes.

Several suggestions for effective management throughout an evaluation’s implementation will also be presented. Tips will focus on how to set up effective communication practices, such as monthly evaluation updates and identification of a contact person for both the client and the evaluation team. Suggestions for troubleshooting will also be discussed, such as what to do if clients become non-responsive. Evaluation plan cheat sheets (quick-reference guides that contain the most important information from an evaluation management plan) will be presented as a strategy to help evaluators keep track of evaluation activities and better communicate the details of an evaluation plan with clients. Presenters will share examples from their own work and discuss how they leverage tables in new ways to concisely and effectively communicate information. Suggestions include using tables to create evaluation matrices, communicate the strengths and weaknesses of evaluation approaches, show connections between data collection methods, and present evaluation findings. Client feedback forms and an evaluation recommendations section in the evaluation report will also be presented as strategies to improve future evaluations.

Overall, this interactive lecture is meant to share practical strategies for managing evaluations and effective communication with clients. While presenters will share examples and lessons learned from their own work, they encourage participants to share their experiences as well.


Ignite+ Sessions

PhotoVoice on a Budget | Ouen Hunter, Emma Perk 

The inclusion of program participants’ voices is crucial in evaluation. Photovoice, a participatory action research method, encourages participants to provide their unique insights by asking them to answer a question using photographs and accompanying descriptions. This method of inquiry has been useful in gathering information from individuals experiencing homelessness (Bender, Begun, Dunn, Mackay, & Dechants, 2018; Pruitt et al., 2018) and health issues (Warne, Snyder, & Gillander Gådin, 2013).

A typical photovoice project includes an introduction and a display of photographs. While working on the evaluation of a summer precollege program with limited funding and time, we successfully implemented a small-scale photovoice activity. The success of this small-scale photovoice evaluation, together with our previous experience with the method, indicates that even with limited time and finances, evaluators can still gather rich information from their participants.

This ignite session will provide a high-level overview of how we implemented the photovoice evaluation and our lessons learned.

Everyday Indicators | Zach Tilton

The 15-minute ignite presentation will be divided into three sections: 1) an introduction to the history, rationale, and methodology of the Everyday Peace Indicators framework (5 minutes); 2) a discussion of top-down and bottom-up evaluation systems, exploring issues such as proxies and assumptions, scale and replicability, rigor and power, and representation and marginalization (5 minutes); and 3) considerations for audience members on how to apply this approach to their existing evaluation portfolios.

Evaluating Systems Change Using the Most Significant Change Method | Miles McNall, Trevor Strzyzykowski | Slides

When the object of an evaluation is a complex initiative operating in a dynamic environment involving multiple actors engaged in a variety of interrelated activities, it can be difficult at the outset to identify the full range of outcomes that might be anticipated, severely limiting the utility of predetermined outcome measures. Given that systems-change initiatives also frequently take place in the context of other related efforts that might yield similar outcomes, it can be challenging to attribute any given outcome uniquely to the evaluand. In such circumstances, retrospective approaches that cast wide nets to capture outcomes that might be plausibly linked, although not necessarily uniquely attributable, to the evaluand can be helpful in documenting both anticipated and unanticipated outcomes. In recent decades, several such approaches have emerged, including outcome harvesting (Wilson-Grau & Britt, 2012), ripple effects mapping (Kollock et al., 2012), the most significant change (MSC) method (Dart & Davies, 2003), and contribution analysis (Mayne, 2012). In this presentation, we discuss: (a) how MSC was used to capture the outcomes of a youth mental health systems-change initiative that stakeholders felt were the most significant, (b) what the major categories of systems-change outcomes were, and (c) how we facilitated discussions of the MSC results with stakeholders to promote systems change. We will also discuss the implications of our experience using MSC for the further development and refinement of approaches that enable evaluators to capture program outcomes retrospectively when the full range of anticipated outcomes is challenging to predict at the outset.

Transformative Data Visualization: 5 Tips to Redesigning Your Data | Jennifer R. Lyons | Slides, Video

Communicating data through transformative visuals leads to people actually reading our data and reports. When we engage people with intentional reporting, our audience is better able to take informed action. In this 15-minute session, Jennifer will share 5 tips for redesigning your data. Jennifer will showcase before-and-after data transformations to demonstrate how intentional reporting and graphic design can help transform your data into powerful visual stories.


Poster Sessions 

The Q-Team | Mariah Bosquez, Alicia McCormick

The Quality Team (Q-Team) at Urban Neighborhood Initiatives is a group of youth who created, and are now implementing, evaluation tools to track program quality and provide youth voice in the direction of our organization's youth development work. They are responsible for conducting observations, holding focus groups, and collecting various forms of qualitative data, including photos and videos. The poster will cover the process by which the Q-Team collaboratively created evaluation tools, beta-tested them, and now uses them to evaluate UNI's programs, as well as the direction the team is headed in 2019.

Project Graduation: Michigan's Future | Abigail Bartlett

Michigan high school graduation rates lag behind national averages, and Detroit is even further behind. Project Graduation, a Detroit Parent Network program, uses a three-pronged approach to support students' academic success, aimed at increasing graduation rates for at-risk students.

Quality Improvement and Data Analytics Supervisor | Lindsay McCracken, Desiree Jones

Desiree Jones, Regulatory Compliance Director, and Lindsay McCracken, Quality Improvement and Data Analytics Supervisor, will present on how The Children’s Center (TCC), a non-profit organization, built an internal Quality Improvement team with the capacity to effectively evaluate client outcomes by making decisions based on hard data rather than on intuition or observation alone. TCC established a Program Evaluation Committee dedicated to supporting the organization’s continuous quality improvement efforts in a way that engages and is accountable to all TCC stakeholders and ensures our programs and services achieve the expected outcomes and impact. This committee collaborates with organizational leaders to strategize the evaluation of impact on client outcomes using data.

Evaluation of Children's Writing Magazine | Adam LeRoy, Melissa Bishop

We conducted a qualitative study of a children's writing magazine. We collected data from interviews, surveys, focus groups, and artifacts. Three themes were identified: collaboration, positive effects for children, and program evaluation.

Challenges and Opportunities for Assessment and Data Collection in Disaster Contexts | Mislael A Valentin Cortes, Martin Ornelas, Dana Thomas 

This presentation describes disaster assessment and capacity-building projects in Texas and Puerto Rico following hurricane aftermaths. Methods described include qualitative data collection and analysis, resource identification, and emergency protocol assessment and development. Findings highlight the critical role of community agencies, barriers to preparedness, and challenges in collecting data in post-disaster contexts.

Competing Values Framework in Program Evaluation | Amber Fischer

This project provides an overview of the Competing Values Framework and explores ways in which the framework can be utilized in program evaluation, including the formation of an efficient program evaluation team, determination of the organizational culture and context of program operation, and promotion of effective engagement with program stakeholders.

Innovations in Food Access: Detroit's Fresh Prescription Network | Jason Gapa, Kendra Gardette-Foster

The Fresh Prescription initiative provides a platform in which participants engage with the healthcare and local food systems simultaneously, resulting in increases in healthy eating behaviors and knowledge, decreases in unhealthy behaviors, and positive perceptions of the program. Outcomes were confirmed using an independent-samples t-test analysis.
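
As a point of reference for the analysis mentioned above, the following is a minimal sketch of an independent-samples t-test in Python; the group labels and scores are invented for illustration and are not the Fresh Prescription data.

```python
# Minimal independent-samples t-test sketch with made-up scores.
from scipy import stats

# Hypothetical healthy-eating scores for participants vs. a comparison group.
participants = [4.2, 3.8, 4.5, 4.0, 4.7, 3.9, 4.4]
comparison = [3.1, 3.5, 3.0, 3.6, 3.2, 3.4, 3.3]

# Welch's t-test (does not assume equal variances between groups).
t_stat, p_value = stats.ttest_ind(participants, comparison, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```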

Life Review and Reminiscence with Older Adults in Detroit | Sarah Charbonneau

Groups of older adults in Detroit participated in an 11-week life review and reminiscence intervention to decrease social isolation and depression and increase social networks. This poster presents the findings of the intervention’s evaluation using multiple tools such as qualitative coding and quantitative analysis.

Don’t Forget the ‘How’ and ‘Why’: Using Implementation Research to Evaluate a Cross-System Prisoner Re-entry Program | Julie Hanna, Emily Pasman, Sheryl Kubiak

The Consolidated Framework for Implementation Research (CFIR) was used to assess the implementation of a cross-system prisoner re-entry program. CFIR proved to be a useful framework for understanding processes and for reinforcing the use of implementation research as a means of continuous quality improvement.



2019 Sponsors & Partners



Stay in Touch

Not a member but want to receive emails with announcements about MAE's Annual Conference, Professional Development Workshops, and other important opportunities?

JOIN OUR LIST


CONTACT

maevaluate@gmail.com

Michigan Association for Evaluation 2019
