MAE 2019 Conference

Registration is OPEN! We've released our 2019 Program - check it out below!



2019 Conference Program

Keynote

Topic: Having Fun with Evaluation! | Wendy Tackett

  • How can evaluators use more interesting processes that provide credibility to the work while also involving clients strategically to ensure that the work is meaningful, useful, and used to guide decision-making and improve outcomes?
  • How can evaluation recipients have an active voice in demanding that evaluation be more than just a text-laden report provided at the end of a fiscal year?
  • How can evaluation be fun?

Breakout Sessions

Embedding FUN to Facilitate Evaluation Use | Wendy Tackett

Sometimes evaluators get so focused on the evaluation contract, or on performing the work agreed upon in that contract, that we forget the ultimate purpose of our evaluation work - to provide an evaluation process that is useful and findings that are used. How, as evaluators, can we bridge that gap - creating an environment where evaluation is transparent, understood, and embraced? By incorporating fun activities and a fun attitude into the evaluation work from the very beginning, you can strategically create an atmosphere where clients view evaluation as meaningful and useful and use it to guide their decision-making and improve outcomes.

In this session, we will talk about why embedding fun helps people think creatively and facilitates the use of evaluation findings. Participants will also take part in an activity they can use to introduce evaluation to their clients, students, an advisory group, or any group of people who will be involved in an evaluation. Participants will leave with the understanding, directions, and experience to implement this activity on their own with any group.

Cost Analysis 101 | Tomoko Wakabayashi, Jill Claxton, Beth Hardin

In this session, we will introduce the basic steps involved in conducting a cost analysis. These include identification of 1) an evaluation problem; 2) alternatives; 3) the audience; and 4) the type of cost analysis to use (Levin, McEwan, Belfield, Bowden & Shand, 2018). We will also introduce audiences to the four major approaches to cost analysis, their purposes, and when to use each. The four approaches are: 1) cost-effectiveness (CE); 2) cost-benefit (CB); 3) cost-utility (CU); and 4) cost-feasibility (CF). Using a mock sample, we will demonstrate the ingredients method, which is used to collect all potential expenditures associated with an intervention, such as personnel, facilities, equipment, materials, and training. We will introduce our audiences to internet resources, including those on the Center for Benefit-Cost Studies of Education (CBCSE) website, such as the CBCSE interview protocols and the free CostOut tool. Lastly, we will facilitate discussion and reflection on the strengths and limitations of cost analysis.

Where to Find Best Practices? Lessons from the Field | Ignacio D Acevedo, Taylor Crisman, K. Malulani Castro

Evaluation often requires comparing a target (e.g., program, organization) against appropriate best practices and/or standards. In some cases, identifying such practices or standards is straightforward (e.g., national standards). However, evaluators will frequently encounter instances in which identifying best practices is complicated by the novelty of an evaluation target and/or the context in which the evaluation target occurs. Based on several applied case studies, the goal of this session is to provide participants with guidance on how to identify and/or develop appropriate standards or best practices against which to compare an evaluation target. We will specifically address approaches that are useful when the evaluation target is novel and/or innovative, and when considering culture and/or context is of paramount importance.

A Crash Course in Developing Survey Questions | Lyssa Wilson Becho, Lori Wingate

Surveys are a valuable source of evaluation data. However, obtaining quality data relies heavily on well-crafted survey items. Although writing survey questions may seem simple on the surface, anyone who has struggled to find the right response scale or wrestled with poor-quality data collected by poorly written survey questions knows how critical and complex survey development really is. Writing good survey questions requires more than just common sense.

In this hands-on workshop, participants will learn the art and science of developing and improving questionnaire items to maximize the accuracy and utility of survey data. Adult learners learn practical skills best through application rather than listening to a lecture. Therefore, this workshop begins by asking participants to critique an example survey. From previous workshops, we have found that participants benefit more from identifying what is wrong with a poor survey than from starting by crafting their own questions. After working individually, the whole group will walk through each question and translate participants’ critiques into key considerations for developing survey questions. To help participants apply what they learn in this workshop, we will provide a handout of survey do’s and don’ts for their future survey development work.

The materials for this workshop have been previously tested during a 3-hour workshop at the 2018 Advanced Technological Education Conference, as well as a 30-minute presentation at the same conference. Both presenters have extensive experience giving interactive presentations as workshops, lectures, and webinars.

Automated Excel Template to Make the Evaluation Budgeting Process Easy | Janet Ray, Maria Schmieder

This session, “Evaluation budgets made easy using an automated template tool”, will give attendees a hands-on opportunity to create a budget. Participants should bring their laptops and be ready to build a budget during the demonstration. Attendees will learn the key elements of a budget, find out what a reasonable evaluation budget looks like, and be able to create a budget in half the time. Whether you are a funder, a nonprofit, or a consultant, you need fast and simple tools to estimate the cost of an evaluation project.

Qualitative Data Analysis - Using the Right Tools for the Job | Jeffery Hillman, Ouen Hunter

Qualitative inquiry was, in times past, often considered lesser than quantitative methods. Guba and Lincoln (1989) kicked the door open for qualitative evaluation through the lens of responsive constructivist evaluation. This framework of evaluative thinking has helped bring understanding to the value of systematic qualitative research, with the intent of creating personalized, rich descriptions of programmatic effects from the viewpoint of stakeholder groups (Patton, 2015). Quantitative methods dominated the field for decades and required a specific skill set. Equally, qualitative inquiry necessitates a specific skill set for data analysis.

This workshop-style session will focus on qualitative methods for data analysis using thematic identification and categorization. First, a short introduction will cover the key differences between creating inductive and deductive themes, along with why one might select one type over another. Second, readily available tools for coding electronically will be discussed. Last, a brief description of transcription styles and pitfalls will be discussed, followed by a practice session using a transcribed section of an interview. Participants (individually and as a group) will be asked to code the interview deductively while also looking for emergent themes. The session will end with participants reporting on their work and debriefing the positives and challenges found in the process. The focus throughout will be on encouraging open sharing in a supportive learning environment.

Using Cognitive Interviewing to Improve Survey Questions | Adrienne Adams, Jodi Peterson

Evaluators commonly use surveys to collect evaluation data. When no appropriate existing measures are available, evaluators write new survey questions. The wording of those questions has implications for the accuracy and usefulness of the resulting data. Cognitive interviewing is a tool to help ensure that participants understand the survey content to mean what you intend it to mean. It is also helpful for ensuring that survey design choices enhance rather than hinder data quality. In this workshop, we will explain the purpose and methods of cognitive interviewing, share examples from our work, and give attendees the opportunity to practice key cognitive interviewing techniques. Attendees will leave the workshop with a basic understanding of cognitive interviewing methods and resources for using the technique in future projects.

Management and Communication Strategies: How to Keep Your Clients In the Know | Kelly Robertson, Lyssa Wilson Becho

Evaluation can be a messy process. Clients often struggle to understand what evaluation is, let alone keep the details of the evaluation process straight. Even on the evaluators’ side, it’s easy for the lines between projects to blur when busy evaluators switch between them. Miscommunication is all too common given the complex details that need to be communicated between the evaluator and client. This interactive lecture will help remedy these issues. Presenters will share practical tips and tricks on how to improve evaluation management and communication with clients throughout the evaluation process, with plenty of time for questions and conversation. Participants will walk away with practical resources to help them apply what they learn.

New and experienced evaluators alike can benefit from sharing useful strategies to guide the evaluation planning process. The presenters will share their practices of: setting up budgets in Excel at the task level (to ensure budgets are realistic and sufficient), offering clients an evaluation menu (allowing clients to choose from evaluative activities that fit their needs and budgets), using basic templates for what to include in a scope of work, and asking a list of questions in the first client meeting (to help efficiently collect the information needed to draft a scope of work and provide a foundation for future communication and collaboration). Throughout the planning process, the presenters will emphasize approaches that encourage authentic and active engagement from clients, rather than having clients blindly agree to whatever the evaluator proposes.

Several suggestions for effective management throughout an evaluation’s implementation will also be presented. Tips will focus on how to set up effective communication practices, such as monthly evaluation updates and identification of a contact person for both the client and the evaluation team. Suggestions for troubleshooting will also be discussed, such as what to do if clients become non-responsive. Evaluation plan cheat sheets (quick-reference guides that contain the most important information from an evaluation management plan) will be presented as a strategy to help evaluators keep track of evaluation activities and better communicate the details of an evaluation plan to clients. Presenters will share examples from their own work and discuss how they leverage tables in new ways to concisely and effectively communicate information. Suggestions include using tables to create evaluation matrices, communicate the strengths and weaknesses of evaluation approaches, show connections between data collection methods, and present evaluation findings. Client feedback forms and an evaluation recommendations section in the evaluation report will also be presented as strategies for improving future evaluations.

Overall, this interactive lecture is meant to share practical strategies for managing evaluations and effective communication with clients. While presenters will share examples and lessons learned from their own work, they encourage participants to share their experiences as well.

Ignite+ Sessions

PhotoVoice on a Budget | Ouen Hunter, Emma Perk

Including the voices of a program’s participants is crucial in evaluation. Photovoice, a participatory action research method, encourages participants to provide their unique insights by asking them to answer a question using photographs and descriptions of those photographs. This method of inquiry has been useful in gathering information from individuals experiencing homelessness (Bender, Begun, Dunn, Mackay, & Dechants, 2018; Pruitt et al., 2018) and health issues (Warne, Snyder, & Gillander Gådin, 2013).

A typical photovoice project includes an introduction and display of photographs. While working on the evaluation for a summer precollege program, with limited funding and time, we successfully implemented a small-scale photovoice activity. The success of this small-scale photovoice evaluation, and our previous experience with the method, indicates that even with limited time and finances, evaluators can still gather rich information from their participants.

This ignite session will provide a high-level overview of how we implemented the photovoice evaluation and our lessons learned.

Everyday Indicators | Zach Tilton

This 15-minute ignite presentation will be divided into three sections: 1) an introduction to the Everyday Peace Indicators framework’s history, rationale, and methodology (5 minutes); 2) a discussion of top-down and bottom-up evaluation systems, exploring issues such as proxies and assumptions, scale and replicability, rigor and power, and representation and marginalization (5 minutes); and 3) considerations for audience members on how to apply this approach to their existing evaluation portfolios.

Evaluating Systems Change Using the Most Significant Change Method | Miles McNall, Trevor Strzyzykowski

When the object of an evaluation is a complex initiative operating in a dynamic environment, involving multiple actors engaged in a variety of interrelated activities, it can be difficult at the outset to identify the full range of outcomes that might be anticipated, severely limiting the utility of predetermined outcome measures. Given that systems-change initiatives also frequently take place in the context of other related efforts that might yield similar outcomes, it can be challenging to attribute any given outcome uniquely to the evaluand. In such circumstances, retrospective approaches that cast wide nets to capture outcomes plausibly linked, although not necessarily uniquely attributable, to the evaluand can be helpful in documenting both anticipated and unanticipated outcomes. In recent decades, several such approaches have emerged, including outcome harvesting (Wilson-Grau & Britt, 2012), ripple effects mapping (Kollock et al., 2012), the most significant change (MSC) method (Dart & Davies, 2003), and contribution analysis (Mayne, 2012). In this presentation, we discuss: (a) how MSC was used to capture the outcomes of a youth mental health systems-change initiative that stakeholders felt were most significant, (b) what the major categories of systems-change outcomes were, and (c) how we facilitated discussions of the MSC results with stakeholders to promote systems change. We will also discuss the implications of our experience using MSC for the further development and refinement of retrospective approaches to capturing program outcomes when the full range of anticipated outcomes is challenging to predict at the outset.

Transformative Data Visualization: 5 Tips to Redesigning Your Data | Jennifer R. Lyons

Transformative data visuals lead people to actually read our data and reports. When we engage people with intentional reporting, our audience is better able to take informed action. In this 15-minute session, Jennifer will share 5 tips for redesigning your data. She will showcase before-and-after data transformations to demonstrate how intentional reporting and graphic design can help transform your data into powerful visual stories.


MAE is still accepting poster proposals (until April 25):


Poster



Who Should Submit:

We have all faced evaluation tasks that bordered on impossibility. The trouble may have arisen in logic model creation; data capture, collection, or entry; stakeholder communication; response rates; or analysis - and the outcomes looked bleak. But somehow, in some way, we delivered the finished product by the deadline. Even in evaluation projects that seem to progress exactly as planned, evaluators accumulate tools of the trade that can make processes smoother and contribute to higher-quality end products.

MAE seeks out the tips and tricks that our community of professionals can share with one another to advance the practice of evaluation. Whether those tools align with cutting-edge strategies or historic practices that remain just as valid today, we want to hear about them and how they can be replicated by others for successful results.

New Online Submission:

We have moved to an online proposal submission system. Before submitting, please review the full RFP and submission form below. Once you start the online form, you will not be able to exit and re-enter.


What to do next?


2019 MAE Sponsors


Interested in becoming a 2019 MAE sponsor? Find out more here.


Michigan Association for Evaluation 2018

maevaluate@gmail.com
