Blog

  • 08/24/2018 11:00 AM | Greater Boston Evaluation Network (Administrator)

    On Tuesday, August 7th, Carrie Fisher, PhD, Research and Evaluation Manager at the Institute for Community Health, presented on qualitative data analysis to over 20 people at Jewish Family and Children’s Services in Waltham.  Dr. Fisher is an anthropologist with interests in applied and evaluation research, innovative research methods, health and public health, and research with difficult-to-reach populations.

    Dr. Fisher provided a comprehensive overview of qualitative data analysis that began with a very interesting discussion about how concepts and meanings of “truth” and “reality” can vary in qualitative data analysis.  The presentation then focused on qualitative data analysis choices and methods, data management, data analysis, and key issues around data interpretation.  Below are a few key summary points from the discussion.


    Reality and Truth in Qualitative Data Analysis

    There is not just one truth when dealing with qualitative statements, and factors such as interviewer, analytical, and interpretive bias can affect how truth and reality are defined.  Even false statements can provide important lessons in the analysis.  It’s important to remember that qualitative data analysis is always selective and can be influenced by individual backgrounds, education, and even the mission and vision of the organization sponsoring the qualitative data project.


    Qualitative Data Analysis Choices and Methods

    Starting a qualitative data analysis project can be challenging, as there are many questions to consider: the quantity and type of data, the validity of the analysis, how the findings will be used, who the audience is, staff capacity to conduct the analysis, the time and money available for the analysis, and whether or not to use dedicated qualitative data analysis software.

    Ultimately, a qualitative data analysis project starts when you decide on a data collection method.  Qualitative analysis is an ongoing, iterative process, and researchers and evaluators should meet weekly with those doing the data collection to review responses and determine whether any questions need to be redesigned.


    Qualitative Data Management

    It is critical to plan ahead for the management of qualitative data and to consider small, yet important, factors such as how you are going to record the data (written notes, audio recordings, video recordings), how you are going to conduct data entry (Excel, Word, another database), and which tools and/or software you may use to manage and analyze the data.

    For example, one key data management consideration for in-depth interviews and focus groups is whether or not to use transcription services.   Verbatim transcription can be expensive – just one hour of audio or video can require up to four hours of transcription, which can cost hundreds of dollars.   The website www.rev.com is a cost-effective alternative for transcription with rates around $1 per minute. 
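
    As a rough, hypothetical illustration of that cost tradeoff, the back-of-the-envelope calculation below uses the 4:1 transcription-time ratio and ~$1/minute service rate mentioned above; the $30/hour in-house staff rate is purely an assumption for illustration.

        # Back-of-the-envelope transcription cost comparison for one recorded interview.
        # Assumption (hypothetical): in-house staff time costs about $30/hour.
        audio_minutes = 60                               # a one-hour interview
        staff_rate_per_hour = 30                         # assumed in-house hourly cost
        hours_to_transcribe = 4 * (audio_minutes / 60)   # up to 4 hours per hour of audio
        in_house_cost = hours_to_transcribe * staff_rate_per_hour
        service_cost = audio_minutes * 1.00              # service rate of ~$1 per audio minute

        print(f"In-house: ~{hours_to_transcribe:.0f} staff hours, roughly ${in_house_cost:.0f}")
        print(f"Transcription service: roughly ${service_cost:.0f}")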

    If you are using qualitative data analysis software like Dedoose, NVivo, ATLAS.ti, or others, it’s important to remember that these tools only manage data and do not automatically perform the analysis for you.  Coding and analyzing qualitative data with these tools are additional steps that require sufficient time and technical training in the software itself.


    Qualitative Data Analysis

    It’s critical to choose your analytical approach with the end in mind.  Key issues to consider include the level of detail, the level of rigor, the audience, how the information will (or may) be used, and the time and resources available for analysis.  There are many different approaches to analyzing qualitative data, including pragmatic thematic analysis, case studies, content analysis, and coding trees, to name a few.

    Once your data has been collected and managed, schedule extra time to “swim in the data.”  Get to know your data well by reading over all notes and transcripts before doing any coding.  Write down preliminary thoughts on main themes, points of interest, and gaps in the data.

    Lastly, coding of qualitative data is a critical step.  It’s important to organize codes into meaningful categories and to create a descriptive codebook that documents the definition and theme of each code.  This is particularly important for maintaining consistency if your project has more than one analyst doing the coding.
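
    To make the idea of a codebook and coded excerpts concrete, here is a minimal sketch in Python; the codes, definitions, and excerpts are entirely hypothetical, and in practice most teams would keep this in QDA software or a spreadsheet rather than in code.

        # A descriptive codebook: each code has a category and a definition so that
        # multiple analysts can apply the same code consistently.
        codebook = {
            "access_transport": {
                "category": "barriers",
                "definition": "Transportation difficulties that make it hard to attend the program.",
            },
            "peer_support": {
                "category": "facilitators",
                "definition": "Other participants providing encouragement or practical help.",
            },
        }

        # Coded excerpts: each transcript segment is tagged with one or more codes.
        coded_excerpts = [
            {"respondent": "R01", "text": "The bus only runs twice a day...", "codes": ["access_transport"]},
            {"respondent": "R02", "text": "My group kept me coming back.", "codes": ["peer_support"]},
        ]

        # Simple consistency check: flag any applied code that is missing from the codebook.
        for excerpt in coded_excerpts:
            for code in excerpt["codes"]:
                if code not in codebook:
                    print(f"Undefined code '{code}' used for respondent {excerpt['respondent']}")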


    Qualitative Data Interpretation

    When you are ready to begin interpreting your data findings, begin by listing key points and themes such as key confirmations, major lessons, new ideas, and applications to other settings and/or programs.  Some evaluators choose to summarize qualitative findings using quantitative outcomes (e.g., “9/10 respondents agreed with this idea…”).  While this can increase confidence in results, it should be used with caution.
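
    If you do report counts, one cautious approach is to count unique respondents per theme rather than individual coded excerpts, so that a person who raises a theme several times is not counted twice.  Here is a minimal sketch using hypothetical coded data.

        from collections import defaultdict

        # Hypothetical coded data: the set of codes applied to each respondent.
        codes_by_respondent = {
            "R01": {"access_transport", "peer_support"},
            "R02": {"peer_support"},
            "R03": {"peer_support"},
        }

        # Count unique respondents per code, not excerpts.
        respondents_per_code = defaultdict(set)
        for respondent, codes in codes_by_respondent.items():
            for code in codes:
                respondents_per_code[code].add(respondent)

        total = len(codes_by_respondent)
        for code, respondents in sorted(respondents_per_code.items()):
            print(f"{code}: {len(respondents)}/{total} respondents")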

    Ultimately, the evaluator should always ask:  “Would I feel comfortable with the participant reading this?”  “Would the participants agree with my interpretation of the findings?”


    Dr. Fisher’s full presentation slides can be found here (members only).

  • 07/30/2018 12:37 PM | Greater Boston Evaluation Network (Administrator)

    A full evaluation program requires data systems and analytic capacity, and many organizations or programs feel they’re not prepared to start building internal evaluation capacity until they have developed logic models and databases. But only two elements need to be in place in order to start evaluating your program: clarity about your organization’s mission and clarity about your organization’s target population. These elements are what focus all of your data collection and analysis.


    Mission clarity is a single, shared understanding of why your program is in business. While most organizations know very well what they do, there is sometimes a lack of clarity about why they do it. For example, if you run an after-school program for teens, it could be intended to improve the youths’ academic skills (an educational goal), help them explore careers that might interest them (a workforce development goal), or pair them with a caring adult (a positive youth development goal). In order to begin building an evaluation program, the program’s mission needs to be clear. Without clarity about your program’s real mission, it will be impossible to prioritize or design performance measures that communicate your program’s value to all of your stakeholders.

    One way to document your organization’s mission is a simple strategy map that lays out the activity, the immediate outcome for your participants, and the long-term outcome that you envision for them. It can be as simple as this:

    tutoring → passing grades → high school graduation → careers → economic well-being

    Developing this strategy map can be challenging, particularly when the activity your organization provides has many possible positive outcomes. It’s a necessity, though, because your data collection systems will be organized around outcomes that serve that one central purpose.


    The second thing you need is consensus about who exactly your organization serves. Who is best positioned to benefit from your program? Who is the most important population for your program to be serving? In the example above, if your program serves an academic goal, the most important population to serve would be youths who are at risk of leaving high school without a diploma. If the goal is a workforce development one, the target population might be youths who are early enough in their high school careers to choose electives based on their future career plans. If your program’s goal is mentorship, your target population might be young people who are developmentally most able to benefit from a mentoring relationship. Measuring how closely the population you actually serve mirrors your ideal population requires being perfectly clear about who that ideal population is. Clarity about who you serve will organize all of your data collection and analysis and define your unit of analysis.

    With these two things in place, your organization will be ready to develop measures that really indicate whether your organization is helping the population that you intend to serve and whether it is helping to move your participants or clients toward their goals. Equally important, this clarity will help you avoid collecting unnecessary data or focusing on metrics that don’t really drive your organization’s performance.

    By Pieta Blakely, PhD


    © Blakely Consulting

    Pieta Blakely is a researcher and evaluator specializing in quantitative evaluations of workforce development programs and local economic development initiatives. She has led multiple evaluation projects spanning education, labor economics, and urban economic development.

    As the Principal of Blakely Consulting, LLC, she focuses on working with not-for-profit organizations to build evaluation capacity, integrate evaluation into their program logic models, and use learning for strategic planning. Her clients include a range of anti-poverty and social justice organizations, particularly those that serve disadvantaged and minoritized youth.

    Dr. Blakely received her BA from Brown University in Organizational Behavior and Management and Anthropology, her MS in Administrative Science from Boston University, her MEd from Harvard University, and her PhD from Brandeis University in Social Policy.

  • 06/20/2018 3:20 PM | Greater Boston Evaluation Network (Administrator)

    At our spring social meet-up, GBEN was very fortunate to have Leslie Goodyear, American Evaluation Association (AEA) President and Principal Research Scientist at EDC, join us to talk about the upcoming AEA conference in November as well as her vision for the AEA.

    In addition to joining the spring meet-up, Leslie generously agreed to participate in a short six-question email interview with GBEN member Marion Cabanes.  Enjoy!


    © Leslie Goodyear

    GBEN: In 2016 you were formally elected the next president of the American Evaluation Association.  You expressed a hope to foster a collaborative approach among evaluators within the association and among other organizations, policymakers, scholars and practitioners.  Reflecting back on more than one year of this vision, what collaborative dialogues have you witnessed and how do you envision continuing and developing further dialogues on the importance and use of evaluation?

    Leslie:  Whoa! I said that?!? Just kidding. Both during my time as president-elect, and this year as president, I’ve had the chance to see how evaluators are influencing the field and those with power to influence policy. I speak mostly of what’s happening in AEA, but we know that every day, evaluators provide evidence and information to guide decision making and policy at multiple levels.

    You may know that through the Guiding Principles Task Force and the Evaluator Competencies Task Force, members have influenced the review and revision of our ethical principles and helped to define a set of AEA Evaluator Competencies. Both Task Forces took their charge to solicit member input very seriously, and implemented processes that incorporated the ideas and opinions of hundreds of members (through surveys, focus groups, and other calls for information and input). The AEA Board just voted to adopt both, and we’ll be rolling them out before and at the 2018 conference. In addition, our Evaluation Policy Task Force has had multiple successes this year in influencing the development and implementation of federal evaluation policy. Their work is not necessarily loud, but has resulted in strong relationships with policymakers and other influencers in Washington. I have no doubt we’ll see more from them in the coming years, too. 


    GBEN:  The field of evaluation has been evolving rapidly and seeing a greater diversity of evaluators of different stripes.  What do you think is the next era for evaluation in terms of innovations? Or what new fields could evaluation reach out to for informed decision making and organizational learning?

    Leslie:  When I started in evaluation – many years ago – we were a field of academics and we debated whether it was appropriate to advocate for the programs we evaluate and we argued about quantitative versus qualitative methods. Things have changed a lot since then! Now, we’re primarily an association of evaluation practitioners, and we’ve moved on to incorporate and debate new approaches to evaluation (e.g., feminist, developmental, systems) and new positions on everything from equity and inclusion to advocacy, methods, program theory, and 3D logic models. We offer more opportunities for professional development now, and we have new and more dynamic ways to present data (qualitative and quantitative!) and disseminate findings. I’m not a psychic, so I’m not sure what’s coming next, but I’m excited by the passion people have for making the evaluation field more diverse, and its processes and products more directly tied to decision making and action.


    GBEN:  You have travelled across the country and met many other local/state evaluation associations.  What do you think makes for a strong local evaluation association or network?

    Leslie:  It’s been a real pleasure to get to meet with so many evaluators as I’ve visited local affiliates and attended their meetings. As an organization, I think we can do more to connect the national organization to its local affiliates, whether through co-sponsored events, the common brochure, the common member registration, or other ideas. I love that there was an #EvalTwitter chat with the Local Affiliate Council and that so many people participated! I’ve heard from local affiliates that they would love more opportunities to share lessons learned and strategies, and that they would appreciate more opportunities to connect more with each other. I’ve been ensuring that the ideas that have been shared with me are passed right along to AEA staff who can collect them and, when possible, implement them.


    GBEN:  In addition to our bi-monthly roundtables, this blog post is our way of engaging with GBEN members.  What would you like to know from the GBEN community that could influence your work at the national level?

    Leslie:  First, let me just say thanks for the invitation to contribute to the GBEN blog. I loved getting to meet GBEN members at the social event in Cambridge, and I look forward to connecting again soon, whether at the AEA conference in Cleveland or another GBEN event.   

    I’d love to know what you’re seeing in your evaluation worlds! Are funders and programs drinking the evaluation kool-aid? Are there opportunities the association, or local affiliates, could capitalize on? Are there challenges you’d like to share that you think might be more common than just to you in your work? What do new evaluators need from AEA? What about seasoned evaluators? What are you seeing with regard to trends and concerns?


    GBEN:  What is your favorite part about being an evaluator as well as being a part of the local and national professional evaluation community?

    Leslie:  Easy! My favorite part of being an evaluator, and being part of the local and national community of evaluators: meeting smart, dynamic, engaged, quirky, diverse, passionate, thoughtful people who want to use their amazing skills to make this world a better place! (However you define, operationalize, and measure that. Ahem.)

  • 05/23/2018 3:51 PM | Greater Boston Evaluation Network (Administrator)

    Having Conversations with Stakeholders Unfamiliar with Evaluation

    By Marion Cabanes

    On April 4th, GBEN hosted a roundtable discussion where members shared their experiences talking with stakeholders unfamiliar with evaluation.

    Members discussed cartoons from the well-known evaluation website, www.freshspectrum.com.  Many of these cartoons portray situations where an evaluator encounters resistance to, or a lack of understanding about, evaluation from a client, and must figure out how to change the client’s perspective on the role of evaluation for a particular project.

    When you implement your project activities every day for many years, you may think you know the nuts and bolts of the project and what your clients need.  The reality, however, can be quite different.  For example:

    • Evaluation tends to be overlooked, or project staff simply don’t see how evaluation can capture the project’s successes or identify areas in need of improvement. They might also lack time for evaluation as they focus on implementing their project activities, especially where their clients’ immediate livelihoods are at stake (e.g., locating beds for the homeless).
    • Or, there is still a failure to understand how evaluation can showcase project activities and results.  Without evaluation, the data you’re routinely collecting loses its usefulness for reflecting on your project and learning from it.  It then gets harder to show the “specialness” of your work.


    When talking about the “specialness” of a project, make sure goals are explicit and you are able to explain how the project will succeed or is meeting its goals.  Never underestimate the value of spending extra time developing an evaluation plan at the beginning of your project, as this will help you stay on track and/or measure how far off you are from reaching the ‘real’ goal, la raison d’être, of your project.



    Evaluation can oftentimes feel like just another administrative task to tick off at the end of a project’s life cycle.  GBEN members offered up some great tips on how to frame the role of evaluation differently with clients:

    • Stakeholders have an ethical responsibility to act, intervene, and assess whether their efforts and investment are worthwhile.
    • Evaluation is an opportunity to showcase how good a project is and communicate successes and learnings to project funders, stakeholders, and beneficiaries.
    • Evaluation is an integral part of developing a strong culture of learning.  Evaluation will help you and your team perform better and learn as you solve problems along the way.  It provides the data needed to make informed, feedback-based decisions about corrective actions.
    • It’s important to scope your evaluation appropriately by choosing wisely what’s important (and not important) to evaluate.  What is the right level of evidence that shows whether or not your project is working as designed and intended?

    A cool resource to help project staff measure project progress and performance and demonstrate results to the world is “results scorecard” software, such as Clear Impact’s Scorecard or Asana.

    Click here to access the presentation from the roundtable (members only!). 


    - Marion Cabanes is a Boston-based evaluation specialist working in international development and an active member of GBEN.


  • 01/02/2018 10:42 AM | Danelle Marable (Administrator)

    Along with our new website comes official membership!  GBEN's membership dues are reasonable and in line with those of other AEA affiliates.  Once you are a paying member of GBEN, you'll have access to the monthly newsletter, featuring events, jobs, and networking opportunities.  You'll also have access to the member forum, where you can pose questions to other members.  Looking to network with specific people?  You'll have access to the member directory, where you can search for people in specific fields or organizations.  You get all of this for $25/year.  Are you a student?  Sign up with your student email address and enjoy the benefits for only $15!

    Sign up by clicking "Join Us" above.

Greater Boston Evaluation Network is a 501(c)3 non-profit organization. 
