Blog

  • 12/19/2018 10:48 AM | Anonymous

    On Tuesday, December 4th, GBEN hosted its last roundtable of the year about budgeting for evaluation.  Like all of our roundtables this year, we had a great turnout with 17 members participating.  The roundtable was designed to be an open dialogue focusing on the critical issues evaluators face when budgeting for evaluations. Guiding questions for the discussion included:

    • How do non-profits and other organizations fund evaluation staff, data management systems, and other elements of successful evaluations?
    • Is there a magic "rule of thumb" on how to allocate budget resources to evaluations?
    • Are there funders interested in supporting evaluation capacity building?
    • What are other processes or best practices to consider when budgeting for evaluation?  

    Here are some of the key takeaways from the group discussion:


    Evaluation staffing and capacity varies across organizations

    Some organizations still have little-to-no dedicated evaluation staff on the payroll.  For many organizations, having a dedicated evaluator is a new organizational initiative.  For those with dedicated evaluation staff, many are grant-funded, meaning once the grant is over, the position may be reduced or eliminated.


    Evaluation budgets can span multiple departments

    Many organizations spread evaluation costs and budgets across varying departments and/or programs.  Evaluation budgets can include staff who may not have the word “evaluation” in their job title or job responsibilities, including front-line program staff, data management and/or administrative staff, and/or information technology and systems staff.  For example, an organization may employ a Database Specialist through their IS/IT department who does critical evaluation-related work by using Efforts to Outcomes (ETO) software. 


    Commitment from organizational leadership is key

    Like all facets of a good evaluation initiative, commitment from senior management within an organization is important for evaluation funding.  Senior leadership and management are often best positioned to seek out, advocate for, and request funding for evaluation from key funding partners. 


    Monitoring your evaluation work helps make the case for future or additional evaluation

    Evaluation staff may hold the key to the magical data kingdom, but often we don’t directly experience or see how the evaluation results are used to change or improve programmatic processes.  Be sure to document the collaborative process between the evaluation team and front-line program staff to highlight programmatic improvements that result from the evaluation findings. 


    Relationships with development staff are key

    It is very helpful to have a relationship between the development team and evaluation team, especially for grant writing.  Development staff understand funder needs and wants and can effectively communicate impressive evaluation results.


    Assert your evaluation needs!

    Most people have no idea what it takes to make an evaluation successful.  Often it takes more than just a staff person.  Evaluation needs can include certain software, equipment, consultants with specific expertise, and other administrative needs like postage and mailing supplies.  Make a wish list of things you need and share it!


    Seek out ways to reduce evaluation costs

    While planning and conducting an evaluation, it’s important to always ask: “What do we want, and what do we already have?”  For example, is data already being collected, or are there existing data systems that could serve your evaluation project?  Finding ways to lower evaluation costs may make it easier to acquire the necessary budget resources for an evaluation initiative. 


    Resources:  (click here to access members-only resources from this roundtable)

    • Budgeting for Evaluation: Key Factors to Consider – a rubric developed by Westat for assessing how much to budget for evaluation
    • Budgeting for Evaluation from the 2014 AmeriCorps Symposium

  • 10/31/2018 1:52 PM | Anonymous

    On October 5, 2018, Danelle Marable, GBEN President and Senior Director for Evaluation, Assessments, and Coalitions at the Massachusetts General Hospital Center for Community Health Improvement (MGH/ICHI), discussed two community health needs assessments that her team is working on: 

    1. Partnering with the North Suffolk Public Health Collaborative, municipalities, other healthcare providers, community coalitions, and organizations to conduct an assessment for Chelsea, Revere, and Winthrop.

    2. Partnering with all Boston hospitals to conduct a Boston-wide assessment. 

    Since 2011, the Affordable Care Act has required every non-profit hospital to conduct a community health needs assessment (CHNA), develop strategic plans, and make the findings available to the public.  A hospital's non-profit status can be jeopardized if the CHNA is not conducted.

    A CHNA is a systematic examination of the health status indicators for a given population that is used to identify key problems and assets in a community. The goal of a CHNA is to develop strategies to address the community’s health needs and identified issues.  A CHNA is instrumental in identifying the social and environmental conditions, as well as the social determinants, that can impact the health of these communities, such as childhood experiences, housing, income, employment, healthcare, and community conditions.  MGH identified Revere, Chelsea, Charlestown, and Winthrop as primary communities to target with a CHNA and a community health improvement plan (CHIP).

    During the roundtable, Danelle discussed the process around the assessments, community engagement strategies, data collection efforts, and implementation. 


    Process

    The CHNA and CHIP together form a year-long process that occurs every three years.  MGH uses the MAPP framework (Mobilizing for Action through Planning and Partnerships) for the CHNA and CHIP, which, in short, outlines a process of engaging partners in comprehensive data collection and strategic planning. MGH/ICHI dedicates one year to visioning, assessment, and identification of strategic issues; the Board of Trustees then reviews and approves the results, after which the team has 100 days to develop an implementation plan. In Massachusetts, the MGH Trustees are also required to attend community advisory committee meetings in addition to other meetings.


    Community Engagement and Needs Assessment

    MGH/ICHI engages multiple sectors during the CHNA process:  residents, local leaders, community-based organizations, educators, as well as other hospitals in the communities.  MGH/ICHI began engaging other hospitals to avoid burdening the community with repeated data collection.  Assessment and identification of needs can be a challenge, as prioritization is not always straightforward.  For example, residents in the community of Chelsea spoke of issues of community violence and safety, but other data sources showed decreases in incidents of violence in the community. 


    Data Collection

    Data collection for the CHNA involved survey data, focus groups, in-depth interviews, and secondary data sources, including a quality-of-life survey (which addresses social determinants such as housing, transportation, and the overall community). Surveys are translated and distributed online and in print through various community networks.


    Implementation and Evaluation

    A crucial part of the process of a CHNA is the identification and implementation of evidence-based strategies around certain community needs and issues.  Part of this process involves community dialogue – asking key questions about what can be done, what resources are available, and whether there is the will to implement solutions.   This process does not always lead to straightforward answers.  For example, communities identified opioid use as a crisis and access to Narcan as a response; however, the community balked at increasing access to Narcan under the suspicion that it leads to increased opioid use.  The group had to take three steps back to educate the community regarding the benefits of Narcan.  Once strategies have been implemented, the final step is comprehensive progress monitoring, implementation evaluation, and impact measurement. 

    Copies of Danelle’s PowerPoint slides can be found in the member roundtable resources section (members only!).  For questions, reach out to Danelle by email at: dmarable@partners.org.



  • 09/28/2018 9:19 AM | Anonymous

    On September 6, 2018, Dana Benjamin-Allen of the Boys & Girls Clubs of America – and a longtime GBEN member – hosted GBEN's first professional development webinar, titled “How to Host Webinars that Don’t Suck!”  The webinar – a live meeting and discussion that occurs via the internet – can sometimes have a reputation for being boring, disengaging, and a waste of an audience member’s time.   For evaluation professionals, creating a data-rich yet engaging webinar can be a challenge. 

    Fourteen GBEN members joined the webinar to learn about tools to increase interactivity and engagement, best practices for online presentations, and the benefits of different distance engagement platforms.   Dana used several tools and activities to model ways to engage the audience and keep them interested in the content of the webinar.  You can access her slide deck in the Roundtable Resources section of the website (members only).  Below are some takeaway points for engaging your webinar audience.


    Framing the Webinar is Important!

    How you frame and present the webinar is critical to gaining interest and maintaining audience engagement once the webinar starts.  Start the webinar with an introductory slide that presents the intentions and objectives that best resonate with your audience.  It’s important to describe the presenter's relevant background and experience.  

    In addition, it is important to ‘level set’ your audience, that is, to identify the knowledge and experience levels of your audience members regarding the webinar topic.  You may have audience members with diverse backgrounds and varying levels of knowledge and experience with the topic.  Level setting can help you home in on the content areas that may be most relevant to this diverse audience as well as identify gaps in knowledge.   Hot tip:  create interactive activities using online polls and tools like GroupMap to level set your audience.

    Language matters, so be sure to use a catchy title to make the webinar more inviting.  The webinar title “Webinars that Don’t Suck!” is a perfect example!  The timing of your webinar is also very important: for some audience members mornings are better, while for others lunchtime may work best.  If your audience spans a wide geography, be sure to schedule the webinar at a time convenient for all time zones. Lastly, experiment with different alternatives to the word webinar to increase engagement.  Hot tip: call it an online meeting, online course, virtual workshop, “brown bag,” or, like the American Evaluation Association, a virtual “coffee break.” 


    Choose the best facilitation tools and methods based on audience needs and intended engagement level:

    There are various webinar platforms on the market today with varying functions.  Do you need a live whiteboard?  Do you want to share and/or pass the screen to multiple presenters?  Do you want a live camera to accommodate live video of all participants and presenters?  These factors should be considered before choosing a platform that best fits the format of your presentation, the needs of your audience, and the intended engagement level.   Hot tip:  using shared experience activities, such as real-time polling applications, helps connect audiences as well as solicit feedback.

    In addition to picking a webinar platform, it is important to identify which facilitation method is right for you and your audience.  Is there one presenter or are there multiple presenters?  Are you hosting a moderated panel discussion?  Is the presentation interactive, involving audience participation?  Hot tip: co-hosting the webinar with a colleague who has credibility with your audience can increase audience interest and engagement. 

    Lastly, if you plan to take questions, decide in advance how you will manage them.  Do you need a colleague to help with this?  Will it be interactive (i.e., audio for all participants)?   Six to ten questions may be enough for a live interactive presentation, but plan extra time, as participants may interact during each question.


    Presentation is Important:

    The design of your presentation slides is very important.  Keep your slides clean and easy to read.  If you are going to use a lot of text on a slide, plan accordingly to walk through the text with the audience.  Also, create intrigue, confusion, and/or excitement from slide to slide using questions, images, and/or illustrations.  Lastly, use a strong, exciting, and engaging title.    Hot tip:  there are online alternatives to PowerPoint like Prezi, Visme, or Haiku Deck.


    During this webinar, Dana used the following tools (follow the links for more details):


  • 08/24/2018 11:00 AM | Anonymous

    On Tuesday, August 7th, Carrie Fisher, PhD, Research and Evaluation Manager at the Institute for Community Health, presented on qualitative data analysis to over 20 people at the Jewish Family and Children’s Services in Waltham.  Dr. Fisher is an anthropologist with interests in applied and evaluation research, innovative research methods, health and public health, and research with difficult-to-reach populations.

    Dr. Fisher provided a comprehensive overview of qualitative data analysis that began with a very interesting discussion about how concepts and meanings of “truth” and “reality” can vary in qualitative data analysis.  The presentation then focused on qualitative data analysis choices and methods, data management, data analysis, and key issues around data interpretation.  Below are a few key summary points from the discussion.


    Reality and Truth in Qualitative Data Analysis

    There is not just one truth when dealing with qualitative statements, and there are factors that can affect how truth and reality are defined, such as interviewer, analytical, and interpretive bias.  Even false statements can provide important lessons in the analysis.  It’s important to remember that qualitative data analysis is always selective and can be influenced by individual backgrounds, education, and even the mission and vision of the organization sponsoring the qualitative data project.


    Qualitative Data Analysis Choices and Methods

    Starting a qualitative data analysis project can be challenging, as there are many questions to consider, such as:  the quantity and type of data, the validity of the analysis, how the findings will be used, who the audience is, staff capacity to conduct the analysis, the time and money available for the analysis, and whether or not to use qualitative data analysis software.  

    Ultimately, a qualitative data analysis project starts when you decide on a data collection method.  Qualitative analysis is an ongoing, iterative process, and researchers and evaluators should meet weekly with those doing the data collection to review responses and determine whether the question design needs to change. 


    Qualitative Data Management

    It is critical to plan ahead for data management of qualitative data and to consider small, yet important, factors such as how you are going to record the data (written, audio recordings, video recordings), how you are going to conduct data entry (Excel, Word, other database), and which tools and/or software you may use to manage and analyze the data. 

    For example, one key data management consideration for in-depth interviews and focus groups is whether or not to use transcription services.   Verbatim transcription can be expensive – just one hour of audio or video can require up to four hours of transcription, which can cost hundreds of dollars.   The website www.rev.com is a cost-effective alternative for transcription with rates around $1 per minute. 
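
    As a rough, back-of-the-envelope comparison, the arithmetic might look something like the sketch below.  The staff hourly rate and the four-to-one transcription ratio are illustrative assumptions, not quotes; the vendor rate follows the roughly $1-per-minute figure mentioned above.

        # Back-of-the-envelope comparison for 10 hours of recorded interviews.
        # The staff hourly rate and 4:1 transcription ratio are illustrative assumptions;
        # the ~$1-per-minute vendor rate follows the figure mentioned above.
        audio_hours = 10
        staff_hourly_rate = 30                    # assumed loaded cost of staff time
        transcription_hours_per_audio_hour = 4    # up to 4 hours of work per audio hour

        in_house_cost = audio_hours * transcription_hours_per_audio_hour * staff_hourly_rate
        vendor_cost = audio_hours * 60 * 1.00     # roughly $1 per minute of audio

        print(f"In-house transcription: ${in_house_cost:,.0f}")
        print(f"Vendor transcription:   ${vendor_cost:,.0f}")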

    If you are using qualitative data analysis software like Dedoose, NVivo, ATLAS.ti, or others, it’s important to remember that these programs only manage data and do not automatically perform analysis for you.  Coding and analysis of qualitative data using these programs are additional steps that require sufficient time and technical training in the software itself. 


    Qualitative Data Analysis

    It’s critical to choose your analytical approach with the end in mind.  Key issues to consider include:  the level of detail, the level of rigor, the audience, how the information may or will be used, and the time and resources available for analysis.  There are many different approaches to analyzing qualitative data, including pragmatic thematic analysis, case studies, content analysis, and coding trees, to name a few.

    Once your data has been collected and managed, schedule extra time to “swim in the data.”  Get to know your data well by reading over all notes and transcripts before doing any coding.  Write down preliminary thoughts on main themes, points of interest, and gaps in the data. 

    Lastly, coding of qualitative data is a critical step.  It’s important to organize codes into meaningful categories and to create a descriptive codebook that documents the definitions and themes behind the codes.  This is particularly important for maintaining consistency if your project has more than one analyst conducting coding. 
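
    For teams that keep the codebook in a simple structured file, a minimal sketch might look like the following; the codes, definitions, and file name here are hypothetical.

        # A minimal, hypothetical codebook kept as a list of dictionaries.
        # Each entry documents the code label, a definition, and an example quote
        # so that multiple analysts apply the codes consistently.
        import csv

        codebook = [
            {"code": "ACCESS_BARRIER",
             "definition": "Participant describes difficulty reaching or affording services.",
             "example": "The clinic is only open while I'm at work."},
            {"code": "PEER_SUPPORT",
             "definition": "Participant credits other participants with helping them persist.",
             "example": "My group kept me coming back every week."},
        ]

        # Save the codebook alongside the project so every analyst works from the same definitions.
        with open("codebook.csv", "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["code", "definition", "example"])
            writer.writeheader()
            writer.writerows(codebook)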


    Qualitative Data Interpretation

    When you are ready to begin interpreting your data findings, begin by listing key points and themes such as:  key confirmations, major lessons, new ideas, and applications to other settings and/or programs.   Some evaluators choose to summarize qualitative findings using quantitative outcomes (e.g., “9/10 respondents agreed with this idea…”).    While this can increase confidence in results, it should be used with caution. 
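
    If coded data is exported as one respondent-code pair per segment, a quick tally like the sketch below can produce those counts; the respondent IDs and codes are made up for illustration.

        # Hypothetical tally of how many respondents were tagged with each code,
        # to support summaries such as "3/4 respondents mentioned access barriers."
        from collections import defaultdict

        coded_segments = [                      # one (respondent, code) pair per coded segment
            ("R01", "ACCESS_BARRIER"), ("R01", "PEER_SUPPORT"),
            ("R02", "ACCESS_BARRIER"),
            ("R03", "ACCESS_BARRIER"), ("R03", "PEER_SUPPORT"),
            ("R04", "PEER_SUPPORT"),
        ]

        respondents_by_code = defaultdict(set)
        all_respondents = set()
        for respondent, code in coded_segments:
            respondents_by_code[code].add(respondent)
            all_respondents.add(respondent)

        for code, respondents in sorted(respondents_by_code.items()):
            print(f"{code}: {len(respondents)}/{len(all_respondents)} respondents")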

    Ultimately, the evaluator should always ask:  “Would I feel comfortable with the participant reading this?”  “Would the participants agree with my interpretation of the findings?”


    Dr. Fisher’s full presentation slides can be found here (members only). 

  • 07/30/2018 12:37 PM | Anonymous

    A full evaluation program requires data systems and analytic capacity. Many organizations or programs feel they’re not prepared to start building their internal evaluation capacity until they have developed logic models and databases. But only two elements need to be in place in order to start evaluating your program: clarity about your organization’s mission and clarity about your organization’s target population. These elements are required to focus all of your data collection and analysis.

    © LinkedIn 2015

    Mission clarity is a single, shared understanding of why your program is in business. While most organizations know very well what they do, there is sometimes a lack of clarity about why they do it. For example, if you run an after-school program for teens, it could be intended to improve the youths’ academic skills (an educational goal), help them explore careers that might interest them (a workforce development goal), or pair them with a caring adult (a positive youth development goal). In order to begin building an evaluation program, the program’s mission needs to be clear. Without clarity about your program’s real mission, it will be impossible to prioritize performance measures or design performance measures that communicate your program’s value to all of your stakeholders.

    One way to document your organization’s mission is a simple strategy map, documenting the activity, the immediate outcome for your participants and the long-term outcome that you envision for them. It can be as simple as this:

    tutoring → passing grades → high school graduation → careers → economic well-being

    Developing this strategy map can be challenging, particularly when there are many possible positive outcomes based on the activity that your organization provides. It’s a necessity because your data collection systems will be organized around outcomes that serve that one central purpose.
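
    One way to keep the chain actionable is to pair each step with an indicator you might track for it.  The pairings in this short sketch are hypothetical examples, not a prescribed set of measures.

        # A hypothetical strategy map, pairing each step in the chain with a possible indicator.
        strategy_map = [
            ("tutoring",               "attendance and dosage (hours of tutoring received)"),
            ("passing grades",         "share of participants passing core courses each term"),
            ("high school graduation", "on-time graduation rate of participants"),
            ("careers",                "employment or post-secondary enrollment at follow-up"),
            ("economic well-being",    "earnings or self-sufficiency measures, where obtainable"),
        ]

        for step, indicator in strategy_map:
            print(f"{step:<24} -> {indicator}")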

    © Brands with Fans 2015

    The second thing you need is consensus about who exactly your organization serves. Who is best positioned to benefit from your program? Who is the most important population for your program to be serving? For example, in the after-school example above, if your program serves an academic goal, the most important population to serve would be youths who are at risk of leaving high school without a diploma. If the goal is a workforce development one, the target population might be youths who are early enough in their high school careers to choose electives based on their future career plans. If your program’s goal is mentorship, your target population might be young people who are developmentally most able to benefit from a mentoring relationship. Measuring how closely the population you serve mirrors your ideal population requires that same clarity. Clarity about who you serve will organize all of your data collection and analysis and define your unit of analysis.
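
    To make that measurable in practice, you might compare a participant roster against a target-population definition.  The roster fields and the eligibility rule in this sketch are assumptions made for illustration.

        # Illustrative check of how closely current enrollment mirrors the target population.
        # Roster fields and the eligibility rule are assumptions made for the example.
        roster = [
            {"participant": "A", "grade": 9,  "at_risk": True},
            {"participant": "B", "grade": 10, "at_risk": True},
            {"participant": "C", "grade": 12, "at_risk": False},
            {"participant": "D", "grade": 9,  "at_risk": True},
            {"participant": "E", "grade": 11, "at_risk": False},
        ]

        # Target population for an academic-goal program: at-risk students in grades 9-11.
        def in_target(person):
            return person["at_risk"] and 9 <= person["grade"] <= 11

        share = sum(in_target(p) for p in roster) / len(roster)
        print(f"{share:.0%} of current participants fall within the target population.")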

    With these two things in place, your organization will be ready to develop measures that really indicate whether your organization is helping the population that you intend to serve and whether it is helping to move your participants or clients toward their goals. Equally important, this clarity will help you avoid collecting unnecessary data or focusing on metrics that don’t really drive your organization’s performance.

    By Pieta Blakely, PhD


    © Blakely Consulting

    Pieta Blakely is a researcher and evaluator, specializing in quantitative evaluations of workforce development programs and local economic development initiatives. She has led multiple evaluation projects spanning education, labor economics, and urban economic development topics.

    As the Principal of Blakely Consulting, LLC, she focuses on working with not-for-profit organizations to build evaluation capacity, integrate evaluation into their program logic models, and use learning for strategic planning. Her clients include a range of anti-poverty and social justice organizations, particularly those that serve disadvantaged and minoritized youth.

    Dr. Blakely received her BA from Brown University in Organizational Behavior and Management and Anthropology, her MS in Administrative Science from Boston University, her MEd from Harvard University, and her PhD from Brandeis University in Social Policy.

  • 06/20/2018 3:20 PM | Anonymous

    At our spring social meet-up, GBEN was very fortunate to have Leslie Goodyear, American Evaluation Association (AEA) President and Principal Research Scientist at EDC, join us to talk about the upcoming AEA conference in November as well as her vision for the AEA.    

    In addition to joining the spring meet-up, Leslie generously agreed to participate in a short six-question email interview with GBEN member, Marion Cabanes.    Enjoy!


    © Leslie Goodyear

    GBEN: In 2016 you were formally elected the next president of the American Evaluation Association.  You expressed a hope to foster a collaborative approach among evaluators within the association and among other organizations, policymakers, scholars and practitioners.  Reflecting back on more than one year of this vision, what collaborative dialogues have you witnessed and how do you envision continuing and developing further dialogues on the importance and use of evaluation?

    Leslie:  Whoa! I said that?!? Just kidding. Both during my time as president-elect, and this year as president, I’ve had the chance to see how evaluators are influencing the field and those with power to influence policy. I speak mostly of what’s happening in AEA, but we know that every day, evaluators provide evidence and information to guide decision making and policy at multiple levels.

    You may know that through the Guiding Principles Task Force and the Evaluator Competencies Task Force, members have influenced the review and revision of our ethical principles and helped to define a set of AEA Evaluator Competencies. Both Task Forces took their charge to solicit member input very seriously, and implemented processes that incorporated the ideas and opinions of hundreds of members (through surveys, focus groups, and other calls for information and input). The AEA Board just voted to adopt both, and we’ll be rolling them out before and at the 2018 conference. In addition, our Evaluation Policy Task Force has had multiple successes this year in influencing the development and implementation of federal evaluation policy. Their work is not necessarily loud, but has resulted in strong relationships with policymakers and other influencers in Washington. I have no doubt we’ll see more from them in the coming years, too. 


    GBEN:  The field of evaluation has been evolving rapidly and seeing a greater diversity of evaluators of different stripes.  What do you think is the next era for evaluation in terms of innovations? Or what new fields could evaluation reach out to for informed decision making and organizational learning?

    Leslie:  When I started in evaluation – many years ago – we were a field of academics and we debated whether it was appropriate to advocate for the programs we evaluate and we argued about quantitative versus qualitative methods. Things have changed a lot since then! Now, we’re primarily an association of evaluation practitioners, and we’ve moved on to incorporate and debate new approaches to evaluation (e.g., feminist, developmental, systems) and new positions on everything from equity and inclusion to advocacy, methods, program theory, and 3D logic models. We offer more opportunities for professional development now, and we have new and more dynamic ways to present data (qualitative and quantitative!) and disseminate findings. I’m not a psychic, so I’m not sure what’s coming next, but I’m excited by the passion people have for making the evaluation field more diverse, and its processes and products more directly tied to decision making and action.


    GBEN:  You have travelled across the country and met many other local/state evaluation associations.  What do you think makes for a strong local evaluation association or network?

    Leslie:  It’s been a real pleasure to get to meet with so many evaluators as I’ve visited local affiliates and attended their meetings. As an organization, I think we can do more to connect the national organization to its local affiliates, whether through co-sponsored events, the common brochure, the common member registration, or other ideas. I love that there was an #EvalTwitter chat with the Local Affiliate Council and that so many people participated! I’ve heard from local affiliates that they would love more opportunities to share lessons learned and strategies, and that they would appreciate more opportunities to connect more with each other. I’ve been ensuring that the ideas that have been shared with me are passed right along to AEA staff who can collect them and, when possible, implement them.


    GBEN:  In addition to our bi-monthly roundtables, this blog post is our way of engaging with GBEN members.   What would you like to know from the GBEN community that could influence your work at the national level?

    Leslie:  First, let me just say thanks for the invitation to contribute to the GBEN blog. I loved getting to meet GBEN members at the social event in Cambridge, and I look forward to connecting again soon, whether at the AEA conference in Cleveland or another GBEN event.   

    I’d love to know what you’re seeing in your evaluation worlds! Are funders and programs drinking the evaluation kool-aid? Are there opportunities the association, or local affiliates, could capitalize on? Are there challenges you’d like to share that you think might be more common than just to you in your work? What do new evaluators need from AEA? What about seasoned evaluators? What are you seeing with regard to trends and concerns?


    GBEN:  What is your favorite part about being an evaluator as well as being a part of the local and national professional evaluation community?

    Leslie:  Easy! My favorite part of being an evaluator, and being part of the local and national community of evaluators: meeting smart, dynamic, engaged, quirky, diverse, passionate, thoughtful people who want to use their amazing skills to make this world a better place! (However you define, operationalize, and measure that. Ahem.)


  • 05/23/2018 3:51 PM | Anonymous

    Having Conversations with Stakeholders Unfamiliar with Evaluation

    By Marion Cabanes

    On April 4th, GBEN hosted a roundtable discussion where members shared their experiences talking with stakeholders unfamiliar with evaluation.

    Members discussed cartoons from the well-known evaluation website, www.freshspectrum.com.  Many of these cartoons portray situations where an evaluator experiences resistance to, or lack of understanding about, evaluation from a client, and illustrate how to go about changing the client’s perspective on the role of evaluation for a particular project.




    When you implement your project activities every day for many years, you may think you know the nuts and bolts of the project and what your clients need.  The reality, however, can be much different.  For example:

    • Evaluation tends to be overlooked, or project staff simply don’t see how evaluation can capture the project’s successes or areas in need of improvement. They might also lack time for evaluation as they focus on implementing their project activities, especially where their clients’ immediate livelihoods are at stake (e.g., locating beds for the homeless).
    • Or, there is still a failure to understand how evaluation can showcase project activities and results.  Without evaluation, the data you’re routinely collecting loses its utility for reflecting on your project and learning from it.  It then gets harder to show the “specialness” of your work.


    When talking about the “specialness” of a project, make sure goals are explicit and that you are able to explain how the project will succeed or is meeting its goals.  Never underestimate the value of spending extra time developing an evaluation plan at the beginning of your project, as this will help you stay on track and/or measure how far off you are from reaching the ‘real’ goal, la raison d’être, of your project.



    Evaluation can oftentimes feel like just another administrative task to tick off at the end of a project’s life cycle.  GBEN members offered up some great tips on how to frame the role of evaluation differently with clients:

    • Stakeholders have an ethical responsibility to act, to intervene, and to assess whether their efforts and investment are worthwhile.
    • Evaluation is an opportunity to showcase how good a project is and communicate successes and learnings to project funders, stakeholders, and beneficiaries.
    • Evaluation is an integral part of developing a strong culture of learning.   Evaluation will help you and your team perform better and learn as you solve problems along the way.  Evaluation provides data to help make informed, feedback-based decisions about corrective actions.
    • It’s important to scope your evaluation appropriately by choosing wisely what’s important (and not important) to evaluate.  What is the right level of evidence that shows whether or not your project is working as designed and intended?

    A cool resource to help project staff measure project progress and performance and demonstrate results is “results scorecard” software, such as Clear Impact’s Scorecard or Asana.

    Click here to access the presentation from the roundtable (members only!). 


    - Marion Cabanes is a Boston-based evaluation specialist working in international development and an active member of GBEN.  


  • 01/02/2018 10:42 AM | Danelle Marable

    Along with our new website comes official membership!  GBEN's membership dues are reasonable and in line with other AEA affiliates.  Once you are a paying member of GBEN, you'll have access to the monthly newsletter, featuring events, jobs, and networking opportunities.  You'll also have access to the member forum, where you can pose questions to other members.  Looking to network with specific people?  You'll have access to the member directory, where you can search for people in specific fields or organizations.  You get all of this for $25/year.  Are you a student?  Sign up with your student email address and enjoy the benefits for only $15!

    Sign up by clicking "Join Us" above. 


Greater Boston Evaluation Network is a 501(c)(3) non-profit organization. 
