• 07/31/2019 10:14 AM | Anonymous

    Big data.  Data science.  Predictive analytics.  Social network analysis.  The field of evaluation is expanding to new frontiers, becoming a transdisciplinary practice.   

    Based on your work experience and interests in the field of evaluation, what are the next areas you want to learn more about and integrate into your evaluation practice?

  • 06/26/2019 3:24 PM | Anonymous

    On Tuesday, June 18th, GBEN hosted its second roundtable on the topic of social network analysis.   Over a dozen GBEN members and guests participated.  The roundtable discussion was led by Kelly Washburn, MPH, from Massachusetts General Hospital’s Center for Community Health Improvement.  Kelly is also one of GBEN’s Programming Committee co-chairs.

    Social network analysis (SNA) is “the mapping and measuring of relationships and flows between people, groups, organizations, computers or other information/knowledge processing entities” (Valdis Krebs, 2002). SNA can reveal the performance of the network as a whole and its ability to achieve its key goals; characteristics of the network that are not immediately obvious, such as a smaller sub-network operating within it; the relationships between prominent people of interest whose positions may provide the greatest influence over the rest of the network; and how directly and quickly information flows between people in different parts of the network.

    Kelly presented a small social network analysis she conducted in order to walk participants through the steps involved, the challenges, and the lessons learned.  The project discussed was a provider task force focused on improving connections among service providers, streamlining services, and enhancing care coordination efforts.  The SNA provided a baseline on how the task force members work with each other by asking four questions:

    1. Do you know this person?
    2. Have you gone to this person for information in the last year?
    3. Have you worked with this person to obtain new resources in the last year?
    4. Have you referred a client to their organization in the last year?

    The analysis was done in Gephi, a free, open-source software package for conducting social network analyses.  Data cleaning was the most tedious part of the project and was done manually; however, there are ways to partially automate it. Once the data were organized into the appropriate Nodes and Edges files, they were uploaded into Gephi, and the steps detailed in the Gephi manuals were followed to take the project from the initial map to the finalized map.  Following Kelly’s discussion of her project, others in attendance spoke about their own experiences using social network analysis in their work.
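
    As a rough illustration of the Nodes and Edges files mentioned above, here is a minimal Python sketch that writes Gephi-importable CSVs from survey responses. The names, column headers, and question labels are invented for this example; adapt them to your own survey export.

```python
# Sketch: preparing Gephi-style Nodes and Edges files from survey data.
# Names, column headers, and question labels below are hypothetical.
import csv

# Each row: who answered, who they named, and which survey question it came from.
responses = [
    {"respondent": "Ana", "contact": "Ben", "question": "worked_with"},
    {"respondent": "Ana", "contact": "Carla", "question": "referred_client"},
    {"respondent": "Ben", "contact": "Carla", "question": "went_for_info"},
]

# Nodes file: one row per unique person, with Id and Label columns.
people = sorted({r["respondent"] for r in responses} | {r["contact"] for r in responses})
with open("nodes.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Id", "Label"])
    for p in people:
        w.writerow([p, p])

# Edges file: Gephi expects at least Source and Target columns.
with open("edges.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Source", "Target", "Type", "Question"])
    for r in responses:
        w.writerow([r["respondent"], r["contact"], "Directed", r["question"]])
```

    Files in this shape can be loaded through Gephi's Data Laboratory import, which sidesteps some of the manual cleanup discussed at the roundtable.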

    Key Challenges and Lessons Learned:

    The roundtable participants discussed a few challenges and lessons learned when conducting a SNA, including:

    • New analytical methods and techniques, like SNA, can require a lot of patience and time to learn and master.  Be sure to invest the necessary time when learning how to conduct a SNA for the first time.
    • A high response rate requires A LOT of follow-up to ensure the data are representative of the population you are analyzing.  Be sure to invest the necessary time and resources in follow-up for your project.
    • Make sure the questions being asked are the right questions, as it’s difficult to change direction once the project and analysis have started.
    • Continually ask yourself and/or your team(s):  Do I need to collect new data or is there already collected data I can use for the SNA?  
    • SNA can be frustrating to administer and master at times.  Patience during the process is key to ensuring a successful outcome. 
    • The visual map was key for the task force in understanding the analysis. 


  • 05/28/2019 1:12 PM | Anonymous

    Feminism, at its core, is about the transformation of power—but how do you know that’s happening at the organizational level? How can you understand the core drivers of that transformation? How can your own process of evaluating that transformation democratize the evaluators’ power?

    Taylor Witkowski and Amy Gray are evaluation and learning specialists at Oxfam America, designing and testing feminist monitoring, evaluation and learning processes for a gender justice-focused organizational change initiative.


    Everything is political – even evaluations.

    Traditional evaluations, even when using participatory methods, prioritize certain voices and experiences based on gender, race, class, etc., which distorts perceptions of realities. Evaluators themselves carry significant power and privilege, including through their role in design and implementation, often deciding which questions to ask, which methodologies to use, and who to consult. 

    Feminist evaluation recognizes that knowledge depends on cultural and social dynamics, and that some forms of knowledge are privileged over others, reflecting the systemic and structural nature of inequality. However, there are multiple ways of knowing that must be recognized, made visible, and given voice. 

    In feminist evaluation, knowledge is power and should belong to those who create, hold, and share it. The evaluator should therefore ensure that evaluation processes and findings attempt to bring about change, and that power (knowledge) is held by the people, project, or program being evaluated.

    In other words, evaluation is a political activity and the evaluator is an activist.


    Oxfam America is seeking to understand what it means to be a gender just organization—from the inside out.

    In order to do this, Oxfam America recognizes that a holistic, multi-level approach is required. We believe that transformational change begins at the individual level and ripples outwardly into the organizational culture and external-facing work (Figure 1).

    (Figure 1)

    This is why we are investing in a feminist approach to monitoring and evaluation: even though feminist values (adaptivity, intersectionality, power-sharing, reflexivity, transparency) seem like good practice, without mainstreaming and operationalization they would not be fully understood or tied to accountability mechanisms at the individual, organizational, or external levels.

    Therefore, as evaluators, we are holding ourselves accountable to critically exploring and implementing these values in our piece of this process. The foundational elements of this emergent approach include:

    • Power-Sharing: The design, implementation and refinement of the monitoring and evaluation framework and tools are democratized through participatory consultations with a range of stakeholders—a steering committee, the senior leadership team, board members, gender team and evaluation team.
    • Co-Creation: Monitoring includes self-reported data from project contributors as well as process documentation from both consultants and evaluation staff, and data is continually fed into peer validation loops for ongoing reflection and refinement.
    • Transparency: Information regarding the monitoring and evaluation framework, approach and activities is communicated and made accessible to staff on a rolling basis as it evolves.
    • Peer Accountability: Monitoring mechanisms that capture failures and the cultivation of peer-to-peer safe spaces to discuss them create new opportunities for horizontal learning and growth. This includes a social network analysis (SNA) of perceived power dynamics within teams (contributed by team members via an anonymous survey), followed by a group discussion in which they reflected on the visual depiction of their working dynamics through the lens of hierarchy and intersectionality.
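
    To illustrate what an analysis like the one described under Peer Accountability might look like: in a directed network where an edge A → B means “A perceives B as holding power,” a node's in-degree offers one rough proxy for perceived influence. This is a hypothetical sketch, not Oxfam's actual method; the names and edges are invented.

```python
# Sketch: in-degree as a rough proxy for perceived power within a team.
# An edge (a, b) means "a perceives b as holding power"; data is invented.
from collections import Counter

edges = [
    ("Ana", "Dev"), ("Ben", "Dev"), ("Carla", "Dev"),
    ("Dev", "Ana"), ("Carla", "Ben"),
]

# Count how many peers named each person.
in_degree = Counter(target for _, target in edges)

# Rank team members by how often peers named them as power-holders.
ranking = sorted(in_degree.items(), key=lambda kv: -kv[1])
print(ranking)  # "Dev" is named most often, suggesting the highest perceived power
```

    In a group discussion, a visual of this network (rather than the raw counts) is usually what prompts reflection on hierarchy and working dynamics.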


    As the monitoring, evaluation and learning (MEL) staff working on this initiative, we recognize that we have an opportunity to directly contribute to change. We therefore see ourselves as activists, ensuring MEL processes and tools share knowledge and power as well as generate evidence that reflects diverse realities and perspectives, and can be used for accountability and learning at multiple levels. As a result of this feminist approach to MEL, participating Oxfam staff can see and influence organizational transformation.                                                                       

    How have you used feminist evaluation in your work? Do you have any tips, resources or lessons learned you’d like to share? Do you think this would make a good roundtable discussion?



  • 04/29/2019 3:12 PM | Anonymous

    As evaluators, we sometimes collect more data than we can use.

    What are 1 or 2 methods or tricks you use to make your data collection process more meaningful and/or more aligned to your evaluation questions?

  • 03/27/2019 10:52 AM | Anonymous

    On Tuesday, February 5th, GBEN and Northeastern University’s Public Evaluation Lab (NU-PEL) co-hosted a panel on Impact Evaluation.  This was the first GBEN event of 2019 and the first event co-sponsored with NU-PEL.  The event saw the greatest turnout in the history of GBEN with 66 attendees!

    The panel featured five local internal and external evaluation leaders who have recently undergone randomized-control trial (RCT) or quasi-experimental impact evaluations. 

    More and more, non-profits must demonstrate impact in order to ensure their ability to grow and innovate.  The purpose of the panel discussion was to explore what drives non-profits to engage in an impact evaluation, how to choose methodology, and lessons learned about communicating results. 

    Here are some of the key takeaways from the engaging panel discussion:

    Methodological Rigor vs Reality

    Several of the panelists discussed the push-and-pull between ideal methodological rigor and what is actually possible and/or ethical for programs.  In particular, Ms. Britt and Mr. Nichols-Barrier spoke about whether randomization was feasible, depending on program over-subscription.  On the flip side, Ms. Goldblatt Grace and Professor Farrell from My Life My Choice shared a powerful anecdote about overcoming skepticism toward their project’s rigorous methodology in order to allow a research assistant to be present at the mentor-mentee match sessions.
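
    The over-subscription idea can be sketched simply: when more people apply than a program can serve, a lottery for the available seats assigns them fairly and simultaneously creates a natural comparison group. The numbers below are invented for illustration.

```python
# Sketch: using program over-subscription to randomize (a seat lottery).
# Applicant counts and names are hypothetical.
import random

random.seed(42)  # fix the seed so the lottery is reproducible and auditable

applicants = [f"applicant_{i}" for i in range(100)]
seats = 40

# Randomly offer seats: those drawn form the treatment group,
# the remaining applicants form the comparison (control) group.
treatment = set(random.sample(applicants, seats))
control = [a for a in applicants if a not in treatment]

print(len(treatment), len(control))  # 40 60
```

    The ethical appeal is that no one is denied a seat for the sake of the study; the lottery is only distributing seats the program could not have filled for everyone anyway.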

    Organizations Conduct Impact Evaluations for Lots of Reasons

    The motivating factors behind the decision to evaluate impact are diverse. Organizational values, political context, and funders can all play a role in the decision to conduct an evaluation as well as decisions around study methodologies. 

    Communicating Results

    Several of the panelists shared helpful tips about communicating results, specifically going beyond putting them in a report that few people read. Ms. Britt shared a strong example of Year Up developing a year-long plan for communicating parts of their results throughout the whole organization, including a big celebratory kick-off event. 

    GBEN would like to thank the five panelists for being a part of this incredible event as well as NU-PEL for co-hosting.  Be on the look-out for future co-hosted events with NU-PEL!

  • 12/19/2018 10:48 AM | Anonymous

    On Tuesday, December 4th, GBEN hosted its last roundtable of the year about budgeting for evaluation.  Like all of our roundtables this year, we had a great turnout, with 17 members participating.  The roundtable was designed to be an open dialogue focusing on the critical issues evaluators face when budgeting for evaluations. Guiding questions for the discussion included:

    • How do non-profits and other organizations fund evaluation staff, data management systems, and other elements of successful evaluations?
    • Is there a magic "rule of thumb" on how to allocate budget resources to evaluations?
    • Are there funders interested in supporting evaluation capacity building?
    • What are other processes or best practices to consider when budgeting for evaluation?  

    Here are some of the key takeaways from the group discussion:

    Evaluation staffing and capacity varies across organizations

    Some organizations still have little-to-no dedicated evaluation staff on the payroll.  For many organizations, having a dedicated evaluator is a new organizational initiative.  For those with dedicated evaluation staff, many positions are grant-funded, meaning that once the grant is over, the position may be reduced or eliminated.

    Evaluation budgets can span multiple departments

    Many organizations spread evaluation costs and budgets across varying departments and/or programs.  Evaluation budgets can include staff who may not have the word “evaluation” in their job title or job responsibilities, including front-line program staff, data management and/or administrative staff, and/or information technology and systems staff.  For example, an organization may employ a Database Specialist through their IS/IT department who does critical evaluation-related work by using Efforts to Outcomes (ETO) software. 

    Commitment from organizational leadership is key

    Like all facets of a good evaluation initiative, commitment from senior management within an organization is important for evaluation funding.  Senior leadership and management are often best positioned to seek out, advocate for, and request funding for evaluation from key funding partners. 

    Monitoring your evaluation work helps make the case for future or additional evaluation

    Evaluation staff may hold the key to the magical data kingdom, but often we don’t directly experience or see how the evaluation results are used to change or improve programmatic processes.  Be sure to document the collaborative process between the evaluation team and front-line program staff to highlight programmatic improvements that are a result of the evaluation findings. 

    Relationships with development staff are key

    It is very helpful to have a relationship between the development team and evaluation team, especially for grant writing.  Development staff understand funder needs and wants and can effectively communicate impressive evaluation results.

    Assert your evaluation needs!

    Most people have no idea what it takes to make an evaluation successful.  Often it takes more than just a staff person.  Evaluation needs can include certain software, equipment, consultants with specific expertise, and other administrative needs like postage and mailing supplies.  Make a wish list of things you need and share it!

    Seek out ways to reduce evaluation costs

    While planning and conducting an evaluation, it’s important to always ask: “What do we want, and what do we already have?”  For example, is there data already being collected, or are there existing data systems that could serve your evaluation project?  Finding ways to lower evaluation costs may make it easier to acquire the necessary budget resources for an evaluation initiative. 

    Resources:  (click here to access members-only resources from this roundtable)

    • Budgeting for Evaluation: Key Factors to Consider – a rubric developed by Westat for assessing how much to budget for evaluation
    • Budgeting for Evaluation from the 2014 AmeriCorps Symposium  

  • 10/31/2018 1:52 PM | Anonymous

    On October 5, 2018, Danelle Marable, GBEN President and Senior Director for Evaluation, Assessments, and Coalitions at the Massachusetts General Hospital Center for Community Health Improvement (MGH/ICHI), discussed two community health needs assessments that her team is working on: 

    1. Partnering with the North Suffolk Public Health Collaborative, municipalities, other healthcare providers, community coalitions, and organizations to conduct an assessment for Chelsea, Revere, and Winthrop.

    2. Partnering with all Boston hospitals to conduct a Boston-wide assessment. 

    Since 2011, the Affordable Care Act has required every non-profit hospital to conduct a community health needs assessment (CHNA), develop strategic plans, and post the findings publicly.  A hospital’s non-profit status can be revoked if the CHNA is not conducted.

    A CHNA is a systematic examination of the health status indicators for a given population that is used to identify key problems and assets in a community. The goal of a CHNA is to develop strategies to address the community’s health needs and identified issues.  A CHNA is instrumental in identifying the social and environmental conditions, as well as the social determinants, that can impact the health of these communities, such as childhood experiences, housing, income, employment, healthcare, and community conditions.  MGH identified Revere, Chelsea, Charlestown and Winthrop as primary communities to target with a CHNA and a community health improvement plan (CHIP).

    During the roundtable, Danelle discussed the process around the assessments, community engagement strategies, data collection efforts, and implementation. 


    The CHNA and CHIP is a year-long process that occurs every three years.   MGH uses the MAPP framework (Mobilizing for Action through Planning and Partnerships) for the CHNA and CHIP, which, in short, outlines a process of engaging partners in comprehensive data collection and strategic planning. MGH/ICHI dedicates one year to visioning, assessment, and identification of strategic issues; its Board of Trustees then reviews and grants approval, allowing 100 days to develop an implementation plan. In Massachusetts, the MGH Trustees are required to attend community advisory committee meetings in addition to other meetings.

    Community Engagement and Needs Assessment

    MGH/ICHI engages multiple sectors during the CHNA process:  residents, local leaders, community-based organizations, educators, and other hospitals in the communities.  MGH/ICHI started engaging other hospitals so as not to burden the community with repeated data collection.  Assessment and identification of needs can be a challenge, as prioritization is not always straightforward.  For example, residents of Chelsea spoke of issues of community violence and safety, but other data sources showed decreases in instances of violence in the community. 

    Data Collection

    Data collection for the CHNA involved survey data, focus groups, in-depth interviews, and secondary data sources, including a quality-of-life survey that addresses social determinants such as housing, transportation, and the overall community. Surveys are translated and distributed online and in print through various community networks.

    Implementation and Evaluation

    A crucial part of the CHNA process is the identification and implementation of evidence-based strategies around certain community needs and issues.  Part of this process involves community dialogue – asking key questions about what can be done, what resources are available, and whether there is the will to implement solutions.   This process does not always lead to straightforward answers.  For example, communities identified opioid use as a crisis and access to Narcan as a response; however, the community balked at increasing access to Narcan under the suspicion that it leads to increased opioid use.  The group had to step back and educate the community about the benefits of Narcan.  Once strategies have been implemented, the final phase is comprehensive progress monitoring, implementation evaluation, and impact measurement. 

    Copies of Danelle’s PowerPoint slides can be found in the member roundtable resources section (members only!).  For questions, reach out to Danelle by email.

  • 09/28/2018 9:19 AM | Anonymous

    On September 6, 2018, Dana Benjamin-Allen of Boys & Girls Clubs of America – and a longtime GBEN member – hosted GBEN's first professional development webinar, titled “How to Host Webinars that Don’t Suck!”  Webinars – live meetings and discussions that occur via the internet – can have a reputation for being boring, disengaging, and a waste of an audience member’s time.   For evaluation professionals, creating a data-rich yet engaging webinar can be a challenge. 

    Fourteen GBEN members joined the webinar to learn about tools to increase interactivity and engagement, best practices for online presentations, and the benefits of different distance engagement platforms.   Dana used several tools and activities to model ways to engage the audience and keep them interested in the content of the webinar.  You can access her slide deck on the Roundtable Resources section of the website (members only).  Below are some take away points for engaging your webinar audience.

    Framing the Webinar is Important!

    How you frame and present the webinar is critical to gaining interest and maintaining audience engagement once the webinar starts.  Start the webinar with an introductory slide that presents the intentions and objectives that best resonate with your audience.  It’s important to describe the presenter's relevant background and experience.  

    In addition, it is important to ‘level set’ your audience, that is, to identify the knowledge and experience level among your audience members regarding the webinar topic.  You may have audience members with diverse backgrounds, varying knowledge levels, and varying experience levels with the topic.  Level setting can help you home in on the content areas that may be most relevant to a diverse audience, as well as identify gaps in knowledge.   Hot tip:  create interactive activities using online polls and tools like GroupMap to level set your audience.

    Language matters, so be sure to use a catchy title to make the webinar more inviting.  The webinar title “Webinars that Don’t Suck!” is a perfect example!  The timing of your webinar is also very important.  For some audience members, mornings are better; for others, it may be lunch time.  If you are reaching a geographically diverse audience, be sure to schedule the webinar conveniently for all time zones. Lastly, experiment with different alternatives to the word webinar to increase engagement.  Hot tip: call it an online meeting, online course, virtual workshop, “brown bag”, or, like the American Evaluation Association, a virtual “coffee break.” 

    Choose the best facilitation tools and methods based on audience needs and intended engagement level:

    There are various webinar platforms on the market today with varying functions.  Do you need a live whiteboard?  Do you want to share and/or pass the screen to multiple presenters?  Do you want a live camera to accommodate live video of all participants and presenters?  These factors should be considered before choosing a platform that best fits the format of your presentation, the needs of your audience, and the intended engagement level.   Hot tip:  using shared experience activities, such as real-time polling applications, helps connect audiences as well as solicit feedback.

    In addition to picking a webinar platform, it is important to identify which facilitation method is right for you and your audience.  Is there one or multiple presenters?  Are you hosting a moderated panel discussion?  Is the presentation interactive and involve audience participation?  Hot tip: co-hosting the webinar with a colleague who has credibility with your audience can increase audience interest and engagement. 

    Lastly, decide in advance whether you will take questions and how you will manage them.  Do you need a colleague to help with this?  Will it be interactive (i.e., audio for all participants)?   Six to ten questions may be good for a live interactive presentation, but plan extra time, as participants may interact during each question.

    Presentation is Important:

    The design of your presentation slides is very important.  Keep your slides clean and easy to read.  If you are going to use a lot of text in a slide, plan accordingly to walk through the text with the audience.  Also, create intrigue and excitement from slide to slide using questions, images, and/or illustrations.  Lastly, use a strong, exciting, and engaging title.    Hot tip:  there are online alternatives to PowerPoint like Prezi, Visme, or Haiku Deck.

    During this webinar, Dana used the following tools (follow the links for more details):

  • 08/24/2018 11:00 AM | Anonymous

    On Tuesday, August 7th, Carrie Fisher, PhD, Research and Evaluation Manager at the Institute for Community Health, presented to over 20 people on qualitative data analysis at the Jewish Family and Children’s Services in Waltham.  Dr. Fisher is an anthropologist with interests in applied and evaluation research, innovative research methods, health and public health, and research with difficult-to-reach populations.

    Dr. Fisher provided a comprehensive overview of qualitative data analysis that began with a very interesting discussion about how concepts and meanings of “truth” and “reality” can vary in qualitative data analysis.  The presentation then focused on qualitative data analysis choices and methods, data management, data analysis, and key issues around data interpretation.  Below are a few key summary points from the discussion.

    Reality and Truth in Qualitative Data Analysis

    There is not just one truth when dealing with qualitative statements, and several factors can affect how truth and reality are defined, such as interviewer, analytical, and interpretive bias.  Even false statements can provide important lessons in the analysis.  It’s important to remember that qualitative data analysis is always selective and can be shaped by individual backgrounds, education, and even the mission and vision of the organization sponsoring the qualitative data project.

    Qualitative Data Analysis Choices and Methods

    Starting a qualitative data analysis project can be challenging, as there are many questions to consider: the quantity and type of data, the validity of the analysis, how the findings will be used, the intended audience, staff capacity to conduct the analysis, the time and money available for the analysis, and whether or not to use qualitative data analysis software.  

    Ultimately, a qualitative data analysis project starts when you decide on a data collection method.  Qualitative analysis is an ongoing iterative process and researchers and evaluators should meet weekly with those doing the data collection to review responses to questions to determine if a change to question design is necessary. 

    Qualitative Data Management

    It is critical to plan ahead for data management of qualitative data and to consider small, yet important, factors such as how you are going to record the data (written, audio recordings, video recordings), how you are going to conduct data entry (Excel, Word, other database), and which tools and/or software you may use to manage and analyze the data. 

    For example, one key data management consideration for in-depth interviews and focus groups is whether or not to use transcription services.   Verbatim transcription can be expensive: just one hour of audio or video can require up to four hours of transcription, which can cost hundreds of dollars.   Online transcription services can be a cost-effective alternative, with rates around $1 per minute. 

    If you are using qualitative data analysis software like Dedoose, NVivo, ATLAS.ti, or others, it’s important to remember that these tools only manage data; they do not automatically perform the analysis for you.  Coding and analyzing qualitative data with these tools are additional steps that require sufficient time and technical training in the software itself. 

    Qualitative Data Analysis

    It’s critical to choose your analytical approach with the end in mind.  Key issues to consider include:  level of detail, level of rigor, audience, how the information may or will be used, time and resources for analysis.  There are many different approaches to analyzing qualitative data including pragmatic thematic analysis, case studies, content analysis, and coding trees to name a few.

    Once your data has been collected and managed, schedule extra time to “swim in the data.”  Get to know your data well by reading over all notes and transcripts before doing any coding.  Write down preliminary thoughts on main themes, points of interests, and gaps in the data. 

    Lastly, coding of qualitative data is a critical step.  It’s important to organize codes into meaningful categories and to create a descriptive codebook to document definitions and themes in codes.  This is particularly important if your project has more than one analyst conducting coding in order to maintain consistency. 
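
    As a minimal sketch of what a descriptive codebook and a code tally might look like, here is a hypothetical example; the codes, definitions, respondent IDs, and counts are all invented for illustration.

```python
# Sketch: a minimal descriptive codebook and a tally of applied codes.
# Codes, definitions, and coded segments below are invented.
codebook = {
    "access_barrier": "Respondent describes difficulty reaching a service.",
    "peer_support": "Respondent credits peers or mentors with helping.",
}

# Each coded segment: (respondent id, code applied by the analyst).
coded_segments = [
    ("R1", "access_barrier"),
    ("R1", "peer_support"),
    ("R2", "access_barrier"),
    ("R3", "peer_support"),
]

# Count distinct respondents per code; this supports cautious summaries
# like "2 of 3 respondents mentioned access barriers."
respondents_per_code = {
    code: len({r for r, c in coded_segments if c == code}) for code in codebook
}
print(respondents_per_code)  # {'access_barrier': 2, 'peer_support': 2}
```

    Keeping the definitions in one shared codebook, as the paragraph above recommends, is what allows multiple analysts to apply codes consistently and makes tallies like this meaningful.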

    Qualitative Data Interpretation

    When you are ready to begin interpreting your findings, start by listing key points and themes, such as key confirmations, major lessons, new ideas, and applications to other settings and/or programs.   Some evaluators choose to summarize qualitative findings using quantitative framing (e.g., “9 of 10 respondents agreed with this idea…”).    While this can increase confidence in results, it should be used with caution. 

    Ultimately, the evaluator should always ask:  “Would I feel comfortable with the participant reading this?”  “Would the participants agree with my interpretation of the findings?”

    Dr. Fisher’s full presentation slides can be found here  (members only). 

  • 07/30/2018 12:37 PM | Anonymous

    A full evaluation program requires data systems and analytic capacity. Many organizations or programs feel like they’re not prepared to start building their internal evaluation capacity until they have developed logic models and databases. But there are only two elements that need to be in place in order to start evaluating your program. They are clarity about your organization’s mission and clarity about your organization’s target population. These elements are required to focus all of your data collection and analysis.

    © LinkedIn 2015

    Mission clarity is a single, shared understanding of why your program is in business. While most organizations know very well what they do, there is sometimes a lack of clarity about why they do it. For example, if you run an after-school program for teens, it could be intended to improve the youths’ academic skills (an educational goal), help them explore careers that might interest them (a workforce development goal), or pair them with a caring adult (a positive youth development goal). In order to begin building an evaluation program, the program’s mission needs to be clear. Without clarity about your program’s real mission, it will be impossible to prioritize performance measures or design performance measures that communicate your program’s value to all of your stakeholders.

    One way to document your organization’s mission is a simple strategy map, documenting the activity, the immediate outcome for your participants and the long-term outcome that you envision for them. It can be as simple as this:

    tutoring → passing grades → high school graduation → careers → economic well-being

    Developing this strategy map can be challenging, particularly when there are many possible positive outcomes of the activity your organization provides. It’s a necessity, though, because your data collection systems will be organized around outcomes that serve that one central purpose.

    © Brands with Fans 2015

    The second thing you need is consensus about who exactly your organization serves. Who is best positioned to benefit from your program? Who is the most important population for your program to be serving? For example, if your program serves an academic goal, the most important population to serve would be youths who are at risk of leaving high school without a diploma. If the goal is a workforce development one, the target population might be youths who are early enough in their high school careers to choose electives based on their future career plans. If your program’s goal is mentorship, your target population might be young people who are developmentally most able to benefit from a mentoring relationship. Measuring how closely the population you serve mirrors your ideal population requires clarity about that ideal. Clarity about who you serve will organize all of your data collection and analysis and define your unit of analysis.

    With these two things in place, your organization will be ready to develop measures that really indicate whether your organization is helping the population that you intend to serve and whether it is helping to move your participants or clients toward their goals. Equally important, this clarity will help you avoid collecting unnecessary data or focusing on metrics that don’t really drive your organization’s performance.

    By Pieta Blakely, PhD

    Pieta Blakely is a researcher and evaluator, specializing in quantitative evaluations of workforce development programs and local economic development initiatives. She has led multiple evaluation projects spanning education, labor economics, and urban economic development topics.

    As the Principal of Blakely Consulting, LLC, she focuses on working with not-for-profit organizations to build evaluation capacity, integrate evaluation into their program logic models, and use learning for strategic planning. Her clients include a range of anti-poverty and social justice organizations, particularly those that serve disadvantaged and minoritized youth.

    Dr. Blakely received her BA from Brown University in Organizational Behavior and Management and Anthropology, her MS in Administrative Science from Boston University, her MEd from Harvard University, and her PhD from Brandeis University in Social Policy.

Greater Boston Evaluation Network is a 501(c)3 non-profit organization. 
