Throughout the semester, we will feature highlights and insights from Dr. Lena and Dr. Mangione’s time in Palestine last summer. This is the final installment of the three-part series.
by Dr. Gemma Mangione
What is evaluation, and why is it important for arts professionals? I’ve written on this issue in other contexts, including in a recent blog post for RK&A, the planning, evaluation, and research firm serving museums where I work as a consulting analyst. As I suggest in the post, the question has in some ways become a rhetorical one. In an era of ever-increasing accountability to external stakeholders — primarily funders — evaluation savvy is not just an optional set of marketable professional skills but a necessity for the practice of arts management. Nevertheless, few arts professionals have either the training to lead smaller-scale evaluation projects or the resources to work with external consultants who can help them develop larger or longer-term initiatives.
When Dr. Lena and I completed our needs assessment for the Palestine project, we also found this to be the case with the Palestinian arts professionals we surveyed and interviewed. The majority of the arts and culture organizations in Palestine where we conducted our survey reported collecting data to assess organizational performance and generate new ideas for programs. Most of this data collection was informal, however, such as asking for feedback from staff and volunteers or scanning comments posted to social media after a program. Further, fewer than one-third of the arts professionals we interviewed described any regular process relying on, for example, an external evaluator, a specific evaluation plan, or specific instruments like surveys. When I lead capacity workshops, I like to define evaluation for students as the systematic collection of data that assesses success and shortcomings against what you want to achieve. Broken into its components, the definition requires being mindful of process (systematic); collecting information specifically for analysis (data); and being clear on your intentions (what you want to achieve).
Based on the results of the needs assessment, I developed a three-day curriculum module (in keeping with the length and focus of our other modules: marketing, fundraising, and strategic planning). The second day offered a general introduction to interview and survey design: two of the methods most often used in-house by administrators, and thus most usefully introduced through basic training in best practices. That day students also enjoyed a visit from Usama Khalilieh, a lecturer at Bethlehem University, who built on the morning’s instruction with practical exercises in evaluation design. On the third day we focused specifically on the evaluation component of fundraising: how to design an evaluation and write it up for a grant application, and how to plan for the evaluation report due to funders after money is awarded.
But the first day was perhaps the most important and served as the basis for what followed. After introducing results from the needs assessment, I broke down basic differences among evaluation, basic research, and applied research; introduced broad differences among qualitative, quantitative, and mixed methods; and engaged students in collective brainstorming about the importance of evaluation and the kinds of questions it can help us answer about our audiences and our work as arts administrators. This culminated in an overview of common types of arts research and evaluation, as formulated by Sarah Lee — formerly of SloverLinett and presently at Sarah Lee Consulting — and presented to the ARAD community as part of her microcourse on arts assessment in 2018. Sarah’s “mental map” of research and evaluation can be particularly useful in helping arts administrators develop a vocabulary for what kind of evaluation projects they might need, and why.

Further, that afternoon we spent time exploring RK&A’s “intentional practice framework,” usefully explicated in Randi Korn’s recent book. What’s most valuable about this framework is how it pushes administrators to incorporate evaluation into everyday operations. If an arts organization is clear on its impact — the difference it hopes to make in the quality of life of its audience members, articulated in a clear impact statement — the cycle of intentional practice can easily be built around that impact. This involves planning for the impact the organization wants to achieve; assessing the ways it has achieved that impact; reflecting on what it has learned and how it can do better; and then aligning actions to better achieve impact moving forward. In this way, arts administrators can begin to think about evaluation as one component of the broader process of doing “good work” aimed at achieving their goals, rather than a technical process they must scramble to complete whenever a report is due to funders.
As with all pilot programs — and as with good evaluation! — it’s worth reflecting on what went well versus what could have gone better. On the one hand, students reported that the module was accessible, appreciated the exercises and hands-on learning opportunities it incorporated, and expressed a desire to learn more about content of this type. On the other hand, one notable feature of the students participating in our professional development course was the range of organizations they worked in, which corresponded directly to the kinds of resources available to them. It’s a challenge to develop tailored instruction for building evaluation capacity when working with professionals who head an organization of three staff members alongside those who are part of a broader national network with secure international funding. Additionally, I relied primarily on American and Western case studies in my instruction, and our students reported in their summative evaluations a keen interest in locally relevant examples. As Dr. Lena and I noted in our evaluative report, future iterations of this instruction should spotlight arts professionals active in Palestine who have undertaken evaluation projects or who regularly rely on evaluation techniques in their everyday work.