(intro music playing)

Deirdre Brennan: Hi everyone, and welcome to the 9th episode of Sparks. I am Deirdre Brennan. I am the executive director of RAILS, the Reaching Across Illinois Library System. We developed this podcast to spark conversation, comment, and debate about current trends and issues affecting all types of libraries. Today my guest is Rebecca Teasdale, who is a great friend and colleague. We have worked together on a number of things, and in a number of places, over the years. And Rebecca is going to talk to us about evaluation. She has moved from being a public librarian to studying evaluation and is, I think, hoping to bring what she learns to the world of libraries someday. But I think I'll let Rebecca tell us a little bit about how she got into the evaluation field and what the journey has been like for her.

Rebecca Teasdale: Thanks, Dee. Thanks for having me. So, I'm currently a senior evaluation and research associate at the Garibay Group, which is based in Chicago. And I'm also pursuing my PhD at the University of Illinois down at Urbana-Champaign, in educational psychology, specifically in evaluation methodology. When I was working as a public librarian, I increasingly found myself needing to make decisions about what was working and what wasn't working in the programs and services that we were offering. And I realized that I didn't have very good methods for making those decisions or gathering information to evaluate what we were doing in public libraries. And that led me on a bit of a journey that ended up landing me in the field of evaluation, which I didn't even know existed before I started looking into it myself. In libraries we sometimes refer to evaluation as assessment, but in the broader field it's typically called evaluation. It's a field of study and a field of practice that looks at how we can gather data in a systematic way to make determinations about what's working and what's not in our programs and services.

Deirdre Brennan: And obviously, I'm sure that our listeners will have realized already how important it is for librarians, no matter what type, to be able to evaluate your services in order to make sure that they are responding to your users' needs. Especially, but not only, when you don't have a lot of money. You don't have money that you can just keep spending on something that's not as useful as it might be. So that's one reason I'm so glad that you are here today to talk to us about this. So, educational psychology, and your journey has led you to the U of I, and you're currently a senior research… What's your title there?

Rebecca Teasdale: Senior Evaluation and Research Associate.

Deirdre Brennan: Right. OK. So can you just give us a little overview of what evaluation is? Maybe the different types of it, and kind of the general purposes of the different types, if there are different purposes?

Rebecca Teasdale: Sure. So a lot of times in the library field, we hear about evaluation for accountability purposes. For example, we might get a grant from IMLS or we might receive funding from another source, and then we're responsible for reporting back what we did with that money and how successful that grant project was. So that's an example of the accountability purpose in evaluation. We also hear a lot about using evaluative data for advocacy purposes. So we want to be able to make a case to legislators or to local officials about the value of the library.
I think actually what's more interesting, and has more promise for libraries, is looking at how we can use evaluation to improve our programs and for organizational learning. And so that's the kind of evaluation that I tend to focus on in my studies, although I do both kinds of evaluation in my practice as well. But I think evaluation, when used for program improvement or organizational learning, lets us kind of open up the parameters a little bit. So instead of just looking at whether or not we reached our goals or reached certain outcomes that we set for ourselves, we have the opportunity to look more broadly at what's going on with our programs and services, and particularly how it looks through the eyes of the people who are using those services. So it gives us an opportunity to sort of stand in their shoes and see how that fits in with their lives, how that makes sense for them, and where we can do a better job of serving our communities or our stakeholders.

Deirdre Brennan: So we're actually engaged in some evaluation here at RAILS. We're looking specifically at our member visits at this point, for exactly that reason: we want to find out if there are ways that we can make them better, more effective. And thank you for your help with that, by the way. When I was previously a public library director, and since I knew you were working on this – because we've talked about this a few different times, and I'm very interested in it personally – it always struck me that so many of the programs and activities that libraries do are not quantifiable, or are difficult to quantify. And the way I kind of explain it to myself is that people in our libraries are always engaged in learning, but it's informal learning. Or many of them are engaged in learning, and if we could just somehow get our arms around the outcomes or the impact of that informal learning, that's how we would really evaluate the success of our programs. Does that resonate with you? Am I explaining that correctly to myself?

Rebecca Teasdale: Yeah, that definitely resonates with me, and I think there are a couple of reasons why that's challenging to do, and also really important to do. The first thing is that the nature of informal learning is individualistic, and it varies from person to person. And we see that in our libraries, right? Each person comes in with their own aim or goal for their visit, and it fits into their life in a different way than the person next to them. So in libraries we hear a lot about outcome-based evaluation. And that is based on the idea that we as the library staff are going to set some target outcomes for our program or service in advance, and then we're going to measure our progress toward accomplishing that. For any teachers out there, this will sound familiar. It comes from the work of Ralph Tyler, who advocated for the idea of behavioral objectives in education. And that approach is definitely well represented in the evaluation field as well. But it doesn't always work well for everything that we do, particularly in public libraries. And that's one of the reasons why I felt like I needed to dig into this a little more: there are many areas where we can't set those objectives in advance, or it's not appropriate for the staff to sort of be deciding those, determining those, in advance. So that's one issue, this very individualistic nature of public library use.
I think another issue in libraries is that we're a little bit under-theorized about how that learning takes place. I don't know about you, but when I went to library school, we didn't learn a lot about learning. We didn't cover learning theory or what learning looks like. Informal learning, lifelong learning, interest-driven learning, connected learning… there are all these terms that we throw around in the field, but we don't actually know a lot about how those things unfold. And so in order to be able to evaluate and determine to what extent that learning is taking place, I think we need to know more about what learning looks like, so we can recognize it and describe it, and ultimately the goal is to be able to better support that learning.

Deirdre Brennan: Yes, and certainly to be able to explain to… I guess this is about both accountability and program improvement. To be able to explain to funders, governing authorities, boards… that this is learning that's going on, and it's not just hanging out because you've got some spare time and so you go to the library. People are actually learning and improving their lives, or whatever – it's having an effect on their lives – and that is just so critical. I think it's always been critical, but so many programs are threatened right now… federal funding, state funding. So I just think that evaluation has become even more important than it was before. So when you talked to us originally at RAILS about evaluation, you described a normative… no…

Rebecca Teasdale: Formative and summative.

Deirdre Brennan: Formative and summative, sorry. I'm making up my own kind of evaluation… so could you describe the difference there? Because I think those are really interesting too.

Rebecca Teasdale: So evaluation is often divided into two big buckets. One of those is formative, and one of those is summative. Formative evaluation is evaluation that's done during the life cycle of the program, and the information is used to improve the program or check its progress as it unfolds. Sometimes that's also called a process evaluation. A summative evaluation is something that takes place either after a program concludes or at a designated time point, to look back retrospectively or to sum up what's been taking place in the program. So often when we want to report back to a funder about a program, that's going to be a summative evaluation. We're going to wait until our grant-funded program has run its course, for example, and then take a big-picture look at how it went and what the outcomes were. But in your example of your member engagement initiative, it sounds like you're going to want to be doing more formative evaluation, to help form the service that you offer. You're going to be gathering data along the way and then feeding that back into developing your program. Bob Stake, who's an emeritus professor at Illinois, talks about this with an analogy that I really like: formative evaluation is when the cook is making the soup and tastes the soup along the way to know what ingredients or seasonings need to be added. Summative evaluation is when the soup is tasted after it's finished, and that's when the person I'm serving it to gets to say whether it's good soup or bad soup.

Deirdre Brennan: Yes, that is a great analogy, I think. So, you've been studying this for a while, working on different projects.
Could you describe some of the more interesting or surprising findings from some of the evaluation projects that you've worked on?

Rebecca Teasdale: Yeah, so sometimes I talk about evaluation as a process of reality testing. As a program person, as someone designing programs and services, we have an idea in our minds about what we're hoping to accomplish and how we're hoping to get there, and there are a lot of assumptions baked into that process. So evaluation can be a very helpful process for checking out those assumptions and doing some reality testing. A couple of examples come to mind. One study that I was working on was in an after-school program. We were looking at the children's engagement with science activities in the after-school program, and then there was parent education, so that their parents could better understand how to support their children. But when we actually started gathering data from the participants, we quickly figured out that the parents who were in the parent workshops did not actually have a child in the after-school program. So for the kids in the after-school program, it wasn't their parents who were going to the workshops. If we had just looked at outcomes, it's true that the parents in the workshops learned a lot, and the kids in the after-school program learned a lot. But the larger goal of the program wasn't accomplished, because they had just assumed that the "right" parents were going to show up. So that's an example where there was a design flaw in the programming that was able to be addressed because that information came to light. Another example I've seen is in maker spaces, which is the area that I study: adult engagement with maker spaces, and how on earth do we evaluate that? One thing that I've seen that I found very interesting was that often, as librarians, we are hoping to create a very interactive making experience, where neighbors are working with one another and collaborating on projects. But sometimes what I've seen in actual maker spaces is that folks are coming in and using the resources and equipment almost like you would a copy machine or a printer: they come in to produce an item and leave again. And so it sort of questions the assumption of what that space is for and how it is going to fit into people's lives. And there's no right or wrong here; one is not better. It's more about figuring out what the community needs and wants and how the community is using that space, and then the ways that it aligns, or doesn't align, with the staff's goals or the staff's planning.

Deirdre Brennan: Yes, and it obviously makes the point that… you really have to test your assumptions, and the data gathering is critical. Right?

Rebecca Teasdale: Right, absolutely. And that's one of the things that distinguishes evaluation from a lot of what we do already, because we obviously are trying to figure out if our programs are good or not – if they're working. We're continually paying attention to that feedback from the public and in our personal interactions with folks in the library. But evaluation is a more systematic process that lets us go through a very disciplined and structured process to gather that data, analyze it, and use it to answer questions that we have about our program. And one of the things that I've learned along the way and find surprising is that as librarians, we expect ourselves to be able to do this, right?
I feel that we have really high expectations for ourselves, that we feel like we should be able to plan the program, run the program, and evaluate the program, right? And that also ties into our tendency as librarians. We love data. We love information. And so we also tend to collect it all the time. I was the director of reference services for a long time, and I can't tell you how many hash marks I made on my little pad of paper and how many hash marks I counted each month to report to the board. So we have this idea of continual data collection, and that is some of what we hold in our minds about assessment. I think some of the promise that evaluation holds for the field is to step back from that and do a time-based, structured study of a certain program. And it's often really beneficial to bring in someone who has particular expertise in evaluating programs. There are a number of reasons for that. Some of it is technical in nature, right? I don't know about when you went to library school, but I didn't learn a lot of research and evaluation methods, and so the technical aspects are often difficult for a lot of librarians to pick up. There's also a certain degree of impartiality that comes from bringing in an external evaluator, who can sometimes see things a little more clearly. Sometimes that means seeing things more positively than the library might realize, and sometimes that means seeing the shortcomings of a program as well. And then members of the public can often feel good about speaking with an external evaluator, because they can have strong relationships with the library and wouldn't want to say something to hurt someone's feelings or damage the relationship, even if they have a great suggestion that they want to offer. But then the other level is that evaluators are trained methodologically. So there's the method, which is the technical aspects, but there are also broader and kind of higher-level questions. We were just talking about outcome-based evaluation as one type of evaluation. Knowing which type of evaluation to use when will shape what you're able to learn from a study. I often find that when people are dissatisfied with an evaluation that they've conducted, it's because the questions that were asked or the methodological approach that they used took them in a certain direction that didn't align with what they really hoped to know. For example, in libraries, we really love a survey. Often I find in library evaluations that the go-to data collection method is conducting a survey. And that has some real affordances for gaining information and insight into what's going on with a program or service, but it also has some real limitations that need to be accounted for. And so often we can have greater insight and impact if we think more broadly about methods, and perhaps mix methods and do a mixed-methods study.

Deirdre Brennan: So what's an example of a limitation of a survey?

Rebecca Teasdale: So, surveys are designed to present a set of standardized items to a large population, and then you want to aggregate that data. If we think about the Pew Internet and American Life surveys, those are kind of the classic surveys: there's a standard set of questions administered to thousands of people. Surveys are pretty limited in the open-ended or qualitative data that they can collect.
And in libraries sometimes we tend to push the envelope on that, and we use surveys to try to get that qualitative data. But I think we quickly find that it's pretty limited: when we get a sentence back from someone, it's really hard to understand what they meant in that comment. Or when someone skips that field and doesn't give us an open-ended comment, it's really hard to know why that is. But if we sit down and do an in-depth interview with someone, or if we use observation or some other forms of data collection, we can get a lot more nuance and insight into the program and the participants' experience with it.

Deirdre Brennan: So one of the current large projects in library-land related to evaluation or assessment is Project Outcome. I guess it's related to outcomes because it's called Project Outcome. And this is something that the Public Library Association has been working on. And obviously outcome-based assessment is an important piece in looking at what it is that you're doing and how your users are feeling about it, in a sense. But I'm wondering if you could talk a little bit about how outcome-based evaluation, in the way that Project Outcome is thinking about evaluation, fits into the larger field of evaluation that you've been talking about.

Rebecca Teasdale: Yeah, sure, I'd be happy to. I was one of the original task force members in Project Outcome, back when it was the Performance Measurement Task Force. And one of the really striking things I found early on was that we gathered data from the public library community to try to get a sense of the "lay of the land" before we got started. And so we did a survey, even though I just kind of… (laughter)… to get a "lay of the land." So we did a survey of public librarians and asked who was already measuring outcomes in their libraries, and asked them to provide examples. And it was really surprising to look at that data, because there were a lot of libraries that reported that they were measuring outcomes, but when we looked at the examples they provided, many of those were actually not outcomes. So, an outcome is a change in an individual. It could be a change in skill or knowledge. It could be a change in behavior, or in their life status. And the only way to determine an outcome is by talking to the person who had the experience in the library. So when we asked the general public library community to report what outcomes they were collecting, sometimes we heard things like program attendance as an outcome. But that isn't a change in an individual; that's more of an output, the busyness that we have, the volume of the service that we are providing.

Deirdre Brennan: A hash mark.

Rebecca Teasdale: It's a hash mark, exactly. So that was one of the early eye-openers for me: that a big component of Project Outcome needs to be education for the public library community around what an outcome is and why it is important to measure. Because we've been measuring performance outputs for years, and they're still really important. We need to know how busy we are. We need to know what activity is taking place, and we want to know what difference it makes. So it's sort of like the outputs are the "what are we doing," and the outcomes are the "so what" – what difference does it make? Part of Project Outcome is doing that education.
I think another component of Project Outcome is that the aim is to put tools in the hands of librarians so they can measure some of these outcomes themselves in their own libraries. And by definition, doing that requires having some pre-established tools that are ready to be put on the ground in a variety of different libraries. But we know from our experience with libraries that there's a huge diversity in the public library community. Some libraries have a lot of experience and a lot of resources around evaluation, and others are just getting started or might not have resources. And so I think the other opportunity of Project Outcome is for folks to plug in at the different levels that make sense for their library. Libraries that have been doing a lot of thinking about outcomes, and have been doing some outcome-based evaluation, may find that it's time to bring in an external evaluator and do a deeper study. But for other libraries that are just getting started, it might be a great entry point to be talking about and measuring some basic outcomes.

Deirdre Brennan: So, related to that, how would you advise our listeners – librarians who want to get started? We know about Project Outcome, but are there other… what would you advise them to do to get started on evaluation for their library? Read a book? That's always sort of my go-to…

Rebecca Teasdale: Yeah, so there are some great books that I can definitely… can we put links on your website?

Deirdre Brennan: Yeah.

Rebecca Teasdale: OK. So there are some great books and websites for getting started, and we can put links to some resources like that on the RAILS website. I think talking to other libraries that have done evaluation can also be beneficial. And then I would actually say to kind of get out of library-land and talk to some of our colleagues in other fields. Right now I work with the Garibay Group; we work with museums. So I've learned a lot from working with museums and understanding what evaluation looks like in that context. Social service agencies have a lot of experience with evaluation. Places like United Way have been doing outcome-based evaluation for many years and have some really great resources. Another great first step is to look at our program and service design, because evaluation can provide feedback on how well it's working, but we can start now by reflecting on what assumptions are built into our programs and whether those assumptions seem to be holding up or not, based on our day-to-day experience. Because sometimes we come in to evaluate a program and find out there's something on the surface that's pretty clearly not working about the program. And we don't need to waste money doing an evaluation of a program when we can just have a conversation about what might be fine-tuned in the program first. So I think taking a critical look at programs and trying to identify where there might be some gaps, where there might be some shortcomings in the internal logic or in the assumptions that are baked in, can be a great place to start.

Deirdre Brennan: So, what's next for you, Rebecca? You're working on your thesis, I think? Is that right? You're teaching… so what's your ultimate goal, and when do you think you'll finish this? Well, not finish the journey, but sort of this stage of the journey.

Rebecca Teasdale: Yeah, so I am teaching now. I teach program evaluation at the University of Illinois at the Chicago campus, in the College of Education.
So that's a lot of fun. And I am currently working on my dissertation proposal. I'm looking at methods of evaluating public library maker spaces for adults and trying to figure out how to really get at the problem, or challenge, of participation being very individualistic and self-directed, so it's not something where we can set a standardized set of predetermined outcomes to measure progress toward. I'm hoping to defend my proposal this summer, and then it would be about 18 months to complete the dissertation and defend that. So I'm on track for that. As far as the long term, my goal has always been to do some evaluation, to do some research, and to do some teaching. I'm not sure what the ratio will be among those, but those are three things around evaluation that I would really love to continue to do, hopefully related to libraries and museums and other sites of informal learning.

Deirdre Brennan: Well, as I've said to you in the past, I think this is just critical for libraries, and I can certainly see that museums have a similar if not a greater problem. Not a problem exactly, a challenge, in terms of determining what it is that people do when they're in the museum and what the impact of that is on their lives. So, as much as I'm sad that you're not a librarian anymore…

Rebecca Teasdale: Oh, I'm always a librarian. Once a librarian, always a librarian.

Deirdre Brennan: You're right, you're right. But I'm really glad that you're working on this. So I want to thank you for talking to us today. And I know you're also pretty active and visible at conferences – library conferences, other conferences. So I guess we will all run into you every now and then at some programs where you are talking about evaluation too, right?

Rebecca Teasdale: Yeah, that would be great. Thank you for having me.

Deirdre Brennan: Thanks, Rebecca.

Rebecca Teasdale: Thank you.

Deirdre Brennan: Thank you very much for listening to Sparks today. Sparks is produced by the Reaching Across Illinois Library System. If you would like to learn more about the show or share your feedback on the topics discussed, please reach out at railslibraries.info/sparks.

(outro music playing)