Our Vancouver chapter of the Library Services for Children Journal Club held our fall meeting last week to discuss how and why we evaluate early literacy programs such as Mother Goose and storytime. We read and discussed an article about a research study designed to evaluate the impact of Regina Public Library’s Mainly Mother Goose program on caregivers’ support of their children’s early literacy development. Here’s a summary of the article and our discussion.
Article Summary
This article aimed to understand how the Mainly Mother Goose program may contribute to caregivers’ engagement in the development of their child’s early literacy skills. Noting the lack of research related to public library program evaluation, especially with regards to early literacy programs, the researchers gave a brief literature review and pointed towards studies out of Idaho and Ontario that showed positive impacts of preschool programs and parent education initiatives. The study used a quasi-experimental design, surveying caregivers before and after the program and conducting interviews a few months later. They asked the following four research questions:
- Do parents report an increased use of the following nine early literacy skill development activities after their participation in the MMG program? (see article for complete list of activities)
- Do caregivers report an increased number of library visits after participating in the MMG program?
- Do caregivers report an increased sense of confidence and competence in using storytime materials and activities after participating in the MMG program?
- Do caregivers use what they learned in the MMG program at home?
The results of the study showed no statistically significant change in the use of the nine early literacy skill activities. Because the study evaluated changes in the frequency of behaviour, the authors noted that many caregivers reported high usage of the skills on the pre-test, leaving little room for improvement. The study included results from the program when it was hosted in the library versus when it was hosted at an outreach site, and caregivers at the outreach sites showed higher rates of change in the nine early literacy skills. For the remaining research questions, there was an increase in caregivers’ visits to the library, their confidence, and their usage of activities at home. Yay!
Group Discussion
Our group started by discussing the nine early literacy skill development activities the researchers chose to ask about. How did they decide on these nine? They don’t give any information regarding the selection of these skills, and we noted that the questions focus heavily on talking and singing. None of the questions had to do with play, which we know is how children learn. Some of the skills were so similar – talking to a child vs. asking them a question – that we questioned the usefulness of the nine skills, too. We wish the researchers had given a little background on how they chose these skills and how they were connected to the research literature.
We also discussed the researchers’ choice to evaluate for a change in frequency of behaviour. Our criticism, which was acknowledged in the article, is that very little change will be observed if caregivers are already exhibiting the behaviours before the intervention (i.e. the Mainly Mother Goose program). Especially when surveying caregivers who already come to the library on a regular basis, it’s not surprising that there wasn’t a huge impact on the nine skills. Seeing the outreach-site results differ was good evidence to us that our community outreach efforts are much needed and have the biggest impact. We thought the other three research questions gave more valuable information because they showed a changing view of the library and how our programs can impact caregiver attitudes.
This study led us to think about why we do evaluation in the first place. We came up with a list of reasons to conduct research studies that evaluate our programs:
- to prove our impact on families
- to build credibility with our organizations and community members
- to push for more money and funding to increase our capacity
- to identify gaps in our programming
- to contribute to the body of research literature on evaluation
- to assess the learning outcomes of children and caregivers
We noted the difference, however, between outcome evaluations and satisfaction surveys. Gauging what your caregivers enjoy, what they’d change, and what they don’t like is different from an evaluation that measures learning or knowledge acquisition. Before planning large-scale evaluation projects, it’s important to consider why you are doing them, what you hope to measure, and what you will do with the data when you are done.
This article evaluated caregivers, but there has been recent research that evaluates children and storytime presenters. We talked about the VIEWS2 research study from the University of Washington, in which researchers observed storytimes to see whether children display specific early literacy behaviours. They also designed an intervention for the storytime presenters and showed that it helped them be more intentional about early literacy in storytime, which impacted the kids as well. What are the pros and cons of evaluating these three audiences: children, caregivers, and storytime presenters? How would the study change based on your audience? It all comes back to what you are hoping to gain from the evaluation. If you want to improve your skills as a storytime presenter, you wouldn’t necessarily ask for caregiver feedback; a peer or mentor observing your storytime could provide more meaningful feedback. It was very exciting to see the new research coming out of the VIEWS2 project, and even more exciting to see free training, called Supercharged Storytimes, being developed based on this research.
We ended the discussion by asking ourselves: as children’s librarians, are we researchers? Do we view ourselves that way? Were we taught to do research and to value research in our MLIS programs? There is so much data we collect through our children’s programs that has the potential to speak to library boards and donors about the significant impact we have in our communities. But much, if not all, of that data remains unanalyzed because we don’t have the capacity in our jobs to conduct research studies on top of all the other day-to-day priorities. It’s interesting to note that some libraries are partnering with universities, such as Calgary Public Library with Mount Royal University, to do this research together. Perhaps that is a model we can use in the future.
If you’re interested in starting a Library Services Journal Club in your area, please let me know and I’d be happy to help!
This is such an important discussion! I think one of the challenges associated with doing research on the impact of storytime is that we cannot isolate the impact of attendance. We can’t prove that attending storytime increased such and such skill, because there are so many other variables in a child’s life that we cannot account for. This means we have to be ultra precise in our language and avoid saying things like “attending storytime will help your child be a strong reader.” A more accurate statement would be “attending storytime will expose your child to the skills necessary to be a strong reader.” This is why I’ve struggled with PLA’s Project Outcome… It’s branded as “outcome evaluation,” but it only measures perception and satisfaction. My unpopular opinion is that it’s basically Survey Monkey for libraries 😀
I am a lover of unpopular opinions! How can bringing a critical eye to our practice be a bad thing? I haven’t looked at Project Outcome in depth, but we are using something similar at my library this fall. Like the article, it focuses on caregivers; however, it asks open-ended questions such as what did you learn, do you do anything differently at home, what resources are most useful to you, etc. I need to read more about Project Outcome to get a better understanding of how they define an “outcome” versus what you mentioned (perception/satisfaction). I love the example of precise language you used – so important!
The article did address the various variables that may have influenced the caregivers’ responses. One was that they didn’t put a timeframe on caregiver recall, so they have no way of knowing whether caregivers were thinking about that day, that week, that month, etc. And like you said, families may have attended other programs in the area, had relatives visiting, been sick, etc. For this study, caregivers had to attend a certain percentage of the programs in order to be surveyed, so the researchers did try to control for attendance to a degree. I like how the VIEWS2 research focused on observable behaviours during storytime, which led to an intervention for the storytime providers to increase those behaviours. At least the reason behind the evaluation was clear: to increase the intentionality of storytime presenters in order to facilitate early literacy behaviours during storytime. Obviously there are limitations to that, though – quiet kids who don’t participate orally or physically, the behaviours that occur before and after storytime, etc. And what none of the research seems to capture is the relationship building that happens between families and storytime presenters.
Thanks so much for adding your thoughts. I’ll let you know when our winter meeting is so you can fly out to join us 🙂
I didn’t attend this session, but I have read the article and I am following the VIEWS2 project closely too. Librarians should be participating in research – yes – but academics should be the ones actually conducting it. We need way more academics to take an interest in what public libraries do and to get in there. Libraries pose difficulties as research sites – getting through ethics approvals is tricky – but not impossible. It is much easier to conduct research in, say, a preschool or kindergarten, where you are dealing with the same people week after week. In a public library, you don’t know who is going to be there, and you’re supposed to ask permission before having people participate in research! These are the things I think about all the time, as I would like to study storytime myself at some point in the future (maybe for a postdoc). Anyway, all this to say: as practitioners we know we need this research, but we can’t conduct it ourselves. It is most definitely the responsibility of academic faculty, like Kathleen Campana and Maria Cahill, to name a few who are conducting this kind of research, which is great. By the way, Maria Cahill, Soohyung Joo, and Kathleen Campana have just published an article in the Journal of Librarianship and Information Science called “Analysis of language use in public library storytimes” – I am reading it right now, in fact!
Hear, hear! Sometimes I’m tempted to go back and get another degree just so I can do the research myself. I’m so glad, though, that there is a core group of academics taking a hard look at storytime and publishing their work. Thank you for the heads up about the new article – did you get access through the public library’s databases? I’m trying to track it down…