Wednesday, May 22, 2013

MA2013 - were you there?

Last week was the Museums Australia National Conference in Canberra. Were you there? What were your take-home experiences? What would you want to share with fellow EVRNN members who couldn't make it this year?

This year the EVR National Network hosted a workshop by Dr. Tiina Roppola, called "Designing for the Museum Visitor Experience" and based on Tiina's extensive research across a variety of Australian museums. Participants discussed Tiina's model of how visitors and exhibits relate to one another, and how that model applies to their own institutions.

There was also a general discussion on the role and value of MA's National Networks more broadly. As with the recent EVRNN member survey, other networks have recently taken the temperature of their membership, and the results confirm that Networks are still valued by their members, even if activity is limited to an informal get-together at conference time. In this vein, member representatives fed back to MA that network affiliation needs to be more visible at conference time - maybe we don't have to wear a red carnation, but perhaps some designation on our delegate badges would give us a better idea of shared interests?

It's not too late to add your voice to the conversation by filling out our member survey: http://www.surveymonkey.com/s/EVRNN - we'd still like to hear from you, and Carolyn (our President) will compile a fuller report of responses later in the year.

To get a flavour of the conference, here are a couple of links:

MA ACT's "storify" page https://storify.com/MA_ACT2013
Regan's initial conference notes in "storify" form  https://storify.com/interactivate/ma-2013-conference

Please add your own thoughts and comments below, particularly anything related to future activities of the EVR network.

Monday, May 13, 2013

Survey Responses – Benchmarks and Tips


NB: This first appeared as a blog post at reganforrest.com
I’ve now collected a grand total of 444 completed questionnaires for my PhD research (not including pilot samples) – which is not far off my target sample of 450-500. Just a few more to go! Based on my experiences, I thought I’d share some of the lessons I’ve learned along the way . . .
Paper or Tablet?
My survey was a self-complete questionnaire (as opposed to an interviewer-led survey) that visitors filled out while on the exhibition floor. During piloting I tried both paper surveys and an electronic version on an iPad, but ended up opting for the paper version, as I think the pros outweighed the cons for my purposes.
The big upside of tablet-based surveys is that there is no need for manual data entry as a separate step – survey programs like Qualtrics can export directly into an SPSS file for analysis. And yes, manually entering data from paper surveys into a statistics program is time-consuming, tedious and a potential source of error. The other advantage of a tablet-based survey (or any electronic survey, for that matter) is that you can set up rules that prompt people to answer questions they may have inadvertently skipped, automatically randomise the order of questions to control for ordering effects, and so on. So why did I go the other way?
First of all, time is a trade-off: with paper surveys, I could recruit multiple people to complete the survey simultaneously – all I needed was a few more clipboards and pencils and plenty of comfortable seating nearby. With the tablet, by contrast, I only had one device, which meant only one person could be completing my survey at a time. Once you take into account how many more paper surveys I could collect in a given period, I think I was still ahead even after doing the manual data entry. Plus, I found that entering the data manually was a useful first pass at analysing it, particularly during the piloting stages when I was looking for survey design flaws. Secondly, I think many visitors were more comfortable using the old-fashioned paper surveys. They could see at a glance how long the survey was and how much further they had to go, whereas this was less transparent on the iPad (even though I had a progress bar).
This doesn’t mean I would never use a tablet – I think they’d be particularly useful for interviewer-led surveys where you can only survey one participant at a time anyway, or large scale surveys with multiple interviewers and tablets in use.
Refining the recruitment “spiel”
People are understandably wary of enthusiastic-looking clipboard-bearers – after all, they're usually trying to sell you something or sign you up to something. In my early piloting I think my initial approach may have come across as too "sales-y", so I refined it so that the first thing I said was that I was a student. My gut feeling is that this immediately made people less defensive and more willing to listen to the rest of my "spiel" explaining the study and recruiting participants. Saying I was a student doing research made it clear up front that I was interested in what they had to say, not in sales or spamming.
Response, Refusal and Attrition Rates
Like any good researcher should, I kept a fieldwork journal while I was out doing my surveys. In this I documented everyone I approached, approximately what time I did so, whether they took a participant information sheet or refused, and if they refused, what reason (if any) they gave for doing so. During busy periods, recording all this got a bit chaotic so some pages of notes are more intelligible than others, but over a period of time I evolved a shorthand for noting the most important things. The journal was also a place to document general facts about the day (what the weather was like, whether there was a cruise ship in town that day, times when large numbers of school groups dominated the exhibition floor, etc.). Using this journal, I’ve been able to look at what I call my response, refusal and attrition rates.
  • Response rate: the proportion of visitors (%) I approached who eventually returned a survey
  • Refusal rate: the proportion of visitors (%) approached who refused my invitation to participate when I approached them
  • Attrition rate: this one is a little specific to my particular survey method and wouldn’t always be relevant. I wanted people to complete the survey after they had finished looking around the exhibition, but for practical reasons could not do a traditional “exit survey” method (since there’s only one of me, I couldn’t simultaneously cover all the exhibition exits). So, as an alternative, I approached visitors on the exhibition floor, invited them to participate and gave them a participant information sheet if they accepted my invitation. As part of the briefing I asked them to return to a designated point once they had finished looking around the exhibition, at which point I gave them the questionnaire to fill out*. Not everyone who initially accepted a participant information sheet came back to complete the survey. These people I class as the attrition rate.
So my results were as follows: I approached a total of 912 visitors, of whom 339 refused to participate, giving an average refusal rate of 36.8%. This leaves 573 who accepted a participant information sheet. Of these, 444 (77%) came back and completed a questionnaire, giving me an overall average response rate of (444/912) 49.4%. The attrition rate as a percentage of those who initially agreed to participate is therefore 23%, or 14% of the 912 people initially approached.
So is this good, bad or otherwise? Based on some data helpfully provided by Carolyn Meehan at Museum Victoria, I can say it’s probably at least average. Their average refusal rate is a bit under 50% – although it varies by type of survey, venue (Museum Victoria has three sites) and interviewer (some interviewers have a higher success rate than others).
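If you keep a similar fieldwork journal, the rate calculations above are easy to reproduce. Below is a minimal Python sketch (not from the original post) that computes pooled refusal, attrition and response rates from the three raw counts; note that pooled figures can differ slightly from "average" rates calculated session by session, which may explain small differences from the percentages quoted above.

```python
# Minimal sketch (not from the original study): pooled rates from fieldwork journal counts.
# The counts are the totals reported above; swap in your own.

approached = 912   # total visitors invited to participate
refused = 339      # declined at the point of invitation
returned = 444     # completed questionnaires handed back

accepted = approached - refused                     # took a participant information sheet

refusal_rate = refused / approached                 # refused outright
attrition_rate = (accepted - returned) / accepted   # accepted but never came back
response_rate = returned / approached               # completed surveys per person approached

print(f"Accepted a participant information sheet: {accepted}")
print(f"Refusal rate:   {refusal_rate:.1%}")
print(f"Attrition rate: {attrition_rate:.1%}")
print(f"Response rate:  {response_rate:.1%}")
```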
Reasons for Refusal
While not everyone gave a reason for not being willing to participate (and they were under no obligation to do so), many did, and often apologetically so. Across my sample as a whole, reasons for refusal were as follows:
  • Not enough time: 24.4%
  • Poor / no English: 19%
  • Child-related: 17%
  • Other / no reason given: 39%
Again, these refusal reasons are broadly comparable to those experienced by Museum Victoria, with the possible exception that my refusals included a much higher proportion of non-English speakers. It would appear that the South Australian Museum attracts a lot of international tourists or other non-English speakers, at least during the period I was doing surveys.
Improving the Response Rate
As noted above, subtly adjusting the way you approach and invite visitors to participate can have an impact on response rates. But there are some other approaches as well:
  • Keep the kids occupied: while parents with hyperactive toddlers are unlikely to participate under any circumstances, those with slightly older children can be encouraged if you can offer something to keep the kids occupied for 10 minutes or so. I had some storybooks and some crayons/paper which worked well – in some cases the children were still happily drawing after the parents had completed the survey and the parents were dragging the kids away!
  • Offer a large print version: it appears that plenty of people leave their reading glasses at home (or in the bag they’ve checked into the cloakroom). Offering a large print version gives these people the option to participate if they wish. Interestingly, however, some people claimed they couldn’t read even the large print version without their glasses. I wonder how they can see anything at all sans spectacles if this is the case . . . then again, perhaps this is a socially acceptable alibi used by people with poor literacy levels?
  • Comfortable seating: an obvious one. Offer somewhere comfortable to sit down and complete the questionnaire. I think some visitors appreciated the excuse to have a sit and a break! Depending on your venue, you could also lay out some sweets or glasses of water.
  • Participant incentives: because I was doing questionnaires on the exhibition floor, food or drink were not an option for me. But I did give everyone who returned a survey a voucher for a free hot drink at the Museum cafe. While I don’t think many (or any) did the survey just for the free coffee, it does send a signal that you value and appreciate your participants’ time.
*A potential issue with this approach is cuing bias – people may conceivably behave differently if they know they are going to fill out a questionnaire afterwards. I tried to mitigate this with my briefing, in which I asked visitors to “please continue to look around this exhibition as much or as little as you were going to anyway”, so that visitors did not feel pressure to visit the exhibition more diligently than they may have otherwise. Also, visitors did not actually see the questionnaire before they finished visiting the exhibition – if they asked what it was about, I said it was asking them “how you’d describe this exhibition environment and your experience in it”. In some cases I reassured visitors that it was definitely “not a quiz!”. This is not a perfect approach of course, and I can’t completely dismiss cuing bias as a factor, but any cuing bias would be a constant between exhibition spaces as I used comparable methods in each.

Sunday, January 6, 2013

Regional Museum Visitor Survey

EVRNN member Barrie Brennan from the Australian Country Music Hall of Fame has shared the visitor information survey they use. It's a one-page self-complete survey sheet handed out to visitors who visit the museum as individuals, in pairs or family groups, or as part of an organised tour.

It's modified from an Australian Bureau of Statistics format circulated 2 or 3 years ago, with two additional questions: one on the visitor's origin (local or overseas), which is of interest to local tourism bodies, and one offering an option for follow-up contact.

Barrie is happy for other EVRNN members, particularly those from small regional museums who may not be currently collecting any visitor data, to modify this template for their own uses.

Any comments or queries? Please add to the comments section of this blog post.



Thursday, October 4, 2012

Cheap & Cheerful Approach to Analysing Tracking Data

At the Museums Australia conference last week, I presented a paper with Jenny Parsons from the South Australian Museum about how the museum used volunteers to collect tracking data, and the method I used to analyse that data into a highly visual 'hot and cold spot' map of the gallery. The slides have been posted to Slideshare:



The abstract of the paper is as follows:


In 2010 the South Australian Museum Foundation undertook to update the Australian Aboriginal Cultures Gallery (AACG) of the Museum. Originally opened in March 2000, the gallery was in need of updating in terms of technology, signage and interpretive material.

Without the benefit of a multi-million-dollar budget, the project team had to be quite strategic about the edits and adaptations to be made. The team spent a year examining the strengths and weaknesses of the gallery and conducting strategic visitor evaluation.

As part of the evaluation, volunteers from the finance firm JBWere tracked over 150 visitors as they moved through the Australian Aboriginal Cultures Gallery (AACG), using a methodology that had been adapted from one used at the Detroit Institute of Arts. The employees undertook this as part of their corporate social responsibility efforts and so were unpaid. The museum felt that these analysts were ideal candidates for this evaluation as they were astute at monitoring and understanding data.

The purpose of the tracking was to find out how visitors moved through the gallery - which way they turned upon entry, which displays were the most visited, which sections were being missed, and how long visitors spent in the spaces overall.

Adapting methods from the visitor research literature, the team coded and quantified the recorded visitor movements using Microsoft Excel. These data were then re-presented in a colour-coded format on a plan of the AACG. This relatively simple approach created a highly visual and intuitive interpretation of the data, showing visitor movement patterns at a glance.

While the visitor tracking confirmed some of the exhibition team’s suspicions about how the space was being used by visitors, some of the findings also challenged assumptions and led to a revisiting of the way important orientating material was displayed in the redeveloped AACG.
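The coding and colour-mapping described in the abstract was done in Excel, but the underlying 'hot and cold spot' logic is simple enough to sketch in a few lines of code. The Python example below is purely illustrative (the zone names and tracking records are invented, and it is not the workbook used in the AACG analysis): it tallies stops per display zone and bins each zone into hot, warm or cold bands that could then be coloured on a gallery plan.

```python
# Illustrative sketch only: binning tracking observations into 'hot' and 'cold' zones.
# Zone names and tracking records are invented, not data from the AACG study.

from collections import Counter

# Each tracked visitor is recorded as the list of display zones where they stopped.
tracked_visitors = [
    ["entry", "boomerang_case", "theatre"],
    ["entry", "theatre"],
    ["entry", "boomerang_case", "basketry", "theatre"],
]

stops = Counter(zone for visit in tracked_visitors for zone in visit)
total_visitors = len(tracked_visitors)

def band(stop_count, n_visitors):
    """Classify a zone by the share of tracked visitors who stopped there."""
    share = stop_count / n_visitors
    if share >= 0.66:
        return "HOT"    # e.g. red on the gallery plan
    if share >= 0.33:
        return "WARM"   # e.g. orange
    return "COLD"       # e.g. blue

for zone, count in stops.most_common():
    print(f"{zone:15s} {count:3d} stops  {count / total_visitors:5.0%}  {band(count, total_visitors)}")
```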

Any questions about the method?  Feel free to drop me a line at enquiries [at] reganforrest.com. 

KISS program evaluation


Short and sweet program evaluation has five measures.
  • First time vs repeat use
  • Rating - 5-point scale and/or Net Promoter Score
  • Rating on KPI outcomes
  • Open comments
  • Demographics - age, sex, location (local or out of town)
I presented a snapshot at the Museums Australia 2012 conference about short and simple program evaluation. Here is the presentation.

To see the notes with each slide, view it on Slideshare by clicking the Slideshare button to the left of the arrow buttons. Then you'll see two tabs beneath the slideshow - one for comments and one for notes for each slide.  Click the 'notes' tab to see them.



Once you have collected the data, you'll need to tally it up and write a short report.
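As a rough illustration of that tallying step (not taken from Gillian's presentation), the Python sketch below assumes each returned form has been typed up as one row covering the five measures, then summarises them for a short report. The field names and sample responses are invented; the Net Promoter Score is calculated in the usual way, as the percentage of promoters (scoring 9-10) minus the percentage of detractors (scoring 0-6) on a 0-10 likelihood-to-recommend question.

```python
# Rough illustration only: tallying the five KISS measures from typed-up forms.
# Field names and sample responses are invented; 'recommend' is a 0-10 likelihood-to-recommend score.

from statistics import mean

responses = [
    {"first_time": True,  "rating": 4, "recommend": 9,  "kpi_outcome": 5, "comment": "Loved it", "age": 34, "local": True},
    {"first_time": False, "rating": 5, "recommend": 10, "kpi_outcome": 4, "comment": "",         "age": 61, "local": False},
    {"first_time": True,  "rating": 3, "recommend": 6,  "kpi_outcome": 3, "comment": "Too busy", "age": 28, "local": True},
]

n = len(responses)
promoters  = sum(r["recommend"] >= 9 for r in responses)
detractors = sum(r["recommend"] <= 6 for r in responses)
nps = 100 * (promoters - detractors) / n   # Net Promoter Score: % promoters minus % detractors

print(f"Responses: {n}")
print(f"First-time visitors: {sum(r['first_time'] for r in responses) / n:.0%}")
print(f"Average rating (1-5): {mean(r['rating'] for r in responses):.1f}")
print(f"Average KPI outcome rating (1-5): {mean(r['kpi_outcome'] for r in responses):.1f}")
print(f"Net Promoter Score: {nps:+.0f}")
print(f"Local visitors: {sum(r['local'] for r in responses) / n:.0%}")
print("Open comments:", [r["comment"] for r in responses if r["comment"]])
```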

You can keep track of visitor experiences by implementing this really simple evaluation tool for all your public programs.

Author: Gillian Savage.

Friday, June 8, 2012

Evaluation: it's a culture, not a report

The UK Museums Journal website has recently published the opinion piece Why evaluation doesn't measure up by Christian Heath and Maurice Davies. Heath and Davies are currently conducting a meta-analysis of evaluation in the UK.

Is this the fate of many carefully prepared evaluation reports?

The piece posits that: "[n]o one seems to have done the sums, but UK museums probably spend millions on evaluation each year. Given that, it’s disappointing how little impact evaluation appears to have, even within the institution that commissioned it."

If this is the case, I'd argue it's because evaluation is being done as part of reporting requirements and is being ringfenced as such. Essentially, the evaluation report has been prepared to tick somebody else's boxes - a funder usually - and the opportunity to use it to reflect upon and learn from experience is lost. Instead, it gets quietly filed with all the other reports, never to be seen again.

So even when evaluation is being conducted (something that cannot be taken as a given in the first place), there are structural barriers that prevent evaluation findings filtering through the institution's operations. One of these is that exhibition and program teams are brought together with the opening date in mind, and often disperse once the ribbon is cut (as a former exhibition design consultant, I found their point about external consultants rarely seeing summative reports resonated with my experience). Also, if the evaluation report is produced for the funder and not the institution, there is a strong tendency to promote 'success' and gloss over anything that didn't quite go to plan. After all, we've got the next grant round to think of and we want to present ourselves in the best possible light, right?

In short, Heath and Davies describe a situation where evaluation has become all about producing the report so we can call the job done and finish off our grant acquittal forms. And the report is all about marching to someone else's tune. We may be doing evaluation, but is it part of our culture as an organisation?

It might even be the case that funder-instigated evaluation is having a perverse effect on promoting an evaluation culture. After all, it is set up to answer someone else's questions, not our own. As a result, findings may not be as useful for improving future practice as they could be. So evaluation after evaluation goes nowhere, making people wonder why we're bothering at all. Evaluation becomes a chore, not a key aspect of what we do.





Friday, April 27, 2012

DIY Curator



Most social media platforms make it easy for users to select images and present them to the world. Users can collect their images and annotate them, then make selections, curate an album and give it an overarching story.

Flickr, Facebook, Tumblr, blogs, and any number of variants allow everyone to be a curator.

The newest kid on the block is Pinterest, a content-sharing service that allows members to "pin" images, videos and other objects to their pinboard. With 2.2 million daily active users and 12 million monthly active users, Pinterest is now the third most-used social media platform in the United States.

Most major museums have adopted social media by using blogs, Facebook, Flickr, etc. How soon before they start colonising Pinterest?

The basic contract between these social media sites and their users is that users need not be passive consumers; they can be the active party, going beyond just looking or reading to 'like', share, recommend, comment, re-post, and add their own content.

Pinterest is popular because it feeds our appetite to share the things we’re passionate about. It has all the buzz of the latest thing. Forbes journalist Scott Goodson wrote, somewhat breathlessly,
Pinterest drives more traffic than three of the biggest social networks combined. And you tap into a digital world of curation, sharing and visual inspiration. It’s the future and it’s happening right now.

Museums that allow visitors to be active and respond socially and creatively in a variety of ways will be loved as much as these gorgeous, functional and popular social media platforms.