Symposium on Arts Participation in Washington DC

Geoffrey Crossick, Director, AHRC Cultural Value Project

Last year, at an early stage of the Cultural Value Project, I spent an intensive week in Washington DC, Philadelphia and New York meeting people in the cultural, academic and policy areas to share thinking about some of the issues we were hoping to address in our work. This included an invigorating half-day talking to Sunil Iyengar, Director of the Office of Research & Analysis at the National Endowment for the Arts, and some of his colleagues. As we parted, we agreed that we needed to find a way of working together.

The first outcome was a two-day symposium in Washington DC that Sunil and I have been organising over the last year and which took place at the start of June 2014. What we had initially thought of as a symposium on arts participation surveys developed into something much more exciting as we defined the problematics that we wanted to address and identified the speakers and other participants. We really wanted to challenge many of the underlying assumptions bound up in conventional national arts participation surveys. The resulting symposium carried the title Measuring cultural engagement amid confounding variables: a reality check.

There were over 60 people at the event, hosted in the fine spaces of the Gallup Building in downtown Washington, drawn from a wide variety of backgrounds (arts funders, cultural policy makers, academic researchers, cultural consultants and others) and from not only the US and UK but also Canada, Australia, Denmark and the Netherlands. The underlying question was a straightforward one: the standard surveys of participation – of which the DCMS/Arts Council England’s Taking Part is just one example – have become a necessary part of the evidence base for those seeking to make the case for public funding of the arts, but how far are they fit for purpose in the changing world of early-21st-century cultural participation and data availability? Is the current approach predicated on unspoken assumptions and expectations, does it miss the complexities of what participation is today, and are big national surveys appropriate to a data universe very different from the one that existed when they were set up?

We’ll each have taken away our own messages from the very stimulating discussions and, in addition to forthcoming podcasts on the NEA’s website, a full report will be issued later in the year. What are the messages that I took away? The first is about data. We got very excited when Bob Groves, the former Director of the US Census Bureau, argued in the challenging plenary lecture that opened the symposium that a plethora of organic data – drawn from Google searches, scraped websites, Twitter, retail scanning, credit cards and much more, and recording actual behaviour rather than what people say they do – would sweep away the relevance of infrequent survey-based censuses and sample surveys. Subsequent contributions pinned this more precisely to the cultural world, where evolving digital modes of participation and interaction could provide the rich material we might need. It was exciting stuff, but we slowly pulled back from writing off the traditional survey because – even leaving aside the serious ethical and political considerations that might temper what we did, and which strangely did not surface in our discussions – the raw character of organic data meant that we might still need the structure of enquiry that emerged from traditional surveys, as well as refined methodologies, before we could make sense of it. This was not the time to leap too quickly into this particular unknown.

Second, the interesting presentations we heard on what we’re in the UK calling ‘everyday participation’ – starting with what people do rather than with the established categories of cultural engagement – provoked a good deal of thought. Much debate on arts participation is based on a deficit model: which people don’t participate, and are the excluded or the arts organisations at fault? Most probably, given that we’re talking about government criticisms of the arts and of the poor, both are often judged to be at fault. If we look at the wide variety of everyday cultural activities that are not captured by surveys but which shape people’s lives, we might correct that deficit vision. But is there a danger that by doing so we’re somehow talking ourselves out of social inequality? Work on everyday participation is both interesting and important, but might it lead us to ignore the inequalities of provision and of opportunity that underpin the arts in deeply unequal societies?

Third, why are we interested in arts participation surveys, and are they relevant to the arts and cultural sector? Arts organisations appear to care about them because their funders do. What most current surveys, whether national or local, do not provide is much help for arts organisations and practitioners who are genuinely interested in their audiences and the experiences that they have. Can audience and participation surveys be made more relevant, telling organisations more about why their audiences come and why those who are absent stay away, and more about experiences in ways that go beyond whether people enjoyed themselves (are you meant to enjoy every cultural experience, in any case)? Does that mean more surveys based on locality, organisation or event? There was much sympathy for this approach, but also an awareness that the big survey mapped the environment in which organisations operated and also helped them to refine their business models in support of financial sustainability. It was another message warning against excessively neat dichotomies.

Fourth was the unspoken disjuncture between the imperatives of policy making on the one hand and academic research on the other, and a sense that that disjuncture might be more pronounced in the UK than in the US, where academic researchers often seemed closer to policy makers and to funders (the majority of the latter being foundations rather than government). It is not surprising that people have different objectives, nor that these carry implications for methods, for conceptual frameworks and for overall analysis. If the two communities don’t interact then it is both wasteful and unproductive, but it can be equally wasteful and unproductive if they engage without a clear understanding of their different agendas. Neither should want to see high-level surveys cast aside, even if they need enriching and supplementing with new kinds of data and new kinds of question. If many of us believe that academic research should be the underpinning for policy interventions, then we surely need to be aware of these conflicting imperatives rather than wishing them away.

My fifth and final message concerned failure. To be more precise, if one of the main uses for such surveys is to meet the requirements of funders, then is there a danger that we’ll be undermining the very risk-taking, and thus capacity to fail, that is an essential part of any successful arts practice and arts environment? There is evidence that the press, public and funders pick up on those art forms or organisations that appear less strong in a particular survey rather than those that are flourishing. And if one art form or organisation is doing less well in terms of participation and audiences, then it will be driven to succeed in future in ways that might inhibit risk-taking and experimentation. Participation and audience surveys that are used for accountability make compliance the driver, and that can threaten the innovation that makes the arts so important.

These were the five big messages that I took away with me from this engaged discussion, but there were others. As an urban historian I was delighted to see the insistence, in several of the presentations, on place – real physical locations – as something that had not been swept away in a digital world. And I also concluded that there is a great danger in believing that the digital space constitutes the cultural ecology, when it is in reality no more than one (and a relatively new) part of that complex ecology. Both of these points were realistic and encouraging, which I think was part of my conclusion from the symposium as a whole: it was realistic and encouraging at times, but also visionary and imaginative at others.

The comments I received during and after the event suggested that others felt as I did: that by bringing together people from different backgrounds and approaches, by allowing often challenging short presentations to be followed by long and engaged discussion, by ensuring that the programme was not prosaic, and by embracing different national experiences (not least contrasting the North American and the European), we’d managed to organise a lively and productive event from which more work should flow. The involvement of the Cultural Value Project did give it a distinct flavour, and Patrycja Kaszynska and I were encouraged by the way people seemed to recognise that. One subsequent US blog commented favourably on the fact that things were not muddied by quantitative versus qualitative debates – and the writer put that down to the UK influence. As we’ve been pressing that point since the Cultural Value Project began, it was good to see it recognised!

Anouk Lang: Developing methods for analysing and evaluating literary engagement in digital contexts

This project, Developing methods for analysing and evaluating literary engagement in digital contexts, starts from the observation that the rapid rise in the amount of user-generated content produced on the internet, especially on social media sites, offers an extraordinary opportunity to study human interaction in a format that lends itself easily to multiple kinds of computational analysis. From the perspective of scholars of reading and reception, this growing body of data is particularly exciting, given that it is not just time-consuming to interview individual readers, carry out surveys and conduct focus groups, but also problematic to draw conclusions from artificial contexts where it is difficult to know how far the answers being given have been influenced by the unequal relationship between reader and researcher. Although data derived from the internet has plenty of limitations of its own—the fact that users of a particular site or service may not be a very representative sample of the general population, for instance—it is still the case that born-digital responses to texts, other readers, and literary events offer researchers the tantalising possibility of grasping aspects of reading that were previously inaccessible. Not only is there far more material available than in the past, but digital reception data often involves readers voluntarily recording their thoughts in the context of a community to which they feel a sense of belonging, rather than reporting them to a stranger.

The challenge for researchers who work on reading and who do not have a great deal of technical background knowledge is twofold. First, how can they access these rich bodies of data, and second, how can they analyse digital materials alongside their established methods of working with non-digital reception data? A scholar with experience in interpreting marginalia – comments written in the margins of books – is well placed to bring her skills to bear on digital forms of annotation, for instance, but might not know how to get hold of this data or how to process it when the sheer amount of material available exceeds the capacity of a single human reader. Other disciplines have addressed these issues—corpus linguists have established methods of constructing and analysing large textual corpora, for instance, and computer scientists have developed techniques such as sentiment analysis, which can process large numbers of statements to determine whether they are broadly positive or negative, while various other approaches are being taken by scholars across the digital humanities—but for scholars of reading without the technical background to scrape data from websites or set up a Twitter archive, there are significant barriers to engaging with this data.
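
As a rough illustration of the kind of sentiment analysis mentioned above, the short Python sketch below scores reader comments against a small hand-made word list. The lexicon and the example comments are invented for illustration only; a real study would use an established sentiment lexicon or library rather than this toy approach.

```python
# Minimal, illustrative sentiment scoring: count positive and negative
# words from a toy lexicon. A real study would use an established
# sentiment lexicon or library rather than this hand-made list.
POSITIVE = {"love", "loved", "beautiful", "gripping", "wonderful", "moving"}
NEGATIVE = {"boring", "hated", "tedious", "disappointing", "overrated"}

def sentiment_score(comment: str) -> int:
    """Return (number of positive words - number of negative words)."""
    words = comment.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

# Invented examples standing in for reader posts.
comments = [
    "I loved this novel, a gripping and moving story",
    "Honestly found it tedious and overrated",
]
for c in comments:
    score = sentiment_score(c)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{label:8}  {c}")
```

Even a crude scorer like this makes the point that broad positive or negative tendencies across large numbers of comments can be summarised automatically, which is what more sophisticated sentiment analysis tools do at scale.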

The aim of this project is to lower these barriers by reporting on three kinds of approach to digital reception data that are within the grasp of reception researchers without specialist digital humanities training. First, it examines the thematic content of the textual data that individuals generate when they engage in online discussions about the value of books or literary activities. Second, it investigates what can be learnt from the chronological information attached to these discussions, for example the timestamps on social network posts or tweets. Third, it considers the role played by place in online conversations about reading, using digital mapping tools to visualise the geographic information attached to social media posts. The project will produce a report setting out what kinds of information can be learnt about the cultural value of reading in the digital age from these three angles, and will supply guides to a number of digital tools that can be used to work with these three kinds of data.
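
To make the second of these angles more tangible, here is a minimal Python sketch of what working with chronological information might involve: it counts posts per day from timestamped records, so that a spike in discussion around, say, a prize announcement becomes visible. The records and their ISO-format timestamps are invented for illustration; real Twitter or LibraryThing data would need its own collection and parsing steps.

```python
from collections import Counter
from datetime import datetime

# Invented records standing in for timestamped posts (ISO 8601 strings).
posts = [
    {"text": "Reading the shortlist before the announcement", "created_at": "2014-10-13T09:12:00"},
    {"text": "It won! So pleased for the author", "created_at": "2014-10-14T20:05:00"},
    {"text": "Picked up the winner this morning", "created_at": "2014-10-15T08:40:00"},
    {"text": "Halfway through and completely absorbed", "created_at": "2014-10-15T22:17:00"},
]

# Count posts per calendar day to show how discussion volume shifts
# around an event such as a prize announcement.
per_day = Counter(datetime.fromisoformat(p["created_at"]).date() for p in posts)
for day, count in sorted(per_day.items()):
    print(day, count)
```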

The two types of social media on which the project centres are the micro-blogging service Twitter and the literary social network LibraryThing. Because the focus of the project is the value that reading and book-related activities bring to individuals, I have chosen books and authors that have won or been shortlisted for high-profile prizes such as the Nobel Prize and the Booker Prize, and that have featured in literary competitions with considerable cultural cachet. Using timestamped data from the Twitter API, for instance, will allow me to examine such things as how the content of discussions about a shortlisted book changes in the light of prize announcements, or how the progress of a literary competition might influence the way LibraryThing users position themselves in relation to a particular book as they interact with other readers on the site. Geography, too, can be considered: as people across a country or around the world take to Twitter to express their opinions about an author who has just won a prize or a competition, what kinds of patterns can be discerned from the spatial distribution of tweets? Previously, it was difficult for scholars of reading to access the when and where of reception data with such precision, and so—especially in light of the large amount of material that is now available online about readers’ preferences and responses to books—it seems an opportune moment to reflect on the methodological opportunities and limitations of this kind of digital work on the cultural value of reading.
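
As a sketch of the geographic angle, the Python snippet below groups geotagged posts into coarse latitude/longitude grid cells so that clusters of responses become visible; the aggregated counts could then be handed to a mapping tool for visualisation. It assumes posts have already been collected and carry coordinates given as [longitude, latitude]; the example data is invented, and in practice only a small minority of tweets are geotagged at all.

```python
from collections import Counter

# Invented geotagged posts; real data would come from a collected archive,
# and only a small fraction of tweets carry coordinates.
tweets = [
    {"text": "What a deserving winner", "coordinates": [-0.12, 51.51]},   # London
    {"text": "Thrilled by the result", "coordinates": [-3.19, 55.95]},    # Edinburgh
    {"text": "Finally read the winner", "coordinates": [-79.38, 43.65]},  # Toronto
    {"text": "Not convinced, myself", "coordinates": [-0.14, 51.50]},     # London
]

def grid_cell(lon: float, lat: float, size: float = 1.0) -> tuple:
    """Snap a longitude/latitude pair to a coarse grid cell of the given size in degrees."""
    return (lon // size * size, lat // size * size)

# Count posts per grid cell; denser cells indicate clusters of responses.
counts = Counter(grid_cell(*t["coordinates"]) for t in tweets)
for cell, count in counts.most_common():
    print(cell, count)
```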