Joint statement from Adrian Hadland, David Campbell, and Paul Lambert:
As the authors of The State of News Photography report – published by the Reuters Institute for the Study of Journalism at the University of Oxford, in association with the World Press Photo Foundation – we would like to respond to criticisms made by Ken Kobre in his short LinkedIn post dated 10 October.
Kobre is a significant voice in the photojournalism community, but our respect for his right to offer critical commentary cannot obscure the fact that the central point of his post is seriously flawed.
The survey research methodology behind the report has been openly and transparently stated, both in the report itself (see p. 13, and the detailed discussion in Appendix 1, pp. 71-73) and in the press statements announcing the report’s release.
The report is based on an online survey of the 5,158 professional photographers who entered the 2015 World Press Photo Contest. A total of 1,556 photographers from more than 100 countries completed the 63-question survey. While the World Press Photo Contest is run by the World Press Photo Foundation, which is based in the Netherlands, the Contest has a global reach and survey respondents came from around the world.
As researchers we are the first to acknowledge that “all surveys have aspects that weaken their purview and affect the degree to which inferences can be drawn or conclusions taken.” However, because of the robust methodology and analytical techniques employed, we are confident that “with certain caveats, the survey results can reasonably be expected to be representative of the population of professional photojournalists” (The State of News Photography, p. 71).
The serious flaw in Kobre’s criticism is that it calls attention to the raw numbers of respondents from certain countries, especially the US, claiming these are too low to be meaningful, without considering the statistical methods through which those numbers are analysed. As our statistical expert, Paul Lambert, writes below, “although it seems counter-intuitive, there is no important relationship between the total size of the underlying population, and the size of a sample that would provide representative data about it.”
We cannot definitively judge whether all media coverage of our study has been fair. The journalists that Kobre deems to have ‘failed reporting 101’ can answer for themselves. However, it seems to us that the TIME Lightbox article mentioned at the beginning of Kobre’s post offers an accurate account of the survey behind our study.
What we can unquestionably confirm is that our study provides a meaningful, evidence-based contribution to our knowledge in the area. Indeed, we believe it to be the first attempt to provide some global data on the lives and livelihoods of professional photojournalists, within the parameters we have clearly stated. For too long our understanding of the challenges facing photojournalism has been anecdotally based, and we wanted to offer a corrective to that weak foundation. Rather than seeking to delegitimise The State of News Photography through untrained opinions about survey methodology, it would be better to hear critics propose concrete ways to enhance our understanding of the photojournalism community; other studies sampling other populations to provide an even richer account would be very welcome.
Detailed methodological response from Paul Lambert (PhD in Applied Statistics, head of the research group on Social Surveys and Social Statistics, and Professor of Sociology, Stirling University, UK):
It is important to differentiate between the size of the sample and whether the sample is ‘biased’. Survey researchers would always prefer, ideally, as large a sample as possible, but the tradition of ‘power analysis’ in statistical research demonstrates that fairly small sample sizes can still do a very good job of providing accurate information about a much larger population. (In a technical sense, they can be analysed in such a way that it is relatively unlikely that either ‘type I’ or ‘type II’ errors will be made – that is, neither ‘false positive’ results, which wrongly presume a relationship that does not exist in the underlying population, nor ‘false negative’ results, which wrongly fail to identify a relationship that does exist in it.) Additionally, although it seems counter-intuitive, there is no important relationship between the total size of the underlying population and the size of a sample that would provide representative data about it.
A well-known example of inference from moderately small sample surveys is professional opinion polling, which typically uses samples of between 500 and 2,000 respondents to make inferences about attitudes in national populations. Our sample size of over 1,500 is certainly large enough that inference from it to populations of photographers (or to important groups within that population) is plausible.
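The point about sample size and population size can be illustrated with a short sketch (in Python). It assumes simple random sampling, which any real survey only approximates: the standard formula for the 95% margin of error of an estimated proportion depends on the sample size n, but not on the size of the population being sampled.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion from a simple random sample.

    The worst case p = 0.5 is assumed; note that the population size does
    not appear in the formula at all (for populations much larger than n).
    """
    return z * math.sqrt(p * (1 - p) / n)

# Typical opinion-poll sample sizes, plus the survey's own n of 1,556:
for n in (500, 1000, 1556, 2000):
    print(f"n = {n:4d}  ->  +/-{100 * margin_of_error(n):.1f} percentage points")
```

With n around 1,500 the worst-case margin of error is roughly ±2.5 percentage points – the same whether the population of interest numbers ten thousand or ten million.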
Sample bias is a different issue. This refers primarily to whether non-response patterns are such that they skew any results from the sample away from the patterns for the population it is hoped will be represented. All voluntary surveys exhibit some non-response, although if the reasons for non-response are statistically random then this will not in fact introduce bias into the data. Accordingly, well-designed sample surveys include features that minimise the chances of non-random non-response (steps we took during the survey design phase).
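The difference between random and non-random non-response can be seen in a small simulation. The figures below are entirely invented for illustration – they are not data from the survey – and the income distribution and response probabilities are assumptions:

```python
import random

random.seed(1)

# Invented population of 5,158 contest entrants, each with an income figure.
population = [random.lognormvariate(10, 0.6) for _ in range(5158)]

def mean(values):
    return sum(values) / len(values)

true_mean = mean(population)

# Random non-response: each entrant responds with probability 0.3,
# independently of their income. This shrinks the sample but adds no bias.
random_sample = [x for x in population if random.random() < 0.3]

# Non-random non-response: higher earners are five times more likely to
# respond. This is the pattern that genuinely skews results.
biased_sample = [x for x in population
                 if random.random() < (0.5 if x > true_mean else 0.1)]

print(f"true mean:                 {true_mean:8.0f}")
print(f"random non-response mean:  {mean(random_sample):8.0f}")  # close to the truth
print(f"biased non-response mean:  {mean(biased_sample):8.0f}")  # skewed upward
```

The randomly thinned sample recovers the population mean to within sampling error, while the income-dependent non-response inflates it substantially – which is why survey design concentrates on making non-response as close to random as possible.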
We do believe nevertheless that our survey will have some bias in its response patterns, as again is normal with professional social survey research projects. Across the social science sector it is generally accepted that useful information can still be obtained from samples with some level of bias, so long as researchers (a) report explicitly on the methods that they used, and (b) pursue analytical methods and report upon their results in ways that should minimise the impact of bias. As is standard practice, we tried to do both in the report.
For instance, we explained the two layers of potential response bias in our sample: first, whether questionnaire respondents were a biased sample of World Press Photo Contest entrants, which we suspect is a minimal problem; second, whether data about the World Press Photo Contest entrants could reasonably be treated as evidence about the wider population of photojournalists. We believe that the Contest-based sample is a reasonable tool for analysis of photographers in general, and the concordance of many of the patterns from the survey with evidence from other sources offers potential support for this view. Additionally, we generally used methods of analysis that emphasised patterns of relationships, such as correlations, within the data. Generally speaking, a correlation or association pattern can be expected to be more robust to response bias than a simple average or proportion.
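As an illustration of why association patterns tend to survive response bias better than simple averages do, consider a simulated population – all numbers invented, not from the survey – in which income rises with experience and more experienced photographers are more likely to respond:

```python
import random

random.seed(2)

# Invented population: years of experience (x) and income (y), linearly related.
xs = [random.uniform(0, 30) for _ in range(5000)]
ys = [2000 * x + random.gauss(0, 5000) for x in xs]
full = list(zip(xs, ys))

# Response probability rises with experience, so the sample over-represents
# experienced (higher-earning) photographers.
biased = [(x, y) for x, y in full if random.random() < x / 30]

def mean(values):
    return sum(values) / len(values)

def slope(pairs):
    """Ordinary least squares slope of y on x."""
    mx = mean([x for x, _ in pairs])
    my = mean([y for _, y in pairs])
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    return sxy / sxx

# The simple average is badly skewed upward by non-random response...
print(f"mean income, population:    {mean([y for _, y in full]):8.0f}")
print(f"mean income, biased sample: {mean([y for _, y in biased]):8.0f}")
# ...but the association between experience and income is barely affected.
print(f"slope, population:    {slope(full):7.1f}")
print(f"slope, biased sample: {slope(biased):7.1f}")
```

The biased sample badly overstates average income, yet the estimated relationship between experience and income stays close to the population value – which is why our report leans on patterns of association rather than raw point estimates wherever possible.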
There is a great deal of methodological research into the relative merits of social survey evidence. Most qualified social scientists accept that voluntary questionnaire surveys are an important source of research evidence and are aware of the possible limitations of inferences from survey datasets. We do not claim that our research provides irrefutable evidence, but we do believe that its quality is sufficient to provide a meaningful contribution to the evidence base.