Statement on survey methodology in The State of News Photography study


Joint statement from Adrian Hadland, David Campbell, and Paul Lambert:

As the authors of The State of News Photography report – published by the Reuters Institute for the Study of Journalism at the University of Oxford, in association with the World Press Photo Foundation – we would like to respond to criticisms made by Ken Kobre in his short LinkedIn post dated 10 October.

Kobre is a significant voice in the photojournalism community, but our respect for his right to offer critical commentary cannot obscure the fact that the central point of his post is seriously flawed.

The survey research methodology behind the report has been openly and transparently stated, both in the report itself [see page 13, and the detailed discussion in Appendix 1, pp. 71-73] and in the press statements announcing the report’s release.

The report is based on an online survey of the 5,158 professional photographers who entered the 2015 World Press Photo Contest. A total of 1,556 photographers from more than 100 countries completed the 63-question survey. While the World Press Photo Contest is run by the World Press Photo Foundation, which is based in the Netherlands, the Contest has a global reach and survey respondents came from around the world.

As researchers we are the first to acknowledge that “all surveys have aspects that weaken their purview and affect the degree to which inferences can be drawn or conclusions taken.” However, because of the robust methodology and analytical techniques employed, we are confident that “with certain caveats, the survey results can reasonably be expected to be representative of the population of professional photojournalists (The State of News Photography, p. 71).”

The serious flaw in Kobre’s criticism comes from the way it calls attention to the raw numbers of respondents from certain countries, especially the US, claiming these are too low to be meaningful, without paying attention to the statistical methods through which those numbers are analysed. As our statistical expert, Paul Lambert, writes below, “although it seems counter-intuitive, there is no important relationship between the total size of the underlying population, and the size of a sample that would provide representative data about it.”

We cannot definitively judge whether all media coverage on our study has been fair. The journalists that Kobre deems to have ‘failed reporting 101’ can answer for themselves. However, it seems to us the TIME Lightbox article mentioned at the beginning of Kobre’s post offers an accurate statement of the survey behind our study.

What we can unquestionably confirm is that our study provides a meaningful, evidence-based contribution to our knowledge in the area. Indeed, we believe it to be the first attempt to provide some global data on the lives and livelihoods of professional photojournalists, within the parameters we have clearly stated. For too long our understanding of the challenges facing photojournalism has been anecdotally based, and we wanted to offer a corrective to that weak foundation. Rather than seeking to delegitimise The State of News Photography through untrained opinions about survey methodology, it would be good to hear critics propose concrete ways to enhance our understanding of the photojournalism community, because other studies that sampled other populations to provide an even richer account would be very welcome.


Detailed methodological response from Paul Lambert (Ph.D in Applied Statistics, head of the research group on Social Surveys and Social Statistics, and Professor of Sociology, Stirling University, UK):

It is important to differentiate between the size of the sample and whether the sample is ‘biased’.  Sample researchers would always prefer, ideally, as large a sample as possible, but the tradition of ‘power analysis’ in statistical research demonstrates that fairly small sample sizes can still do a very good job of providing accurate information about a much larger population. (In a technical sense, they can be analysed in such a way that it is relatively unlikely that either ‘type I’ or ‘type II’ errors will be made – that is, neither ‘false positive’ results that wrongly presume a relationship that does not exist in the underlying population; nor ‘false negative’ results that wrongly fail to identify a relationship that does exist in the underlying population). Additionally, although it seems counter-intuitive, there is no important relationship between the total size of the underlying population, and the size of a sample that would provide representative data about it. 

A well-known example of inference from moderately small sample surveys is when professional opinion polls use samples of typically between 500 and 2,000 respondents to make inferences about attitudes amongst national-level populations. Our sample size of over 1,500 is certainly of a magnitude at which inference from it to populations of photographers (or to important groups within that population) is plausible.
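The point about sample size versus population size can be illustrated with the standard margin-of-error formula for a proportion, including the finite population correction. The sketch below is illustrative only: it assumes simple random sampling and the worst-case proportion p = 0.5, and the `margin_of_error` helper is our own naming, not anything from the report. It shows that, at a sample size of roughly 1,500, the assumed population size makes almost no difference to precision:

```python
import math

def margin_of_error(n, N=None, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from a
    simple random sample of size n. If a population size N is supplied,
    the finite population correction (FPC) is applied."""
    se = math.sqrt(p * (1 - p) / n)
    if N is not None:
        # The FPC shrinks the error when the sample is a large share of N,
        # and approaches 1 (no effect) as N grows.
        se *= math.sqrt((N - n) / (N - 1))
    return z * se

n = 1556  # survey respondents
# The margin of error barely moves as the assumed population grows:
for N in (5_158, 50_000, 5_000_000):
    print(f"N = {N:>9,}: +/- {margin_of_error(n, N):.3f}")
print(f"N = infinite : +/- {margin_of_error(n):.3f}")
```

Under these assumptions the margin of error stays between roughly ±2 and ±2.5 percentage points whether the population is a few thousand or several million, which is why national opinion polls of this size are routinely treated as informative.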

Sample bias is a different issue. This refers primarily to whether non-response patterns are such that they skew any results from the sample away from the patterns for the population which it is hoped will be represented. All voluntary surveys exhibit some non-response, although if the reasons for non-response are statistically random then this will not in fact introduce bias in the data. Accordingly, sample surveys are designed with features that minimise the chances of non-random non-response (steps which we took during the survey design phase).

We do believe nevertheless that our survey will have some bias in its response patterns, as again is normal with professional social survey research projects. Across the social science sector it is generally accepted that useful information can still be obtained from samples with some level of bias, so long as researchers (a) report explicitly on the methods that they used, and (b) pursue analytical methods and report upon their results in ways that should minimise the impact of bias. As is standard practice, we tried to do both in the report.

For instance, we explained the two layers of potential response bias in our sample: first, whether or not questionnaire respondents were a biased sample of World Press Photo Contest entrants, which we suspect is a minimal problem; second, whether or not data about the World Press Photo Contest entrants could reasonably be treated as evidence about the wider population of photojournalists. We believe that the World Press Photo Contest-based sample is a reasonable tool for analysis about photographers in general. The concordance of many of the patterns from the survey with evidence from other sources is potential support for this perspective. Additionally, we generally used methods of analysis that emphasised patterns of relationships such as correlations within the data. Generally speaking, a correlation or association pattern can be expected to be more robust to any bias.

There is a great deal of methodological research into the relative merits of social survey evidence. Most qualified social scientists accept that voluntary questionnaire surveys are an important source of research evidence and are aware of the possible limitations of inferences from survey datasets. We do not claim that our research provides irrefutable evidence, but we do believe that its quality is sufficient to provide a meaningful contribution to the evidence base.

5 Responses to “Statement on survey methodology in The State of News Photography study”

  1. Ken Kobre

    I do not question the methodology or the statistics of this study. The question is what conclusions can be drawn from these numbers. This is a survey of a group of individuals who have entered one year’s contest… not a random sample of all photojournalists worldwide. The numbers can only tell us about the gender, income, etcetera, of this year’s World Press contest entrants. The title, “The State of News Photography,” does not reflect the data. The data applies only to one contest’s entrants and NOT to photojournalists around the world. The title should be “The State of World Press Entrants.”

    • David Campbell

That is an odd response, Ken, given your post was all about sample size and what you saw as the problems with that. And in your comment you again refer to the idea of a ‘random sample’ as though this were the only appropriate basis for survey methodology. We have made clear, both in the report and above, how and why the survey methodology is robust and the inferences drawn from it are valid, within the parameters we have been clear about. To repeat, “we believe that the World Press Photo Contest-based sample is a reasonable tool for analysis about photographers in general.” As a result, we stand by our title.

      • Michael Lutzky

With due respect to all involved, the criticism that the Time headline “New Study Shows Gender Inequality in Photojournalism Is Real” is not an accurate representation of the survey results is a fair one, and it has little if anything to do with the methodology employed to collect the data.

Regarding the latter, while the methodology may be robust as suggested, that doesn’t necessarily mean it is the correct methodology for a study that wants to understand photojournalists worldwide. The issue is less about the size of the sample than whether the sample accurately reflects the population it is intending to understand. One challenge for the authors is that the survey respondents self-selected. In this case, they self-selected twice: once to enter the contest, and a second time to respond to the survey or not. While sampling is common in surveying for many important reasons, it is also carefully employed for the reasons Mr. Kobre has highlighted. It is not a leap to suggest that World Press entrants are not a representative subset of all photojournalists (as most current and former photojournalists would attest, including the author of this post). Similarly, the subset of World Press entrants is likely also not representative of the broad pool of US photojournalists. A statistician, as credentialed and experienced as these are, may not be aware of the bias this self-selection is masking, not only in their results but in their conclusions. Mr. Lambert is right that sampling of the size employed here can create a “plausible” representation of the population; in this case, though, Mr. Kobre is simply suggesting that the sample used in this study doesn’t meet that standard with a reasonable degree of confidence. After reviewing the study and the ensuing points and counter-points, I agree with him.

        • David Campbell

          Thanks for your comment supporting Ken’s position. As this post is a response to Ken’s argument, we don’t have anything to add to what we have already said. I would be interested in hearing what you and Ken would do to construct and implement a statistically robust and sound survey which met your concerns.

  2. Michael Lutzky

Thank you David. It is an important group and profession to understand better, and your work is the first of its kind that I have been exposed to. The questions and categories included are highly relevant and can be drivers of productive action to engage and optimize the work and working conditions of this group. Like many intellectual differences among the most astute experts in a given subject matter, it may simply be a case of having to agree to disagree. While you and the co-authors “believe that the World Press Photo Contest-based sample is a reasonable tool for analysis about photographers in general,” others are not yet fully bought in to that premise. That, in my opinion, is the root cause of the discomfort some have, and it is a critique that is very addressable in the next iteration. It’s a well-constructed, thoughtful survey that truly deserves a much broader distribution (there are press photographer associations and contests in many countries), which would strengthen the case that the sample used is an accurate representation of photographers in general. Best.
