A response to “Australian films at large: expanding the evidence about Australian cinema performance”, a paper by Deb Verhoeven, Alwyn Davidson and Bronwyn Coate, published in Studies in Australasian Cinema (Routledge, 2015). Online here.
The Film Impact Rating (FIR), devised and explained in the academic paper authored by Deb Verhoeven, Alwyn Davidson and Bronwyn Coate, is an important step in freeing Oz films from the straitjacket of domestic theatrical box office returns as the singular measure of success, something Screen Australia (SA) partially attempted in 2011 with its report Beyond the Box Office, which incorporated “the shift of media consumption from the large to the small screen”. As the FIR paper points out, unfortunately SA did not make “the underlying data and the specific calculations used to estimate between the large and the small screen viewership...available for external assessment”. The FIR paper, in contrast, is intended to open its analysis to public scrutiny by documenting the path it has followed, supplemented with a specific invitation for feedback.
The central questions would seem to lie in the calculation of the FIR itself, specifically the weightings used between the main categories of impact, the three C's of coverage (39%), commentary (37%) and commercial performance (24%). The internal weightings of the component variables that make up each category (14 in all) are aggregated to produce the broad weightings of the three categories that make up the FIR as indicated above.
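The aggregation described above can be sketched in code. This is purely an illustrative sketch, not the authors' implementation: it assumes each category has already been reduced to a single normalised score on a 0–100 scale, and the film's scores below are hypothetical.

```python
# Category weights as stated in the FIR paper: the "three C's".
CATEGORY_WEIGHTS = {"coverage": 0.39, "commentary": 0.37, "commercial": 0.24}

def film_impact_rating(scores):
    """Weighted sum of normalised (0-100) category scores."""
    return sum(CATEGORY_WEIGHTS[c] * scores[c] for c in CATEGORY_WEIGHTS)

# A hypothetical film: strong commentary, weak commercial performance.
example = {"coverage": 40.0, "commentary": 80.0, "commercial": 20.0}
rating = film_impact_rating(example)  # 0.39*40 + 0.37*80 + 0.24*20
```

The sketch makes the critique below concrete: a film weak on commercial performance loses at most 24 points of headroom, while the two cultural categories control 76.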
Algorithms are being used for microeconomic analysis in a particular area, often a single market or firm, to find out how it works, the data providing the means for microeconomists to make what are often proving to be “startlingly good forecasts of human behaviour in predicting what customers or employees are likely to do next” (The Economist, 10/1/15). According to the FIR paper, “algorithms are playing an increasingly prominent role in the media sector – in media consumption (e.g. search and recommendation systems) and production (particularly in terms of demand predictors and content creators)”. In developing the FIR, an algorithm has been used as a tool to retrospectively process data in order to measure film impact “as a weighted calculation made up of heterogeneous factors”.
The rating of the impact of each film is a weighted index, facilitated by the algorithm, incorporating 14 heterogeneous variables. As presented in Table 3 of the paper, an FIR seems more in the nature of an abstraction than an empirical comparative measurement of each film's commercial and cultural performance. For example, to rate the impact of Mystery Road as one third that of The Great Gatsby seems counterintuitive, to say the least, even allowing for the 'normalising' of cultural with commercial factors which the FIR is seeking to instate. Given its box office returns and low venue saturation (cf. The Great Gatsby), Mystery Road's FIR would seem mostly, if not entirely, reliant on its relatively low budget for its commercial performance rating, together with its better than average commentary rating. This would seem to indicate a problem with the way the components are weighted, and the problem seems even greater with The Darkside's FIR. Is the weighting too heavily skewed towards the hitherto neglected cultural components?
The film impact ratings would acquire, I suggest, more practical meaning if located beside five indexes. In place of the three impact variables in Table 3 I propose five columns in the form of indexes, with the FIR in the sixth column for cross-reference, viz.:
Production budget | Coverage | Domestic bo | International bo | Commentary | FIR
In other words, the production (plus marketing) budget takes the form of a simple index, with the highest budget in the selected series = 100. The expression of the production budget as a percentage of worldwide bo could be bracketed against the budget index for each film rather than integrated into a single index.
The coverage index would be unchanged, the film in the selected series with the greatest coverage = 100.
The domestic and international box office receipts would be in separate indexes, the film with the highest receipts in each case = 100.
The commentary index would be unchanged in combining quantitative and qualitative ratings with award nominations and awards actually won by the film. A film that received uniformly maximum critics' ratings from the greatest number of critics, plus a 100% nominations-to-awards success rate, would provide the theoretical benchmark (unlikely to be reached by a film in this context) of 100.
This removes a film's budget as a percentage of its bo from the commercial performance category, with the budget and the domestic and international box office receipts now forming three discrete listings.
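The proposed re-presentation above amounts to rescaling each column so the series maximum = 100. A minimal sketch, with invented figures (film names and values are not the paper's data):

```python
# Hypothetical raw figures for two films; budget and bo in $m.
films = {
    "Film A": {"budget": 105.0, "coverage": 500, "domestic_bo": 27.0,
               "international_bo": 326.0, "commentary": 49},
    "Film B": {"budget": 2.0, "coverage": 40, "domestic_bo": 0.5,
               "international_bo": 0.1, "commentary": 16},
}

def to_indexes(films):
    """Rescale each variable so the highest value in the series = 100."""
    keys = next(iter(films.values())).keys()
    maxima = {k: max(f[k] for f in films.values()) for k in keys}
    return {name: {k: round(100 * f[k] / maxima[k], 1) for k in keys}
            for name, f in films.items()}

indexed = to_indexes(films)
```

Each column stays empirically discrete: a reader can see at a glance that a film indexed at 8 on coverage but 33 on commentary punches above its distribution, which a single composite number conceals.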
The FIR is not discarded but made less abstract, because what it represents becomes more empirically assessable, by the provision of a clearer composite representation of data for comparison: the three category indexes are rejigged into five (plus a suggested bracketed rating of the production budget as a percentage of box office). This provides a multiple representation of the data in their empirically discrete categories while retaining the conceptual weight of the FIR system, the product of full 'normalisation' of the data “designed to emphasise the contingency of a multitude of impact assessments”.
This still leaves largely unanswered the question of the weightings of the three categories and their component variables. The paper indicates that “each of the variables were normalised to ensure a relatively even distribution in order to facilitate the generation of a film rating index”. Thus the spread of the weightings of eleven individual components ranges from 7.5–10%, except for the number of users polled on IMDB (4%), the number of critics polled on Rotten Tomatoes (4%) and the number of award nominations (6%). Weightings were assigned to “each of the variables based on our own knowledge, backgrounds and ideas of importance”.
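The paper says the variables “were normalised” but does not specify the method. Min-max rescaling to a common range, shown here purely as an assumption, is one common choice before applying the weights:

```python
def min_max_normalise(values):
    """Rescale values linearly so the minimum -> 0.0 and the maximum -> 1.0."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # degenerate case: no spread
    return [(v - lo) / (hi - lo) for v in values]

# Three hypothetical raw scores for one variable across three films.
normalised = min_max_normalise([5, 10, 20])  # [0.0, 1/3, 1.0]
```

Whatever the exact method, the point stands: normalisation flattens the raw spread between films, so the subjectively assigned weights then do most of the work in determining the final ranking.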
It seems to me that a total weighting of only 24% for the commercial performance category (compared with 39% and 37% for the other two categories) is especially questionable: a case of going from one extreme to the other? I think my proposed rejigging into more empirically apparent categories does provide alternative representations of the data against which to assess the FIR.
There would also seem to be a need to incorporate ancillary access (downloading and rental), seemingly the subject of the SA report referred to in the opening paragraph, and TV screenings (both free and pay). Time lags would seem to present a possible problem here.
The possibility of incorporating the volume of commercial downloads is mentioned in the conclusion of the FIR paper. The incorporation of the prestige value of non-theatrical distribution of festival screenings is also mentioned.
In looking at this whole exercise, the word that comes to mind is "overdetermination". Exploiting the well-known malleability of statistics, facilitated by the deployment of computer technology, the authors' intent would seem to be to reassert that the case for the public funding of Oz feature films is primarily cultural (as, it can be added, is the case for the arts in general).
This seems explicit in the way the instatement of normalisation values results in Saving Mr Banks sharing with The Great Gatsby the status of benchmark (a maximum value of 100) for coverage, despite the fact that the latter had more than twice the number of screenings internationally. It may well be that the effect of normalisation here in favour of the film with an explicitly Australian subject was just fortuitous. Although both films are international co-productions, the key creative input on the production side of The Great Gatsby was Australian, but in the service of the adaptation of a classic of American literature. The normalising of Walking with Dinosaurs to register an impact rating below that of The Railway Man, entirely, it seems, on the strength of the latter's higher commentary rating (more than three times higher: 49 to 16), along with the FIRs of Mystery Road, The Turning, In Bob We Trust and The Darkside, for example, reinforces the impression that there is intent here 'to rectify a wrong'.
That I am in agreement with the aim of what seems the barely disguised intent only increases my concern about the apparent opacity of the FIRs, an opacity facilitated by the formulation of the algorithm used to process the data; my concern is not with the apparent intent but with the efficacy of the impact ratings as instruments for analysis and policy formulation. It is therefore important to make the best possible use of the data without obscuring its weighting across categories, and hence its meanings. What is at particular issue is the way the weightings have been employed without adequate explanation much beyond apparent subjectivity. I cannot see how the rating of the 14 disparate variables, as outlined in the FIR paper, can be objectively arrived at. Hence I propose a rejigging, although not a radical one, along more empirical lines.
I still retain the FIR, which might now be seen as placed more in the position of the proverbial 'shag on a rock'. But my claim is that its value can be better tested by explicit comparison with its component elements.
26 February 2015