V krizi smisla tiči misel
23.04.2017

Opinion on the study “Cultural Participation and Inclusive Societies. A thematic report based on the Indicator Framework on Culture and Democracy. December 2016”

Filed under: Cultural Economics, Cultural Policy — andee - 23.04.2017

This post was presented at the meeting of the authors of the Compendium of Cultural Policies and Trends in Europe in Cyprus on 30 and 31 March 2017. According to the colleague who presented it, it received a strong response. Since I believe that projects such as the IFCD (Indicator Framework on Culture and Democracy) can be quite dangerous, i.e. can lead to questionable and inaccurate conclusions that are then used in decision-making, I am also publishing it here. The full study can be found here.

SHORT OPINION ON THE REPORT ON »CULTURAL PARTICIPATION AND INCLUSIVE SOCIETIES. A THEMATIC REPORT BASED ON THE INDICATOR FRAMEWORK ON CULTURE AND DEMOCRACY. DECEMBER 2016«

In general, the work on gathering cultural indicators is a worthwhile and extremely difficult endeavour. Cultural statistics are not provided by, e.g., Eurostat systematically or in sufficiently long and equally spaced (e.g. yearly) time series. To this end, I strongly support reports such as this one and the work on the IFCD in general.

On the other hand, in a renowned article, Diamond and Hausman (1994) question the »numbers« provided by contingent valuation methodology (CVM; of course unrelated to this report, but useful as an illustration) with an often quoted question: »Is some number better than no number?«. Indeed, Hausman revisits this in a 2012 article (Hausman, 2012) and describes even his earlier hopes that CVM might eventually provide meaningful numbers as »hopeless«.

I do not think the situation of the IFCD and its usage is hopeless; on the contrary, it is promising. However, I find several problems with the indicators and with the provided report, briefly listed below:

1) Inadequate description of the methodological procedure: the report lacks an adequate description of what exactly is measured by »cultural participation«. Apparently, this is a composite indicator taken from the IFCD, comprising some number n of indicators collected in the 7 dimensions listed in the final part of the results section of the report. The construction of the measure is not clear from the report itself; from the general IFCD report one can conclude that it is a simple summation-based measure of 7 individual components, each in turn based on a summation of its indicators. But this should be stated clearly in the report, as otherwise we do not know what is being measured at all. Furthermore, the methodology used should be described. Apparently it includes only Pearson correlations, with some basic confidence intervals provided in graphs, but a justification of this procedure should be given (related to the required sample size, the reliability and validity of the results and, if possible, some additional sensitivity analysis).

2) Problems in the methodology Nr. 1: the study often uses Pearson’s correlation on a sample size of ~20. Although in theory nothing is wrong with such a procedure, the reliability (and validity) of the results and, above all, of their interpretation is highly questionable. The sample size problem here is extremely dire, and there is likely strong heterogeneity in the dataset (e.g. by welfare regime, cultural policy model, etc.) which is left completely unaddressed, at least as far as I could see.
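To illustrate the sample-size concern, here is a minimal sketch (with hypothetical values of r, not figures taken from the report) of the approximate 95% confidence interval for Pearson’s r at n = 20, via the standard Fisher z-transform:

```python
import math

def pearson_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for Pearson's r
    via the Fisher z-transform."""
    z = math.atanh(r)            # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)  # standard error of the transformed value
    return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

# With only ~20 countries, even a seemingly strong correlation is imprecise:
lo, hi = pearson_ci(r=0.5, n=20)
print(f"r = 0.50, n = 20 -> 95% CI ({lo:.2f}, {hi:.2f})")
```

The resulting interval spans roughly (0.07, 0.77): anything from a negligible to a strong association is compatible with the data, which is exactly why interpretations built on such correlations are fragile.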

3) Problems in the methodology Nr. 2: if the measures are composite indicators based on simple summation (after standardization, which has been done), and the indicators are likely prone to significant errors (they are even collected for different years – see the main IFCD report), then the »practical« errors (as opposed to the ones in the theoretical confidence intervals) are likely very large. It is, for example, possible or even probable that there are outliers in the data, for numerous reasons including measurement mistakes, and in that case they likely have a huge influence on the results.
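A toy sketch (entirely synthetic numbers, not the IFCD data) of how a single erroneous observation can dominate a correlation at sample sizes like these:

```python
def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# 16 points constructed to be exactly uncorrelated...
x = [1, 1, 2, 2] * 4
y = [1, 2, 1, 2] * 4
print(f"without outlier: r = {pearson(x, y):.2f}")  # r = 0.00

# ...plus one mismeasured observation at (10, 10):
print(f"with outlier:    r = {pearson(x + [10], y + [10]):.2f}")  # r = 0.94
```

One bad data point turns an exactly zero correlation into a seemingly very strong one, which is why outlier diagnostics (or robust alternatives such as Spearman’s rank correlation) matter so much here.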

4) Causality: the study even discusses causality, which is absurd: of course there are significant questions of reverse causality in this dataset and in the relationships analyzed, but I very much fear that the data available to this study (mainly due to the sample size and the quality of the data) do not allow meaningful conclusions in this regard. Furthermore, as pointed out in a recent article by Ruiz Pozuelo, Slipowitz and Vuletin (2016, https://publications.iadb.org/handle/11319/7758?locale-attribute=en), the problems in estimating causality in relationships involving institutional indicators can be huge.

5) Presentation of the results: at least a complete table of all correlations (of all the relevant/included variables) with sample sizes and other statistics should be provided.

In general, I do not want to be too pessimistic and critical about the report. Again, I think that such endeavours could be useful if, in the future, more attention is paid to: a) the construction of the indicator – additional consideration of weighting procedures and a sensitivity analysis of the consequences of using different weights for different indicators/components; b) the quality of the data – there will always be a sample size problem, as the number of countries is limited, so at least some time-series considerations should be provided; also, at least the year(s) of the data should match, as one would assume there have been significant changes in the past decade, if for nothing else then because of the economic crisis (data for 2003 or 2007 could therefore differ significantly from data for 2013; some of the data are from 2007/2011/2014 and other years, and some from the apparent reference year 2013 (or 2014?)); c) the methodology – a simple Pearson correlation analysis is simply not enough for a discussion of this topic; if you want to address causality, as the report does (and should), significantly more methodological effort is required.
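On point a), a minimal sketch (with made-up, already-standardized component scores and hypothetical country labels, purely for illustration) of how the choice of weights in a summation-based composite can reorder countries:

```python
# Hypothetical standardized scores on two components per country:
scores = {
    "A": (1.0, -0.5),
    "B": (0.2, 0.6),
    "C": (-0.4, 1.1),
}

def ranking(w1, w2):
    """Countries ranked by the weighted composite score, best first."""
    return sorted(scores, key=lambda c: -(w1 * scores[c][0] + w2 * scores[c][1]))

print(ranking(0.5, 0.5))  # equal weights:          ['B', 'C', 'A']
print(ranking(0.8, 0.2))  # emphasis on component 1: ['A', 'B', 'C']
```

Even this trivial example produces two entirely different orderings, which is why a sensitivity analysis over the weights should accompany any ranking or correlation built on the composite.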

I must say I see a danger that, due to the relatively rare use of more complex statistics in cultural policy research, researchers in the field could indeed take the graphs for granted and perhaps (likely) even mistake the results of the analysis for causal effects (despite the warnings in the study); in this manner, such efforts could create more »noise« than information. As said previously, I have serious concerns about the reliability and validity of the results, which are not adequately addressed in the report, as far as I managed to read in a short time (if I skipped something, I apologize).

Nevertheless, I suggest spending more effort on the above suggestions in the future and hope that my opinion will be useful despite its critical tone.

P.S.: I did not spend much effort on discussing the interpretation of the results – although I am a cultural economist, I am not an expert on cultural participation, and it may be that some of the substantive results are questionable/debatable as well. I hope some other experts can be contacted to discuss those issues too.

Author:
Andrej Srakar, PhD; Research Associate, Institute for Economic Research, Ljubljana; Assistant Professor, Faculty of Economics, University of Ljubljana.



