Covid-19 and Female Leaders

When the feminists are not busy vilifying men and presenting women as their perpetual victims, they are indulging in self-glorification instead.

Yesterday The Guardian ran a story titled “Female-led countries handled coronavirus better, study suggests”. They linked to this study by Supriya Garikipati and Uma Kambhampati at the universities of Liverpool and Reading respectively.

The Guardian describes the study as “published by the Centre for Economic Policy Research and the World Economic Forum”, which I interpret as meaning those bodies funded the work. However, it is published on SSRN (Social Science Research Network), which is a preprint repository – in other words, it is not a peer-reviewed journal paper. The conclusions of the study claim that:

“Our findings show that COVID-outcomes are systematically and significantly better in countries led by women and, to some extent, this may be explained by the proactive policy responses they adopted. Even accounting for institutional context and other controls, being female-led has provided countries with an advantage in the current crisis … the gender of leadership could well have been key in the current context where attitudes to risk and empathy mattered as did clear and decisive communications … women leaders seem to have emerged highly successful.”

The Guardian article included this graphic:

Figure 1

(It is not that their geography has gone so awry that they believe New Zealand and Ireland are literally ‘nearest neighbours’ – the term is used in a different, technical sense; see below.)

I have been keeping well away from Covid-19 statistics in general, other than the occasional reminder to the wilfully obtuse that male mortality is roughly double that for women. There are a number of reasons for my reticence about getting involved in Covid stats, one being that we have yet to see whether the stats at present bear any relationship to the final out-turn in, say, a year or two’s time. Draconian lockdowns now may mean more deaths later – who knows. Another reason is that I have been sceptical about the value of the statistics, both the numbers of cases and the numbers of deaths.

However, I couldn’t let Garikipati and Kambhampati’s study pass without examination, as I’m sure you will appreciate. I have analysed the data myself. As a by-product I find my previous scepticism in respect of the data to be justified.

In common with Garikipati and Kambhampati, I take the Covid-19 data from Worldometer. Specifically, I took the data as they stood on 19/8/20. Garikipati and Kambhampati’s data extended only up to 19/5/20, some three months’ less data than I have used. In both cases the data refer to cumulative quantities, i.e., cases, tests and deaths up to the respective dates.

Worldometer issues due warnings about the data. Data for total cases in a given country refer to the sum of confirmed cases and those that are merely suspected. However, as testing covers only a fraction, generally a tiny fraction, of populations, Worldometer warn that “most estimates have put the number of undetected cases at several multiples of detected cases”. In other words, no one really knows how many people have been infected with the virus in any given country (with a very few exceptions).

Total deaths are defined simply as the cumulative number of deaths among “detected” cases (noting that “detected” might mean a positive test result, or merely a suspected case). Hence the death data could be wildly too small or wildly too large. Since the death data are drawn only from detected cases, and since it is acknowledged that the number of people infected is probably “several multiples” larger, that potentially biases the death data down by a substantial factor. However, one might argue that this is less important than it appears because, where deaths occur, the case is likely to be counted as “detected”. In other words, those instances of infection which are not “detected” are likely to have extremely low mortality rates. Conversely, the death data may be over-estimated if large numbers of deaths within the “detected” cases are attributed to Covid-19 simply because the person was infected and died. In other words, how many people counted within the Covid dead actually died with the virus rather than of it?

On top of those major uncertainties, there are many reporting issues listed by Worldometer. Countries are constantly changing their methodology for reporting. Worldometer’s list of reporting issues seems not to be complete, as the UK is not listed despite having recently revised down its death data. Finally, data collection and analysis are unlikely to be consistent between countries – and that is rather damning since the present exercise is precisely a comparison between countries.

However, my objective is not to draw any definitive conclusions based on the available Covid-19 statistics. My objective is only to examine the veracity of Garikipati and Kambhampati’s claims. The observations made above are already several nails hammered into the coffin lid. But the rest of my analysis asks what we find if we take the data at face value.

Firstly, let’s look at all the data without regard to the sex of the leaders. Worldometer lists 180 countries with Covid-19 data for all three of tests, cases and deaths. There are several more with data for some but not all of these quantities, 210 locations being listed in total. I use all the data available in what follows.

I confine attention to the following data: (i) tests per million of the country’s population, (ii) cases per million population, and, (iii) deaths per million population. Comparison between countries would be meaningless unless data per million of the population were used. Henceforth I may fail to stipulate “per million” for brevity, but this is to be understood throughout.

All three data types (tests, cases and deaths) range over several orders of magnitude. In such cases it is often more enlightening to plot data on log scales. This is done in Figures 2, 3 and 4 which plot on log scales: tests versus deaths, tests versus cases, and deaths versus cases respectively. Each graph also shows a best-fit line through the log-log data. A straight line on a log-log plot is actually a power law relationship between the variables, and the equation of this best-fit power law is given on the graphs. It is clear from the graphs that the log-quantities are correlated. The Pearson correlations of the log data are 0.47, 0.58 and 0.88 respectively.
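For readers who want to reproduce this kind of fit, here is a minimal sketch of the procedure (the numbers are illustrative placeholders, not the Worldometer data): a straight-line fit to the log10 values is equivalent to a power law, and the quoted correlations are Pearson correlations of the same log data.

```python
# Minimal sketch: power-law fit via linear regression on log-log data.
# The per-million figures below are illustrative placeholders, not the real dataset.
import numpy as np
from scipy.stats import linregress, pearsonr

tests_per_m  = np.array([2_000, 15_000, 40_000, 120_000, 300_000], dtype=float)
deaths_per_m = np.array([3, 20, 60, 250, 500], dtype=float)

log_x, log_y = np.log10(tests_per_m), np.log10(deaths_per_m)

fit = linregress(log_x, log_y)     # straight line through the log-log data
r, _ = pearsonr(log_x, log_y)      # Pearson correlation of the log quantities

# A straight line log_y = slope*log_x + intercept is the power law y = 10**intercept * x**slope
print(f"deaths_per_m ≈ {10**fit.intercept:.3g} * tests_per_m^{fit.slope:.2f}")
print(f"Pearson correlation of log data: {r:.2f}")
```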

Least surprising is Figure 4 in which the power law index is close to 1, indicating that the number of deaths is proportional to the number of cases, as one would expect. Note that this says nothing about the accuracy of either the death data or the cases data. Both could be wrong by some factor – and the factor could be different for deaths and cases – but proportionality would still be found if these factors were broadly the same between countries.

Figure 2 is more unexpected, indicating an association between the number of deaths and the number of tests. I think we can be confident the tests are not killing people, so how does this association arise? One possibility is that, as the number of deaths increases in a given country, so that country may be motivated to carry out a larger number of tests. But another possibility relates to the shortcomings of the death data as a true measure of Covid deaths. The death data are obtained simply as the numbers of deaths amongst those identified as cases (i.e., infected). The more tests that are done, the more cases (infections) will be found, and hence, inevitably, the larger will be the associated numbers of deaths as these will be drawn from an increased pool of candidates. The people in question may not have died from Covid-19, and this method of counting excludes valid Covid deaths outside the pool of identified cases.

There is a third possible contribution to the explanation of Figure 2 which relates to the time dimension. Recall that these data are cumulative over the whole Covid-19 pandemic period (perhaps 5 or 6 months or so). Over this period the number of deaths and the number of tests would increase from an initial zero. Moreover, different countries experienced the wave of infections and deaths at staggered time intervals. Hence, the data for a range of countries may be an alias for a range of times.

Figure 3 is initially even more odd. It is tempting to regard an association between cases and tests as telling us something about the efficacy of the testing regime. But Figure 3 makes no sense on that basis. Suppose tests were carried out at random. The number of cases detected would increase in proportion to the number of tests. But Figure 3 has a power law index of only 0.58, well below the value of unity expected for proportionality. On the other hand, suppose the testing regime was extremely efficiently targeted on those most likely to be infected, and only later tested other people. Again a linear relationship would be expected initially, until all the infected people had been tested, after which there would be no further increase in the number of cases. The trend of Figure 3 would then be a linear dependency which ultimately turns to the vertical – and this bears no similarity to the data at all.

To drive home how odd Figure 3 is, consider the trend line: carrying out 2,000 tests would reveal 10 infections (one in 200), whereas carrying out 100,000 tests would reveal 10,000 infections (one in ten). Why should the proportion of tests which are positive increase as the number of tests increases? One possible explanation again appeals to the time element. This is not a static picture. Carrying out a very large number of tests takes time, and hence will inevitably involve many tests carried out late in the pandemic. If the infection has continued to spread, regardless of lockdown measures, then there would indeed be a larger percentage of tests which return positive results later in the testing programme.    
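As a quick consistency check (assuming the fitted index of 0.58 expresses tests as a power of cases, the only reading consistent with the trend-line figures just quoted), the two quoted points imply that cases grow as roughly tests^1.8, since ln(10,000/10)/ln(100,000/2,000) = ln(1,000)/ln(50) ≈ 1.8. Inverting gives tests ∝ cases^0.57, in line with the fitted 0.58, and the positivity rate therefore rises along the trend roughly as tests^0.8.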

Note also that we only need explain either Figure 2 or Figure 3 since they are not independent. Any two of the power law behaviours of Figures 2, 3 and 4 may be used to derive the third as a matter of consistency.
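To see why, write the fitted trends generically (a sketch in the obvious notation rather than the exact fitted coefficients): if deaths ∝ cases^a (Figure 4) and tests ∝ cases^b (Figure 3), then eliminating cases gives tests ∝ deaths^(b/a), which is the form of Figure 2, so the third power law index is determined by the other two.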

In summary, the unexpected nature of the relationships evident in Figures 2 and 3, and the lack of a clear explanation for these behaviours, indicates that the data measures themselves (cases and deaths) are not properly understood and do not mean what they are naively promoted to mean.

[The sharp-eyed reader may have spotted a couple of data points in which the number of tests per million exceeds a million! These are correct – or, at least, they are what the Worldometer data state – and relate to the Faeroe Islands and Luxembourg, whose populations of 49,000 and 627,000 have each been tested more than once on average.]

Now let’s return to examining the claims of Garikipati and Kambhampati.

Figure 2
Figure 3
Figure 4

The following lists the countries in which the most powerful executive politician is female. This is clear in 17 cases. I have listed a further 9, making 26 in all, in which a woman is president alongside a male prime minister. The UK is listed in the Worldometer data as the UK, not as four separate nations, so you won’t find Nicola Sturgeon below.

Figures 5, 6 and 7 reproduce Figures 2, 3 and 4 but with the above 26 cases of countries with female leaders shown as red squares. It is immediately obvious that the red points are broadly similar to the rest of the data. The red line is the best fit to the red data (for female-led countries). This is the best straight line fit to the log-log data, i.e., the best power law fit to the data itself. Also shown on Figures 5, 6 and 7 are the upper and lower 95% confidence bounds on the total (blue) data.

The red lines lie comfortably within the blue dashed upper/lower 95% CL lines, indicating that the red data is not significantly different from the total (blue) dataset.
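For the curious, here is a rough sketch of how this kind of check can be set up (synthetic numbers standing in for the real per-million data, and a simple residual-based band standing in for whatever exact construction the figures use):

```python
# Rough sketch: fit the full log-log data, form an approximate 95% band from the
# scatter about that fit, then test whether the fit to the female-led subset
# stays inside the band. All numbers below are synthetic placeholders.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
log_tests  = rng.uniform(2, 6, 180)                          # log10(tests per million), synthetic
log_deaths = 0.6 * log_tests - 1 + rng.normal(0, 0.5, 180)   # log10(deaths per million), synthetic
female_led = rng.random(180) < 26 / 180                      # flag roughly 26 of 180 countries

all_fit = linregress(log_tests, log_deaths)
resid   = log_deaths - (all_fit.intercept + all_fit.slope * log_tests)
band    = 1.96 * resid.std(ddof=2)                           # crude 95% band about the overall fit

sub_fit = linregress(log_tests[female_led], log_deaths[female_led])

grid     = np.linspace(2, 6, 50)
all_line = all_fit.intercept + all_fit.slope * grid
sub_line = sub_fit.intercept + sub_fit.slope * grid
print("female-led fit stays within the 95% band:",
      bool(np.all(np.abs(sub_line - all_line) < band)))
```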

There is a slight indication that countries with a female leader tend to have rather more tests, but this is not statistically significant (which means it could easily be a random fluke).

The obvious red outlier at abnormally low deaths per million in Figure 7 is Singapore. The red point with the greatest deaths per million (1,237) in Figures 5 and 7 is San Marino, which has a population of only 33,941 and only 42 deaths, and so is hardly very indicative. However, similar observations apply also to many countries led by men. The red point with the second largest number of deaths per million is Belgium, with just short of 10,000 deaths in a population of 11.6 million (859 deaths per million).

Figure 5
Figure 6
Figure 7

Conclusion

The claim made by Garikipati and Kambhampati, namely that “COVID-outcomes are systematically better in countries led by women”, is not supported by the data. On the contrary, there is no statistically significant difference between female-led countries and the totality of countries.

Garikipati and Kambhampati compare male- and female-led countries by pair-wise comparison, i.e., one female-led country is paired with one male-led country. This is illustrated by the Guardian graphic reproduced as Figure 1. It is clear from the huge scatter in the data shown in Figures 5, 6 and 7 that how one chooses the pairs in question will dictate the answer one gets. In other words, Garikipati and Kambhampati have indulged in a particularly crude form of cherry picking, and referring to it as “nearest neighbour matching” does not improve its validity. It is readily seen from Figures 5-7 that one could easily pick pairs of countries which would seem to support the idea that women leaders were crap compared with men – if anyone were so silly as to wish to do so.

13 thoughts on “Covid-19 and Female Leaders”

  1. paul parmenter

    And the proof that it is specifically the sex of the leader of each state that determines how successfully that state has handled the virus is…er…um…

  2. Malcolm James

    I agree that the study’s methodology is what can be described as a load of bollocks. This paper is not peer-reviewed, but this case might indicate what would ensue if it were submitted. Admittedly, the offending review appears to have been written in an unnecessarily inflammatory manner and he (I’m assuming the reviewer was male!) could have made precisely the same points in very measured and academic terms.

    https://www.timeshighereducation.com/news/sexist-peer-review-causes-storm-online/2020001.article

    The upshot was that this review was ignored and another review was sought. I imagine this reviewer had more sense (or sense of self-preservation) than to severely criticise it and was thoroughly tame, so that this paper has now been published.

  3. Seamus Ariat

    All statistics produced by women with an agenda should be viewed with great scepticism. All statistics produced by feminists with an agenda are likely to be entirely fictional or bad science or loaded with confirmation bias or all three. For example statistics about rape & sexual assault or those about Domestic Violence.

    See https://www.city-journal.org/html/campus-rape-myth-13061.html which deconstructs statistics about sexual assault on US campuses.

  4. Andy in Germany

    Apart from anything else, Germany is federal: each of the sixteen states has its own state president, many of whom are male, and the states each led their own C-19 response with the federal government providing advice and coordination.

    Also, we have a male federal president who is head of state.

    Merkel was a good and effective leader but she was far from being the only one and her influence under the German constitution is deliberately limited, meaning that the claim that Germany’s admittedly good response is entirely due to Merkel being female is really rather silly.

    In fact it is rather insulting to our Chancellor to suggest that her effectiveness was because of her gender: Merkel was effective because of her abilities, training and experience, not her X chromosomes.

  5. hannah

    Data from EU CDC

    Among all countries, the median death rate among male-led countries was 8.53 deaths per million (range: 0 to 1238 deaths per million; first quartile: 1.21 deaths per million; third quartile: 44.9 deaths per million), whereas the median death rate among female-led countries was 51.7 deaths per million (range: 0.29 to 844 deaths per million; first quartile: 24.4 deaths per million; third quartile: 108 deaths per million). The p-value of a Mann-Whitney U test comparing the death rates per million between male and female-led countries was 0.011.

    Among the 119 countries with a high or very high Human Development Index (index value of 0.7 or higher), the median death rate among male-led countries was 28.4 deaths per million (range: 0 to 1238 deaths per million; first quartile: 4.4 deaths per million; third quartile: 81.2 deaths per million), whereas the median death rate among female-led countries was 55.6 deaths per million (range: 0.29 to 844 deaths per million; first quartile: 28.1 deaths per million; third quartile: 116 deaths per million). The p-value of a Mann-Whitney U test comparing the death rates per million between male and female-led countries was 0.155.

    The Oxford Government Response Tracker defines a stringency index based on factors such as school closures, workplace closures, restriction of international travel, and cancellation of public events; values range from 0 to 100 (higher values mean more stringency). Define the degree of stringency as the maximum value this metric has reached up to this point. Among these 119 countries, the median degree of stringency among male-led countries was 87.0 (range: 19.4 to 100; first quartile: 80.3; third quartile: 92.6), whereas the median degree of stringency among female-led countries was 74.5 (range: 30.6 to 100; first quartile: 69.2; third quartile: 92.7). The p-value of a Mann-Whitney U test comparing the degree of stringency between male and female-led countries was 0.087.

    The Oxford Government Response Tracker defines a government response index based on factors such as imposing restrictions on school and workplace openings, public events, and travel, as well as efforts to contain the spread and to communicate with the public; values also range from 0 to 100 (higher values mean more involvement). Define the degree of response as the maximum value this metric has reached up to this point. Among these 119 countries, the median degree of response among male-led countries was 81.1 (range: 26.9 to 96.2; first quartile: 74.4; third quartile: 85.3), whereas the median degree of response among female-led countries was 75.0 (range: 34.0 to 89.1; first quartile: 66.7; third quartile: 79.2). The p-value of a Mann-Whitney U test comparing the degree of response between male and female-led countries was 0.061.

    Define the time until any meaningful government response as the number of days between the appearance of the first case in a country and the day the government response index hits 30; note this value can be negative if countries start taking precautions before the appearance of the first case. Among these 119 countries, the median time until response among male-led countries was 11 days (range: -43 to 118 days; first quartile: three days; third quartile: 18 days), whereas the median time until response among female-led countries was 13 days (range: one to 42 days; first quartile: 9.5 days; third quartile: 20.5 days). The p-value of a Mann-Whitney U test comparing the time until response between male and female-led countries was 0.375.

    Source: https://www.reddit.com/r/skeptic/comments/hpds9c/femaleled_countries_versus_maleled_countries_in/
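    For anyone wanting to run the same kind of test themselves, a minimal sketch using scipy is given below (the values are illustrative placeholders, not the EU CDC figures):

    ```python
    # Minimal sketch: Mann-Whitney U comparison of deaths per million between
    # male-led and female-led countries. Values are illustrative placeholders only.
    from scipy.stats import mannwhitneyu

    male_led_deaths_per_m   = [0.5, 3.2, 8.5, 28.4, 81.2, 240.0, 1238.0]
    female_led_deaths_per_m = [0.29, 24.4, 51.7, 108.0, 844.0]

    stat, p = mannwhitneyu(male_led_deaths_per_m, female_led_deaths_per_m,
                           alternative="two-sided")
    print(f"U = {stat}, p = {p:.3f}")
    ```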

    1. William Collins (post author)

      That is interesting as it uses a different data source, though only to 10th July. Also, the above analysis was based on only 13 female-led countries compared with my 26. The above associations are all non-significant apart from the first paragraph, which indicates greater death rates for female-led countries. I have my doubts that medians are the best means of comparison, though. But I think we can say it reinforces my consigning of Garikipati and Kambhampati to the dustbin.

      1. Carson

        First, FYI, there is an update to the analysis Hannah had referenced above. It goes all the way up to the middle of August and also includes analysis of heavily female national parliaments (40% or more women):

        https://www.reddit.com/r/theydidthemath/comments/idz5ih/self_according_to_european_cdc_and_oxford/

        In that analysis, the thirteen female-led countries were ones in which the person heading the body with the primary executive power is a woman. I think this is the right way to define a female-led versus male-led country. A ceremonial head of state like the presidents of Greece, Estonia, Georgia, Singapore, and Slovakia won’t be able to do much about COVID since they don’t have the executive power; it’s the prime ministers in these countries that govern and call the shots.

        I also think that female-led should mean “led by a woman for more than half of the duration of the pandemic” (to account for any upcoming elections and resignations). For example, suppose the prime minister of a male-led country with a high death rate (e.g. the Netherlands) resigns sometime in the next week and a woman takes his place. She shouldn’t be considered responsible for the high numbers of deaths before she took office; that was probably largely out of her control. Similarly, if the President of Taiwan leaves office for some reason during the pandemic and is replaced by a man, he shouldn’t be credited with the low number of deaths Taiwan has experienced either. So I don’t quite agree with your decision to call Gabon or Belarus woman-led for these purposes.

        Medians were used because the analyses were done without any assumptions about the form of the distribution of deaths per million. The usual assumptions are likely not to hold in this case. When a Mann-Whitney U test is done (as was done in this analysis), medians are typically reported.

        But I’m glad multiple independent analyses are confirming the same thing — that just because a country is led by a woman does not automatically mean it will be able to handle the pandemic better.

  6. Logan

    So much money available to women / women’s groups to produce this nonsense. It’s a wonder they don’t start advertising their credentials as ‘Professional Sexists’.

  7. Mr David Eggins

    Toxic masculinity – (and toxic femininity) – are the result of toxic stress. The major cause of toxic stress is? Answers to a radical feminist near you, to hear how she wriggles on that one!

