An article recently appeared in New Mandala by Joshua Woo entitled ‘Extremism in the name of Islam and Malaysian Muslims’ (22 September 2013). This article used recent survey results from the Pew Research Center’s Global Attitudes Project to support its case that we should be alarmed by the prevalence of attitudes in support of religious violence in Malaysia. The article attracted some controversy and I have been asked to comment on some of the empirical claims in particular. This response consists of two parts: in the first, I examine the validity of the statistical methods used in the article; in the second, I make some broader criticisms of it.

Woo’s article uses the figures in the Pew Center’s report [http://www.pewglobal.org/files/2013/09/Pew-Global-Attitudes-Project-Extremism-Report-Final-9-10-135.pdf] to make three key arguments. Firstly, Woo argues that ‘close to half of Malaysian Muslims think that violence such as suicide bombing can be justified to defend Islam from its enemies.’ Secondly, he argues that these figures should cause us to focus our attention on the purported links between the United Malays National Organisation (UMNO) and Islamic extremism. Finally, it is argued that these figures also require Muslims to look carefully at the rhetoric used by right-wing Muslim organisations.

How much can a few hundred people really tell us?

Some of the people who commented on this original article were critical of the statistical claims made by the author. For example, ‘kamal’ posted:

A small sample of 500 odd interview cannot be use to represent a population of several millions. Reading the first few paragraphs already sounds like bad statistical research. you cannot make generalizations like that. Such simple and naive associations are wrong. […] Secondly, if a more than 50% disagrees with violence in the name of religion, why focus on the divided minority to represent the issue being discuss at hand?

There are two key claims being made here. The first claim is that a sample of 500 people cannot be used to make inferences about a much larger population. This is a question of statistical methods and will be the focus of this first section. It was because of this question in fact that I was originally asked to write this response. The second claim is that it is inappropriate to focus the discussion on the minority who support religious violence. Although I do not address this claim specifically, the next section will discuss some related criticisms of the article.

Although it may seem counterintuitive, it is the case that a small sample can be used to make claims about even a much larger population. This fact is fundamental to statistics and quantitative research but it is only true under certain specific conditions. To give a familiar example, if I were to go to a pub in the centre of Canberra and ask a couple of dozen people questions about their political views, I would be likely to receive a rather different set of responses than if I asked the same questions in a sports bar in the western suburbs of Sydney. Neither set of answers is likely to be representative of Australia as a whole.

Why doesn’t this approach work? The reason is that neither group is nationally representative. Take the Canberra group for instance. One of the most important concerns to them may well be the prospect of cuts to the public service, whereas those in the outer suburbs of Sydney are likely to see that as far less important. To them long commute times might be a larger concern and therefore transport infrastructure could be the most important issue. In order to make valid claims about the entire country, we need to ensure that every individual has an equal chance of being interviewed. This is achieved by the process of random sampling, and doing this well is one of the challenges of conducting good quality survey research.

This leads to the next question: how many people must we sample? It is obvious that if we ask just one person, we are unlikely to get a representative view, even if that person was chosen at random. But we don’t have to ask everybody either. The answer to this question is: it depends. As a rule, the more people we ask, the more confident we can be in our conclusions. In statistics, it is never possible to eliminate doubt but what we can do is quantify it. The conventional practice is to decide beforehand how much uncertainty to accept and only draw conclusions when we can limit the uncertainty to that level.

The probability that a particular result has not arisen simply because the random sampling procedure happened to select an unrepresentative group is referred to as the confidence level. Different disciplines have different conventions for the level of confidence required to make a finding. In the social sciences, this figure is often 95%. That means that no more than one in every twenty findings made will have arisen as a result of random sampling error. In disciplines such as medicine, where lives may well depend on the results, the confidence level required is often as high as 99.9%, meaning that only one such error in every thousand findings is tolerated.
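The interplay between sample size, confidence level and uncertainty can be sketched in a few lines of Python. This is a minimal illustration, assuming simple random sampling and the normal approximation; the function name and the worst-case p = 0.5 are my own conventions for illustration, not figures from the survey:

```python
from math import sqrt
from statistics import NormalDist

def margin_of_error(n, p=0.5, confidence=0.95):
    """Half-width of a confidence interval for a proportion,
    assuming simple random sampling (normal approximation).
    p = 0.5 is the worst case, giving the widest interval."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return z * sqrt(p * (1 - p) / n)

# Asking more people narrows the margin of error ...
for n in (100, 822, 10_000):
    print(f"n = {n:>6}: ±{margin_of_error(n):.1%} at 95% confidence")

# ... while demanding more confidence widens it.
print(f"n =    822: ±{margin_of_error(822, confidence=0.999):.1%} at 99.9% confidence")
```

For a sample of 822 (the size of the Malaysian sample discussed below), this works out to a margin of error of roughly ±3.4 percentage points at 95% confidence under these simplifying assumptions.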

It is important to note that the quality of the data used in Woo’s article is not in question–Pew Research Center is a reputable organisation that routinely conducts this sort of survey using the best practices of social science research. But that does not necessarily mean that any use of their published figures is a legitimate one. We need to consider the particular statistical claims that are being made.

The author claims, for example, that: ‘867,395 Malaysian Muslims think that violence such as suicide bombing is “often justified” to defend Islam from its enemies.’ He makes similar claims about the precise numbers of Malaysian Muslims who feel that such violence is ‘sometimes’, ‘rarely’ or ‘never’ justified and those who would refuse to answer such a question. The method by which he computed this figure was to multiply the proportion of survey respondents selecting this answer (5%) by the number of Muslims in Malaysia, which he estimates as 17,347,900, based on official statistics from 2010.
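The arithmetic behind that figure is straightforward to reproduce. A two-line sketch in Python (the variable names are mine; the numbers are those cited in the article):

```python
muslim_population = 17_347_900  # official 2010 estimate cited by Woo
proportion_often = 0.05         # Pew survey: answered 'often justified'

# Woo's point estimate: proportion multiplied by population
print(f"{proportion_often * muslim_population:,.0f}")  # 867,395
```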

This approach is problematic. For one thing, it uses too many significant figures. To produce an estimate not of millions, thousands or even tens but of individual people overstates the accuracy of these figures. Even the number of Muslims living in Malaysia cannot be estimated to that level of precision. The official statistics state that there were 28.3 million people living in Malaysia in 2010, of which 61.3% were Muslim. Using those figures, we could estimate the number of Muslims as 17.3 million, but we should not try to estimate more closely than the nearest hundred thousand with only this information.

In fact, to be truly rigorous we cannot give a single number for how many people in the population would agree with such a statement. Instead we must give an interval (i.e. a range of numbers). The sample size of the Malaysian survey used for the Pew Center’s report was 822. Using the 95% confidence level which is standard in much social science research, we can estimate that the number of Malaysian Muslims who believe that such violence is ‘often justified’ is somewhere between 0.6 and 1.1 million. We cannot say exactly what that number is, and in particular we have no more reason to believe it is 867,395 than any other number in that interval. We can, however, say that there is only a 5% chance that our interval estimate is incorrect as a result of sampling error.
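That interval can be reproduced from the published figures. A minimal Python sketch, assuming simple random sampling and the normal approximation (a survey organisation may also apply a design effect that widens the interval slightly):

```python
from math import sqrt
from statistics import NormalDist

n = 822                  # Pew Malaysia sample size
p_hat = 0.05             # proportion answering 'often justified'
population = 17_300_000  # Malaysian Muslims, to sensible precision

z = NormalDist().inv_cdf(0.975)     # two-sided 95% confidence
se = sqrt(p_hat * (1 - p_hat) / n)  # standard error of the proportion
lo, hi = p_hat - z * se, p_hat + z * se

print(f"proportion: {lo:.1%} to {hi:.1%}")  # 3.5% to 6.5%
print(f"people: {lo * population / 1e6:.1f} to {hi * population / 1e6:.1f} million")
```

Scaling the interval for the proportion up to the population gives roughly 0.6 to 1.1 million people, with no reason to privilege any particular number inside that range.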

In summary, Woo’s approach is somewhat flawed from a statistical perspective. His presentation of the figures overstates the precision of the estimates that could be validly made from the available survey data and this has attracted some fair criticism from commenters. That said, his estimates are not wildly misleading either and there are much deeper criticisms that could be made of this article, which will be discussed in the next section.

The many different facets of the truth

There is more to good quantitative research than simply reporting the correct figures. It is too easy to select only those statistics that portray the data in a particular way and to ignore the others. In order to give a clear and fair picture of the situation, one must give careful consideration to which statistics will be discussed and how they will be represented. Unfortunately, Woo’s article consistently presents data in such a way as to cast Malaysian Muslims in the worst possible light and it fails to mention information that might lessen this effect. For example, the article observes that:

First, about 6.7 million Malaysian Muslims, which is 39% of total Malaysian Muslims think that violence can be justified to defend Islam. If among the 520,437 who refused to answer the question because they were either afraid or embarrassed to reveal their inclination for the justification of violence to defend Islam, then the actual number would be higher. This suggests that close to half of Malaysian Muslims think that violence such as suicide bombing can be justified to defend Islam from its enemies.

While it is certainly true that the figures would be higher if the non-response rate was a result of fear or embarrassment, we have been given no reason to believe this might be true. It seems strange to raise such a possibility in an academic work without examining it more closely. As it is, the claim has the effect of evoking this possibility in the mind of the reader without requiring the author to defend it. It would be difficult to defend anyway: 3% is actually a rather low non-response rate for a question, whereas widespread fear or embarrassment would be expected to produce a much higher one.

Similarly, why give a precise estimate of the number of Malaysian Muslims who might under some circumstances support religious violence when survey research would usually report in terms of percentages? The observation that nine hundred thousand Malaysian Muslims believe that religious violence is ‘often justified’ is a frightening one. It is arguably rather less frightening to observe that only five percent of Malaysian Muslims believe the same. Both observations are equally justifiable from the data but the former claim is unnecessarily sensationalist.

Finally, and most troublingly, the article fails to mention that the report which it cites comes to strikingly different conclusions. Even the title of the report–‘Muslim Publics Share Concerns about Extremist Groups: Much Diminished Support for Suicide Bombing’–suggests a very different interpretation of the data. The title of the report does not appear anywhere in Woo’s article. Neither does the key finding that a majority of Muslims surveyed (70% in Malaysia) were concerned about Islamic extremism.

The report also speaks more optimistically about attitudes towards religious violence than does Woo’s article. The relevant paragraph from the Pew Center’s report (pp. 3–4) is:

Half or more of Muslims in most countries surveyed say that suicide bombing and other acts of violence that target civilians can never be justified in the name of Islam. This opinion is most prevalent in Pakistan (89%), Indonesia (81%), Nigeria (78%) and Tunisia (77%). Majorities or pluralities share this unequivocal rejection of religious-inspired violence in Malaysia (58% never justified), Turkey (54%), Jordan (53%), and Senegal (50%). In Malaysia, however, roughly a quarter of Muslims (27%) take the view that attacks on citizens are sometimes or often justified.

It is of course concerning that a quarter of Malaysian Muslims surveyed believe that religious violence is sometimes or often justified and it is completely appropriate to write about that. But there is a danger in failing to mention that this is still a minority, that a clear majority unequivocally oppose religious violence and that a large majority are concerned about religious violence in Malaysia. Woo’s article could have been much improved by discussing these facts up front before addressing his concerns about the minority view.

Substantive points missed …

Woo’s statistical analysis suffers from a couple of problems. At a technical level, it is slightly flawed in that the statistics are reported to a much greater level of precision than is justified by the data, though this is a minor problem. More seriously, the article presents the data in a rather one-sided way and fails to mention relevant pieces of information that might cause it to be seen in a different light.

Having said all of that, the statistical analysis is far from being the most important part of Woo’s article. He makes a number of interesting points in his article and he could easily have made those same points without the statistical analysis that has been the subject of this response. Unfortunately, this has detracted from his article because it is this analysis that has drawn most of the attention from commenters, rather than his substantive points.

Troy Cruickshank is a PhD candidate at the School of Politics and International Relations, Australian National University. His research focuses on how voter attitudes affect political behaviour, particularly during times of economic crisis.