Is Putin’s Popularity Real?

(PONARS Eurasia Policy Memo) Vladimir Putin has managed to achieve and maintain strikingly high public approval ratings throughout his time as president and prime minister of Russia. Kremlin spin doctors help carefully manage Putin’s public image via a heavy dose of propaganda in the state media, punctuated by the occasional shirtless photo or publicity stunt to emphasize his masculinity, and reinforced by a substantial public opinion operation that includes periodic large surveys of the mass public. Whether or not these methods are effective, there is little doubt that the Kremlin cares deeply about the level of popular support for Putin. Indeed, observers in Russia are quick to point to Putin’s high approval ratings (often in contrast to other leaders) as a source of legitimacy. As in other autocracies, popular support for the ruler is thought to be a key determinant of the tenure and performance of the regime.

Yet, there is a nagging suspicion that Putin’s approval ratings are inflated because respondents are lying to pollsters. Although repression is typically not the first option for contemporary dictatorships, the possibility remains that citizens will be penalized for expressing disapproval of the ruler. Even a small probability of punishment may be sufficient to dissuade survey respondents from expressing their true feelings about the ruler. Alternatively, and not mutually exclusively, respondents may lie to pollsters to conform to what they perceive as a social norm—in this case, supporting Putin.

Determining the extent of dissembling in public opinion polls is a challenge not only for researchers studying public opinion, who have used Russian polling data to explain patterns of regime support, but also for autocrats themselves, who typically lack the tools of free and fair elections or open media with which to gauge their “true” support among the public. Indeed, although Western politicians in democracies are often portrayed as obsessed with polling, in some ways it is in nondemocratic regimes that credible survey responses are most important.

Research Design

To explore the extent to which survey respondents truthfully reveal their support for Putin, we contracted the Levada Center to place questions on the January 2015 and March 2015 rounds of their nationally representative survey (Vestnik). Both rounds included the standard question used to gauge support for Vladimir Putin: “In general, do you support or not support the activities of Vladimir Putin?”

The concern that our research design seeks to address is that respondents might misreport their support for a political leader when asked directly. To gauge the extent to which survey respondents actually support Putin, we employ a version of the widely used “item-count” technique, often referred to as a “list experiment.” The idea of a list experiment is to give respondents the opportunity to truthfully reveal their opinion or behavior with respect to a sensitive topic without directly taking a position that the survey enumerator or others might find objectionable. Previous work has used the item-count technique to study a range of sensitive topics, including vote buying in Lebanon and voting for Putin in the 2012 presidential election.

The list experiment is implemented by providing the respondent with a list of items and asking not which apply to the respondent, but how many. Thus, in a classic example, respondents are asked how many of a list of events (“the federal government increasing the tax on gasoline,” “professional athletes getting million-dollar contracts,” “large corporations polluting the environment”) would make them angry or upset. Respondents randomly assigned to the treatment group receive a longer list, which includes the potentially sensitive item, than do respondents in the control group. Continuing the example, respondents in the treatment group, but not the control group, see a list that also includes the item “a black family moving in next door.” Respondents in the treatment group are thus given the option of indicating that they would be angered if a black family moved in next door while maintaining ambiguity about whether it is this or one of the other items on the list that would make them angry: they merely include that item in their count of upsetting events.

Interpretation of results from a list experiment is straightforward. Differences in the average responses for the treatment and control group, respectively, provide an estimate of the incidence of the sensitive item. In the example above, the mean number of items provoking anger for respondents living in the U.S. South who were randomized into the treatment group is 2.37, vs. 1.95 for the control group, implying that 42 percent of Southern respondents would be angered by a black family moving in next door.
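
To make the arithmetic concrete, the difference-in-means estimator can be written out in a few lines of Python. This is an illustrative sketch, not code from the study: the function name is ours, and the two means are the published figures from the example above.

```python
# Difference-in-means estimator for a list experiment: the average
# item count in the treatment group minus the average count in the
# control group estimates the share of respondents to whom the
# sensitive item applies.

def list_experiment_estimate(mean_treatment, mean_control):
    """Estimated incidence of the sensitive item."""
    return mean_treatment - mean_control

# Means for U.S. Southern respondents in the classic "angry/upset" example.
estimate = list_experiment_estimate(2.37, 1.95)
print(f"{estimate:.0%}")  # prints "42%"
```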

In our setting, the potentially sensitive attitude is a lack of support for Putin. To judge the extent of any such sensitivity, we implemented two versions of the list experiment. The first version, which evaluates support for Putin alongside previous leaders of Russia or the Soviet Union, reads as follows:

Take a look at this list of politicians and tell me for how many you generally support their activities:

  • Joseph Stalin
  • Leonid Brezhnev
  • Boris Yeltsin
  • [Vladimir Putin]

Members of the control group receive the list without Putin, whereas members of the treatment group receive the list with Putin. Respondents view the list and manually circle a number from 0 to 4 on a card. The wording “support their activities” mirrors the direct question discussed above.

The second version of the list experiment places Putin alongside various contemporary political figures:

Take a look at this list of politicians and tell me for how many you generally support their activities:

  • Vladimir Zhirinovsky
  • Gennady Zyuganov
  • Sergei Mironov
  • [Vladimir Putin]

Although the basic research design is straightforward, there are a number of additional considerations, which we discuss below.


Baseline results

Respondents to Levada’s Vestnik have for years reported high levels of support for Vladimir Putin. The January and March 2015 waves of the survey are no exception: when asked directly, 86 percent of respondents in January and 88 percent in March reported that they supported the activities of Vladimir Putin.

Our list experiments also suggest a high level of support for Putin. The estimates from the four experiments (historical/contemporary, January/March) are in fact quite similar, ranging from 79 percent support in both rounds of the historical experiment to 81 percent support in the January contemporary experiment. Taken at face value, these estimates imply a small but not trivial degree of social desirability bias among respondents to the two surveys. Depending on the survey wave and experiment wording, estimates of support for Putin from the list experiments are five to nine percentage points lower than those from the corresponding direct question, with a high probability that the true value is greater than zero. Nonetheless, there are reasons to suspect that the levels of support implied by our list experiments are either over- or underestimates. We discuss each possibility in turn.

Floor effects

List experiments can fail to guarantee privacy if none or all of the items on the list apply to respondents in the treatment group—a situation referred to as floor and ceiling effects, respectively. Thus, for example, a member of the treatment group in the example above who indicated that she was angered by all four items could be identified as upset if a black family moved in next door. In our setting, in which not supporting Putin is potentially sensitive, floor effects are the primary concern: respondents in the treatment group who indicate that they support none of the figures on the list implicitly reveal that they do not support Putin.

Common advice for minimizing floor and ceiling effects is to include items on the control list whose incidence is negatively correlated, thereby ensuring that the typical count lies somewhere between none and all of the items on the list. Unfortunately, our analysis of responses to the direct questions presented above suggests that public approval of virtually any pair of political figures is positively correlated among Russian respondents. Further complicating the issue, many Russians appear unsupportive of any politician, with the possible exception of Vladimir Putin, whose support is the subject of this memo.

Lacking a design solution to the problem, we examine the potential consequences of floor effects in our setting. In the January survey, 33 percent of respondents indicate that they support precisely one of the four historical leaders on the treatment list, and 37 percent of respondents indicate that they support precisely one contemporary politician on the treatment list. If we were to very conservatively assume that all such respondents in fact support none of the political figures on the list, but are afraid of revealing that they do not support Putin and so indicate that they support one, then estimated support for Putin would drop to 47 and 44 percent, respectively. Similar results apply to the March experiment.

These sharp lower bounds on support for Putin assume not only that there are no individuals in the treatment group who support Putin but none of the other figures on our historical and contemporary lists, but also that there are no individuals who support precisely one of the figures on the control list but not Putin. Although we cannot directly test these assumptions, neither is terribly plausible. To the extent that they do not hold, support for Putin will be higher than the lower bounds derived above.
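
The conservative bound described above reduces to simple subtraction, sketched below. The function name is ours, and the inputs are the rounded January figures from the text; the memo's exact bounds of 47 and 44 percent reflect the unrounded survey data.

```python
# Worst-case floor-effect bound: assume every treatment-group
# respondent who reports exactly one supported figure in fact
# supports none, inflating the count to avoid revealing disapproval
# of Putin. Each such respondent's true count is one lower, so the
# treatment-group mean, and hence the difference-in-means estimate,
# falls by their share.

def floor_effect_lower_bound(baseline_estimate, share_exactly_one):
    return baseline_estimate - share_exactly_one

print(floor_effect_lower_bound(0.79, 0.33))  # historical list, January
print(floor_effect_lower_bound(0.81, 0.37))  # contemporary list, January
```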

Artificial deflation

Potentially working against any floor effects is the possibility of artificial deflation, in which “estimates are biased due to the different list lengths provided to control and treatment groups rather than due to the substance of the treatment items.” In our setting, artificial deflation could arise if the inclusion of Putin provides a strong contrast that reduces the attractiveness of other figures on the list, such that (for example) respondents underreport support for Sergei Mironov when listed alongside Vladimir Zhirinovsky, Gennady Zyuganov, and Vladimir Putin (the treatment condition) but not when listed alongside only the first two figures (the control condition). Such an effect would reduce our estimate of support for Putin from the list experiment and thus increase our estimate of social desirability bias.

To identify possible bias resulting from artificial deflation, we included two “placebo” experiments in the March survey. The idea in each case is to examine the incidence of artificial deflation by including a “potentially sensitive” item that is not in fact sensitive, thus isolating deflation from the effect of social desirability bias and allowing comparison with direct questions about the same items. In the first placebo experiment, we retain the focus on political figures but present a list of non-Russian leaders:

Take a look at this list of politicians and tell me for how many you generally support their activities:

  • Alexander Lukashenko
  • Angela Merkel
  • Nelson Mandela
  • [Fidel Castro]

We assume that respondents will be willing to reveal their support for Fidel Castro, the “potentially sensitive” item in the list experiment, when questioned directly: he is well known in Russia but has little connection to contemporary political debates. In the second placebo experiment, we present a list of respondent characteristics, which we can verify directly from responses to the standard battery of demographic questions:

Take a look at this list of characteristics and tell me how many apply to you:

  • Male
  • Female
  • Married
  • [Over 55]

Examining each of the placebo experiments in turn, we find that the difference in estimated support for Castro between the direct question (60 percent) and the list experiment (51 percent) is nearly identical to that for Putin—this despite the fact that social desirability bias is unlikely to inflate support for Castro when respondents are asked directly. In contrast, we find no evidence of deflation in the “over 55” experiment: both in the direct question and the list experiment, approximately 28 percent of respondents are estimated to be over 55. These results suggest that there may be something distinctive about list experiments that are used to gauge support for political figures.

Net Effects

One way of thinking about the offsetting impact of floor effects and artificial deflation is to add the estimate of artificial deflation from the Castro experiment (nine percentage points, assuming that none of the difference between the direct question and the list experiment is due to social desirability bias) to the lower-bound estimate of support for Putin under maximal floor effects (roughly 45 percent). This puts support for Putin in the mid-50s. At this point, the question is how many of the 33-37 percent of treatment-group respondents who say they support precisely one political figure are telling the truth. Although this question warrants further analysis, it seems implausible that most of these respondents in fact support nobody but are lying to hide their dissatisfaction with Putin. Indeed, we cannot exclude the possibility that Putin is as popular as implied by responses to the direct question.
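
The back-of-the-envelope adjustment just described amounts to adding two numbers; a minimal sketch, with the memo's rounded figures as inputs:

```python
# Net-effect calculation: add the artificial-deflation estimate from
# the Castro placebo (nine points, attributing none of that gap to
# social desirability bias) to the floor-effect lower bound on
# support for Putin (roughly 45 percent).

deflation_correction = 0.09   # Castro: 60% direct vs. 51% list
lower_bound_support = 0.45    # maximal floor-effect assumption

net_estimate = lower_bound_support + deflation_correction
print(f"{net_estimate:.0%}")  # prints "54%", i.e., the mid-50s
```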


Conclusion

Estimates from our list experiments suggest support for Putin of approximately 80 percent, which is within ten percentage points of that implied by direct questioning. Various considerations suggest that this figure may be either an over- or underestimate, but in any event, the bulk of Putin’s support typically found in opinion polls appears to be genuine. Of course, respondents’ opinions may still be shaped by pro-Kremlin bias in the media and other efforts to boost support for the president, but our results suggest that Russian citizens are by and large willing to reveal their “true” attitudes to pollsters when asked.

Timothy Frye is the Marshall D. Shulman Professor of Post-Soviet Foreign Policy at Columbia University. Scott Gehlbach is Professor of Political Science and Romnes Faculty Fellow at the University of Wisconsin–Madison. Kyle Marquardt is Postdoctoral Research Fellow at the Varieties of Democracy (V-Dem) Institute at the University of Gothenburg. Ora John Reuter is Assistant Professor of Political Science and Senior Researcher at the University of Wisconsin–Milwaukee.



For further reading:

Joshua Tucker, "Why we should be confident that Putin is genuinely popular in Russia," The Monkey Cage/The Washington Post, November 24, 2015.

