Gambling Survey for Great Britain: Official statistics
Published: 29 February 2024
Last updated: 2 October 2025
This version was printed or saved on: 11 October 2025
Online version: https://www.gamblingcommission.gov.uk/report/gambling-survey-for-great-britain-technical-report
This technical report provides detail on the background and methodology for the Gambling Survey for Great Britain (GSGB). Detail on the issued sample size, response and weighting strategy for each wave of fieldwork is provided in the wave-specific reports. Detail of the online and postal questionnaires used are provided alongside the annual reports.
Data collection for the GSGB official statistics started in July 2023 and 2 annual reports have now been published:
There was an extensive development period leading up to the start of the official statistics data collection in July 2023. This consisted of a pilot and an experimental stage. Findings from the pilot were published in May 2022 and are reported in Participation and Prevalence: Pilot methodology review report. 2 reports on findings from the experimental phase have been published. The first report, covering the first 2 steps, was published in April 2023: Gambling participation and the prevalence of problem gambling survey: Experimental statistics stage report. A further report, covering the final step, was published in November 2023: Gambling participation and the prevalence of problem gambling survey: Final experimental statistics stage (Step 3).
The aims of the GSGB are to:
The GSGB uses what is known as a push-to-web approach, in which people are first encouraged to take part online, completing a web questionnaire. Those who do not initially take part online are subsequently offered an alternative means of participation. In the GSGB this alternative is a paper questionnaire, sent by post. Both data collection modes provide privacy for respondents to answer questions honestly, and by offering an alternative, the survey can include people who are not online or who do not feel willing or able to go online to take part. This helps improve the accuracy and representativeness of the survey. Moreover, some gambling behaviours, notably the propensity to gamble online, are correlated with the propensity to take part in online surveys, which can bias results1. The importance of offering the alternative paper questionnaire was highlighted in the pilot report: 43 percent of participants completed the survey by this mode, and those completing the paper questionnaire were more likely to be older and to be lottery-only players.
Inviting people to take part in the GSGB involved randomly selecting addresses within Great Britain, known as random probability sampling. This approach is discussed further in the next section.
1Sturgis, P., & Kuha, J. (2022). How survey mode affects estimates of the prevalence of gambling harm: a multisurvey study. Public Health, 204, 63-69.
Table 1 outlines the issued sample and overall response for the GSGB in Year 1 and Year 2. The address-level response rate (19 percent) was lower than the target (22 percent). To mitigate the lower response rate, boost samples were included within the Year 2 sample to increase the overall productive sample size. More detail on response can be found in the ‘A’ tables accompanying each annual report. Note that Year 1 comprised 6 months of fieldwork only.
Survey year | Issued sample addresses (number) | Productive addresses response (number) | Productive addresses response (percentage) | Productive participants (number) |
---|---|---|---|---|
1 (31 July 2023 to 19 February 2024) | 37,554 | 6,636 | 19% | 9,742 |
2 (15 January 2024 to 19 January 2025) | 78,866 | 13,489 | 19% | 19,714 |
The proportions of productive participants by sex and age are outlined in Table 2 and Table 3 respectively. More detail on individual response by sex and age can be found in Table A.3 accompanying each annual report. Table A.3 also includes further information on the age distribution of the GSGB sample compared to the population. The GSGB sample is typically older than the general population aged 18 and over.
Survey year | Productive participants male (percentage) | Productive participants female (percentage) |
---|---|---|
1 (31 July 2023 to 19 February 2024) | 44% | 56% |
2 (15 January 2024 to 19 January 2025) | 44% | 56% |
Survey year | Productive participants: 18 to 34 years (percentage) | Productive participants: 35 to 54 years (percentage) | Productive participants: 55 to 74 years (percentage) | Productive participants: 75 years and over (percentage) |
---|---|---|---|---|
1 (31 July 2023 to 19 February 2024) | 21% | 31% | 35% | 13% |
2 (15 January 2024 to 19 January 2025) | 22% | 31% | 36% | 12% |
Table 4 shows response rates by mode of completion for Year 1 and Year 2. More detail on mode response can be found in the ‘A’ tables accompanying each annual report.
Survey year | Productive participants: Online completions (percentage) | Productive participants: Postal completions (percentage) |
---|---|---|
1 (31 July 2023 to 19 February 2024) | 65% | 35% |
2 (15 January 2024 to 19 January 2025) | 66% | 34% |
A high-quality sample is essential for ensuring a nationally representative survey, capable of producing robust population estimates. To achieve this, a stratified random probability sample of addresses in Great Britain (GB) was used. The target population of the survey was adults aged 18 years and over, living in private households within GB.
There is no publicly available list of the GB adult population that can be used for sampling individuals. Instead, like many national surveys, the Postcode Address File (PAF) was used. The PAF is compiled by Royal Mail and lists postal addresses (or postcode delivery points) in the United Kingdom.
Getting from a list of addresses to a selection of adults within them involves a 2-stage selection process:
Prior to selection, the sample frame was ordered: this can help to reduce sampling error and increase the precision of estimates, as well as ensuring representativeness with respect to the measures used. The stratification measures, in order, were: country and English region; population density at local authority level; and overall Index of Multiple Deprivation (IMD) score2.
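The ordering-and-selection approach described above can be sketched as a systematic sample drawn from the sorted frame. This is an illustrative simplification only: the field names and selection mechanics below are assumptions, not the survey's actual implementation.

```python
import random

def systematic_sample(frame, n):
    """Select n addresses from a stratification-ordered frame.

    Sorting by the stratifiers before taking records at a fixed
    interval spreads the sample proportionately across strata
    (known as implicit stratification).
    """
    # Order the frame by the GSGB stratifiers: country/region,
    # population density, then IMD score (field names hypothetical).
    ordered = sorted(frame, key=lambda a: (a["region"], a["density"], a["imd"]))
    interval = len(ordered) / n
    start = random.uniform(0, interval)  # random start point
    return [ordered[int(start + i * interval)] for i in range(n)]
```

Because the frame is sorted before selection, every stratum contributes addresses roughly in proportion to its size, which is what reduces sampling error relative to an unordered draw.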
At each sampled address, there may have been more than 1 dwelling and/or household. However, a random selection of households is very difficult to operationalise without an interviewer and there was no control over which household opened the invitation letter. As a result, in multi-occupied addresses, no formal household selection took place and the selection of which household took part was left to chance (that is, whichever household opened the letter). The overall proportion of multi-occupied addresses for PAF samples is very small (around 1 percent), and it is therefore unlikely to lead to any systematic error (known as bias) in the responding sample.
At each address, up to 2 adults (aged 18 years and over) could take part. If the household contained 3 or more adults, the instruction was that the 2 adults with the most recent birthday should complete questionnaires.
Asking a set number of adults (in the case of this survey, 2) rather than all adults from each address to complete the survey is a well-established approach for push-to-web surveys in GB3. Most residential addresses (85 percent) contain either 1 or 2 adults, meaning that exclusion of additional adults should not introduce any notable error (known as selection bias). Under this approach, it is estimated that 93 percent of the sample would also have been selected had a fully random selection of adults been used.
While this approach leads to a degree of within-household clustering, the effect is lower than if all adults per household were eligible, though higher than if just 1 adult per household was selected. Moreover, the slight inefficiency at this stage is outweighed by the higher number of productive cases achieved by asking up to 2 adults from each address to complete the survey instead of only 1.
As recommended in the Sturgis (2024) report, NatCen has continued to monitor best practice developments in the area of within-household selection of adults in push-to-web surveys, and still considers this the best way to select adults within households.
As a push-to-web survey using a PAF sample, the GSGB is reliant on sending invitation letters to prospective participants. The following participant engagement strategy was used; each item was sent to selected addresses in the post:
The letter also contained 2 Quick Response (QR) codes which provided an alternative method for accessing the online questionnaire. This approach was tested in the experimental phase and proved to be successful in securing target response rates. Instructions on what to do if more than 2 adults lived in the household were also included in the letter. Addresses in Wales received the letter in both Welsh and English.
The invitation letter and reminders were the main levers to convince people, including those who did not gamble, to take part. Envelopes sent to participants were branded with the HM Government logo. Additionally, all letters were carefully designed following evidence-informed participant engagement guidance for online surveys published by the Office for National Statistics (ONS) Participant engagement for push-to-web social surveys – Government Analysis Function (opens in new tab).
Experience shows that most people complete a survey within a few days of receiving the request. The time between each mailing was therefore kept as short as possible, to ensure that the request was fresh in people’s minds. A gap of around 10 days between mailings was introduced, to allow removal of responding participants from the sample before the reminders were sent. The day of the week of the mailing was varied to allow for the fact that different people may have time for survey participation on different days of the week.
A study website, freephone number and dedicated email address were set up for participants to contact with issues or queries. The use of monetary incentives in surveys has been proven to increase response rates4. A £10 completion incentive per individual questionnaire was offered. This took the form of a Love2Shop voucher. Those who responded online were emailed a Love2Shop voucher code. Those who completed the postal questionnaire received a physical Love2Shop voucher by post5.
The aim is to achieve a sample size of 5,000 completed individual questionnaires per wave. To ensure a spread of completions throughout the data collection period, the sample for each wave is divided into 2 batches and issued at equal intervals (with minimal overlap between batches and waves).
Table 5 outlines the fieldwork dates for each wave within each survey year (the first date for each wave refers to when invitation letters were posted; the latter date refers to the final date returned postal questionnaires were accepted).
Survey year | Wave 1 | Wave 2 | Wave 3 | Wave 4 |
---|---|---|---|---|
1 | 31 July 2023 to 16 November 2023 | 6 November 2023 to 7 March 2024 | N/A | N/A |
2 | 15 January 2024 to 28 April 2024 | 8 April 2024 to 21 July 2024 | 1 July 2024 to 13 October 2024 | 23 September 2024 to 19 January 2025 |
The postal questionnaire was designed to be as comparable as possible to the online questionnaire. This approach was taken to minimise the risk of differences arising from the visual presentation of the 2 questionnaires, which could lead to differences in the ways in which questions were understood and answered (known as measurement differences).
Some differences between the 2 questionnaires remain. The online questionnaire includes complex routing and dynamic adjustment of question wording that reflects the participant’s answers to earlier questions. This cannot be replicated in the postal questionnaire. Moreover, to design a postal questionnaire that participants would find straightforward to complete within the required page limit, some questions asked in the online questionnaire are omitted from the postal version.
The questionnaires contain core and modular content. The core content is asked every wave and includes some of the official statistics measures. Modular questions are included in the online questionnaire and asked on a rotating basis as required by the Commission; they include topical questions or those related to the development of specific policies.
Core content includes:
Modular content covers (online questionnaire only), but is not limited to:
Demographic information captured:
Data was collected from 2 sources: an online questionnaire and a postal questionnaire. The online questionnaire data in its raw form were available immediately to the research team. However, the postal questionnaire data had to be manually recorded as part of a separate process.
The online questionnaire was designed to require minimal editing, with built-in routing and checks. The postal questionnaire relied on correct navigation by participants and there was no constraint on the answers they could give. As a result, these responses could include errors, so the data were manually edited. These edits included ensuring that single-answer questions had only 1 answer option selected, that questionnaire routing was followed, and that answers were realistic and consistent.
A small number of questions within both questionnaires allowed participants to write in a response if none of the existing answer options applied to them. These were back-coded into existing categories or remained coded as ‘other’.
Post-fieldwork validation was carried out. This included checks that variables from the 2 data collection modes had merged correctly into 1 dataset. As up to 2 adults per household could answer demographic questions relating to the whole household (for example, household size and information about income), there was potential for differing responses between individuals. The following rules for harmonising household responses were followed, in priority order:
A further step involved identifying and removing duplicate responses. For this, questionnaires were checked to see if responses to up to 2 questionnaires were very likely to be from the same individual in a household (based on exact matches for the age, date of birth, sex and name provided). Suspected duplicates were removed so that only 1 completed questionnaire from that individual was retained.
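The duplicate check can be sketched as follows. This is an illustrative simplification: the matching fields follow the description above, but the record layout and field names are assumptions.

```python
def remove_duplicates(records):
    """Keep 1 record per exact match on the identifying fields.

    Two questionnaires from the same household with identical age,
    date of birth, sex and name are treated as a suspected duplicate,
    and only the first completed questionnaire is retained.
    """
    seen = set()
    kept = []
    for rec in records:
        key = (rec["household_id"], rec["age"], rec["dob"],
               rec["sex"], rec["name"])
        if key in seen:
            continue  # suspected duplicate: drop
        seen.add(key)
        kept.append(rec)
    return kept
```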
Where a household had more than 2 records, any extra cases were removed according to the following rules:
‘Speeders’ (individuals who completed the online questionnaire in an unrealistic amount of time for them to have properly engaged with the questions) were identified and removed from the dataset9.
It is also important to validate GSGB data against other surveys and industry data. This process includes research into reasons why there are discrepancies between the GSGB estimates and those from other surveys (see section on Caveats for interpreting estimates generated by the Problem Gambling Severity Index (PGSI)). To enhance this process, we have added a new question to the Year 3 survey (data collection throughout 2025) to ask participants if they have registered themselves with GamStop in the past 4 weeks, which will facilitate validation against industry data related to GamStop. We also added a new question which was designed in collaboration with the Bingo Association to ask people where they play bingo, to help validate GSGB data against industry data on the number of people playing bingo in bingo clubs. In its review of the GSGB, the Office for Statistics Regulation (OSR) also recommended investigating the coherence and comparability of GSGB data with other relevant data, such as the Adult Psychiatric Morbidity Survey (published June 2025) and the Health Survey for England, due for publication at the end of 2025.
The data were then weighted to allow for comparisons with other data sources. Each publication is accompanied by a wave specific technical report, which outlines the weighting strategy used. Further details can be found alongside each publication in the latest Gambling Survey for Great Britain publications.
Table 6 provides further technical detail on each year’s overall annual weights.
Survey year | Productive sample (number) | Design effect (number) | Effective sample size (number) | Efficiency (percentage) |
---|---|---|---|---|
1 (July 2023 to February 2024) | 9,742 | 1.25 | 7,820 | 80% |
2 (January 2024 to January 2025) | 19,714 | 1.26 | 15,600 | 79% |
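The figures in Table 6 are linked by the standard (Kish) weighting formulas: the design effect is computed from the weights, the effective sample size is the productive sample divided by the design effect, and the efficiency is the reciprocal of the design effect. A minimal sketch (illustrative only; published figures are rounded):

```python
def weighting_stats(weights):
    """Kish design effect, effective sample size and efficiency.

    deff        = n * sum(w^2) / (sum(w))^2
    effective n = n / deff
    efficiency  = 1 / deff
    """
    n = len(weights)
    deff = n * sum(w * w for w in weights) / sum(weights) ** 2
    return deff, n / deff, 1 / deff
```

For example, weights of 1 and 3 give a design effect of 1.25 and an efficiency of 80 percent, matching the Year 1 row of Table 6 in form: 9,742 productive cases divided by a design effect of about 1.25 yields roughly the 7,820 effective cases reported.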
2 annual weights are produced for each survey year: an overall annual weight and an online-only weight. The online-only weight should be used only when analysing data collected solely via the online questionnaire; all other analysis should be carried out using the overall weight.
A number of rigorous quality assurance processes were utilised when preparing the survey data. This included checks on the survey data carried out by NatCen data managers, such as those included in the data validation above (removing duplicates and speeders and harmonisation) as well as identifying outliers and creating and double-checking derived variables used for reporting.
The survey weights were created and checked by NatCen statisticians, with 2 statisticians working together so that one would produce the weights and a second would check them.
Tables used within the report were run using NatCen’s standard tables syntax that has been developed and refined over several years. The tables syntax and outputs were run twice and cross-checked by a second member of the research team to ensure the correct input variables had been used. The excel tables were also checked by 2 researchers to ensure that bases were correct, and table information was accurate and reader friendly.
On receipt of the data, the Commission also carries out its own quality assurance of the data.
If an error is spotted in the published data, it is reported to the Commission straight away and a review of processes is carried out to understand why the error happened. The data is corrected, resupplied to the Commission and published in line with its Revisions and corrections policy.
Data is deposited at UK Data Service (opens in new tab) after each annual publication.
2Indices of multiple deprivation (IMD) is a measure of relative deprivation for small, fixed geographic areas of the UK. Separate indices are produced for each UK country. IMD classifies these areas into 5 quintiles based on relative disadvantage, with quintile 1 being the most deprived and quintile 5 being the least deprived.
3In the Experimental Phase, the effect on data quality and selection bias of inviting a maximum of 2 or a maximum of 4 adults from each household to take part in the survey was investigated. There was no discernible effect of experimental condition on household response rates, duplication or gambling participation rates. There was, however, evidence of significant clustering of gambling behaviours among households with 3 or 4 participants. As this can affect the accuracy of the gambling participation data, the recommendation was to invite up to 2 adults per household to take part going forward.
4See for example: Church, A. (1993). Estimating the effect of incentives on mail survey response rates: A meta-analysis. Public Opinion Quarterly, 57, 62-79; Mercer, A., Caporaso, A., Cantor, D. and Townsend, R. (2015). How Much Gets You How Much? Monetary Incentives and Response Rates in Household Surveys. Public Opinion Quarterly, 79(1), 105-129; Jia, P., Furuya-Kanamori, L., Qin, Z.-S., Jia, P.-Y. and Xu, C. (2021). Association between response rates and monetary incentives in sample study: a systematic review and meta-analysis. Postgraduate Medical Journal, 97(1150), 501-510.
5 Love2Shop vouchers cannot be exchanged for cash and cannot be used for gambling, so do not pose ethical problems for this survey.
6The PGSI consists of 9 items and each item is assessed on a four-point scale: never, sometimes, most of the time, almost always. Responses to each item are given the following scores: never = 0, sometimes = 1, most of the time = 2, almost always = 3. When scores to each item are added up, a total score ranging from 0 to 27 is possible. See Problem gambling screens (gamblingcommission.gov.uk) for full detail.
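The scoring described in this footnote, together with the score bands used in GSGB reporting (0, 1 to 2, 3 to 7, 8 or higher), can be sketched as follows. This is an illustrative implementation, not the survey's production code.

```python
# Response options for each of the 9 PGSI items and their scores.
PGSI_SCORES = {"never": 0, "sometimes": 1, "most of the time": 2,
               "almost always": 3}

def pgsi_score(answers):
    """Total PGSI score from responses to the 9 items (range 0 to 27)."""
    assert len(answers) == 9, "the PGSI has exactly 9 items"
    return sum(PGSI_SCORES[a] for a in answers)

def pgsi_group(score):
    """Score banding used in GSGB reporting."""
    if score == 0:
        return "0"
    if score <= 2:
        return "1 to 2"
    if score <= 7:
        return "3 to 7"
    return "8 or higher"
```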
7The Commission is conducting work to develop and test a series of survey questions aimed at collecting data on the experience of consequences from gambling. Full detail on the work undertaken to date and the next steps can be found at Statistics and research series (gamblingcommission.gov.uk)
8Demographic questions align to the GSS harmonisation strategy which promotes consistent definitions and question wording in data collection.
9Speeders are identified by calculating the median time it took to answer each question among all those who answered. From this an expected time is calculated for each participant dependent on the questions that they answered. A ratio of actual time compared with expected time is produced and any statistical outliers on this ratio measure are removed.
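The speeder check described in this footnote can be sketched as follows. The fixed ratio cut-off used here is a hypothetical simplification: the survey identified statistical outliers on the ratio measure rather than applying a fixed threshold.

```python
import statistics

def flag_speeders(timings, threshold=0.3):
    """Flag participants whose completion time is implausibly short.

    timings: {participant: {question: seconds}}. Expected time for
    each participant is the sum of the median per-question times over
    the questions they answered; those with a low actual-to-expected
    ratio are flagged.
    """
    # Median time per question across everyone who answered it.
    per_question = {}
    for answers in timings.values():
        for q, t in answers.items():
            per_question.setdefault(q, []).append(t)
    medians = {q: statistics.median(ts) for q, ts in per_question.items()}

    flagged = []
    for person, answers in timings.items():
        expected = sum(medians[q] for q in answers)
        actual = sum(answers.values())
        if actual / expected < threshold:  # illustrative cut-off
            flagged.append(person)
    return flagged
```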
The Gambling Survey for Great Britain (GSGB), in common with other surveys, collects information from a sample of the population. The sample is designed to represent the whole population of adults aged 18 years and over living in private households in Great Britain, as accurately as possible within practical constraints, such as time and cost. Consequently, statistics based on the survey are estimates, rather than precise figures, and are subject to a margin of error, also known as a 95 percent confidence interval. For example, the survey estimate might be 15 percent with a 95 percent confidence interval of 13 percent to 17 percent. A different sample might have given a different estimate, but it would be expected that the true value of the statistic in the population would be within the range given by the 95 percent confidence interval in 95 cases out of 100. Confidence intervals are affected by the size of the sample on which the estimate is based. Generally, the larger the sample, the smaller the confidence interval and the more precise the estimate.
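The confidence interval calculation described above can be illustrated with a short sketch. The sample size of 1,250 in the usage note is hypothetical, and real survey intervals also inflate the standard error by the design effect of the weighting, omitted here for simplicity.

```python
import math

def proportion_ci(p, n, z=1.96):
    """95 percent confidence interval for an estimated proportion.

    Normal approximation: p +/- z * sqrt(p * (1 - p) / n),
    where z = 1.96 for a 95 percent interval.
    """
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se
```

For example, an estimate of 15 percent based on a hypothetical 1,250 cases gives an interval of roughly 13 percent to 17 percent, matching the worked example above.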
Confidence intervals are quoted for key statistics within GSGB reports and presented within some tables. Where differences are commented on, there is the same degree of certainty that these differences are real, and not just within the margins of sampling error. Such differences can be described as statistically significant10.
All survey methodologies have strengths and limitations. These are summarised in this section.
The Commission’s information needs are consolidated into a single survey (rather than several surveys as previously) which ensures consistency and efficiency.
The collection of data on a rolling basis and the production of annual datasets and trends reduces the impact that seasonal events (such as the FIFA World Cup) may have on key variables (for example, gambling participation rates).
The survey has undergone a comprehensive development stage, led by experts in the field. Development included cognitive testing, pilot testing, experimental testing, and stakeholder engagement. The survey was also independently reviewed by Professor Sturgis (opens in new Tab).
The survey design and large, representative samples (per wave) allow the Commission to report on key results on a quarterly basis as well as to conduct more detailed analyses.
The push-to-web methodology is more cost effective when compared with face-to-face collection methods.
The methodology also allows increased numbers of people to be interviewed at relatively lower cost, something that is important for the analysis of the consequences from gambling.
A postal alternative to the online questionnaire enabled the recruitment of adults who may have been less technologically literate, may not have had access to the Internet, or preferred an alternative option, thereby increasing the representativeness of the sample.
The survey includes a broad range of content on gambling that is relevant to both those who gamble and those who do not gamble.
The self-administered data collection methods are likely to mitigate social desirability bias in responses to questions about sensitive topics. This is supported by recent evidence from Sturgis and others (2025)11, who found that when gambling impact questions were administered in an online self-completion format, PGSI scores were significantly higher than when the questions were asked by a telephone interviewer. This suggests respondents under-report undesirable behaviours when an interviewer is present.
As the survey is ‘gambling focused’, more detail can be collected about gambling behaviours than is possible in a more general survey, where the number of questions that can be included is limited. Importantly, the inclusion of a longer list of gambling activities on the GSGB does not appear to affect reported gambling participation or estimates of PGSI scores greater than 0 (Sturgis and others 2025)11.
With a push-to-web methodology, interviewers are not present to collect the data in person and accuracy of answers relies on participants understanding the questions asked and following the instructions.
Similarly, there is a risk that some participants (although a small proportion) will not follow the routing instructions correctly on the postal version of the questionnaire. To minimise the risk, the postal questionnaire was designed with simple routing instructions and further, routing errors were checked and corrected during the office-based data editing process.
Compared with face-to-face interviewing methods, remote data collection methods typically have lower response rates, meaning they are potentially more susceptible to non-response bias. However, response rates for face-to-face interviews are also declining meaning these studies are also subject to non-response bias12. Furthermore, survey methodologists have found that the correlation between response rate and non-response bias is considerably weaker than conventionally assumed (Groves and Peytcheva 2008; Sturgis and others 2017)13.
The GSGB is ‘gambling focused’ and disproportionately attracts those who gamble: explicitly mentioning gambling in the survey invitation did not affect the overall response rate but did lead to a 4 percentage point increase in reported gambling participation11. It may be that by mentioning gambling in the survey invitation, people who gamble become over-represented in the sample compared to their composition in the population. It should therefore be kept in mind that the survey may overestimate the true level of gambling in the population.
The GSGB sampling frame only covers adults living in private households, and therefore does not include those living in institutional settings (for example, large residential care homes, offender institutions and prisons), those in temporary housing or those sleeping rough. The Gambling Commission does, however, carry out other research through its Consumer Voice Programme to ensure all views are represented.
The GSGB produces estimates of those scoring 1 to 2, 3 to 7 and 8 or higher on the Problem Gambling Severity Index (PGSI). No survey methodology is perfect; different surveys measuring the same phenomena will provide different estimates because variations in survey design and administration can affect both who takes part and how people answer these questions. Until 2010, data on gambling was captured through the bespoke British Gambling Prevalence Survey (BGPS) series (conducted in 1999, 2007 and 2010). The BGPS was originally intended to run every 3 years, but its funding was cut in 2011. The Commission then sought different ways to capture information about gambling within available budgets.
Between 2012 and 2021, the primary method of measuring scores according to the PGSI (as well as a second measurement instrument, the DSM-IV) was through the Health Survey for England (HSE series) and the Scottish Health Survey. The GSGB picks up where the BGPS left off by being a bespoke gambling survey that captures a wide range of information about gambling across the whole of Great Britain. However, the methodology for the new GSGB differs from the BGPS and the health survey series in a number of ways, as it also does from the 2023 to 2024 Adult Psychiatric Morbidity Survey (APMS), which included the PGSI (Bennett and others 2025)14. The remainder of this section considers a range of issues, affecting all surveys, that may serve either to under-estimate or over-estimate the PGSI estimates.
Using the PAF as a sample frame is common on large-scale surveys, including the BGPS, the GSGB, the health survey series, and the Adult Psychiatric Morbidity Survey (APMS) series. This means that only those living in private households are eligible to be included in the survey. People living in student halls of residence, military barracks, hospitals, prisons and other institutions are excluded. Some of these populations may have higher rates of gambling and higher PGSI scores. All studies using the PAF as a sample frame inherit this source of bias.
This bias is founded on the idea that there are social norms that govern certain behaviours and attitudes, and that people may misrepresent themselves so as to appear to conform to these norms15. In the survey context, this misrepresentation may involve participants explicitly deciding to give false information or modifying the answer they had in mind. However, it can also involve participants giving information that they believe to be true but is in fact inaccurate16. It is a potential risk for all surveys that collect information on sensitive topics, including the health survey series, the GSGB and the APMS series. Sensitive topics include those that:
One strategy to reduce the risk of social desirability bias is to use self-completion methods. These methods include online and postal questionnaires, which are completed by the participant. Self-completion methods are used on the health survey series, the GSGB and the APMS series to collect information on gambling. However, the surveys differ in the way in which self-completion methods are used, which may affect resulting estimates. The health survey series is an interviewer-administered survey that includes a paper self-completion questionnaire to ask about gambling behaviour. This is typically completed by participants in the presence of the interviewer and potentially other household members, who also take part in the survey. APMS 2023 to 2024 was an interviewer-administered survey that used a computer-assisted self-completion (CASI) questionnaire to ask about gambling behaviour, typically completed by participants in the presence of an interviewer.
Sturgis and Kuha noted that it is possible that the presence of an interviewer or other household members might lead to underreporting of gambling in the self-completion questionnaire. Their analysis did not find a statistically significant difference in the proportion of people with a PGSI score of 1 or more within Health Survey for England (HSE) data, depending on whether other people were present at the time the gambling questions were being completed. However, subsequent analysis of HSE 2018 data conducted for the GSGB pilot, using multi-variate regression models, found that the odds of having a PGSI score of 1 or more were 1.5 times higher among those who did not have other household members present at the point of interview17. The authors concluded that the online methods of the GSGB may offer greater privacy to participants, and so reduce social desirability bias. This conclusion is supported by recent findings from experimental research11, which found that when questions were administered in an online self-completion format, PGSI scores were significantly higher than when the questions were asked by a telephone interviewer. The proportion of respondents scoring 1 or above was 4.4 percentage points higher, which represents an almost 50 percent increase, strongly suggesting that respondents under-report undesirable behaviours when an interviewer is present.
During the stakeholder engagement sessions conducted for the GSGB, those with lived experience of gambling harms stated that they would have been unlikely to participate in a survey when they were experiencing gambling difficulties. This was also highlighted by Sturgis in his review of the GSGB methods18. Supporting evidence comes from analysis of non-response in the 2007 and 2010 BGPS series. In 2007, Scholes and others demonstrated a strong relationship between the factors predicting household non-response and gambling frequency: area and household-level factors which predicted lower household response were associated with higher gambling frequency. This suggests that those households less likely to take part in surveys were more likely to contain people who frequently gamble19. Similar analysis conducted for the BGPS 2010 (reported in Wardle and others, 2014)20 demonstrated that households which required multiple attempts to contact, were reissued after multiple follow-up attempts, or were followed up by telephone interviewers after the face-to-face interviewer had been unable to make contact, were more likely to contain people who gambled.
This supports the notion that those who are very engaged in gambling may be less likely to take part in surveys overall, and this is likely to apply to all surveys. However, the health survey series and the GSGB are each likely subject to different selection biases.
It is notable that the 2023 to 2024 APMS estimated that 4.4 percent of adults had a PGSI score of 1 or above14, compared with 14.3 percent in the 2023 GSGB. Sturgis and others (2025)11 suggest that 5 to 6 percentage points of this difference can be accounted for by the different survey invitations and modes, and so recommend that the Gambling Commission benchmark the GSGB against the APMS.
The experience of so-called problem gambling is measured via a series of questions known as “screens”. Multiple different screens exist, and no screen is perfect. In the BGPS and health survey series, 2 different screening instruments have been used: the DSM-IV and the PGSI problem gambling screens.
Analysis of these screens shows that they capture different groups of people with potentially different types of problems. Orford and others (2010) suggested that the PGSI, especially among women, may underestimate certain forms of gambling harms which the DSM-IV is better suited to measure21. For this reason, the BGPS and health survey series always included both the DSM-IV and PGSI screens. The rates of those experiencing problem gambling reported by the PGSI are lower than those reported by the DSM-IV. Since the BGPS was developed, the PGSI has become one of the most widely used screens, particularly because it presents scores on a spectrum of severity. In addition, there is now greater focus on the wider range of consequences associated with gambling which are not captured by the PGSI or the DSM-IV. During consultation on the GSGB questionnaire content, stakeholders strongly suggested that it would be appropriate to include only one screen to measure the experience of problem gambling and to use additional questionnaire space to capture other important aspects of gambling experiences. As a result, the GSGB only includes the PGSI screen, which generates lower estimates of problem gambling than the DSM-IV.
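To illustrate how the PGSI presents scores on a spectrum of severity, the screen comprises 9 items, each scored from 0 (never) to 3 (almost always), giving a total of 0 to 27 which is then banded using the published thresholds. The sketch below uses those standard thresholds; the function names and structure are illustrative only and do not correspond to any GSGB dataset or code.

```python
# Illustrative sketch of PGSI scoring using the standard published thresholds.
# Function names are hypothetical, not part of the GSGB or its datasets.

def pgsi_total(item_responses):
    """Sum the 9 PGSI items, each scored 0 (never) to 3 (almost always)."""
    if len(item_responses) != 9:
        raise ValueError("The PGSI has exactly 9 items")
    if any(r not in (0, 1, 2, 3) for r in item_responses):
        raise ValueError("Each item is scored 0 to 3")
    return sum(item_responses)

def pgsi_band(total):
    """Map a total score (0 to 27) to the conventional severity bands."""
    if total == 0:
        return "non-problem gambling"
    if total <= 2:
        return "low risk"
    if total <= 7:
        return "moderate risk"
    return "problem gambling"  # score of 8 or more
```

A PGSI score of 1 or above, the cut-off used in the estimates cited in this section, therefore covers everyone in the low risk band or higher, while the narrower "problem gambling" classification requires a score of 8 or more.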
How surveys are presented to potential participants can influence who takes part. Williams and Volberg (2009)22 conducted an experiment presenting the same survey to potential participants but varying its description, introducing it either as a health and recreation survey or as a gambling survey. They found that rates of problem gambling were higher in the latter. This may be because people who gamble are more likely to take part in a gambling survey, as it is relevant to them. This is supported by the recent finding by Sturgis and others (2025)11 that mentioning gambling explicitly in the survey invitation did not affect the overall response rate but did lead to a 4 percentage point increase in reported gambling participation. The GSGB, which for ethical reasons uses an invitation letter informing people that the study is about gambling, is therefore likely subject to this selection bias compared with the health survey series and APMS, and it is not possible to conclude whether this makes the estimates more or less accurate.
In addition, the analyses conducted by Sturgis and Kuha and by Ashford and others (2022)17 detailed in the Participation and Prevalence: Pilot methodology review report, together with the recent experimental research by Sturgis and others (2025)11, found that those who completed the PGSI questions online had higher PGSI scores than those who completed the questions via an alternative mode.
In short, online surveys may overestimate the proportion of people who gamble online, which may in turn overestimate the consequences experienced from gambling because online and frequent gambling are independently associated with a higher probability of experiencing consequences from gambling.
However, evidence suggests that those experiencing consequences from gambling are less likely to take part in surveys overall and have poorer health outcomes. Given this, there is also the possibility that these people may be less likely to take part in a health-focused survey as suggested by recent findings from Sturgis and others (2025)11, which would also impact on the results obtained by health surveys.
10Statistical significance does not imply substantive importance; differences that are statistically significant are not necessarily meaningful or relevant.
11Sturgis, P., Kuha, J., How, S., & Maxineaunu, I. (2025). Three experiments on the causes of differences in estimates of gambling and gambling impacts in general population surveys.
12For example, in 2021, the Health Survey for England (HSE) household response rate was 32 percent compared with 60 percent in 2015 and 59 percent in 2018.
13Sturgis, P., Williams, J., Brunton-Smith, I. and Moore, J. (2017) ‘Fieldwork Effort, Response Rate, and the Distribution of Survey Outcomes: A Multilevel Meta-Analysis’, Public Opinion Quarterly, 81(2), pp. 523–542; Groves, R. M. and Peytcheva, E. (2008) ‘The Impact of Nonresponse Rates on Nonresponse Bias: A Meta-Analysis’, Public Opinion Quarterly, 72(2), pp. 167–189.
14Bennett, M., Spencer, S., Hill, S., Morris, S., McManus, S., & Wardle, H. (2025). Gambling behaviour. In Morris, S., Hill, S., Brugha, T., McManus, S. (Eds.), Adult Psychiatric Morbidity Survey: Survey of Mental Health and Wellbeing, England, 2023 to 2024. NHS England.
15Kreuter, F., Presser, S. and Tourangeau, R. (2008) ‘Social Desirability Bias in CATI, IVR, and Web Surveys: The Effects of Mode and Question Sensitivity’, Public Opinion Quarterly, 72(5), pp. 847–865.
16Tourangeau, R. and Yan, T. (2007) ‘Sensitive Questions in Surveys’, Psychological Bulletin, 133(5), pp. 859–883.
17Ashford, R., Bates, B., Bergli, C. and others (2022) Gambling participation and the prevalence of problem gambling survey: Pilot stage methodology review report. National Centre for Social Research: London.
18Sturgis, P. (2024) Assessment of the Gambling Survey for Great Britain (GSGB). London School of Economics and Political Science, London, UK. Available at: Assessment of the Gambling Survey for Great Britain (GSGB) - LSE Research Online (opens in new tab).
19Scholes, S., Wardle, H., Sproston, K., and others (2008) Understanding non-response to the British Gambling Prevalence Survey 2007. Technical Report. National Centre for Social Research, London.
20Wardle, H., Seabury, C., Ahmed, H. and others (2014) Gambling Behaviour in England and Scotland. Findings from the Health Survey for England 2012 and the Scottish Health Survey 2012. National Centre for Social Research: London. Available at: Gambling behaviour in England and Scotland - Findings from the Health Survey for England 2012 and Scottish Health Survey 2012 (opens in new tab).
21Orford, J., Wardle, H., Griffiths, M., Sproston, M., Erens, B. (2010) PGSI and DSM-IV in the 2007 British Gambling Prevalence Survey: reliability, item response, factor structure and inter-scale agreement, International Gambling Studies, 10:1, 31-44.
22Williams, R. J., & Volberg, R. A. (2009). Impact of survey description, administration format, and exclusionary criteria on population prevalence rates of problem gambling. International Gambling Studies, 9(2), 101–117.
It is important to have good user engagement with our statistics. It helps us to better understand who is using the GSGB statistics and what their needs are.
During the development of the GSGB, the Gambling Commission engaged with a number of stakeholder groups to keep them informed about the development of the new survey and to listen to their views. Since the survey moved to the official statistics phase, the Commission has transitioned these stakeholder engagement groups into a GSGB statistics user group.
Register to join the statistics user group.
To leave feedback about the GSGB, fill in our GSGB feedback form.
To get in touch directly please email statistics@gamblingcommission.gov.uk
To help users, the Commission has developed a set of guidance for anyone who wishes to use data from the GSGB, to ensure it is reported correctly. This could include policy makers, academics, the gambling industry, the media, members of the public and any other interested users.