
Panel minutes for 4 September 2025

Minutes of the Digital Advisory Panel (DAP) Meeting

4 September 2025, 13:00 to 14:00. Virtual meeting via Microsoft Teams.

Panel Members:

  • Christian McMahon (Chair)
  • Paul Smith
  • Darren Williams

In attendance:

  • REDACTED
  • Helen Child (Director of Governance)

Apologies:

  • Francine Bennett

1. Welcome, apologies and declarations of interest

Attendees were welcomed to the meeting. Christian was welcomed in his role as the new chair of DAP.

There were no new declarations of interest.

2. Minutes of previous meetings

The minutes of the last meeting were approved.

The actions from the previous meeting were reviewed and confirmed as complete.

3. AI Use Cases

The Gambling Commission’s (GC) Data team are exploring potential use cases for AI across wider areas of work. The Head of Governance will lead on the governance processes for signing off the use of AI. The Data team want to identify suitable applications so that the GC can test its governance processes and establish where the use of AI will add most value. AI has the potential to make some areas of work quicker and more cost-effective. DAP members provided the following feedback:

  1. It is important to consider which metrics to use; costs and the volume of manual queries currently received are good, quantifiable metrics. The GC should look at what systems operators are using for internal-facing compliance. The Data team confirmed that at this stage the GC is focusing on foundational steps before exploring more advanced applications.
  2. A DAP member highlighted the importance of using categories and considering the human element in how people use AI prompts: if you ask the wrong question, answers can be misleading, or you may get only part of the picture. Somebody with less experience may not be able to question AI correctly, so extensive user testing would be needed to reduce this risk. For example, when asking about Research and Statistics (R&S) there must be some context; the nuance provided in person is not available via AI. One solution is to pre-run tests with the right format and build in key data that responds to certain prompts. SQL or Python coding will be required to establish links and relationships between the work being requested by the user. Error-rate testing must be an important part of the process. Microsoft Copilot could be used in testing (it can link into Power BI and so on, and manage security), and many companies have taken this approach. It is not the best method to use long-term but is a good soft test for some of these use cases.
  3. Context is imperative. AI will not effectively link complex concepts, data and documents; advanced techniques involving knowledge graphs or agent-based approaches are required. A starting point for small pilots could be the use of Microsoft Copilot within the GC with documents, but any advanced work requires a bigger implementation project.
  4. Different use cases are easier when isolated, with controlled access to a limited set of documents; this is achievable with standard infrastructure. The complication arises when the context and quality of information matter more, as higher risk is attached. Quality ratings could be attached to some papers, which would help provide context.
  5. DAP members added that it would be necessary to label documents and then train colleagues to use specific words to increase the quality of the context. Each system prompt would need to be tailored in line with the GC’s policies, but this is not too complex if the right people are working on it. Members were reminded that meeting papers were not the architecture for this work.
  6. DAP members continued on the topic of chatbots: tone and context are among the biggest challenges with AI, and it is sometimes easier to be restrictive in order to route users to an agent. Well-understood inputs produce better outputs. Getting people to use Copilot gives them that ‘thought process’ and gets them into the mindset of asking questions. It is right for the GC to look at risks early on, and there needs to be an option for people to get through to an agent.
  7. REDACTED
  8. DAP members advised the GC to be aware that in certain third-party tools, AI has been built in to provide access and coverage across multiple social media platforms. Social media chatbot access is something for the GC to think about.
  9. The GC are looking at AI through different lenses at present and want to create something specific; the discussion on platforms will help.
  10. It was suggested that the GC could explore other kinds of structured outputs, for example investigating illegal gambling sites and what is known about them. The GC could pick from a large number of Large Language Models (LLMs), have one perform an up-to-date website search, specify the URLs (domain names) people are directed to, compile the results into a large table, and train people how to use the prompts.
  11. AI is improving at undertaking sentiment analysis.
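The sentiment-analysis point above can be illustrated with a minimal lexicon-based sketch in Python. This is purely illustrative: the word lists and scoring rule are assumptions for the sketch, not a method discussed by the Panel, and production systems would use trained models rather than hand-written lexicons.

```python
# Minimal lexicon-based sentiment scorer.
# Illustrative only: the word lists and scoring rule are assumptions,
# not a method discussed by the Panel.
POSITIVE = {"good", "great", "helpful", "clear", "useful"}
NEGATIVE = {"bad", "poor", "confusing", "misleading", "slow"}


def sentiment(text: str) -> str:
    """Return 'positive', 'negative' or 'neutral' for a short text."""
    words = text.lower().split()
    # Count lexicon hits; the difference decides the label.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A lexicon approach is transparent and easy to audit, which matters in a regulated setting, but it misses negation and nuance; that trade-off is exactly why error-rate testing, as noted above, must be part of the process.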

4. Recruitment update

The GC is working on the recruitment of three new DAP members to replace those who have left recently.

5. AOB

The Head of Governance advised DAP members that she has a paper going to Board on 20 November which includes prioritisation of DAP recruitment.

Summary of the Meeting Actions

No actions noted.
