The Online Safety Act for Games.

This is a comprehensive and detailed guide focused specifically on the Online Safety Act and Ofcom’s regulation as it relates to the gaming industry. I have gone through well over 1,000 pages of legislation from the UK Government and statements from Ofcom (the independent regulator appointed by the UK Government to enforce the OSA) to produce the following guide on how the Online Safety Act specifically impacts games.

As a note, I have read everything manually and have written this myself. There was no AI used in the production of this guide.

What is the Online Safety Act?

The Online Safety Act is a piece of UK legislation which impacts companies globally, as long as they meet certain conditions regarding users or usage within the UK.

The intent is to make the internet safer for users within the UK, specifically for children.

Alongside the legislation, there is also a new regulatory framework (enforced by Ofcom). There are two key areas of focus: “illegal content and activity”, and “content that is harmful to children”.

The Online Safety Act and Ofcom’s regulatory framework

The Online Safety Act is the legislation itself, but there are also key parts of Ofcom’s regulatory framework, which sits alongside it, that you need to be aware of.

There is a lot to unpack here, which is why I’ve read every statement and volume, as well as the whole of the Online Safety Act, so I can do my best to make you aware of the key points.

 

Are games impacted by the Online Safety Act?

The short answer:

Yes, because Ofcom says so, and because the legislation says so.
Ofcom specifically reference the gaming industry dozens of times across their statements; one example, from a single section of a single volume, is excerpted in the stakeholder feedback below.


The long and specific answer:

As per the legislation specifically:

Services that have users in the UK (1.1) need to be safe by design (1.3.a) and provide a higher standard of protection for children than for adults (1.3.b.i), whilst providing transparency and accountability in relation to those services (1.3.b.iii).

Games are user-to-user internet services. They are an internet service (“a service that is made available by means of the internet” (228.1)) by means of which content (see definitions) that is generated directly on the service by a user may be encountered by another user or users of the service (3.1), specifically in regards to users being able to communicate and send messages to each other (55.3). You are a “regulated user-to-user service” if your game has links with the United Kingdom (4.2.a), which means that users from the United Kingdom are a target market for your game, children from the United Kingdom are likely to play your game, your game is capable of being used in the United Kingdom by children (4.6.a), or your game might be found appealing by children within the UK.

Additionally, here are the key definitions from the Online Safety Act (the actual legislation). These are important to understand, or at least be aware of.

Further supporting that games are in scope, here is feedback from the gaming industry and Ofcom’s responses, taken from the consultations on the Online Safety Act and Ofcom’s regulatory framework:

Link to Ofcom’s statement “Volume 2: Protecting children from harms online, the causes and impacts of online harms to children”.

Gaming services - Summary of stakeholder feedback
4.105 Stakeholders discussed our assessment of risks associated with gaming services:

UK Interactive Entertainment (Ukie) suggested that, unlike social media platforms, the controlled environment of gaming platforms reduces the risk of children encountering harmful content.90 [] also emphasised the importance of proportionality and taking a service-specific approach to risks associated with gaming platforms.

[] suggested that our presentation of risks associated with violent content, bullying content and abuse and hate content on gaming platforms is “misleading” and would disproportionately burden the “diverse and dynamic” gaming industry.

4.106 [] challenged our conflation of gaming services with games-adjacent communications platforms.

Our decision (Ofcom)
4.107 We have assessed our presentation of gaming services in the Children’s Register and consider that we have presented a balanced discussion of the relevant risks, based on the available evidence. We recognise that risks will differ depending on the design of specific gaming services, which we expect providers to reflect in their risk assessments. More detail on the process that services should follow in conducting their risks assessments is set out in the Children’s Risk Assessment Guidance.

We note the distinction between gaming services, which allow users to interact within partially or fully simulated virtual environments, and gaming-adjacent services, where users are able to stream and chat about games. We have, therefore, updated Section 5 of Children’s Register (Abuse and hate content) to include the term ‘gaming-adjacent services’ to more accurately represent the examples provided.

Additionally, Ofcom has conducted research into children and their access to video games (see “Children’s online behaviours and risk of harm” within the Children’s Register of Risks), and found:

  • (1.38) More than 80% of children aged 13 and over play online video games.

  • (1.39) 7-18 year olds spend on average 3.4 hours a day online.

  • (1.45b) Many children play online games which may bring them into contact with strangers, including adults.

    • Three quarters (75%) of children aged 8-17 game online,

    • 25% play with people they do not know outside the game.

    • Additionally, 24% chat to people through the game who they do not know outside of it.

    • When prompted, 62% of parents whose 3-17-year-old played games online expressed concern about their child talking to strangers while gaming (either within the game or via a chat function) and 54% were concerned that their child might be bullied.

In summary - are games impacted by the OSA:

Yes, games are regulated user-to-user internet services (also known as Part 3 services), and need to be compliant.
There is no way you can possibly argue that games/gaming services are not in scope or don’t need to make changes to be compliant.

Ofcom don’t care if you think it’s unfair or not relevant to games.
From their research and stats, they have determined that such a massive proportion of children have access to and play games, that the gaming industry 100% needs to be regulated to protect children from encountering illegal or harmful content.

They think it is fair and relevant, and they will fine you 10% of global turnover or £18m, whichever is greater, if you fail to comply.
Games are 100% in scope and being regulated - without a shadow of a doubt.

 

Do I need to care about complying with the Online Safety Act?

Yes. Fines are £18m or 10% of global turnover, whichever is greater.

Here are some comments from Ofcom’s latest statement, “Volume 1: Overview, scope and regulatory approach”, from their finalised guidance regarding the Online Safety Act:

“Services cannot decline to take steps to protect children because it is too expensive or inconvenient – protecting children is a priority. All services, even the smallest, will have to take action”

It doesn’t matter if you’re a big studio or a small game; you have to comply. No excuses.

”By now, all services must have carried out assessments to determine whether they are in scope of the children’s safety duties. We anticipate that most services not using highly effective age assurance will be in scope of the regulatory package we are confirming today.”

Services in scope of the children’s safety duties now have until 24 July 2025 to complete and record their children’s risk assessments, as explained in the Children’s Risk Assessment Guidance. There’s lots of work which you should have completed by now, and if you haven’t - do it now. The most important and major deadlines are coming up. Don’t miss them.

“We can now confirm that our proposals will also include measures to protect children from grooming through the use of highly effective age assurance

By the end of 2025, we will publish our final guidance on wider protections for women and girls following our consultation in February 2025.”

This is an evolving and expanding area which will continue to develop legislatively to further protect users, which means service providers will have to keep evolving their processes and adjusting their product/service offering to remain compliant as the requirements change.

Additionally, you need to remember that while the Online Safety Act is a UK first, it is not unique. Australia has passed its own version, which is coming into effect at the end of 2025, and EU member states are in the process of passing their own versions.

So this isn’t a one-off tick box exercise for today, and it’s not something that you can ignore. This is a global area of focus, and will be ever-developing over the coming years.

 

What do games have to do?

The accurate, short, but not-very-helpful answer.

Follow Ofcom’s guidance and complete the following:

  1. Children’s access assessment, (should have been completed by Jan 2025)

  2. Illegal content assessment and safety duties, (should have been completed by March 2025)

  3. Children’s risk assessment, (should have been completed by April 2025)

  4. Comply with Children’s Safety Duties. (Due July 25th 2025 at the latest)

Ofcom has hundreds of pages of guidance as well as online/interactive tools regarding how to do this. I recommend your DPO, legal team, or designated child safety officer (which is a new named person you have to have) read and complete these steps.

Although this is critically important, and you need to do this, it’s not a helpful answer as an overview for the games industry. As a result, I’ve condensed a lot of information to give you a fuller and more useful answer. That said, you still need to do the above.

The longer, detailed, and helpful answer.

Firstly, I need to provide some context.

In the following sections we will cover:

  • The types of content that the OSA & Ofcom are regulating,

  • Risk factors specific to games,

  • Codes of practice and recommended measures,

  • At the end, based on these, we will then cover what you need to practically implement.

Section 1: Types of content that the Online Safety Act and Ofcom are regulating:

There are two types of content, two assessments you need to undertake, and two lots of steps you need to perform:

  • Illegal content and illegal harms,

  • Harmful content (which is different from illegal content or harms, but can overlap).

Content type #1: Illegal content and Illegal harms:

In this section we will specifically cover the illegal content and illegal harms, which you subsequently need to perform your safety duties to protect users from.

In regards to the gaming industry, the types of illegal content most relevant and likely are:

  • Offences against children: child sexual exploitation and abuse (CSEA), offences relating to child sexual abuse material (CSAM), and grooming.

  • Threats (including hate),

  • Abuse and insults (including hate),

For further reading and full context, please see the following link: Ofcom: Protecting people from illegal harms online, Illegal content judgements guidance (ICJG).

Child sexual exploitation and abuse (CSEA): Offences relating to child sexual abuse material (CSAM).

5.20 Content which ‘incites’ a child under 16 to engage in sexual activity is illegal content, even where the incitement takes place but the sexual activity does not. This means that where providers do not have information about whether or not a child has been caused to participate or engage in sexual activity offline, the content is illegal if it incites (i.e. encourages or assists) them to do so

5.22 If pornographic content is sent to a child under 16 years, it will be reasonable for service providers to infer that the child has been caused to watch it.

Inferring the potential victim’s age as under 16

5.23 For content to amount to these offences, the communication must involve a child under the age of 16. In order to protect children from online harms, the Act requires providers of services that are likely to be accessed by children to use age estimation or age verification measures.

5.24 Reasonable grounds to infer that a potential victim is a child should be presumed to exist where:

b) Information from age estimation or age verification measures (‘age assurance measures’) indicates that the potential victim in the image is aged under 16. c) The potential victim of grooming states in a report or complaint that they are aged under 16 or were aged under 16 at the time when the potentially illegal content was posted. d) Account information indicates that the potential victim is aged under 16, except where the subject concerned has been using the service for more than 16 years. e) A person other than the potential victim states in a report or complaint that the potential victim is aged under 16 or was aged under 16 at the time when the potentially illegal content was posted. This applies unless:

i. Information from age estimation or age verification measures (‘age assurance measures’) indicate that the potential victim is aged 16 or over; or ii. The potential victim stated in a report or complaint that they were aged 16 or over at the time the potentially illegal content was posted.

Adult to child only offences

5.29 If consideration of the offences above has not resulted in the content being judged to be illegal content, but the content is of a sexual nature and involves a child who can be reasonably inferred to be under 16, the provider should next consider the age of the potential perpetrator.

Sexual communication with a child

5.30 Content will be illegal where it amounts to sexual communication with a child. In order for content to be illegal under this offence, there must be reasonable grounds to infer that all of the following are true:

a) the communication involves at least one child under the age of 16 (the potential victim(s)) and at least one adult aged 18 or over (the potential perpetrator); b) the adult aged 18 or over intends to communicate with the child; c) the communication is either itself sexual, or was intended to encourage the child to make a sexual communication; d) the adult in question did not reasonably believe that they were communicating with a person aged 16 or over; and e) the communication was for the purposes of sexual gratification of the adult in question.

5.33 Communication should be considered sexual where any part of it relates to sexual activity or where any part of it is what a reasonable person would consider to be sexual. It is not necessary to infer that the adult in question themselves believed the communication to be sexual.

5.34 Communication which encourages a child to communicate in a sexual way is encompassed within this definition.

5.35 The medium of the communication is irrelevant when judging whether content is illegal: written messages, audio, video and images may all be considered to amount to sexual communication with a child. This means that the sending of sexualised imagery (for example, an image, video or gif depicting sexual activity) will be captured (although it is likely to have been caught by the ‘sexual activity’ offences above). Likewise, content communicated via permanent means (for example, in a comment on a photo that stays on the service unless the user/service makes a decision to remove it) or via ephemeral means (for example, an audio message in a virtual environment) may amount to sexual communication with a child. Content posted in these settings will be illegal if it amounts to any of the offences set out below.

5.36 Service providers may be most likely to encounter such content via direct or group messages but should also be aware of the risk of this offence manifesting in illegal content in other ways such as via comments or livestreams, via gaming platforms, or in immersive virtual reality environments.

Illegal content: Threats, abuse and harassment (including hate).

Overview of themes relating to illegal content and illegal harms:

Note: This is not the full list. This is a reduced list of the offences most likely to be encountered within games via user-to-user communication.

3.1 The priority offences set out in Schedule 7 of the Online Safety Act (‘the Act’) which relate to threats, abuse and harassment overlap with one another to a significant degree. For the purposes of this chapter, we therefore approach them based on theme, rather than offence by offence.

The themes are:

b) Threats (including hate), encompassing:

i) threatening behaviour which is likely to cause fear or alarm ii) threatening behaviour which is likely to cause harassment or distress

c) Abuse and insults (including hate), encompassing:

i) abusive behaviour which is likely to cause fear or alarm ii) abusive behaviour which is likely to cause harassment or distress

3.2 Suspected illegal content may include more than one of these themes. It may well also need to be considered under other categories of priority offences; in particular: terrorism, CSAM (for example, when a child is being blackmailed), grooming, image-based sexual offences (including intimate image abuse) or foreign interference and the non-priority false communications offence.

Threats or abusive behaviour likely to cause fear or alarm:

3.22 It is not necessary that a person actually suffered fear or alarm from content being posted, only that it was likely to cause a ‘reasonable person’ to suffer fear or alarm. A ‘reasonable person’ is someone who is not of abnormal sensitivity. However, the characteristics of the person targeted are relevant. A reasonable person who is threatened because of characteristics they have (for example, race, sexuality, religion, gender identity or disability) is more likely to feel threatened.

3.23 The mere fact that a person has complained about content is not sufficient to show that a reasonable person would be likely to suffer fear or alarm. In considering whether a reasonable person would be likely to suffer fear or alarm, the following factors are relevant:

Threats or abusive behaviour likely to cause harassment or distress:

3.33 Distress involves an element of real emotional disturbance or upset. The same is not necessarily true of harassment. A person may be harassed, without experiencing any emotional disturbance or upset. However, although the harassment does not have to be grave, it should also not be trivial. When the UK courts are considering these offences, this is the test a jury is asked to apply, and so it is right for providers to take a common-sense view of whether they have reasonable grounds to infer that the content they are considering meets this test.

3.34 Service providers should consider any information they hold about what any complainant has said about the emotional impact of the content in question and take a common-sense approach about whether it is likely to cause harassment or distress. If the content expresses racial hatred or hatred on the basis of other protected characteristics, it is far more likely to cause harassment or distress. Certain words carry greater force depending on who they are used against. The volume of the content concerned, or repetition of the conduct, may make it more likely content will cause harassment or distress. Offences which involve repeated instances of behaviour are also considered in this chapter; see paragraphs 3.107-3.108

Content type #2: Harmful content (different from illegal content):

In this section we will cover harmful content, which you will subsequently need to protect users from (with specific attention to children) as part of your children’s safety duties.

Harmful content falls into three main high-level categories:

  • Primary priority content (PPC)

  • Priority content (PC)

  • Non-designated content (NDC)

In the following section we will explore each category of content, including all types of PPC and the types of PC most likely to be relevant to the majority of games.

For shorthand reference, see Table 1.1 (“Content harmful to children covered in our guidance as defined in the Act”) in the Guidance on content harmful to children.

For full context, please read all of the following: Ofcom: Protecting children from harms online, all 6 volumes.

Primary priority content (PPC)

  • Pornographic content

  • Suicide content (which encourages, promotes, or provides instructions for suicide),

  • Self-injury content (encourages, promotes, or provides instructions for an act of deliberate self-injury),

  • Eating disorder content

Priority content (PC)

  • Content which is abusive or incites hatred,

    • Content which is abusive and targets any of the following characteristics:

      • Race,

      • Religion,

      • Sex,

      • Sexual orientation,

      • Disability, or,

      • Gender reassignment,

    • Content which incites hatred against people:

      • Of a particular race, religion, sex, or sexual orientation,

      • Who have a disability, or,

      • Who have the characteristics of gender reassignment.

  • Violent content

    • Content which encourages, promotes, or provides instructions for an act of serious violence against a person.

    • Content which:

      • Depicts real or realistic serious violence against a person,

      • Depicts the real or realistic serious injury of a person in graphic detail,

    • Or, content which:

      • Depicts real or realistic serious violence against an animal,

      • Depicts real or realistic serious injury of an animal in graphic detail,

      • Realistically depicts serious violence against a fictional creature, or the serious injury of a fictional creature in graphic detail.

  • Bullying content

    • Content may, in particular, be bullying content if it is content targeted against a person which -

      • Conveys a serious threat,

      • Is humiliating or degrading,

      • Forms part of a campaign of mistreatment.

Non-designated content (NDC)

Any other type of content not mentioned here that presents a material risk of significant harm to an appreciable number of children in the UK.

NDC is not addressed in the Guidance on Content Harmful to Children but is addressed in the Children’s Register. In accordance with their children’s risk assessment duties, service providers are required to consider types of content that may be harmful to children beyond the designated harms specified by the Act.

Providers should refer to the Introduction in Section 1 of the Children’s Register and the Children’s Risk Assessment Guidance for further detail on how they should consider NDC with regard to their children’s risk assessments duties.

Summary of content types for video games:

Although there are a lot of content types to cover, most of the behaviours in relation to games can be distilled into three main groups. We will cover functionalities in full detail in the coming section, although we touch on them here.

1. Toxic players who send messages that constitute:

  • Primary Priority Content:

    • Encourage or provide instruction on suicide (telling people to kill themselves, especially those who do so enthusiastically and in detail).

    • Encourage or provide instruction on self-harm (telling people to hurt themselves, especially those who do so enthusiastically and in detail).

  • Priority Content:

    • Abuse - generally abusive content targeting specific protected characteristics.

    • Bullying - humiliating or degrading content, and/or conveying a serious threat.

2. Predators who message/communicate with children (CSEA, CSAM, and grooming)

  • Content which incites a child under 16 to engage in sexual activity, even where the incitement takes place but the sexual activity does not.

  • Sexual communication with a child where:

    • The child is under 16,

    • The perpetrator is over 18,

    • The adult intends to communicate with the child,

    • The communication is either itself sexual, or was intended to encourage the child to make a sexual communication,

    • The adult did not reasonably believe the person they were communicating with was 16 or over,

    • The communication was for the purposes of sexual gratification of the adult in question.

3. Violent content within the game itself which may not be age-appropriate

  • Violent content relating to people:

    • If your game has content which depicts real or realistic serious violence against a person,

    • or the real/realistic serious injury of a person in graphic detail,

  • Violent content relating to animals, including fictional creatures:

    • Depicts real or realistic serious violence against an animal,

    • Depicts real or realistic serious injury of an animal in graphic detail,

    • Realistically depicts serious violence against a fictional creature, or the serious injury of a fictional creature in graphic detail.

 

Section 2: Risk Factors

Ofcom have established risk factors for service providers. The three that are most relevant to games are:

  1. Service type,

  2. User base,

  3. Functionalities.

The nice thing here is that almost all games will fall into the same categories.

  • Your service type is the same: a game,

  • User base will be diverse based on game and genre,

  • Almost all games have the same risk factors in regards to functionality:

    • Anonymous profiles (anyone can create an account and it’s not tied to them as a person),

    • User networking and user connections (users who don’t know each other can come across each other),

    • The big one: user communication (users can message and/or chat).

Service type:

There are lots of service types, but as you’re reading this, we will assume that you are a game.

User base:

Age is a key component of your user base.
For the full read and context on age, see “17. Recommended age groups” (page 311) of the Children’s Register of Risks.

Additionally, to support age being a key factor specifically within games, Ofcom has undertaken research and found (see “Children’s online behaviours and risk of harm within the Children’s Register of Risks"):

  • (1.38) More than 80% of children aged 13 and over play online video games.

  • (1.39) 7-18 year olds spend on average 3.4 hours a day online.

  • (1.45b) Many children play online games which may bring them into contact with strangers, including adults.

    • Three quarters (75%) of children aged 8-17 game online,

    • 25% play with people they do not know outside the game.

    • Additionally, 24% chat to people through the game who they do not know outside of it.

    • When prompted, 62% of parents whose 3-17-year-old played games online expressed concern about their child talking to strangers while gaming (either within the game or via a chat function) and 54% were concerned that their child might be bullied.

The age bands are as follows:

  • 0-5 years: Pre-literate and early literacy.

  • 6-9 years: Core primary school years.

  • 10-12 years: Transition years.

  • 13-15 years: Early teens.

  • 16-17 years: Approaching adulthood.

As a note, these age bands align with the Information Commissioner’s Office (ICO) Age Appropriate Design Code, on the basis of evidence linking certain online behaviours to age and developmental stage. Additionally, Ofcom mention that they created these age groups with consideration of life stages, online presence, parental involvement, and age-specific risks.

Additional and important commentary from Ofcom regarding age and age bands includes:

17.1 As mandated by the Online Safety Act 2023 (the Act), user-to-user services must assess “the level of risk of harm to children presented by different kinds of content that is harmful to children, giving separate consideration to children in different age groups”. There are similar requirements for search services to consider children in different age groups.

17.2 The Act also imposes a number of safety duties requiring services likely to be accessed by children to manage and mitigate risks of harm from content that is harmful to children. This includes, in relation to user-to-user services, operating a service using proportionate systems and processes designed to:
- (i) prevent children of any age from encountering primary priority content that is harmful to children, and
- (ii) protect children in age groups judged to be at risk of harm (in the risk assessment) from encountering priority content that is harmful to children and non-designated content.1573

0-5 years: Pre-literate and early literacy.

A time of significant growth and brain development for very young children. Children of this age are heavily dependent on their parents, with parental involvement substantially influencing their online activity.

Age-specific risks

17.15 Just by being online, children in this age group are at risk of encountering harmful content. As children use devices or profiles of other family members, this may lead to a risk of encountering age-inappropriate content, including harmful content, as recommender systems1587 recommend content on the basis of the search and viewing history of the other user(s).

17.16 The use of child-specific or restricted-age services does not guarantee that children will necessarily be protected from harmful content. It is possible that children may be more likely to use these services unsupervised. There have been cases of bad actors in the past using child-friendly formats, such as cartoons on toddler-oriented channels, to disseminate harmful content on child-specific services.1588

6-9 years: Core primary school years.

After starting mainstream education, children become more independent and increasingly go online. Parents create rules to control and manage their children’s online access and exposure to content.

Age-specific risks

17.24 Some children in this age group are starting to encounter harmful content, and this exposure has the potential for lasting impact. Research by the Office of the Children’s Commissioner for England found that, of the children and young people surveyed who had seen pornography, one in ten (10%) had seen it by the age of 9. Exposure to pornography at this age carries a high risk of harm. For example, older children reflect on being deeply affected by sexual or violent content they encountered when they were younger, which may have been more extreme than they anticipated (in some cases the child had looked for the content, and in other cases it had been recommended).1600

17.25 Children are also being exposed to upsetting behaviour online. Over a fifth (22%) of 8-9-year-olds reported that people had been nasty or hurtful to them, with a majority of these children experiencing this through a communication technology such as messaging or social media.1601

17.26 As with the younger age group, the use of family members’ devices or profiles may lead to a risk of encountering age-inappropriate content, including harmful content. Recommender systems present content on the basis of various factors, including the profile of the user and the search and viewing history of any user(s) of that account/profile. For example, we heard from children who had been shown harmful content via an auto-play function on a social media service when using their parent’s phone and account.1602

10-12 years: Transition years.

A period of rapid biological and social transitions when children gain more independence and socialise more online. Direct parental supervision starts to be replaced by more passive supervision approaches.

Age-specific risks

17.33 More independent use of devices, and a shift in the type of parental supervision, as well as increased use of social media and messaging services to interact with peers, creates a risk of harmful encounters online. Children may start to be more exposed to, or more aware of, bullying content online, with 10-12-year-olds describing how they feel confused when trying to distinguish between jokes and ‘mean behaviour’ online.1613 Due to the rapid neurological development taking place in the teenage brain at this point, the psychological impacts of bullying can last into adulthood.1614 Research has found that of the children who have seen online pornography around one in four (27%) had encountered it by the age of 11.1615

17.34 Despite a 13+ minimum age restriction for many social media sites, 86% of 10-12-year-olds say they have their own social media profile.1616 Our research estimates that one in five (20%) children aged 8-17 with an account on at least one online service (e.g., social media) have an adult profile, having signed up with a false date of birth. Seventeen per cent of 8-12-year-olds have at least one adult-aged (18+) profile.1617 Alongside this, 66% of 8-12-year-olds have at least one profile in which their user age is 13-15 years old.1618

17.35 Evidence suggests that 11-12 is the age at which children feel safest online. A report by the Office of the Children’s Commissioner for England found that the proportion of children who agree they feel safe online peaks at ages 11 and 12 (80%), increasing from 38% from the age of 5.1619

13-15 years: Early teens.

This age group is fully online with children using an increasing variety of services and apps. Parents’ involvement in their children’s online use starts to decline. Increased independence and decision-making, coupled with an increased vulnerability to mental health issues, means children can be exposed to, and actively seek out, harmful content.

Age-specific risks

17.45 A greater use of online services, more independent decision-making and the risk-taking tendencies common in this age group can together increase the risk of encountering harmful content.

17.46 Ofcom research estimates that a fifth (19%) of 13-15-year-olds have an adult-aged profile on at least one online service, potentially exposing them to inappropriately-aged content.1640 A falsely-aged profile will also mean a child can access and use functionalities on services that have a minimum age of 16 years old, such as direct messaging or livestreaming on some services.

17.47 Exposure to hate and bullying content increases from the age of 13. Sixty-eight per cent of 13-17-year-olds say they have seen images or videos that were ‘mean, or bully someone’, compared to 47% of 8-12-year-olds.1641 Encountering hate online is also quite common; three-quarters of children aged 13-15 report having seen online hate on social media.1642

17.48 Children in this age group are particularly vulnerable if they encounter content relating to self-harm and suicide.1643 Due to hormonal changes and mental health challenges, children in this age group may be at risk of the most severe impacts from encountering this type of content, particularly if seen in high volumes.1644 Five per cent of 13-17-year-olds had experienced/seen content encouraging or assisting serious self-harm, and 4% had experienced/seen content encouraging or assisting suicide over a four-week period.1645

16-17 years: Approaching adulthood.

At 16 children attain new legal rights for the first time, while parental supervision, and parental concern about their online safety, both decrease. But changes in their behaviour and decision-making ability at this age can lead to an increased risk of exposure to harmful content.

Age-specific risks

17.59 Our research also estimates that almost three in ten (28%) of 16-17-year-olds have a profile with an age of at least 18 on at least one online service (e.g., social media).1663 These children could receive age-inappropriate content suggestions as well as access restricted functionalities. For example, some services restrict the use of livestreaming to 18-year-olds.

17.60 Older children are also more likely to experience communication that potentially makes them feel uncomfortable; 64% of 16-17-year-olds reported experiencing at least one potentially uncomfortable communication, compared to 58% of 13-15-year-olds. These uncomfortable experiences included receiving abusive, nasty or rude messages/voice notes/comments, reported by one in five (20%) 16-17-year-olds.1664

Summary on user base:

A core part of performing your illegal content, children’s access, and children’s risk assessments includes understanding your user base.

One key requirement from the legislation, which is reiterated by Ofcom, is the need for services to use highly effective age assurance (HEAA) as part of your children’s access assessment and to perform your illegal content safety duties and children’s safety duties.

To paraphrase: you can’t state that you don’t have children accessing your service, or determine the age bands of your users, without having HEAA implemented. Additionally, HEAA is required to abide by your illegal content and harmful content safety duties.

If you have PPC, PC, or NDC which children either can access or may have access to, you will have to implement HEAA both to understand your user base and to protect children from this content. That said, you have the ability to proportionately adjust certain access based on age band; a minimal sketch of mapping an assured age to those age bands follows.
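To make this concrete, here is a minimal TypeScript sketch of mapping an assured age onto the age groups listed above. The names and structure are my own illustration rather than anything prescribed by Ofcom, and how you actually use the bands must come out of your own risk assessments.

```typescript
// Minimal sketch only: mapping an assured age onto the age groups used in
// Ofcom's Children's Register of Risks. Type and function names here are
// illustrative assumptions, not part of any Ofcom guidance.
type AgeBand =
  | "0-5"    // pre-literate and early literacy
  | "6-9"    // core primary school years
  | "10-12"  // transition years
  | "13-15"  // early teens
  | "16-17"  // approaching adulthood
  | "adult"; // 18+, outside the children's age groups

function toAgeBand(assuredAge: number): AgeBand {
  if (!Number.isInteger(assuredAge) || assuredAge < 0) {
    throw new Error("Invalid assured age");
  }
  if (assuredAge <= 5) return "0-5";
  if (assuredAge <= 9) return "6-9";
  if (assuredAge <= 12) return "10-12";
  if (assuredAge <= 15) return "13-15";
  if (assuredAge <= 17) return "16-17";
  return "adult";
}
```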

Functionalities:

There are two parts to functionalities: functionality within your existing game which contributes to risk factors, and functionalities that Ofcom recommend to mitigate risk. This section focuses on the former (functionality currently in your game which can contribute to the risk of illegal content or harmful content being discovered or interacted with by children).

There are two main themes throughout the guidance regarding the gaming industry on functionality: user messaging and anonymity.

For full context, please review Ofcom: Children’s Register of Risks:

  • Section 3: Suicide and self-harm content
    3.42 - Suicide and self-harm content in gaming services (Primary priority content)

  • Section 5: Abuse and hate content
    5.86 - Abuse and hate content in gaming services (Priority content)

  • Section 6: Bullying content,
    6.60 - Bullying content in gaming services (Priority content)

  • Section 7: Violent content
    7.58 - Violent content in gaming services (Priority content)

It is absolutely clear that Ofcom has identified that in-game messaging and communication is a key area of risk for children in regards to coming into contact with suicide and self-harm content, abuse and hate content, bullying content, and violent content.

Summary of risk factors:

I don’t think it comes as a surprise to any of us, but Ofcom has evidenced and established that:

  • Children are playing video games younger than ever before, and for longer periods,

  • Players can communicate in games,

  • Games have toxicity amongst players,

  • That toxicity, whilst it may be considered merely unpleasant behaviour amongst adults, is an extremely high-risk area under the Online Safety Act and Ofcom’s framework in relation to children. This is especially true of suicide and self-harm content, which is classified as Primary Priority Content (the most extreme content, from which children have to be protected the most).

  • Ofcom highlight anonymity amongst players as a large risk factor. This is something that I don’t think will be able to change, for a few reasons:

    • I don’t believe players would be tolerant of sharing their real identity with games companies,

    • I don’t believe players would be accepting or tolerant of their real identity being openly available, visible, or tied to their game accounts,

    • I don’t believe that it would be legal under GDPR, ePrivacy directive, or CCPA to enforce this upon users.

  • With that said, the lack of accountability behind behaviour is clearly a risk factor, as Ofcom have established. The anonymity aspect doesn’t have to be compromised to provide the accountability functionality though, which is something we have accomplished at PlaySafe ID.

 

Section 3: Codes of practice and recommended measures

Intro:

The following are functionalities you almost certainly need to add into your game/service. These are all important to Ofcom and the Online Safety Act, and cannot be overlooked or deferred. They need to be considered and implemented where appropriate. I have broken down the list of codes to those specific to games, but for all details, please refer to: Protection of Children Code of Practice for user-to-user services.

1.1 Under the Online Safety Act 2023 (the ‘Act’), Ofcom is required to prepare and issue Codes of Practice for providers of Part 3 services, describing measures recommended for the purpose of compliance with specified duties imposed on those providers by the Act.

Governance and accountability of your service:

  • PCU A2 - Individual accountable for the safety duties protecting children and reporting and complaints duties,

  • PCU A3 - Written statements of responsibilities,

  • PCU A5 - Tracking evidence of new and increasing harm to children,

  • PCU A6 - Code of conduct regarding protection of children from harmful content,

  • PCU A7 - Compliance training.

Age assurance (highly effective age assurance):

  • PCU B5 - Priority content is prohibited on the service, but it is not technically feasible to take down all such content when the provider determines it to be in breach of its terms of service. This covers bullying, abuse and hate, and violent content (toxic messages from users).

  • Potentially, PCU B4 - which is the same as above but in relation to Primary Priority Content (specifically suicide or self-harm). For games with a higher proportion of serious toxicity, this might be more appropriate.

  • In either case, you will need to implement highly effective age assurance to gate user access to the relevant functionalities (user communication) until HEAA has been completed. More on this specifically in the following section; a minimal sketch of the gate itself follows this list.
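As a minimal sketch of that gate (the field and type names are my own placeholders, not anything defined in the Code), the server-side check could look like this:

```typescript
// Minimal sketch: deny user-to-user chat until highly effective age assurance
// (HEAA) has been completed. The User shape here is an illustrative assumption.
interface User {
  id: string;
  heaaCompleted: boolean; // set true only once a highly effective age check has been passed
}

type ChatAccessDecision =
  | { allowed: true }
  | { allowed: false; reason: "age_assurance_required" };

function canUseChat(user: User): ChatAccessDecision {
  // Enforce this on the server as well as in the client UI, so the gate
  // cannot be bypassed by a modified client.
  if (!user.heaaCompleted) {
    return { allowed: false, reason: "age_assurance_required" };
  }
  return { allowed: true };
}
```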

Content moderation:

  • PCU C1 - Having a content moderation function to review and assess suspected content that is harmful to children,

  • PCU C2 - Having a content moderation function that allows for swift action against content harmful to children,

  • PCU C3 - Setting internal content policies,

  • PCU C4 - Performance targets,

  • PCU C5 - Prioritisation,

  • PCU C6 - Resourcing

  • PCU C7 - Provision of training and materials to individuals working in content moderation (non-volunteers),

  • PCU C8 - Provision of materials to volunteers.
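The measures above (PCU C1-C8) describe a content moderation function with prioritisation, performance targets, and resourcing. Purely as an illustration of the prioritisation idea, and using a report data model I have invented for the example, a moderation queue might be ordered like this:

```typescript
// Minimal sketch of a moderation queue with prioritisation (cf. PCU C1-C5).
// The ContentReport shape and the severity scores are my own assumptions.
type SuspectedHarm = "ppc_suicide_self_harm" | "pc_abuse_hate" | "pc_bullying" | "pc_violent";

interface ContentReport {
  contentId: string;
  suspectedHarm: SuspectedHarm;
  reportedAt: Date;
  reporterIsChild: boolean;
}

// Higher score = review sooner. PPC is treated as more urgent than PC here.
const severity: Record<SuspectedHarm, number> = {
  ppc_suicide_self_harm: 3,
  pc_abuse_hate: 2,
  pc_bullying: 2,
  pc_violent: 1,
};

function prioritise(queue: ContentReport[]): ContentReport[] {
  const score = (r: ContentReport) =>
    severity[r.suspectedHarm] + (r.reporterIsChild ? 1 : 0);
  return [...queue].sort((a, b) =>
    // Higher-severity harms and reports from children jump the queue;
    // otherwise oldest first, to keep review times predictable.
    score(b) - score(a) || a.reportedAt.getTime() - b.reportedAt.getTime()
  );
}
```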

Reporting and complaints:

  • PCU D1 - Enabling complaints,

  • PCU D2 - Having easy to find, easy to access, and easy to use complaints systems and processes,

  • PCU D3 - Provision of information prior to the submission of a complaint,

  • PCU D4 - Appropriate action: sending indicative timeframes,

  • PCU D5 - Appropriate action: sending further information about how the complaint will be handled,

  • PCU D6 - Opt-out from communications following a complaint,

  • PCU D7 - Appropriate action for relevant complaints about content considered harmful to children,

  • PCU D8 - Appropriate action for content appeals: determination,

  • PCU D10 - Appropriate action for content appeals: action following determination,

  • PCU D11 - Appropriate action for age assessment appeals

  • PCU D13 - Appropriate action for complaints about non-compliance with certain duties

  • PCU D14 - Exception: Manifestly unfounded complaints.
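To illustrate PCU D1-D5 (an easy-to-use complaints process, indicative timeframes, and information about how the complaint will be handled), here is a minimal sketch. The names, the 72-hour figure, and the wording are invented for the example; your own indicative timeframes should reflect your actual processes.

```typescript
// Minimal sketch of complaint intake (cf. PCU D1, D2, D4 and D5): accept the
// complaint, acknowledge it, and give the user an indicative timeframe.
// All names and the 72-hour figure are illustrative assumptions.
interface Complaint {
  userId: string;
  contentId?: string; // optional: some complaints are about the service rather than a piece of content
  category: "harmful_to_children" | "content_appeal" | "age_assessment_appeal" | "other";
  description: string;
}

interface ComplaintAcknowledgement {
  complaintId: string;
  receivedAt: string;
  indicativeResponseHours: number;
  nextSteps: string;
}

function submitComplaint(complaint: Complaint): ComplaintAcknowledgement {
  const complaintId = `c_${Date.now()}_${Math.random().toString(36).slice(2, 8)}`;
  // In a real service the complaint would be persisted and routed to the
  // moderation and complaints teams; here we only show the acknowledgement.
  return {
    complaintId,
    receivedAt: new Date().toISOString(),
    indicativeResponseHours: 72,
    nextSteps:
      "We will review your complaint and tell you the outcome. " +
      "You can opt out of further communications about this complaint at any time.",
  };
}
```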

Settings, functionalities, and user support:

  • PCU F1 - Providing age-appropriate user support materials for children,

  • PCU F3 - Signposting children to support when they report harmful content,

User controls:

  • PCU J1 - User blocking and muting, only where your game has more than 700,000 monthly active United Kingdom users,

  • PCU J3 - Invitations to group chats, where the service has group messaging functionality.
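As a small illustration of PCU J1 (user blocking and muting) and PCU J3 (requiring acceptance before a user is added to a group chat), using data structures assumed purely for the example:

```typescript
// Minimal sketch of user controls: blocking/muting (cf. PCU J1) and requiring
// consent before a user joins a group chat (cf. PCU J3). Illustrative only.
interface UserControls {
  blocked: Set<string>;                     // users whose messages and invites are rejected
  muted: Set<string>;                       // users whose messages are hidden from this user
  pendingGroupInvites: Map<string, string>; // inviteId -> groupId, awaiting acceptance
}

function shouldDeliverMessage(recipient: UserControls, senderId: string): boolean {
  return !recipient.blocked.has(senderId) && !recipient.muted.has(senderId);
}

function inviteToGroup(recipient: UserControls, senderId: string, groupId: string): string | null {
  if (recipient.blocked.has(senderId)) return null; // blocked users cannot invite
  const inviteId = `inv_${Date.now()}`;
  recipient.pendingGroupInvites.set(inviteId, groupId); // the user joins only if they accept
  return inviteId;
}
```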

Terms of service:

  • PCU G1 - Terms of service: Substance (all services),

  • PCU G3 - Terms of service: Clarity and accessibility.

 

Section 4: What we recommend/predict that games will do

Overview:

Firstly, you need to complete the four items listed earlier in “What do games have to do?”: the children’s access assessment, the illegal content assessment and safety duties, the children’s risk assessment, and the children’s safety duties. Ofcom’s guidance documents linked throughout this guide will support you in doing so.

In the following sections, we will specifically cover “what do I need to do to my game” aside from the administrative processes mentioned above.

Requirement: Implement Highly Effective Age Assurance.

Firstly, let’s cover what Ofcom and the OSA say about HEAA:

2.1 All providers of Part 3 services are required to carry out children’s access assessments to determine whether a service, or part of a service, is likely to be accessed by children.

2.2 The Act says that service providers may only conclude that it is not possible for children to access a service if that service uses a form of age assurance with the result that children are not normally able to access that service or part of it. 1

2.3 We consider that, in order to secure the result that children are not normally able to access their service (or a part of it), service providers should deploy highly effective age assurance and implement effective access controls to prevent users from accessing the service (or relevant part of it) unless they have been identified as adults.2

2.4 As stated in the Children’s Access Assessment Guidance, service providers should consult this guidance to understand what constitutes highly effective age assurance and / or to carry out an in-depth assessment of whether a particular form of age assurance is highly effective for the purpose of stage 1 of the children’s access assessment.3

Protection of Children Codes

2.5 The Protection of Children Code of Practice for user-to-user services (“the Code”), includes recommended measures on the implementation of highly effective age assurance in certain circumstances. 4 The Code sets out the definition of highly effective age assurance for these recommended measures, and lists the steps that service providers should take to fulfil each of the criteria.5

2.6 The Code also includes other recommended measures which may be relevant to the way that service providers implement and operate a highly effective age assurance process on their service – for example, measures relating to the clarity and accessibility of terms of service, and reporting and complaints.6

2.7 Service providers are required to keep records of (1) steps that they have taken in accordance with the Code, or (2) any alternative steps they have taken to comply with their duties.7 Service providers should consult our Record Keeping and Review Guidance for this purpose.8

2.8 This guidance will help service providers in adopting recommended measures that relate to the implementation of highly effective age assurance, by providing additional technical detail and examples on how to meet the standard.

It’s clear that you need to implement highly effective age assurance, both to effectively and accurately complete your Children’s Access Assessment, and to abide by your Children’s Safety Duties thereafter.

Criteria to ensure an age assurance process is highly effective:

Please see Guidance on highly effective age assurance part 3 services for all details.

4.1 Service providers need to:
(a) choose an appropriate kind (or kinds) of age assurance; and
(b) implement it in such a way that it is highly effective at correctly determining whether a user is a child.

4.2 To ensure that an age assurance process is, in practice, highly effective at correctly determining whether or not a user is a child, service providers should ensure that the process fulfils each of the following four criteria:

- it is technically accurate;
- it is robust;
- it is reliable; and
- it is fair.

Additionally, providers need to consider:

  • Accessibility for users,

  • Interoperability; the ability for technological systems to communicate with each other using common and standardised formats.

Kinds of age assurance that are capable of being highly effective:

  • Photo-ID matching -
    The most accurate, robust, reliable, and fair.

  • Facial age estimation -
    Varying degrees of accuracy and reliability. Potentially more risky than photo-ID matching.

  • Open banking -
    Accurate, robust, and reliable. Fair for over 18s, not usable for under 18s.
    Suitable for Primary Priority Content gating to an entire service. Not suitable for most games.

  • Credit card checks,
    Same as Open Banking.

  • Email-based age estimation,
    Varying degrees of accuracy and reliability; likely similar to open banking; good for gating under 18s, but likely less good at accurately providing an actual age of a user.

  • Use a Digital Identity Service
    The best solution if the digital identity service uses Photo-ID matching.
    This means you don’t have to cause any disruption to users who are already verified, and new users being verified for the first time will be able to re-use the verification across other services in the future. Highest level of accuracy, robustness, reliability, and fairness, with the least friction and disruption. Additionally, it improves accessibility for users and ticks the interoperability consideration.

What methods are not capable of being highly effective:

  • Self-declaration of age,

  • Age verification through online payment methods which do not require a user to be over the age of 18,

  • General contractual restrictions on the use of the service by children,

Summary on Highly Effective Age Assurance:

My recommendation is to use a reusable digital identity service, but make sure you choose one that verifies users based on photo-ID matching.
This is for a few very important reasons:

  1. They will have an easier integration: you just call their API and ask whether a user is verified, or for the user’s age (see the sketch after this list),

  2. They will be the data controller for all the PII (personally identifiable information) collected and processed throughout the verification process. If, however, you integrate or use any other method, you are the data controller, and the service you use is the data processor. This has GDPR, ePrivacy, and CCPA implications regarding data collection, handling, and processing, increasing your workload, complexity, and liability. Working with a reusable digital identity service avoids all of this for you, especially if they only return the user’s age and not a date of birth (as age alone is too coarse to be considered PII either directly or indirectly).

  3. There’s less friction to the end user as they can verify themselves once with the reusable digital identity service, and then prove their age without sharing any other information with a wide range of games and services.

  4. Following on from the previous point: gamers HATE sharing information and data with games companies. Having a setup which means they don’t have to share this data with you (as they would through integrating any other solution) will provide the best experience to the player/user.

  5. Lower costs: A reusable digital identity service has volume commitments with a KYC provider, which means they get a much lower price per verification. This lower price can be passed on to you.

  6. The most compliant and future-proofed solution as per the legislation. A reusable digital identity service that uses photo-ID matching is the most technically accurate, robust, reliable, and fair method out of all which are “capable”. You only want to implement a solution once, so make sure you implement the right one so you don’t have to make big changes and cause disruption to your game and players in a few months.
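To illustrate point 1 above, here is roughly what the integration call could look like. The endpoint, header, and response fields below are invented placeholders for the purpose of the example; they are not PlaySafe ID’s (or any other provider’s) actual API, so always follow your chosen provider’s documentation.

```typescript
// Illustrative sketch only: querying a reusable digital identity provider for
// a user's assured age. The URL and response fields are invented placeholders,
// not any real provider's API.
interface AgeAssuranceResult {
  verified: boolean; // has the user completed photo-ID-based verification with the provider?
  age?: number;      // assured age in whole years (no date of birth is returned)
}

async function fetchAssuredAge(playerId: string, apiKey: string): Promise<AgeAssuranceResult> {
  const response = await fetch(
    `https://identity-provider.example.com/v1/players/${encodeURIComponent(playerId)}/age`,
    { headers: { Authorization: `Bearer ${apiKey}` } },
  );
  if (!response.ok) {
    // Treat failures as "not verified" so any gated functionality stays switched off.
    return { verified: false };
  }
  return (await response.json()) as AgeAssuranceResult;
}
```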

Recommended provider:

I recommend you integrate PlaySafe ID as it is the reusable digital identity provider specifically designed for games and gamers.

In addition to all of the improvements to your compliance and reduction in liability, speed and ease of integration, and massive cost savings - we also provide meaningful accountability for bad actors to keep your game fair and safe for everyone.

With PlaySafe ID you can turn on a matchmaking pool where only verified PlaySafe ID users can play. Users who are caught cheating, hacking, botting, or being inappropriate to children face penalties across all PlaySafe Protected games and services.

So, not only can you solve your compliance and liability issues regarding the Online Safety Act and Ofcom, and future-proof yourselves against the Australian and EU versions coming soon - but you can also directly provide the most fair, fun, and safe environment for players, improving retention and ARPU - all with less risk, cost, time, and effort.

Visit our Studios & Developers page to learn more.

Requirement: Limiting access to functionality based on age.

As established by Ofcom’s research and mandated by the Online Safety Act, in-game messaging is a high risk factor for children due to the range of illegal and harmful content that can be encountered:

  • Illegal content:

    • CSEA, CSAM, Grooming,

  • Harmful content:

    • Primary priority content: Suicide and self-harm content,

    • Priority content: Abuse and hate content, bullying content, and violent content.

The good news for games is as follows:

You are considered PCU B5 (see the Protection of Children Code of Practice for user-to-user services, page 10).
This means that you don’t allow PPC, PC, or NDC on your service, but it might exist and users might encounter it before you can reasonably stop/remove it - for example, via user communication. You don’t allow any of the content types above, but that doesn’t mean you can stop them from appearing instantly, 100% of the time.

As a result of being a PCU B5, you DON’T have to block access to your service/game before a user completes the highly effective age assurance process.
In other words, anyone can still buy, download, install, and play your game as normal.

It DOES mean, however, that you WILL likely have to block access to all user-to-user messaging/communication services until the user has completed the HEAA process, as this is where the risk of illegal content and harmful content lies. Note: This will be subject to your own children’s access assessment, illegal content assessment and safety duties, children’s risk assessment, and children’s safety duties - which you still need to complete. But this is the likely outcome in my opinion.

There will likely be variability in the limitation/restriction of service based on age group and your assessments. For example, you might find that:

  • All user-to-user communication is disabled until a user is 13 years old,

  • From 13-17 communication is enabled, with parental controls, and with the profanity filter permanently on,

  • Note: This is not an instruction. You will have to complete your own assessments and determine the right implementation for your specific game and userbase. This is a broad guess, based on intuition and experience, as to what I think the most likely outcome will be in the majority of cases. But do the work yourself and ensure you implement the right solution for your game(s).

  • You might find that a different approach, a more staggered approach, or even a more lenient or stricter approach is required. It depends on your game, userbase, and identified risks. A minimal sketch of the kind of staggered approach described above follows this list.
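Purely to illustrate that kind of staggered outcome (not a compliance recommendation; the thresholds must come out of your own assessments), a sketch:

```typescript
// Illustration only of the staggered approach described above; the actual
// thresholds and settings must come out of your own risk assessments.
interface ChatSettings {
  chatEnabled: boolean;
  profanityFilterLocked: boolean;    // filter on and not user-disableable
  parentalControlsAvailable: boolean;
}

function chatSettingsFor(assuredAge: number | undefined): ChatSettings {
  // No completed age assurance: treat the user as potentially a young child
  // and keep user-to-user communication off.
  if (assuredAge === undefined || assuredAge < 13) {
    return { chatEnabled: false, profanityFilterLocked: true, parentalControlsAvailable: true };
  }
  if (assuredAge < 18) {
    return { chatEnabled: true, profanityFilterLocked: true, parentalControlsAvailable: true };
  }
  return { chatEnabled: true, profanityFilterLocked: false, parentalControlsAvailable: false };
}
```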

Implement the functionality recommendations (to improve child safety).

As per the Protection of Children Code of Practice for user-to-user services, and as referenced in section 3 (above) “Section 3: Codes of practice and recommended measures”, you need to ensure you implement the recommended functionalities required to improve child safety.

These include functionalities that improve:

  • Governance and accountability of your service,

  • Age assurance (highly effective age assurance),

  • Content moderation,

  • Reporting and complaints,

  • Settings, functionalities, and user support,

  • User controls,

  • Terms of service.

 

Summary

I believe that the Online Safety Act and Ofcom are well intentioned and are basing their legislative and compliance frameworks on extensive research, data, and consultation (with industry, users, parents, and children). I also believe that any reasonable adult with a common-sense view can appreciate that the online world as it is today is not a safe environment for children. By giving children access to the internet, we are giving them access to any and all content, as well as the ability to contact or be contacted by any and all people. A literal Pandora's box.

It is for these reasons that I believe the Online Safety Act is a positive legislative and compliance beachhead which has the potential to make an incredibly positive impact on the lives of millions of children in the UK - and as the Australian and EU member state versions come into effect, across the world too.

I think that age-gating as it stands today (asking a user if they’re over 18 and to click a button) is a woefully pathetic system, and in 10 years we will all look back at how bad it was, and shake our heads collectively in disbelief at how easy it was for children to access things which we all universally agree as a society that they shouldn’t be able to access.

I feel that highly effective age assurance is an incredibly positive step, especially when implemented through a reusable digital identity service that uses photo-ID verification. I think it provides as frictionless and as positive an experience as possible for the user, whilst minimising disruption, risk, cost, effort, and liability for the game studio/developer. We have also already proven at PlaySafe ID that gamers are happy to verify themselves in exchange for a better gaming experience - as long as their data is secure and never shared with the game studio, and with the addition of PlaySafe Protected matchmaking to keep cheaters & bad actors out of games.

I also think that the measures to restrict functionality that isn’t age appropriate, or that risks exposing children to illegal or harmful content, are a good step. There will likely be resistance, or reluctance, from games companies to implementing these features (because it takes time, money, effort, and extra work). But it’s now business-critical for them to do so, and I think it’s a positive thing. I do also like that Ofcom have taken a proportionate approach and not made it a requirement to gate access to all services before a user completes the HEAA process. Just gating the functionality that poses the risk is a positive step that should minimise disruption to games.

Ultimately, I suppose my final thought is this:

Games are supposed to be fun. They aren’t supposed to be dangerous.
Kids shouldn’t be at risk of CSEA, CSAM, or grooming. They also shouldn’t be at risk of coming into contact with suicide or self-harm content, bullying, abuse or hate content, or violent content.

The experience of playing a game should be something we all look forward to - not something that fills parents with dread or leaves children frightened, scared, sad - or damaged.

It’s all of our jobs to do better.
And the Online Safety Act and Ofcom are taking the legislative and compliance framework steps to ensure that we all collectively do better.
Because children deserve better than we’ve all been able to deliver so far.

Author & about

This guide was written by Andrew Wailes, CEO of PlaySafe ID.
Andrew has a particular focus on gaming tech with reference to changing legislative and compliance landscapes.
His main areas of focus are gaming, gaming tech, Online Safety Act, GDPR, ePrivacy Directive, and COPPA - as well as Apple’s & Google’s policies, and some other random stuff.

Visit the PlaySafe ID Studio/Dev page and learn how we can help you solve your OSA/Ofcom compliance issues today.
