LONDON — The United Kingdom officially brought its major online safety law into effect on Monday, paving the way for tighter oversight of harmful online content and potentially massive fines for technology giants like Google and TikTok.
Ofcom, Britain's media and telecoms watchdog, has published the first edition of its codes of practice and guidance for tech firms, setting out what they should do to tackle illegal harms such as terrorism, hate, fraud and child sexual abuse on their platforms.
The measures form the first set of duties imposed by the regulator under the Online Safety Act, a sweeping law that requires technology platforms to do more to combat illegal content online.
The Online Safety Act imposes a so-called "duty of care" on these technology firms to ensure that they take responsibility for harmful content uploaded and disseminated on their platforms.
Although the act passed into law in October 2023, it had not yet fully come into force; Monday's development effectively marks the official entry into force of its safety duties.
Ofcom said that technology platforms will have until March 16, 2025, to complete risk assessments of illegal harms, effectively giving them three months to bring their platforms into compliance with the rules.
Once that deadline passes, platforms must start implementing measures to mitigate the risks of illegal harms, including better moderation, easier reporting and built-in safety tests, Ofcom said.
"We will be watching the industry closely to ensure that firms meet the strict safety standards set for them under our first code and guidance, with more requirements to follow quickly in the first half of the year next," said Ofcom Chief Executive Melanie Dawes. in a statement on Monday.
Risk of heavy fines, service suspensions
Under the Online Safety Act, Ofcom can impose fines of up to 10% of companies' global annual revenue if they are found to be breaking the rules.
For repeated breaches, individual senior managers can face possible jail time, while in the most serious cases, Ofcom can seek a court order to block access to a service in the UK or restrict its access to payment providers or advertisers.
Ofcom was under pressure to strengthen the law earlier this year following far-right riots in the UK instigated in part by disinformation spread on social media.
The duties will cover social media firms, search engines, messaging, gaming and dating apps, as well as pornography and file-sharing sites, Ofcom said.
Under the first-edition codes, reporting and complaints functions should be easier to find and use. For high-risk platforms, firms will be required to use a technology called hash matching to detect and remove child sexual abuse material (CSAM).
Hash-matching tools link known CSAM images from police databases to digital fingerprints known as "hashes" for each piece of content, helping social media sites' automated filtering systems recognize and remove them.
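The mechanism can be sketched briefly in code. The following is a minimal, hypothetical illustration of hash matching in Python, assuming exact cryptographic hashes; real deployments use perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, and draw on hash databases maintained with law enforcement. Every name and value below is illustrative.

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for a police-maintained hash database.
# (Placeholder value only; real systems store perceptual hashes,
# not SHA-256 digests, so they can match altered copies.)
KNOWN_HASHES: set[str] = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def sha256_of_file(path: Path) -> str:
    """Compute a file's SHA-256 hex digest, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_known_content(path: Path) -> bool:
    """Flag an upload whose fingerprint appears in the known-hash set."""
    return sha256_of_file(path) in KNOWN_HASHES
```

An exact digest like SHA-256 only catches byte-identical copies, which is why production filters favor perceptual hashing that tolerates cropping and re-compression.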
Ofcom stressed that the codes published on Monday were only the first set, and that the regulator would consult on additional codes in the spring of 2025, including on blocking accounts found to be sharing CSAM and on the use of AI to tackle illegal harms.
"Ofcom's illegal content codes are a material step in online safety which means that from March, platforms will have to proactively stop terrorist material, child abuse and intimate images, and a host of content another illegal one, bridging the gap between the laws that protect us in the offline and online world," British Technology Minister Peter Kyle said in a statement on Monday.
"If the platforms fail to step up the regulator has my support to use all its powers, including issuing fines and asking the courts to block access to the sites," added Kyle.