UK's internet watchdog finalizes first set of rules for Online Safety law

On Monday, the UK's internet regulator Ofcom published the first set of final guidelines for online service providers subject to the Online Safety Act. This comes as the sprawling online harms law begins the countdown to its first compliance deadline, which the regulator expects to kick in within three months.

Ofcom has been under pressure to move faster in implementing the online safety regime following riots last summer that were widely perceived to have been fuelled by social media activity, although it is following the process set out by lawmakers, which requires consultation and parliamentary approval of final compliance measures.

“This decision on the Illegal Harms Codes and guidance marks a major milestone, as online providers are now legally required to protect their users from illegal harm,” Ofcom wrote in a press release.

“Providers now have a duty to assess the risk of illegal harms on their services, with a deadline of 16 March 2025. Subject to the Codes completing the Parliamentary process, from 17 March 2025 providers will need to take the safety measures set out in the Codes, or use other effective measures, to protect users from illegal content and activity.”

"We are ready to impose sanctions if providers do not take immediate action to eliminate risks to their services," the statement said.

According to Ofcom, more than 100,000 tech firms may be covered by the law's duty to protect users from a range of illegal content types, in relation to more than 130 “priority offences” covering areas such as terrorism, hate speech, child sexual abuse and exploitation, and fraud and financial crimes.

Failure to comply risks a fine of up to 10% of global annual turnover (or up to £18 million, whichever is greater).

Companies covered range from tech giants to “very small” service providers, with sectors as diverse as social media, dating, gaming, search and pornography affected.

“The duties in the Act apply to providers of services with links to the UK regardless of where in the world they are based,” Ofcom wrote, adding that the number of regulated online services could total more than 100,000 and range from some of the world's largest technology companies to very small services.

The codes and guidance follow consultation, with Ofcom looking at research and gathering responses from stakeholders to help shape these rules since the legislation was passed by parliament last fall and became law in October 2023.

The regulator has outlined measures for user-to-user and search services to reduce risks associated with illegal content. Guidance on risk assessments, record keeping and reviews is summarized in an official document.

Ofcom also published a summary covering each chapter of today's policy statement.

The approach taken by the UK law is the opposite of a one-size-fits-all solution: in general, greater obligations are placed on larger services and platforms where multiple risks may arise than on smaller services with less risk.

However, smaller, lower-risk services are not exempt from obligations either. In fact, many requirements apply to all services, such as having a content moderation system that allows for the rapid removal of illegal content; having a mechanism through which users can submit content complaints; having clear and accessible terms of service; removing the accounts of proscribed organizations; and more. That said, many of these blanket measures are features that mainstream services are likely to offer already.

But it's safe to say that any technology firm offering user-to-user or search services in the UK will need to at least assess how the law applies to its business, if not make operational changes in specific areas of regulatory risk.

For larger platforms with engagement-centric business models, where the ability to monetize user-generated content is linked to keeping a tight hold on people's attention, greater operational changes may be required to avoid falling foul of the law's duties to protect users from myriad harms.

An important lever to accelerate change is the law's provision for criminal liability of senior executives in certain cases, meaning tech CEOs can be held personally liable for some types of non-compliance.

Speaking to BBC Radio 4's Today program on Monday morning, Ofcom CEO Melanie Dawes suggested that 2025 will finally see significant changes in the operation of major technology platforms.

"What we're announcing today is actually a big moment for online security because within three months tech companies will need to start taking appropriate measures," he said. “What will they need to change? They need to change the way algorithms work. They need to test these to ensure that illegal content such as terrorism and hate, misuse of intimate images and much more do not actually appear on our feeds.”

“And if things slip through the net, they're going to have to take them down. For children, we also want their accounts to be set to private so strangers can't contact them,” she added.

However, Ofcom's policy statement is just the start of the regulator fleshing out its requirements, with Dawes saying that greater protections for children would be introduced in the new year.

Therefore, the more significant changes to child safety on platforms that parents are pushing for may not filter through until later in the year.

“In January we will be announcing our requirements for age checks so we know where children are,” Dawes said. “Then in April we will be finalizing the rules on our wider safeguards for children – and that will be about pornography, suicide and self-harm material, violent content and so on, and it will be about that content no longer being fed to children in the way that has become so normal but is really harmful today.”

Ofcom's summary document also notes that further action may be needed to keep pace with technological advances such as the rise of generative artificial intelligence, and indicates that it will continue to examine risks and may further develop requirements for service providers.

The regulator is also planning “crisis response protocols for emergency events” such as last summer's riots; proposals to block the accounts of those who share CSAM (child sexual abuse material); and guidance on using artificial intelligence to tackle illegal harms.


