
Reddit to update web standard to block automated website scraping



Social media platform Reddit said on Tuesday (June 25) that it will update a web standard it uses to block automated data scraping from its website, following reports that AI startups were bypassing the rule to gather content for their systems.

The move comes at a time when artificial intelligence firms have been accused of plagiarising content from publishers to create AI-generated summaries without giving credit or asking for permission.

Reddit said that it would update the Robots Exclusion Protocol, or “robots.txt,” a widely accepted standard meant to determine which parts of a site are allowed to be crawled.
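For context, robots.txt is a plain text file served from a site's root that lists which crawlers may visit which paths; compliance is voluntary, which is why the article notes that some AI startups simply ignore it. The sketch below is a minimal illustration, not Reddit's actual file or policy: it feeds a hypothetical robots.txt into Python's standard urllib.robotparser and checks whether two made-up user agents would be permitted to fetch a page.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content for illustration only; not Reddit's real file.
ROBOTS_TXT = """\
User-agent: *
Disallow: /

User-agent: TrustedSearchBot
Allow: /
Crawl-delay: 5
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler checks the rules before requesting a URL.
for agent in ("TrustedSearchBot", "SomeAIScraperBot"):
    allowed = parser.can_fetch(agent, "https://example.com/r/some-thread")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```

Because the file is only advisory, Reddit's update is paired with the enforcement measures described below.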

The company also said it will maintain rate-limiting, a technique used to cap the number of requests from a single entity, and will block unknown bots and crawlers from data scraping (collecting and saving raw information) on its website.
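As a rough sketch of the rate-limiting idea, the snippet below implements a fixed-window counter keyed by a client identifier; the limit, window length, and example IP address are illustrative assumptions and are unrelated to Reddit's actual thresholds.

```python
import time
from collections import defaultdict

class RateLimiter:
    """Fixed-window limiter: at most `limit` requests per `window` seconds per client."""

    def __init__(self, limit: int = 100, window: float = 60.0):
        self.limit = limit
        self.window = window
        self._counts = defaultdict(int)  # client -> requests seen in current window
        self._starts = {}                # client -> start time of current window

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        start = self._starts.get(client_id)
        if start is None or now - start >= self.window:
            # Start a fresh window for this client and reset its counter.
            self._starts[client_id] = now
            self._counts[client_id] = 0
        if self._counts[client_id] >= self.limit:
            return False  # over the limit: a server would typically answer HTTP 429
        self._counts[client_id] += 1
        return True

# Example: a burst of requests from one hypothetical, unidentified crawler.
limiter = RateLimiter(limit=5, window=60.0)
print([limiter.allow("203.0.113.7") for _ in range(8)])  # first 5 True, then False
```

In production this kind of check usually sits at the edge, in a load balancer or API gateway, rather than in application code, but the counting logic is the same.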

More recently, robots.txt has become a key tool that publishers employ to prevent tech companies from using their content free of charge to train AI algorithms and create summaries in response to some search queries.




Business Asia