Twitch has been hounded for months after a September exposé revealed how many sex offenders were prowling the halls of the streaming site and preying on youngsters who shouldn’t have been there in the first place. The company now claims to have a system in place to prevent child sexual abuse, but it has offered few specifics.
In a blog post on Tuesday, Twitch said it was developing an “evolving strategy” for reducing harm to children. The user-streaming platform announced that “possibly compromised accounts” — accounts that appear to have been created by children under 13 posing as older users — will require phone verification before going live. The company added that it is working to delete more accounts belonging to users under 13. Additionally, the Amazon-owned platform is now restricting certain search terms in its in-app browser.
The company was deliberately vague about which search terms it blocks and how its phone verification system works. Although it said it was developing a learning system, it did not disclose any details of that system’s planned development or implementation.
Gizmodo reached out to Twitch to ask how the company’s phone verification system works, but a company spokesperson said Twitch is being vague on purpose.
“We will always be mindful of the amount of specificity we provide [in relation to Twitch users and the general public] to avoid providing information to malicious persons to circumvent our efforts. This is especially true when it comes to children and predators,” the company wrote on its blog. “There is no single solution to prevent predators.”