Before we do the work of removing content that violates our policies, we have to make sure the line between what we remove and what we allow is drawn in the right place, with a goal of preserving free expression while also protecting and promoting a vibrant community.

Developing policies for a global platform

To that end, we have a dedicated policy development team that systematically reviews all of our policies to ensure that they are current, keep our community safe, and do not stifle YouTube’s openness.

After reviewing a policy, we often discover that fundamental changes aren’t needed, but still uncover areas that are vague or confusing to the community. As a result, many updates are actually clarifications to our existing guidelines. For example, earlier this year we provided more detail about when we consider a “challenge” to be too dangerous for YouTube. Since 2018, we’ve made dozens of updates to our enforcement guidelines, many of them minor clarifications but some more substantive.

For particularly complex issues, we may spend several months developing a new policy. During this time we consult outside experts and YouTube creators to understand how our current policy is falling short, and consider regional differences to make sure proposed changes can be applied fairly around the world.

Our hate speech update represented one such fundamental shift in our policies. We spent months carefully developing the policy and working with our teams to create the necessary trainings and tools required to enforce it. The policy was launched in early June, and as our teams review and remove more content in line with the new policy, our machine detection will improve in tandem.

Though it can take months for us to ramp up enforcement of a new policy, the profound impact of our hate speech policy update is already evident in the data released in this quarter’s Community Guidelines Enforcement Report: the spikes in removal numbers are in part due to the removal of older comments, videos and channels that were previously permitted.

In April 2019, we announced that we are also working to update our harassment policy, including creator-on-creator harassment. We’ll share our progress on this work in the coming months.

Using machines to flag bad content

Once we’ve defined a policy, we rely on a combination of people and technology to flag content for our review teams. We sometimes use hashes (or “digital fingerprints”) to catch copies of known violative content before they are ever made available to view. For some content, like child sexual abuse images (CSAI) and terrorist recruitment videos, we contribute to shared industry databases of hashes to increase the volume of content our machines can catch at upload.
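The hash-matching step lends itself to a brief illustration. Below is a minimal Python sketch of exact-match screening at upload time; it is not YouTube’s implementation, and everything in it is assumed for illustration: the in-memory “database,” the function names, and the use of a plain SHA-256 digest (production systems rely on perceptual and video hashes that survive re-encoding, which plain digests do not).

```python
import hashlib
from pathlib import Path

# Hypothetical stand-in for a shared industry hash database. A raw
# SHA-256 digest only catches byte-identical copies; it keeps the
# sketch simple but is far weaker than real perceptual/video hashing.
KNOWN_VIOLATIVE_HASHES: set[str] = {
    "9f2feb0f1ef425b292f2f94bc8482494df430413ad11bd82b67bdb4ae3d01b2a",
}

def file_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash an upload in chunks so large video files never sit fully in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def screen_at_upload(path: Path) -> str:
    """Reject exact re-uploads of known violative content before publication."""
    if file_digest(path) in KNOWN_VIOLATIVE_HASHES:
        return "blocked"    # never made available to view
    return "accepted"       # continues into the normal flagging pipeline
```

The point the sketch preserves is ordering: the lookup happens at upload, before the video is ever available to view, rather than after publication.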
In 2017, we expanded our use of machine learning technology to help detect potentially violative content and send it for human review. Machine learning is well suited to detecting patterns, which helps us find content similar (but not exactly the same) to other content we’ve already removed, even before it’s ever viewed. These systems are particularly effective at flagging content that often looks the same, such as spam or adult content. Machines can also help to flag hate speech and other violative content, but these categories are highly dependent on context, which highlights the importance of human review to make nuanced decisions. Still, over 87% of the 9 million videos we removed in the second quarter of 2019 were first flagged by our automated systems.

We’re investing significantly in these automated detection systems, and our engineering teams continue to update and improve them month by month. For example, an update to our spam detection systems in the second quarter of 2019 led to a more than 50% increase in the number of channels we terminated for violating our spam policies.
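This division of labor, where confident pattern matches can be acted on automatically while context-dependent categories go to human reviewers, can be sketched as a simple scoring-and-routing step. The sketch below is hypothetical throughout: the thresholds, the names, and the idea of a single confidence score are assumptions for illustration, not a description of YouTube’s systems.

```python
import heapq
from dataclasses import dataclass, field

# Illustrative thresholds; a real system would tune these per policy area.
AUTO_FLAG_THRESHOLD = 0.90     # pattern-like content (e.g. spam): act automatically
HUMAN_REVIEW_THRESHOLD = 0.50  # context-dependent (e.g. hate speech): a person decides

@dataclass(order=True)
class ReviewItem:
    neg_score: float                      # negated so the highest scores pop first
    video_id: str = field(compare=False)

def route_upload(video_id: str, score: float, review_queue: list[ReviewItem]) -> str:
    """Route a model confidence score to automatic action or human review."""
    if score >= AUTO_FLAG_THRESHOLD:
        return "auto_flagged"             # clear pattern match, handled by machines
    if score >= HUMAN_REVIEW_THRESHOLD:
        heapq.heappush(review_queue, ReviewItem(-score, video_id))
        return "queued_for_human_review"  # nuanced call left to reviewers
    return "no_action"

queue: list[ReviewItem] = []
print(route_upload("vid-spam", 0.97, queue))  # auto_flagged
print(route_upload("vid-edge", 0.62, queue))  # queued_for_human_review
```

A priority queue is one simple way to put the highest-confidence cases in front of reviewers first, which shortens the time a likely violation stays viewable.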
Removing content before it’s widely viewed

We go to great lengths to make sure content that breaks our rules isn’t widely viewed, or even viewed at all, before it’s removed. As noted above, improvements in our automated flagging systems have helped us detect and review content even before it’s flagged by our community, and consequently more than 80% of those auto-flagged videos were removed before they received a single view in the second quarter of 2019.

We also recognize that the best way to quickly remove content is to anticipate problems before they emerge. In January of 2018 we launched our Intelligence Desk, a team that monitors the news, social media and user reports in order to detect new trends surrounding inappropriate content, and works to make sure our teams are prepared to address them before they can become a larger issue.

We’re determined to continue reducing exposure to videos that violate our policies.