According to available data, suicide is the second leading cause of
death among 15- to 29-year-olds, and a number of suicides have been
broadcast live on Facebook and other social media sites.
Now, Facebook wants to leverage artificial intelligence to help with prevention.
The company announced yesterday that it is testing the use of AI to
identify potential "suicide or self injury" posts, using pattern
recognition trained on posts that have previously been flagged on the
site.
Its community operations team will then review the posts to decide
whether Facebook should surface crisis resources to the user. For now,
the test is limited to the U.S.
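To make the pattern-recognition idea concrete, here is a minimal,
purely hypothetical sketch in Python: a text classifier trained on
posts that were previously flagged, scoring new posts so that
high-scoring ones can be routed to human reviewers. The example data,
threshold, and model choice are illustrative assumptions and do not
reflect Facebook's actual system.

```python
# Illustrative sketch only (not Facebook's implementation): a simple
# supervised text classifier trained on previously flagged posts,
# used to score new posts for possible routing to human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: post text paired with a label showing
# whether reviewers previously flagged it as concerning (1) or not (0).
posts = [
    "I can't see a way forward anymore",
    "Had a great time at the game tonight",
    "Nobody would miss me if I were gone",
    "Looking forward to the weekend trip",
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus a linear classifier stand in for whatever
# pattern-recognition model is actually used in production.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# A new post scoring above a chosen threshold would be queued for the
# community operations team, who decide whether to surface resources.
new_post = ["I just feel like giving up"]
score = model.predict_proba(new_post)[0][1]
if score > 0.5:
    print("Queue for community operations review, score:", score)
```

The key design point the article describes is that the model does not
act on its own: it only prioritizes posts for human reviewers, who make
the final call on surfacing crisis resources.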
In 2016, Facebook made suicide prevention tools available to its
1.86 billion monthly active users: people can flag questionable posts
for the company to review. Facebook may then suggest that the person in
distress reach out to a friend, offering an optional pre-populated
message to break the ice, and it also provides helpline information and
other resources.
