Legal and Ethical Issues in Facebook Live Killings

1. Discuss whether or not you believe that Facebook has a legal or ethical duty to rescue a crime victim.

Facebook holds an ethical, if not legal, duty to assist crime victims. Although Facebook may not be able to prevent a crime from happening in real time, the company has a responsibility to report such incidents to the police. Beyond reporting incidents, Facebook and other social media platforms should develop proper regulations for censoring violent content. Live-streamed acts of violence are likely to encourage similar crimes in the community. As such, Facebook has an ethical duty to ensure that criminals, including terrorist groups, do not use the platform to encourage crime and other vices. Corporate social responsibility dictates that companies show commitment to the well-being of the larger community (Fordham, Robinson, & Blackwell, 2017). Companies, including social media platforms, must develop strategies that demonstrate concern for local communities and strengthen their relationships with them.

The use of social media platforms has eroded the sense of empathy among users (Bertolotti & Magnani, 2013). This diminishing empathy encourages cyberbullying and violence among users. As such, Facebook must take the leading role in ensuring that users share content responsibly. Although Facebook cannot control what users upload to their accounts, it is responsible for prohibiting the sharing of graphic videos or images that encourage violence or acts of terrorism. In addition, Facebook should actively help crime victims by reporting incidents to the police. Facebook should develop a way of identifying crime victims, since users may not report such incidents in time. Moreover, a lack of empathy among users may make them reluctant to report such incidents at all.

2. Suggest and elaborate on three (3) ways that social media platforms can be more proactive and thorough with their review of the types of content that appear on their sites.

Social media platforms should be more proactive and thorough in reviewing the types of content that appear on their sites. One approach is to encourage users to report inappropriate images or videos. Platforms should dedicate a sufficient workforce to reviewing the videos and images that users report. Such a team should delete inappropriate content from the originating account and from all other accounts that shared it. However, a team that reviews user reports of standards violations may not be enough on its own, because of the possible time lag between an incident occurring and a user reporting it, as the sketch below illustrates.
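For illustration, the short Python sketch below models one hypothetical version of such a review workflow: user reports enter a queue ordered by severity, so reviewers see reports of violence before reports of spam. The category names and priority values are invented for the example and do not reflect any platform's actual system.

```python
from dataclasses import dataclass, field
from queue import PriorityQueue
import time

# Hypothetical severity ordering: lower number = reviewed sooner.
PRIORITY = {"violence": 0, "hate": 1, "nudity": 2, "spam": 3}

@dataclass(order=True)
class Report:
    priority: int
    content_id: str = field(compare=False)
    reason: str = field(compare=False)
    reported_at: float = field(default_factory=time.time, compare=False)

class ReportQueue:
    """Routes user reports to reviewers, most severe first."""

    def __init__(self):
        self._queue = PriorityQueue()

    def submit(self, content_id: str, reason: str) -> None:
        self._queue.put(Report(PRIORITY.get(reason, 9), content_id, reason))

    def next_for_review(self) -> Report:
        # Blocks until a report is available; the reviewer receives the
        # highest-severity report regardless of arrival order.
        return self._queue.get()

if __name__ == "__main__":
    q = ReportQueue()
    q.submit("video-123", "spam")
    q.submit("video-456", "violence")
    print(q.next_for_review().content_id)  # video-456 is reviewed first
```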

Secondly, social media platforms can be more proactive and thorough by engaging editors from different cultural backgrounds. Platforms should hire editors from different countries who understand the local dialects, which makes it easier to review and delete inappropriate content. Thirdly, platforms should educate users on standards of conduct and on inappropriate uses of their services. For instance, they should warn users against posting inappropriate content and make clear that such content may be reported to the police. The fear of legal prosecution may deter some users from posting such material.

3. Propose two (2) safeguards that Facebook and other social media platforms should put into place to help prevent acts of violence from being broadcasted.

The first safeguard that Facebook and other social media platforms may adopt to prevent the broadcasting of acts of violence is the use of artificial intelligence. Artificial intelligence can automatically filter inappropriate content, such as acts of violence, shared on social media platforms. Social media networks should work together to develop artificial intelligence technologies that can filter violent content across multiple platforms, making it easier to catch inappropriate material wherever it is shared. Artificial intelligence can help social media platforms identify inappropriate content quickly. Facebook is currently working on artificial intelligence to help flag inappropriate content (Isaac & Mele, 2017). Other technology giants, such as Google, Amazon, Twitter, and Instagram, are also exploring ways to flag inappropriate content.
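As a simplified illustration of how such automated filtering might operate, the sketch below samples frames from a stream and escalates the stream to human review when a classifier repeatedly scores frames as likely violent. The score_frame function is a hypothetical stand-in for a trained model, and the threshold values are illustrative rather than drawn from any real platform.

```python
from typing import Callable, Iterable

FLAG_THRESHOLD = 0.85   # illustrative value, not a real platform setting
MIN_HITS = 3            # require several suspicious frames to limit false alarms

def should_flag_stream(
    frames: Iterable[bytes],
    score_frame: Callable[[bytes], float],
) -> bool:
    """Return True if enough sampled frames look violent.

    score_frame is a hypothetical classifier returning the model's
    estimated probability (0.0-1.0) that a frame depicts violence.
    """
    hits = 0
    for frame in frames:
        if score_frame(frame) >= FLAG_THRESHOLD:
            hits += 1
            if hits >= MIN_HITS:
                # Escalate to a human reviewer rather than auto-deleting,
                # since classifiers produce false positives.
                return True
    return False
```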

The second safeguard for preventing acts of violence from being broadcast is hiring editors to monitor live streams in real time. Facebook should hire editors across various geographic regions to monitor live streams that go viral and to make decisions in real time. For instance, editors could discontinue live streams that show acts of violence and report them to the police when illegal activities are involved. This would be critical in preventing the full viewing and sharing of inappropriate content on Facebook and other social media sites. It would also ensure that law enforcement officers act quickly to contain acts of violence or arrest the perpetrators.

4. Conduct some research to determine whether or not Facebook has an Ethics Officer or Oversight Committee. If so, discuss the key functions of these positions. If not, debate on whether or not they should create these roles.

Facebook has an Ethics Committee that ensures content posted on the site adheres to Facebook's community standards. Facebook monitors inappropriate content and activity using four teams: the Safety Team, the Hate and Harassment Team, the Abusive Content Team, and the Access Team (Cluley, 2012). Each of these teams investigates reports made by users. The Safety Team reviews user reports concerning violent behavior, including violent acts on live streams and incidents of rape and torture, among others. The Hate and Harassment Team reviews incidents involving hate speech and social media bullying. The Abusive Content Team deals with sexually explicit content, fraud, and spam messages (Cluley, 2012). The Access Team handles access issues, helping restore access to users who have lost their accounts through various fraud schemes.

These teams review user reports 24 hours a day and are located in the United States and India (Cluley, 2012). The teams comprise ethnically diverse individuals and can provide support in 24 different languages. The teams make judgments by considering whether the reported issue violates Facebook's community standards, a set of rules and policies that guide users on the types of content they may post. The main aim of the community standards is to help Facebook users feel secure; the standards let users know what they can share with other users and what violates Facebook policy.

5. Propose two (2) changes Facebook should adopt to encourage ethical use of their platform.

Facebook should adopt changes that encourage ethical use of its platform. First, Facebook must adopt proactive measures for dealing with violence. Proactive measures prevent users from posting inappropriate content in the first place. Currently, Facebook relies on reactive measures: it waits for users to report problematic content and then takes action. The disadvantage is that a video may already have been shared many times, making its spread difficult to curb. Artificial intelligence could enable Facebook to restrict users from uploading content that promotes violence. For example, Google recently unveiled the Cloud Video Intelligence API, a technology that allows users to scan or search for particular parts of a video (Bell, 2017). Such technologies can help identify and block videos that promote violence or sexually explicit material.
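The sketch below shows roughly how this API's explicit content detection feature can be invoked from Python. The storage path is a placeholder, and this particular feature targets adult content at the frame level rather than violence specifically, but it illustrates the kind of automated, frame-by-frame scanning described above.

```python
# A minimal sketch of calling Google's Cloud Video Intelligence API from
# Python. The gs:// path is a placeholder for an uploaded video.
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

operation = client.annotate_video(
    request={
        "features": [videointelligence.Feature.EXPLICIT_CONTENT_DETECTION],
        "input_uri": "gs://example-bucket/uploaded-video.mp4",  # placeholder
    }
)
result = operation.result(timeout=300)  # blocks until the analysis finishes

# Each analyzed frame carries a likelihood rating the platform could act on.
for frame in result.annotation_results[0].explicit_annotation.frames:
    seconds = frame.time_offset.total_seconds()
    likelihood = videointelligence.Likelihood(frame.pornography_likelihood)
    print(f"{seconds:.1f}s: {likelihood.name}")
```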

Facebook should also consider delaying the streaming of live video to give editors time to check the content or nature of the video. Increased monitoring of live videos can significantly reduce the incidence of violent broadcasts. This would involve increasing the number of editors so that the nature of live videos can be checked quickly.
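Conceptually, such a delay works like a fixed-size buffer placed between the broadcaster and the audience. The sketch below is a simplified model of this idea; the 30-second delay and one-chunk-per-second pacing are arbitrary illustrative choices, not an actual Facebook mechanism.

```python
from collections import deque
from typing import Optional

class DelayedStream:
    """Holds the newest N seconds of a live stream back from viewers,
    giving a reviewer a window in which to cut the broadcast."""

    def __init__(self, delay_seconds: int = 30, chunks_per_second: int = 1):
        self._buffer = deque()
        self._capacity = delay_seconds * chunks_per_second
        self.terminated = False

    def ingest(self, chunk: bytes) -> Optional[bytes]:
        """Accept a new chunk from the broadcaster; return the chunk that
        is now old enough to release to viewers, if any."""
        if self.terminated:
            return None
        self._buffer.append(chunk)
        if len(self._buffer) > self._capacity:
            return self._buffer.popleft()
        return None  # still filling the delay window

    def terminate(self) -> None:
        # An editor's decision: drop everything still buffered so the
        # flagged footage never reaches viewers.
        self._buffer.clear()
        self.terminated = True
```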

References

Bell, K. (2017). A new Google tool actually lets you search videos for specific objects. Retrieved from http://mashable.com/2017/03/08/google-video-intelligence-api/#Ac7lejTiQ8qF

Bertolotti, T., & Magnani, L. (2013). A philosophical and evolutionary approach to cyberbullying: Social networks and the disruption of sub-moralities. Ethics and Information Technology, 15(4), 285-299. doi:10.1007/s10676-013-9324-3

Cluley, G. (2012). What happens when you report abuse on Facebook? Retrieved from https://nakedsecurity.sophos.com/2012/06/21/what-happens-report-abuse-facebook/

Fordham, A. E., Robinson, G. M., & Blackwell, B. D. (2017). Corporate social responsibility in resource companies – opportunities for developing positive benefits and lasting legacies. Resources Policy, 52, 366-376. doi:10.1016/j.resourpol.2017.04.009

Isaac, M., & Mele, C. (2017, April 17). A murder posted on Facebook prompts outrage and questions over responsibility. The New York Times. Retrieved from https://www.nytimes.com/2017/04/17/technology/facebook-live-murder-broadcast.html
