For many teens, cruel digital messages are a disturbing part of their daily social experience. According to the U.S. Department of Health and Human Services, 15 percent of high school students were cyberbullied in the past year, and more than 55 percent of LGBT students experienced cyberbullying. Unlike face-to-face bullying, which is often confined to school grounds, cyberbullying can affect students 24/7, leading to increased anxiety, lowered academic performance and increased risk of suicide.
A team of researchers at CU Boulder wants to help protect children from cyberbullying. But unlike others working on the issue, who mainly address it from a sociological angle, the team is tackling cyberbullying using computer science.
Shivakant Mishra, Richard Han and Qin (Christine) Lv from the Department of Computer Science founded the Cybersafety Research Center to explore this emerging research area, which crosses the boundaries of several traditional computing research areas, including security, privacy and reliability.
"Cybersafety addresses misuse of computing systems that falls into a gray legal area," says Mishra. "When we look at the computing research being done, cybersafety doesn't fit into any one of those areas."
One of their first projects was to create a system that would detect misbehavior, like flashing, on video chat websites. When their SafeVchat system was implemented on Chatroulette, a site that matches up random strangers for video chats, it successfully reduced the share of misbehaving users from 25 percent to less than 2 percent.
Now, the group is taking on cyberbullying and developing tools that would recognize it in real time and notify parents. That project brings a new set of challenges, Mishra says, from access to data to successfully identifying bad behavior.
The team started with three social media sites: Ask.fm, where anonymous users can post and answer text-based questions; Instagram, the photo-sharing platform; and Vine, where users post short videos. Because posts on these sites are publicly accessible, the researchers were able to collect large amounts of data and begin manually marking instances of cyberbullying.
"A key problem is that bullying is so subtle and depends on the context of the language," says Mishra. "The language is social and cultural, so even our team doing the marking may not be able to figure that out."
The team developed an initial set of classifiers: algorithms that detect the frequency of negative words, then use semantic analysis to determine whether those words are being used in a negative context. They are now moving on to the second step, which is to improve the accuracy of their classifiers.
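The article does not publish the team's actual classifiers, but the two-stage idea it describes can be illustrated with a toy sketch: count negative words from a lexicon, then apply a crude stand-in for semantic analysis to decide whether the negativity is genuine. The word lists, threshold, and rules below are all hypothetical, chosen only to show the shape of the approach.

```python
# Toy two-stage detector: lexicon frequency, then a crude context check.
# NEGATIVE_WORDS and SOFTENERS are invented for illustration; real
# systems would use far richer lexicons and semantic models.

NEGATIVE_WORDS = {"ugly", "stupid", "loser", "hate", "worthless"}

# Cues that often soften or joke away a negative word.
SOFTENERS = {"not", "jk", "kidding", "lol"}

def negative_word_ratio(tokens):
    """Fraction of tokens drawn from the negative lexicon."""
    if not tokens:
        return 0.0
    return sum(t in NEGATIVE_WORDS for t in tokens) / len(tokens)

def is_negative_context(tokens):
    """Stand-in for semantic analysis: negativity counts only
    when no softening cue appears in the post."""
    return not any(t in SOFTENERS for t in tokens)

def flag_post(text, threshold=0.15):
    """Flag a post when negative words are frequent AND unsoftened."""
    tokens = text.lower().split()
    return negative_word_ratio(tokens) >= threshold and is_negative_context(tokens)

print(flag_post("you are such a stupid loser"))    # True: dense, unsoftened
print(flag_post("lol jk you know i hate losing"))  # False: softened banter
```

The two examples show exactly the subtlety Mishra describes: the same lexicon word ("hate", "loser") is hostile in one post and harmless banter in another, which is why frequency counting alone is not enough.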
"Classification algorithms are usually used on data that is ambiguous in nature," says Mishra. "You never have classifiers that are 100 percent accurate, but you try to get them as good as they can be."
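Getting classifiers "as good as they can be" is usually measured against hand-labeled data with precision (how many flagged posts were real bullying) and recall (how many real cases were caught). A minimal sketch, using made-up predictions and labels for five posts:

```python
# Standard precision/recall computation against human labels.
# The pred/labels data below is hypothetical, for illustration only.

def precision_recall(predicted, actual):
    """Compare boolean predictions against ground-truth labels."""
    tp = sum(p and a for p, a in zip(predicted, actual))        # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))    # false alarms
    fn = sum(a and not p for p, a in zip(predicted, actual))    # missed cases
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

pred   = [True, True, False, False, True]   # classifier output
labels = [True, False, False, True, True]   # human verdicts

p, r = precision_recall(pred, labels)
print(round(p, 3), round(r, 3))  # 0.667 0.667
```

The parent-feedback loop described below is effectively a way of growing the `labels` list: each confirmation or correction becomes new ground truth for retraining.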
To improve their classifiers, the team has to go to the end users: students and parents. They have partnered with a local school district and are developing a website where parents could register their children's social media accounts in order to receive notifications of potential abuse. The parents would then provide the team with feedback on whether those instances were, in fact, cyberbullying. The team would also like to involve students in helping them to analyze posts and identify abuse.
"One of the issues is that it's typically very difficult to actually know the students being bullied because they don't talk about it," says Mishra. "Personally, we don't know anyone."