
AI Exploits Our Social Media Habits To Supercharge Scam Attacks

President of Univention North America, making sure you stay in control of your data, your company and your future.

For billions of users, social media is an innocent way to stay connected with friends and family. We share parts of our lives and enjoy the successes of our friends. While the risks of addiction, cyberbullying and feelings of inadequacy are real and well-documented, they only affect a minority of users.

But there’s another dark side that’s getting less attention. With advances in AI, image recognition and language processing, the wealth of data available on social media is becoming a new gold mine for cybercriminals. Automatically scanning billions of data points for a couple of cents can reveal patterns and attack vectors that previously required manual work, or required tricking users into handing over the information a scammer needs.

In my work, I have identified three specific attack vectors that AI supercharges, making them more dangerous and requiring us to take countermeasures quickly.

Look At My Car!

Since the dawn of the internet, security questions such as “What was your first car?” “Where did you meet your spouse?” or “What was the name of your first pet?” have been part of our password reset policies. For the past 20 years, experts have warned users not to answer them honestly or reuse the answers. Yet most of us already struggle to remember our passwords. Remembering deliberately wrong answers to three questions is almost impossible unless you write them down, which opens another door to abuse.

And there’s another problem. Most of us like to show off with pictures. We post photos of ourselves standing in front of our first car instead of writing “My first car was an XYZ.” For a couple of years, apps have been able to recognize the make and model when you carefully photograph the back or front of a car. With the advancements in AI-assisted image recognition, though, several companies can now detect a car’s make and model from any angle, even when a person obstructs parts of its features. Combine that with text recognition trained on sweet-16 presents, driver’s license celebrations or similar events, and we have the answer to the first of the three notorious questions. The same is true for the wealth of information contained in digital wedding albums and pet pictures, or for finding the house number of your first apartment.

Consequently, it is now possible to automate recognizing images and finding the answers to standard security questions.
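To illustrate how little effort this kind of scanning takes, here is a minimal Python sketch that walks a folder of scraped photos and flags the ones that appear to contain a car. It uses an off-the-shelf ImageNet classifier from torchvision as a stand-in; a real attacker would swap in a classifier fine-tuned on vehicle make and model data, and the folder name, label hints and confidence threshold here are illustrative assumptions rather than any specific tool.

```python
# Minimal sketch: scan scraped photos and flag likely car pictures.
# The generic ImageNet classifier is a stand-in; recovering an exact make and
# model would require a classifier fine-tuned on vehicle data (assumption).
from pathlib import Path

import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

VEHICLE_HINTS = ("car", "convertible", "pickup", "minivan", "jeep", "cab", "limousine")

def flag_car_photos(photo_dir: str, threshold: float = 0.5):
    """Return (filename, label, confidence) for photos that likely show a vehicle."""
    hits = []
    for path in sorted(Path(photo_dir).glob("*.jpg")):
        image = Image.open(path).convert("RGB")
        with torch.no_grad():
            logits = model(preprocess(image).unsqueeze(0))
        probabilities = torch.softmax(logits, dim=1)[0]
        confidence, index = probabilities.max(dim=0)
        label = labels[int(index)]
        if confidence >= threshold and any(hint in label.lower() for hint in VEHICLE_HINTS):
            hits.append((path.name, label, round(float(confidence), 2)))
    return hits

if __name__ == "__main__":
    # "scraped_profile_photos" is a placeholder directory name.
    print(flag_car_photos("scraped_profile_photos"))
```

The same loop generalizes to pets, wedding venues or house numbers once a suitable classifier or text recognition model is plugged in, which is exactly what makes this automation worrying.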

Object recognition and classification have also become better at identifying the backgrounds of images. Deep learning has enhanced the automated recognition of places and locations, especially tourist hot spots. Security experts have long warned that the metadata in pictures can increase your risk, online and in real life. Combined with image recognition, you don’t need the metadata anymore.

My Voice Is My Password, But Not Really

Even new technologies aren’t safe from AI-based attacks. Over the past three years, banks worldwide have introduced biometrics to enhance security. The two most common attributes were face profiles and voice samples.

Unfortunately, faking a voice has become good enough for researchers to break into their own accounts. While these experts deliberately provided the voice samples used, millions of us also do so as we go about our daily online lives, albeit without realizing the risks involved. From TikTok videos to podcasts, the world is full of training material to clone your voice.

Combine that with cloud computing and large-scale data operations, and one can now download millions of audio samples, build a corresponding model and let it loose to create deep fakes for entertainment or more nefarious uses.

Trust Me, I’m An Engineer!

If you search your favorite video site for scams, technical support fraud is among the most common and somewhat hilarious attempts at deception you will find. Typically, the posted clips involve a cybersecurity researcher fighting back or stringing the scammer along, wasting their time. These types of ruses are popular enough that the FBI regularly updates its warnings about the latest tactics used.

All these videos have one thing in common: The scammers do most of the talking to try to convince potential victims of the legitimacy of their request. Anyone who has ever done tech support, myself included, knows that real tech support staff prefer that the customer does the talking (or, better yet, is put on hold) while they solve the issue.

AI is about to change that, letting scammers up their game. New tools are fast enough to generate realistic talking points on the fly. When read out by a scammer, those scripts make them sound like support engineers. AI scripts can quickly adapt to the experience, company and position someone has shared on social media. If you’ve mentioned that your company’s support provider is XYZ, you are much more likely to accept an unsolicited call offering assistance with your company laptop or phone, even if they’ve never called you before.

What’s next? Combining those pernicious scripts with deep fake AI means the hackers no longer have to place those scam calls themselves. Technology has come full circle for maximum damage.

The Same, Yet New

To be fair, all social engineering scams predate social media’s rise and the invention of AI. Our trust in social connections and hierarchies has developed over millions of years. But while none of these scams are new, the combination of different subsets of AI and massive amounts of data available from our social media profiles presents a greater risk to our digital identity than ever before.

Therefore, it is important to build guardrails and safety mechanisms that require a human touch to pass. For starters, how about banning the well-worn three security questions and not relying on voice identification when there are other, better options out there that are not as easily fooled? And finally, what about putting a higher burden of proof on tech support calls? Two-factor authentication and other enhanced security options have made enterprise app use safer. Why not apply them to these scenarios as well? Technology is inherently dual use, so let’s make sure it serves us, the legitimate users, not the bad actors.
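As one concrete example of such a guardrail, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, using the pyotp library. The names and the help-desk framing are placeholders, and the secret handling is deliberately simplified; in practice the secret would be provisioned and stored securely, not kept in a script.

```python
# Minimal TOTP sketch with pyotp: before acting on an unsolicited "support"
# call, the help desk (or the user) can demand a code that only the holder of
# the enrolled secret can produce. Secret storage is simplified here.
import pyotp

# Enrollment: generate a shared secret and a provisioning URI for an
# authenticator app (the account and issuer names below are placeholders).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="jane.doe@example.com", issuer_name="ExampleCorp IT"))

# Verification: the caller must supply the current six-digit code.
claimed_code = input("Code from your authenticator app: ")
if totp.verify(claimed_code, valid_window=1):  # allow one time-step of clock drift
    print("Code accepted. Proceed with the support request.")
else:
    print("Code rejected. Treat the call as unverified.")
```

The same challenge-response idea raises the burden of proof on unsolicited tech support calls: no valid code, no remote access.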



Is social media ruining our lives?

Marcus is well aware of the impact of technology on our lives, not just through his work but from his leisure activities too, as he would often notice that the friends he played football with would avidly check their online bets during half-time.

Although Marcus’s focus overall is on the “Tech for good” movement, he recognises that there are a lot of problems with a life online.

Among the many downsides he references are: addiction, isolation, depression, anxiety, psychological manipulation, social division, fake news and misinformation.

Tristan acknowledges that there are good things social media can do, for example in reconnecting people and in streamlining healthcare. However, he believes that it’s ultimately skewed against us.

“We are worth more to social media companies when we’re addicted, outraged, polarised, narcissistic, sleepless and disinformed,” says Tristan, “because that means the business model of getting our attention is successful.”


Can likes change minds? How social media influences public opinion and news circulation

Social media use has been shown to harm mental health and well-being and to increase political polarization.

But social media also provides many benefits, including facilitating access to information, enabling connections with friends, serving as an outlet for expressing opinions and allowing news to be shared freely.

To maximize the benefits of social media while minimizing its harms, we need to better understand the different ways in which it affects us. Social science can contribute to this understanding. I recently conducted two studies with colleagues to investigate and disentangle some of the complex effects of social media.

Social media likes and public policy

In a recently published article, my co-researchers (Pierluigi Conzo, Laura K. Taylor, Margaret Samahita, and Andrea Gallice) and I examined how social media endorsements, such as likes and retweets, can influence people’s opinions on policy issues.

We conducted an experimental survey in 2020 with respondents from the United States, Italy and Ireland. In the study, we showed participants social media posts about COVID-19 and the tension between economic activity and public health. Pro-economy posts prioritized economic activities over the elimination of COVID-19. For instance, they advocated for reopening businesses despite potential health risks.

Pro-public health posts, on the other hand, prioritized the elimination of COVID-19 over economic activities. For example, they supported the extension of lockdown measures despite the associated economic costs.

We then manipulated the perceived level of support within these social media posts. One group of participants viewed pro-economy posts with a high number of likes and pro-public health posts with a low number of likes, while another group viewed the reverse.

Depending on the treatment, individuals were exposed to the same messages with different levels of endorsement, so that either the pro-economy tweet or the pro-public health tweet appeared to be the more popular one.

After participants viewed the posts, we asked whether they agreed with various pandemic-related policies, such as restrictions on gatherings and border closures.

Overall, we found that the perceived level of support of the social media posts did not affect participants’ views — with one exception. Participants who reported using Facebook or Twitter for more than one hour a day did appear to be influenced. For these respondents, the perceived endorsements in the posts affected their policy preferences.

Participants who viewed pro-economy posts with a high number of likes were less likely to favour pandemic-related restrictions, such as prohibiting gatherings. Those who viewed pro-public health posts with a high number of likes were more likely to favour restrictions.
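For readers curious about what this kind of heterogeneity analysis looks like in practice, the sketch below tabulates support for restrictions by treatment arm and by heavy versus light platform use. The data file, column names and one-hour cutoff are hypothetical placeholders, not the study’s actual dataset or code.

```python
# Illustrative only: hypothetical survey data, not the authors' dataset or code.
# Compares average support for restrictions across treatment arms, separately
# for heavy (>1 hour/day) and light social media users.
import pandas as pd

df = pd.read_csv("survey_responses.csv")  # placeholder file name
# Assumed columns: treatment ("pro_economy_popular" / "pro_health_popular"),
# daily_social_media_minutes, supports_restrictions (0/1).

df["heavy_user"] = df["daily_social_media_minutes"] > 60
summary = (
    df.groupby(["treatment", "heavy_user"])["supports_restrictions"]
      .mean()
      .unstack("heavy_user")
      .rename(columns={False: "light users", True: "heavy users"})
)
print(summary.round(2))
```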

Social media metrics can be an important mechanism through which online influence occurs. Though not all users pay attention to these metrics, those that do can change their opinions as a result.

Active social media users in our survey were also more likely to report being politically engaged. They were more likely to have voted, and they discussed policy issues with friends and family (both online and offline) more frequently. These perceived metrics could, therefore, also have effects on politics and policy decisions.

Twitter’s retweet change and news sharing

In October 2020, a few weeks before the U.S. presidential election, Twitter changed the functionality of its retweet button. The modified button prompted users to share a quote tweet instead, encouraging them to add their own commentary.

Twitter hoped that this change would encourage users to reflect on the content they were sharing and to slow down the spread of misinformation and false news.

In a recent working paper, my co-researcher Daniel Ershov and I investigated how Twitter’s change to its user interface affected the spread of information on the platform.

We collected Twitter data for popular U.S. news outlets and examined what happened to their retweets after the change was implemented. Our study revealed that this change had significant effects on news diffusion: on average, retweets for news media outlets fell by over 15%.
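As a rough illustration of the kind of before-and-after comparison involved (not the authors’ actual code or data), the sketch below computes each outlet’s average daily retweets before and after the interface change from a hypothetical CSV of daily counts; the cutoff date used here is an assumption.

```python
# Illustrative only: hypothetical data, not the study's code.
# Average daily retweets per outlet before vs. after Twitter's switch to the
# quote-tweet prompt; the exact cutoff date is an assumption.
import pandas as pd

CHANGE_DATE = pd.Timestamp("2020-10-20")

df = pd.read_csv("outlet_daily_retweets.csv", parse_dates=["date"])
# Assumed columns: outlet, date, retweets.

df["period"] = df["date"].ge(CHANGE_DATE).map({True: "post", False: "pre"})
means = df.groupby(["outlet", "period"])["retweets"].mean().unstack("period")
means["pct_change"] = 100 * (means["post"] - means["pre"]) / means["pre"]
print(means.sort_values("pct_change").round(1))
```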

We then investigated whether the change affected all news media outlets to the same extent. We specifically examined whether media outlets where misinformation is more common were affected more by the change. We discovered this was not the case: the effect on these outlets was not greater than for outlets of higher journalistic quality (and if anything, the effects were slightly smaller).

A similar comparison revealed that left-wing news outlets were affected significantly more than right-wing outlets. The average drop in retweets for liberal outlets was more than 20%, but the drop for conservative outlets was only 5%. This occurred because conservative users changed their behaviour significantly less than liberal users.

Lastly, we also found that Twitter’s policy affected visits to the news outlets’ websites, suggesting that the new policy had broad effects on the diffusion of news.

Understanding social media

These two studies underscore that seemingly simple features can have complex effects on user attitudes and media diffusion. Disentangling the specific features that make up social media and estimating their individual effects is key to understanding how social media affects us.

Like Instagram, Meta’s new Threads platform allows users to hide the number of likes on posts. X, formerly Twitter, has just rolled out a similar feature by allowing paid users to hide their likes. These decisions can have important implications for political discourse within the new social network.
