A report from the European Commission on disinformation has said that Russian sources attempted to influence the recent European elections through social media.

The "sustained disinformation activity" from Russian-linked accounts aimed to suppress turnout and influence voter preferences so that the only people who decided to vote would be more likely to vote against the interests of the European Union.

To do this, the report states, malicious actors engaged in Facebook groups and managed spam Twitter accounts that would post content challenging the Union's democratic legitimacy or exploiting "divisive public debates on issues such as [sic] of migration and sovereignty."

Russian actors have taken similar action in the past over less controversial topics such as Star Wars: The Last Jedi, aligning themselves along politically liberal and conservative lines in order to sow dissent, and have used the recent fire at Notre Dame Cathedral "to illustrate the alleged decline of Western and Christian values in the EU."

The report said that it could not detect a "distinct cross-border disinformation campaign from external sources specifically targeting the European elections." Instead, these spammers, and in particular those linked to Russia, are running smaller-scale, localized operations to reduce the likelihood of detection.

The report also highlights the staggering number of fake accounts infiltrating some of the largest platforms on the internet. To limit the activities of malicious users, Google has had to remove more than 3.39 million YouTube videos and 8,600 channels for violations of its spam and impersonation policies, and Twitter has had to oust approximately 77 million "spamlike" or fake accounts globally.

Facebook disabled 2.19 billion fake accounts in the first quarter of 2019 alone, with over 1,500 pages, groups, and accounts engaged in targeted disinformation against EU states. More than 600 groups were detected operating across France, Germany, Italy, the United Kingdom, Poland, and Spain; these used disinformation and hate speech to artificially boost the content of the sites they supported, generating 763 million user views.

Tech giants have been rolling out more tools for politicians and other organizations to help stop election interference. In January, Google extended Project Shield to all organizations that "are vital to free and fair elections."

However, the difficulty in tackling disinformation is also rooted in the fact that social media companies profit from these fake actors. Such accounts drive engagement and discussion, which produces more content for the platform and therefore makes it more valuable.

Jonathan Zittrain, a Harvard professor who studies digital property and content, has said that because "platforms are oriented to maximize user growth and retention," they are reluctant to "[throw] up even tiny hurdles along the sign-up runway" or to close accounts without significant cause. "I suspect they figure there are enough accounts that are the subject of complaints to review without looking for more to assess," he said.

This article originally appeared on PCMag.com.