AI

AI-Generated Text Is the Scariest Deepfake of All (wired.com) 68

An anonymous reader shares a report: In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes -- masked as regular chatter on Twitter, Facebook, Reddit, and the like -- have the potential to be far more subtle, far more prevalent, and far more sinister. The ability to manufacture a majority opinion, or create a fake-commenter arms race -- with minimal potential for detection -- would enable sophisticated, extensive influence campaigns. Pervasive generated text has the potential to warp our social communication ecosystem: algorithmically generated content receives algorithmically generated responses, which feeds into algorithmically mediated curation systems that surface information based on engagement.

Our trust in each other is fragmenting, and polarization is increasingly prevalent. As synthetic media of all types -- text, video, photo, and audio -- increases in prevalence, and as detection becomes more of a challenge, we will find it increasingly difficult to trust the content that we see. It may not be so simple to adapt, as we did to Photoshop, by using social pressure to moderate the extent of these tools' use and accepting that the media surrounding us is not quite as it seems. This time around, we'll also have to learn to be much more critical consumers of online content, evaluating the substance on its merits rather than its prevalence.

This discussion has been archived. No new comments can be posted.


  • Here's an idea. Everyone has the power to turn them off. Instead of listening to twitter, facebook, or reddit, try listening to the persons around you. Your family, neighbors, and friends, in person.

    You don't need these services. Do they really make your life better?

    • Slashdot too, with "filler" articles made to ruffle feathers and drive more comments. This one included!
    • I would never be anything like that!

    • Re: (Score:2, Troll)

      by aaarrrgggh ( 9205 )

      You need some source of information (or at least I do...), or you become uninformed about current events.

      I don’t know how you try to approach it; arguably you could use AI to moderate out divisiveness, repetition of thoughts/ideas, name calling, and spam... just like /. has managed to do over the years with humans.

      The news sites I visit are really getting to the point where the comments are more annoying than useful. I am happy to have well thought-out opposing views, or even opposing views that are n

    • They absolutely don't, and in fact Slashdot makes my life worse. Which is why I'm on it! Isn't that why everyone's on it?

    • For people stuck in one bubble it’s far better to just step outside, and often. That’s why I stay on top of obviously biased sources like RT, Fox News, BBC, Vox, Daily Wire, CNN, etc... as well as groups say like r/T_D or r/conservative, or r/politics, r/chapotraphouse, etc (rip) on reddit. I find it extremely interesting how each group tends to behave and what types of social interaction tend to occur.

      If the burnout is simply from a never-ending firehose of drama, then yes, a temporary
    • Re: (Score:2, Insightful)

      by dotancohen ( 1015143 )

      Are your family, neighbors, and friends more informed about Russian incursions into Ukraine, Gaza rocket storage under schools, clandestine Syrian nuclear facilities, and Rohingya genocide than are the UN secretary general, US president, or the crown prince of Saudi Arabia?

      Sometimes we need to make decisions, especially voting decisions, based on the little information that we have.

  • yup we're screwed (Score:5, Interesting)

    by marcle ( 1575627 ) on Monday August 03, 2020 @07:40PM (#60362743)

    Yet another restatement of the problem of social media. It isn't just fringe elements coming together, it's bad actors gaming the system and producing outsized effects with minimal effort.

    Just imagine the propaganda, oops I mean marketing possibilities!

    • I wouldn't necessarily say screwed...it could finally mean people stop using stupid facebook because the signal to noise ratio gets fucked and they prefer face to face instead. I mean I support people using whatever app they want and speaking their minds on it, but I'm so fucking sick of hearing about stupid off-the-wall shit from the grapevine, only to ask "where the fuck did you hear that bullshit from?" and the answer being god damn facebook.

      • by tlhIngan ( 30335 )

        Exactly. I say bring it on with deepfakes everywhere attributed to everyone.

        Let's have deepfakes where everyone can be made to say the same thing. I'm sure everyone would get the idea when videos show them saying stuff they would never say, or writing stuff they'd never write. When no one can believe what anyone says in any form of media, the world will be a better place.

    • Comment removed based on user account deletion
    • by AmiMoJo ( 196126 )

      It's a very similar problem to spam.

      It started out with zero defences: a small number of people on 4chan could create the impression of a big movement. Social media companies got wise to that, and it's much harder to do now.

      So the Russian Internet Research Agency simply hired thousands of people to manage fake accounts. We still haven't really got a handle on that, mostly because certain people in the West don't want to look too closely as long as it's helping them.

      The next phase could be extremely low cost A

  • by phantomfive ( 622387 ) on Monday August 03, 2020 @07:41PM (#60362745) Journal
    There are a few problems with the GPT-3 AI the article is referring to:

    1) It has no understanding of context.
    2) It can't stay on topic.
    3) It doesn't even understand topic sentences.
    4) It doesn't understand anything, it picks words interpolating what it has seen before.
    4) It is barely coherent, sometimes producing sentences that don't even parse.

    So a typical forum poster. Carry on.
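The "picks words interpolating what it has seen before" point can be made concrete with a toy sketch. This is not GPT-3 (which is far larger and transformer-based, not a lookup table); it's a minimal word-level Markov chain over a made-up corpus, just to show the crudest form of "predict the next word from what came before":

```python
import random
from collections import defaultdict

# Toy illustration only: a word-level Markov chain. The corpus is made up.
# It "picks words" by sampling from the words that followed each word in
# its training text -- no understanding of context or topic, as the parent
# comment describes.

def train(corpus_words):
    """Map each word to the list of words observed to follow it."""
    model = defaultdict(list)
    for prev, nxt in zip(corpus_words, corpus_words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=10, seed=0):
    """Sample a chain of words, one next-word draw at a time."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        choices = model.get(word)
        if not choices:
            break  # dead end: this word never had a successor in training
        word = random.choice(choices)
        out.append(word)
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
model = train(corpus)
print(generate(model, "the"))
```

Even this ten-line version reproduces the complaints above: locally plausible word pairs, no global coherence.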
    • So it sounds like it's a drop-in replacement for the President's Twitter feed and COVID-19 briefings?

    • by MrL0G1C ( 867445 )

      So basically it's an Alice bot on steroids.

      I get the feeling we are still no nearer having real AI than we were 30 years ago.

      • Pretty much. AI is more or less a fancy version of the regression function you may have used on a TI-82 in high school algebra. In fact, regression is one of the AI algorithms somebody might choose, depending on the application.
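For what it's worth, the TI-82 comparison is easy to make concrete. Here is a sketch of ordinary least-squares linear regression, the same fit a graphing calculator's LinReg performs, with data points made up for illustration:

```python
# Ordinary least-squares fit of y = slope * x + intercept.
# This is the "regression function" a TI-82 computes with LinReg.

def linear_fit(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope is covariance(x, y) divided by variance(x).
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

slope, intercept = linear_fit([1, 2, 3, 4], [2, 4, 6, 8])
print(slope, intercept)  # 2.0 0.0 -- this data lies exactly on y = 2x
```

Much of classical machine learning is this idea scaled up: more parameters, fancier basis functions, but still fitting a function to observed data.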

    • Sounds like a perfect fit for slashdot. Add another 4) Does not RTFA
    • by leonbev ( 111395 )

      Yeah, you don't exactly need genius level AI to post variations of "Orange Man Bad!" or "Make America Great Again!" in a Slashdot or Reddit politics post. Hell, a random word generator could probably do it.

    • by Visarga ( 1071662 ) on Tuesday August 04, 2020 @12:28PM (#60365281)
      Just for fun, here is a GPT-3 reply to your critique.

      1) It has no understanding of context.

      This is true, but it's not a problem for the model. The model is trained on a corpus of text and then tested on another corpus of text. The training data does not contain any information about the context in which the text was written, so there is no way to train a model that can understand context. This is why we have to test it on new data. If you want to see how well GPT-3 performs when trained with additional contextual information, you can try out our open source implementation: https://github.com/mattmahoney... [github.com]

      2) It can't stay on topic.

      It can't stay on topic because it doesn't understand the context in which the text was written. It's not a problem for the model, but a limitation of training data.

      3) It doesn't even understand topic sentences.

      It does not need to understand topic sentences because it is trained and tested on full documents (or paragraphs). The only thing that matters is whether or not it can parse individual sentences correctly.

      4) It doesn't understand anything, it picks words interpolating what it has seen before.

      This is true, but again this isn't a problem for the model as long as we test its performance on new data where there are no previous examples of any kind available to help guide its predictions (which is why we use cross validation).

      5) It is barely coherent, sometimes producing sentences that don't even parse.

      Depending on the prompt, the model can produce coherent text. Here is a sample of text from the model when it was prompted with "I want to go to the store" (the prompt is given in parentheses):

      (I want) (to go) (to the store). I like going shopping. It's fun, but hard work too. You have to walk all over town for hours and hours just looking for things you need. Sometimes you find them, sometimes not. And if you don't buy something right away, they'll tell you that they're closed even though there are still lots of people walking around who look like they're shopping too! But what do those people care? They're not working there! That's why I never shop on Sundays unless I absolutely have to! It would be better if we had some kind of machine that could do all this for us instead of having to waste our time running around town doing it ourselves. But at least we can take public transportation most places now so that cuts down on some of the walking distance involved
      • I'm not sure what you mean when you say "this is not a problem for the model." GPT-3 is an autoregressive language model, and all of these are problems inherent to that approach. It doesn't matter how much training data you have, the problems will remain.
  • by t4eXanadu ( 143668 ) on Monday August 03, 2020 @08:55PM (#60362901)

    Where are all those AI evangelists? Remember when AI was going to be this amazing thing that would bring forth the singularity? They didn't count on the stupidity of humans.

    Many people already have a hard time evaluating the accuracy of human-composed text. This is really going to screw things up.

    The only viable solution is to heavily regulate the use of AI, which of course poses a laundry list of issues on its own. If individuals and private businesses have restrictions about AI usage, what about governments? Or the military? Will they be the only ones allowed to use it?

    It's already been established that some individuals, businesses, and states like to play fast and loose with regulations; there's no reason to think this would change. But the prospect of no regulation is terrifying. Counting on people to be better consumers of media is a losing game, so that's out.

    This is going to be a hot mess. As if we don't have enough problems as it is. Woo!

    • by HiThere ( 15173 )

      Regulating this AI problem merely ensures that the posts support, or at least don't contradict, the government's position.

      It's a real problem, and does need a solution, but the solution can't be worse than the problem.

  • Wasn't this the point when they initially came out with GPT-2? They refused to release the entire model because of its "dangerous" nature. They relented after a while and now it's all available. So, yeah, GPT-3 has potentially a greater risk of that kind of problem, but the thing to notice is that we're reaching a point of diminishing returns on the training of these things. 3 used, what, 75x the data? And the performance is certainly not 75x better. If you actually search for transcripts from private beta
  • I feel that I have already been seeing this on the business "news" sites (MarketWatch, Yahoo!, Bloomberg...). Stories and headlines that aren't quite cromulent, but seem written specifically to artificially influence opinions and foment buying or selling of certain popular, high-volatility securities in order to move markets. It has seemed for years now that many of the stories from these sites are robot-generated.

  • A message is only as important as the one who sends it. This is why important documents need to be signed by important people and cannot be signed by just anyone. It is why it matters which university you studied at and that it comes with an important name. It is why people pay more for brand names than for no-name products. And so on.

    Even when important people create false messages, the question of why they are sending them will matter more than the messages themselves.

    An AI creating false m

    • So, you are saying it's not technology that could fall to A seventeen year old social engineering hacker [nytimes.com]? Because instead of pulling a crypto scam the likes of which a roomful of toddlers could easily top, it might be possible to start a war by taking over world leaders' social media accounts and posting the right plausible-sounding messages.
      • ... it might be possible to start a war by taking over world leaders social media accounts ...

        You give too much importance to social media. Social media accounts don't belong to governments, but to companies. So even a message sent by a world leader, as you say, lacks the authenticity required to trigger a military conflict. It can certainly trigger a diplomatic incident, where one world leader will ask another for clarification and then they'll use the secure diplomatic channels for their communication, but this only underlines the problem of social media. Social media is only o

          • I hope you're right, honestly, I just don't believe you're thinking broadly enough. Imagine if someone had taken over Trump's Twitter after that last little stint with Iran, where they got so riled up they shot down a civilian aircraft. I'm fairly sure carefully crafted fake comments and video could inadvertently trigger an actual war, and if it was with Iran there is no good outcome.
            • This wasn't because of social media. Iran is said to have been ramping up their nuclear program again, and at the same time their facilities are blowing up. There is much more going on behind the scenes than we'll ever get to know. And it isn't just about recent events; the history of conflict with Iran is a long one, older than social media itself.

            • Where did I say it was because of social media? It was a point where Iran was so close to war already, after the American assassination, that the social media channels could simply give it the nudge over the edge. All it takes is to provoke a single response at the right time, there is no going back once a few hundred lives have been lost in an attack - it simply turns into all out war.
              • All it takes is to provoke a single response at the right time, there is no going back once a few hundred lives have been lost in an attack - it simply turns into all out war.

                You mean people would act completely out of impulse and start wars solely on their emotional state of mind?

                No. If that were the case, we would still be sitting in trees throwing our faeces at one another.

                Rather, we accept a lot of atrocities before we start a war, because we know war is the worst of all evils. Only when we have proof of violent acts from trusted sources do we react, and then only with as much force as is necessary to protect the innocent, never overkill. We don't start a war, because o

                • No, that isn’t even close. Social media manipulation could happen at the very worst moments, when tensions are already strained. Also war is unilaterally decided on, it’s not mutually consensual, if Iran got played at the worst time and slaughtered a few hundred American soldiers, the “oops my bad” excuse won’t defuse the war already underway.
                  • One moment you believe a message on social media can start a war, and then you believe nothing could be said to stop it. So it doesn't matter what you believe. You're a moron and I mean this in a very friendly way.

  • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Monday August 03, 2020 @09:58PM (#60363073) Homepage Journal

    It's not expensive to get a human to write some bullshit in someone else's style. Video deepfakes are scary specifically because we tend to believe what we see, and video has historically been evidence.

    • and video has historically been evidence.

      The party told you to reject the evidence of your eyes and ears. It was their final, most essential command. His heart sank as he thought of the enormous power arrayed against him, the ease with which any Party intellectual would overthrow him in debate, the subtle arguments which he would not be able to understand, much less answer. And yet he was in the right! They were wrong and he was right.

      • I'm not sure how that applies here. With deepfakes, you really can't trust your eyes. Sure, it's always been possible to create misleading videos, but deepfakes represent a paradigm shift in that department because of cost reduction.

        • This will eventually deploy in real time, with editing done so as to truly sow doubt about the validity of any visual evidence at all. Once reality can be so easily manipulated, it will create much stronger polarization as the base of what's reasonably safe to assume factual shrinks. By being the trusted source when you dare not trust your own eyes or ears, it becomes possible to exert true Orwellian pressure on the populace.
  • by MycoMan ( 132840 ) on Monday August 03, 2020 @10:03PM (#60363085)

    Wouldn't it be possible to counter deepfake tweets, blog postings, facebook postings, etc. by distracting them with comments generated by a deepfake AI taking the opposing position? The two (or more) of them could spin off into a cluster, carrying on their 'conversation', while real humans talk amongst themselves.

  • I guess everyone is going to have to back up their assertions with evidence and reasoning. Affinity based marketing [wikipedia.org] is less likely to work with skeptical customers. Hiring opinion leaders to sell your products or political platforms isn't going to produce results as well as it did in the past. Ideas will have to stand on their own merits.

    • It seems like the opposite is happening. Trust is fragmenting, but so are the things I need to trust people about. When it comes to somewhat important matters that aren't vital, I have trusted authorities whose advice I will follow (obviously double-checking anything that seems out of whack with my personal experience or my current understanding). But when it comes to buying equipment for a hobby, online people who have contributed a lot to my knowledge and enjoyment of the hobby can probably get m

  • As synthetic media of all types -- text, video, photo, and audio -- increases in prevalence, and as detection becomes more of a challenge, we will find it increasingly difficult to trust the content that we see.

    I believe that most people will absolutely not trust some of the content they see, however I propose that these same people will also find it much easier to trust other algorithmically constructed content.

    This time around, we'll also have to learn to be much more critical consumers of online content

  • Why would you care about random comments on the Internet? Whether they're written by people or bots, there's no reason to read that crap.

    • Tell that to news outlets and corporate boards. Despite only 20% of Americans having Twitter accounts at all, let alone actively using them to express opinions or share facts, I hear reporters discuss Twitter trends as if they were nationally representative. 8% of that 20% (about 1.6% of all Americans) are responsible for something like 80+% of political tweets, but lazy journalists seem to love pretending that reflects society, and corporations seem desperate to get positive tweets.
      • I think reporting on twitter makes viewers who are twits or twats or whatever you call twitter users feel important, because they use it too.

        • And the reporters are also twitter users, so I suppose they get to feel more important.

          After yesterday's unacceptable political interference, I'm thinking it's time for Twitter and Facebook to go.

  • I thought about this a lot, due to designing a service that I thought would have to distinguish between "people" and "bots".

    But in the end, I had to accept the simple fact that you have to assume *everyone* is a man-machine hybrid! There are no users! Only connections! Incoming and outgoing packets.

    No guarantees beyond that, whatsoever, can be made.

    Yes, you could argue that if you met somebody in real life, and he became a person to you, instead of some random exchangeable entity like a user, then after e

    • Just because it is an actual person having an opinion, doesn't mean it isn't fake or not spread by a bot.

      Humans are easily influenced, and most of the time can be considered "the original bots". :)

      So this is not in any way different from what we had until now. It's just machines taking our blind partisan drone jerbs [urbandictionary.com]. ;)

  • by argStyopa ( 232550 ) on Tuesday August 04, 2020 @07:50AM (#60364329) Journal

    If the person you're trading comments with is indistinguishable from synthetically generated text, maybe you need to up your choice of interlocutors?

  • Not fear content machine stuff! Good as human!
  • Often intelligent people can't argue a point effectively. Presumably, this would be a group of bots arguing towards a point the bot owner wants. So one says issue X is good. Another bot says it's not. Another throws in statements in support of issue X that make it seem reasonable and appealing. Rinse and repeat until a reader sides with issue X and discounts the naysayer bots. To do this you need a good command of the language, the issue, points related to the issue that are effective in changing minds, the abil

  • Just remember that *everything* you see on a screen is a lie.

"Everything should be made as simple as possible, but not simpler." -- Albert Einstein

Working...