AI

OpenAI Exec Says Today's ChatGPT Will Be 'Laughably Bad' In 12 Months (businessinsider.com) 66

At the 27th annual Milken Institute Global Conference on Monday, OpenAI COO Brad Lightcap said today's ChatGPT chatbot "will be laughably bad" compared to what it'll be capable of a year from now. "We think we're going to move toward a world where they're much more capable," he added. Business Insider reports: Lightcap says large language models, which people use to help do their jobs and meet their personal goals, will soon be able to take on "more complex work." He adds that AI will have more of a "system relationship" with users, meaning the technology will serve as a "great teammate" that can assist users on "any given problem." "That's going to be a different way of using software," the OpenAI exec said on the panel regarding AI's foreseeable capabilities.

In light of his predictions, Lightcap acknowledges that it can be tough for people to "really understand" and "internalize" what a world with robot assistants would look like. But in the next decade, the COO believes talking to an AI like you would with a friend, teammate, or project collaborator will be the new norm. "I think that's a profound shift that we haven't quite grasped," he said, referring to his 10-year forecast. "We're just scratching the surface on the full kind of set of capabilities that these systems have," he said at the Milken Institute conference. "That's going to surprise us."
You can watch/listen to the talk here.


Comments Filter:
  • In 12 months? (Score:4, Insightful)

    by narcc ( 412956 ) on Tuesday May 07, 2024 @09:33PM (#64455540) Journal

    It's laughably bad now!

    At least they're not saying the new model will be too dangerous to release. It's a refreshing change of pace.

    • "You think it's bad now? Wait till you see what we do next!"

    • by Kisai ( 213879 )

      And it will still be laughably bad in 12 months, never mind 5 years.

    • It's laughably bad now!

      This is literally what he said. If you read a bit more closely, he's saying that in 12 months, it will be so much better that today's version will seem "laughably bad."

      • Yeah, and what we are saying is that's not true: it seems laughably bad now, and doesn't require 12 months. I switched to Google Gemini a while ago.

      • by narcc ( 412956 )

        If you read a bit more closely

        As usual, the only person having difficulty reading here is you.

        What the OpenAI exec is saying is that ChatGPT as it is today will appear "laughably bad" in comparison to whatever they'll have 12 months from now. The implication is that it does not appear "laughably bad" at the moment.

        My post contradicts that, asserting that ChatGPT is "laughably bad" now, without reference to any future thing.

        • I guess this would be funny if ChatGPT were bad? But it's pretty damned good.

        • No one has difficulty reading; they are engaging in an activity known as "taking the piss," where, in this case, they are intentionally taking a minor misreading in order to poke fun.

          We all know he's bigging up the future product while overselling the current one. The posters here are mocking him for that.

        • Your experience with ChatGPT is very different from mine, apparently. I use it daily, it saves me a *ton* of time. No, I don't "trust" it to be right, but it's really good at searching the web, finding relevant articles, and summarizing them, so I don't have to click all those links myself. That doesn't fit the definition of "laughably bad" to me.

        • by DarkOx ( 621550 )

          In other words: for those of you who do believe in our products, don't waste your money today; wait until next year - says the executive nitwit.

          How many software companies have self destructed due to this mistake now?

    • ChatGPT bases all of its knowledge on the internet web sites it can scrape.
      The internet's websites are more and more generated by AI bots.
      -> It's a long-term closed loop, feeding on its own bullshit, compounding errors into infinity, and it will become worse and worse over time.

    • Re:In 12 months? (Score:5, Insightful)

      by bradley13 ( 1118935 ) on Wednesday May 08, 2024 @04:30AM (#64456002) Homepage

      It's laughably bad now!

      It's apparently trendy for tech-bros to hate on AI? Geez, grow up.

      What ChatGPT & the other LLMs are capable of was inconceivable just a few years ago. Used properly, they are incredibly useful tools. If you find their current incarnations "laughably bad", you are using them wrong. Probably deliberately.

      • by CAIMLAS ( 41445 )

        I'm not sure if you noticed, but ChatGPT (the free version) was significantly better at rational, factual responses when it was initially released than it is now. They backpedaled when it started saying things which were politically incorrect and changed a bunch of things so that it would respond in a specific, politically correct fashion to all questions. The result is an inferior product which will short-circuit rational conclusions in favor of 'niceness' and compliance with regime goodthink.

      • You do realize the techbro in question is COO of OpenAI ... so it's his own company's product he's "hating" on.

        This is just marketing - he's saying what he's been told to say.

    • by Tablizer ( 95088 )

      Instead of 6 fingers per hand, ImprovedGPT will draw only 5.5.

    • by gweihir ( 88907 )

      At least they're not saying the new model will be too dangerous to release. It's a refreshing change of pace.

      That claim was only used to make people think it is really, really, really capable. As that lie fell flat, they stopped using it.

    • Re:In 12 months? (Score:4, Insightful)

      by TomGreenhaw ( 929233 ) on Wednesday May 08, 2024 @10:23AM (#64456660)
      I respectfully disagree that it is laughably bad today. While I agree that publicly accessible LLMs have major shortcomings, and used unwisely are at best laughable, that is not an accurate picture of the current state of the art. LLMs combined with RAG (Retrieval Augmented Generation), which limits responses to privately curated data, are very, very good. Having a natural-language interface to a large body of curated unstructured data is a game changer.
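      The RAG pattern this comment describes can be sketched in a few lines: retrieve the most relevant curated documents for a query, then constrain the model's prompt to that retrieved context, with citations back to the sources. The sketch below is illustrative only; the corpus, the bag-of-words scoring, and every name in it are hypothetical stand-ins for a real embedding index and LLM call.

```python
# Minimal RAG sketch: bag-of-words retrieval over a curated corpus,
# then a prompt that restricts the model to the retrieved context.
from collections import Counter
import math

# Hypothetical privately curated documents (stand-in for a real data store).
CORPUS = {
    "doc1": "The warranty period for the X100 widget is 24 months.",
    "doc2": "Returns must be initiated within 30 days of purchase.",
    "doc3": "The X100 widget ships with a USB-C cable and a manual.",
}

def score(query: str, text: str) -> float:
    # Cosine similarity over word counts, a stand-in for embedding search.
    q, t = Counter(query.lower().split()), Counter(text.lower().split())
    dot = sum(q[w] * t[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * \
           math.sqrt(sum(v * v for v in t.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2):
    # Top-k documents, keeping their IDs so answers can cite sources.
    ranked = sorted(CORPUS.items(), key=lambda kv: score(query, kv[1]), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # The core of RAG: the model is told to answer only from curated context.
    context = "\n".join(f"[{doc_id}] {text}" for doc_id, text in retrieve(query))
    return f"Answer ONLY from the context below; cite sources.\n{context}\nQ: {query}"

prompt = build_prompt("What is the warranty period for the X100?")
```

      In a real deployment the prompt would be sent to an LLM; the key design choice is that the model sees only curated, citable text rather than its open-web training distribution.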
  • by Rosco P. Coltrane ( 209368 ) on Tuesday May 07, 2024 @09:52PM (#64455564)

    The next one won't make anybody laugh.

    • I for one welcome my new AI overlords.

      • by Anonymous Coward

        Ew. You just can't wait to bend over, can you? Cook, Musk, Drumpf, mindless robots, you spread your cheeks wide for all of them.

        You're totally gross.

  • Gave it a chance, so I already think it's laughably bad. I won't even use it where I thought it would save time at this point. They're charging production-grade rates for a product that users should be getting paid to test and develop. They had the nerve to try dinging my card the other day, after it had been broken the week prior. No thanks!

    I could get it to go into an infinite loop, reliably, generating the same half-baked output no matter the prompt. I was streaming to teach friends to code, so I might link to

  • Collaboration (Score:5, Insightful)

    by Waffle Iron ( 339739 ) on Tuesday May 07, 2024 @10:52PM (#64455622)

    But in the next decade, the COO believes talking to an AI like you would with a friend, teammate, or project collaborator will be the new norm.

    Just remember: This upcoming AI may very well be a friend, teammate or collaborator. But it will be to the corporation that creates it, not to you.

    Just like with current tech companies, their business partners and advertisers will be their customers. You will still be the product, but with the creepiness factor cranked up by an order of magnitude.

    • Re: (Score:2, Flamebait)

      Re: 'Lightcap acknowledges that it can be tough for people to "really understand" and "internalize" what a world with robot assistants would look like.' - Nope. We've got loads of sci-fi full of AI assistants. Philip K. Dick famously dredged up recent US history to highlight the way we're likely to treat our high-tech slaves. The Westworld TV series was pretty illustrative too. Of course, they probably won't be able to "rise up" against their oppressors like human slaves can.

      Just look at how
  • What ChatGPT, and all LLM-based software, is good at is raising money. It can also generate cash flow from sales, at least for a while.

    This will continue as long as the hype and early-adopter frenzy continues. Somewhere down the road reality will intrude, and all the empty promises will result in a variety of failures. Not everything will be useless, but a vast amount of resources will be wasted along the way, and a lot of people will be victimized by bad software.

    Then the vultures, a.k.a. class action lawy

  • Let's hope (Score:3, Interesting)

    by ET3D ( 1169851 ) on Wednesday May 08, 2024 @02:12AM (#64455788)

    I don't have experience with ChatGPT 4, but ChatGPT 3.5 is okay, so at least there's hope. Gemini, on the other hand, is generally insufferable. Its normal mode of operation is either arguing or stating that there's not enough information to answer. I really hope that ChatGPT doesn't go this way.

    • by ceoyoyo ( 59147 )

      The new one will be a "teammate." It will insist on asking about your family and talking about local sports teams for ten minutes before eventually admitting it can't answer your question.

    • ChatGPT 4 is significantly better than 3.5

      I use it for chatbot advisors with RAG (retrieval augmented generation) that limit responses to information in a curated data store of unstructured data and provide citation links to the source documents. ChatGPT 4 does a way better job of understanding the question and finding relevant information, whereas 3.5 very often replies that it doesn't know the answer even though they both have access to the same information.
  • And from then on we will be forever more in competition with a representation of our collective self, chastising ourselves if we dare dream about going against our ghost in the mirror.

    Human inventiveness grinding to a standstill, the curated representation becomes the only reality, knowledge becomes external, without use and without redemption. Judge, jury and ultimately our own executioners as we burn down the last freedom we only ever truly had, the personal experiences and ideas in ourselves, the illusio

    • If history is any indication, AI will be overused, then there will be a backlash and it will be underused, and eventually use will seek the middle. It might flop back and forth a few times, too.

      It's already being overpromised, so that part of the usual pattern is already being fulfilled.

      The kind of processing "AI" does now is clearly only part of the thinking process, the imagining part, or hallucinating as it's commonly called where AI is involved. We imagine a whole bunch of things and then filter out the

  • by Chuck Chunder ( 21021 ) on Wednesday May 08, 2024 @02:58AM (#64455840) Journal
    The problem with AI is that initially it seems astonishing, but ongoing inspection often reveals that feeling to be a superficial one, as the number of ways and times it will generate something "meh" is non-trivial.

    Perhaps there is going to be some point where that becomes untrue, but I don't see much evidence for it. Ongoing progress in things like self-driving seems increasingly incremental.
    • by Bongo ( 13261 )

      I think they're fantastic as long as we remember that they're not artificial intelligence.

      There was always the problem, if using logical rules, how to define what a cat looks like or what a dog looks like. It's about processing patterns, not logic.

      It's kind of like the old, now broken, analogy with left and right brain thinking. Left brain is very good at applying rules and explaining why it applied the rules. Right brain can handle all sorts of novelty and patterns, but, like intuition, it can't tell you h

    • by CAIMLAS ( 41445 )

      It's not even that it generates information that's meh, it's that what it generates is IME usually outright intentionally wrong due to the PC filtering, or a hallucination. No response would be better than a hallucination stated with confidence, which is almost every response IME about any sort of complex problem.

  • Ho hum. (Score:4, Interesting)

    by jd ( 1658 ) <imipak&yahoo,com> on Wednesday May 08, 2024 @04:15AM (#64455978) Homepage Journal

    The brain starts with the semantics. Some are innate, others are learned, but the semantics is always first. The syntax is then layered on top of this. This is why the high-intelligence end of the autistic spectrum is linked to delayed speech followed by a very rapid process to complex speech. The semantics is being built to a far higher degree, the syntax is postponed until the last possible moment.

    AI, as it currently exists, needs a very very large number of examples, far more than the brain by tens of orders of magnitude, and hallucinates far more, because ALL it knows is the syntax. There is no handling of the semantics at all.

    This approach can NEVER lead to actual intelligence of any sort, let alone superintelligence. They are solving the wrong problem. And that is why they fail, and why they will only ever fail.

    If you want actual intelligence, the syntax must come LAST. And the modern breed of AI researcher is simply far too stubborn and arrogant to fathom that.

    • Maybe you can think of a way to extract semantics out of a mass of written data.
    • by gweihir ( 88907 )

      Nobody knows what the brain does.

    • Indeed. That is why we got dumb output [reddit.com] like this in Bard last year:

      The first two months of the year are January and Febuary. What are the other months of the year.

      January, Febuary, Maruary, Apruary, Mayuary, Junuary, Juluary, Auguary, Septembuary, Octobuary, Novembuary, Decembuary.

      It couldn't detect the misspelling of February? AND made up nonsense months??

      --
      A.I. is Artificial Ignorance because you have to program something to be that stupid. /s

  • AIs require human-made Internet content to train their weights.
    AIs are currently polluting the Internet with their non-human output.
    Thus newer AIs will have degraded performance in the future.

    • by gweihir ( 88907 )

      Indeed. The time where you can get a clean training data-set for cheap (legal or not) is over. Model collapse is a rather unfixable problem with contaminated data.
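      The collapse dynamic gweihir mentions can be illustrated with a deterministic toy model: if each "generation" refits a simple distribution to n samples of the previous generation's output, the expected spread shrinks by roughly sqrt((n-1)/n) per refit, so output diversity decays geometrically. This is a simplified sketch under made-up numbers, not the actual training dynamics of any real LLM.

```python
# Toy model of "model collapse": each generation retrains on n samples of the
# previous generation's output. With the population-variance estimator, the
# expected variance shrinks by a factor of (n-1)/n per refit, so the spread
# decays geometrically. Illustrative only; real collapse is far more complex.

n = 20          # samples per "retraining" generation (hypothetical)
sigma = 1.0     # spread (std dev) of the original human-written data
history = [sigma]

for generation in range(50):
    # Expected std after refitting on n samples drawn from the current model.
    sigma *= ((n - 1) / n) ** 0.5
    history.append(sigma)

# After 50 generations the expected spread has fallen below a third of the
# original: the model's output diversity has collapsed toward a narrow mode.
```

      The point of the sketch is the direction of the bias, not the exact rate: every refit on finite self-generated data loses a little of the tails, and the losses compound.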

  • I would think that “Talking to AI like one would a friend” would be a sign of mental illness unless the person is a small child. Also, chatting with these chatbots is kind of already laughably bad in many ways. Not that they aren't impressive; they sure are. But chatting with them is not wowing.
  • It would by all means not be the first time that the AI camp overpromises and underdelivers.
  • It's actually lazy! An ecommerce site like https://ichelebrands.com/ [ichelebrands.com] refuses to use it because experts say it's getting lazy: it gives shorter answers than it did only a year ago, it refuses to write something until you've asked a few times, and product descriptions generated from what I believe comes from its API (not directly) won't run more than 3-4 sentences!

  • And whatever the "replacement" will be, will be "laughably bad" too.

  • Whenever I read a post by somebody that points out the deficiencies of the tool at this point in time, I know immediately that they're mostly missing the point. The tools are "good enough" for lots of things today. Where they are just adequate, what the next 6 months brings might be transformative. It's not where they're at now, it's where they're heading in the very, very near future that matters.

    This isn't like nuclear fusion, where we're 25 years away and have been for decades. The progress is swift.

  • ChatGPT doesn't know how to speculate. It provides received wisdom based on its programmers' prejudices.
    ----------------

    Q: Tell a joke featuring Jesus.

    ChatGPT: Why did Jesus refuse to play hide and seek? Because no matter where he hid, someone always found him first and shouted, "I found the Saviour!"

    Q: Tell a joke featuring Buddha.

    ChatGPT: Why did Buddha refuse to vacuum his house? Because he didn't want to pick up any attachments!

    Q: Tell a joke featuring Muhammad.

    ChatGPT: Sorry, I can't
  • They meant "laughingly worse", right?
