
China Will Dominate AI Unless US Invests More, Commission Warns (axios.com)

The U.S., which once had a dominant head start in artificial intelligence, now has just a few years' lead on China and risks being overtaken unless government steps in, according to a new report to Congress and the White House. From a report: Former Google CEO Eric Schmidt, who chaired the committee that issued the report, tells Axios that the U.S. risks dire consequences if it fails to both invest in key technologies and fully integrate AI into the military. The National Security Commission on Artificial Intelligence approved its 750-page report on Monday, following a 2-year effort. Schmidt chaired the 15-member commission, which also included Oracle's Safra Catz, Microsoft's Eric Horvitz and Amazon's Andy Jassy.

On both the economic and military fronts, the biggest risk comes from China. "China possesses the might, talent, and ambition to surpass the United States as the world's leader in AI in the next decade if current trends do not change," the report states. And it's not just AI technology that the U.S. needs to maintain a lead in. The report mentions a number of key technologies, including quantum computing, robotics, 3D printing and 5G. "We don't have to go to war with China," Schmidt said. "We don't have to have a cold war. We do need to be competitive."

This discussion has been archived. No new comments can be posted.


  • by DaveV1.0 ( 203135 ) on Tuesday March 02, 2021 @10:05AM (#61115592) Journal
    The US is more worried about AIs being politically correct than about actually having AIs that work.
    • Re: (Score:3, Insightful)

      by Anonymous Coward

      The US is more worried about AIs being politically correct than about actually having AIs that work.

      Might be better to phrase this the other way: if an AI incorporates a bias against a particular ethnic group, in the US we consider this a bug... in China, they consider it a feature.

      • by raymorris ( 2726007 ) on Tuesday March 02, 2021 @10:38AM (#61115730) Journal

        "Against" isn't needed or considered relevant.

        If the output of a formula or system can be correlated (by humans) with some factor that humans can also correlate with race, that formula can't be used.

        Of course, some humans can see a correlation with race in just about any fact, so they object to most algorithms that calculate anything having to do with the real world.

        (Those people who connect everything with race, who think everything is about race, are called racists.)
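The audit described above can be sketched in a few lines of Python (the scores and group labels below are made up for illustration): compute the correlation between a model's output and a protected attribute, and flag the model if the two correlate.

```python
# Toy fairness audit (invented data): correlate model scores with a
# protected attribute; a strong correlation gets the model rejected.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

scores = [0.90, 0.80, 0.85, 0.40, 0.35, 0.50]  # model output
group  = [0,    0,    0,    1,    1,    1]     # protected attribute
print(pearson(scores, group))  # strongly negative: the audit flags it
```

Whether such a correlation counts as disqualifying is exactly the policy question being argued over in this thread; the check itself is mechanical.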

        • Of course, some humans can see a correlation with race in just about any fact, so they object to most algorithms that calculate anything having to do with the real world.

          Or, they simply don't like the results because the results don't support their claims. Sure, an AI might have a problem identifying people of a specific race. That is not racism and the algorithm isn't racist; it is a deficiency in the AI. It's a bug and needs to be fixed. But just because the results aren't to someone's liking, it doesn't mean the results are wrong or racist.

          Those people who connect everything with race, who think everything is about race, are called racists.

          Now they are a class of social justice warriors called anti-racists. Anti-racists think race is the most important thing ever and m

          • Also, photos are inherently two-dimensional.
            Three-dimensional features, such as a nose, are inferred from light and shadow; that is, from the difference between the light color at the top of the nose and the darker color next to the nose.

            You can't see anything in a photo taken in the dark, with no light. That's just a physical fact. The less light there is, the less detail you can see because there is less difference between the shadow parts and the high spots. It's just a fact of physics that less light is re

            • Let me propose this experiment, since I already know what Ami is going to say:

              Stretch out your hand.
              Notice the shadow it casts on the light-colored wall.
              See if you can make a duck shadow puppet.

              Now put a black leather jacket or other dark, matte surface there. Try making your shadow puppet again.

              I think you'll find that shadow is much harder to see on a dark surface. That's just how it is.

              Does that sound uncaring to you?
              Okay, light doesn't care. Light gets absorbed by dark surfaces and it doesn't care how y

        • by gweihir ( 88907 )

          (Those people who connect everything with race, who think everything is about race, are called racists.)

          Indeed. Although people who notice that occasionally there is a bias based on race are called "scientists that cannot publish their results". The fact of the matter is that there are differences in characteristics whenever a group thinks they form a group based on characteristic XYZ (which can be race). Those differences usually have no direct genetic cause, but this mechanism can create indirect ones.

        • by g01d4 ( 888748 )

          correlation with race

          The issue is how you define race and whether on closer inspection it becomes a crude stand-in for something more complicated involving culture and economic background. In making America great again do you go X% ancestry from Y continent of origin, do you account for an immigration effect (forced or optional), or do you get a collection of the woke in a room classifying a pile of photos?

      • by larwe ( 858929 )
        Aren't most wars between countries, kinda by definition, fought between different ethnic groups?

        China is super interested in internally-facing surveillance tools and there is no force to restrain them from fully deploying those tools. In the US (at least), various agencies are equally interested in those tools, but are subject to considerably more restraints as to their real-world usage. It stands to reason that China's tools in this regard are inevitably going to be superior to the US's simply because the

      • by gweihir ( 88907 )

        The US is more worried about AIs being politically correct than about actually having AIs that work.

        Might be better to phrase this the other way: if an AI incorporates a bias against a particular ethnic group, in the US we consider this a bug... in China, they consider it a feature.

        If the bias is in the training data, then its presence in the trained classifier is a sign of a good training process.
        The problem the US has is that the whole society constantly lies to itself about how things really are.

    • The US is more worried about AIs being politically correct than about actually having AIs that work.

      So it's that gender studies major who has everything screwed up?

    • by DNS-and-BIND ( 461968 ) on Tuesday March 02, 2021 @11:23AM (#61115894) Homepage

      We used to ridicule the Soviets for having whole swathes of science off limits because they knew the results would be politically unacceptable. Or else their scientists would start with the conclusion and work backwards with faulty premises. We all had a good laugh and saw they were doomed because nobody was allowed to tell the truth. I never ever in a million years thought it could happen here, but here we are.

      Hagbard Celine's second law: accurate communication is only possible in a non-punishing situation.

    • by jellomizer ( 103300 ) on Tuesday March 02, 2021 @11:28AM (#61115920)

      It isn't about being politically correct; it is the fact that AIs are programmed/trained with our biases, so they are making the same mistakes that we are.

      So when training an AI to read resumes, it references which applicants people in the past accepted and rejected, and tries to mimic that. From those references it learned to check the name of the applicant and to weigh the name as a strong factor in whether to hire, often giving preference to common white male names.

      The AI itself isn't racist or sexist (it is just a computer, it really doesn't care), but it picked up the biases when it was learning how to process the information. So such issues will need to be caught and corrected, either in the training or with a hard-coded cutoff.
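The mechanism described above can be shown with a toy sketch (all names, records, and hire decisions below are invented for illustration): a naive screening "model" that is allowed to see applicant names will learn whatever bias its historical labels contain.

```python
# Toy illustration (invented data): a naive resume screener that can see
# applicant names learns the bias baked into its historical labels.
from collections import defaultdict

# Historical decisions: identical qualifications, but past reviewers
# hired applicants named "John" far more often than those named "Aisha".
history = [
    ("John",  "BSc", 1), ("John",  "BSc", 1), ("John",  "BSc", 1),
    ("Aisha", "BSc", 0), ("Aisha", "BSc", 0), ("Aisha", "BSc", 1),
]

# "Training": record the hire rate conditioned on each feature value.
hire_rate = defaultdict(lambda: [0, 0])  # feature value -> [hires, seen]
for name, degree, hired in history:
    for feature in (name, degree):
        hire_rate[feature][0] += hired
        hire_rate[feature][1] += 1

def score(name, degree):
    """Average the learned per-feature hire rates; the name counts
    just as much as the qualification does."""
    return sum(hire_rate[f][0] / hire_rate[f][1] for f in (name, degree)) / 2

# Same degree, different name: the model reproduces the historical bias.
print(score("John", "BSc"), score("Aisha", "BSc"))
```

Nothing here "hates" anyone; the name simply turned out to be predictive of the historical label, which is exactly the failure mode described, and why the fix is to remove the feature or correct the training data.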

      Americans have been using the term "politically correct" as an excuse to be cruel and unfair to people, and to excuse beliefs that are just immoral and wrong, as if some "Other" were pushing an agenda on them.
      While creating an AI, we need to work to make sure it will be more moral than we are. It can be politically incorrect, because an AI routine can process data and point out that a policy of the political party in power is not productive or helpful. The warning about being politically correct was more about people being afraid to speak against the government. (In China you have to be politically correct and not bad-mouth the communist government.) In the United States you can feel free to disagree with the Biden administration or the Democratic-majority branches of government without those governments going after you.

      • That is a nice, shiny claim you have there, except it sounds like the speculation of the woke. Now, prove it with credible evidence.

        Failing to prove it with credible evidence shows you know you are wrong.
      • Why is the algorithm even looking at a name in order to draw any kind of inference from it? It's exactly the type of data you'd want to exclude from analysis even with humans, so that personal biases don't creep into their decisions. It wouldn't even matter if you used it in a country that's 100% ethnically homogeneous, because it would ultimately lead to rejecting anyone with an unusual name even in that group, for the same reason.

        You may as well just feed it social security numbers and shoe sizes if
        • by tlhIngan ( 30335 )

          Why is the algorithm even looking at a name in order to draw any kind of inference from it? It's exactly the type of data you'd want to exclude from analysis even with humans, so that personal biases don't creep into their decisions. It wouldn't even matter if you used it in a country that's 100% ethnically homogeneous, because it would ultimately lead to rejecting anyone with an unusual name even in that group, for the same reason.

          There are lots of things it shouldn't look at, and in a perfect world it

        • Why is the algorithm even looking at a name in order to draw any kind of inference from that? It's exactly the type of data you'd want to exclude from analysis even with humans so that any personal biases don't creep in to their decision.

          Indeed, blind hiring is a process being adopted by more and more employers that are not using AI.

          https://www.forbes.com/sites/f... [forbes.com]

          It is not a perfect solution, but it is a start in removing unconscious or sometimes even overt bias, whether anti or pro any specific group. Agreed, these are things there is no reason for the AI to know, unless you really want the AI to know them for some valid logical reason, and if that is the case it should be transparent.
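A minimal sketch of what blind screening looks like in code (the field names below are hypothetical): strip identifying fields before the record ever reaches a reviewer or a model, so neither can condition on them.

```python
# Hypothetical blind-screening step: drop identifying fields up front
# so downstream reviewers and models never see them.
IDENTIFYING_FIELDS = {"name", "email", "photo_url", "date_of_birth"}

def blind(application: dict) -> dict:
    """Return a copy of the application without identifying fields."""
    return {k: v for k, v in application.items()
            if k not in IDENTIFYING_FIELDS}

app = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "degree": "MSc",
    "years_experience": 7,
}
print(blind(app))  # only the job-relevant fields survive
```

The transparency point above maps directly onto the allow/deny set: anything the model is permitted to see is listed explicitly, so the choice is auditable.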

      • "Politically Correct" means that something that contradicts reality is to be held as true because it corresponds instead with a desired political outcome or belief. Political correctness acts as an excuse for cruelty and unfairness, inflicting falsehoods upon the populace with the excuse that we have to pretend lies are true and truth isn't, in order to avoid hurting someone else's feelings. This is not in any way true; those who promote political correctness are insincere in their motives. The goal is t
    • The US is more worried about AIs being politically correct than about actually having AIs that work.

      Depends what it's being used for. If it's being used to generate traffic and ad clicks on social media, then being "politically correct" matters, otherwise you offend your audience.

      Conservatives often get offended by "redneck" stereotypes and jokes about "flat Earth" evangelicals, so it's not just progressives requesting PC.

      Maybe if AI instead focuses on how to automatically do laundry and drive cars

    • Social scores and propaganda: the CCP has quite a grip. Will democracies need a counter-AI? Is AI another fusion-style massive expense with elusive gains?
      • Real democracies will not need a counter-AI. The main things an AI can do in a democracy are count votes and maybe predict consequences in the decision-making process. I am afraid any "counter-AI" on a political level is a dictator before you know it.
        • In this century, countries don't just wage war with weapons and wallets. There are vast, organized efforts to influence the populace of entire regions against rivals. As an example, Central and Latin American people are being targeted right now with FUD that undermines confidence in US and UK vaccines while promoting chinese and russian vaccines. A well trained AI can be a very effective weapon when planning such strategic maneuvers.
    • Slashdot commenters are more worried about relabeling things that work as not-AI than about actually having AIs that work.

    • The US is more worried about AIs being politically correct than about actually having AIs that work.

      Well, now they can also be worried that Skynet will be Chinese and communist. Think about it, communist AIs dominating the market ... this promises to be interesting. Has anybody told Trump and his posse about this yet?

    • Since the shitty software everyone insists on inappropriately calling 'artificial intelligence' has no actual cognitive ability, 'consciousness', 'self-awareness', let alone any 'presence of mind' to make moral or ethical judgements on any of its output, if what comes out of it is racist, sexist, or whatever it is you don't like, you only have yourselves to blame, because that output is just a reflection of the so-called 'training data' you put into it.
      Let China waste its time on this nonsense excuse fo
    • by gweihir ( 88907 )

      The US is more worried about AIs being politically correct than about actually having AIs that work.

      Pretty much. Although I doubt China will be a real leader here. More like the US will be left behind by the rest of the world.

  • So (Score:5, Insightful)

    by ArchieBunker ( 132337 ) on Tuesday March 02, 2021 @10:07AM (#61115600)

    The guy who runs a company that develops AI is telling the government they’re missing out and need to invest more. Right.

    • The guy who runs a company that develops AI is telling the government they’re missing out and need to invest more. Right.

      And bitcoin probably thinks the US needs to invest more in blockchain applications.

    • He personally has billions to throw at the issue and his company has exponentially more. Worse, they have worked in and with China on AI, so he can just go ahead and f- himself. He may be right, but it's partially his fault and he doesn't need my tax dollars to fix it.
  • by Black Parrot ( 19622 ) on Tuesday March 02, 2021 @10:26AM (#61115688)

    ...until I saw the story about how China can't be weaned off of Flash.

    • by Tablizer ( 95088 )

      Maybe they can do what they did to the F-35: swipe the plans and make a better version. Make Flash Great Again!

  • The US might have been a lot further along if Avon Hertz hadn't gone rogue with his Cliffford AI, thus needing to be shut down by IAA operatives.
  • by simlox ( 6576120 ) on Tuesday March 02, 2021 @10:59AM (#61115808)
    Here in Europe our privacy laws make large-scale AI on personal data impossible. Not so in China.
    • by Pimpy ( 143938 ) on Tuesday March 02, 2021 @11:32AM (#61115942)

      That's not true; there are plenty of techniques that can be applied to personal data that allow it to be used for AI training, like differential privacy. Several EU member states (like Germany) also have research exceptions, so research can still be carried out without having to sort out the audit and compliance issues faced in production.

      Bigger challenges faced from a regulatory point of view are things like the right to object to AI-based decision-making, and being able to retain audit continuity across training cycles, especially in unsupervised learning situations. This is especially a big issue when processing biometric and health data, which are not covered by the GDPR and for which no free-flow mechanism exists for cross-border data transfers. There's still the issue of whether one can take a model trained on sensitive data in one country and use it in another - it's not yet legally clear whether the model itself constitutes derived data that would be subject to the same set of restrictions or not, and the courts still have to work this out.

      Perhaps the biggest outstanding issue is liability for automated decision making - e.g. do you blame the model, the person providing the inputs, or the person taking the action based on the suggestion of the model? The original product liability directive never had these sorts of cases in mind and is also being reformed to better address these new kinds of usage scenarios.

      Yes, you could ignore all of these and just throw random AI models into production, but I'd be hesitant to call that progress. Others will eventually face these issues as well in production, regardless of whether there is a country-specific regulatory framework in place that forces one to think about these problems in advance or not.
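For reference, the differential-privacy technique mentioned at the top of the comment can be sketched with the classic Laplace mechanism (the data and epsilon below are invented): add noise calibrated to the query's sensitivity, so an aggregate answer stays useful while any individual record stays masked.

```python
# Sketch of the Laplace mechanism for differential privacy (toy data).
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponentials is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon=1.0):
    """Noisy count of matching records; a counting query has
    sensitivity 1, so the noise scale is 1/epsilon."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 35, 41, 29, 52, 37]
# Noisy answer to "how many people are over 30?": close to the true 4,
# but no individual's presence can be inferred from it.
print(private_count(ages, lambda a: a > 30, epsilon=1.0))
```

A smaller epsilon means more noise and stronger privacy; training pipelines use the same idea, just applied to gradients or statistics rather than a single count.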

      • Perhaps the biggest outstanding issue is liability for automated decision making - e.g. do you blame the model, the person providing the inputs, or the person taking the action based on the suggestion of the model?
        There is no question at all about the liability.
        It is the person performing the action, just like a captain on a ship or an airplane.

        • by Pimpy ( 143938 )

          I'm not sure what you base your assertion on, given that this is precisely the topic being discussed with the reform of the product liability directive, and something that's becoming an increasing point of concern under the new EU data strategy. The Commission has already had a number of consultations on this topic, which I have been involved in since 2018, and it's far from settled which way it will ultimately go. In any event, the current product liability regime does not match your expectation, which ind

    • It does not.

      Data protection laws only mean one big thing: you cannot hand out data about private persons to someone else.

      If you are, e.g., a health insurer and want to run any algorithms on the medical data of your clients, no one cares. They start to care if the insurer "abuses" the data.

  • This kind of thing has a pedigree of more than 30 years. Except now the boogieman Japan has been replaced by China:

    "In 1982, Japan launched its Fifth Generation Computer Systems project (FGCS), designed to develop intelligent software that would run on novel computer hardware. As the first national, large-scale artificial intelligence (AI) research and development (R&D) project to be free from military influence and corporate profit motives, the FGCS was open, international, and oriented around public

    • by gweihir ( 88907 )

      Yes, very much so. "Intelligent" computers have been promised for a long, long time now. There is still no indication that we will ever get them. But it is a useful idea when you want to separate corporations and states from their money.

  • by Zdzicho00 ( 912806 ) on Tuesday March 02, 2021 @11:21AM (#61115882)

    The USA stands no chance of beating China in anything important with such an education system:
    https://www.wgbh.org/news/educ... [wgbh.org]

  • AI dominates you!
  • Why did Eric Schmidt and his social media ilk spend the past several decades coordinating research into AI with China? Google helped to build the monitoring schemes now weaponized by PRC to create and manage social credit scores. Why wasn't he alarmed then?
  • Any AI that would be useful would have to be out in the wild. If so, then it would be potentially visible to anyone who copies and analyzes it. Therefore, any gains by the US, Chinese, Mexicans, Welsh, or whoever wouldn't be much of an advantage.

    For the billions the US has spent working on greater fuel efficiency, do you think the rest of the world is still driving gas guzzlers trying to catch up?

  • I'd like a clear definition of the term "AI" before the discussion of falling behind/spending money. Isn't that basic science?
    • by gweihir ( 88907 )

      "AI" that exists: automation. Often using statistically "trained" classifiers. Has about as much insight and intelligence as a piece of bread.
      "AI" that people are trying to sell: Artificial General Intelligence (AGI). Does not exist, may never exist.

    • The clear definition?

      That is easy: having an algorithm that is as good as a human at decision making, or better, especially at recognizing things a human can, such as sound, pictures, video, and other senses.

      But you could have googled that. Too bad you are too lazy or too silly to do that.

      But for your education: current AI systems that actually exist are specialized in a single task or a small number of tasks. E.g. the Google AI that can play Go cannot play chess, and vice versa.

      AI is split into about a dozen

      • by kackle ( 910159 )
        The person who replied before you disagrees. And unless there is a consensus on the term, there is a "problem". "AI" is already being used to describe everyday devices that are nowhere near as intelligent as a human, so... Giving billions of dollars for research for the generic category "AI" is potentially wasteful, if not everyone agrees on what AI is.

        @gweihir I politely replied to your post earlier, but Slashdot has a habit of losing posts now, so maybe it needs some AI.
        • Well, as I studied at a university and had AI as one topic: I gave you the definition we used at that time :D

          And I doubt there is a newer (different) and, more importantly, more relevant definition.

          But to give you a hint about what we are talking about, an AI scientist once said at a congress: "There are problems a computer can never solve as well as a human." And another scientist stood up and said: "Tell me the problem and I'll build you a computer and the software that can!"

          The software in your phone that makes the came

  • by oh_my_080980980 ( 773867 ) on Tuesday March 02, 2021 @01:36PM (#61116476)
    No one is stopping Google, Microsoft, Oracle and any other tech company from investing in AI.

    You have the money.

    You wanted free markets.

    You don't want government regulations.

    This also means you don't get to complain that you need taxpayer money so you don't have to dip into the profits you promised your investors.

    Put up or shut up assholes.
  • Considering that the PRC has its nationals embedded in most or all of the AI research going on in the US, and that these nationals funnel data back, the US does not stand a chance.
    They may not want to do the CPC's bidding, but the CPC knows how to leverage them through family still in China.

