
EU Sets Benchmark For Rest of the World With Landmark AI Laws (reuters.com)

An anonymous reader quotes a report from Reuters: Europe's landmark rules on artificial intelligence will enter into force next month after EU countries endorsed on Tuesday a political deal reached in December, setting a potential global benchmark for a technology used in business and everyday life. The European Union's AI Act is more comprehensive than the United States' light-touch, voluntary-compliance approach, while China's approach aims to maintain social stability and state control. The vote by EU countries came two months after EU lawmakers backed the AI legislation, drafted by the European Commission in 2021, after making a number of key changes. [...]

The AI Act imposes strict transparency obligations on high-risk AI systems while such requirements for general-purpose AI models will be lighter. It restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes, prevention of terrorist attacks and searches for people suspected of the most serious crimes. The new legislation will have an impact beyond the 27-country bloc, said Patrick van Eecke at law firm Cooley. "The Act will have global reach. Companies outside the EU who use EU customer data in their AI platforms will need to comply. Other countries and regions are likely to use the AI Act as a blueprint, just as they did with the GDPR," he said, referring to EU privacy rules.

While the new legislation will apply in 2026, bans on the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet or CCTV footage will kick in six months after the new regulation enters into force. Obligations for general-purpose AI models will apply after 12 months, and rules for AI systems embedded into regulated products after 36 months. Fines for violations range from 7.5 million euros ($8.2 million) or 1.5% of turnover to 35 million euros or 7% of global turnover, depending on the type of violation.
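
To put those caps in perspective, here is a minimal Python sketch of how a percentage-of-turnover cap compares with the flat euro cap for a large company. The "whichever is higher" rule and the example turnover figure are assumptions for illustration (borrowed from the structure of other EU fine regimes such as the GDPR), not details stated in the summary above.

    # Illustrative only: the maximum fine under a tier capped at
    # "X euros or Y% of global annual turnover" (assumed: whichever is higher).
    def fine_cap(flat_cap_eur: float, pct_of_turnover: float, turnover_eur: float) -> float:
        """Return the larger of the flat cap and the percentage of global turnover."""
        return max(flat_cap_eur, pct_of_turnover * turnover_eur)

    # Hypothetical company with 100 billion euros in annual global turnover.
    turnover = 100e9
    print(f"Top tier (35M euros or 7%):       {fine_cap(35e6, 0.07, turnover):,.0f} euros")
    print(f"Bottom tier (7.5M euros or 1.5%): {fine_cap(7.5e6, 0.015, turnover):,.0f} euros")
    # -> 7,000,000,000 euros and 1,500,000,000 euros respectively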


Comments Filter:
  • by zenlessyank ( 748553 ) on Tuesday May 21, 2024 @09:27PM (#64489187)

    Money dictates actions, not laws.

    • Money and power.

      They can set any laws they want. Will non-western nations follow them? Absolutely not. Will the US follow them? Maybe only US companies' EU subsidiaries, within the physical bounds of the EU.

      Useless virtue signaling.

      • Re: (Score:3, Interesting)

        by gweihir ( 88907 )

        Hahahaha, no. What will happen is that all AI companies will follow these laws and make very sure (after the first large fines) that they keep to them. But the sheep in the US and the rest of the world will not benefit; only EU citizens, and those whose governments are smart enough to follow the EU's lead, will. Have fun, suckers.

        • Re: (Score:2, Interesting)

          Uh, OK, so AI in the EU will be limited, the rest of the world won't be, and... suckers? Uh, what?
          Okey dokey

          • by gweihir ( 88907 )

            You think unfettered capitalism using AI is an advantage? Let's talk about that again in a few years. And in the meantime, good luck!

          • I'll give you an example of European "limited stuff" versus "American freedom" (it came up yesterday on /., so here goes...)

            Banks/Mastercard in the US are looking to use facial recognition to make payments. You won't even need your card or phone - you just look at the camera, and your shopping gets paid for. Amazingly convenient, and let's be honest, it looks pretty "futuristic" and cool.

            Europe won't be getting this "cool" stuff though. Why? Because here, banks are liable for fraud on their customer accounts

    • by sg_oneill ( 159032 ) on Wednesday May 22, 2024 @01:24AM (#64489499)

      Money dictates actions, not laws.

      That's precisely why it MIGHT work. The EU does not mess around when it comes to financial penalties for privacy and tech-monopoly matters. They have no reservations about throwing a billion-dollar haymaker into the groin of techbros who think they are above the law.

      • by mjwx ( 966435 )

        Money dictates actions, not laws.

        That's precisely why it MIGHT work. The EU does not mess around when it comes to financial penalties for privacy and tech-monopoly matters. They have no reservations about throwing a billion-dollar haymaker into the groin of techbros who think they are above the law.

        Meanwhile the ROTW is asking "where did all my privacy go" and "why does everyone seem to have my personal data".

    • Restricted to searches for people accused of "the most serious crimes"... the most serious crime is exposing globalist lies.
    • Hence the fines...

    • by AmiMoJo ( 196126 ) on Wednesday May 22, 2024 @05:28AM (#64489809) Homepage Journal

      If you read the summary you will note that a lot of the rules apply to governments, e.g. using surveillance. Governments are typically not motivated by revenue when doing security work.

      Not that EU fines for businesses are anything to sniff at. For GDPR they are up to 4% of global revenue (not profit), which is something even the big tech giants have shown themselves unwilling to ignore or accept as the cost of doing business. Even Apple rolled over on third-party iOS app stores.

    • by Rei ( 128717 )

      And big companies don't want to be fined into oblivion, so yes, it matters. And with rules like:

      untargeted scraping of facial images from the internet

      You might as well have just entirely banned scraping from the internet. Who wants a diffusion model that's great at everything except has no bloody clue what a face looks like?

      • Re: (Score:2, Interesting)

        by Rei ( 128717 )

        They had a chance to do this right. They could have, say, mandated a series of tests (with rules they could update at regular intervals if they proved insufficient) for whether models are memorizing and leaking data that they shouldn't; a rough sketch of what such a test might look like follows at the end of this thread. But as always, leave it to the EU to legislate methods rather than outcomes.

        • Not to worry. The EU laws are only one attempt. Other countries with large economies will get to mandate a series of their own tests too. In general, global companies like to follow the union of legal constraints from around the world, as it lowers their cost of doing business.
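
        The rough sketch referenced above: a minimal illustration of the kind of memorization/regurgitation test described, assuming a hypothetical 'generate' callable that stands in for whatever text-generation API a model under test exposes. Nothing here comes from the AI Act itself.

            # Hypothetical sketch: prompt a model with the start of a document it may
            # have trained on and check whether it regurgitates the rest verbatim.
            from typing import Callable

            def leaks_verbatim(generate: Callable[[str], str], document: str,
                               prefix_chars: int = 200, match_chars: int = 100) -> bool:
                """True if the model reproduces the document's continuation word for word."""
                prefix, continuation = document[:prefix_chars], document[prefix_chars:]
                return continuation[:match_chars] in generate(prefix)

            # A regulator-style battery would run this over many known documents and
            # flag models whose verbatim-reproduction rate exceeds an agreed threshold.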
  • by WaffleMonster ( 969671 ) on Wednesday May 22, 2024 @12:16AM (#64489433)

    Requirements for reliability, testing, nondiscrimination, etc. already exist for the covered domains. Simply creating a duplicate set of regulations that adds "with AI" accomplishes nothing but additional busywork and compliance costs, with no commensurate public benefit.

    Thanks to an expansive definition, "AI" covers basic computer algorithms, not just the generative AI that most of the public thinks of as AI.

    Attacks on open-source models, based on an illogical and baseless operations-count formulation provided by an industry with a direct conflict of interest, will only help ensure that technology and power are aggregated into the hands of the few, further exacerbating existing trends of technology-induced inequality and the commensurate corruption risks.

    The only thing I support in the EU act is the constraints on the use of AI against citizens. The legislation is a mistake that should not be repeated.

    • by AmiMoJo ( 196126 )

      Clearly whatever requirements you are referring to are inadequate, because many commercial AI systems have proven to be illegally biased.

      AI is something of a special case too, because nobody understands how it really makes decisions. You usually can't ask it to explain itself in any meaningful way either. GDPR actually recognizes that for data processing, where individuals have the right to have automated decisions reviewed by a human, and explained. But AI expands it much further and to the point where it makes sense to require mandatory testing by the vendor.

      • by DarkOx ( 621550 )

        Clearly whatever requirements you are referring to are inadequate, because many commercial AI systems have proven to be illegally biased.

        Clearly you are wrong; the requirements are adequate, because whatever conduct you are against is already illegal and provably so. You said it yourself.

      • Clearly whatever requirements you are referring to are inadequate, because many commercial AI systems have proven to be illegally biased.

        I ... DarkOx beat me to it...

        AI is something of a special case too, because nobody understands how it really makes decisions. You usually can't ask it to explain itself in any meaningful way either. GDPR actually recognizes that for data processing, where individuals have the right to have automated decisions reviewed by a human, and explained. But AI expands it much further and to the point where it makes sense to require mandatory testing by the vendor.

        In what regulated domain were you allowed to punt to unexplainable, magical, non-auditable black boxes in the first place? Where were the standards that required no testing? Isn't it better to address shortcomings in specific domains rather than impose duplicative requirements? After all, thanks to the expansive definition, many mundane computer algorithms are now also subject to this law.

        There will also be requirements for those tests, similar to the CE mark for product safety. Can't just rely on testing done in China, it has to be proven in the EU, which creates legal liability for the tester.

        Do you mean to tell me the EU doesn't already have requirements for product safety? Are there not any p

  • by Opportunist ( 166417 ) on Wednesday May 22, 2024 @02:01AM (#64489519)

    They decided that AI content needs to be labeled. That's pretty much the big whoop here.

    At the same time, anyone who has ever had to deal with it knows that it's virtually impossible to tell AI-generated content from human-generated content. Hell, even AI cannot do it, which becomes more and more of a problem when training AI (and is, I'm sure, one of the motivations for the whole deal), because more and more of the training material for new AIs is hallucinations from former AI generations.

    And you can't manually vet it. The amount of content AI can generate easily outpaces anything humans could audit and vet.

    That law is toothless.

    • That law is toothless.

      It certainly isn't. Quite the opposite, given its percentage-of-turnover penalty. Which means that, without any reliable way to rule out false positives, the law will be used to shake down anyone who's not in the government's favor.

      The EU is hellbent on making sure no technological advancement happens in their countries. Oh, well. They'll be trying to find a way to get access to the crap from others soon enough....

  • If the EU is going to require AI products to be certified (which will probably be expensive in terms of both time and money, especially if the certification has to be done by EU-accredited certifiers), it should cut down on the number of products that have frivolously added "AI" to their marketing and product descriptions.

"To take a significant step forward, you must make a series of finite improvements." -- Donald J. Atwood, General Motors

Working...