AI EU Government

Europe Proposes Strict Rules For Artificial Intelligence (nytimes.com) 61

An anonymous reader quotes a report from The New York Times: The European Union unveiled strict regulations on Wednesday to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory. The draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. It would also cover the use of artificial intelligence by law enforcement and court systems -- areas considered "high risk" because they could threaten people's safety or fundamental rights.

Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes. The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies that have poured resources into developing artificial intelligence, including Amazon, Google, Facebook and Microsoft, but also scores of other companies that use the software to develop medicine, underwrite insurance policies and judge credit worthiness. Governments have used versions of the technology in criminal justice and the allocation of public services like income support. Companies that violate the new regulations, which could take several years to move through the European Union policymaking process, could face fines of up to 6 percent of global sales.

The European Union regulations would require companies providing artificial intelligence in high-risk areas to provide regulators with proof of its safety, including risk assessments and documentation explaining how the technology is making decisions. The companies must also guarantee human oversight in how the systems are created and used. Some applications, like chatbots that provide humanlike conversation in customer service situations, and software that creates hard-to-detect manipulated images like "deepfakes," would have to make clear to users that what they were seeing was computer generated. [...] Release of the draft law by the European Commission, the bloc's executive body, drew a mixed reaction. Many industry groups expressed relief that the regulations were not more stringent, while civil society groups said they should have gone further.

This discussion has been archived. No new comments can be posted.

Europe Proposes Strict Rules For Artificial Intelligence

Comments Filter:
  • by Ostracus ( 1354233 ) on Wednesday April 21, 2021 @08:10PM (#61299122) Journal

The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream.

    *grumble* Darn pattern-matching. Always causing problems.

    • Inequality is preferable. The only way true "equality" would exist is if everyone were equally poor. As long as resources are finite on this planet that will be the case.

      • by Gravis Zero ( 934156 ) on Wednesday April 21, 2021 @10:29PM (#61299382)

        The only way true "equality" would exist is if everyone were equally rich.

        FTFY. That's literally what equality is: having the same amount. You say poor but there are more than enough resources for everyone to feel plenty rich. However, there would be hundreds of billionaires that would just be soooo upset. I guess it's worth letting millions starve and die so that we don't hurt the feelings of billionaires, right?

        • You ignore reality. Resources are finite. There is not enough of anything in existence to satisfy the demand if it were free.

          • Resources are finite.

            So is the human population.

            There is not enough of anything in existence to satisfy the demand if it were free.

            This is why you allocate resources, so demand does not outstrip supply.

            • As long as I'm the one allocating resources I have no problem with this.
              • What you are failing to realize is that allocations are equal among individuals. Then again, maybe you can't count to ten and I'm wasting my time.

                • by ebyrob ( 165903 )

                  Obligatory:

                  There are 10 types of people in this world. Those who can count in binary and those who cannot...

                • What you are failing to realize is that allocations are equal among individuals. Then again, maybe you can't count to ten and I'm wasting my time.

                  Sure, buddy, all allocations are equal. I wouldn't manage it any other way. No true community organizer would. *nod nod* *wink wink* You rascal.

          • by jythie ( 914043 )
If we are going to talk 'reality', then it is important to look at how we view 'rich' and 'poor' vs how resources are currently distributed. Given that 2% of the population have 98% of the wealth (or 5/95, or whatever shorthand stat you want to pull from), an equal distribution of those finite resources would likely result in the vast majority of the population enjoying wealth beyond their wildest dreams. Besides, who said anything about everything being 'free'? The topic is equal distribution.
          • resources are finite but production capability is essentially infinite. the latter is more closely tied to GDP than the former.

        • Maybe if you think the only meaningful thing in life is how much stuff each person has, but I think you'll find it is not.
          • Maybe if you think the only meaningful thing in life is how much stuff each person has

            What matters most in life is staying alive, followed closely by not having to worry about staying alive. That's where we are failing people because people are not only worried but actually dying easily preventable deaths because of money. Guess we should start raising taxes on billionaires and corporations, right?

      • Inequality is preferable. The only way true "equality" would exist is if everyone were equally poor. As long as resources are finite on this planet that will be the case.

        Agreed. Anytime I hear the word "equality", I reach for my revolver.

Equality is unachievable in human affairs. You'll always have inequality, and you'll always have some asshole using it to make mischief. Get the notion out of your head, and you still might be able to save your civilization.

        • What kind of equality? Equality before the law? Equally endowed with natural rights? Equally free? Equally endowed with a soul (or whatever you want to call it) and the value it entails?

          Don't let the Marxists define equality only in terms of possessions, they are insincere and their agenda is malicious.

          • how about equality in terms of determining your own social class instead of having it forced on you? Those pesky Marxists always rocking the boat.

        • by whitroth ( 9367 )

          Shut up, parasite. Telling us you reach for your revolver means that you want what you want, and don't care about whether someone else can survive.

          Why should *anyone* else care what you want, if you don't care about others?

      • Equality in regards to what? There are other aspects of life than resource allocation. People can be treated as social equal regardless of their personal wealth. They can be treated equally by the law. They can have equal rights and equal liberty, equal authority to make political and economic decisions. Equal opportunities to make what they will of their own lives.

True equality shouldn't mean equal economic outcomes. Everything I listed above is incompatible with and preferable to equalized outcomes.

    • *grumble* Darn pattern-matching. Always causing problems.

It's a tool like any other: it can be used for good or evil. Consider explosives: they can be used to tear down old derelict buildings, or they can be used to kill scores of people in an instant. If you ran a for-profit hospital then you could get a lot of business by just blowing up some crowds. The purpose here is to ensure similarly vile schemes are illegal.

    • we shouldn't let our bad pattern matching rule over our society. or opaque algorithms when we have no ability to inspect them and no legal basis to dispute and overturn their results. machine learning can be the ultimate black box, and too many are enamored with technology to question what place it has in our world or set any limits on its application.

  • Neuromancer Moment (Score:4, Insightful)

    by warewolfsmith ( 196722 ) on Wednesday April 21, 2021 @09:02PM (#61299222)
    Here come the Turing Police.
    • by whitroth ( 9367 )

      Gee, didn't even read the slashdot article?

      Hope you enjoy being denied a job, or healthcare, because the "AI" decided that you didn't fit the right pattern.

  • Do I have to get permission for doing gradient descent and SVD inside it, since that's what a lot of "AI" is?

    • Do I have to get permission for doing gradient descent and SVD inside it, since that's what a lot of "AI" is?

      It's the EU. They are drawing up a regulatory framework for cellular division as we speak.
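For readers wondering what the parent means: the "gradient descent" behind much of today's "AI" really is a mundane numerical routine. A minimal sketch, fitting a single slope parameter by least squares (all numbers here are invented for illustration):

```python
# Minimal gradient descent: find the slope w that minimizes
# the squared error sum((w*x - y)^2) over a few data points.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated with true slope 2

w = 0.0    # initial guess
lr = 0.01  # learning rate (step size)
for _ in range(1000):
    # derivative of the squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys))
    w -= lr * grad  # step downhill

print(w)  # converges to the true slope, 2.0
```

That's the whole trick, scaled up to billions of parameters; it is hard to see where in a loop like this a regulator would draw the line.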

  • by stikves ( 127823 ) on Wednesday April 21, 2021 @09:37PM (#61299284) Homepage

    We have a tendency to confuse many things as "AI". To be fair, I do that in day to day life as well.

    AI is for planning future actions. It makes reasonable decisions based on a world state. Like choosing the next move in chess, plotting a course for navigation, and yes of course making a decision for giving out a loan.

    So a simple "decision tree" is technically AI.
    So too a simple if statement, when it makes a decision like:

    if (credit_score < threshold)
        return "don't give loan"

    On the other hand we usually conflate these with Machine Learning (using statistical data to learn parameters, here it would be the "credit_score"), or Data Mining (retrieving high level information, like likelihood of default based on credit scores using historic data, which can be used to calculate the threshold).
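To make the distinction concrete, here is a toy sketch of the Machine Learning half: "learning" the credit_score threshold from historic data rather than hand-picking it. The data and the brute-force search are invented purely for illustration:

```python
# Historic (credit_score, defaulted) pairs -- entirely made up.
history = [(520, True), (580, True), (610, False), (640, True),
           (690, False), (720, False), (780, False)]

def errors(threshold):
    # Count borrowers the rule "deny if score < threshold" gets wrong:
    # denying someone who would have repaid, or approving a defaulter.
    wrong = 0
    for score, defaulted in history:
        denied = score < threshold
        if denied != defaulted:
            wrong += 1
    return wrong

# "Learn" the threshold: try each observed score as a candidate cutoff
# and keep the one with the fewest mistakes on the historic data.
best = min((score for score, _ in history), key=errors)
print(best, errors(best))  # -> 610 1
```

The if statement that finally uses `best` is the "AI" making the decision; the search above is the learning. The quality of both depends entirely on the historic data fed in, which is the point made below.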

AI by itself is no safer or more dangerous than its external parameters and the underlying data. You can make the most "perfect" system to fix the nose-up action in the Boeing 737 MAX, but a faulty sensor will cause it to crash planes (and actually kill lots of people). You can make the most "perfect" system to give out prison parole decisions, but the warden entering the wrong information will lead it to make the wrong decisions.

    • Certainly, the definition of "AI" has become somewhat of a marketing term. But there is some substance behind the hype.

      Things that we did not consider 20 years ago are becoming viable. Certainly, a total surveillance state is on the horizon.

      Regardless of legislation, more and more decisions that affect us will be made by semi-intelligent computers. This can be a good thing, people do not necessarily make good decisions. But we are heading into very unknown waters.

      Ultimately, I think the computers will

It's possible that the EU might legislate AI, but think about it: do EU bureaucrats even know what AI is? We debate it here and we are better informed. Can they define it in proper legal language so that it cannot be confused with 'machine learning' or other crude training methods? And when they make "Strict Rules", how will they be enforced?

    The desperation for headlines often causes media to raise alarm or hope about things that might happen (when it snows in hell). I am particularly (not) amused to see he

  • "areas considered high risk because they could threaten people's safety or fundamental rights" ...with a special carve out for national security.

    So AI is so dangerous it could threaten people's safety, but we might need it to keep people safe?

Big Brother is watching you!
  • If there are all these rules about how AI can be used, then maybe software makers will start calling it what it actually is: statistical pattern analysis.

Seriously, the problem with these rules will be defining exactly what "AI" is, because it could be defined as anything from something done "on a computer" to "using neural networks" to some mystical non-existent thing that only exists in movies.

  • Seriously, between GDPR and these regulations, why would a startup-focused engineer stay in the EU? So much easier to build a company elsewhere.

    • by Pimpy ( 143938 ) on Thursday April 22, 2021 @05:02AM (#61300014)

Because they are interested in developing privacy-preserving technologies in companies whose business models don't depend on end-user exploitation to be viable? Why would anyone want to move to a country where any startup job equates innovation with becoming an apologist for the ad industry, in a country that doesn't consider privacy a fundamental right? As the GDPR has shown, the EU is pushing ahead and many countries are following suit; the US is just behind the curve. The same will happen again with ethical, privacy-preserving AI.

      • by DarkOx ( 621550 )

Right... Let's face it: the greater latitude to try stuff, even if it hurts a good number of people along the way, that exists both culturally and usually legally in the US has led to a lot of innovation and rapid growth, and it's been the rest of the world that has got with our program most of the time, not the other way around.

        There are of course notable exceptions.

        • by Pimpy ( 143938 )

          It depends on the area of innovation. In the area of privacy, digital rights, ethics, etc. the US is not even a distant participant. This is primarily because the US has this strange idea that regulation of any sort == a barrier to innovation, as opposed to considering how regulation itself can act as a trigger for innovation, by requiring both new technologies and business models to emerge.

          As most Americans that like to trot this line out have absolutely no idea how the regulatory process works in most cou

  • ... for artificial intelligence to do something that natural intelligence does?

A question of scale doesn't really hold water, IMO... if doing something on a large scale can be agreed upon to be wrong, then doing it on a smaller scale does not make it necessarily right, only less wrong. If there is actually a threshold at which the wrongness becomes unacceptable, then it seems that the legal prohibition should be set at that level of the activity, rather than narrowly targeting any amount of that activity when performed by A.I.

    Also, laws like this which try to propose limits on a technological field are invariably not future proof. As man-machine neural interfaces become a reality, at what point does artificially augmented human intelligence qualify as simply artificial intelligence? If there is no level at which that happens, then why should it matter if a machine is doing something that a suitably augmented human can do?

  • Just like all other laws in Europe, they will provide loopholes that make it useless and simply a burden.

    * The anti-cookie law: everyone is still being tracked
    * Net neutrality: they made a loophole that you can block ports for security reasons, and there are providers that block all port forwarding
    * The legal system itself: the Netherlands made a law in 2015 that allows all of Dutch government to stand up in court, pledge an oath, and then lie and not be prosecuted for perjury. A loophole for government, thus. Strang

  • Europe doesn't produce new stuff.

    It basically coasts and lets others do the work.

    They're just some tariffs away from being completely helpless.
  • I think it took them too long to finally offer a "human-centric" approach that both boosts the tech companies but also keeps them from threatening society by setting strict privacy laws.
