Banned and High-risk AI Use

The new EU Artificial Intelligence (AI) Act prohibits the use of AI for specific purposes and requires high-risk uses to pass a conformity assessment and carry the EU declaration of conformity (CE marking).

[Image: Banned AI logo – the letters 'AI' in large font inside a red circle on a white background, with the EU logo top right]

The new EU Artificial Intelligence (AI) Act has recently caused a bit of a stir in the industry. The purpose of this Act is to ensure that the EU's core values and fundamental rights are enshrined in the new technology from the outset. But some have voiced concern that innovation will be stifled.

The EU is part way through its Digital Decade strategy, in which it wants to embed EU principles and rights into the design, operation and use of technology. This Act, on which political agreement has been reached but whose final text is still in draft, is designed to encourage business investment by building solid legal foundations on which to operate, and to promote consumer confidence through trustworthy technology.

“This is a historical achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.” – Carme Artigas, Spanish Secretary of State for Digitalisation and Artificial Intelligence.

More details on the EU Artificial Intelligence (AI) Act can be found here.

The EU Approach

The Act defines three categories of AI system: prohibited uses of AI, high-risk AI systems and minimal-risk AI systems.

Banned AI Use

The Act bans the following uses of AI:

  1. Subliminal techniques likely to cause harm
  2. Exploitation of vulnerabilities of a specific group of persons likely to cause harm
  3. Public authority use of AI to evaluate or classify the trustworthiness of individuals
  4. Law enforcement use of real-time biometric identification except under specific circumstances and requiring judicial authorisation

Failure to follow a prohibition carries a fine of up to €30 million or 6% of annual global turnover, whichever is higher. Exact figures are still to be confirmed.
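As a quick worked illustration of how the turnover-based figure scales – assuming, as in the draft text, that the higher of the fixed amount and the percentage applies – the function and example figures below are ours, not taken from the Act:

    # Illustration only: which of the two fine amounts applies, assuming the
    # higher of the fixed amount and the turnover percentage is used.
    def applicable_fine(turnover_eur: float, fixed_eur: float, pct: float) -> float:
        return max(fixed_eur, pct * turnover_eur)

    # Prohibited-use tier: €30 million or 6% of annual global turnover
    print(applicable_fine(100e6, 30e6, 0.06))    # €30m: 6% of €100m is only €6m
    print(applicable_fine(1_000e6, 30e6, 0.06))  # €60m: 6% of €1bn exceeds €30m

The same logic covers the high-risk tier described later by substituting €20 million and 4%.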

High-risk AI

Annex III of the Act defines the following uses as high-risk:

  1. Safety component of a product
  2. Biometric identification
  3. Management and operation of critical infrastructure
  4. Education and vocational training
    1. Access to services
    2. Student assessment
  5. Employment, workers management and access to self-employment
    1. Recruitment
    2. Promotion or termination
  6. Essential private and public services
    1. Access to benefits
    2. Creditworthiness or credit scoring
    3. Access to emergency services
  7. Law enforcement
    1. Risk of individual offending or reoffending
    2. Emotional state assessments
    3. Deep fake identification
    4. Investigation or prosecution of criminal offences
    5. Criminal offence prediction
    6. Profiling
    7. Crime analytics
  8. Migration
    1. Emotional state assessments
    2. Risk assessments
    3. Documentation verification
    4. Examination of applications
  9. Justice and democratic processes
    1. Searching for and interpreting the law, and applying the law to the facts of a case

High-risk regulations

The regulations for high-risk AI systems are comprehensive. Here are some of the key clauses:

  1. A risk management system must be in place to manage the AI system across its entire lifecycle
  2. AI systems must be tested to ensure conformance with the regulation
  3. Risks relating to children are specifically called out
  4. Data and data governance controls are required for training, validation and testing data
  5. Technical documentation must be maintained
  6. Records must be kept for 10 years and be accessible via an Application Programming Interface (API) or remote access
  7. Systems must be designed to support skilled human oversight, including a ‘kill-switch’ (see the sketch after this list)
  8. Systems must be cybersecure and resilient to attack
  9. Use of the system must be registered in an EU database
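To make clause 7 concrete, here is a minimal sketch of what human oversight with a ‘kill-switch’ might look like in code. This is an illustration only: the class names, the model interface and the logging approach are assumptions, not requirements taken from the Act.

    import logging
    import threading

    logging.basicConfig(level=logging.INFO)

    class KillSwitch:
        # Lets a human overseer halt the AI system immediately at any time.
        def __init__(self) -> None:
            self._halted = threading.Event()

        def activate(self, reason: str) -> None:
            logging.warning("Kill-switch activated: %s", reason)
            self._halted.set()

        def is_active(self) -> bool:
            return self._halted.is_set()

    class OverseenAISystem:
        # Hypothetical wrapper that checks the kill-switch before every decision.
        def __init__(self, model, kill_switch: KillSwitch) -> None:
            self.model = model
            self.kill_switch = kill_switch

        def predict(self, inputs):
            if self.kill_switch.is_active():
                raise RuntimeError("Halted by human overseer")
            decision = self.model(inputs)
            # Log each decision so it can be retained and retrieved for audit.
            logging.info("Decision: inputs=%r output=%r", inputs, decision)
            return decision

    # Example: a trivial stand-in model, halted mid-operation by a reviewer.
    switch = KillSwitch()
    system = OverseenAISystem(model=lambda x: x > 0.5, kill_switch=switch)
    system.predict(0.7)                       # returns True; decision is logged
    switch.activate("reviewer flagged bias")  # human intervenes
    # A further call to system.predict() would now raise RuntimeError.

The logging line also hints at clause 6: each decision is recorded so that it can later be made available for audit.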

Failure to follow the regulations for high-risk uses carries a fine of up to €20 million or 4% of annual global turnover, whichever is higher. Exact figures are still to be confirmed.

Timeline

Once the text of the Act has been ratified by the EU Parliament and Council (expected in Q2/Q3 2024), it will apply two years after entry into force. Some aspects, such as the prohibited use cases, may apply sooner.

Conclusion

We like the way the EU approaches the regulation of technology, and this Act is another example. The EU bases its approach on its core principles and applies regulation in a risk-managed way. This should make future revisions and adjustments straightforward as the technology, its applications and its impacts develop.

All regulation has consequences for innovation and investment, but when balanced against core principles, the benefits outweigh the downsides for the future society we want to live in.

How can IT Security Locksmith help?

Companies should already have a register of the systems and services they use. This register should be expanded to identify systems that utilise AI. Those systems should then be reviewed to determine their risk classification – prohibited, high-risk or minimal-risk – and next steps should be determined as part of a risk treatment plan (a simple sketch of such a register follows below). Procurement processes should also be updated to classify systems with AI at the due diligence stage, prior to purchase, contract signing and use.
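As a starting point, an AI system register extended with the Act's three risk categories could be as simple as the sketch below. The field names and the example entries are illustrative assumptions, not a prescribed format.

    from dataclasses import dataclass
    from enum import Enum

    class RiskClass(Enum):
        PROHIBITED = "prohibited"
        HIGH_RISK = "high-risk"
        MINIMAL_RISK = "minimal-risk"

    @dataclass
    class AISystemEntry:
        name: str
        supplier: str
        uses_ai: bool
        risk_class: RiskClass
        next_steps: str  # agreed as part of the risk treatment plan

    # Example register entries (illustrative only)
    register = [
        AISystemEntry("CV screening tool", "ExampleHR Ltd", True,
                      RiskClass.HIGH_RISK, "Verify EU declaration of conformity"),
        AISystemEntry("Email spam filter", "ExampleMail Ltd", True,
                      RiskClass.MINIMAL_RISK, "No action required"),
    ]

    for entry in register:
        print(f"{entry.name}: {entry.risk_class.value} -> {entry.next_steps}")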

If you need help with classifying use of AI or development of AI risk treatment plans, please email: contact@itsecuritylocksmith.co.uk.

#BannedAI #ProhibitedAI #AIKillSwitch