Banned and High-risk AI Use
The new EU Artificial Intelligence (AI) Act prohibits the use of AI for specific purposes and requires high-risk uses to pass a conformity assessment and carry the EU declaration of conformity (CE marking).
The new EU Artificial Intelligence (AI) Act has recently caused a bit of a stir in the industry. The purpose of this act is to ensure that core values and fundamental rights of the EU are enshrined in the new technology from the outset. But some have voiced concern that innovation will be stifled.
The EU is part way through its Digital Decade strategy, in which it aims to embed EU principles and rights into the design, operation and use of technology. The Act, on which political agreement has been reached but whose text is still at the draft stage, is designed to encourage business investment by building solid legal foundations on which to operate, and to promote consumer confidence through trustworthy technology.
“This is a historical achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.” – Carme Artigas, Spanish Secretary of State for Digitalisation and Artificial Intelligence
More detail on the EU Artificial Intelligence (AI) Act is available from the European Commission.
The EU Approach
The Act defines three categories of AI system: prohibited uses of AI, high-risk AI and minimal-risk AI systems.
Banned AI Use
The Act prohibits the use of AI in the following cases:
- Subliminal techniques likely to cause harm
- Exploitation of vulnerabilities of a specific group of persons likely to cause harm
- Public authority use of AI to evaluate or classify the trustworthiness of individuals (‘social scoring’)
- Law enforcement use of real-time biometric identification except under specific circumstances and requiring judicial authorisation
Failure to comply with a prohibition carries a fine of up to €30 million or 6% of annual global turnover, whichever is higher (exact figures to be confirmed in the final text). For example, a company with an annual global turnover of €1 billion could face a fine of up to €60 million.
High-risk AI
Annex III of the Act defines the following areas of use as high-risk:
- Safety component of a product
- Biometric identification
- Management and operation of critical infrastructure
- Education and vocational training:
  - Access to education and vocational training
  - Student assessment
- Employment, workers management and access to self-employment:
  - Recruitment
  - Promotion or termination
- Access to essential private services and public services:
  - Access to benefits
  - Creditworthiness or credit scoring
  - Access to emergency services
- Law enforcement:
  - Risk of individual offending or reoffending
  - Emotional state assessments
  - Deep fake identification
  - Investigation or prosecution of criminal offences
  - Criminal offence prediction
  - Profiling
  - Crime analytics
- Migration, asylum and border control:
  - Emotional state assessments
  - Risk assessments
  - Document verification
  - Examination of applications
- Justice and democratic processes:
  - Searching and interpreting the law and applying the law
High-risk regulations
The regulations for high-risk AI systems are comprehensive. Key requirements include:
- A risk management system that operates across the AI system's entire lifecycle
- Testing of AI systems to demonstrate conformity with the regulation
- Specific attention to risks relating to children
- Data and data governance controls for training, validation and testing data
- Technical documentation that is maintained and kept up to date
- Record keeping for 10 years, accessible over an Application Programming Interface (API) or remote access
- Design that supports skilled human oversight, including a ‘kill-switch’ (see the sketch below)
- Cybersecurity measures that are resilient to attack
- Registration of the use in an EU database
Failure to comply with the regulations for high-risk uses carries a fine of up to €20 million or 4% of annual global turnover, whichever is higher (exact figures to be confirmed in the final text).
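To make the human-oversight and record-keeping clauses more concrete, here is a minimal Python sketch of one possible pattern: a wrapper that logs every prediction and gives a human operator a ‘kill-switch’. All names here are hypothetical illustrations; the Act does not prescribe any particular implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class AuditRecord:
    """One timestamped log entry; high-risk systems must keep retrievable records."""
    timestamp: str
    event: str
    detail: str


class OverseenAISystem:
    """Wraps a model behind a human-controlled kill-switch and an audit log."""

    def __init__(self, model):
        self.model = model
        self.halted = False
        self.audit_log: list[AuditRecord] = []

    def _record(self, event: str, detail: str = "") -> None:
        # Append a timestamped entry; a real system would use durable storage.
        self.audit_log.append(
            AuditRecord(datetime.now(timezone.utc).isoformat(), event, detail)
        )

    def kill_switch(self, operator: str, reason: str) -> None:
        # A human overseer halts the system; the action itself is logged.
        self.halted = True
        self._record("kill_switch", f"by {operator}: {reason}")

    def predict(self, features):
        if self.halted:
            self._record("prediction_refused", "system halted by human overseer")
            raise RuntimeError("System halted pending human review")
        result = self.model(features)
        self._record("prediction", f"input={features!r} output={result!r}")
        return result


# Illustrative use: an operator halts the system after spotting a problem.
system = OverseenAISystem(model=lambda features: sum(features) > 1.0)
system.predict([0.4, 0.9])
system.kill_switch(operator="j.smith", reason="unexpected output drift")
```

In practice the audit log would need durable storage to meet the 10-year retention requirement, plus an API or remote access so regulators can retrieve it; an in-memory list is used here purely for brevity.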
Timeline
Once the text of the Act has been ratified by the EU Parliament and Council (expected in Q2/Q3 2024), most provisions will apply two years after entry into force. Some aspects, such as the prohibitions, may apply sooner.
Conclusion
We like the way the EU approaches the regulation of technology, and this Act is another example: it bases its approach on core principles and applies regulation in a risk-managed way. This will make future revisions and adjustments straightforward as the technology, its applications and its impacts develop.
All regulation has consequences for innovation and investment, but when balanced against core principles, the benefits outweigh the downsides for the future society we want to live in.
How can IT Security Locksmith help?
Companies should already have a register of the systems and services they use. This register should be expanded to flag systems that use AI. Those systems should then be reviewed to determine their risk classification – prohibited, high-risk or minimal-risk – and next steps agreed as part of a risk treatment plan (a minimal sketch of such a register follows below). Procurement processes should also be updated to classify systems that use AI at the due diligence stage, prior to purchase, contract signing and use.
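As a minimal sketch of what such a register might look like in code – the classes and field names here are hypothetical illustrations, not a prescribed schema – consider:

```python
from dataclasses import dataclass
from enum import Enum


class AIRiskClass(Enum):
    """Risk categories defined by the Act, plus a pending state for triage."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    MINIMAL_RISK = "minimal-risk"
    UNCLASSIFIED = "unclassified"


@dataclass
class SystemRecord:
    """One entry in the company's register of systems and services."""
    name: str
    supplier: str
    uses_ai: bool
    risk_class: AIRiskClass = AIRiskClass.UNCLASSIFIED
    treatment_plan: str = ""


def needs_review(record: SystemRecord) -> bool:
    # Flag AI systems awaiting classification; the classification itself
    # is a human and legal judgement, not something code can decide.
    return record.uses_ai and record.risk_class is AIRiskClass.UNCLASSIFIED


# Illustrative register entries (invented data).
register = [
    SystemRecord("CV screening tool", "ExampleVendor HR", uses_ai=True),
    SystemRecord("Payroll", "ExampleVendor Pay", uses_ai=False),
]

for rec in register:
    if needs_review(rec):
        print(f"Review needed: {rec.name} (supplier: {rec.supplier})")
```

The same classification field can then drive the due diligence step in procurement: a purchase that would introduce a prohibited or still-unclassified AI system is held until it has been reviewed.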
If you need help with classifying use of AI or development of AI risk treatment plans, please email: contact@itsecuritylocksmith.co.uk.
#BannedAI #ProhibitedAI #AIKillSwitch