The Impact of the EU AI Act on the Cyber and Tech Industry

In 2024, the EU introduced the AI Act, a set of rules to ensure AI is used responsibly across Europe. These rules will significantly affect the cyber and tech industry, particularly around regulatory compliance, hiring, and employee training. We’ve collated some important information about the EU AI Act, including its areas of impact on the industry and on individuals, the AI risk levels it defines, and the timeframes of key provisions.

Key areas of impact

Regulation of AI Systems

The Act will regulate AI systems according to the risk they pose, including banning certain practices outright. Companies must ensure their AI systems comply with the new standards and guidelines.

Risk Management

Organisations, especially those in critical infrastructure, will need to conduct AI risk assessments and adhere to cybersecurity standards, increasing the focus on security and ethical considerations in AI development.

Innovation and Compliance

While the Act aims to protect fundamental rights and prevent systemic risks, it also supports innovation. An AI Office will be established to supervise and enforce provisions, ensuring the safe and trustworthy development and use of AI technologies within the EU.

Global Influence

The EU AI Act is expected to set a global standard for AI governance, influencing regulations in other regions and potentially leading to a more uniform approach to AI regulation worldwide.

AI risk levels according to the EU AI Act


The EU AI Act takes a risk-based approach to regulating AI systems, with four levels of risk: unacceptable (prohibited), high, limited, and minimal (or no) risk. Each level is subject to a different degree of regulation and a different set of requirements.

Unacceptable Risk (Prohibited)
  • Subliminal manipulation: Changing a person’s behaviour without their awareness, potentially causing harm.
  • Exploitation of vulnerabilities: Targeting individuals based on social or economic status, age, or physical/mental ability.
  • Biometric categorisation: Categorising individuals based on sensitive characteristics like gender, ethnicity, political orientation, etc.
  • General purpose social scoring: Rating individuals based on personal characteristics, social behaviour, and activities.
  • Real-time remote biometric identification: Using biometric identification systems in public spaces, with narrow exceptions for law enforcement.
  • Assessing emotional state: Inferring individuals’ emotional states in workplaces or educational settings, allowed only for safety purposes.
  • Predictive policing: Assessing the risk of individuals committing future crimes based on personal traits.
  • Scraping facial images: Creating or expanding databases with untargeted scraping of facial images.

High Risk (Must undergo a conformity assessment)
  • Safety components of regulated products: AI systems that are part of already regulated products.
  • Stand-alone AI systems in specific areas: AI systems that could negatively affect health, safety, fundamental rights, or the environment.

Limited Risk (Must adhere to transparency obligations)
  • AI systems with a risk of manipulation or deceit: Systems that must be transparent, such as chatbots and deepfakes.

Minimal Risk (No obligations)
  • All other AI systems: Systems like spam filters that do not fall under the above categories, with no restrictions or mandatory obligations.
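As a rough illustration only (not a legal or compliance tool), the four risk tiers above can be thought of as a simple lookup from tier to headline obligation. The tier names and summaries follow the list above; the function and its wording are our own sketch:

```python
# Illustrative sketch: the EU AI Act's four risk tiers, as summarised
# above, mapped to their headline obligations. Not legal advice.
RISK_TIERS = {
    "unacceptable": "Prohibited outright (e.g. social scoring, subliminal manipulation)",
    "high": "Permitted after a conformity assessment (e.g. safety components of regulated products)",
    "limited": "Permitted subject to transparency obligations (e.g. chatbots, deepfakes)",
    "minimal": "No mandatory obligations (e.g. spam filters)",
}

def headline_obligation(tier: str) -> str:
    """Return the headline obligation for a given risk tier."""
    try:
        return RISK_TIERS[tier.lower()]
    except KeyError:
        raise ValueError(f"Unknown risk tier: {tier!r}") from None

print(headline_obligation("limited"))
# → Permitted subject to transparency obligations (e.g. chatbots, deepfakes)
```

In practice, of course, deciding which tier a given system falls into is the hard part and depends on the Act's detailed definitions, not a keyword lookup.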

Key provisions of the EU AI Act

Prohibited AI Practices

Starting 2 February 2025, certain AI practices deemed harmful or unethical will be banned outright. This includes AI systems that manipulate human behaviour to the detriment of users or exploit vulnerabilities of specific groups, as mentioned above.

AI Literacy Obligations

From 2 February 2025, providers and deployers of AI systems must ensure appropriate AI literacy for their staff. This includes regular training and the implementation of policies to ensure that employees understand and can responsibly use AI technologies.

General-Purpose AI (GPAI) Models

Rules for providers, mainly developers, regarding GPAI models will apply starting 2 August 2025. These rules aim to ensure that GPAI models are developed and deployed in a manner that aligns with EU values and standards.

Penalties

Penalties for non-compliance with the EU AI Act will apply from 2 August 2025. These penalties are designed to enforce adherence to the regulations and ensure that AI systems are used responsibly.

Impact on the cyber and tech industry

Increased Demand for AI Specialists

There will be a growing demand for AI specialists, including data scientists, machine learning engineers, and AI ethicists. Companies will need to invest in talent acquisition and training to stay competitive.

Contractors

The demand for contractors with specialised AI skills is expected to rise. Companies may turn to contractors for short-term projects or to fill skill gaps quickly. This flexibility can help organisations adapt to the new regulations without the long-term commitment of hiring full-time employees.

Focus on Ethical AI

With new regulations emphasising ethical AI usage, there will be a need for professionals who can navigate the complex landscape of AI ethics and compliance. This includes roles such as AI compliance officers and ethical AI consultants.

Cyber Security Challenges

The integration of AI into various sectors will bring new cybersecurity challenges. Companies will need to hire cybersecurity experts who can address AI-specific threats and vulnerabilities. This includes roles such as AI security analysts and AI-driven threat intelligence specialists.

Multidisciplinary Skills

The new regulations will require professionals with multidisciplinary skills. For example, AI developers will need to understand copyright laws, and legal professionals will need to grasp AI technologies. This will lead to the creation of hybrid roles that combine technical and legal expertise.

Upskilling and Reskilling

To meet the demands of the AI-driven economy, there will be a significant emphasis on upskilling and reskilling the existing workforce. Companies will need to invest in continuous learning programs to ensure their employees stay updated with the latest AI advancements.

ISO/IEC 42001 Certification

Although the EU AI Act does not mandate it, ISO/IEC 42001 certification offers companies a structured route to meeting the Act’s stringent requirements. This international standard provides a framework for AI Management Systems (AIMS) and helps organisations ensure the responsible development and use of AI systems. By aligning with ISO/IEC 42001, companies can manage the lifecycle of AI systems effectively and demonstrate adherence to regulatory requirements.

Impact on individuals

Enhanced Safety and Trust

The Act introduces a risk-based approach to AI regulation, categorising AI systems based on their potential impact on society and individuals. This means that high-risk AI systems, such as those used in healthcare or law enforcement, will be subject to stricter regulations to ensure they are safe and trustworthy.

Protection of Fundamental Rights

The Act aims to safeguard fundamental rights by prohibiting AI practices deemed to pose unacceptable risks, such as social scoring and manipulative AI. This helps protect individuals from potential abuses of AI technology.

Transparency and Accountability

AI systems will be required to be more transparent: individuals will have the right to know when they are interacting with an AI system and to understand how decisions affecting them are made. This promotes accountability and helps build trust in AI technologies.

Data Privacy

The Act complements existing data protection regulations, such as the General Data Protection Regulation (GDPR), ensuring that AI systems handle personal data responsibly and securely.

AI Literacy

The Act includes provisions to promote AI literacy among the public, helping individuals understand AI technologies and their implications.

As a boutique cyber security and technology recruitment company, InfoSec People is dedicated to connecting top talent with cutting-edge opportunities in the cyber and tech industry. With the EU AI Act reshaping the landscape, now is the perfect time to advance your career. If you’re looking to be part of it, get in contact with us and find out about the different opportunities we have in this space.

InfoSec People is a boutique cyber security and IT recruitment consultancy, built by genuine experts. We were founded with one goal in mind: to inspire people to find the careers that inspire them. With the success of companies fundamentally driven by the quality of their people, acquiring and retaining talent has never been more important. We believe that recruitment, executed effectively, elevates and enables your business to prosper.

We also understand that cyber and information security recruitment can genuinely change people’s lives, which is why we take our duty of care to those we represent very seriously. All our actions are underpinned by our core values:

  • Always do the right thing
  • Be the best we can be
  • Add value

We work with businesses in the cyber/tech arena, from start-ups and scale-ups to FTSE100 and central Government, many of whom are always looking for great people.

Call us directly on 01242 507100 to discuss opportunities or email info@infosecpeople.co.uk.

www.infosecpeople.co.uk