EU AI Act
Mikis complies with the EU AI Act – transparent, auditable, and fully controllable.
Whether in public administration, research, or industry: with Mikis, you rely on an AI platform that accounts for risk levels, implements regulatory obligations clearly, and treats data protection as a top priority.
✓ AI Act-compliant architecture
✓ Integrated transparency and labelling features
✓ Built-in risk assessment and documentation
Designed for organizations that don’t just use AI – but shape it responsibly.
Disclosure, Labelling, and Information Obligations (Article 50, EU AI Act)
Summary
Transparency Obligations under the EU AI Act for Limited-Risk AI Systems
The EU AI Act imposes transparency requirements for certain AI applications, especially when users interact with or consume content generated by AI. The goal: ensuring that people understand when they are engaging with AI or receiving synthetic content.
Key Requirements
Human–AI Interaction
In systems like chatbots, users must be clearly informed that they are communicating with AI.
Exception: When it is obvious (e.g., Siri, Alexa).
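In practice, this disclosure can be wired directly into the chat backend. The following minimal sketch (the function, session structure, and message text are illustrative assumptions, not part of Mikis or mandated by the Act) prepends a one-time notice to a session's first reply:

```python
# Hypothetical sketch: prepend an AI disclosure to the first reply of a chat
# session, since users must be told they are communicating with AI.

AI_DISCLOSURE = (
    "Note: You are interacting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def build_reply(session: dict, model_answer: str) -> str:
    """Return the chatbot reply, adding the disclosure once per session."""
    if not session.get("disclosure_shown", False):
        session["disclosure_shown"] = True
        return f"{AI_DISCLOSURE}\n\n{model_answer}"
    return model_answer

# Example: the first reply carries the disclosure, later replies do not.
session = {}
print(build_reply(session, "Hello! How can I help?"))
print(build_reply(session, "Here is the document you asked for."))
```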
Synthetic Content (Audio, Video, Image, Text)
Outputs generated by AI must be marked as such in a machine-readable format (e.g., via watermarks or metadata).
Operators must explicitly disclose deepfakes to users.
Exceptions: Law enforcement or artistic expression.
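A common way to meet the machine-readable marking requirement is to embed provenance metadata in the generated file itself. The sketch below uses the Pillow library and an illustrative metadata key to tag a generated PNG; production systems would more likely adopt an established provenance standard such as C2PA rather than ad-hoc keys:

```python
# Sketch: embed a machine-readable "AI-generated" tag in a PNG's metadata.
# The key names are illustrative choices, not a mandated standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(img: Image.Image, path: str, model: str) -> None:
    meta = PngInfo()
    meta.add_text("ai-generated", "true")    # machine-readable flag
    meta.add_text("generator-model", model)  # provenance hint
    img.save(path, pnginfo=meta)

# Example: label a synthetic image and read the tag back.
synthetic = Image.new("RGB", (64, 64), "gray")  # stand-in for model output
save_with_ai_label(synthetic, "output.png", "example-model-v1")
print(Image.open("output.png").text)            # {'ai-generated': 'true', ...}
```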
AI-Generated Texts on Public Topics
If published, such texts must be labeled as AI-generated – unless they undergo editorial review and oversight.
Emotion Recognition & Biometric Categorization
Individuals must be informed when such technologies are used.
Exception: Legally authorized use in criminal investigations.
Key Definitions
- Synthetic content refers to anything generated by AI, regardless of realism.
- Deepfakes are synthetic outputs realistic enough to be perceived as authentic.
Every deepfake is synthetic, but not every synthetic output is a deepfake.
https://www.rtr.at/rtr/service/ki-servicestelle/ai-act/Transparenzpflichten.de.html
Risk Categories for AI Models (GPAI) under the EU AI Act
Summary
General-Purpose AI Models (GPAI) under the EU AI Act
General-purpose AI models (GPAI) – often referred to as foundation models – are designed to perform a wide range of tasks. Examples include large language models such as GPT, Llama, or Mixtral. These models process extensive volumes of text, images, or audio and frequently serve as the basis for building specialized AI systems.
The EU AI Act makes a clear distinction between:
- AI Models (GPAI) – standalone models with broad capabilities
- AI Systems – models combined with additional components such as user interfaces
Only GPAI models fall under the specific model-level obligations outlined in Articles 53 ff. of the AI Act.
GPAI vs. Generative AI
- GPAI refers to versatile models capable of solving diverse tasks
- Generative AI is a subcategory of GPAI, focused on content generation (e.g., text, images)
Systemic GPAI Models
A GPAI model is classified as systemically risky if it:
- Demonstrates high-impact capabilities, typically indicated by extremely high training requirements (e.g., computational effort exceeding 10²⁵ FLOPs)
- Is designated as high-risk by the European Commission or a scientific advisory panel
These models may pose significant risks to health, safety, fundamental rights, or the functioning of the EU internal market.
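As a back-of-the-envelope check against the 10²⁵ FLOPs threshold, training compute is often estimated with the heuristic FLOPs ≈ 6 × parameters × training tokens. The figures in the sketch below are illustrative assumptions, not published model data:

```python
# Rough training-compute estimate: FLOPs ≈ 6 * parameters * training tokens.
# All figures below are illustrative assumptions, not real model statistics.
SYSTEMIC_RISK_THRESHOLD = 1e25  # AI Act presumption of high-impact capability

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

example = training_flops(params=70e9, tokens=15e12)  # 70B params, 15T tokens
print(f"Estimated compute: {example:.2e} FLOPs")     # ~6.30e+24
print("Presumed systemic risk:", example > SYSTEMIC_RISK_THRESHOLD)  # False
```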
Criteria for Systemic GPAI Risk Assessment
Assessment may include:
- Number of model parameters
- Size and nature of training datasets
- Training effort (FLOPs, time, energy consumption)
- Modalities (e.g., text, image, multimodal capabilities)
- Benchmark performance and task generalization
- User base (e.g., over 10,000 professional users in the EU)
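To keep such an assessment auditable, these criteria can be captured in a structured record. The dataclass below is a hypothetical internal documentation format, not a form prescribed by the AI Act:

```python
# Hypothetical record for documenting a GPAI risk assessment internally.
# Field names mirror the criteria above; none of this is a prescribed format.
from dataclasses import dataclass, field

@dataclass
class GPAIRiskAssessment:
    model_name: str
    parameters: float            # number of model parameters
    dataset_tokens: float        # size of training data (tokens)
    training_flops: float        # estimated training compute
    modalities: list[str] = field(default_factory=list)
    benchmark_notes: str = ""
    eu_professional_users: int = 0

    def presumed_systemic(self) -> bool:
        """Apply the 10^25 FLOPs presumption; other criteria need human review."""
        return self.training_flops >= 1e25

record = GPAIRiskAssessment(
    model_name="example-model-v1", parameters=70e9,
    dataset_tokens=15e12, training_flops=6.3e24,
    modalities=["text"], eu_professional_users=12_000,
)
print(record.presumed_systemic())  # False: review remaining criteria manually
```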
Obligations for Providers
- All GPAI models must meet documentation and transparency obligations (Article 53)
- Systemic GPAI models are subject to additional strict requirements for risk assessment and mitigation (Article 55)
The EU provides templates and guidance documents to support providers in fulfilling these obligations.
https://www.rtr.at/rtr/service/ki-servicestelle/ai-act/risikostufen_ki-systeme.de.html
AI Skills under Article 4 of the EU AI Act
Summary
AI Competence Obligations under the EU AI Act (Effective February 2, 2025)
The first provisions of the EU AI Act apply from February 2, 2025 – including the AI competence requirement outlined in Article 4.
This obligation applies to all providers and deployers of AI systems, regardless of system type or risk category.
What is AI Competence?
According to Article 3(56) of the AI Act, AI competence refers to:
- The skills, knowledge, and understanding required to use AI systems responsibly
- The ability to assess opportunities and risks
- The capacity to recognize potential harms
This applies to providers, operators, and affected users, tailored to their respective roles and responsibilities.
Key Requirements – Article 4 AI Act
Organizations must ensure that internal staff and external stakeholders who use or operate AI systems have:
- Adequate AI competence, based on:
▸ technical knowledge
▸ experience
▸ training
▸ application context
The Act does not mandate specific training formats – companies may use internal training, e-learning, or external programs.
Examples of Relevant Competence Areas
- Basic understanding of AI and how it works (e.g., bias, hallucinations)
- Awareness of opportunities and risks
- Use of internal AI tools and systems
- Legal foundations (data protection, copyright, AI Act)
- Ethical and safety-related considerations
- Digital literacy (e.g., according to the DigComp framework)
Target Groups
- AI developers
- Operators of AI systems
- All employees using AI
- External service providers with AI involvement
Sanctions and Legal Risks
The AI Act itself does not impose direct penalties for non-compliance with Article 4.
However, insufficient training may be considered a breach of the duty of care and give rise to liability (e.g., under § 1313a of the Austrian Civil Code, ABGB).
Implementation in Organizations
1. Initial Assessment
- Identify current AI systems in use
- Include standard software and updates
2. Define AI Strategy
- Clarify how AI fits with data protection, IT security, ESG, etc.
- Define decision-making processes and responsibilities
- Ensure AI competence across the organization
Recommended: Internal AI policies, clear roles, workflows, and communication structures
Operational Implementation
- Tailor content to different roles (e.g., managers, developers, end users)
- Use flexible training formats (workshops, e-learning, peer exchange)
- Continuously evaluate training and test new systems
- Foster interdisciplinary collaboration (IT, legal, security, HR, etc.)
Documentation and Proof of Compliance
To demonstrate compliance with Article 4:
- Document your AI strategy and internal guidelines
- Record training activities: type, content, provider, date, frequency
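One lightweight way to keep such records is an append-only log. The sketch below uses a hypothetical CSV layout covering the fields named above; it is one possible format, not a requirement of the Act:

```python
# Sketch: append AI-training activities to a CSV log as Article 4 evidence.
# The file name and column set are illustrative assumptions.
import csv
import os
from datetime import date

FIELDS = ["date", "type", "content", "provider", "frequency", "participants"]

def log_training(path: str, **entry: str) -> None:
    """Append one training record; write the header when creating the file."""
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(entry)

log_training(
    "ai_training_log.csv",
    date=date.today().isoformat(), type="e-learning",
    content="Prompting basics, bias and hallucinations",
    provider="internal", frequency="annual", participants="support team",
)
```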
Link to Digital Competence
AI competence builds upon and is closely linked to digital literacy, forming part of a broader skill set required for the responsible use of AI technologies.
Contact & Demo
Would you like to test Mikis as a data-secure AI platform in your organization?
Get in touch – we’ll show you concrete use cases and security architectures.
Contact
Expert Solutions Strahlhofer
Efficient Online Solutions for Your Business
Web Design // Online Shops // Artificial Intelligence
Parkring 7
2333 Leopoldsdorf near Vienna
Austria
Would you like to know how Mikis can be applied in your field?
Contact us for a personalized demo – we’ll show you real-world applications tailored to your organization.
Whether you're looking for a knowledge base, an intelligent assistant, or a specialized research tool – we offer personal consultation and develop the right solution together with you.
We look forward to hearing from you!
