The AI Act for Software Projects: What SMEs Should Watch Now for AI Features
The AI Act is not a ban but a classification. Most SME applications are low risk — the real work is transparency, oversight and documentation.

Around the EU AI Act there is mostly one feeling in SMEs: worry. Worry that every AI feature is now a major compliance project. For most SME applications, that worry is unfounded.
The AI Act is not a ban but a classification. Classify your application correctly, and you will usually find manageable obligations instead of a major project.
The basic principle: risk, not technology
The AI Act does not regulate "AI" wholesale; it regulates use cases by risk: prohibited practices, high risk, limited risk (transparency obligations), minimal risk. The same technology can fall into completely different classes depending on how it is used.
For typical SME AI — internal assistants, document automation, drafting aids with human approval — the classification is usually "limited" or "minimal". That means transparency and diligence, not an approval procedure.
Three obligations that almost always apply
1. Transparency
When people interact with or via AI, or when content is AI-generated, that must be recognizable as such. This is not a heavy technical lift, but it must be planned early instead of bolted on later.
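One way to make that recognizable is to label AI output at the API boundary, so both the frontend and the user can see its origin. A minimal sketch; the function and field names here are illustrative, not from any framework or from the AI Act itself:

```python
# Sketch: label AI-generated content where it leaves the backend.
# `build_reply` and the field names are hypothetical examples.

AI_DISCLOSURE = "This reply was generated by an AI assistant."

def build_reply(generated_text: str) -> dict:
    """Wrap model output so its AI origin is machine- and human-readable."""
    return {
        "content": generated_text,
        "ai_generated": True,          # machine-readable flag for the UI
        "disclosure": AI_DISCLOSURE,   # human-readable notice to display
    }

reply = build_reply("Here is a draft of your offer letter.")
print(reply["ai_generated"])  # True
```

Carrying the flag in the data model, rather than only in UI copy, makes the disclosure hard to lose when new clients or channels are added later.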
2. Human oversight
AI that influences anything relevant needs a human who can intervene. The same human-in-the-loop discipline that distinguishes good AI systems anyway is here also a regulatory requirement.
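In code, "a human who can intervene" often comes down to an approval gate: AI output is a draft until a named person releases it. A minimal sketch under that assumption; all names are illustrative:

```python
# Sketch of a human-in-the-loop gate: AI drafts take effect only
# after explicit human approval. Names are hypothetical examples.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

def approve(draft: Draft, reviewer: str) -> Draft:
    """Record who released the draft, for accountability."""
    draft.approved = True
    draft.reviewer = reviewer
    return draft

def send(draft: Draft) -> str:
    """Refuse to act on an unapproved AI draft."""
    if not draft.approved:
        raise PermissionError("AI draft requires human approval before sending")
    return f"sent (approved by {draft.reviewer})"

d = Draft("AI-generated customer email")
# send(d) here would raise PermissionError: nobody has approved yet
approve(d, reviewer="jane.doe")
print(send(d))  # sent (approved by jane.doe)
```

Making the unapproved path an exception, rather than a silent default, means oversight cannot be skipped by accident.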
3. Documentation
What does the system do, with which data, within which limits, and who is responsible? This documentation is not shelfware; it is the same clarity a good project needs anyway.
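Those four questions map naturally onto a structured record per AI feature, which doubles as the input for the risk classification below. A minimal sketch; the field names are illustrative, not prescribed by the AI Act:

```python
# Sketch: the documentation obligation as one record per AI feature.
# Field names are hypothetical; adapt them to your own templates.

from dataclasses import dataclass, asdict

@dataclass
class AIFeatureRecord:
    name: str
    purpose: str            # what does the system do?
    data_sources: list      # with which data?
    known_limits: str       # within which limits?
    risk_class: str         # "minimal" | "limited" | "high" | "prohibited"
    owner: str              # who is responsible?

record = AIFeatureRecord(
    name="offer-letter drafting",
    purpose="Draft customer offers for human review",
    data_sources=["CRM customer data", "product catalogue"],
    known_limits="Drafts only; no automatic sending",
    risk_class="limited",
    owner="head of sales",
)
print(asdict(record)["risk_class"])  # limited
```

Kept in version control next to the code, such records stay current with the feature instead of drifting in a separate document.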
Take the timeline seriously
The AI Act's obligations apply in stages: the rules for AI models, transparency and high-risk systems take effect at different points in time. The European Commission's official timeline is the authoritative source, and its details evolve. Therefore: document your classification, watch the dates, and treat an article like this as a living document.
What SMEs concretely do now
Don't wait and don't panic; classify: which AI features do we have, which risk class do they fall into, and which of the three obligations apply? This classification belongs in every project with AI, ideally together with the data-protection assessment you need anyway (see GDPR-Compliant AI Applications) and a look at general AI maturity (see AI Readiness Check).
Checklist before the AI feature
- Have we mapped every AI feature to a risk class?
- Is transparency toward users technically planned?
- Is there effective human oversight with the ability to intervene?
- Does documentation of purpose, data, limits, responsibility exist?
- Do we watch the AI Act timeline and update the classification?
- Is the classification connected with the GDPR assessment?
- Is it clear who is internally responsible?
Frequently asked questions
Is every AI feature now high risk? No. High risk is narrowly defined. Most SME applications fall into limited or minimal risk with manageable obligations.
Do we have to wait for final guidelines? No. Classification, transparency, oversight and documentation can be done today. Waiting creates more risk than acting.
Does the AI Act also apply to bought AI? Yes, depending on use. Whoever deploys AI carries responsibility for the use case, not just the model provider.
Is this a competitive disadvantage? On the contrary: documented, supervised, transparent AI is a trust argument in the German B2B market.
Conclusion
For most SMEs the AI Act is not a major project but a classification task: determine the risk class, ensure transparency and oversight, document, watch the dates. Exactly the discipline good AI needs anyway — now also as a regulatory expectation.
Further reading
- Planning GDPR-Compliant AI Applications — data protection and the AI Act belong together.
- AI Readiness Check: 12 Questions Before You Start — classification as part of AI maturity.
Next step
You're planning AI features and want to classify the AI Act correctly? Start with a short assessment of your requirements. We determine risk class and obligations — pragmatically, not as a major project.
Sources
- European Commission, AI Act — digital-strategy.ec.europa.eu
- EU AI Act Service Desk, Timeline for the Implementation of the AI Act — ai-act-service-desk.ec.europa.eu
- NIST, AI Risk Management Framework — nist.gov