AI Regulation – Leveraging Artificial Intelligence (AI)

1. General Information

2. Use of AI Systems at SnapCube

3. AI-Generated Content & Transparency

4. Classification as Limited Risk AI System

5. Technical and Organizational Measures

6. Ethical Principles in AI Use

7. Internal Compliance and Training

8. Amendments to this AI Regulation


1. General Information

At SnapNext GmbH & Co. KG, operator of the SnapCube® brand, we prioritize responsible and transparent use of Artificial Intelligence. Through this AI Regulation, we inform you about how we use AI systems on our website and in our products, as well as the measures we take to comply with the European AI Regulation (EU AI Act).


This AI Regulation applies to:

  • our website www.snapcube.de, as well as

  • all SnapCube® products and services that utilize AI technologies (e.g., the SnapCube® AI filter in our photo booths and WebApps).


Below, you’ll learn which content is AI-generated, how we ensure transparency, and why our offering is classified as a “limited risk AI system” according to the EU AI Act. Additionally, we describe the technical precautions and ethical principles we adhere to when using AI, as well as our internal processes to ensure compliance.


2. Use of AI Systems at SnapCube

SnapCube® uses AI technology to provide you with creative photo and video experiences. Specifically, we use modern AI models, such as diffusion models like Stable Diffusion, to generate new image or video content from uploaded photos and your inputs (e.g., text prompts describing the desired image or a selected theme style). Using AI, your photo can be transformed into a fantastical scenario or enhanced with special artistic effects.
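For illustration only, the following minimal Python sketch shows how an image-to-image diffusion pipeline of the kind described above (here Stable Diffusion via the Hugging Face diffusers library) can turn an uploaded photo and a text prompt into a stylized result. The model name, prompt, and parameters are hypothetical placeholders and do not describe SnapCube's production setup.

  # Illustrative sketch only: image-to-image generation with a Stable Diffusion
  # pipeline (Hugging Face "diffusers"). Model, prompt, and parameters are
  # hypothetical placeholders, not SnapCube's actual production configuration.
  import torch
  from diffusers import StableDiffusionImg2ImgPipeline
  from PIL import Image

  pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
      "runwayml/stable-diffusion-v1-5",   # example model identifier
      torch_dtype=torch.float16,
  ).to("cuda")

  guest_photo = Image.open("guest_photo.jpg").convert("RGB").resize((512, 512))

  result = pipe(
      prompt="portrait in a retro-futuristic space station, cinematic lighting",
      image=guest_photo,     # the uploaded booth photo as the starting point
      strength=0.6,          # how far the output may deviate from the original photo
      guidance_scale=7.5,    # how closely the output follows the text prompt
  ).images[0]

  result.save("ai_filtered_photo.png")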


The use of these AI functions is always initiated by you: you decide whether a photo should be edited with an AI filter. The underlying AI system analyzes the provided image purely technically, without collecting any additional personal information. No personal data is processed beyond the uploaded photo. In particular, we do not identify depicted persons or systematically evaluate sensitive characteristics from the photo.


The AI-generated results (images and possibly videos) are made available to you or authorized individuals—such as event participants—via a QR code or download link. We only use the captured photos to perform the desired AI generation and provide the result. There is no further use of your images, such as for training our models. After processing is complete, we store recordings only as long as necessary (you can find details in our privacy policy).


3. AI-Generated Content & Transparency

Transparency is crucial to us: We clearly indicate when content has been generated by AI and when you are interacting with an AI. You are expressly informed at the appropriate point that an AI system is in use, for example directly in the WebApp or on the display of our photo booth when using the AI filter. This way, you always know that the resulting content was created with the help of AI and not handcrafted by a human.


All images and videos generated by our AI are also technically marked. We embed metadata (following the IPTC/XMP standard) in every AI-generated file to record, in machine-readable form, that the content was artificially generated or manipulated. This ensures that the AI origin of the imagery remains traceable even in future use. Where technically feasible, we additionally apply visible cues or watermarks as identification, without affecting the user experience.
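As a hedged illustration of such machine-readable marking: the IPTC photo metadata standard defines a "Digital Source Type" value for media created by a trained algorithm, which can be written into an image's XMP block, for example with the ExifTool utility. The sketch below assumes ExifTool is installed; the file name is a placeholder, and this is not necessarily the exact toolchain used by SnapCube.

  # Minimal sketch: embed the IPTC/XMP "Digital Source Type" marker that flags
  # AI-generated content. Assumes the ExifTool command-line utility is installed;
  # the file name is a placeholder, not the actual production toolchain.
  import subprocess

  AI_SOURCE_TYPE = "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"

  def mark_as_ai_generated(path: str) -> None:
      """Write the IPTC Digital Source Type for algorithmic media into the XMP block."""
      subprocess.run(
          [
              "exiftool",
              f"-XMP-iptcExt:DigitalSourceType={AI_SOURCE_TYPE}",
              "-overwrite_original",
              path,
          ],
          check=True,
      )

  mark_as_ai_generated("ai_filtered_photo.png")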


These measures ensure that AI content is always recognizable as such. They fulfill the transparency obligations of the EU AI Regulation: Users must know when they are dealing with an AI or consuming AI-generated content. Our system is also designed without any misleading or manipulative functions. It clearly serves the purpose of creating creatively edited images without deceiving you.


4. Classification as Limited Risk AI System

The AI systems used at SnapCube® are classified as “limited risk AI systems” under the EU AI Regulation. This means our offering is neither prohibited nor does it fall into the high-risk categories. We use AI exclusively for creative photo and video applications in entertainment and marketing contexts, not for safety-critical decisions, scoring, monitoring, or other purposes that pose a high risk to user rights. Consequently, we are mainly subject to certain transparency requirements, but not to the strict certification or reporting obligations that apply to high-risk AI.


As a provider of a limited risk AI system, we fulfill our duties in particular by ensuring transparency (see section 3) and strengthening the AI competence of our employees (see section 7). We have internally reviewed and documented that our use cases fall into this risk class. Should the legal framework change or our system acquire a higher risk profile, we will promptly take measures to ensure full compliance. Currently, our AI functions are permissible under the AI Regulation, provided we adhere to the prescribed diligence and transparency obligations, which we ensure through the measures described here.



5. Technical and Organizational Measures

In conjunction with our AI systems, we implement various technical and organizational measures to ensure safe, reliable, and data-protection-compliant operations:


  • Content Filters and Usage Limits: Our AI models are equipped with filter mechanisms to prevent the generation of inappropriate or dangerous content (e.g., glorification of violence or pornographic depictions). We define clear usage policies for the AI filters so that they are used solely within the intended framework of creative, theme-specific image generation.

  • Quality Assurance: Before we release a new AI function, it undergoes extensive testing. We check the generated results for quality, accuracy, and any unwanted effects (such as distortions or biases). During ongoing operations, we also monitor the AI’s performance and make adjustments as needed to promptly correct errors or deviations.

  • Data Security: The processing of images by AI takes place on secure, controlled servers in Germany. We protect the transmitted data through encryption and restrict access to authorized individuals. Your photos are not forwarded to external AI services or third parties but are processed within our own infrastructure. This ensures a high level of data protection and control over data flows.

  • Data Minimization: We only collect the data necessary for the AI service (typically just the photo and any prompt texts or selection criteria you enter). These data are used solely for image/video generation, in accordance with the purpose limitation principle. Storage occurs only as long as necessary; for example, uploaded recordings are automatically deleted after a defined period (details can be found in our privacy policy). A minimal sketch of such an automated cleanup follows this list.

  • Robustness and Reliability: Our developers ensure that the AI systems function stably and reliably. We keep the AI models we use up to date with the latest technology and install necessary updates or improvements to guarantee safety and stability. When new insights into potential risks or vulnerabilities emerge, we respond promptly with appropriate countermeasures to ensure the safe use of AI at all times.

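To make the data minimization point above concrete, the following minimal sketch shows how uploaded recordings could be purged automatically after a defined retention period. The directory path and the 48-hour retention period are hypothetical examples, not SnapCube's actual configuration.

  # Illustrative sketch: delete uploaded recordings once a retention period expires.
  # The upload directory and 48-hour retention period are hypothetical examples,
  # not the actual production configuration; a job like this would run on a schedule.
  import time
  from pathlib import Path

  UPLOAD_DIR = Path("/var/snapcube/uploads")   # hypothetical upload directory
  RETENTION_SECONDS = 48 * 60 * 60             # hypothetical 48-hour retention period

  def purge_expired_uploads() -> None:
      cutoff = time.time() - RETENTION_SECONDS
      for item in UPLOAD_DIR.glob("*"):
          if item.is_file() and item.stat().st_mtime < cutoff:
              item.unlink()                    # remove recordings older than the cutoff

  if __name__ == "__main__":
      purge_expired_uploads()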

6. Ethical Principles in AI Use

For us, a trustworthy and human-centered approach to AI is paramount. We adhere to the following ethical principles:


  • Fairness and Non-Discrimination: Our AI is designed to work equally well for all users. We ensure that no one is disadvantaged or stereotyped in the generated content due to characteristics such as skin color, gender, or origin. If we become aware of bias or unfairness in the results, we actively address it.

  • Transparency: Openness about AI use is central (see section 3 above). We communicate clearly and understandably where and how AI is used. There are no hidden AI functions. As a user, you always know when a result is AI-generated and not from a human.

  • Safety and Protection: The safety of users is a priority. We ensure our AI applications pose no physical or psychological risks. Through content controls and testing, we prevent the AI from generating potentially harmful or offensive content. We also ensure that our AI offerings are used in a controlled environment (e.g., at events, always under the supervision of our team).

  • Data Protection: We respect your privacy. Personal data is only used to the extent necessary for the AI service (see data minimization above) and never used for other purposes without legal basis or your consent. We comply with all relevant data protection principles (especially GDPR) such as purpose limitation, data security, and deletion periods. Your data belongs to you—it will not be used elsewhere without your permission.

  • Accountability: We take responsibility for the use of our AI. SnapNext has defined internal responsibilities to monitor compliance with all AI use regulations and principles (see section 7). If you have questions or issues related to our AI, we are available and seek transparent solutions. Moreover, we subject our AI systems to continuous review to maintain high ethical standards.


7. Internal Compliance and Training

Behind the scenes, we make sure that our use of AI meets both our own standards and legal requirements:


  • Training and Competence: Our employees are regularly trained in the use of AI systems. We promote AI competence within the team so that all involved understand how our models work, the legal requirements (e.g., EU AI Act, GDPR), and our ethical guidelines. New employees and partners are also introduced to our AI policies and processes.

  • Internal Guidelines & Processes: We have established clear internal guidelines for the development and operation of AI functions. This includes an internal AI policy or code of conduct that everyone must adhere to. Compliance with these guidelines is monitored by management and our data protection/compliance team.

  • Documentation (Procedure Index): We maintain an internal index of all AI applications and the associated processing activities. This documents the purpose of each AI system, how it works, what data it processes, and what protective measures are implemented. This procedure index helps us keep track at all times and remain accountable to supervisory authorities when necessary.

  • Review and Further Development: We regularly review our AI systems and their use. If new risks or potential improvements are identified, we adjust our processes and technical measures accordingly. We also stay informed about developments in AI law and technology so that we can react early. Our AI Regulation itself is reviewed at defined intervals and updated as necessary.

  • Contact Persons and Reporting: We have designated contacts for questions relating to AI use (including our data protection officer, available at datenschutz@snapnext.de). Users and customers can contact us at any time for clarifications or to provide feedback. If there are incidents or complaints regarding our AI systems, established procedures ensure they are addressed internally and—if required—reported to the appropriate authorities.


8. Amendments to this AI Regulation

We reserve the right to adjust this AI Regulation as needed to conform to new legal requirements or technological developments. Should changes become necessary, for example due to the further implementation of the EU AI Act or modifications to our offerings, we will update this Regulation accordingly. The current version of this AI Regulation will be published on our website.


As of May 21, 2025
