Photoroom’s approach to responsible AI

Photoroom’s mission is to put the power of great images into everyone’s hands. Our technology helps hundreds of millions of individuals, small businesses, and global brands create standout visuals—fast. With this reach comes responsibility.

That’s why we invest in responsible AI. We believe AI should be built and deployed in ways that are safe, inclusive, and sustainable. This document outlines how we approach data practices, safety controls, privacy, representation, environmental impact, and social contribution across our AI development process.

We’re committed to doing the right thing as we build responsible AI—while acknowledging this is a continuously evolving field.

1. Scope of this document

This document applies to AI models that are fully developed by Photoroom—from data sourcing to model training and deployment. It outlines our standards, safeguards, and commitments for these models.

Our approach is user-centric: we aim to provide the best possible experience and value. That’s why we focus on developing in-house AI models for our most strategic use cases, where performance, control, and accountability matter most.

For non-core features, we may integrate or build on top of external third-party AI models when it allows us to deliver faster, better, or more innovative results. In those cases, we still apply our user experience standards, but the training and governance of those external models aren't under our control and may not fully reflect the practices described in this document.

2. Using data responsibly

As a younger company in the AI field, we don’t train on data collected over decades. We primarily rely on publicly available information to train our models.

Photoroom trains its models using carefully selected data sources:

  • Publicly available datasets, including industry-standard machine learning collections. We exclude sources known to violate our policies or that have opted out of AI use.

  • Proprietary data from partners, including industry-leading image providers and professional photographers, giving us access to non-public datasets under agreed terms.

  • Computer-generated synthetic data, which helps diversify training examples in a privacy-safe way.

  • User-generated content, only when users have provided their consent. App users, including free users, can opt out at any time by adjusting their settings. We do not store content from API users.

3. Handling copyright and user content responsibly

We acknowledge that copyright and the handling of user content are complex and evolving areas in AI development. Our goal is to align with emerging legal frameworks and responsible data practices.

We license proprietary data from both individual creators and large-scale image providers, giving us access to high-quality, non-public datasets. This differentiated content contributes to the creative direction and quality of our models.

When collecting data from the public web, we comply with the robots.txt mechanism, the most common safeguard against unauthorized use of online content. While robots.txt is not a universal mechanism, we view it as an important signal of content owners’ preferences—and one step in a broader effort toward responsible data sourcing.
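As an illustration of what honoring robots.txt involves, a crawler can parse a site's rules and check each URL before fetching. This is a minimal sketch using Python's standard library; the "ExampleBot" user agent and the sample rules are hypothetical, not Photoroom's actual crawler configuration.

```python
from urllib.robotparser import RobotFileParser

def is_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Check whether robots.txt rules permit a crawler to fetch a URL."""
    rp = RobotFileParser()
    rp.parse(robots_txt.splitlines())  # parse the rules text directly
    return rp.can_fetch(user_agent, url)

# Hypothetical robots.txt: all crawlers are barred from /private/.
rules = """\
User-agent: *
Disallow: /private/
"""
```

In practice a crawler would fetch each site's live `/robots.txt` (e.g. via `RobotFileParser.set_url` and `read()`) rather than parse a static string, and would re-check the rules periodically since site owners can change them.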

We continually monitor legal developments to ensure responsible practices.

4. Protecting privacy and data integrity

We do not train on customer content unless permission is given. We do not store content from API users.

Photoroom’s models are built to learn general patterns—not to memorize or “regurgitate” content. Like other generative systems, they may sometimes reflect patterns or common aesthetics found in publicly available datasets. In rare cases, a model may inadvertently produce an image that closely resembles existing content. This is not intentional and reflects a failure in the machine learning process—not a design goal.

These occurrences are more likely when similar visuals appear repeatedly across different sources in training data. To reduce this risk, we apply techniques such as data deduplication, anonymized captions, and performance testing to avoid overfitting. We continue to improve these safeguards through ongoing research and development.
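To give a sense of what deduplication means in this context, the simplest form drops byte-identical items by content hash. This is an illustrative sketch only; production pipelines typically also use perceptual or near-duplicate hashing, which the source does not detail.

```python
import hashlib

def deduplicate(items: list[bytes]) -> list[bytes]:
    """Remove exact duplicates from a dataset, keeping first occurrences.

    Illustrative: real training pipelines also detect *near* duplicates
    (e.g. resized or re-encoded copies of the same image), which a
    content hash alone cannot catch.
    """
    seen: set[str] = set()
    unique: list[bytes] = []
    for blob in items:
        digest = hashlib.sha256(blob).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(blob)
    return unique
```

Removing repeated examples like this reduces the chance that a model sees the same visual many times and overfits to it.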

5. Avoiding harmful content in training and outputs

We take a proactive approach to reducing harm from training data selection to prompt handling.

We do not use content that is violent, sexually explicit, abusive, illegal, or otherwise prohibited in our training data. We select training sources that avoid known harmful domains or materials, and we filter our datasets using internal or third-party classifier models.

In addition, we prevent unsafe model behavior by blocking prompts related to violence, pornography, illegal activity, and other restricted topics. When such a prompt is flagged, the model will not generate a response.
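The shape of such a prompt gate can be sketched as a check that runs before generation. This is a deliberately simplified illustration: real moderation systems use trained classifiers, not keyword lists, and the blocked terms below are placeholders, not Photoroom's actual policy list.

```python
# Hypothetical restricted terms -- a stand-in for a trained safety classifier.
BLOCKED_TERMS = {"violence", "weapon"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any restricted term.

    When this check fails, the model simply does not generate a response.
    """
    tokens = set(prompt.lower().split())
    return tokens.isdisjoint(BLOCKED_TERMS)
```

The key design point is that the gate sits in front of the model: a flagged prompt is rejected outright rather than generated and filtered afterward.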

These combined safeguards help minimize the risk of harmful or inappropriate outputs. If such issues do arise, users can report them directly through the app or their enterprise contact. We actively monitor and review flagged outputs for harmful or biased behavior and update systems accordingly.

6. Building for representation and diversity

We aim to build a product that can be used and loved by everyone, everywhere. That starts with inclusive training data—and with a team intentionally built to understand and represent the needs of a global audience.

Our training data includes diverse human representations across cultures, ethnicities, age groups, and body types. We also collaborate with professional photographers and designers globally to reflect a range of cultural aesthetics, visual norms, and expectations.

Internally, we’ve assembled a diverse team that brings global perspectives to how our models are designed and evaluated. Our goal is to ensure Photoroom delivers visually relevant, respectful, and accessible outputs for all users.

7. Scaling AI sustainably

We build efficient AI that scales responsibly. When processing millions of images each day, even small improvements matter. We design and optimize our models to minimize energy consumption and environmental impact. Our model training runs on infrastructure powered entirely by low-carbon energy. We continuously improve the efficiency of our inference systems and publicly share our technical optimizations with the AI developer community so others can lower their emissions as well.

We are one of the first signatories of the international Coalition for Sustainable AI and collaborate with peers to promote better measurement and accountability. We publicly share our CO₂ emission calculation methodology and report on the estimated carbon footprint of our models.

8. Enabling social impact through AI

We believe AI should be useful—not only to businesses and creators, but also to the broader communities we serve. That’s why we make Photoroom accessible for free to millions of users around the world, helping them create professional-quality images without prior training or professional equipment.

By lowering the barrier to great visuals, we empower individuals to grow their businesses, market their products, and reach financial independence. We support major nonprofit and mission-driven organizations that use visuals to promote circularity, and deliver social and economic impact.

As part of our commitment to openness and accessibility, we share selected tools and components through open-source repositories, allowing others to learn from and build on our work.

9. Handling reported issues

If users encounter problematic content, they can report it directly in the app or contact their sales representative (for enterprise customers). These reports help us identify issues and improve our models. We monitor flagged outputs and use that feedback to reduce the risk of recurrence.

10. Managing risk

We regularly test our models, review flagged content, and incorporate user feedback to maintain high standards of safety, privacy, and performance. That’s how we believe AI should be built: safer, more accessible, and more useful over time.

We’ll continue learning, adapting, and sharing as we build AI that reflects our values and serves our users responsibly.

Lyline Lim, Head of Impact at Photoroom