How Photoroom delivered a +34% improvement for ads
The internet is loud and busy. Every brand is competing for the short attention span of potential customers online, and it’s only going to get more crowded over time. According to Statista, ad spending will grow by 7.6% in 2024, with digital ad spend in the US alone hitting $306.94 billion, up from $270.24 billion this year.
Your ads need to stand out from every other ad your audience is scrolling past. One way to create scroll-stoppers is with seamless, professional images that are polished and contextual. The good news? You don’t have to be a photographer or a design expert to create good ad photography; you can do it in no time with Photoroom’s AI-powered photo editor.
Read on to learn the details of the test, the results, the lessons we learned, and how you can easily run your own multivariate test with Photoroom images to measure the impact they can have on your own ads.
Setting up the test
In a previous Facebook ad test, we explored the impact Photoroom images can have on ad performance by polishing the backgrounds of product pictures. Today, we’re sharing a follow-up with the results of a recent A/B/C test we conducted on Facebook ads. This time, we looked at placing objects in their contextual environment, specifically furniture.
Our new objective: Identify the winning ad variant that could improve click-through rates (CTR) and reduce costs per click (CPC). For this experiment, we generated three image variants in seconds to represent our A, B, and C test groups. They looked like this:
Variant A showcased the original image with a white background.
Variant B featured an image enhanced only with our Instant Shadows tool.
Variant C presented a generative AI background that placed the product in a contextual setting, specifically a stylish living room.
The ad copy, audience targeting, and landing page remained consistent across all three variants, ensuring that the only variable under scrutiny was the image itself.
Running the test
The A/B/C test spanned two weeks, allowing us to gather substantial data and draw reliable conclusions. Throughout this period, we closely monitored the performance metrics of each ad variant to determine which one resonated most effectively with our target audience.
How the variants performed
The findings were both illuminating and affirming.
Ad variant C, featuring the product in a contextual living room setting, emerged as the clear winner.
It boasted the highest CTR: +37% better than A and +67% better than B.
It delivered the lowest CPC: a +34% improvement over A and a +52% improvement over B. Both results were statistically significant at a 95% confidence level.
Lessons from the test
The short version: the test confirmed our hypothesis and showed why it pays to challenge assumptions about how audiences interact with ads and what they expect from the quality of the photography.
It confirmed our hypothesis: Potential customers prefer to see products in a contextual setting—particularly furniture. The ability to visualize how an item complements their living space proved to be a key factor in driving engagement and conversions. By allowing prospects to assess how a purchase fits into their own surroundings, we create a more compelling and personalized shopping experience.
The power of high-quality imagery can’t be overstated: The results of this A/B/C test reinforce the importance of using seamless, professional-level imagery in your ads. By presenting your products in a visually appealing and contextually relevant way, you can significantly improve engagement and conversion rates.
The value of data-driven decision-making: Conducting A/B/C tests allows you to make data-driven decisions based on real performance metrics. By testing different variants and closely monitoring the results, you can learn what your audience actually responds to and adjust your strategy accordingly.
How to run your own multivariate test
Start by defining your objective
Clearly outline the specific metrics you want to improve, like click-through rates (CTR) or conversion rates. This will help you measure the success of your test accurately.
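As a reference point for this first step, both metrics can be computed from raw ad stats in a few lines. This is a minimal sketch; the sample numbers are illustrative, not figures from the test above.

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: share of impressions that led to a click."""
    return clicks / impressions

def cpc(spend: float, clicks: int) -> float:
    """Cost per click: total spend divided by clicks received."""
    return spend / clicks

# Illustrative numbers only (not from the test above)
variant = {"impressions": 20_000, "clicks": 300, "spend": 450.0}
print(f"CTR: {ctr(variant['clicks'], variant['impressions']):.2%}")  # 1.50%
print(f"CPC: ${cpc(variant['spend'], variant['clicks']):.2f}")       # $1.50
```

Tracking these two numbers per variant is all you need to compare the A, B, and C groups later.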
Create a variety of image variants
Create three different image variants to represent your A, B, and C test groups. Ensure that each variant has a distinct visual element, like different backgrounds or shadows.
Keep the other variables consistent
In order to isolate the impact of the images, make sure that other elements of your ads—like ad copy, audience targeting, and landing pages—remain consistent across all variants.
Run the test for a significant amount of time
Launch your A/B/C test long enough to gather substantial data. Monitor the performance metrics of each variant closely, including CTR, CPC, conversion rates, and any other relevant metrics.
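One way to decide how long "long enough" is: estimate up front how many impressions each variant needs before a given CTR lift becomes detectable. The sketch below uses the standard two-proportion sample-size approximation; the baseline CTR and target lift are illustrative assumptions, not figures from our test.

```python
import math

def sample_size_per_variant(p_base: float, lift: float) -> int:
    """Approximate impressions needed per variant to detect a relative
    CTR lift with a two-sided two-proportion z-test
    (alpha = 0.05, power = 0.80)."""
    p_new = p_base * (1 + lift)
    z_alpha = 1.96  # two-sided, alpha = 0.05
    z_beta = 0.84   # power = 0.80
    variance = p_base * (1 - p_base) + p_new * (1 - p_new)
    n = (z_alpha + z_beta) ** 2 * variance / (p_base - p_new) ** 2
    return math.ceil(n)

# Illustrative: baseline 1.5% CTR, hoping to detect a 30% relative lift
print(sample_size_per_variant(0.015, 0.30))  # 13123 impressions per variant
```

Note the trade-off: the smaller the lift you want to detect, the more impressions (and budget) each variant needs before you can call a winner.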
Analyze and draw conclusions
Once you have collected enough data, analyze the results to identify the winning variant. Look for statistically significant differences in performance metrics and consider the overall impact on your objective.
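For this analysis step, statistical significance of a CTR difference can be checked with a two-proportion z-test using only the standard library. A minimal sketch with made-up numbers (not the figures from our test):

```python
import math

def two_proportion_z_test(clicks_a: int, n_a: int,
                          clicks_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two CTRs."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)  # pooled click rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative numbers only: variant B's CTR beats variant A's
p = two_proportion_z_test(clicks_a=300, n_a=20_000, clicks_b=410, n_b=20_000)
print(f"p-value: {p:.1e}")  # significant at the 95% level if p < 0.05
```

If the p-value falls below 0.05, the CTR difference between the two variants is significant at the 95% confidence level mentioned above.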
Partner with us on your next campaign
Stay tuned for more tests where we research ad performance across different types of product categories. We're inviting partners from various categories like furniture, cosmetics, health, fashion, and retail to share images that we can incorporate into our next round of ad tests. In return, we'll cover the ad expenses.
These ongoing tests aim to prove that the quality of an ad's visual elements drives traffic and translates into meaningful conversions. As we continue to explore and refine our approach, we look forward to sharing more insights that can empower your advertising strategies.
Ready to partner with us to optimize your ad assets? Contact us.
Check out our Photo editor API offering.