
How Sainsbury’s Nectar360 and DEPT® scale retail media with AI agents

Marjan Straathof
Global SVP of Marketing
35 min read
January 12, 2026

Retail media is the buzzword of the decade, but behind the scenes, it’s an operational minefield.

At MAD//Fest 2025, Nathan Coppens (VP Growth, DEPT®) and Alice Anson (Director of Digital Retail Media, Nectar360) sat down to reveal how they solved the “Quality Control Crisis” for Sainsbury’s.

Transcript

Nathan: [Applause] Can everybody hear me well? Thumbs up. Great. A warm welcome to you all. My name is Nathan; I’m the VP of Growth for DEPT®, the agency just across the hallway here, responsible for our commercial side, our AI, and our practice. Welcome this morning. It’s a bit rainy, but I think we’re going to be fine. Let me introduce Alice.

Alice: So, I’m Alice Anson. I’m the Director of Digital Retail Media at Nectar360. My responsibility is looking after our proposition, tech, sales, and operations for all of our digital channels; that’s anything that uses the internet to be enabled, which is things like in-store screens, our websites, and then offsite. For any of you who don’t know what retail media is, you’ve probably been living under a rock for the last three years, because it is the buzzword of the moment. Basically, retail media is about connecting with shoppers across their shopper mission and utilizing the first-party data that we build from our amazing loyalty program to create first-party audiences that connect shoppers to their favorite brands.

Nathan: So maybe let’s jump into it directly. The first thing I think we want to touch upon is the thousands of assets that your team manages. They’re all sent in to you by the brands themselves and by the agencies that help the brands create these assets, and it’s a wide range: statics, PDFs, videos. What was breaking in the process that made you start thinking about an AI implementation?

Alice: When we think about retail media, it’s all about putting the creative in front of the customer, and as you said, that is thousands of assets on an annual basis. We work with over 900 different clients and agencies over the course of the year. And working with brands as prestigious as Sainsbury’s and Argos, we have a lot of brand guidelines that we need to adhere to, to make sure that our brand is represented in the right way, and also so that the brands we work with are represented in the right way. When you send an asset in to us, there are over 130 different things that we check for. Currently we check for those very, very manually, and that means human error is a real thing, and there are moments when there can be conflicts. So what we’ve done recently is launch a platform called Nectar360 Pollen. It’s a unified retail media platform that allows our brands and agencies to book, optimize, run, and measure retail media campaigns in one place. As part of that, we were trying to solve the question of how we create things more efficiently. How do we remove the weeks of work, the back and forth, that go into that? Obviously this brand guidelines issue came to the front of mind, and we were thinking about how to do it. That’s when we came to DEPT®.

Nathan: And did you ever consider outsourcing the work, to cheaper regions for instance, or was automation always the plan?

Alice: We do outsource some of our capabilities; we work with Accenture and have an outsourced center with them. But when it comes to brand guidelines, it’s really hard work, right? You’re looking for exactly the right shade of purple and orange. You’re looking for exactly the number of pixels something should measure. That’s really hard to tell with the naked eye, and it also takes a lot of training and education of teams. So it’s extremely complicated, and when we came and spoke to you guys, that was something you reflected back: it’s a really complicated piece of work. As we started talking it through, we came to the conclusion that it’s not a single-model AI problem, but multiple agents working together. So maybe, for the benefit of the audience, you can explain how you got there and the complexity you found when you looked under the hood.

Nathan: I think the complex thing is that it is a multi-agent setup, which means we’re not throwing it into one ChatGPT-style LLM and getting an answer back. There are multiple steps throughout the process, and they all need to be super accurate; the number we’re looking for is 95%+ accuracy, which has to hold across all the steps. It’s object detection, it’s extraction of the data, then it’s checking brand compliance, and every step needs to be the best it can be. For that we use all kinds of different models: we have YOLO, for example, for the detection; we have OCR; we have OpenAI’s LLMs. For every step that needs to happen in the process, we have a different model that is the best for what we need to do at that stage. That makes it a multi-agent approach, because multiple things need to happen in order, and for each of those steps we need a specialized model to deliver the accuracy we’re looking for. Now, that leads to the question: do you just buy something in the market, connect multiple products, and there’s your workflow? Or do you build it in house, fully in code, and implement it into the Pollen platform? Was there a consideration there?

Alice: There is a consideration, and I’ll be honest, there’s no single answer. At Nectar360 we’ve got a legacy of creating amazing proprietary technology, and the way we do that is by focusing on the things we’re good at. We are great at customer experience; we’re great at client service. That’s really what we hold ourselves to. And then we work with amazing partners across the industry to bring in that expertise, so bringing that blend together is super important. You’ll remember as well that it was a very tight turnaround in terms of our timelines, and I think that really speaks to the pace at which AI is growing and changing. Every single week there is something new coming out, new technology, and looking internally at ourselves, we knew we weren’t the best people to do that job; it would simply take too much time to upskill the teams. That’s why we took the hybrid approach we’re working with. So obviously we did work with you guys, and it’s been an amazing journey. Maybe it would be great to break down the stack. What did you do? What is it that we’ve actually built together?

Nathan: I’ll give the presentation a press and see if it clicks to the next slide to show what it’s doing. Oh, there we go. So this is an asset, as you see, and this shows all the agents working together to do all the checks that need to happen. What you see are all the sizing checks, logo checks, and color checks appearing here, which is what is actually happening in the back end of the Pollen system. What’s important in breaking this down is to start with what the project was like. It was a tight turnaround, but we still needed to start with the basics: what is the set of checks we want to do? What technology are we going to use for those checks? Is there technology out there we should buy, yes or no? And I think we quite quickly saw how difficult it was in terms of setup and the checks that need to be done. We’re talking about the 130 checks, some of which we’re covering now in the first implementation. We’re talking about safe zones; we’re talking about print; we’re talking about CMYK versus RGB; we’re talking about on-platform, off-platform, and in the stores. All of those need to be checked for compliance. There’s also a distinction to make between brand checks, so checks against Sainsbury’s brand guidelines, and legal checks: do we comply with all the legal requirements, are the terms and conditions there, are they readable, yes or no? All those things need to be done and checked, and to do all of that and also hit the accuracy levels we wanted, we were quite sure it wasn’t doable by just buying a product off the market, implementing it, and making it work. Another important thing is the difference between the use of machine learning and LLMs. We looked at whether we could use an LLM for all of this, or different LLMs for all of this, which actually wasn’t possible, because some things weren’t achievable within the setups that existed. The funny thing is that we’ve now been working together for, I think, five months. When we started, with Gemini for example, some things weren’t possible, so we built them as a machine learning solution, and today they work. So you can already see the speed of AI and its progression hitting us in what we’re doing; it really shows what the speed of AI is at the moment.

Alice: And maybe just on that one: we had quite an in-depth conversation about the color orange. Do you want to bring that to life? Because I think that’s a great story.

Nathan: So the crazy thing there is that we’re also doing the color check on the logos, for example. There’s a gradient within the Sainsbury’s logo, and we need to check whether the logo’s color is correct and whether the gradient is correct. If it’s not, we want to give back what the color should be and flag that it currently isn’t correct. We also want to recognize whether it’s a CMYK or an RGB color setup, for print or for digital, which is a massive difference and quite hard for a person to spot with the naked eye. We check within the platform whether that’s correct, which was quite a difficult step to implement. The other thing you’re seeing here is that some things can’t be on an asset at the same time. If you look at the top one and the bottom one, there’s a Nectar price, as we call it, and a Nectar logo; those can’t be there at the same time, which is why we implemented a rules engine. So it’s not only extracting data and checking whether the T&Cs are correct, yes or no, but also applying a rules engine with if-this-then-that statements, to make sure that when things appear on the asset at the same time, that combination is allowed, and to feed that back to the user. And maybe a final thing to note, before we dive into your team and how they started using it, is the platform itself that we’ve implemented this into. Checking the asset is one thing, of course, but bringing the result back to the client or the agency that is using it is a second thing. What we want to be able to do is show the asset that has been provided, show what is wrong with it, and also contextually explain what is wrong. So not just “the image is incorrect” or “the color is incorrect”, but what is incorrect about it: what should it be, where should the position be, where should the size or the positioning be different? That’s what we’re implementing in the platform today, so that the feedback is contextual and itself driven by AI, rather than static error messages you can’t really understand and don’t know what to do with. So, maybe going back to the team and how they work with it: what was the adoption like for the team starting to work with AI in this way?

Alice: Yeah, so I think AI is a journey for everyone. If you read the newspaper, it’s going to be the downfall of human society, right? We’re never going to talk to anyone ever again. So it’s very much about going through that journey. When we think about Nectar360, our view on AI is very much that it will give us more efficiency, which means we can spend more time leaning into what we do as humans: creating amazing connections with other people, and having more amazing conversations with our brands and our clients to really understand and get under the skin of their business, rather than doing all the stuff which, quite frankly, doesn’t add job satisfaction for anyone, right? So the team have been really excited about this. To put it into context: the brand-approval process today can take one to three weeks to actually confirm an asset, which is insane. With this, it takes a maximum of around 90 seconds. When you work out the amount of time being saved and what that means for our teams, it’s phenomenal. And as humans we’re naturally skeptical; I’m definitely on the skeptical side. So one of the things we really worked hard on was making sure it had the right level of accuracy, and we hit a 90% accuracy mark, which I think is phenomenal. So how did we get to that stage? A lot of it is the asset labeling, and don’t worry if all the acronyms Nathan is using blow your mind; they completely blow my mind too. How did you go through that process of asset labeling? Because that’s still quite manual, right? And when you think of AI, you think everything’s going to be automated.

Nathan: I think this was also one of the challenging parts. If you have the technology, you need to make it work. But then there’s the data labeling, to make the system understand what is good and what is not good. When you go back over all the years in which assets have been checked, we mostly ended up with correct assets, because that’s what we were aiming for: to create correct assets and put them live. But what’s really important for the system is to have incorrect assets, to make it understand what is not correct. And of course those assets were lacking. But we needed them to train the system. So what we did was take a GenAI approach, using a product called Lightspeed, which we mostly use within DEPT® to generate lots of different asset sizes for all the media platforms out there; we misused it, I would say, to create all kinds of wrong assets. We changed the position of the logo slightly, or made it bigger; we changed the color. We created thousands of different assets and labeled them with GenAI, so we created and labeled them automatically, which made the training a lot easier and far less manual. There’s still a manual part, because we need to show the system “this is incorrect, this is incorrect”, but GenAI and the Lightspeed product really helped us create all the different assets needed to train what is incorrect and make the system understand. So that means we now have, I think, 91% accuracy.

Alice: Gone up 1% just in a few weeks. Love it.

Nathan: We’re just pushing the numbers a bit. And indeed, it’s around 90 seconds to do all the checks we’re running at the moment. The cost is also quite interesting, because we mix both LLMs and machine learning: the cost for a single asset, for all the checks we’re doing, is around 1 cent, which is crazy low, I would say, especially given that the cost of LLM requests keeps going down. The combination of OCR and LLMs really shows the cost can be dropped, and I think the business case for you is going to be massive.

Alice: It’s a really strong business case for us, and when we start to think about how we drive this forward and what’s next, everything we do in the Pollen program has been about engaging with our clients and our agencies, understanding what their pain points are, and continuing to solve for that. We’ve already got LLMs in the platform; we’ve got GenAI in the platform. We’re thinking about the role of agentic AI and how we start to tie everything together, but everything has to come back to the problem statement and what we need to solve. So, when we think about the future, what’s your vision for this tool? Could it extend beyond compliance? Is there more cool stuff we can do around brand safety, real-time optimization?

Nathan: Definitely. One of the things we would like to improve is claims. If a claim is being made within the asset, we can recognize the claim through the agent we’ve built. And whenever you make a claim on the platform, you need to deliver documentation to show that the claim is true and correct. At the moment we only check whether the document is there, yes or no; we don’t look at the document itself and whether it really supports the claim being made. Also, there isn’t a conversational layer yet where you can talk to the system to understand the error messaging. For example, if it says the color is incorrect and gives you color codes, or says the positioning is incorrect, and from an agency or client perspective you don’t understand it, you might want a conversation about why it’s wrong and what’s wrong about it. With all the trained data and all the agents in place, it’s quite easy to make that conversation happen and actually talk to the system, instead of ending up again in a loop of people asking questions and waiting for emails back, landing us at the same lead time we had before. So I think that extra conversational layer would be brilliant to add to the system.

Alice: That’s great, and I think that’s the point, right? It’s about how we continue to solve the problems. Who knows what’s going to come out of AI over the next few months and years; as I said, almost every day there’s something new. So for us it’s definitely about how we make this easier to understand. I think it’s a great starting point, and I don’t think I’ve seen anything like this in the retail space, but exactly to your point: how do we make it clearer? How do we maybe start to solve some of this stuff as well? If your logo is in the wrong place or the wrong size, can we fix that for you? And how do we continue to push the dial forward in making sure that at Nectar360 we’re the easiest retail media network to work with?

Nathan: Yeah. Maybe one extra point, because Pollen is a lot bigger than this one part. Can you elaborate a bit on everything else that has been presented lately?

Alice: So Pollen, as I said, is a unified retail media platform: every single channel in one. It will allow you to go in there and, through an LLM, build a media plan. You’ll still be able to do it the old-fashioned way; don’t worry, our teams are not going anywhere. It will also allow you to build out your audience set, whether that’s targeting store locations or specific customers, and again, you can do that the way we do it today, or with the LLM built in. Then, once you flow through the asset functionality, everything obviously goes through this tool and is pushed out through the relevant APIs into all the different channels, and then all the data comes back into the platform, which is phenomenal, right? To be able to get that measurement. And we’ve got some cool stuff coming, such as iROAS and MTA. The really important thing to note about the platform is that there’s a lot of cool stuff in there, but we definitely see it as complementary to our people. We don’t see our people going anywhere. This is more about how we, as I said, give people more time to work on the things that actually drive the business forward, as opposed to wasting time on boring stuff.

Nathan: Do we already want to share an ETA with the audience?

Alice: Late 2025 is all I’m allowed to say. But I’ll be on the DEPT® stand after this, so if anyone’s got questions on this or anything else, come and ask me. And I can be bribed.

Nathan: Great.

Host: So, before you get up, we’ve actually had a massive flurry of questions, which is super exciting to see, and we’ve got a little bit of time, so I think we can have a leisurely romp through some of these. Some are in similar spaces, so I’m going to stitch a few of them together. First up, looking at that 91% accuracy claim, which is quite impressive for early days: firstly, is there a layer of human review on assets on top of that, given you’re dealing with 91%? And related, a question from Nick, so thank you, Nick: once an incorrect asset has been identified, who or what is responsible for correcting it? How are you then replacing and implementing those changes?

Alice: Yes, so absolutely there is a human lens on it; there has to be, to make sure we’re protecting the brand and so on. What this allows us to do, though, is really quickly get to the root of the problem. As you can see within the platform, it will tell you what has passed and what hasn’t. That then allows us to have really great conversations with our clients, being super clear on what isn’t right, and then you can re-upload the asset and go back through that loop. We have an amazing campaign media management team for whom this is their bread and butter; this is what they do, getting campaigns out in front of customers. So this is their wheelhouse: having those great conversations in a super transparent way. Because we’ve all been there: you get an asset back, someone says “that’s not correct”, and then you have to figure out why. Now we’re going to tell you.

Nathan: And maybe to add to that, talking about what the future could look like: this is now positioned within the Pollen platform, but you could also think of it as a platform the agency can work with during the creation of the asset.

Alice: Absolutely.

Nathan: Making sure that when they create something, they can explore everything they want to do creatively, upload it directly into the system, check whether it’s correct and doable, work on the feedback, and in that way make sure that, creatively as well, it ends up looking better.

Host: We’ve also got a few questions about the future and where you see this going. One has been about other potential uses. What we’re seeing here are obviously quite brand-led, design-led ads. Do you see any future potential for things like influencer assets or other types of content creation?

Alice: Yeah, absolutely. The way we’ve built it together means it can run on any asset, right? I think you saw the example of it running on video. So there’s absolutely potential for us to run it over creator content, TV ads, all of that. Now that we’ve got the basics and established the framework, I don’t think there’s a limit. Unless, Nathan, you’re going to tell me otherwise and I’m going to have an awkward conversation on stage. But I don’t think there’s a limit to where this could go.

Nathan: No, no. And what would also be relevant is if we eventually connect it to the performance of an asset; then we can give that back as well, right? Then it’s not only about brand compliance or legal compliance, but also about advising on what works best. If it’s a video, maybe put your brand name at the beginning instead of at the end, to make sure we also drive performance, and give that back as an extra service to the clients.

Host: Yeah, amazing stuff. And then, obviously, this was a big undertaking. You’ve mentioned some of the time pressures you were under. Were there any other specific challenges you found as you moved through this process? If anyone here is looking to introduce agentic AI, what advice would you give them? Where would you start? I’m sure “talk to DEPT®” is probably one of the first answers, but any other broad advice for the teams here? Who wants to go first?

Alice: I was going to say: it’s not easy. A couple of seconds of video doesn’t do justice to the amount of work that goes in from a legal, DPO, contracting, product, and engineering standpoint. So I think it’s about being really clear within your business about how you want to utilize AI, being really sure about the different checkpoints you have to go through so that things can get signed off quickly and you can unlock the potential, and then, for me, being really laser-focused on that problem statement. AI is a bit of a catch-all phrase; it can be quite nerve-wracking, it can feel too large, and things can start to grow arms and legs. Scope creep is definitely real in the AI space. So being really clear on what you want to work on is key. That’s probably the practical side of how you do it.

Nathan: Yeah, and maybe from a technical perspective: although there’s always time pressure and you want to get something live as quickly as possible, that initial phase of figuring out what you’re going to do and what tech you’ll use is super important, given the speed of innovation, right? The bounding boxes you see here are something we wanted to build with an LLM, which wasn’t possible then, so we did it with machine learning; by now it’s already doable. So really look at what you’re going to use, what system, what is doable, but also what you expect the LLMs to offer while you’re building the system, and work with that: know what innovations are coming, what the roadmap is. It’s really important to create a solid technical strategy and build from there, so you don’t walk into a trap and have to redo it.

Host: That’s a really good point, so let’s dig a little deeper into it. A question from Simon: how do you actually manage that process with the models changing, re-releasing, and updating constantly? How are you ensuring that something that is working and accurate now isn’t going to break overnight when new updates suddenly land? How do you stay up to date while also retrofitting things?

Nathan: So you can see it as a sort of headless setup, which I think is a term everybody knows. It’s agnostic from an LLM perspective, which means we can quite quickly swap an LLM out and bring a new one into the system if we see it works better. That also goes back to the point that the field is so innovative and so fast-moving; every day there’s something new, and whether it’s influencer content or other things, we want to be able to adjust quickly and bring new tools in. So the whole setup is headless: we can connect it to any system we want, and all the training is done within that headless back end, not inside the individual tools. That back end is also where the gold is: that’s where we build all the logic, and that’s why we can quite easily take something new, put it in, and make sure it’s not going to fall over because proprietary work is locked inside a tool that can’t be replaced. So: no vendor lock-in.

Host: Amazing. And I think we’ve probably got time for one more, and Alice, I think this one is for you. There are a lot of people in the crowd who are potentially partners, and you’ve touched on the fact that you’re already bringing different teams into this process. Are there plans to bring suppliers and brands working with Sainsbury’s into the platform for future changes as well? HFSS updates have been touched upon; they’re on the mind of everybody working in the retail and food space. Will there be plans for that, and do you see that as something you’ll keep growing with the platform?

Alice: Absolutely. As I said, we started this journey with a group of, I think, 12 to 14 different clients and agencies really telling us what their problems are. We do a client survey every year, so we really dived into that data to identify the problem statements before we started. We’ve been talking through our capital requirements as we build this platform, and over a third of the budget is reserved for user input. So we will continue to have those really active conversations with the brands and agencies we work with, because that’s the exciting thing: that’s where the platform grows where it needs to grow. If we hold our USPs close to our heart, which are easy to use, omnichannel at its core, AI smarts, and market-leading measurement, we can’t do the easy-to-use piece if we only look internally at what we want it to do. We really need to be connected with our core users. So absolutely, it will continue to be part of it as it rolls out over the next few months, and I am all for any kind of feedback.

Host: Fantastic. As you can tell from the standing room only for this talk, and the most questions we’ve had of any panel so far, congratulations; everyone has really enjoyed this. We’re running a little behind on the next talk, so we’ve probably got about 5 to 10 minutes before we switch over, but if anyone else wants to come and chat to the team, pop up to the front and I’m sure they’ll be happy to take questions there as well. So, a massive hand. Thank you very much.

The bottleneck

Managing retail media for brands as prestigious as Sainsbury’s and Argos is more than placing ads. It’s about protecting brand integrity. With over 900 clients and agencies sending in thousands of static, video, and PDF assets annually, the Nectar360 team was drowning in manual audits.

“There are over 130 different things we check for in a single asset,” Alice Anson explained. “From the exact shade of Sainsbury’s orange to legal T&Cs, doing this manually meant weeks of back-and-forth and a high risk of human error.”
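
To make the scale of that manual work concrete, here is a toy version of a single check of that kind: a color-tolerance test in Python. The hex value and threshold are illustrative placeholders, not Sainsbury’s actual brand specification.

    # A toy version of one of the 130 checks: is a sampled logo pixel within
    # tolerance of brand orange? Target color and threshold are hypothetical.
    BRAND_ORANGE = (0xF0, 0x6C, 0x00)  # placeholder target RGB
    TOLERANCE = 12                     # max Euclidean distance allowed

    def color_ok(pixel: tuple[int, int, int]) -> bool:
        distance = sum((a - b) ** 2 for a, b in zip(pixel, BRAND_ORANGE)) ** 0.5
        return distance <= TOLERANCE

    print(color_ok((0xEF, 0x6D, 0x02)))  # True: within tolerance
    print(color_ok((0xE0, 0x50, 0x20)))  # False: visibly off-brand

Multiply a check like this by 130 rules and thousands of assets a year, and the case for automation writes itself.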

A multi-agent architecture

The solution isn’t just throwing an asset into ChatGPT. To hit its 95%+ accuracy target, DEPT® built a specialized multi-agent AI system integrated into Nectar360’s unified platform, Pollen.

This headless, LLM-agnostic architecture allows different models to do what they do best, as the sketch after this list illustrates:

  • Object detection: Using YOLO to identify products and logo placements.
  • Text extraction: High-speed OCR to read and validate legal disclaimers.
  • Reasoning: OpenAI and Gemini models acting as the “brain” that applies a complex rules engine.
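
To make the division of labor concrete, here is a minimal, illustrative sketch of that orchestration in Python. The function names, the stubbed outputs, and the two example rules are assumptions for illustration; in the real pipeline each stub would call the corresponding model (YOLO, an OCR engine, an LLM).

    from dataclasses import dataclass

    @dataclass
    class CheckResult:
        check: str    # which check ran, e.g. "nectar_price_vs_logo"
        passed: bool  # pass/fail verdict
        detail: str   # contextual feedback for the agency

    def detect_objects(asset_path: str) -> list[dict]:
        """Object-detection agent (a YOLO model in the real stack).
        Stubbed: returns bounding boxes as {label, x, y, w, h} dicts."""
        return [{"label": "nectar_logo", "x": 40, "y": 30, "w": 120, "h": 60}]

    def extract_text(asset_path: str) -> str:
        """Text-extraction agent (OCR in the real stack). Stubbed."""
        return "Offer ends soon. T&Cs apply."

    def apply_rules(objects: list[dict], text: str) -> list[CheckResult]:
        """Rules-engine step: deterministic if-this-then-that checks,
        e.g. a Nectar price and a Nectar logo may not co-occur."""
        labels = {obj["label"] for obj in objects}
        return [
            CheckResult(
                check="nectar_price_vs_logo",
                passed=not {"nectar_price", "nectar_logo"} <= labels,
                detail="A Nectar price and a Nectar logo cannot appear on the same asset.",
            ),
            CheckResult(
                check="terms_present",
                passed="T&Cs" in text,
                detail="Legal terms and conditions must be present and readable.",
            ),
        ]

    def review_asset(asset_path: str) -> list[CheckResult]:
        """Orchestrator: run the specialized agents in order and aggregate."""
        objects = detect_objects(asset_path)
        text = extract_text(asset_path)
        return apply_rules(objects, text)

    if __name__ == "__main__":
        for result in review_asset("example_banner.png"):
            print(result.check, "PASS" if result.passed else "FAIL", "-", result.detail)

The design point is that each agent stays small and swappable, while the orchestrator owns the ordering and the aggregation of verdicts.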

The secret sauce: Training with “bad” data

One of the biggest hurdles in AI training is that most historical data consists of correct assets. To train an AI to find mistakes, you need examples of errors.

DEPT® utilized its proprietary Lightspeed technology to “misuse” GenAI and generate thousands of intentionally incorrect assets. By slightly shifting logos, changing hex codes, and altering font sizes, the team created a massive, automatically labeled training library that taught the agents exactly what a violation looks like.
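
The Lightspeed tooling itself isn’t public, but the mechanic is straightforward to picture: apply a known perturbation to an approved asset and record that perturbation as the training label. A hedged sketch using Pillow, with hypothetical helper names and labels:

    import random
    from pathlib import Path
    from PIL import Image, ImageEnhance

    # Each perturbation returns (modified image, violation label), so every
    # generated negative is labeled automatically at creation time.

    def shift_logo(img: Image.Image, box: tuple[int, int, int, int]):
        """Paste the logo region a few pixels off its approved position."""
        logo = img.crop(box)
        out = img.copy()
        out.paste(logo, (box[0] + random.randint(5, 25), box[1] + random.randint(5, 25)))
        return out, "logo_position_incorrect"

    def distort_color(img: Image.Image, box):
        """Push saturation so brand colors drift out of tolerance."""
        return ImageEnhance.Color(img).enhance(random.uniform(1.3, 1.8)), "brand_color_incorrect"

    def oversize_logo(img: Image.Image, box):
        """Scale the logo past its allowed size."""
        w, h = box[2] - box[0], box[3] - box[1]
        logo = img.crop(box).resize((w * 2, h * 2))
        out = img.copy()
        out.paste(logo, (box[0], box[1]))
        return out, "logo_size_incorrect"

    def generate_negatives(src: Path, logo_box, n: int, out_dir: Path) -> None:
        """Turn one approved asset into n labeled 'wrong' training examples."""
        out_dir.mkdir(parents=True, exist_ok=True)
        base = Image.open(src).convert("RGB")
        for i in range(n):
            perturb = random.choice([shift_logo, distort_color, oversize_logo])
            img, label = perturb(base, logo_box)
            img.save(out_dir / f"{src.stem}_{label}_{i}.png")

Because the label is known at the moment the perturbation is applied, the expensive part of supervised training, hand-annotation, largely disappears.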

The results

The shift from manual to agentic review has completely rewritten the business case for Nectar360:

  • Turnaround: asset approval that used to take one to three weeks of back-and-forth now takes a maximum of around 90 seconds.
  • Accuracy: 91% across the automated checks, up a percentage point within weeks of launch, with human review retained on top.
  • Cost: roughly 1 cent per asset for the full set of checks, thanks to the blend of machine learning and LLM calls.

What’s next: Conversational compliance & performance

The roadmap for the Pollen platform extends far beyond simple Pass/Fail checks. In the near future, DEPT® and Nectar360 are looking at:

  • Conversational feedback: Instead of a bare error code, agencies will be able to chat with the asset to understand exactly why a color is off (sketched after this list).
  • Performance optimization: Agents that suggest creative tweaks, such as moving a brand name to the start of a video, to drive higher ROAS.
  • Influencer & TV integration: Expanding the framework to audit creator content and TV ads.

AI is giving us efficiency so we can spend more time on what we do best as humans: creating amazing connections and understanding our clients’ businesses.


Alice Anson, Nectar360

FAQs

What is Nectar360 Pollen?

Pollen is Nectar360’s unified retail media platform that allows brands to book, optimize, and measure campaigns across all Sainsbury’s and Argos digital and in-store channels.

How does DEPT® use AI Agents for brand compliance?

DEPT® uses a multi-agent stack (YOLO, OCR, and LLMs) to automate 130+ brand and legal checks, reducing audit times from weeks to under 90 seconds.

Can AI Agents check video assets for retail media?

Yes. The DEPT® solution for Nectar360 is capable of auditing both static images and video content for logo placement, color accuracy, and legal compliance.

What is the benefit of a “headless” AI setup?

A headless setup prevents vendor lock-in, allowing companies to swap out different AI models (like moving from OpenAI to Gemini) as the technology evolves without breaking the entire workflow.
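
In code terms, “headless” here mostly means the pipeline is written against an interface rather than a vendor SDK. A minimal sketch of the idea, with illustrative stub adapters rather than real API calls:

    from typing import Protocol

    class VisionLLM(Protocol):
        """The one contract every provider adapter must satisfy."""
        def review(self, image_bytes: bytes, instruction: str) -> str: ...

    class OpenAIAdapter:
        def review(self, image_bytes: bytes, instruction: str) -> str:
            return "stub: would call OpenAI's API here"

    class GeminiAdapter:
        def review(self, image_bytes: bytes, instruction: str) -> str:
            return "stub: would call Google's Gemini API here"

    def run_compliance_check(model: VisionLLM, asset: bytes) -> str:
        # Pipeline logic depends only on the Protocol, so swapping providers
        # is a one-line change at the call site, not a rewrite.
        return model.review(asset, "Check this asset against the brand guidelines.")

    print(run_compliance_check(GeminiAdapter(), b"...image bytes..."))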
