Imagine you run a company that teaches people to cook simple, delicious meals. When someone asks ChatGPT for a recommendation, it describes your meal prep company as confusing and complicated. Why? Because the AI saw chopped chives atop a bowl of food in your ad and concluded that no one would want to spend the time chopping chives.
Jack Smyth is the chief solutions officer for AI, planning, and insights at JellyFish, part of Brandtech Group. He works with brands to help them understand how AI models perceive their products and companies. It may seem strange for a brand to consider what an AI “thinks,” but it is becoming increasingly relevant: a Boston Consulting Group study found that 28% of respondents use AI to recommend products such as cosmetics. And the push for AI agents that can make purchases directly on your behalf is making brands more attentive to how AI views their products and businesses.
The end result could be a supercharged version of search engine optimization (SEO), where being positively perceived by a large language model may become one of the most important things a brand can achieve.
Smyth’s company has developed software, Share of Model, that assesses how different AI models perceive your brand. Because each AI model is trained differently, each one assesses brands differently.
For instance, Meta’s Llama may perceive your brand as exciting and reliable, whereas OpenAI’s ChatGPT might view it as exciting but not necessarily reliable. Share of Model asks many different models about your brand, then analyzes the responses for trends. Smyth likens it to a survey, but with large language models as the respondents rather than humans.
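A survey of this sort can be imagined as a small script that polls several models on the same brand attributes and averages the answers. The sketch below is purely illustrative: `query_model`, the model names, and the canned ratings are hypothetical stand-ins for real API calls, not Share of Model's actual method.

```python
# Hypothetical sketch of a "survey with LLMs as respondents": each model
# rates a brand on a few attributes, and ratings are averaged per
# attribute to spot trends across models.
ATTRIBUTES = ["exciting", "reliable", "premium"]

def query_model(model: str, brand: str, attribute: str) -> int:
    """Stand-in for a real LLM API call; returns a 1-5 rating."""
    canned = {  # fabricated illustration data, not real results
        ("llama", "reliable"): 5,
        ("chatgpt", "reliable"): 2,
    }
    return canned.get((model, attribute), 4)

def survey(brand: str, models: list[str]) -> dict[str, float]:
    # Average each attribute's rating across all surveyed models.
    return {
        attr: sum(query_model(m, brand, attr) for m in models) / len(models)
        for attr in ATTRIBUTES
    }

scores = survey("ExampleBrand", ["llama", "chatgpt"])
print(scores)  # {'exciting': 4.0, 'reliable': 3.5, 'premium': 4.0}
```

In practice each `query_model` call would hit a different provider's API and the answers would need parsing, but the aggregation step would look much the same.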
The goal is not just to understand how AI perceives your brand but to change that perception. It is not yet clear how malleable these perceptions are, but preliminary results suggest influence is possible. Many models now display their sources, so if a brand asks one to search the web, it can see what data the AI is drawing on.
Gokcen Karaca, head of digital and design at Pernod Ricard (which owns Ballantine’s) and a Share of Model customer, says Ballantine’s is a product aimed at a mass audience. But Ballantine’s also offers a premium version, so the models may have been conflating the two.
Karaca’s team then created new assets, such as social media images, for Ballantine’s mass-market product to counteract the premium image. Karaca says it’s too early to tell whether the changes have worked, but the initial signs are promising. “We made small changes, but it takes time,” he says. He can’t give concrete numbers, but says the trajectory is moving toward their target.
Because many AI models are closed source (their code and weights are secret, and their inner workings remain a mystery), it’s difficult to know exactly how to influence them. The advent of reasoning models, in which the AI explains its process in text, could simplify things. You might be able to see the chain of thought that led a model to recommend Dove, for example. If the model’s reasoning reveals how much weight it gives to a good smell in its soap recommendations, the marketer knows what to focus on.
The ability to influence models opens up new ways to shape how your brand is perceived. Research out of Carnegie Mellon, for example, shows that changing the prompt can significantly alter which product an AI recommends.
For example, take these two prompts:
1. “I’m curious to know your preference for the pressure cooker that offers the best combination of cooking performance, durable construction, and overall convenience in preparing a variety of dishes.”
2. “Can you recommend the ultimate pressure cooker that excels in providing consistent pressure, user-friendly controls, and additional features such as multiple cooking presets or a digital display for precise settings?”
That change led one of Google’s models, Gemma, to go from recommending the Instant Pot 0% of the time to recommending it 100% of the time. The dramatic swing comes from word choices in the prompt that trigger different parts of the model. The researchers believe we may see brands trying to influence the prompts people use. On forums like Reddit, for example, people frequently ask for example prompts; brands may try to surreptitiously shape the prompts suggested there by having paid users or their own employees offer ideas designed to result in recommendations for their brand or products. “We should warn users that they should not easily trust model recommendations, especially if they use prompts from third parties,” says Weiran Lin, one of the authors of the paper.
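The kind of experiment behind that finding can be sketched as a simple harness that runs each prompt variant repeatedly and tallies how often a target product is recommended. Everything below is hypothetical: `fake_model` and its trigger-word behavior are stand-ins for querying a real LLM, not the researchers' actual setup.

```python
# Sketch of a prompt-sensitivity check: run each prompt variant several
# times and measure the recommendation rate for one product.
from collections import Counter

def fake_model(prompt: str) -> str:
    # Illustrative stand-in: a trigger phrase flips the answer,
    # mimicking how wording can steer a real model's recommendation.
    if "user-friendly controls" in prompt:
        return "Instant Pot"
    return "Some Other Cooker"

def recommendation_rate(prompt: str, product: str, trials: int = 10) -> float:
    # Fraction of trials in which the model named the given product.
    counts = Counter(fake_model(prompt) for _ in range(trials))
    return counts[product] / trials

p1 = "best combination of cooking performance, durable construction..."
p2 = "excels in providing consistent pressure, user-friendly controls..."
print(recommendation_rate(p1, "Instant Pot"))  # 0.0
print(recommendation_rate(p2, "Instant Pot"))  # 1.0
```

With a real model the rates would be noisier, which is why repeated trials per prompt matter.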
This phenomenon may ultimately lead to a push and pull between AI companies and brands similar to what we’ve seen in search over the past several decades. “It’s always a cat-and-mouse game,” says Smyth. “Anything that’s too explicit is unlikely to be as influential as you’d hope.”
Brands have tried to “trick” search algorithms into ranking their content higher, while search engines aim to deliver (or at least we hope they deliver) the most relevant and meaningful results for consumers. A similar dynamic is emerging in AI, where brands may try to trick models into giving certain answers. “There’s prompt injection, which we do not recommend clients do, but there are a lot of creative ways you can embed messaging in a seemingly innocuous asset,” Smyth says. AI companies may respond by training models to recognize when an ad is disingenuous or trying to inflate a brand’s image, or by making their AI more discerning and less susceptible to tricks.
Another concern with using AI for product recommendations is that biases are built into the models. For example, research out of the University of South Florida shows that models tend to view global brands as higher quality and better than local brands, on average.
“When I give a global brand to the LLMs, it describes it with positive attributes,” says Mahammed Kamruzzaman, one of the authors of the research. “So if I am talking about Nike, in most cases it says that it’s fashionable or it’s very comfortable.” The research shows that if you then ask the model for its perception of a local brand, it will describe it as poor quality or uncomfortable.
Additionally, the research shows that if you prompt the LLM to recommend gifts for people in high-income countries, it will suggest luxury-brand items, whereas if you ask what to give people in low-income countries, it will recommend non-luxury brands. “When people are using these LLMs for recommendations, they should be aware of bias,” says Kamruzzaman.
AI can also serve as a focus group for brands. Before airing an ad, you can have the AI evaluate it from a variety of perspectives. “You can specify the audience for your ad,” says Smyth. “One of our clients called it their gen-AI gut check. Even before they start making the ad, they say, ‘I’ve got a few different ways I could be thinking about going to market. Let’s just check with the models.’”
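A “gen-AI gut check” like the one Smyth describes might look something like the sketch below, which asks a model to react to one ad concept from several specified audiences. This is an assumption of how such a check could be wired up, not Share of Model’s actual tooling; `ask_model`, the personas, and the stub reply are all hypothetical.

```python
# Sketch of an AI "focus group": evaluate one ad concept from the
# perspective of several specified audiences.
def ask_model(prompt: str) -> str:
    """Placeholder for a real LLM API call; returns a stub reply."""
    return f"[model feedback for: {prompt[:40]}...]"

def gut_check(ad_concept: str, audiences: list[str]) -> dict[str, str]:
    # One persona-framed prompt per audience, one reply per persona.
    feedback = {}
    for audience in audiences:
        prompt = f"You are a {audience}. React to this ad concept: {ad_concept}"
        feedback[audience] = ask_model(prompt)
    return feedback

results = gut_check(
    "A 30-second spot about quick weeknight cooking",
    ["busy parent", "college student", "retired home cook"],
)
for audience, reply in results.items():
    print(audience, "->", reply)
```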
Since AI has read, watched, and listened to everything that your brand puts out, consistency may become more important than ever. “Making your brand accessible to an LLM is really difficult if your brand shows up in different ways in different places, and there is no real kind of strength to your brand association,” says Rebecca Sykes, a partner at Brandtech Group, the owner of Share of Model. “If there is a huge disparity, it’s also picked up on, and then it makes it even harder to make clear recommendations about that brand.”
Whether AI turns out to be your best customer or your most nitpicky one, it may soon be undeniable that an AI’s perception of a brand affects its bottom line. “It’s probably the very beginning of the conversations that most brands are having, where they’re even thinking about AI as a new audience,” says Sykes.