The new shopping assistant is not a chatbot embedded in a retailer's website. It is ChatGPT, Perplexity, or Gemini — and a growing share of consumers now open one of these before they have ever visited a store, clicked an ad, or scrolled a product feed.
A 2024 Salesforce study found that 17% of consumers had already used a generative AI tool to help with a purchase decision. Among 18- to 34-year-olds, that share was considerably higher. More telling: consumers who try AI for shopping tend to keep using it.
Here is what that actually looks like across the three ways consumers use AI to shop today: product research, comparison, and recommendations.
Product research: the first place AI earns trust
For many shoppers, AI has replaced the first three browser tabs.
When someone wants to understand a product category before spending money, an AI assistant offers something a results page cannot: synthesis rather than links. Ask "what should I look for in a standing desk" and you get a structured answer covering height range, motor quality, weight capacity, and surface material — not ten competing listicles with affiliate links in every sentence.
This is where AI shopping behavior typically begins — not at the purchase stage, but at the education stage.
The queries people actually use in this phase:
- "What should I look for in [product] before I buy?"
- "What are the differences between [material A] and [material B] for [use case]?"
- "Is [product type / price tier] actually worth it?"
- "What do people regret not knowing before buying [product]?"
Comparison: where AI shopping saves the most time
Comparison shopping is one of the most time-consuming parts of any purchase. The relevant information is scattered across sites, written in each retailer's own terminology, and optimised to make every product look like the best option.
AI comparison removes most of that friction. The consumer describes what they are deciding between, and the AI structures the trade-offs directly — applied to their specific context.
A real example: "Compare a jute rug, a wool rug, and a polypropylene rug for a living room with two kids and a dog. I want something that will actually hold up, and I don't want to spend more than $300."
The response does not return three product pages. It explains fiber durability and cleanability for each material, notes which sheds or pills, identifies which holds up to pet claws, and arrives at a recommendation with a stated reason.
What makes AI comparison useful:
The pattern is the same: specificity is the input, usefulness is the output. "Best sofa under $1,000" produces generic results. "Best sofa under $1,000 for a small apartment, durable with pets, easy to clean" produces actionable guidance.
Recommendations: AI as the person with good taste
The recommendation use case is where AI shopping feels least like technology and most like talking to someone who actually knows what they are doing.
Search engines reward keywords. AI assistants reward context.
"I'm decorating a small apartment living room. Light grey sofa. I want a coffee table that doesn't feel heavy in the space. Prefer natural materials. Budget around $300." That sentence is too long for a search bar. It is exactly right for an AI assistant.
How to get the most from AI recommendations:
- Include the constraint that matters most. Budget, room size, material preference, the person you are buying for.
- Describe what you want to avoid. "Nothing too trendy" or "avoid anything that needs special care" narrows the field.
- Give the use case, not just the category. "Sneakers for someone who walks five miles a day on city sidewalks" beats "comfortable sneakers."
- Ask for trade-offs explicitly. After any recommendation, ask: "What are the downsides?"
- Use follow-up. AI shopping is iterative. If the first answer is not quite right, say why.
Why the source behind the recommendation matters
AI recommendations are only as good as the information they draw from.
When an AI assistant recommends a product, it pulls from structured, indexed data — product descriptions, editorial content, and catalog information built to be understood by an agent evaluating options on your behalf. The quality of that underlying data determines whether the recommendation is calibrated to your needs or generically plausible.
Structured, specific product data produces better answers. Thin, keyword-optimised descriptions produce answers that sound confident but are not actually matched to your situation.
This is why the marketplace a product is listed on matters — not just to the merchant, but to the quality of the answer you receive as a shopper. Marketplaces that have invested in agent-readable product context produce better recommendations than those that have not.


