• marcwajsberg 32 minutes ago
    The attribution point is huge: the “decision” can happen in the model’s answer, and your analytics only see the last hop.

    A practical mental model for recommendations is less “ranking” and more confidence:

    Does the model have enough context to map your product to a problem? Are there independent mentions (docs, comparisons, forum threads) that look earned vs manufactured? Is there procedural detail that makes it easy to justify recommending you (“here’s the workflow / constraints / outcomes”)?

    For builders, a good AEO baseline is:

    - Publish a strong docs/use-case page that answers “when should I use this vs alternatives?”
    - Seed real-world context by participating in existing discussions (HN/Reddit/etc.) with genuine problem-solving and specifics.
    - Track influence with repeatable prompt tests + lightweight surveys (“how did you hear about us?”), since last-click won’t capture it.
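    A minimal sketch of what a repeatable prompt test could look like: run a fixed set of prompts on a schedule and log how often your brand shows up in the answers. The `ask_model` stub and the brand/prompt names here are purely illustrative; swap the stub for whatever LLM API you actually call.

```python
import re

# Illustrative stub: replace with a real LLM API call in practice.
def ask_model(prompt: str) -> str:
    canned = {
        "best open-source analytics tool?": "Plausible or Matomo are common picks.",
        "lightweight self-hosted analytics?": "Matomo is a frequent recommendation.",
    }
    return canned.get(prompt, "")

def mention_rate(prompts: list[str], brand: str) -> float:
    """Fraction of prompts whose answer mentions the brand (case-insensitive)."""
    pattern = re.compile(re.escape(brand), re.IGNORECASE)
    hits = sum(1 for p in prompts if pattern.search(ask_model(p)))
    return hits / len(prompts) if prompts else 0.0

prompts = [
    "best open-source analytics tool?",
    "lightweight self-hosted analytics?",
]
print(mention_rate(prompts, "Matomo"))  # re-run daily and log the trend
```

    The point is the repeatability: a single run tells you little, but the same prompt set logged over weeks gives you a trend line you can act on.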

    It feels like early SEO again: less perfect instrumentation, more building the clearest and most defensible reference for your category.

  • theorchid 7 hours ago
    However, there is a lack of information when a user opens your website after interacting with AI.

    Google Search Console shows the user's query if the query is popular enough and your website appears in the search results. Bing shows all queries, even unpopular ones, as long as your website appears in the results.

    But if AI recommends your website when answering people's questions, you cannot find out what questions the user discussed, how many times your website was shown, or in what position. You can see the UTM tag in your website analytics (for example, ChatGPT adds a utm_source parameter), but that is the maximum amount of information available to you. And if a user discussed a question with AI, got only your brand name, and then found your site in a search engine, you won't be able to tell that they found you with the help of AI advice.
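    That UTM tag is still worth capturing, since it's the one signal you do get. A rough sketch of bucketing landing URLs by it; the utm_source values listed are assumptions (check what each assistant actually appends), and the "unknown" bucket is exactly where the AI-influenced searches hide:

```python
from urllib.parse import urlparse, parse_qs

# Assumed utm_source values - verify against what each assistant really appends.
AI_SOURCES = {"chatgpt.com", "perplexity", "copilot"}

def classify_visit(landing_url: str) -> str:
    """Bucket a landing URL: direct AI click, tagged campaign, or unknown."""
    qs = parse_qs(urlparse(landing_url).query)
    source = qs.get("utm_source", [""])[0].lower()
    if source in AI_SOURCES:
        return "ai_click"
    if source:
        return "other_campaign"
    return "unknown"  # organic, direct, or AI-influenced search: indistinguishable

print(classify_visit("https://example.com/?utm_source=chatgpt.com"))  # ai_click
print(classify_visit("https://example.com/pricing"))                  # unknown
```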

    [-]
    • nworley 2 hours ago
      This is exactly what set me off on trying to figure out the visibility gap.

      What’s strange is that we’re moving into a world where recommendations matter more than a click, but attribution still assumes a traditional search funnel. By the time someone lands on your site, the most important decision may have already happened upstream and you have no idea.

      The UTM case you mentioned is a good example: it only captures direct "AI to site" clicks, but misses scenarios where AI influences the decision indirectly (brand mention, then a later search, then a visit). From the site’s perspective, though, that path looks indistinguishable from organic search. It makes me wonder whether we’ll need a completely new mental model for attribution here. Perhaps less about “what query drove this visit” and more about “where did trust originate.”

      Not sure what the right solution is yet, but it feels like we’re flying blind during a pretty major shift in how people discover things.

      [-]
      • quiqueqs 44 minutes ago
        This is why most of these AI search visibility tools focus on tracking many possible prompts at once. LLMs give 0 insight into what users are actually asking, so the only thing you can do is put yourself in the user’s shoes and try to guess what they might prompt.

        Disclaimer: I've built a tool in this space (Cartesiano.ai), and this view mostly comes from seeing how noisy product mentions are in practice. Even for market-leading brands, a single prompt can produce different recommendations day to day, which makes me suspect LLMs are also introducing some amount of entropy into product recommendations (?)
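        One way to put a number on that day-to-day churn (a sketch, with made-up run data): treat repeated runs of the same prompt as samples and compute the Shannon entropy of which brand gets recommended. Zero bits means the answer is perfectly stable; higher means noisier.

```python
from collections import Counter
from math import log2

def recommendation_entropy(recommended: list[str]) -> float:
    """Shannon entropy (bits) of which brand repeated runs of a prompt returned."""
    counts = Counter(recommended)
    total = len(recommended)
    return -sum((c / total) * log2(c / total) for c in counts.values())

# Ten runs of the same prompt on different days (illustrative data).
runs = ["BrandA"] * 6 + ["BrandB"] * 3 + ["BrandC"]
print(round(recommendation_entropy(runs), 3))
```

        Tracked over time, a rising entropy for your category's prompts would suggest the model genuinely is getting less decisive, rather than you just catching it on a bad day.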

  • theorchid 7 hours ago
    This is especially important when launching new SaaS projects. Google does not trust new domains for the first 6-12 months. But if you publish information about your project on other sites, the AI will recommend your site in its responses. Just post a few times on Reddit, and in a week, GPT will be giving out links to your SaaS product.

    AI doesn't need exact low-frequency or high-frequency keywords like SEO does. AI is good at understanding user queries and giving out the right SaaS that solves the user's problem. You don't need to create a blog on your website and try to rank it in search engines. It is enough to post articles on other websites with information about your project.
    [-]
    • nworley 2 hours ago
      This matches a lot of what I’ve been seeing too.

      What stood out to me is that AI seems far less concerned with domain age than Google is. If there’s enough contextual discussion around a product (e.g. Reddit threads, blog posts, docs, comparisons), then AI models seem willing to surface it surprisingly early.

      That said, what I’m still trying to understand is consistency. I’ve seen cases where a product gets recommended heavily for a week, then effectively disappears unless that external context keeps getting reinforced.

      So it feels less like “rank once and you’re good” (SEO) and more like “stay present in the conversation.” Almost closer to reputation management than classic content marketing.

      Curious if you’ve seen the same thing, especially around how long external mentions keep influencing AI recommendations before they decay.