• dogline 25 seconds ago
    With lots of people showing how easily SaaS apps can be written these days, I'm less interested in those articles than in people showing off new ideas of what I can do with these new-found abilities. This is cool.
  • ahmedfromtunis 55 minutes ago
    As a (former) reporter, site monitoring was a big part of my daily work, and I used many, many such services.

    I can attest that, at least from the landing page, this seems to be a very good execution of the concept, especially the text-based diffing to easily spot what changed and, most importantly, how.

    The biggest hurdle for such apps, however, is 'JS-based, browser-rendered sites' or whatever they're called nowadays. How does Site Spy handle such abominations?
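    A rough sketch of what text-based diffing like this looks like under the hood (hypothetical snapshots and Python's stdlib difflib; not Site Spy's actual pipeline):

```python
import difflib

# Two hypothetical snapshots of a monitored page region
# (illustrative only, not Site Spy's real output format).
old = "Visa appointments: none available\nOffice hours: 9-5\n"
new = "Visa appointments: 2 slots on Friday\nOffice hours: 9-5\n"

# unified_diff shows not just THAT something changed, but WHAT and HOW.
result = "".join(difflib.unified_diff(
    old.splitlines(keepends=True),
    new.splitlines(keepends=True),
    fromfile="yesterday", tofile="today",
))
print(result)
```

    The removed and added lines make the change obvious at a glance, which is exactly what matters when you're scanning dozens of monitored pages.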

  • xnx 2 hours ago
    I like https://github.com/dgtlmoon/changedetection.io for this. Open source and free to run locally, or use their SaaS service.
    [-]
    • raphman 2 hours ago
      There's also https://github.com/thp/urlwatch/ - (not aware of any SaaS offer - self-hosted it is).
      [-]
      • vkuprin 2 hours ago
        Yep, urlwatch is a good one too. This category clearly has a strong self-hosted tradition. With Site Spy, what I’m trying to make much easier is the browser-first flow: pick the exact part of a page visually, then follow changes through diffs, history, RSS, and alerts with very little setup.
    • vkuprin 2 hours ago
      Yep, changedetection.io is a good project. With Site Spy, I wanted to make the browser-first workflow much easier: install the extension, connect it to the dashboard, click the exact part of the page you care about, and then follow changes as diffs, history, or RSS with very little setup. I can definitely see why the open-source / self-hosted route is appealing too.
      [-]
    • pelcg 2 hours ago
      Looks cool, and it can be self-hosted for free.

      Nice will try this out!

    • beepbooptheory 1 hour ago
      Sure, but this one has an MCP server, costs money, and was presumably made last night!
  • tene80i 1 hour ago
    RSS is a useful interface, but: "Do most people just want direct alerts?" Yes, of course. RSS is beloved but niche. Depends who your target audience is. I personally would want an email, because that's how I get alerts about other things. RSS to me is for long form reading, not notifications I must notice. The answer to any product question like this totally depends on your audience and their normal routines.
    [-]
    • ikari_pl 58 minutes ago
      It's niche because some companies decided so.

      You used to have native RSS support in browsers, and the latest articles showed up automatically in your bookmarks bar.

      [-]
      • ctxc 13 minutes ago
        That's good reasoning, but the parent's point still stands?
  • layman51 12 minutes ago
    How might this tool work in terms of “archiving” a site? This is just something I was wondering given the recent change and controversy about archiving service sites on Wikipedia.
  • iamflimflam1 33 minutes ago
    Something I was planning on building but never got round to - if anyone wants to do it then feel free to use this idea.

    Lots of companies really have no idea what JavaScript is being inserted into their websites - marketing teams add all sorts of crazy scripts that don't get vetted by anyone, are often loaded dynamically, and can be changed without anyone knowing.

    A service that monitors a site and flags up when the code changes - even better if it actually scans and flags malicious code.
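    A minimal sketch of that idea, assuming a static HTML snapshot and using only Python's stdlib (a real service would fetch pages, execute JS, and do proper malware scanning, none of which is shown here):

```python
import hashlib
from html.parser import HTMLParser

class ScriptCollector(HTMLParser):
    """Collect external script URLs and inline script bodies."""
    def __init__(self):
        super().__init__()
        self.srcs = []        # src attributes of external scripts
        self.inline = []      # bodies of inline <script> blocks
        self._in_script = False

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            src = dict(attrs).get("src")
            if src:
                self.srcs.append(src)
            else:
                self._in_script = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_script = False

    def handle_data(self, data):
        if self._in_script:
            self.inline.append(data)

def script_fingerprint(html: str) -> str:
    """Hash the set of scripts on a page, ignoring their order."""
    p = ScriptCollector()
    p.feed(html)
    blob = "\n".join(sorted(p.srcs) + sorted(p.inline))
    return hashlib.sha256(blob.encode()).hexdigest()

# Hypothetical snapshots: a new third-party script appears in the second.
baseline = script_fingerprint('<html><script src="/app.js"></script></html>')
current = script_fingerprint(
    '<html><script src="/app.js"></script>'
    '<script src="https://tracker.example/t.js"></script></html>'
)
print("scripts changed!" if current != baseline else "no change")
```

    Store the baseline fingerprint, re-check on a schedule, and alert when it moves; actually classifying a changed script as malicious is the much harder follow-on problem.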

  • enoint 3 hours ago
    Quick feedback:

    1. RSS is just fine for updates. Given the importance of your visa use-case, were you thinking of push notifications?

    2. Your competition does element-level tracking. Maybe they use XPath?

    [-]
    • vkuprin 2 hours ago
      Yep, Site Spy already has push notifications, plus email and Telegram alerts. I see RSS as the open interface for people who want to plug updates into their own reader or workflow. For urgent things like visa slots or stock availability, direct alerts are definitely the main path.

      And yeah, element-level tracking isn't a brand new idea by itself. The thing I wanted to improve was making it easy to pick the exact part of a page you care about and then inspect the change via diffs, history, or RSS instead of just getting a generic "page changed" notification.
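      One way to sketch element-level tracking, assuming well-formed snippets and ElementTree's limited XPath support (real HTML needs a tolerant parser, and the selector and snapshots here are purely illustrative, not Site Spy's implementation):

```python
import difflib
import xml.etree.ElementTree as ET

# Hypothetical snapshots of the same page on two days.
yesterday = '<div><h1>Store</h1><span id="price">$19</span></div>'
today = '<div><h1>Store</h1><span id="price">$24</span></div>'

def tracked_text(page: str, selector: str) -> str:
    """Extract only the element the user picked, ignoring the rest of the page."""
    el = ET.fromstring(page).find(selector)
    return "".join(el.itertext()) if el is not None else ""

old = tracked_text(yesterday, ".//span[@id='price']")
new = tracked_text(today, ".//span[@id='price']")
if old != new:
    # Diff just the tracked element, not the whole document.
    print("\n".join(difflib.ndiff([old], [new])))
```

      Because only the selected element is compared, edits elsewhere on the page don't generate noise.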

  • dev_at 51 minutes ago
    There's also AnyTracker (an app) that gives you this information as push notifications: https://anytracker.org/
    [-]
    • Knork-and-Fife 27 minutes ago
      And also visualping.io, which sends email alerts.
  • reconnecting 1 hour ago
    I remember there was something called Visualping many years ago, and the real issue was that when a website changed its structure, it broke the comparison.

    Did you solve this?

  • nicbou 36 minutes ago
    Buddy I love you!

    I have wanted this for so long! My job relies on following many German laws, bureaucracy pages and the like.

    In the long run I want specific changes on external pages to trigger pull requests in my code (e.g. to update a tax threshold). This requires building blocks that don't exist, and that I can't find time to code and maintain myself.

    I currently use Wachete, but for over a year now it has been triggering rate limits on a specific website, and I just can't monitor German laws anymore. No tool seems to have a debounce feature, even though I only need to check for updates once a month.

  • bananaflag 2 hours ago
    Very good!

    This is something that existed in the past and I used successfully, but services like this tend to disappear.

    [-]
    • vkuprin 2 hours ago
      That’s a completely fair concern. Services in this category do need to earn trust over time. I built the backend to handle a fair amount of traffic, so I’m not too worried about growth on that side. My goal is definitely to keep this running for the long term, not treat it like a one-off project.
  • hinkley 1 hour ago
    Back in 2000 I worked for a company that was trying to turn something like this into the foundation for a search engine.

    Essentially, instead of having a bunch of search engines and AI spamming your site, the idea was that they would get a feed: you would scan your own website.

    As crawlers grew from an occasional visitor to an actual problem (an inordinate percent of all consumer traffic at the SaaS I worked for was bots rather than organic traffic, and would have been more without throttling) I keep wondering why we haven’t done this.

    Google has already solved the problem of people lying about their content, because with RSS feeds or user-agent sniffing you can still bear false witness to your site's content and purpose. But you'd only have to be scanned when there was something to see. And really, you could play games with time delays on the feed to smear bot traffic out over the day if you wanted.

  • makepostai 3 hours ago
    This is interesting, gonna try it on our next project! Thumbs up.
  • digitalbase 2 hours ago
    Cool stuff. You should make it OSS and charge a one-time fee for it. I would run it on my own infra but pay you once(.com)
  • pwr1 3 hours ago
    Interesting... added to bookmarks. Could come in handy in the future