RSS Feed Finder
0,00 €
Finds the feeds that websites forget to tell you about.
Description
RSS Feed Finder is a Windows desktop application that attempts to discover RSS (Really Simple Syndication), Atom, RDF, and JSON feeds from any website URL. It works by running a series of detection strategies in sequence, probing the target site from multiple angles and reporting whatever it finds.
Many websites publish syndication feeds but don’t always make them easy to find. Feed URLs may be buried in HTML source, tucked behind non-standard paths, or spread across subdomains and category pages. This tool tries to surface them.

I made the application for a bit of fun as a side effect of something else I was doing, and as a learning exercise. Maybe it’ll be useful for someone else!
How It Works
When you enter a URL and click **Scan**, the application runs up to seven strategies in order. Each strategy tries a different approach to locating feeds.
Strategy 1 — HTML link tags.
- The tool fetches the target page and looks for `<link>` elements with RSS/Atom type attributes. This is the standard way sites advertise their feeds, and it’s checked first.
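The application itself is .NET, but the link-tag check can be sketched in a few lines of Python. Everything here (class and function names, the MIME-type set) is illustrative, not the tool's actual code:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

# MIME types that a <link> element uses to advertise a feed.
FEED_TYPES = {
    "application/rss+xml",
    "application/atom+xml",
    "application/rdf+xml",
    "application/feed+json",
}

class FeedLinkParser(HTMLParser):
    """Collects href values of <link rel="alternate"> elements whose
    type attribute names a known feed MIME type."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.feeds = []

    def handle_starttag(self, tag, attrs):
        if tag != "link":
            return
        a = dict(attrs)
        if (a.get("rel", "").lower() == "alternate"
                and a.get("type", "").lower() in FEED_TYPES
                and "href" in a):
            # Resolve relative hrefs against the page URL.
            self.feeds.append(urljoin(self.base_url, a["href"]))

def find_link_tag_feeds(html, base_url):
    parser = FeedLinkParser(base_url)
    parser.feed(html)
    return parser.feeds
```

Relative hrefs are resolved against the page URL, since many sites write `href="/feed/"` rather than an absolute URL.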
Strategy 2 — HTTP response headers.
- Some servers advertise feeds via `Link` headers in the HTTP response. The tool inspects these after fetching the main page.
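A `Link` header value looks like `<https://example.com/feed/>; rel="alternate"; type="application/rss+xml"` (RFC 8288). A rough Python sketch of extracting feed URLs from it (simplified: it splits on commas, which a URL containing a comma would break):

```python
import re

def feeds_from_link_header(link_header):
    """Extract feed URLs from an RFC 8288 Link header value.
    Only entries whose parameters mark them as an RSS/Atom
    alternate are returned."""
    feeds = []
    # Naive split on commas separating link-values (simplification).
    for part in link_header.split(","):
        m = re.match(r'\s*<([^>]+)>(.*)', part)
        if not m:
            continue
        url, params = m.group(1), m.group(2).lower()
        if 'rel="alternate"' in params and (
                "rss+xml" in params or "atom+xml" in params):
            feeds.append(url)
    return feeds
```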
Strategy 3 — Common paths.
- The tool probes a list of well-known feed paths such as `/feed/`, `/rss.xml`, `/atom.xml`, `/feed.json`, and many others. This covers WordPress, Ghost, Jekyll, Hugo, Tumblr, ASP.NET, and other common platforms. It tries priority paths first, then an extended list.
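The probing order can be sketched as a two-tier candidate list. The paths below are a small illustrative subset, not the tool's full list:

```python
from urllib.parse import urljoin

# Illustrative subset; the real path list in the tool is longer.
PRIORITY_PATHS = ["/feed/", "/rss.xml", "/atom.xml", "/feed.json"]
EXTENDED_PATHS = ["/rss/", "/index.xml", "/feeds/posts/default", "/blog/feed/"]

def candidate_feed_urls(base_url):
    """Yield probe URLs, priority paths first, skipping duplicates."""
    seen = set()
    for path in PRIORITY_PATHS + EXTENDED_PATHS:
        url = urljoin(base_url, path)
        if url not in seen:
            seen.add(url)
            yield url
```

Each yielded URL would then be fetched and validated; the priority-first ordering means the most common locations (WordPress-style `/feed/`, static-site `/rss.xml`) are tried before rarer ones.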
Strategy 4 — Internal link crawling.
- The tool scans links on the main page looking for anything that resembles a feed URL — paths containing “feed”, “rss”, “atom”, or similar patterns. It follows those links and validates what it finds.
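The "resembles a feed URL" test amounts to a pattern match on the link path. A rough Python equivalent (the regex is illustrative, not the tool's actual pattern):

```python
import re

# Match "feed", "rss", or "atom" as a path segment or filename stem,
# so "/blog/feed/" and "/rss.xml" hit but "/anatomy/" does not.
FEED_HINT = re.compile(r'(^|/)(feed|rss|atom)([./]|$)', re.IGNORECASE)

def looks_like_feed_url(href):
    """True if a link's path contains a feed-like segment."""
    return bool(FEED_HINT.search(href))
```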
Strategy 5 — Category and tag feeds.
- Many CMS platforms generate per-category and per-tag feeds. The tool looks for category paths in the site’s navigation and sitemap, then attempts to construct feed URLs from them (e.g. `/category/news/feed/`).
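Constructing those candidate URLs is straightforward. A hedged Python sketch, assuming WordPress-style `/feed/` suffixes and using the Max category cap described under Settings (function name is hypothetical):

```python
from urllib.parse import urljoin

def category_feed_candidates(base_url, category_paths, limit=10):
    """Turn category paths like /category/news/ into WordPress-style
    feed URLs, capped at `limit` (the Max category setting)."""
    urls = []
    for path in category_paths[:limit]:
        if not path.endswith("/"):
            path += "/"
        urls.append(urljoin(base_url, path + "feed/"))
    return urls
```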
Strategy 6 — Subdomain probing.
- The tool checks common subdomains like `blog.`, `news.`, `feeds.`, and `rss.` for feed paths. This is useful for sites that host their blog on a separate subdomain.
Strategy 7 — Google search fallback.
- If no feeds have been found by this point, the tool searches Google for feed URLs associated with the domain. This is a last resort and depends on Google not blocking the request.
Each discovered URL is validated by fetching it and checking whether the response is valid XML (RSS, Atom, or RDF) or a JSON Feed. Only confirmed feeds appear in the results.
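The validation step boils down to classifying the response body. A minimal Python sketch, assuming the XML root element distinguishes RSS (`<rss>`), Atom (`<feed>`), and RDF (`<rdf:RDF>`), and that a JSON Feed declares its `version` URL:

```python
import json
import xml.etree.ElementTree as ET

def classify_feed(body):
    """Return 'RSS', 'Atom', 'RDF', or 'JSON Feed' for a valid feed
    body, or None if it is neither parseable XML nor a JSON Feed."""
    text = body.strip()
    if text.startswith("{"):
        try:
            doc = json.loads(text)
        except ValueError:
            return None
        # JSON Feed documents declare their version as a URL.
        version = str(doc.get("version", ""))
        return "JSON Feed" if version.startswith("https://jsonfeed.org/version/") else None
    try:
        root = ET.fromstring(text)
    except ET.ParseError:
        return None
    tag = root.tag.split("}")[-1].lower()  # strip any XML namespace
    return {"rss": "RSS", "feed": "Atom", "rdf": "RDF"}.get(tag)
```

Anything that parses but has a different root element (an HTML error page served with a 200 status, for instance) is rejected, which is why only confirmed feeds reach the results.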
Dealing with Bot Protection
Many websites use Web Application Firewalls (WAFs) such as Cloudflare, Sucuri, and Akamai that can block automated requests. The tool has three HTTP session modes to try to work around this:
- **Standard** uses a normal .NET HttpClient with browser-like headers. This works for most sites but may be blocked by stricter WAFs.
- **Legacy SSL** uses relaxed TLS settings with simpler request headers. Some WAFs are actually more suspicious of browser-like headers combined with non-browser TLS fingerprints, so this plainer approach sometimes succeeds where Standard fails.
- **Curl** shells out to the system’s `curl.exe` with low-security cipher settings. This produces a genuine curl TLS fingerprint, which some WAFs treat differently from .NET’s fingerprint.
In Auto mode (the default), the tool tries all three modes when fetching the main page and uses whichever one succeeds. If a mode gets blocked during the scan, the tool falls back to the remaining modes automatically. The tool also detects aggressive rate limiters and increases the delay between requests accordingly.
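The Auto-mode fallback is essentially "try each mode until one works". A minimal Python sketch (the mode names match the list above; the callable-per-mode structure is an assumption for illustration):

```python
def fetch_with_fallback(url, fetchers):
    """Try each session mode in order and return (mode, body) for the
    first one that succeeds.

    `fetchers` maps a mode name to a callable that returns the page
    body or raises when the request is blocked or fails.
    """
    last_error = None
    for mode, fetch in fetchers.items():
        try:
            return mode, fetch(url)
        except Exception as exc:  # blocked or failed: try the next mode
            last_error = exc
    raise RuntimeError(f"all session modes failed: {last_error}")
```

The same loop covers mid-scan fallback: when the current mode starts getting blocked, the remaining modes are simply tried in order.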
None of this is guaranteed to bypass any particular WAF. Some sites simply will not serve content to automated clients, and the tool will report what it can.
Using the Application
Basic usage.
- Enter a URL in the text field and click **Scan** (or press Enter). The left pane shows the discovery log with colour-coded results — green for found feeds, blue for status messages, yellow for warnings. The right pane shows detailed HTTP diagnostics for every request and response.
Pause and Resume.
- Click **Pause** to suspend the scan at any time. This lets you inspect what has been found so far without losing progress. Click **Resume** to continue from where it left off. The **Stop** button cancels the scan entirely.
Copy results.
- Each pane has a clipboard button (📋) in its header. Click it to copy the full pane contents as plain text.
Settings
**Delay (s)** controls the minimum wait between HTTP requests, with ±30% random jitter added. The default of 2 seconds is a reasonable starting point. If a site is rate-limiting you (you’ll see warnings in the log), try increasing this to 5 or 8 seconds. The tool will also increase the delay automatically if it detects aggressive rate limiting.
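The jittered delay is a small calculation. A Python sketch, assuming the automatic rate-limit response is modelled as a multiplier on the base delay (the `backoff_factor` name is illustrative):

```python
import random

def jittered_delay(base_seconds, backoff_factor=1.0):
    """Base delay scaled by any rate-limit backoff, with ±30% random
    jitter so requests don't arrive on a perfectly fixed cadence."""
    jitter = random.uniform(-0.3, 0.3)
    return base_seconds * backoff_factor * (1.0 + jitter)
```

So with the default 2-second delay, each wait lands somewhere between 1.4 and 2.6 seconds.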
**Session** lets you force a specific HTTP mode instead of auto-detecting. This is occasionally useful if you know a site responds well to a particular approach, or if auto-detection picks the wrong mode.
**Max category** limits how many category/tag feeds are probed. The default of 10 is usually sufficient. Large sites may have hundreds of categories; setting this higher will make scans take considerably longer.
**Subdomains** enables or disables subdomain probing (Strategy 6). Disable it if you know the site doesn’t use subdomains, to save time.
**JSON Feed** enables or disables detection of JSON Feed format alongside RSS and Atom.
**Google fallback** enables or disables the Google search fallback (Strategy 7). This only runs if no feeds have been found by the earlier strategies.
What Gets Reported
Results are grouped into Main feeds and Category/tag feeds. Each feed shows its URL, type (RSS, Atom, RDF, or JSON Feed), and title if one was found in the feed’s metadata.
The discovery log in the left pane shows every step the tool takes, including which strategies found which feeds. The HTTP diagnostics pane on the right shows the raw request and response details for every HTTP call, which is useful for understanding why a particular site is or isn’t cooperating.
Limitations
There are a number of things this tool cannot do, or may struggle with. It’s worth understanding these before relying on the results.
It cannot bypass all bot protection.
- Some WAFs require JavaScript execution, CAPTCHA solving, or genuine browser sessions to serve content. The tool does not run JavaScript or render pages. If a site’s WAF blocks all three HTTP modes, the tool simply cannot access that site’s pages or feeds.
It may miss feeds.
- The tool checks a broad set of common paths and patterns, but it cannot discover feeds at entirely non-standard URLs that don’t follow any known convention. If a site publishes a feed at `/my-custom-feed-path/syndication/v2`, the tool is unlikely to find it unless it’s linked from the main page or appears in search results.
Google fallback is unreliable.
- Google frequently blocks automated search requests. When the fallback does work, it depends on Google having indexed the feed URLs, which isn’t always the case.
Subdomain probing is limited.
- The tool only checks a small set of common subdomains (blog, news, feeds, rss, www). Sites using unusual subdomains will not be covered.
Rate limiting may cause incomplete results.
- If a site aggressively rate-limits requests, some strategies may fail partway through. The tool will warn you when this happens, but the results may be incomplete. Increasing the delay and re-running can sometimes help.
Category feed discovery depends on site structure.
- The tool looks for category URLs in the site’s navigation links and sitemap. If the site doesn’t expose its categories in either of those places, this strategy may not find anything.
It does not verify feed content quality.
- The tool confirms that a URL returns valid feed XML or JSON, but it does not check whether the feed contains recent or meaningful content. A feed may be valid but empty, abandoned, or contain only placeholder entries.
It requires curl.exe for the Curl session mode.
- On Windows, curl is included by default in Windows 10 version 1803 and later. If you’re on an older system or curl has been removed, the Curl mode will not be available.
Tips for Getting the Most from It
If a scan comes back with no results or only partial results, there are a few things worth trying.
Increase the delay.
- If the log shows warnings about blocked requests or rate limiting, increase the delay to 5–10 seconds and re-run.
Try a specific session mode.
- If Auto mode picks a mode that gets blocked halfway through, try forcing a different one. Legacy SSL sometimes works where Standard fails, and vice versa.
Check the HTTP diagnostics.
- The right pane shows exactly what happened with each request. Look for patterns — if all requests to a certain path return 403 or 503, the site is actively blocking that route.
Disable unnecessary strategies.
- If you know a site doesn’t use subdomains, uncheck the Subdomains option. This speeds up the scan and avoids unnecessary requests that might trigger rate limiting.
Look at category feeds separately.
- Some sites don’t have a main site-wide feed but do publish per-category feeds. The category feed strategy can sometimes find these when the main path checks come up empty.
Use pause to check progress.
- If you’re scanning a slow or rate-limited site, pause partway through to see what has been found so far. If you’ve already got what you need, there’s no reason to wait for the remaining strategies to finish.
Version History
Version 1.0.0 Current
Released: February 14, 2026