22 points by chadwebscraper 4 hours ago | 5 comments
arm32 2 hours ago
Residential proxies are sketchy at best. How can you guarantee that your service's infrastructure doesn't hinge on an illicit botnet?
dewey 1 hour ago
There's a lot of variety in the residential proxy market. Some are sourced from bandwidth sharing SDKs for games with user consent, some are "mislabeled" IPs from ISPs that offer that as a product, and then there's a long tail of "hacked" devices. Labeling them all as sketchy seems wrong.
sfRattan 33 minutes ago
> Some are sourced from bandwidth sharing SDKs for games with user consent...

The notion that most people installing a game meaningfully consent to unspecified, ongoing uses of their Internet connection being resold to undeclared third parties gave me a good, hearty belly laugh. Especially expressed so matter-of-factly.

Thank you.

fnimick 31 minutes ago
Legal? Probably. Ethical? Absolutely not.
chadwebscraper 2 hours ago
This is a good callout - I've tried my best thus far to avoid proxies unless absolutely necessary, and when they are needed, to stick to reputable providers (even though these are a bit pricier).

Definitely going to give this more thought though, thank you for the comment

cpursley 35 minutes ago
Self hosted option with life changes coming soon: https://mulberry.bot
arjunchint 2 hours ago
So what happens when the website layout updates - does the monitoring job fail silently?
chadwebscraper 1 hour ago
With APIs, it adjusts automatically. For HTML layouts, it looks at the previous diffs to catch likely extraction errors and then re-indexes the page.
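
Roughly this kind of check (a simplified sketch with made-up field names, not the actual code):

    # Simplified sketch, not the real implementation: flag a page for
    # re-indexing when fields that were reliably present in recent runs
    # suddenly come back empty, which usually means the layout changed.
    def needs_reindex(current: dict, history: list, window: int = 5) -> bool:
        recent = history[-window:]
        if not recent:
            return False
        for key in recent[-1]:
            always_present = all(run.get(key) not in (None, "", []) for run in recent)
            if always_present and current.get(key) in (None, "", []):
                return True
        return False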
chadwebscraper 4 hours ago
Here’s how it works:

1. Paste a URL in, describe what you want

2. Define an interval to monitor

3. Get real-time webhooks of any changes in JSON (rough example payload below)

Lots of customers are using this across different domains to get consistent, repeatable JSON out of sites and monitor changes.

Supports API + HTML extraction, never write a scraper again!
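
To make that concrete, a webhook payload might look roughly like this (the field names are illustrative assumptions, not the exact schema):

    # Illustrative only - the field names are assumptions, not the exact schema.
    example_event = {
        "url": "https://example.com/pricing",
        "checked_at": "2025-01-01T12:00:00Z",
        "changed": True,
        "data": {"plan": "Pro", "price_usd": 49},       # full extracted JSON
        "diff": {"price_usd": {"old": 39, "new": 49}},  # fields that changed
    }

    def on_change(event: dict) -> None:
        # React only to runs where something actually changed.
        if event.get("changed"):
            for field, change in event.get("diff", {}).items():
                print(f"{event['url']}: {field} {change['old']} -> {change['new']}")

    on_change(example_event)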

codingdave 2 hours ago
Writing a scraper isn't the hard part; that's actually fairly trivial at this point. Pulling content into JSON from your scrape is also fairly trivial - libraries exist that handle it well.

The harder parts are things like playing nicely so your bot doesn't get banned by sysadmins, detecting changes downstream from your URL, handling dynamically loading content, and keeping that JSON structure consistent even as your sites change their content, their designs, etc. Also, scalability. One customer I'm talking to could use a product like this, but they have 100K URLs to track, and that is more than I currently want to deal with.

I absolutely can see the use case for consistent change data from a URL, I'm just not seeing enough content in your marketing to know whether you really have something here, or if you vibe coded a scraper and are throwing it against the wall to see if it sticks.

chadwebscraper 2 hours ago
I appreciate the response! I also agree - happy to add some clarity to this stuff.

Bot protection - this is handled in a few ways. The basic form bypasses most bot protections, and that's what you can use on the site today. For tougher sites, it solves the bot protections (think DataDome, Akamai, Incapsula).

The consistency part is ongoing, but it's possible to check the diffs and content extractions, notice when something has changed, and "reindex" the site.

100k URLs is a lot! It could support that, but the initial indexing would be heavy. It's fairly resource-efficient (no browsers). For scale, it's doing about 40k scrapes a day right now.

Appreciate the comments, happy to dive deeper into the implementation, and I agree with everything you've said. Still iterating and trying to improve it.

codingdave 5 minutes ago
Re-indexing seems sub-optimal. I can't think of a use case where people care if the design changes. Even some content changes are not going to be interesting. Someone corrected a typo, updated punctuation, that kind of thing... such things are just noise if you are trying to react to content changes.

Your system needs to know not only what changed, but whether or not it matters. Splitting meaningful content from irrelevant noise is exceedingly important. If you know that, you do not need to re-index because you can diff only the meaningful content.
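
Something like this is what I mean (just a rough sketch with made-up field names, not anyone's actual code):

    import hashlib, json, re

    # Fingerprint only the fields you care about, after normalizing away
    # cosmetic noise, so a typo fix or punctuation tweak doesn't register
    # as a change.
    MEANINGFUL_FIELDS = ("title", "price", "stock_status")

    def normalize(value) -> str:
        text = re.sub(r"\s+", " ", str(value)).strip().lower()
        return re.sub(r"[^\w ]", "", text)  # drop punctuation-only edits

    def fingerprint(extracted: dict) -> str:
        subset = {k: normalize(extracted.get(k, "")) for k in MEANINGFUL_FIELDS}
        return hashlib.sha256(json.dumps(subset, sort_keys=True).encode()).hexdigest()

    def meaningful_change(prev: dict, curr: dict) -> bool:
        return fingerprint(prev) != fingerprint(curr)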

As for the 100K URLs: each one has between 200 and 1,000 sub-pages beneath the top-level page. They all need to be periodically scanned for updates, while capturing that distinction of noise vs. meaningful change. I've actually got code that does the needed work - it's scaling it up to that level that I didn't want to take on.

I'm not sure what you mean by no browsers. My existing scraper uses headless browsers, in order to capture JavaScript-driven content and navigate through a SPA without having to re-load at every URL change. If you are not using even a headless browser, how are you getting dynamic content?

tmaly 2 hours ago
This must wreck their Google Analytics stats
chadwebscraper 2 hours ago
lol it probably does unless their filtering is great
groby_b 1 hour ago
"AntiBot bypass".

I see we continue to aim for high ethical standards throughout the industry.