I am trying to collect a bunch of scripts from an RMM site to look at them / build a library.

When I search their site, I get a page listing the different scripts - 1 per page. Click the link for a script and you get a page explaining what the script does, and there's a link to download the script.

Seems simple: I don't want external sites, and I only want to go two levels deep from that search page. But I'm not sure what I'm doing wrong.

Or is there a better / easier app for Windows?


A wget recursive mirror will do it, if the site allows it.
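If you go the wget route, something along these lines would match what you described - two levels deep, one domain only, and a polite crawl rate (the URL and domain here are placeholders, not the real site):

```shell
# Mirror the search page two levels deep, staying on one domain.
# --wait/--random-wait throttle requests so it looks less like an attack.
wget --recursive --level=2 \
     --no-parent \
     --domains=domainIWant.com \
     --convert-links --adjust-extension \
     --wait=1 --random-wait \
     "https://www.domainIWant.com/search?q=scripts"
```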

Warning: site operators may flag this as a DoS or some other type of attack.

Are you getting an error? We don't know.

We can't say whether you're doing something wrong without knowing what's actually happening.

After posting, I wound up manually scraping the several pages of search results into one page, which is now on my computer.
I pointed the URL to file…search.htm
I set the scan rules page to +.domainIWant.com/
That seems to be getting the scripts and pages, but loads of other things too.
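For what it's worth, HTTrack's scan rules expect full wildcard patterns, so a filter set along these lines (domain is a placeholder) excludes everything first and then re-includes just the one domain, which should cut down on the "loads of other things":

```
-*
+*.domainIWant.com/*
```

The "2 deep" part isn't a scan rule at all - if I'm remembering the option names right, it's under Set Options > Limits (Maximum mirroring depth = 2, Maximum external depth left at 0 to keep it off external sites).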
There are LOADS of .tmp files. Is that usual?
Like:
windows-remote-desktop-software.html.tmp

wisecurve.html.tmp

There are about 700 pages / scripts, so I was figuring one HTML page and one script download each = 1,400 items. HTTrack is still running and is up to 15K items downloaded to the local drive! : )
It says 5556 of 18K links scanned : )
There are 6,000 .tmp files.
Interesting - some of the .tmp files don't have another file with the same name other than the .tmp,
but for some there's an HTML file and then .tmp files with the same name in different folders - related to language?!

I've had to grab content from pages that need interaction too, like logins or custom searches. What worked for me was submitting the form with Playwright, since it lets you load the page, fill in fields, and click buttons - basically everything you'd do in a browser. Once the form is submitted and the data is there, you can grab the HTML or take screenshots, whatever you need.
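A minimal sketch of that approach with Playwright's Python bindings (requires `pip install playwright` and `playwright install chromium`; the URL, field name, and selectors below are placeholders, not the real site's):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class LinkCollector(HTMLParser):
    """Collects absolute hrefs from a page's HTML (stdlib only)."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


def collect_links(html, base_url):
    """Return every link on the page as an absolute URL."""
    parser = LinkCollector(base_url)
    parser.feed(html)
    return parser.links


def scrape_search_results(query):
    # Playwright import is kept local so collect_links() works without it installed.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://rmm-site.example.com/scripts")  # placeholder URL
        page.fill("input[name='q']", query)                # placeholder field name
        page.click("button[type='submit']")                # placeholder selector
        page.wait_for_load_state("networkidle")            # wait for results to render
        html, final_url = page.content(), page.url
        browser.close()
    return collect_links(html, final_url)
```

The link-extraction helper uses only the standard library, so you could point it at pages HTTrack has already saved as well.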