https://www.resauctions.com/auctions/24572-two-day-richard-and-mary-lou-taylor-lifetime-collection-absolute-auction?page=2

I want to archive just this auction, mainly the photos, but the closing prices would be nice too. So my goal is either a folder of photos, or that plus a browsable offline copy of the listing.

Simply right-clicking and saving works decently enough, but surely there is a faster way. There are 16 pages, and I'm actually doing 2 separate auctions.

The script I normally use for a single webpage ends up crawling the whole site and pulling in various vendor pages instead of just this listing. It doesn't usually behave that way, so I'm thinking it has something to do with how the site is structured.

wget -p --convert-links -e robots=off -U mozilla --no-parent "https://www.resauctions.com/auctions/24572-two-day-richard-and-mary-lou-taylor-lifetime-collection-absolute-auction?page=2"
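
The best I've come up with so far is to loop over the ?page= parameter directly, assuming that query parameter really is the only pagination and there are 16 pages. Rough and untested, and the quotes also keep the shell from treating the ? as a glob:

    # untested sketch: grab all 16 listing pages for offline browsing
    # saves into auction-24572/ (arbitrary folder name)
    BASE="https://www.resauctions.com/auctions/24572-two-day-richard-and-mary-lou-taylor-lifetime-collection-absolute-auction"
    for n in $(seq 1 16); do
        wget -p --convert-links -e robots=off -U mozilla -P auction-24572 "${BASE}?page=${n}"
    done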

Any advice would be appreciated.

  • erik530195@alien.top (OP) · 1 year ago

    I suppose I could pull the entire site and delete the fluff afterwards, but that seems like a very long and resource-intensive process for such a small grab.
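
    A middle ground might be to pull only the 16 listing pages and then fetch just the photo URLs they reference. Rough, untested sketch; it assumes the photos show up as direct .jpg/.png links in the page source, so the pattern would need checking against one page first:

        # untested: fetch each listing page, harvest image URLs, then download only those
        BASE="https://www.resauctions.com/auctions/24572-two-day-richard-and-mary-lou-taylor-lifetime-collection-absolute-auction"
        for n in $(seq 1 16); do
            curl -s "${BASE}?page=${n}"
        done | grep -oE 'https?://[^"]+\.(jpe?g|png)' | sort -u > photo-urls.txt
        wget -P photos -i photo-urls.txt

    If the photos are only loaded by JavaScript, grep won't see them in the raw HTML and something browser-based would be needed instead.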