My yt-dlp is running most nights, but I’ve gotta stop… I’m never going to get around to watching everything.
Could you share the Britney Spears??? :)
What? The hash is just a number in a text file. Open the .md5 file with Notepad?
You aren’t very clear… Are you saying it creates the hash, but then verifying fails? I don’t know what you mean by unreadable.
Are you saying it fails to make a checksum at all? What program? Try this one: https://github.com/gurnec/HashCheck/releases/tag/v2.4.0
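If you’d rather skip the GUI, Windows has certutil built in too (the filename here is just an example):

    certutil -hashfile video.mkv MD5

Then compare the output against the number stored in the .md5 file.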
Seriously, this isn’t link hoarding!!
But really OP, anything you think you might want should be downloaded, sites are vanishing all the time. Ramp up the hoarding!
I have a 50MB HDD in my first computer, a 286 Digital VAXmate that still works, from 1989 maybe?
It’s all I use
No file, it outputs alongside the usually displayed info as it’s processing, just adds more
The read and seek error rates appear to be zero. Any errors would be recorded in the top 4 digits. Different manufacturers use the fields differently.
Not sure about command timeout off the top of my head
The UltraDMA CRC actually looks bad on the 1st one, I think.
Check some real documentation from Seagate though to know what the fields mean
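If you want to sanity-check the error/operation split yourself, here’s a quick sketch assuming the commonly cited Seagate layout (upper 16 bits of the 48-bit raw value are the errors, lower 32 bits are total operations; the raw number below is made up):

    raw=0x00010000ABCD   # placeholder 48-bit raw value from the SMART output
    echo "errors: $(( raw >> 32 ))"
    echo "operations: $(( raw & 0xFFFFFFFF ))"

If the top bits come out zero, that scary-looking giant number is just the operation count.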
yt-dlp does pretty much everything. It can do a channel’s playlists; not sure about a custom saved one, but try feeding it a link to it? Then you can just keep re-running it to download any new additions.
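Something like this is a reasonable starting point; the channel URL is a placeholder, and --download-archive is what makes re-runs only fetch new videos:

    yt-dlp --download-archive archive.txt "https://www.youtube.com/@SomeChannel/playlists"

If the playlists tab doesn’t pick up the custom one, feed it the playlist’s own URL instead.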
Video DownloadHelper (red/yellow/blue ball icon) is the real deal
Is it the website kicking it off? Can you run with --debug and post what it says when the request fails?
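If this is wget, something like this captures the full debug chatter to a file you can post (the URL is a placeholder):

    wget --debug -o wget-debug.log "https://example.com/file"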
I’ll be back at my computer later and can send over an idea to try. I have a wgetrc file that has gotten around some issues before
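In the meantime, here’s a rough sketch of the kind of thing that’s in it; every value below is just an example to tweak, not my exact file:

    # ~/.wgetrc
    robots = off
    user_agent = Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0
    tries = 3
    wait = 1
    random_wait = on

Check the wgetrc section of the manual for the exact command names if anything gets rejected.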
Wget is awesome, I have scraped tons with it. So many options; you can even spoof all the request header info to get around sites that try to limit auto downloaders. Here is the manual: https://www.gnu.org/software/wget/manual/wget.html
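For example, something along these lines makes the request look like a normal browser; the URL and header values are placeholders:

    wget --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0" \
         --header="Referer: https://example.com/" \
         --header="Accept-Language: en-US,en;q=0.5" \
         "https://example.com/page.html"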
https://addons.mozilla.org/en-CA/firefox/addon/dont-accept-webp/
Wget is not behaving identically to a browser, so I’m unsure what this part of the request looks like or if it needs modification. If it isn’t working, let me know.
For future scraping, look at the --mirror option. It sets recursion to infinite and will make a full copy of the site. You can also use the --convert-links option, which changes all the links to point to the locally downloaded files. It then behaves the same as the real website.
You can’t wander onto other sites unless you use --span-hosts; it can grab external files from different domains to make the mirrored site a true copy, but yeah, you often don’t need that. You do want to be more careful with recursive depth when spanning hosts, though, since it can go too deep and you end up with way too much data.
Some sites also need the --wait or --random-wait options to avoid detection.
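Putting those together, a polite full mirror looks roughly like this (the domain is a placeholder):

    wget --mirror --convert-links --wait=2 --random-wait "https://example.com/"

Only add --span-hosts on top of that if you actually want it chasing files on other domains.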
SMART stats are cropped off?
External SSD is way better than a USB key. I just got a Samsung T7, and compared to name-brand keys it is far superior. Both the controller and the chips are better quality. The speed is consistently fast, compared with keys that always seem to slow down / hang / fluctuate on big transfers. I don’t know what the actual stats are, but I’ve heard SSDs just have higher IOPS.
Is the screenshot unreadably blurry, or is it just my phone sucking? I can’t click it to enlarge or zoom in…
Get Everything: https://www.voidtools.com/downloads/
Replace Windows search with it: https://github.com/srwi/EverythingToolbar
It is a million times faster and better than Windows search; I use it constantly. Easier than even navigating to files normally.