I find myself downloading lots of files from the web when converting sites into my company’s CMS. Whether the source is a static site or another CMS platform, trying to do this manually sucks. But thanks to wget’s recursive download feature, I can rip through a site and get all of the images I need, while even keeping the folder structure intact.
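As a rough sketch (the exact flags depend on the site, and example.com is just a placeholder), a recursive pull that grabs only images while preserving the directory layout looks something like this:

```
# Recursively crawl the site, but only save image files,
# mirroring the site's folder structure locally
wget --recursive --level=inf --no-parent \
     --accept jpg,jpeg,png,gif,svg \
     --wait=1 \
     https://www.example.com/
```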
One thing I found out is that wget respects robots.txt files, so if the site you are trying to copy has one with restrictive settings, wget will only get what is allowed. Fortunately, this can be overridden with a few tweaks. I gladly used the trick and decided to pass it along; see the instructions at the site linked below.
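In short, the override boils down to passing `-e robots=off`, which has wget execute that setting as if it were in your .wgetrc and skip the robots.txt check. Something like this (again, example.com is a stand-in, and the accept list is just one I happen to use):

```
# -e robots=off makes wget ignore robots.txt (and nofollow hints);
# --user-agent identifies the crawler as an ordinary browser
wget --recursive --no-parent -e robots=off \
     --user-agent="Mozilla/5.0" \
     --accept jpg,jpeg,png,gif \
     https://www.example.com/
```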