You chance upon a website containing dozens of file download links, all of which you want to grab. The site has no RSS feed, nor does it provide an FTP service. Right-clicking each of those links and saving them one by one is definitely tedious. You're impatient, so what can you do?
It happened to me today. I was impatient. But thank God I'm a Linux user. So here's what I did:
- View the page source.
- Select the section containing the links and copy it.
- Paste the copied text into a Vim buffer.
- Run a few Vim commands to strip out the unnecessary text so that only the URLs of the files I wanted to download remain (a rough sketch of these commands appears after this list).
- Insert “wget” before each of the URLs with the command “:%s/^/wget /g”.
- Save the buffer to a file.
- Make the file executable: “chmod +x /path/to/file”.
- Run the script in the directory where I want to dump the files.
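As a rough sketch, assuming the copied page source contains ordinary href="..." attributes (the pattern and the script name getfiles.sh below are just examples, not necessarily what I typed), the Vim commands could look like this:

:v/href/d
:%s/.*href="\([^"]*\)".*/\1/
:%s/^/wget /g
:w getfiles.sh

The first command deletes every line that does not contain a link, the second keeps only the URL inside each href, the third prefixes every URL with “wget ”, and the last one saves the buffer as a script that can then be made executable and run.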
My favorite way to do this is with Firefox’s Page Info.
Go to Tools -> Page Info, then click the Links tab. Now all you have to do is select the links you want, copy them into a text file, and then run wget -i on that text file.
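In other words, assuming the copied links were saved to a file called links.txt (just an example name):

wget -i links.txt -P /path/to/download/dir

Here -i tells wget to read the URLs from the file, and the optional -P flag sets the directory the downloads are saved into.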