- Content Collector
- Content Collector is moving!
- Project status
- Login management
- Running the script only
- OSX Python issues
Use this awesome tool to download …which you've marked as Liked, Hearted, or Saved from …to disk! You can then browse the results.
Crossed out = these sites do not have official APIs, so breakage happens much more easily. As of 2021-07-04, they are both non-functioning.
Content Collector is moving!
UPDATE: For the near to mid-term, Content Collector will remain on GitHub.
Content Collector will be moving off of GitHub in the future.
I use Content Collector nearly every day; it remains a neat and useful project. You can expect very good (though not perfect) Reddit support, a good offline media browser, and a local media scanning setup.
However, sites which lack suitable download APIs turn this type of project into a constant arms race against each site's evolving security measures.
As a software developer, that isn't a battle I want to fight. As such, you shouldn't expect any more updates to this project regarding new site support or major features.
I think the ultimate solution to the media downloading problem is a web browser with strong automatic image/video downloading integrations.
Content Collector may still be valuable in that world by offering a good way to browse the content you've downloaded, but its automatic downloading is likely to fall into disrepair.
I'm still quite proud of this project, and hope that the other users have gotten good value out of it.
0. Check Releases
Check the Releases page for a ready-to-use version of Content Collector. If you find a release for your system that works, you can skip straight to the Usage section.
1. Clone this repository
git clone https://github.com/makuto/Liked-Saved-Image-Downloader
2. Install python dependencies
Poetry can be used to automatically request the proper dependencies:
sudo pip3 install poetry
# If you are going to use -E security, you may first have to:
sudo apt install libffi-dev
poetry install -E security
poetry install -E security is only necessary if you want to use authentication and SSL encryption (which is recommended).
poetry install is sufficient if you do not want those features.
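The -E security flag corresponds to an "extras" group in pyproject.toml. A sketch of what that group plausibly contains, based on the authentication dependencies listed later in this ReadMe (check the actual pyproject.toml for the authoritative list):

```toml
[tool.poetry.extras]
security = ["passlib", "bcrypt", "argon2_cffi"]
```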
The old way
This is the manual way to install the dependencies. Dependencies will be installed to your system, and virtual environments will not be used.
The following dependencies are required:
pip install praw pytumblr ImgurPython jsonpickle tornado youtube-dl git+https://github.com/ankeshanand/py-gfycat@master git+https://github.com/upbit/pixivpy py3-pinterest
You'll want to use Python 3, which for your environment may require you to specify pip3 instead of just pip.
If you want to require the user to login before they can interact with the server, you must install passlib:
pip install passlib bcrypt argon2_cffi
3. Generate SSL keys
cd Liked-Saved-Image-Downloader/
./Generate_Certificates.sh
This step is only required if you want to use SSL, which ensures you have an encrypted connection to the server. You can disable SSL by opening LikedSavedDownloaderServer.py and setting useSSL = False.
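If you'd rather not use the provided script, a self-signed certificate can also be generated directly with openssl. This is a sketch with hypothetical filenames; Generate_Certificates.sh may use different names and options, so prefer the script if it works for you:

```shell
# Generate a self-signed certificate and key valid for one year
# (hypothetical filenames; adjust to whatever the server expects)
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server.key -out server.crt \
    -days 365 -subj "/CN=localhost"
```

The -nodes and -subj options keep the command non-interactive, which is convenient for scripts.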
4. Run the server
poetry run python3 LikedSavedDownloaderServer.py
Starting the server on boot
If you want to use Systemd, do the following:
sudo cp content-collector.service /etc/systemd/system/content-collector.service
sudo systemctl enable content-collector
sudo systemctl start content-collector
When updating the server, you can use the following command to restart it:
sudo systemctl restart content-collector
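The repository ships content-collector.service; if you need to adapt it (for example, to change the install path), a minimal unit for this kind of server looks roughly like the following sketch. The paths here are assumptions, not the shipped file's contents; edit them to match your setup:

```ini
[Unit]
Description=Content Collector server
After=network.target

[Service]
# Assumed install location and invocation; adjust for your system
WorkingDirectory=/home/pi/ContentCollector
ExecStart=/usr/bin/env poetry run python3 LikedSavedDownloaderServer.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```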
You can also use cron, but it's more of a hassle to stop/restart the server:
# Must be root account for access to port 443 (or 80 for unsecured servers)
sudo crontab -e
Add this to the file that opens for editing (customize path to your liking), then save that file:
@reboot cd /home/pi/ContentCollector && sudo poetry run python3 LikedSavedDownloaderServer.py 2>&1 | tee LikedSavedServer.log
Reboot your system to start the server.
If you did not use poetry, omit the poetry run prefix from the command above.
Access the server
Open localhost:8888 in any web browser.
If your web browser complains about the certificate, you may have to click Advanced and add the certificate as trustworthy, because you've signed the certificate and trust yourself :).
(Explanation: this certificate isn't trusted by your browser because you created it. It will still protect your traffic from people snooping on your LAN.)
If you want to get rid of this warning, you'll need a signing authority like LetsEncrypt to generate your certificate, and to host the server under a proper domain.
When first running the server, you will be prompted to set a password.
If you forget your password, simply delete passwords.txt.
The home page provides access to all server features:
Set up accounts
Use Settings to configure the script:
Make sure to click "Save Settings" before closing the page.
You don't have to fill in every field, only the accounts you want.
Go to the Download Content page and click "Download new content":
Wait until the downloader finishes (it will say "Finished" at the bottom of the page). While the downloader is running, the "Download new content" button will disappear.
Enjoy! Use Browse Content to jump to random content you've downloaded, or browse your output directory:
The browser should scale nicely to work on both mobile and desktop.
The server requires login before you can run the downloader, change settings, or browse downloaded content.
If you host Content Collector on the internet, you should rely on a more robust authentication scheme (e.g. use a reverse proxy which won't proxy requests to Content Collector until you have authenticated with the proxy server). Content Collector was designed for LAN use.
Note that all login cookies will be invalidated each time you restart the server. If you don't restart the server, your browser should remember login indefinitely.
The web interface will automatically prompt for a new password when first starting up.
You can also use PasswordManager.py to generate a passwords.txt file containing your hashed (and salted) passwords:
python3 PasswordManager.py "Your Password Here"
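To see what "hashed (and salted)" means in practice, here is a small illustration using only the Python standard library. PasswordManager.py actually uses passlib, so this shows the concept, not the project's real scheme:

```shell
python3 - <<'EOF'
# Conceptual demo of salted password hashing (not PasswordManager.py's code):
import hashlib, os

salt = os.urandom(16)  # random salt, stored alongside the hash
digest = hashlib.pbkdf2_hmac("sha256", b"Your Password Here", salt, 100_000)

# Verification re-derives the digest from the stored salt and compares:
check = hashlib.pbkdf2_hmac("sha256", b"Your Password Here", salt, 100_000)
print(digest == check)  # prints True
EOF
```

Because the salt is random, two users with the same password still get different stored hashes.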
You can create multiple valid passwords, if desired. There are no separate accounts, however.
If you want to reset all passwords, simply delete passwords.txt. To disable authentication entirely, open LikedSavedDownloaderServer.py and find enable_authentication. Set it equal to False. You must restart the server for this to take effect.
Running the script only
This is deprecated. You should use the web server to configure and run the script instead.
- Copy settings_template.txt into a new file called settings.txt
- Fill in your username and password
- Set … to False if you are sure you want to do this
- Run the script
- Wait for a while
- Check your output directory (the default is output, relative to where you ran the script) for all your images!
If you want more images, set Tumblr_Total_Requests to a higher value. The maximum is 1000. Unfortunately, reddit does not allow you to get more than 1000 submissions of a single type (1000 liked, 1000 saved).
Not actually getting images downloaded, even though the console says it downloaded images? Check your settings: settings.txt has several additional features. Read the comments to learn how to use them.
OSX Python issues
On OSX, running the downloader from the Content Collector server may cause this error:
objc: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called.
This is a problem with Python and OSX's security model clashing. See this issue for an explanation.
To work around it, you need to first run …before running the Content Collector server in that same terminal.
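The workaround usually suggested for this particular fork-safety error (an assumption to verify against the linked issue, not something this project documents) is to disable Objective-C's fork safety check in the shell before starting the server:

```shell
# Disable macOS Objective-C fork-safety checks for this shell session only
export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
```

The variable only affects processes started from that terminal, so the change does not persist across sessions unless you add it to your shell profile.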
Or add the bash profile suggested in this answer.
Feel free to create Issues on this repo if you need help. I'm friendly so don't be shy.