Use this awesome tool to download … which you've marked as Liked, Hearted, or Saved from … to disk! You can then browse the results.
Crossed out = These sites do not have official APIs, so breakage happens much more easily. As of 2021-07-04, they are both non-functioning.
Check the Releases page for a ready-to-use version of Content Collector. If you find a release for your system that works, you can skip straight to the Usage section.
git clone https://github.com/makuto/Liked-Saved-Image-Downloader
Poetry can be used to automatically install the proper dependencies:

sudo pip3 install poetry
# If you are going to do -E security, you may have to:
sudo apt install libffi-dev

poetry install -E security
poetry install -E security is only necessary if you want to use authentication and SSL encryption (which is recommended).
poetry install is sufficient if you do not want those features.
This is the manual way to install the dependencies. Dependencies will be installed to your system, and virtual environments will not be used.
The following dependencies are required:
pip install praw pytumblr ImgurPython jsonpickle tornado youtube-dl git+https://github.com/ankeshanand/py-gfycat@master git+https://github.com/upbit/pixivpy py3-pinterest
You'll want to use Python 3, which for your environment may require you to specify pip3 instead of just pip.
If you want to require the user to login before they can interact with the server, you must install passlib:
pip install passlib bcrypt argon2_cffi
cd Liked-Saved-Image-Downloader/
./Generate_Certificates.sh
This step is only required if you want to use SSL, which ensures you have an encrypted connection to the server. You can disable SSL by opening LikedSavedDownloaderServer.py and setting useSSL = False.
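For context, a useSSL-style flag usually just decides whether the HTTP server is handed certificate options or serves plain HTTP. This is an illustrative sketch, not the actual server code; the helper name and certificate paths are assumptions:

```python
def make_server_options(use_ssl,
                        certfile="certificates/server.crt",
                        keyfile="certificates/server.key"):
    # Hypothetical helper: with SSL enabled, pass the generated
    # certificate and key to the HTTP server; otherwise serve
    # unencrypted HTTP (fine on a trusted LAN).
    if use_ssl:
        return {"ssl_options": {"certfile": certfile, "keyfile": keyfile}}
    return {}
```

With use_ssl=False the server gets no ssl_options and your browser connects over http:// instead of https://.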
poetry run python3 LikedSavedDownloaderServer.py
If you want to use Systemd, do the following:
sudo cp content-collector.service /etc/systemd/system/content-collector.service
sudo systemctl enable content-collector
sudo systemctl start content-collector
When updating the server, you can use the following command to restart it:
sudo systemctl restart content-collector
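The repo ships content-collector.service ready to copy. If you need to adapt it, a minimal unit of this general shape works; the paths below are assumptions, so adjust WorkingDirectory and the poetry location for your install:

```ini
[Unit]
Description=Content Collector server
After=network.target

[Service]
WorkingDirectory=/home/pi/ContentCollector
ExecStart=/usr/bin/poetry run python3 LikedSavedDownloaderServer.py
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing the file, run sudo systemctl daemon-reload before restarting the service.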
You can also use cron, but it's more of a hassle to stop/restart the server:
# Must be root account for access to port 443 (or 80 for unsecured servers)
sudo crontab -e
Add this to the file that opens for editing (customize path to your liking), then save that file:
@reboot cd /home/pi/ContentCollector && sudo poetry run python3 LikedSavedDownloaderServer.py 2>&1 | tee LikedSavedServer.log
Reboot your system to start the server.
Open localhost:8888 in any web browser.
If your web browser complains about the certificate, you may have to click
Advanced and add the certificate as trustworthy, because you've signed the certificate and trust yourself :).
(Explanation: this certificate isn't trusted by your browser because you created it. It will still protect your traffic from people snooping on your LAN).
If you want to get rid of this, you'll need to get a signing authority like
LetsEncrypt to generate your certificate, and host the server under a proper domain.
When first running the server, you will be prompted to set a password.
If you forget your password, simply delete passwords.txt.
The home page provides access to all server features:
Use Settings to configure the script:
Make sure to click "Save Settings" before closing the page.
You don't have to fill in every field, only the accounts you want.
Go to the Download Content page and click "Download new content":
Wait until the downloader finishes (it will say "Finished" at the bottom of the page). While the downloader is running, the "Download new content" button will disappear.
Enjoy! Use Browse Content to jump to random content you've downloaded, or browse your output directory:
The browser should scale nicely to work on both mobile and desktop.
The server requires login before you can run the downloader, change settings, or browse downloaded content.
If you host Content Collector on the internet, you should rely on a more robust authentication scheme (e.g. use a reverse proxy which won't proxy requests to Content Collector until you have authenticated with the proxy server). Content Collector was designed for LAN use.
Note that all login cookies will be invalidated each time you restart the server. If you don't restart the server, your browser should remember login indefinitely.
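A common reason a restart invalidates cookies: Tornado-style servers sign login cookies with a secret, and if that secret is regenerated at startup rather than persisted, previously issued cookies no longer validate. A hedged sketch of the idea (not the server's actual code):

```python
import base64
import os

def new_cookie_secret():
    # Generating a fresh random secret on every start means cookies
    # signed with the previous secret fail validation, forcing a
    # fresh login after each restart.
    return base64.b64encode(os.urandom(32)).decode()
```

Persisting the secret to disk and reloading it would keep logins valid across restarts, at the cost of storing another sensitive file.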
The web interface will automatically prompt for a new password when first starting up.
You can also use PasswordManager.py to generate a passwords.txt file with your hashed (and salted) passwords:
python3 PasswordManager.py "Your Password Here"
You can create multiple valid passwords, if desired. There are no separate accounts, however.
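Conceptually, each stored password is a random salt plus a hash of salt + password, so the plaintext is never written to disk. The passlib install earlier suggests the real PasswordManager.py uses bcrypt/argon2; this stdlib-only sketch just illustrates the salt-and-hash idea:

```python
import hashlib
import os

def hash_password(password, salt=None):
    # Illustrative only: PBKDF2 here stands in for the bcrypt/argon2
    # hashing the real tool performs via passlib.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt.hex() + "$" + digest.hex()

def verify_password(password, stored):
    # Re-hash the candidate with the stored salt and compare.
    salt_hex, _ = stored.split("$")
    return hash_password(password, bytes.fromhex(salt_hex)) == stored
```

Because the salt is random, hashing the same password twice yields different entries, which is why multiple valid passwords can coexist in one file.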
If you want to reset all passwords, simply delete passwords.txt.
To disable authentication entirely, open LikedSavedDownloaderServer.py and find enable_authentication. Set it equal to False. You must restart the server for this to take effect.
This is deprecated. You should use the web server to configure and run the script instead.
Copy settings_template.txt into a new file called settings.txt.
Fill in your username and password.
False if you are sure you want to do this
Run the script:
Wait for a while
Check your output directory (the default is output relative to where you ran the script) for all your images!
If you want more images, set Tumblr_Total_Requests to a higher value. The maximum is 1000. Unfortunately, reddit does not allow you to get more than 1000 submissions of a single type (1000 liked, 1000 saved).
Not actually getting images downloaded, but seeing the console say it downloaded images? Make sure soft retrieval is disabled in your settings file.
settings.txt has several additional features. Read the comments to know how to use them.
On OSX, running the downloader from the Content Collector server may cause this error:
Output: objc: +[__NSPlaceholderDate initialize] may have been in progress in another thread when fork() was called.
This is a problem with Python and OSX's security model clashing. See this issue for an explanation.
To work around it, you need to first run … before running the Content Collector server in that same terminal.
Or add the bash profile suggested in this answer.
Feel free to create Issues on this repo if you need help. I'm friendly so don't be shy.