## Origin of the Requirement
I really enjoy reading blogs, but there are always interesting ones waiting to be discovered, so I started following some blog-recommendation channels. I also wanted to share some of my favorite blogs, but manually copying them out every time was too much trouble. So I decided to automate the process, and to learn about GitHub Actions along the way.
## Journey
- Most of the time was spent figuring out how to obtain the OPML file. The documentation provided by Tiny Tiny RSS is quite terse, and most online resources stop at deployment instructions, so I had to work it out on my own.
- The web version of Tiny Tiny RSS provides a button to export OPML, and the URL that the button points to is `http://example.com/tt-rss/backend.php?op=opml&method=export`. However, it requires authentication: you need to be logged in.
- The examples provided by Tiny Tiny RSS also include an API call for logging in, so I started with that. I tried adding the data parameter directly, but every variation I tried failed.
- Later, I noticed that a successful login returns a session value, so I experimented with curl first.
```bash
# Log in and get the session ID
SESSION=$(curl -s -d '{"op":"login","user":"user","password":"password"}' http://example.com/tt-rss/api/ | python -c "import sys, json; print(json.load(sys.stdin)['content']['session_id'])")

# Get the OPML file
curl -o my_tiny_tiny_rss.opml 'http://example.com/tt-rss/backend.php?op=opml&method=export' --cookie "ttrss_sid=${SESSION}"
```
- Translating it to Python, I used `requests`. Looking back, this is a pretty basic operation, and I had worked with sessions before; if I had remembered that earlier, I could have saved some time.
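In `requests`, the same two calls look roughly like this; it's a minimal sketch, with the URL and credentials being the same placeholders as in the curl example:

```python
import requests

API_URL = "http://example.com/tt-rss/api/"
EXPORT_URL = "http://example.com/tt-rss/backend.php?op=opml&method=export"

# Log in; Tiny Tiny RSS returns the session ID in the JSON response body
resp = requests.post(API_URL, json={"op": "login", "user": "user", "password": "password"})
session_id = resp.json()["content"]["session_id"]

# The export endpoint authenticates via the ttrss_sid cookie
opml = requests.get(EXPORT_URL, cookies={"ttrss_sid": session_id})
with open("my_tiny_tiny_rss.opml", "wb") as f:
    f.write(opml.content)
```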
- There are ready-made libraries for parsing OPML, so I used one (a sketch follows below).
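For illustration, here is what parsing the export could look like with `listparser`, one such ready-made library (not necessarily the one used in this project):

```python
import listparser  # pip install listparser

# Parse the exported OPML and list the feeds it contains
with open("my_tiny_tiny_rss.opml") as f:
    result = listparser.parse(f.read())

for feed in result.feeds:
    print(feed.title, feed.url)
```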
- Then, I extracted some personal information into a configuration file. I ran into a pitfall here. The correct form is:

```python
data = {'op': 'login', 'user': user, 'password': password}
```

Initially, I had written it like this:

```python
data = f"{{'op': 'login', 'user': {user}, 'password': {password}}}"
```

Although they look the same in form, the former is a dict that serializes to a JSON object, while the latter is just a string. This reminded me that even though Python is dynamically typed, I still need to watch out for type errors.
- Finally, I used GitHub Actions. I had used it before, but only via a pre-written workflow, so I spent some time learning it properly. I ran into several issues:
  - The format of the YAML file. You can use a YAML validator to check it; VS Code presumably has a corresponding extension as well.
  - The variables needed at runtime are stored as secrets. I used to think a secret's value could only be a plain string, but Getting Github repository secrets in Python in Github Action mentions that you can put an entire YAML file in the value. I figured a JSON file should work too, tried it, and it did, so very few changes were needed in my code (see the sketch after this list).
  - The triggering method of the workflow: to allow manual triggering, you need to add `workflow_dispatch:`.
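Putting those two points together, here is a minimal workflow sketch; the file names, the `CONFIG_JSON` secret name, and the config keys are illustrative, not the actual ones from this project:

```yaml
name: export-opml

on:
  workflow_dispatch:  # enables manual triggering from the Actions tab

jobs:
  export:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.x"
      - run: pip install requests listparser
      - run: python export_opml.py
        env:
          # The whole JSON config lives in a single secret
          CONFIG_JSON: ${{ secrets.CONFIG_JSON }}
```

On the Python side, reading the JSON back out of the environment takes only a couple of lines:

```python
import json
import os

# Parse the JSON stored in the CONFIG_JSON secret; the keys are illustrative
config = json.loads(os.environ["CONFIG_JSON"])
user, password = config["user"], config["password"]
```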
## Knowledge Gained
- The combination of shell pipes and Python. The following line was written by ChatGPT, and I find it amazing:

```bash
SESSION=$(curl -s -d '{"op":"login","user":"user","password":"password"}' http://example.com/tt-rss/api/ | python -c "import sys, json; print(json.load(sys.stdin)['content']['session_id'])")
```
- Usage of GitHub Actions
- Python `requests`
## Conclusion
This was a very small project, but it still took me half a day, even with ChatGPT's help. I once came across the claim that search engines greatly lowered the barrier for ordinary people to acquire knowledge, and that ChatGPT has, in effect, lowered it further still. Based on my own experience, I strongly agree. By supplying background information and asking follow-up questions, I let ChatGPT save me the time I would otherwise have spent on scattered tutorials and incomplete documentation. That interaction feels more natural than what search engines provide.