Although I planned to spend my time this week setting up D3, I instead worked primarily on debugging jargonBot and repostBot. Beyond the sheer number of errors, a few things greatly slowed down this process.
One is the environment in which the bots run. Normally, when I run into an error, I would write a test designed to trigger it. Unfortunately, I have no control over which posts and comments people make on Reddit, and short of creating spam posts myself, I cannot really engineer what my bot will encounter.
Another setback is related to the structure of the updateModels method. To guarantee that people have had time to view a bot’s comments, the algorithm does not “learn” until a full hour after it makes a comment. This means the bot has to run successfully for a full hour after commenting before I can see an error, and after making a change I have to wait at least another full hour to see whether the fix worked.
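The delay described above boils down to a simple timestamp check. Here is a minimal sketch of the idea; the function and variable names are illustrative, not the bot's actual code (PRAW does expose a Unix timestamp on each comment as `comment.created_utc`, which is what a check like this would consume):

```python
import time

LEARNING_DELAY = 60 * 60  # one hour, in seconds


def ready_to_learn(comment_time, now=None):
    """Return True once a full hour has passed since the bot commented.

    `comment_time` is a Unix timestamp (e.g. PRAW's `comment.created_utc`).
    Until this returns True, updateModels should skip the comment.
    """
    if now is None:
        now = time.time()
    return now - comment_time >= LEARNING_DELAY
```

A loop over recent comments would call this on each one and only score the comments that have aged past the threshold.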
The final issue is that my code relies on a number of external APIs. The most prominent, PRAW, is what allows me to connect to Reddit, but I also use the OED’s dictionary API and access data from S3. If any request to these APIs returns an error (for example, if Reddit’s servers temporarily go down, if my bot attempts to make too many posts in short succession, or if my bot is banned from a subreddit), it will disrupt my bot’s script. In my code, I need to use try/except clauses to account for each potential interruption. Unfortunately, I usually can’t identify these until they happen at least once.
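The general pattern for surviving these interruptions is to wrap each outbound request in a retry loop rather than letting one failed call kill the script. A stdlib-only sketch of that pattern, under the assumption that transient failures surface as exceptions (the names here are illustrative, not the bot's actual code):

```python
import time


def call_with_retries(fn, retries=3, base_delay=1.0, transient=(ConnectionError,)):
    """Retry a flaky API call with exponential backoff.

    `fn` is a zero-argument callable wrapping one request (a PRAW reply,
    an OED lookup, an S3 fetch). Exceptions listed in `transient` trigger
    a retry with an increasing delay; anything else, or running out of
    retries, propagates to the caller so it can be handled or logged.
    """
    for attempt in range(retries):
        try:
            return fn()
        except transient:
            if attempt == retries - 1:
                raise  # out of retries; let the caller decide what to do
            time.sleep(base_delay * 2 ** attempt)
```

In practice each API needs its own `transient` tuple (PRAW and boto3 define their own exception classes), and non-retryable failures like a subreddit ban should be caught separately and skipped rather than retried.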
Thankfully, I believe I have resolved almost all of the bugs in jargonBot’s and repostBot’s scripts, and they are now able to run overnight without exiting.