Thanks for the follow-up offer, Rob, but I don’t think I need to take you up on it unless it’s a real quickie you’re interested in doing. Here’s my use case and the current state of the PHP program I use with this data.
Use Case: I frequently query my wa/wf posts (anon and blog) for specified keywords, but I don’t want to run those queries against the live data, as I’m concerned that may put an unnecessary load on the server. So I periodically download the export.json file by hand and use a PHP program to filter it, outputting just the keyword-matched lines. That way my queries always run against the saved export files and I’m not tying up server resources.
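For anyone curious, the filtering step boils down to something like this. It’s a stripped-down sketch: I’m assuming here that export.json decodes to an array of post objects, each with a `body` field, which may not match the real file exactly:

```php
<?php
// Stripped-down keyword filter against a saved export file.
// Assumption: export.json decodes to an array of post objects,
// each with a "body" field -- adjust if the real structure differs.
$keyword = $argv[1] ?? 'example';
$posts   = json_decode(file_get_contents('export.json'), true);

foreach ($posts as $post) {
    // Print only the lines of each post body that contain the keyword.
    foreach (explode("\n", $post['body'] ?? '') as $line) {
        if (stripos($line, $keyword) !== false) {
            echo $line, "\n";
        }
    }
}
```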
The PHP program is currently coded to use the export.json file, and it works great for my purposes. Now I’d like to replace the manual download with a periodic automated one. CJ confirmed what I found: the export.json file is not downloadable via curl or the API. Alternatively, as you’ve suggested, I could use the API to get all my posts, and I’ve successfully tested an automated download for that using curl with an access token. However, the JSON format differs from that of the export.json file, so I need to rework the PHP script to query the API-generated data. I plan to do that as I have time, but until then I’ll keep downloading the export.json file manually.
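In case it helps anyone else setting up the same thing, the automated fetch looks roughly like this in PHP. It’s only a sketch, assuming the https://write.as/api/me/posts endpoint and the `Authorization: Token ...` header that worked in my command-line curl test; WA_ACCESS_TOKEN is a placeholder for wherever you keep the token:

```php
<?php
// Sketch of the automated download via the API, saved to a local file
// so the keyword filter can keep running against saved data only.
// Assumes the endpoint and Token header from my curl test.
$token = getenv('WA_ACCESS_TOKEN'); // placeholder for your access token

$ch = curl_init('https://write.as/api/me/posts');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_HTTPHEADER     => ['Authorization: Token ' . $token],
]);
$response = curl_exec($ch);
curl_close($ch);

// Unlike export.json, the API wraps the posts in a "data" key, which is
// the format difference that means the filter script needs reworking.
$result = json_decode($response, true);
file_put_contents('api-posts.json', json_encode($result['data'] ?? [], JSON_PRETTY_PRINT));
```

Saving the result to a local file keeps the rest of my workflow unchanged: queries still run against saved data, never the live server.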
CJ mentioned a feature request for downloading the export.json file without having to grab it manually from the /me page while logged in, and I think that would be of use to others besides just me. Maybe the Python program you mentioned is how such a feature request could be implemented? In the meantime, I’ll keep downloading the files manually and work on revising the program to use the API data as I have time.
Thanks to you and CJ for your responses.