Thanks for the link, @anon41602260. Indeed, I am aware of the official line from Cultured Code, and I have Googled around a bit. I’ve also written my own AppleScripts to work with the data in the past. I was most curious about the advice, workflows, and perspective people had found useful in their own usage, rather than random scripts that half-do the job.
For those coming after me, what I wound up finding useful was:
- Exporting every view as PDF for quick and easy human-readable reference if ever needed. (Be sure to manually expand all the lists in each view first: the PDFs directly reflect the current view, so anything hidden behind the truncated “n more items” at the end of longer lists is left out.)
- Archiving a copy of the raw sqlite database for safe-keeping.
- Replicating my database to a new Things Cloud account.
- Keeping much of the archived data within my main database for the time being, by reorganizing and moving most items under a new “ARCHIVE” Area, where they could be pushed aside but not immediately discarded, and still easily reviewed. To keep these archived items distinct in searches and the like, I wrote a trivial AppleScript to quickly prefix each project with a string marking it as an archive and recording which Area it originally came from:
[A] [#Area Name]
I am gradually reviewing the items there, migrating key things back into the active system, pushing the rest toward Someday status so they don’t appear in most views, and will eventually delete them.
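The naming convention above is simple enough to round-trip mechanically. A minimal Python sketch of the logic (the function names and helpers here are my own illustration; the actual renaming in Things is driven by the AppleScript):

```python
def archive_name(project_title: str, area_name: str) -> str:
    """Prefix a project title so archived items stand out in search,
    recording the Area the project originally lived in."""
    return f"[A] [#{area_name}] {project_title}"

def original_area(prefixed_title: str) -> str:
    """Recover the original Area name from an archived title."""
    start = prefixed_title.index("[#") + 2
    end = prefixed_title.index("]", start)
    return prefixed_title[start:end]

# archive_name("Renovate kitchen", "Home") -> "[A] [#Home] Renovate kitchen"
```

Having the Area name machine-readable in the title means a later script can undo the move, filing each project back under its original Area.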
The third item above was the key step I hadn’t fully thought through. I mentioned the idea of just creating new Things Cloud accounts, but didn’t realize that not only is it possible to leave existing data behind on an existing account, it’s effectively possible to quickly duplicate accounts (without any sqlite database dance) thanks to how Things’ sync setup works:
- With the database you want to replicate synced, disable Things Cloud sync.
- Re-enable Things Cloud sync, but choose to set up a new account.
- Let it sync; the new account now has a full copy of your current database.
- Disable again.
- Re-enable with your original account. (Be careful not to duplicate everything: I tend to choose ‘replace Mac from cloud’, though if you definitely haven’t made any other changes in the meanwhile, ‘replace cloud from Mac’ should also be fine.)
This is obvious in retrospect, but it meant that I didn’t have to make an all-new primary account linked to a random new email address as I first imagined; instead I could push the archive to a new account on said random email address and leave my main sync account where it is. (A +foo-style suffix on a Gmail address would work, say +things-backup-2020.05, though I just used the catch-all forwarding on my personal domain to define a similar entirely new address there.)
With this method, it’s actually easy to fork your Things Cloud database arbitrarily at any time. (Keeping a clear record of these different accounts, and naming them descriptively as above, is obviously useful to keep everything from getting lost. I used 1Password, which is where I trust far-future me will look if I forget.)
I considered doing surgery directly on the sqlite database to merge or add items between databases, but this always feels like a bad idea, with obvious risks of corruption or missed details.
I noted, however, that the Things JSON schema for adding items via the URL scheme is fairly complete: it includes the vast majority of types and metadata supported by the app. (It can even, for example, set the creation and modification dates of an item.) There’s an obvious opportunity for Cultured Code to better support the kind of round-trip archival I wanted by simply offering a “Copy as JSON” option for any selected item(s), but in the meanwhile I next intend to write some code to generate JSON for any chosen subtree in a Things sqlite database. A useful tool might:
- Render the full database content as HTML.
- Place a link next to each item which would fire the Things URL scheme to recreate that subtree as faithfully as possible using the corresponding JSON encoding.
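As a sketch of that second half: the URL scheme accepts a things:///json?data= command whose payload is a URL-encoded JSON array of items, so recreating a subtree is mostly a matter of emitting that array. The item shape below follows Cultured Code’s documented format as I understand it; actually extracting titles and notes from the sqlite database (tables like TMTask) is the part that still needs reverse engineering, so this sketch just takes them as plain arguments:

```python
import json
import urllib.parse

def project_to_json(title: str, notes: str, todos: list[str]) -> list:
    """Encode a project and its child to-dos as a Things JSON item
    array (the 'project' / 'to-do' shape from the documented schema)."""
    return [{
        "type": "project",
        "attributes": {
            "title": title,
            "notes": notes,
            "items": [
                {"type": "to-do", "attributes": {"title": t}}
                for t in todos
            ],
        },
    }]

def things_url(items: list) -> str:
    """Build a things:///json URL that would recreate the given items."""
    data = urllib.parse.quote(json.dumps(items))
    return f"things:///json?data={data}"
```

Rendering such a URL next to each item in an HTML dump of the database would give exactly the one-click restore described above.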
Hopefully soon!