Cross-computer sync of DEVONThink indexed folders

I was hoping one of the DEVONthink experts here could answer a quick question for me. Google has not turned up a clear answer. I could experiment, but perhaps someone already knows.

If I set up a folder and index it in DT 3, and then set that database to synchronize to a DT sync store (on a WebDAV server), does DT actually copy all of the indexed files to the sync store, and from there to any other computers that also sync that database from the sync store?

Presently, the folder I want to sync is stored in my SynologyDrive folder, so it is sync’d to all of my computers via SynologyDrive. If DT sync will actually handle this, I could move the folder out of the SynologyDrive folder and let DT handle the synchronization. If not, I would just keep it in the SynologyDrive folder and set up DT on each computer with a local, un-sync’d database indexing that folder.

As a secondary question: if DT DOES copy indexed files in a sync’d database, is there a good (or not-so-good) reason to have DT do the sync, versus just having SynologyDrive do the sync and letting DT work independently on each computer to do the indexing?


Best to post DT questions in their own Discourse, and I’m not an “expert”, but:

I think this would work as you describe. You will probably have to tell the second database to index the folder, too—otherwise it might just keep the synced files inside the database.

As for the benefits, I don’t use Synology, so I can’t really say. This way, at least, both computers will have the same files in the same database.

Alternatively, you could have a non-synced database on each computer, sync the folder with Synology, and index it in each of the two computers’ non-synced databases. This should work the same way, I think.


There are a lot of potential gotchas with DT3 Indexing.

Can I suggest that a more robust way to handle this would be to set up the folder on one of your computers, then create a local sync store (not necessarily WebDAV) on your Synology and sync your database to that sync store. Then go to your second computer and create another local database synced from that same Synology sync store.


What problem are you trying to solve?

I agree with @rkaplan - DT isn’t something I would rely on for this syncing task.

I switched from Synology Drive to ResilioSync, and have found it to be a lot more reliable.

To clarify - I do think DT3 can do the syncing well. I just wouldn’t use DT3 indexing.

But you cannot have two computers sharing the same database on DT3, even if they both have access to the Synology device. Instead you have to set up a database on each computer and have each computer in turn sync to the same sync store on the Synology device.

This type of syncing using DT3 is very robust and works well.


@rkaplan: Thanks. I too have found DT3 sync to be robust.

I was not thinking of putting the database on a server and having two different DT instances on different computers open the same database. Each computer will have its own, locally stored database outside of any sync’d folder. My question is whether, if I am going to have indexed folders that I want present on more than one computer, it is better to (a) have DT on each computer index the local folder copy and keep the folders in sync via a file sync service (e.g. SynologyDrive, though ResilioSync could just as well be used), or (b) store the files being indexed OUTSIDE of any sync folders and use DT sync to perform the copy between computers. In the latter case, does DT actually copy indexed files to the sync store, and from there to any other computer syncing a database against that store?

@JohnAtl: I have used both SynologyDrive and ResilioSync. I have personally found both to be reliable, each with its own gotchas. Both have features that I find desirable and places where I have had issues. I guess nothing is perfect!


I think it depends on how often changes will happen and how low-latency you’ll need to access changed files on different devices.

DT sync’s always been slower for me than Dropbox or iCloud, so if Synology/whatever is more like those cloud services, I’d sync the folder with that and index the folders into “separate” databases.

A caveat: I would expect that the separate databases will have separate DT3 metadata. Things like replicants and smart rules might not work the same way.

Why is it essential to index the folders rather than importing the files to DT3?


@rkaplan: It may not really be so essential. The particular use case I am looking at is archiving my emails. My plan is to export each email in its raw email format, and to process each file to extract any attachments as well.

I like the idea of keeping the files easily accessible in the Finder so that I can use *nix tools like grep to search through them, and also have the attachments sorted out as separate files which is convenient for me.
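For what it's worth, the "extract attachments as separate files" step can be done with Python's standard-library `email` package. This is just a sketch of the kind of processing described above, not anything DT-specific; the function name and paths are placeholders of my own.

```python
# Sketch: parse a raw .eml file and write its attachments out as
# separate files, so they are visible in the Finder alongside the email.
# `extract_attachments` and the directory layout are hypothetical.
from email import policy
from email.parser import BytesParser
from pathlib import Path


def extract_attachments(eml_path: Path, out_dir: Path) -> list[Path]:
    """Save each attachment found in eml_path into out_dir; return the saved paths."""
    out_dir.mkdir(parents=True, exist_ok=True)
    with eml_path.open("rb") as fh:
        # policy.default gives the modern EmailMessage API (iter_attachments).
        msg = BytesParser(policy=policy.default).parse(fh)

    saved: list[Path] = []
    for part in msg.iter_attachments():
        name = part.get_filename() or "untitled"
        dest = out_dir / name
        dest.write_bytes(part.get_payload(decode=True) or b"")
        saved.append(dest)
    return saved
```

The message body itself stays in the `.eml` file, so tools like grep still work on the originals while the attachments end up as ordinary files next to them.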

Yes, I know DT allows importing of emails (via an extension) and can display and search them. You are probably going to tell me to just create a DT database for my emails and do this all in DT, and you are probably right, for as long as I continue to use DT.


As I said in my initial reply, there can be some unanticipated gotchas with indexing that can cause you to lose files. I would not suggest indexing for frequently changing, mission-critical data such as this.