No. I don’t use their todo features or any of the calendar integration. It’s solely a notes app for me.
I think it depends on what one’s note-taking objectives are. I’ve tried both Craft and Noteplan and couldn’t get any real traction in either, but was up and running in Obsidian in about an hour and never looked back. I suspect it’s because I’m using Obsidian for research rather than for project management, document production/sharing, or as a notes “everything bucket.” (When I’m in research note-taking mode the last thing I want staring back at me is a calendar and a to-do list …)
But with DT you only get the data out by USING DT itself. If that software package isn’t available to use, you CANNOT export the data and have it in the same format.
Obsidian doesn’t hide the data in a database, so it’s inherently more future-proof: you see all the data there is in your standard file system.
Add one minor practice: make sure to use totally cross-platform filenames for your Obsidian notes, and you can easily move even the structure to another system.
I don’t bother with much organization. I just create a note that is an index. I do have a few folders; here’s my folder system in Obsidian:
When a current project is finished, the notes all move into the main Adversaria folder and the TOC note goes up into my Top folder.
Sure you can. Anything you put into DT stays in its native format. If for some reason the DT app itself were to go poof, it wouldn’t take the files you imported with it. You would always be able to right click on the .dtbase2 file in which your items are stored, select “Show Package Contents,” and pull your files out of the resulting Finder folders. (There’s a folder for each type of file, e.g., doc, pdf, rtf, etc.) You don’t need to open the DT app to do that—it’s all right there in Finder, albeit not entirely transparent to you. It’s totally future proof.
If you index rather than import, your files will always be transparently available via Finder.
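For the worst case, that manual recovery can even be scripted. This is only an illustrative sketch, assuming the package is an ordinary folder tree underneath; the paths and the set of file types here are hypothetical, and the real internal layout varies by DEVONthink version:

```python
import shutil
from pathlib import Path

# File types worth pulling out; extend as needed (hypothetical set).
DOC_TYPES = {".pdf", ".rtf", ".doc", ".md", ".txt", ".png"}

def pull_files(package: Path, recovery: Path) -> int:
    """Copy recognizable documents out of a package folder into a
    recovery folder, grouped by extension (mirroring the per-type
    folders you would see via "Show Package Contents")."""
    count = 0
    recovery.mkdir(parents=True, exist_ok=True)
    for f in package.rglob("*"):
        if f.is_file() and f.suffix.lower() in DOC_TYPES:
            dest = recovery / f.suffix.lstrip(".").lower()
            dest.mkdir(exist_ok=True)
            # Prefix with the parent folder name to reduce collisions.
            shutil.copy2(f, dest / f"{f.parent.name}_{f.name}")
            count += 1
    return count

# Hypothetical usage; adjust to your own database location:
# pull_files(Path("~/Databases/Research.dtbase2").expanduser(),
#            Path("~/Desktop/DT-recovery").expanduser())
```

Note that this recovers the files but not the group structure, tags, or links, which is exactly the point of contention below.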
This is true but misleading.
Let’s use a file cabinet as an analogy. You have a big ol’ filing cabinet full of hanging file folders, with various manila folders inside those. You also have a bunch of post-its attached to visually flag/tag various folders and documents.
Now get rid of the filing cabinet.
In the case of Obsidian, if you get rid of the filing cabinet you’re left with a bunch of cardboard boxes containing all your original files, folders, and post-its.
In the case of DEVONthink’s default DB mode, if you get rid of the filing cabinet you’re also throwing away all the hanging folders, all the manila folders, and all the explanatory post-its. The remaining papers are sorted by some less-helpful criteria (letter size, legal, A5, A4, etc.).
In both cases, you can still read the individual files just fine. You “have your data”, in a very real sense. But I think it’s obvious that the two scenarios aren’t the same.
You’re absolutely correct that indexing gives you the file structure as it is in Finder - but that index isn’t auto-updated, and there are a number of other issues that can come from doing that. In Obsidian, you always see exactly what’s on disk.
It’s totally cool if none of that matters to you, and even if you prefer DEVONthink after understanding the tradeoffs - but it’s a very notable difference between the apps.
Well, I’ve had DT take a bunch of files and kill them. That’s a separate problem and one reason I no longer trust DT at all.
However, more to the point, you might get the files back but the entire structure is gone when you recover. And if you’ve ever actually tried to get data out of a DT file by showing the package contents, you’ll find that the structure of the nested folders isn’t particularly useful: many of them are empty, and actually finding the data is difficult because you’ve lost all the links and metadata that you developed within your system.
In Obsidian the metadata is embedded in the file in plain text and visible. The structure is as you set it up and visible. Nothing is hidden.
Sure. But the question wasn’t whether or not your file structure would survive without being able to fire up DT; the suggestion was that DT wasn’t future-proof because you wouldn’t be able to get your files out in the same format. The files would be in their original format if you had to pull them out of the package: a PDF would still be a PDF. Your folder structure would need to be rebuilt, but you’d have all your files.
Given what I get out of DT as opposed to a collection of Finder folders, and given how many files I have, I’m happy to make the trade-off. Your mileage may vary, of course.
It’s not, no. But I could still get my files, which is what I care about.
Respectfully, I think this is the point of disagreement. Oogie said:
You seem to be parsing Oogie’s reference to “data” to mean “files”. But for the people on the other side of the issue from you, “data” means more than just the files. Having a proper copy of the files in their folder structure is part and parcel of “get[ting] the data out” in this case, since the organization of data into folders is also “data”.
From that vantage point, @OogieM is correct - you can’t do that except by using DEVONthink itself.
Yes, that’s exactly how I understood it. I absolutely understand the pain of having to rebuild a folder structure.
I’m not sure what you mean by “metadata” here. Do you mean YAML frontmatter?
Exactly! Plus all the tagging and links are no longer useful when just the files are extracted out of the structure of a DT database.
That is not correct. Data to me is the entire set of information that I use when referencing something. So it’s the structure of the file itself, the location, the contents, the file type, the links, the tags, the information on when it was created and when it was edited, and more.
See above.
In Obsidian the tags are included in the file. The links are included in the file. The files themselves live in the open in the structure you define and you still have access to all the tools the operating system provides to look at, parse and reference your information.
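Because everything is plain text, any scripting language can recover that metadata without Obsidian installed. A minimal sketch (the regexes are simplified and won’t catch every form Obsidian allows, such as frontmatter tags):

```python
import re

def note_metadata(text: str) -> dict:
    """Pull inline #tags and [[wikilinks]] out of a Markdown note.
    Simplified patterns, for illustration only."""
    tags = re.findall(r"(?<!\S)#([\w/-]+)", text)
    links = re.findall(r"\[\[([^\]|#]+)", text)
    return {"tags": tags, "links": [l.strip() for l in links]}

note = """Moved the flock records. #sheep #records/scrapie
See [[2022-03-21_Annual_Scrapie_Inspection_Report]] and [[Sheep-Disease_Scrapie]].
"""
print(note_metadata(note))
```

Nothing here depends on Obsidian itself; the same files parse the same way on any system with a text editor.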
As I said the only gotcha in Obsidian in terms of future proofing is that you need to be sure that your filenames are portable.
Remember, future-proofing is not just about software; it’s about changing entire operating systems and structures, and about knowing what you had so you can rebuild it in a future system.
Keep in mind I’m the person who has, and regularly uses, a set of emails that is now over 30 years old. I believe the oldest file (in terms of origin date) on my system that I regularly use goes back to 1981. I’ve moved my library of digital data between operating systems, base computers, and software packages, and versions of those packages, as required to keep the information available to me.
I also typically stick with a software stack for years or decades. It takes a lot for me to add or change a software package and doing so means making sure everything is moved up to the current system. Ditto for hardware changes which is partly why I am still running on a 2013 iMac.
I value long term accessibility for the digital information I generate.
Long term means that I expect that some of the information I am creating will need to be accessed 25-50 and perhaps even 100 or more years from now. So I think about that in all my systems.
fixed most of the spelling errors I think
SHPBRDNG.MD, 2022FLCK.MD, etc.?
All kidding aside, what do you do for naming conventions in light of futureproofing considerations?
Yes it is.
I’ve got a lot of battle scars from corporate powers-that-be deciding to move from one bespoke enterprise software package to another bespoke enterprise software package with little regard for what was going to get lost in the process. (And the bespoke package is rarely as good as the off-the-shelf alternative …)
You’ve got me beat there! My oldest email only goes back to 1992. The banking records I dragged over from Quicken when I migrated from Windows do go back to the Pleistocene, though.
There’s absolutely nothing in your post that I disagree with. I’ve had to take a deep breath and accept that some of my hard work wouldn’t survive migration to another system (all of my Lightroom edits, for instance), but it’s a risk I sometimes need to take.
I’ve really been enjoying Obsidian. I’m not using it to its full potential but the fact it is just .md files makes it perfect
It wasn’t my intention to turn a discussion about PKM tools into an issue with sides, and I certainly hope my posts didn’t have that result. I misunderstood OogieM’s post and was trying to correct what I took to be a misperception regarding the way DT handles imported files. (I read “format” and took that to mean “file format.”) I am certainly not disputing the value of having a robust system for file and information management and I’m more than happy to call links and directory trees data since that’s what they are. I don’t think there’s a disagreement here so much as a difference in needs and preferences.
There’s no tool, system, or workflow that’s going to be right for everyone, and I suspect that for this community part of the fun is exploring the wealth of tools to see what works best for us as individual users. We’ve got as many sides as a Buckyball.
DT provides functionality that I need to get the most out of my digital repositories—functionality that I don’t believe that I can easily find elsewhere or cobble together with other tools. But it’s definitely not for everyone. Using it means making tradeoffs, but they’re tradeoffs I’m willing to live with for the functionality I get.
One of the things that I appreciate about DT is that it gives me a lot of flexibility when it comes to where and how I store my files. Initially, I did try to use DTP as an “everything bucket,” but I’ve come to the conclusion that everything buckets seem attractive in the abstract but just aren’t all that in actual practice.
The various documents in my decades-old paperless office get imported into a DTP database because 1) I have no interest in building and maintaining a system of Finder folders to store them, 2) DTP gives me lots of ways of attaching useful information to the files themselves, 3) searching in DT is more powerful than searching in Finder, and 4) I can encrypt the database and sync stores. (With regard to item 2: some of that information may not survive if I pull the files out of DTP, but it’s not mission-critical information I’d need to work with the documents effectively. Trust me, if it’s information that needs to travel with the document, it gets attached in an app-agnostic fashion.) I can’t imagine trying to manage my paperless office from Obsidian, but I’m sure that there’s someone out there who’s doing just that.
I don’t import digital media (books, audio files, videos, images, etc.) into DTP. That stuff does live in a system of Finder folders that I index with DTP. I need DTP to mine them for information but I sure as heck don’t want to store them in a DTP database.
Ditto my research notes, which live very happily in Obsidian. They’re indexed in DTP too, and yes many of them have DTP uuids embedded in them. But again, if DTP were to blow up, I’d be able to rebuild the links using a tool like Hook. (Which might actually be a good thing; I’m giving Hook a serious look-see.) It would be a royal pain to have to do so, but I’m willing to accept that potential downside to get the other benefits DTP gives me. (If there is another tool that would allow me to search inside many, many thousands of texts for a specific term, flag each instance for me, rank order the texts in which they appear in terms of relevance, and allow me to get to each one with a single click, please let me know, because if DTP were to go poof, that’s the functionality I would need to replace PRONTO.) Again, I can’t imagine trying to store all of my research materials in Obsidian along with my notes, but there are people for whom that is exactly the right choice.
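For a sense of what the bare-bones version of that search looks like (minus DT’s actual relevance scoring, which this does not replicate), a few lines over a folder of plain-text notes will do raw hit counting; the names here are made up:

```python
from pathlib import Path

def rank_by_term(folder: Path, term: str):
    """Count occurrences of a term in every Markdown file under a
    folder and rank the files by raw hit count. Plain counting,
    not real relevance scoring."""
    hits = []
    for f in folder.rglob("*.md"):
        n = f.read_text(errors="ignore").lower().count(term.lower())
        if n:
            hits.append((n, f))
    return sorted(hits, key=lambda t: t[0], reverse=True)

# Hypothetical usage:
# for n, f in rank_by_term(Path("~/Notes").expanduser(), "scrapie"):
#     print(n, f)
```

The gap between this and DT’s flagging, ranking, and one-click navigation is exactly the functionality that would be hard to replace.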
I might be interested in this as well. While I can get links from the Apple default apps, it is a clunky affair and after looking at the site I discovered that Hook does not require a subscription. But I see two potential downsides to using Hook:
- I’d become dependent on Hook links. I’m a bit more comfortable being dependent on Apple’s links because I’ll never be leaving the Apple ecosystem and I’ll never have to purchase an upgrade.
- They still don’t have an iOS version (it’s in beta). I would want to make sure that Hook links work everywhere across all platforms. I could not determine from what I read whether this will be true once the iOS app is released. I got the impression that the links are platform-dependent, but I may be misinterpreting what I read.
My rules include:
A critical component of naming is not to use illegal characters in the filename. If you ever anticipate using any operating system other than the one you are currently using, make sure you cover all 3 major systems’ restricted characters. And although spaces are technically legal on Linux, they are a constant hassle in shell scripts, so: NO SPACES!
No spaces
No special characters other than - and _. I use - to separate subsets of the same general subject and _ in place of spaces.
Define the naming system so that looking at a name will get you most of the way to knowing what’s inside the folder or the file.
Examples:
2022-03-21_Annual_Scrapie_Inspection_Report.pdf
2022-03-21_Scrapie_Report_Acquired_sheep.csv
2022-03-21_Scrapie_report_changed_tags.csv
2022-03-21_Scrapie_report_died_butchered_sheep.csv
2022-03-21_Scrapie_report_sold_sheep.csv
Sheep-Disease_Scrapie (a folder for info about Scrapie)
2021-06-22_TSU_Samples_Taken.ods
Book-Swann-Three_Bags_Full-kindle_2021-07-09_13-47-49 (an aside this is a great sheepy murder mystery where the sheep are the detectives)
2012-07-09_Colored_Pencil_Techniques.pdf
Standardize how you will use dates, if any, and in what formats, keeping in mind how computers sort things.
My date specific items generally start with a filename of YYYY-MM-DD_
Circa dates use -00- in place of any missing components and add a c after the date, e.g. 2020c-
Dates that are not circa but only known to a certain precision just go that far, i.e. 2016_ or 2016-01_
Date ranges use _ between the start and end dates, i.e. 2014-10-05_2015-01-01_
Decide how you will capitalize words, keeping in mind that different systems are either case-sensitive or case-insensitive. Personally I use camel case for readability, with no real rules beyond that.
Standardize on a few file formats that are open source or ubiquitous as much as possible. (ODT, ODS, PNG, TIFF, CSV, SQLITE, JPEG, PDF, ZIP, DMG etc.)
Convert any files that are in unusual formats whenever possible.
Pad numbers with leading zeros to the precision necessary to handle your data. This is critical for proper sorting among different machines.
001 not 1
07 not 7
Define a very flat filing system that mimics a flat paper system without any Pendaflex super-folders. Have only one level of folders in your “File Cabinet” folder.
Do not depend on searching to find things. Search systems come and go, and depending on your data they can produce numerous false positives or negatives.
Do not depend on system tags to find things. They are not portable across systems.
If you embed tagging data in your files for use by some other tool make sure you use a standard and restricted vocabulary.
Tags should be singular not plural i.e. cat not cats
Create a folder for current active project support material if necessary.
Decide whether to lump all someday/maybe and waiting for files into the main system or into separate folders. I do a mix.
Decide what parts of the system need to be mobile. This is less important now that mobile devices have more space and power to manipulate my files, but I still can’t carry everything with me on my phone or iPad.
Plan for a robust backup system that you also test regularly.
An untested backup is worthless. Make sure you can actually retrieve files from your backup by testing it.
Plan for how to review the filing system to remove unneeded files on a regular basis.
And from experience:
When switching from whatever you are using now to a more complete and well-defined system, dump everything into a new folder called backlog and explicitly move files out as you rename them and decide where they should go. This will also aid in weeding out duplicates.
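Several of the rules above (no spaces, no restricted characters, zero-padded numbers) are mechanically checkable. A minimal sketch of such a checker, assuming the Windows restricted-character set as the strictest common denominator; the function name and digit heuristic are mine, not part of the rules:

```python
import re

# Windows' restricted characters -- the strictest of the three
# major desktop systems.
RESTRICTED = set('<>:"/\\|?*')

def check_name(name: str) -> list:
    """Flag violations of the naming rules above. A sketch,
    not a complete validator."""
    problems = []
    if " " in name:
        problems.append("contains spaces")
    if RESTRICTED & set(name):
        problems.append("restricted characters")
    # An isolated single digit suggests a number that wasn't
    # zero-padded (e.g. report_7 instead of report_07).
    if re.search(r"(?<!\d)\d(?!\d)", name.rsplit(".", 1)[0]):
        problems.append("unpadded single digit")
    return problems

for n in ["2022-03-21_Annual_Scrapie_Inspection_Report.pdf",
          "My Notes: draft 3.md"]:
    print(n, "->", check_name(n) or "OK")
```

Running something like this over the backlog folder as you rename files makes the migration pass more systematic.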
Actually, Lightroom stores all that in an SQLite database that you can in fact access with other tools. I tested that before I committed to using Lightroom for all my photo cataloging, both personal and for our Historical Society.
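You can verify that with nothing but Python’s standard library: a .lrcat file opens as ordinary SQLite. This sketch just lists the tables read-only (the actual table names inside a real catalog are Adobe’s and vary by version, so none are assumed here):

```python
import sqlite3

def list_tables(catalog_path: str) -> list:
    """Open a catalog file read-only as plain SQLite and list its
    tables. Works on a Lightroom .lrcat without Adobe software."""
    con = sqlite3.connect(f"file:{catalog_path}?mode=ro", uri=True)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
        return [r[0] for r in rows]
    finally:
        con.close()

# Hypothetical usage:
# print(list_tables("/path/to/Catalog.lrcat"))
```

Opening read-only (`mode=ro`) matters: you never want another tool writing to a live catalog.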