This post misses the entire point of JSON/TOML/YAML and the big advantage they have over databases: readability.
Using a file-based approach sounds horrible. Context gets lost very easily, as I need to browse and match the outputs of a ton of files to get the full picture, whereas the traditional methods let me see that nearly instantly.
I also chuckled at the exact, horribly confusing example you give: upd_at. A metadata file for an object that already inherently has that metadata. It's metadata on top of metadata, which makes it all the more confusing what the actual truth for the object is.
I know! right?
Some say that since you can use 'tree' and things like ranger to navigate the files, it should work alright. But I guess that with one giant metadata file for all the posts on your blog, it would be much easier to see the whole picture.
As for upd_at, it does not contain information about when the file has been edited, but about when the content of the post was meaningfully edited.
So if, for example, I change the formatting of my timestamps from ISO3339 to another standard, that changes the file metadata, but it does not update the post content as far as the readers of the blog are concerned. But I get why you chuckled.
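To illustrate what I mean, here is roughly what one post looks like on disk (the slug, timestamps, and file names below are made up for the example):

mkdir -p posts/hello-world
printf 'Hello, world!\n'        > posts/hello-world/content.md
printf '2024-07-01T09:00:00Z\n' > posts/hello-world/created_at
printf '2024-07-20T14:30:00Z\n' > posts/hello-world/upd_at
# Reformatting the timestamp files later changes their mtime,
# but the *content* of upd_at only changes when the post itself meaningfully changes.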
Tip:
find . -type f | xargs head
(But no, it's not comfy.)

But I don't think the "one giant metadata file" argument helps; personally, my attention starts splintering far sooner than that. Most of the time, if I'm looking at the metadata of an object, I'm not just looking at that single object, I'm reasoning about it in relation to other data points (maybe other objects in the same collection, maybe not). If at some point I want to shift my focus from created_at to updated_at or back, I need that transition to be as cheap as an eye saccade. So by splitting the data across multiple files, you are sort of setting the "minimal tax" pretty high already.
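To give a feel for that tax: just lining up the two timestamps across posts already takes something like the loop below (assuming a posts/<slug>/created_at and posts/<slug>/upd_at layout, which is my guess at your structure), whereas with one JSON/TOML/YAML file both fields are already sitting next to each other on one screen.

for d in posts/*/; do
  # print slug, created_at and upd_at side by side, tab-separated
  printf '%s\t%s\t%s\n' "$d" "$(cat "${d}created_at")" "$(cat "${d}upd_at")"
done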
That said, for simple projects where you want as few dependencies as possible, I think it's fine; it might or might not be better than raw-dogging your own format. I've actually implemented pretty much this format multiple times when I was coding predominantly in Bash. (Heck, e.g. my JATS framework is pretty much using FAMF for test run state 😄.) Just be careful: creating / removing files and directories can be a pretty risky operation -- make a typo in (or botch a refactoring of) a shell variable and you might just be rm -rf'ing your own "$HOME". It might be one of the things you want to do less of, not more.
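One guard that helps there (plain Bash, not specific to your setup): let parameter expansion abort when the variable is empty, and end option parsing before the path:

# aborts with an error instead of expanding to "posts/" when $slug is unset or empty
# ($slug and the posts/ prefix are hypothetical names for this example)
rm -rf -- "posts/${slug:?refusing to rm, slug is empty}"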
BTW, I chuckled because you turn from created_at to cre_at for no apparent reason. (I mean, if you like obscure variable names, fine by me, but then why would you call it created_at in the first file?)

BTW BTW, I love your site, I wish most of the web looked like that; the grey gives me a sort of nostalgia :D Also, you reminded me that I should give Kagi a try...