I'm thinking of a recent event where someone accidentally erased their hard disk... albeit not using this lib.
I would honour the original unix `rm` command, and only delete the symbolic link itself rather than walking recursively down into it.
`rm -rf *` removes the symbolic link and does not go into the link and delete what is contained within there.
Do you or anyone else you know of have a use case that would require following symlinks and recursively delete the contents of target directories? I don't think I've ever wanted to do that and, as you suggest, it's an incredibly dangerous feature. My suggestion would be to not implement this feature. If any user of the library complains then they can write code themselves to follow links (and accept the risk that results from this).
yeah. an alternative could be to rename the option to `:dangerously-follow-links`, but yeah, one could just write their own function if they want this
@U018QDQGZ9Q I have seen people make bash scripts for this, but I haven't had the need myself. I just added it because a lot of functions in java.nio already have this behavior
ok, so if you're not opinionated either ;p. Maybe behaving like the examples you found in java.nio would be an option?
There is no delete-tree / rm -rf in java.nio directly so you have to make choices here. Making it the same as rm -rf makes sense, I'll just leave the option out
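For reference, a delete-tree with `rm -rf`-like symlink handling can be assembled from java.nio's `Files.walkFileTree`, which does not follow symbolic links unless you opt in with `FileVisitOption.FOLLOW_LINKS`. This is a minimal sketch of my own (the `DeleteTree` class and `deleteTree` method are illustrative names, not the lib's actual implementation):

```java
import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class DeleteTree {
    // Recursively delete `root`. Files.walkFileTree does NOT follow
    // symbolic links unless FileVisitOption.FOLLOW_LINKS is passed,
    // so a symlink inside the tree is visited as a single file and
    // only the link itself is removed -- same behavior as `rm -rf`.
    static void deleteTree(Path root) throws IOException {
        Files.walkFileTree(root, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs)
                    throws IOException {
                Files.delete(file); // deletes plain files and symlinks (the link, not its target)
                return FileVisitResult.CONTINUE;
            }
            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc)
                    throws IOException {
                if (exc != null) throw exc;
                Files.delete(dir); // directory is empty by now
                return FileVisitResult.CONTINUE;
            }
        });
    }

    public static void main(String[] args) throws IOException {
        // A directory outside the tree, reachable only via a symlink.
        Path outside = Files.createTempDirectory("outside");
        Files.writeString(outside.resolve("precious.txt"), "keep me");

        Path tree = Files.createTempDirectory("tree");
        Files.writeString(tree.resolve("junk.txt"), "bye");
        Files.createSymbolicLink(tree.resolve("link"), outside);

        deleteTree(tree);

        System.out.println(Files.exists(tree));                            // tree is gone
        System.out.println(Files.exists(outside.resolve("precious.txt"))); // symlink target untouched
    }
}
```

Running it prints `false` then `true`: the tree (including the link entry) is deleted, while the link's target directory survives.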
Agree with @cdpjenkins. Remove the possibility to hang yourself with delete-tree, but leave enough rope for anyone really wanting to hang themselves
As the "recent case" who erased a PC's hard disk this way, I agree with the above. Do what `rm` does by default and remove only the link, and make it possible (if you really need to) to follow the link.
There was some code I saw recently that would follow symlinks inside the tree you were deleting, but only remove the link (without following it) if it pointed outside of the tree. That, if it were the default behaviour, would have saved me too. Worth a thought.
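The containment check described above could be sketched like this: before following a symlink, resolve it and test whether the target still lives under the root being deleted (`safeToFollow` is a hypothetical helper of mine, not from any library):

```java
import java.io.IOException;
import java.nio.file.*;

public class LinkScope {
    // Only allow following a symlink during a recursive delete if its
    // resolved target still lies inside the tree being deleted. A link
    // escaping the tree (like `BBC Desktop -> /`) would just be unlinked.
    static boolean safeToFollow(Path root, Path link) throws IOException {
        Path target = link.toRealPath(); // resolves the link (and any parent links)
        return target.startsWith(root.toRealPath());
    }

    public static void main(String[] args) throws IOException {
        Path root = Files.createTempDirectory("tree");
        Path inner = Files.createDirectory(root.resolve("inner"));
        Path inLink  = Files.createSymbolicLink(root.resolve("in"), inner);
        Path outLink = Files.createSymbolicLink(root.resolve("out"), root.getParent());

        System.out.println(safeToFollow(root, inLink));  // target stays inside the tree
        System.out.println(safeToFollow(root, outLink)); // target escapes the tree
    }
}
```

This prints `true` then `false`. Note that `toRealPath` canonicalizes both sides, so links-to-links and `/tmp`-style aliases are handled, at the cost of an extra filesystem round trip per link.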
That is the default behavior of rm -rf and also fs/delete-tree. Just released the lib. Many thanks all. It comes bundled with babashka 0.2.9, also just released.
For the curious, this is what I found after investigating the incident last week: I was rewriting my backup bash script in babashka. My work Mac, for some reason having to do with corporate IT nonsense or ActiveDirectory or somesuch, has a symlink in the home directory (in /Users) of the form `BBC Desktop -> /`. (This probably should have been excluded from the rsync backups.) My old bash script, for various reasons, needs to `rm -rf *` some old backups once in a while. Instead of calling rm through clojure.java.shell, I decided to write a very naive recursive Clojure function that walked the tree and deleted the files and directories. When it got to that symlink, it just followed it and started recursively deleting from `/` on the backup PC... Bye bye backup PC.
It had eaten through /etc and most of /usr by the time I discovered the issue and unplugged the external SSD drive with the backups. Otherwise, assuming it didn't blow the stack, it would have deleted the backups too.
So the recent event was a good reason not to include the option in the library.
I've been working with some people on a project where I suggested we consult a DBA (since I'm not one and I'm not that confident in my SQL-fu). The day of reckoning has come, and the comments and suggestions started pouring in. "And since you need different classes in the application layer in either case, adding some extra checks when deserializing is trivial" — oh, this is not a good start. So far my proposal of a bunch of simple tables with constraints has been met with table inheritance and JSON fields with constraints in the application layer.
Given the discussion on laptops a couple of days ago… I’ve been using an M1 MacBook Pro for a couple of weeks. In short, it’s amazing. IntelliJ/Cursive, shadow-cljs watch, kaocha watch, a Fulcro RAD app running in the browser. The laptop stays cool, totally silent, and runs for hours with only 20-30% battery consumed. So different from my Intel MBP, which would last for no more than 45m doing similar jobs! It’s a fantastic dev setup. I still use a 2019 15" MBP as my main desktop machine, but I’m finding that I like programming on the M1 MBP better. Even with only one screen! Strange side effect, though: my hands sometimes get painfully cold using it, because the aluminum case is a giant heat sink!!! https://twitter.com/RealGeneKim/status/1358476835315585024 One downside: there are no ARM JVM builds with JavaFX yet, so I can’t use Reveal anymore, which is a bummer. (I am tempted to install an x86 JVM, but am irrationally afraid that I’ll screw up my working dev environment… 🙂)
I've been using a Surface Pro X (Win10/aarch64) for a while, so I hear your JavaFX pain. One thing I've used to work around it is https://github.com/djblue/portal if you haven't tried it yet. In my case portal is usually running in a JVM process in WSL2 and I connect to its localhost port manually from a native Windows browser or vscode webview.
I'm waiting for the 16 inch model that can drive two screens 🙂. My 2012 MBP has been in need of replacement for years now but I've held off doing so due to a) butterfly keyboard and b) impending transition to ARM. So it's really great to hear that the M1 model works well for folks (with some of the tools that I like to use) and I can't wait to get one for myself later this year.
I'm glad to hear Apple's third chipset transition is going smoothly (although I can't imagine buying another Apple device at this point). I went through both of their earlier transitions (680x0 -> PowerPC -> Intel) and they seem to do a really good job.