Martynas M07:06:57

Hey. How are these two maps different?

#:xt{:fn 'inc :id :my-id}
:xt{:fn 'inc :id :my-id}


the second one is 2 forms


a keyword and a map

Martynas M07:06:36

Yeah, I found it too. It's so strange 😄

Christian Johansen07:06:38

And the first one is a different way of expressing {:xt/fn 'inc, :xt/id :my-id}

Christian Johansen07:06:56

The # character indicates a reader macro

Christian Johansen07:06:24

If you evaluate (set! *print-namespace-maps* false), you won't see the first form, and the difference will be clearer.
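A minimal sketch of the difference, safe to paste into a REPL: `#:ns{...}` is Clojure's namespaced-map reader syntax, while a bare keyword followed by a map reads as two separate forms.

```clojure
;; The reader syntax #:xt{...} is just another notation for a qualified-key map:
(= #:xt{:fn 'inc :id :my-id}
   {:xt/fn 'inc :xt/id :my-id})
;; => true

;; Without the #, :xt is one form and the map is a second, unrelated form:
(read-string ":xt{:fn 'inc :id :my-id}")
;; => :xt  (read-string only reads the first form)

;; *print-namespace-maps* controls only how such maps are printed, not read:
(binding [*print-namespace-maps* false]
  (pr-str {:xt/fn 'inc :xt/id :my-id}))
;; => "{:xt/fn inc, :xt/id :my-id}"
```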

Martynas M07:06:12

Alright. I'll try to use it this way and then I might set up some kind of REPL config for it. Maybe CIDER supports this as a config param.


if you use lein you can set it in :global-vars in your dev profile


(or just do this in your dev init-ns / user.clj)
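A sketch of both options; the profile key is Leiningen's standard `:global-vars`, and the `user.clj` variant assumes a dev source path on the classpath:

```clojure
;; project.clj, inside the :dev profile:
:profiles {:dev {:global-vars {*print-namespace-maps* false}}}

;; or in dev/user.clj (loaded automatically when it's on the classpath);
;; alter-var-root avoids needing a thread-local binding for set!:
(ns user)
(alter-var-root #'*print-namespace-maps* (constantly false))
```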


Is there a clear, short naming convention for sets? I couldn’t find anything in the various style guides and clojure.set seems to use mostly s, which can be easily confused with the string naming convention. I would love to use set, but… you know.


I see what you mean, and it makes me wonder why I haven’t thought about the same issue. I think I tend to name sets for what they are or do, like usernames or valid-state?


Hm, although sometimes I still need something more abstract, I think your approach makes a lot of sense in many cases. I’ll have to observe more when I actually use sets and why.


If you do come across some fitting generic name, like s, I’d love to hear about it 🙂


How about z? 😄


Well, it almost sounds like set, so maybe close enough. :D


hehe 😅 tbh I found xs confusing enough to begin with. Sometimes, short names are acronyms or initialisms, like s, and at other times, they’re… something else, like xs. No reason to mix onomatopoeia in there, too!


I often use xs in the context of varargs and pattern matching, like [x & xs], where I read xs like a plural form of x, so I find it more acceptable. Maybe it would help if s alone were actually an acronym for “sequence”. I never really understood why str is not just called strcat, to make str available as a clearer naming convention for strings (I think it even was like that in an earlier version of Clojure).
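For example, in sequence destructuring, x binds the head and xs ("plural of x") binds the rest:

```clojure
;; x is the first argument, xs is a seq of all remaining ones:
(defn head-and-rest [x & xs]
  [x xs])

(head-and-rest 1 2 3)
;; => [1 (2 3)]
```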


Agreed on str. I think I shadow it pretty often by mistake. I find xs similarly acceptable now in the contexts you mention, but I still see it tripping up beginners.


Interesting that str had a different name earlier on!

Martynas M07:07:32

I use sets for clojure.set and strs for clojure.string. Also, some people use s for clojure.spec.alpha, so s is not good.


Wouldn’t sets/strs suggest that there are multiple sets/strings? If I had a collection of sets/strings, I'd also sometimes name it like that. Good point about s, though. Maybe something like strx / setx could also work.

Martynas M08:07:17

I think I haven't worked with a collection that has multiple sets inside yet, so I can't say more about that. Also, if I work with a collection of strings, I know what I have there; and on top of that, you would never use a list like this: my-list/function. I have never had issues with my naming. But it's only a suggestion and you know better what you like. Edit: Basically, I avoid naming my data by its data type only. If I know it's usernames, then I call it that. If I know it's UUIDs, then I call it that. Edit: But even if you name your lists sets and strs, you still don't call them like strs/substring and sets/union.


Maybe the “/” in my previous post was confusing; it was meant to mean “or”, so not like strx/setx but strx for strings and setx for sets. Yeah, I would also try using a clearer name if the collection is about something specific, but sometimes I need more general names for more general functions.


I got the / bit because it wasn’t a part of the block (good formatting on your part). @U028ART884X: Imagine a generic function like

(defn my-fn [my-set] (first my-set))
You don’t care about what data the set contains, but you do care that it’s a set. What would you call the param instead of my-set?

Martynas M10:07:13

You can use my-set just fine. I don't remember the thread, but I was talking about imports. Also, if your function is a one-liner and the input is the only thing you operate on, then you can use one-letter vars just fine. Everything is a preference, and I don't know what value you get from my opinion. I only wanted to talk about imports.


@U028ART884X well, @U032GJ90EMA was just fishing for ideas, as far as I could tell 🙂 Nevermind!


Could a CI pipeline detect changes to a hierarchy? For example:

+ ;; a warning commit
+ (derive :static/water :static/substance)
Here the CI should warn: made :static/water a :static/substance. this cannot be undone.
- ;; a warning commit
+ ;; a sometimes-warning commit
  (derive :static/water :static/substance)
+ (derive :fruit/blueberry :fruit/berry)
+ (derive :static/water :static/drinkable)
Here the CI should warn: made :static/water a :static/drinkable. this cannot be undone. It should only warn about the static namespace (not about fruits).
- ;; a sometimes-warning commit
+ ;; a failing commit
  (derive :static/water :static/substance)
- (derive :fruit/blueberry :fruit/berry)
- (derive :static/water :static/drinkable)
Here the CI should fail: :static/water is no longer a :static/drinkable.


i'm not sure i understand, but you could store the previous global-hierarchy in a file and run a diff in your CI tests ? (i mean here)
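A minimal sketch of that diff idea: persist the previous hierarchy's ancestor sets (e.g. from `(:parents @#'clojure.core/global-hierarchy)`, or from your own hierarchy map) and check in CI that no old derivation was lost. The `saved-ancestors` snapshot below is hypothetical.

```clojure
;; Hypothetical snapshot from the previous build: tag -> set of ancestors.
(def saved-ancestors
  {:static/water #{:static/substance :static/drinkable}})

;; Current code under test:
(derive :static/water :static/substance)
(derive :fruit/blueberry :fruit/berry)
;; (derive :static/water :static/drinkable) was removed -> CI should fail

(defn lost-derivations
  "Pairs [tag ancestor] recorded in the snapshot but no longer true."
  [snapshot]
  (for [[tag ancestors] snapshot
        ancestor ancestors
        :when (not (isa? tag ancestor))]
    [tag ancestor]))

(lost-derivations saved-ancestors)
;; => ([:static/water :static/drinkable])
```

Restricting the warning to the static namespace would just be an extra `:when (= "static" (namespace tag))` clause.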


What migrations library should I consider? Migratus, Ragtime, anything else? The main use case is to manage relational database schema changes over time (data loading / changes would be separate). A data-oriented approach would be preferable, hopefully helping to keep queries on the simpler side. There is an existing relational database (MySQL; maybe Postgres/RDS later) that has flat data and some JSON blob data, which I want to break down into individual fields to make the data much simpler. I'd like to keep migrations focused on schema changes; data loading / changes would ideally be done in a separate process (not during application startup). A new service is being written, and hopefully new database tables, so switch-over should be simpler. Ideally I'd like a library that is simple to work with, well documented, reliable, and works with Clojure CLI (preferably not just a Leiningen plugin). Any constructive comments are welcome. Please reply in thread or I'll probably miss the reply. Thank you.


Sounds like pretty much anything would suit you. Personally, I use Migratus. It's very simple - when I wasn't happy with some of its aspects, I just extended or straight up reimplemented some parts in a few minutes.

👍 1

For my new projects I decided to stop mixing database-ops-related logic and application logic. Instead I treat my database as a separate unit in the system. That allows building the system from mostly independent pieces, with some simple rules like "application version A requires DB schema version B", etc.

✔️ 1

Or, if your needs are simple and you don't want an additional dependency…

;; The version that we expect to work with. If this number is greater than the stored db-version, migrations will be
;; called in sequence to bring the stored version up to date.
(def db-version 159)

(defn perform-migrations! []
  ;; `migrations` is assumed to be a map of version number -> zero-arg migration fn;
  ;; get-db-version / set-db-version! read and write the stored schema version.
  (let [stored-db-version (get-db-version)]
    (doseq [i (range (inc stored-db-version) (inc db-version))]
      (log/info "Performing db migration to version" i)
      ((get migrations i))
      (set-db-version! i))))

👍 1

YMMV, obviously, but this has worked just fine for me over the last 7 years of running a production app 🙂


We've been using Ragtime for the last 6 years; there's a dev version that uses next.jdbc (stable versions still depend on clojure.java.jdbc).

👍 1

I'm looking for a migration library that:
• definitely lets you use several SQL dialects (Postgres, MySQL, H2), and thus has some sort of DSL instead of having you write CREATE TABLE and such directly in SQL
• (preferably) dumps the resulting schema into a file which can be used to set up a new database directly (instead of having to run hundreds of migrations to recreate the DB state on a new installation)
I've looked at Liquibase, Flyway, Migratus, and Ragtime. Liquibase seems to be the only one that supports the first requirement. Am I missing something?


why do you need to support multiple SQL dialects?


MyBatis Migrations from the old Java days has a good story for SQL-file migrations. I'm scared to death of embedding code that migrates a database in my program. MyBatis is a jar that you execute from the command line.

Patrick Brown15:06:05

Is this some crazy kind of Clojure themed ARP?

⁉️ 2
Patrick Brown16:06:52

I found this looking for Datahike cljs code to study. Now I'm crazy curious, but not enough to go down the rabbit hole.


Hi, this is a debugging question. When a function gets called unexpectedly, how do I find out who called it? I have a lazy-seq with disk IO as a side effect that realizes the whole lazy-seq of thousands of slurps about 2 seconds after it was asked for, and correctly returned, a handful of elements. I keep the lazy-seq in an atom in state. Is there something about atoms that can force lazy-seqs? I use Emacs. Do you have any debugging tips? It is probably my code doing something stupid somewhere, so how do I find the unknown caller?


Also, most repls store the last exception as *e


I thought of that, but I don't have an exception.


Is there a more elegant way than causing an exception in there?


Oh, you want to get the stacktrace at the point you're at?


ie whenever the function gets called?


Something like (-> (Thread/currentThread) .getStackTrace)


(clojure.repl/pst) works without exception


wow! Yes something like that.


I'm assuming you can throw that in a println in the function, and filter the noise out?


Oh, I didn't know about pst, that works


You are right! I read over the empty arity!


actually, I was wrong 🙂 pst needs an exception, but you can give it an ex-info: (clojure.repl/pst (ex-info "" {}))


Thank you! I will see where it takes me!


I was thinking call-stack.

Alex Miller (Clojure team)16:06:54

(Thread/dumpStack) in the method can be a useful tool
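Either approach works without throwing anything. As a sketch, grabbing the trace as data inside the suspect function lets you print just the callers and carry on:

```clojure
;; Print the first few callers of a suspect function, then return normally:
(defn suspect [x]
  (doseq [frame (take 5 (.getStackTrace (Thread/currentThread)))]
    (println "  called from:" (str frame)))
  (inc x))

(suspect 1)
;; prints a few stack frames and returns 2
```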

Quentin Le Guennec20:06:13

Hello, are there any resources on the advantages of core.async versus Node.js async/await? Specifically, I'd like to know whether core.async covers everything async/await does, or if async/await does something that core.async can't do natively.

Quentin Le Guennec20:06:43

(I'm struggling with concurrency in clojure in general, any resource would help)


async/await has built in error handling, which you have to DIY in core.async


afaik: • async/await does not offer any comparable substitute to alts! • async/await has no builtin support for back pressure (eg. parking >!)
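A sketch of the alts! point, using alts!! (the blocking variant) so it runs outside a go block. With both values already buffered, :priority true makes the take deterministic:

```clojure
(require '[clojure.core.async :as a])

(def urgent (a/chan 1))
(def normal (a/chan 1))

(a/>!! urgent :pager)
(a/>!! normal :email)

;; alts!! commits to exactly one of the pending ops; with :priority true
;; the ports are tried in the order given, so urgent wins here:
(let [[v ch] (a/alts!! [urgent normal] :priority true)]
  v)
;; => :pager

;; The other take did NOT happen; :email is still sitting in its buffer:
(a/<!! normal)
;; => :email
```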

Quentin Le Guennec20:06:57

So it seems like core.async is more than async/await?


I don't know if more/less is a great way to describe options that take different trade-offs. In general, I would say async/await is easier to get started with and offers an 80% solution. I think core.async takes more effort to grok, but offers great tools for tackling async problems that are inherently very tricky.


async/await is "first generation" async technology


in established languages that are single-threaded, it might be an acceptable solution.


channels can be enduring things between processes, async/await is usually one-shot tasks


you can always use java Futures+Executors for concurrency @quentin.leguennec1. No need to bite off a different model like channels+goroutines

Quentin Le Guennec20:06:34

Unfortunately my futures seem to fail and/or exhaust the JVM memory (I need a lot of them: 5555). It seems like it would work with async/await.


No, you're probably doing something else wrong


What are you trying to do?


Generally you'd either have a thread pool, or some other rate limiter, because nothing normally benefits from 5000+ concurrency.

Quentin Le Guennec22:06:23

I need to import 5555 entities from another source with post requests. Right now pipeline-blocking with 8 concurrency seems to work well.


you can have 5555 futures and a semaphore of size 8 around the external service calls
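A sketch of that futures + semaphore shape; `call-service!` is a stand-in for the real POST, and 100 entities stand in for the 5555:

```clojure
(import 'java.util.concurrent.Semaphore)

(def gate (Semaphore. 8))          ; at most 8 calls in flight at once

(defn call-service! [entity]
  (.acquire gate)
  (try
    ;; placeholder for the real HTTP POST
    {:status 201 :entity entity}
    (finally
      (.release gate))))

;; Futures are cheap to create; the semaphore does the throttling:
(let [futs (mapv #(future (call-service! %)) (range 100))]
  (count (mapv deref futs)))
;; => 100
```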


Ya, pipeline-blocking is a good choice, an executor could also be used.
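For reference, a runnable sketch of that pipeline-blocking shape; `post-entity!` is hypothetical, and 20 items stand in for the 5555 entities:

```clojure
(require '[clojure.core.async :as a])

(defn post-entity! [e]
  ;; stand-in for the blocking HTTP POST
  {:status 201 :entity e})

(let [in  (a/chan 20)
      out (a/chan)]
  (doseq [e (range 20)] (a/>!! in e))
  (a/close! in)                              ; pipeline closes out when in closes
  (a/pipeline-blocking 8                     ; at most 8 POSTs run concurrently
                       out
                       (map post-entity!)
                       in)
  (count (a/<!! (a/into [] out))))
;; => 20
```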


In .NET they have both async/await and channels (added later as a library, and yes they can have backpressure). You can also do alts! (Task.WaitAny). Not sure about nodejs.


It doesn't seem like Task.WaitAny is the same thing. Does it 1. guarantee that exactly one operation occurred? 2. allow a mix of reads and/or writes?


1 - Wouldn't that depend on what you do inside the tasks, like any runtime?


2 - No restrictions AFAIK. Are you talking about some conventions used with core.async?


These are guarantees that alts! provides


But you can do side effects right? And it can't undo those side effects...


If you have two channels that you are consuming, you can use alts! to read from exactly one channel. This makes a difference for exactly-once processing, as well as for supporting back pressure on the producers for those channels.


There are other examples, but it's a key tool in the toolbox for some inherently tricky async problems. It's also a feature that very few other async libraries offer.


Interesting. In C# I usually end up with something like this:


var tasks = new Task[2];

while (running) {
	if (tasks[0] is null) tasks[0] = StartTask1();
	if (tasks[1] is null) tasks[1] = StartTask2();
	var taskIdx = Task.WaitAny(tasks); // blocks until some task completes, returns its index
	if (taskIdx == 0) {
		// do something with tasks[0]
		tasks[0] = null;
	}
	// ...and likewise for tasks[1]
}


Do you have any examples of the tricky situations it helps with?


I didn't show it, but StartTask1 or 2 can be tasks that read off channels with or without backpressure, to address what you said before.


But I guess it doesn't take from 1 and then cancel the other take atomically.

👍 1

So I see what you are saying, just haven't really needed that personally.


I've had it come up a few times, but I can't remember off the top of my head


alts! also support priority


so if you have 3 channels with different priorities, you can guarantee that operations will serve higher priority channel operations first


I guess it's still not the best example since it's mostly just using the priority feature and not the backpressure feature


That is interesting, I'll have to think about that


I know I've used it in the past where it was more of a traditional data pipeline with producers and consumers and I want some amount of buffering for efficiency, but I don't want producers to get too far ahead of consumers.



For ring: since only the changed namespaces are reloaded, and those depending on the changed ones might not be, a defprotocol might be updated while the namespaces using it are not. So basically it suffers from the issue listed here: My question is, do you feel safe using the wrap-reload function?


Reloading in development has been good enough to make it seem worthwhile. It has been years since I've done web development, but there was a time when I was annoyed by wrap-reload and ended up writing an alternative that leaned on tools.namespace. Unfortunately, enough time has passed that I can't remember what it was solving. 🤷 In general, if you are doing any sort of reloading workflow, I've found that having defprotocols isolated in their own namespace really helps minimize issues, since it minimizes the reasons for that namespace to change and require reloading.


> I’ve found that having defprotocols isolated in their own namespace really helps minimize issues since it minimizes the reasons for that namespace to change and require reloading. Second this.