David Pham06:09:25

Is it possible to find the entity with the maximum of some attribute in datalog?

Yuriy Zaytsev08:09:40

(d/q '{:find [(max ?attr)] :in [$] :where [[_ :some/attribute ?attr]]} db)

David Pham15:09:37

How do you get the entity whose attribute is the maximum?

Yuriy Zaytsev16:09:22

(d/q '{:find  [?entity]
       :in    [$]
       :where [[(datomic.client.api/q '{:find  [(max ?attr)]
                                        :in    [$]
                                        :where [[?entity :some/attribute ?attr]]} $)
                [[?attr]]]
               [?entity :some/attribute ?attr]]} db)

David Pham16:09:50

So nested queries?
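Yes, the inner `q` call runs first and its result binds `?attr` for the outer query. If you only need the single top entity, a simpler alternative (a sketch, assuming `db` is a database value and `:some/attribute` holds comparable numbers) is to pull the entity/value pairs and pick the max in plain Clojure:

```clojure
;; Fetch all [entity attr-value] pairs, then select the pair with the
;; largest value using max-key. Returns the entity id.
(->> (d/q '{:find  [?entity ?attr]
            :in    [$]
            :where [[?entity :some/attribute ?attr]]}
          db)
     (apply max-key second)
     first)
```

This trades a nested query for a linear scan of the result set on the client, which is usually fine unless the attribute has a very large number of values.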


Q: I have a very slow memory leak in a production Cloud system. Before I start dumping logs and digging around, I wonder if folks out there have any tricks/tips for this process. I’ll post the chart in the thread…


In particular, I wonder why the indexer line goes up. And does that provide a clue about the leak?


Hi @U0510KXTU, have you actually seen a node go OOM or are you just noticing this in your metrics/dashboard? This small snippet matches with the expectations I have for indexing. The indexing job occurs in the background. Indexing is done in memory and then the in-memory index is merged with the persistent index and a new persistent index is written to the storage service. If you widen the time scale you should see a saw tooth pattern on your indexing line.


@U1QJACBUM No I haven’t yet in prod but the same code running on Solo (test system) has gone OOM. That chart is 2 weeks, hence no saw tooth. Here’s the hour just gone. Saw tooth as expected


Interesting that you think this is normal. Is there some doc somewhere that describes what “normal” is for charts in the dashboard? That would help me (and others I suspect)


Whenever I deploy new code, the FreeMem line jumps back up to 10 MB and starts the slow decline