#datomic
2017-08-17
matthavener15:08:30

is it possible to delete a memory db? it looks like it's still retained somewhere in the heap, even after d/release and d/delete-database

potetm15:08:47

@matthavener Wouldn't its lifecycle on the heap be determined by GC?

matthavener15:08:41

that’s what I was hoping, but it doesn’t seem to be the case 😞

potetm15:08:54

Why does it not appear to be collected?

potetm15:08:13

Or, better phrased, what are the symptoms?

matthavener15:08:33

if I transact 1 mil datoms, call ‘d/delete-database’ and ‘d/release’, and then dump the heap, I see 1 mil datomic.db.Datum instances, even after a manual GC

matthavener15:08:53

so eventually, after repeating that pattern, my JVM runs out of memory
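A minimal sketch of the repro pattern described above, assuming the Datomic peer library is on the classpath; the URI, batch sizes, and use of :db/doc as a throwaway attribute are all illustrative:

```clojure
(require '[datomic.api :as d])

;; Create a mem db, transact ~1M datoms, then release and delete it.
(defn churn-mem-db []
  (let [uri "datomic:mem://scratch"]
    (d/create-database uri)
    (let [conn (d/connect uri)]
      (doseq [batch (partition-all 1000 (range 1000000))]
        @(d/transact conn
                     (map (fn [n] {:db/id  (d/tempid :db.part/user)
                                   :db/doc (str n)})
                          batch)))
      (d/release conn))
    (d/delete-database uri)
    (System/gc)))
;; If a heap dump after this still shows datomic.db.Datum instances,
;; something (a def, *1, a cached db value) likely still holds a
;; reference to the connection or a db value.
```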

potetm15:08:58

Interesting....

potetm15:08:34

I'm guessing "manual GC" means (System/gc)? Have you confirmed that GC is actually being run (e.g. via jstat)?
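One way to check both halves of that question from the command line; the PID is a placeholder, and this assumes a stock JDK with the monitoring tools installed:

```shell
# Watch GC activity for the JVM (PID 12345 is a placeholder).
# If the FGC column increments after (System/gc), a full
# collection really ran.
jstat -gcutil 12345 1000 5

# Histogram of *live* objects (:live forces a full GC first),
# filtered to Datomic's Datum class.
jmap -histo:live 12345 | grep datomic.db.Datum
```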

potetm15:08:20

I realize this isn't particularly helpful, but it might provide some useful data points. Could certainly just be that they said, "This is a dev tool. We're not going to worry about running out of memory." But only a rep could answer that.

matthavener15:08:51

yeah, we’re abusing the mem db for a kind of whacky kafka+datomic CQRS-type thing

matthavener15:08:19

I will check GC with jstat though, good idea

chris_johnson18:08:06

Is the use-case of running a Peer in AWS Lambda JVM runtime still in the status of “we don’t expect that to work and offer no advice or support”?

andrewhr18:08:46

I believe the Client API is the way to go in regards to running Datomic + Lambda. Peers will be off-loaded to old-school EC2 as usual

chris_johnson18:08:02

That certainly seems reasonable; however, what I was hoping to do was run a Vase service in Lambda, and it uses the Peer library. I get as far as it refusing to launch, because the transactor keystore and truststore are not at the literal file URIs starting with /datomic that the library expects

chris_johnson18:08:24

I guess my hobby project is going to be more involved than I had thought. 😄

favila19:08:23

@matthavener are you sure you don't have a reference to the connection somewhere? a def or a *1 or something?

matthavener19:08:50

favila: yeah, a bit more digging and I think that is the issue
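To illustrate favila's point: in a REPL, either a def or the *1/*2/*3 result registers can keep the whole in-memory db strongly reachable even after d/delete-database. A sketch with illustrative names:

```clojure
;; Either of these pins the mem db's Datums in the heap:
(def conn (d/connect "datomic:mem://scratch")) ; a def holds the connection
(d/db conn)  ; evaluated at the REPL, *1 now holds the db value

;; Clearing the references makes the data collectable again:
(def conn nil)
;; ...and evaluate a few throwaway values to cycle the db out of *1/*2/*3:
1 2 3
```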

favila19:08:56

mem db connections probably have strong references to their data, whereas normal connections have some indirection

favila19:08:27

still seems like d/release should clear and poison the connection somehow

devth20:08:29

trying to track down a transaction that includes many retractions on a new db that doesn't contain many transactions. not sure how to form the query without performing a full scan. tried with:

(datomic/q
  '[:find (count ?e)
    :in $ ?log
    :where
    [?e ?a ?v ?tx false]
    [(tx-data ?log ?tx) [[?e ?a ?v _ ?op]]]]
  (datomic/history (latest-db))
  (datomic/log (conn)))
even if i could filter down to transactions that contain 10 or more datoms i would quickly find it
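One way to get that filtering without a full history scan is to walk the log directly with d/tx-range and count retractions per transaction. A sketch, assuming the peer API; `conn` and the threshold are illustrative:

```clojure
(require '[datomic.api :as d])

;; Walk the entire log (nil nil = no start/end bound) and keep
;; transactions with at least `threshold` retractions.
(defn big-retraction-txes [conn threshold]
  (->> (d/tx-range (d/log conn) nil nil)
       (keep (fn [{:keys [t data]}]
               ;; datoms support keyword access; :added is false
               ;; for retractions
               (let [retractions (remove :added data)]
                 (when (>= (count retractions) threshold)
                   {:t t :retractions (count retractions)}))))))
```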

devth20:08:35

can i force re-indexing after an excision?

devth20:08:01

just tried fully excising 923 entities. doesn't appear to take effect as i assume it's going to async reindex at some point

devth20:08:47

looks like it took effect.

favila20:08:34

you can force reindex any time

devth20:08:09

request-index is async though

devth20:08:15

any way to see progress or block ?

favila20:08:30

d/sync-index

devth20:08:29

ah, missed that. apparently sync-excise too. though i don't understand how sync-excise could not communicate w/ transactor
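For reference, the force-and-wait pattern with those peer API calls looks roughly like this; `conn` is illustrative:

```clojure
;; Ask the transactor to start indexing, then block until an index
;; covering the db's basis-t is in place.
(let [t (d/basis-t (d/db conn))]
  (d/request-index conn)    ; queues an indexing job on the transactor
  @(d/sync-index conn t))   ; deref blocks until indexing covers t

;; After an excision, d/sync-excise waits analogously:
;; @(d/sync-excise conn t)
```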

favila20:08:16

probably looks for something in the index

favila20:08:30

excisions are recorded

devth20:08:33

is it building an index in the peer?

favila20:08:48

no, peer looks at storage for the latest root

favila20:08:57

when it moves, that's the new index

favila20:08:04

(means an indexing completed)

devth20:08:19

oh, so it doesn't communicate with tx'or but it does hit storage

favila20:08:42

"doesn't communicate" may just mean waits vs sends a request

favila20:08:54

txor constantly pushes stuff to peer without peer asking

favila21:08:09

so peer may just wait until it sees what it wants

devth21:08:18

cool, makes sense

favila21:08:36

in any given case I am not 100% sure if knowledge is from transactor or storage

favila21:08:15

but the index roots are definitely pulled directly from storage; maybe transactor also informs (via a push) peers that storage is updated

devth21:08:01

ok. interesting.

gworley322:08:09

i'm seeing some weird behavior in my code. I have code wrapped up in a try with (catch Exception e) but it doesn't seem to be catching db.error/transactor-unavailable clojure.lang.ExceptionInfo exceptions

gworley322:08:30

are these exceptions different in some way that try wouldn't catch them? doesn't seem like they would be but weird to see it ignoring the catch

favila23:08:25

maybe the exception is thrown out of a future you deref only outside the try/catch?
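favila's hypothesis can be demonstrated in plain Clojure, without Datomic: a future's exception only surfaces when you deref it, and deref wraps it in java.util.concurrent.ExecutionException, so a catch clause for clojure.lang.ExceptionInfo won't match it either way. The error keyword below just mimics the one reported above:

```clojure
;; Creating the future throws nothing, so this catch never fires:
(def f
  (try
    (future (throw (ex-info "boom" {:db/error :db.error/transactor-unavailable})))
    (catch clojure.lang.ExceptionInfo e
      (println "caught inside try"))))   ; never runs

;; The exception appears on deref, wrapped in ExecutionException:
(try
  @f
  (catch java.util.concurrent.ExecutionException e
    (println "caught on deref, cause:" (.getMessage (.getCause e)))))
```

So if the transact call sits inside the try but the deref happens outside it (or the catch names ExceptionInfo rather than ExecutionException), the handler is skipped.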