
Is there a way to use the client api and have dynamically created in-memory databases for testing?


In testing it has been really nice to have an in-memory database created in a test fixture, but I guess that is not possible if we use the client api.


I rolled my own because I wanted interceptors in that layer as well. I have not tried the OSS lib


I have a query like that and I'm using Datomic Free. I want to display the tx time for a given entity id, but I currently only have the tx id as the result. Is it possible to get the time values? (FYI, Datomic Pro can use :db/txInstant to get the time.)


I'm not aware of any limitation in datomic free except for peer counts. It should just work. Try it.


Your query could be rewritten as


    '[:find ?e ?attrname ?v ?txinst ?added
      :in $ ?e
      :where
      [?e ?a ?v ?tx ?added]
      [?a :db/ident ?attrname]
      [?tx :db/txInstant ?txinst]]
    (d/history (db))


Thank you. After I re-ran the REPL it resolved :db/txInstant. Maybe it's a bug.


@steveb8n Great! Precisely what I'm looking for, thanks.


I wrote that lib because we needed it for probably similar reasons that you need it. LMK if you have any questions.


Question: looking at this lib I learned about all the different kinds of uuids. In my ion code I'm simply using java.util.UUID but I'm wondering if there's any value in using other uuid types? Any uuid experts out there?


Name-based uuid (version 5) has some nice properties


java.util.UUID can represent all uuid versions, though; it's not a matter of needing a different type
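For instance (a minimal sketch, not from the thread; the input string is an arbitrary example), `java.util.UUID` holds UUIDs of any version and reports which one it has:

```java
import java.util.UUID;

public class UuidVersions {
    public static void main(String[] args) {
        // A random (version 4) UUID and a name-based (version 3) UUID,
        // both represented by the same java.util.UUID type.
        UUID v4 = UUID.randomUUID();
        UUID v3 = UUID.nameUUIDFromBytes("example".getBytes());

        System.out.println(v4.version()); // 4
        System.out.println(v3.version()); // 3
    }
}
```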


I agree with Francis, v5 UUIDs are great. I use them when I need to map something like a user ID to a UUID so that the same ID always maps to the same UUID.
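A note on the stdlib: Java only ships the MD5-based version 3 variant of name-based UUIDs (`UUID.nameUUIDFromBytes`); version 5 is the same idea using SHA-1. A minimal sketch of the "same ID always maps to the same UUID" property, where the namespace prefix is a hypothetical choice, not something from the thread:

```java
import java.util.UUID;

public class StableUserUuid {
    // Hypothetical namespace prefix so user-ID-derived UUIDs
    // don't collide with other name-based UUIDs in the system.
    static final String NS = "example.com/user/";

    // Deterministically map a user ID to a UUID. Uses the built-in
    // version 3 (MD5) variant; version 5 would use SHA-1 instead.
    static UUID userUuid(String userId) {
        return UUID.nameUUIDFromBytes((NS + userId).getBytes());
    }

    public static void main(String[] args) {
        // The same ID always maps to the same UUID...
        System.out.println(userUuid("42").equals(userUuid("42"))); // true
        // ...and different IDs map to different UUIDs.
        System.out.println(userUuid("42").equals(userUuid("43"))); // false
    }
}
```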


That is a good tip. I can imagine some use cases e.g. saving a lookup by name for entities from a db when the name is immutable. Are there other less obvious scenarios where it's handy?


This is not relevant to datomic, but I used it when I generated a test fixture for MongoDB, basically exactly what temp-ids are in Datomic.


can also use v5 uuids for key-by-value situations


compound indexes, hashes, etc
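One way to read the compound-key suggestion (a sketch under assumptions: the separator choice is mine, and Java's built-in version 3 variant stands in for v5) is collapsing several fields into one deterministic UUID:

```java
import java.util.UUID;

public class CompoundKeyUuid {
    // Collapse a compound key (e.g. tenant + order number) into one
    // deterministic UUID by hashing the joined parts. The separator
    // guards against ambiguous joins like ("ab","c") vs ("a","bc").
    static UUID compoundUuid(String... parts) {
        return UUID.nameUUIDFromBytes(String.join("\u0000", parts).getBytes());
    }

    public static void main(String[] args) {
        UUID a = compoundUuid("tenant-1", "order-99");
        UUID b = compoundUuid("tenant-1", "order-99");
        // Hashed keys give stable point lookups, but they don't
        // sort meaningfully, so this is not a sorted index.
        System.out.println(a.equals(b)); // true
    }
}
```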


I had not thought of the compound index. I'll try that in datomic. Thanks!


(it won't be a sorted index)


Good point. I'll stick with txn fns for that


Hey, when applying a txn to a dev transactor earlier, my coworker hit this error:

WARN [2018-10-02 11:59:52,036] clojure-agent-send-off-pool-8 - datomic.connector {:message "error executing future", :pid 72084, :tid 243}
org.apache.activemq.artemis.api.core.ActiveMQObjectClosedException: AMQ119017: Consumer is closed
    at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.checkClosed(
Google reveals scant results. Any ideas what’s up?


What about transaction functions with client api? What's the client api counter part for datomic.api/function?


That's for cloud and says that only the built-in functions and classpath functions are supported.


Does this mean that the cloud api does not support functions like peer api with datomic.api/function does?


The documentation is.... vague


Hi, I ran into the same issue as @donaldball. Déjà vu there, it happened at almost the same time, and we are not co-workers 😉


    [clojure-agent-send-off-pool-803] WARN datomic.connector - {:message "error executing future", :pid 43, :tid 87854}
    org.apache.activemq.artemis.api.core.ActiveMQObjectClosedException: AMQ119017: Consumer is closed
        at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.checkClosed(
        at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.receive(
        at org.apache.activemq.artemis.core.client.impl.ClientConsumerImpl.receive(
        at datomic.artemis_client$fn__1607.invokeStatic(artemis_client.clj:169)
        at datomic.artemis_client$fn__1607.invoke(artemis_client.clj:162)
        at datomic.queue$fn__1363$G__1356__1368.invoke(queue.clj:18)
        at datomic.connector$create_hornet_notifier$fn__7866$fn__7867$fn__7870$fn__7871.invoke(connector.clj:195)
        at datomic.connector$create_hornet_notifier$fn__7866$fn__7867$fn__7870.invoke(connector.clj:189)
        at datomic.connector$create_hornet_notifier$fn__7866$fn__7867.invoke(connector.clj:187)
        at clojure.core$binding_conveyor_fn$fn__4676.invoke(core.clj:1938)
        at
        at
        at java.util.concurrent.ThreadPoolExecutor.runWorker(
        at java.util.concurrent.ThreadPoolExecutor$
        at


In our case, we’ve tentatively discovered that batching the forms of the txn into separate txns gets it to transact. Unfortunately, the original txn is only 77 forms, like 27k in size, not especially large, so it’s a little bit surprising.
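The workaround described amounts to splitting one transaction's forms into chunks and transacting each chunk separately; a generic sketch of just the chunking step (the batch size of 20 is an arbitrary choice, not from the thread):

```java
import java.util.ArrayList;
import java.util.List;

public class Chunker {
    // Split a list of txn forms into batches of at most `size` elements.
    static <T> List<List<T>> chunks(List<T> forms, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < forms.size(); i += size) {
            out.add(forms.subList(i, Math.min(i + size, forms.size())));
        }
        return out;
    }

    public static void main(String[] args) {
        // 77 placeholder forms, like the txn described above.
        List<Integer> forms = new ArrayList<>();
        for (int i = 0; i < 77; i++) forms.add(i);
        List<List<Integer>> batches = chunks(forms, 20);
        System.out.println(batches.size()); // 4 batches: 20+20+20+17
    }
}
```

Each resulting batch would then be submitted as its own transaction.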


It’s unsettling to note that downgrading from java10 to java8 fixes the problem


Specifically, downgrading the peer from java10 to java8 fixes the problem. Are there known issues with datomic peer and java10?


@donaldball Do you mean to say you can reproduce it reliably?


It seems so, yes.


@donaldball I was under the impression it is a connection issue between peer and transactor or peer and storage


on-prem has some very very old deps


using anything past java 8 is asking for trouble


My datomic storage stack pretty consistently fails to delete due to the Vpc not deleting. If I manually go into my VPCs and delete the datomic-created VPC, it works. This is pretty annoying. Is there a fix for this?


Hi @U083D6HK9. We do not consider deleting a storage stack to be part of any regular workflow, so I am curious why you are doing this?


We allow our developers to provision Datomic Cloud stacks when they need them. They then delete them when they no longer need them. We end up with lots of stale VPCs and failed stack deletions. Also, because it is launched via a CloudFormation template, one would expect it to work with the regular CloudFormation operations, including Delete Stack.


@U072WS7PE a lot of those headaches would be mitigated if devs had a local instance to work against. That being said, the Datomic CFT deletion should work as expected.


I totally agree, and we test deletion in our regression suite. Can you send us more information about the error you are seeing?


@U083D6HK9 are developers recreating storage stacks against existing storage?


or have you hand-rolled something to deal with all of this? We left this manual on purpose to discourage people from deleting their data.


The Events tab in the CF UI says:

> The vpc 'vpc-0931d229f45a061a1' has dependencies and cannot be deleted. (Service: AmazonEC2; Status Code: 400; Error Code: DependencyViolation; Request ID: 8d2f7d93-f945-4002-b424-c47aba885b04)

and then DELETE_FAILED:

> The following resource(s) failed to delete: [Vpc].

New storage each time. We have a custom script to delete all those resources. I'm planning on adding it to a public gist as I'm sure others have this workflow as well.


Why not leave the storage stack up all the time, and just provision compute when needed? That is certainly what we do internally.


@U083D6HK9 the whole architecture is designed so that you can leave storage up and just reconnect to it. Is there some benefit to doing this extra work that I am not seeing? If there is some isolation we fail to support I would like to make it first class.


Because developers want to ensure they are working in a clean environment with empty DBs.


We at first tried the approach of suffixing DBs with UUIDs, but that became a real pain.


OK, that is good input, thanks! Will discuss with the team.


@U083D6HK9 Do you take a similar approach with AWS resources, e.g. automating the creation of 1-off DDB, S3, etc. as needed?


Yes. We use Pulumi (similar to Terraform), which has the concept of a stack. A stack consists of any number of resources. When created, a stack provisions all the resources with a unique name. This allows us to spin up entire instances of our infrastructure for any given environment: prod, dev, qa, kennys-prod-test, etc.


are the unique names Pulumi makes better than DB+UUID suffix in some way?


Yes. It makes our dev workflow much easier. As an example, our application calls for DBs named like: admin, customerdb-&lt;cust UUID&gt;1, customerdb-&lt;cust UUID&gt;2, ..., customerdb-&lt;cust UUID&gt;N. When working in a REPL, we know that all we need to do is connect to the admin DB, not admin-&lt;UUID&gt;. We ended up writing a wrapper for the connect function that auto-suffixed the DB name with the current UUID suffix. But then it became a problem when you wanted to run a development system (i.e. an http server for UI dev) and run your tests using a clean DB, all in the same REPL. This was the primary motivator for writing the lib. The workflow we followed when working with the peer library was identical and it worked really well. We didn't need to think about what the current binding for the db-suffix was.
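The auto-suffixing wrapper described above might look something like the following sketch. Everything here is an assumption for illustration: the `dbName` helper, the mutable suffix, and the example suffix value are hypothetical, not the actual lib's API.

```java
public class DbNames {
    // Current environment suffix, e.g. a per-run id; empty in dev
    // so the REPL can use plain logical names like "admin".
    static String suffix = "";

    // Resolve a logical DB name to the physical name for this environment.
    static String dbName(String logical) {
        return suffix.isEmpty() ? logical : logical + "-" + suffix;
    }

    public static void main(String[] args) {
        System.out.println(dbName("admin")); // admin
        suffix = "run1";                     // hypothetical run id
        System.out.println(dbName("admin")); // admin-run1
    }
}
```

The pain point the message describes is exactly this global suffix: one binding per process means a dev server and a test run in the same REPL can't use different suffixes at once.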


Ultimately it boiled down to less code we need to maintain. Writing and testing code against the peer library was intuitive and easy.


thanks @U083D6HK9! This is very helpful input.