2020-03-12
Channels
- # announcements (65)
- # aws (1)
- # babashka (12)
- # beginners (111)
- # bristol-clojurians (1)
- # cider (32)
- # clj-kondo (55)
- # clojars (3)
- # clojure (71)
- # clojure-europe (17)
- # clojure-france (4)
- # clojure-italy (36)
- # clojure-losangeles (8)
- # clojure-nl (6)
- # clojure-uk (115)
- # clojurescript (2)
- # datomic (99)
- # fulcro (32)
- # graalvm (12)
- # graphql (20)
- # hoplon (203)
- # meander (56)
- # mount (3)
- # off-topic (17)
- # pathom (17)
- # reitit (22)
- # shadow-cljs (32)
- # spacemacs (9)
- # tools-deps (19)
- # vim (25)
- # vscode (3)
Is there a Clojure client library which provides the same interface as the Datomic Client API, but for accessing Datomic REST APIs? My use-case is that I would like to collaborate with someone who would be using a Datomic REST API from Python, while I'm using the same database from Clojure code.
I guess my main motivation is the ability to share some in-memory Datomic DB, which is possible via the REST server, but I have no convenient Clojure interface to it.
OR I can run a peer-server with an in-memory database, which I can connect to conveniently from Clojure via the Datomic Client API,
BUT there are no Datomic Client API libraries for other languages, like Python.
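For reference, a sketch of that peer-server option; the access key, secret, and db name below are placeholders, not from this thread. Start a peer-server backed by a mem database:
bin/run -m datomic.peer-server -h localhost -p 8998 -a myaccesskey,mysecret -d mem-db,datomic:mem://mem-db
then connect from Clojure via the Client API:
(require '[datomic.client.api :as d])

;; client config for a peer-server (placeholder credentials)
(def client
  (d/client {:server-type :peer-server
             :access-key "myaccesskey"
             :secret "mysecret"
             :endpoint "localhost:8998"
             :validate-hostnames false}))

(def conn (d/connect client {:db-name "mem-db"}))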
A 3rd option would be to just run a transactor with protocol=mem in its config.properties file, but that throws this error:
java.lang.IllegalArgumentException: :db.error/invalid-storage-protocol Unsupported storage protocol [protocol=mem] in transactor properties /dev/fd/63
The reason for wanting to share an in-memory Datomic DB is to have a really tight feedback loop within our office, where we have 1 machine with 80GB RAM, while the other machines have only 16GB
hi everyone, lately I've been periodically seeing the following error in the logs:
org.apache.activemq.artemis.api.core.ActiveMQNotConnectedException: AMQ119010: Connection is destroyed
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:335)
at org.apache.activemq.artemis.core.protocol.core.impl.ChannelImpl.sendBlocking(ChannelImpl.java:315)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQClientProtocolManager.createSessionContext(ActiveMQClientProtocolManager.java:288)
at org.apache.activemq.artemis.core.protocol.core.impl.ActiveMQClientProtocolManager.createSessionContext(ActiveMQClientProtocolManager.java:237)
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createSessionChannel(ClientSessionFactoryImpl.java:1284)
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createSessionInternal(ClientSessionFactoryImpl.java:670)
at org.apache.activemq.artemis.core.client.impl.ClientSessionFactoryImpl.createSession(ClientSessionFactoryImpl.java:295)
at datomic.artemis_client.SessionFactoryBundle.start_session_STAR_(artemis_client.clj:81)
at datomic.artemis_client$start_session.invokeStatic(artemis_client.clj:52)
at datomic.artemis_client$start_session.doInvoke(artemis_client.clj:49)
at clojure.lang.RestFn.invoke(RestFn.java:464)
at datomic.connector.TransactorHornetConnector$fn__10655.invoke(connector.clj:228)
at datomic.connector.TransactorHornetConnector.admin_request_STAR_(connector.clj:226)
at datomic.peer.Connection$fn__10914.invoke(peer.clj:239)
at datomic.peer.Connection.create_connection_state(peer.clj:225)
at datomic.peer$create_connection$reconnect_fn__10989.invoke(peer.clj:489)
at clojure.core$partial$fn__5839.invoke(core.clj:2623)
at datomic.common$retry_fn$fn__491.invoke(common.clj:533)
at datomic.common$retry_fn.invokeStatic(common.clj:533)
at datomic.common$retry_fn.doInvoke(common.clj:516)
at clojure.lang.RestFn.invoke(RestFn.java:713)
at datomic.peer$create_connection$fn__10991.invoke(peer.clj:493)
at datomic.reconnector2.Reconnector$fn__10256.invoke(reconnector2.clj:57)
at clojure.core$binding_conveyor_fn$fn__5754.invoke(core.clj:2030)
at clojure.lang.AFn.call(AFn.java:18)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
I was also on the verge of trying to run on-prem Datomic in a Docker container. If I understand correctly, that's not an officially recommended way to run a Datomic system. Can you share some setup instructions for it, please?
sometimes it results in Datomic restarting with {:message "Terminating process - Heartbeat failed", :pid 13, :tid 223}
oh, just found out that the peer version is lower than the transactor's, 5951 vs 6045; probably that's the issue 😞
How do you do the equivalent of a SQL left-join?
(d/q '[:find (pull ?e [*]) (pull ?e1 [*]) (pull ?e2 [:bank/name])
       :where
       [?e :customer/id ?id]
       [?e1 :address/id ?id]
       [(get-else $ ?e2 :bank/id "No Value") ?id]
       [(get-else $ ?e2 :bank/name "No Value") ?name]]
     @conn)
That is what I am trying to do @U09R86PA4
That’s just the default value?
Basically :customer/id and :address/id return 300,000 results. When I join on bank it returns 10,000.
I effectively want to keep as many results as possible, with the others enriched with extra data where it is found
Like I want it to start with :customer and effectively merge each result onto it with default values if they are not present.
Like if I joined a table on a column
It only returns results that have a :bank/id, so I lose all the previous finds.
Yes I am unifying those and then wanting to add some extra data if it is there
I don’t quite understand refs yet so no.
ok, so you’re joining by some concrete value “id” (I guess a string?) which happens to be common to :customer/id, :bank/id, and :address/id?
Yes. But there is very little bank data.
What is the natural way of doing things in Datomic? Using the refs etc.?
This is kind of a dirty-data CSV import, so I haven’t got much luxury in the naming of things. I was just assessing whether Datalog would help me with pre-processing, but it’s harder than I thought
So you would split it into 2 queries?
You could unify it into one query using or and a sentinel for the missing bank case, but you can’t pull off that sentinel
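To make that concrete, a minimal sketch of the or + sentinel approach (untested; attribute names are taken from the thread, and db stands for the database value, e.g. (d/db conn)):
(d/q '[:find (pull ?c [*]) (pull ?a [*]) ?bank-name
       :where
       [?c :customer/id ?id]
       [?a :address/id ?id]
       ;; bind the real bank name when a bank shares the id,
       ;; otherwise bind the sentinel "No Value"
       (or-join [?id ?bank-name]
         (and [?b :bank/id ?id]
              [?b :bank/name ?bank-name])
         (and (not [_ :bank/id ?id])
              [(ground "No Value") ?bank-name]))]
     db)
Customers with no matching bank still come back, with the sentinel in place of the name; the catch is exactly the one above: the sentinel can only be bound as a plain value like ?bank-name, you can’t pull the missing entity.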
Thanks. I was trying to avoid that, but I guess it requires a stronger schema.
it’s easy to find the common values via query and build the tx you need to make the refs
[?c :customer/id ?id][?a :address/id ?id] -> [:db/add ?c :customer/address ?a] for example
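Spelled out, that could look something like this (a sketch, untested; :customer/address is a hypothetical ref attribute you’d add to the schema first):
;; find customer/address pairs sharing an id and link them with refs
(let [tx-data (->> (d/q '[:find ?c ?a
                          :where
                          [?c :customer/id ?id]
                          [?a :address/id ?id]]
                        db)
                   (map (fn [[c a]] [:db/add c :customer/address a])))]
  @(d/transact conn tx-data)) ; peer API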
The problem I have is that this is a CSV ETL job: daily drops with millions of rows and tens of columns, so doing these checks doesn’t seem that efficient.
Though I could be wrong….
The CSVs are pretty much always the same… with just a bit of new data.
I’m starting to believe it.
if you aren’t becoming the source of truth for what you ingest, and that stuff is stable-shaped already and you just want to make some joins, SQL may be better
Datomic shines with graph-shaped data you’re growing with (i.e. the history of changes is important) and as a primary datastore you add to incrementally with a live application
it doesn’t do giant bulk imports well, and it can join by value, but you miss out on a lot of graph-shaped niceties
meanwhile, some SQL engines can query CSV directly, import CSV with magically fast bulk importers, and are already used to joining by value
e.g. have you considered Redshift or Athena (for cloud things at huge scale)? I think they both work by sticking table-shaped files (e.g. CSV) into S3 and then “just working”
It’s a shame, because I was using Meander to transform the CSVs before entering them into the DB, then using the DB for the joins and pulling it all out. The semantics are pretty much the same, because they both use logic programming.
We haven’t got huge scale
You mean keeping them all to the same ns of the keyword?
I mean, is there something from the unit of work you can use to mint a unique upserting attribute?
I don’t think so. It’s all good 🙂
[{:db/id "bank-123" :bank/unique-id "value-derived-from-customer}{:db/id "customer-123" :customer/bank "bank-123, ,, :other/customer "stuff",,,}]
that said, I think whether you use Datomic or not should be driven entirely by what you plan to do after you ingest these CSVs
@U1C36HC6N the convo.
@U09R86PA4 yeah, I find it interesting, though I also think I need to know SQL and tables a bit better before working with such abstractions.
Hi, I am trying to run the Datomic transactor. I downloaded Datomic and am following the steps from https://docs.datomic.com/on-prem/dev-setup.html. When I try to run the local transactor, I get this error: java.lang.Exception: 'protocol' property not set
Any ideas? Here is the full error log:
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Terminating process - Error starting transactor
java.lang.Exception: 'protocol' property not set
at datomic.transactor$ensure_args.invokeStatic(transactor.clj:116)
at datomic.transactor$ensure_args.invoke(transactor.clj:105)
at datomic.transactor$run$fn__22768.invoke(transactor.clj:387)
at clojure.core$binding_conveyor_fn$fn__5754.invoke(core.clj:2030)
at clojure.lang.AFn.call(AFn.java:18)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:835)
I downloaded Datomic, added the license key in dev-transactor-template.properties, and ran this command:
bin/transactor datomic_properties/dev-transactor-template.properties
in the dev-transactor-template.properties file @U09R86PA4?
@U09R86PA4 So this is what I have so far
protocol=dev
host=0.0.0.0
port=4334
license-key=<MY_KEY>
memory-index-threshold=32m
memory-index-max=256m
object-cache-max=128m
Now I am getting Terminating process - License not valid for this release of Datomic
you can check this at your http://my.datomic.com site
Okay, rookie mistake, my key had expired :face_palm:
licenses are perpetual, so you can use an older version (released before it expired)
@U09R86PA4 thank you so much. Local Transactor is running now
What makes the dev database not production worthy? Are there significant performance advantages to using Postgres or another SQL server? Mostly concerned about on-prem solutions at the moment.
With the dev storage protocol, storage is an embedded H2 database running inside the transactor process. As a consequence, the data you handle with Datomic lives in files on the disks of the transactor machine. From an operations perspective it's not a great architecture to couple these two concerns: your transactor process doesn't need any persisted state, so you could run it (or them) on ephemeral machines.
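For comparison, a production transactor points at external storage. A sketch of SQL-storage properties, along the lines of the sql-transactor-template.properties sample that ships with Datomic (placeholder values; adjust for your own Postgres):
protocol=sql
sql-url=jdbc:postgresql://localhost:5432/datomic
sql-user=datomic
sql-password=datomic
sql-driver-class=org.postgresql.Driver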