#datomic
2018-08-22
joshkh 09:08:51

back in june @denik asked the question > "is there a story for a webapp that uses datomic ion and websockets?". i'm very interested in this too. my (webapp) project is very websocket heavy, so AWS API Gateway isn't useful for me. is that a roadblock?

Petrus Theron 10:08:44

You might be able to use AWS IoT for this and call out to your Ions as needed, as IoT supports MQTT over websockets: https://docs.aws.amazon.com/iot/latest/developerguide/protocols.html#mqtt-ws

joshkh 13:08:11

thanks, i'll look into it!

denik 14:08:44

@joshkh please keep me posted if you find a good way to make it work

Joe Lane 16:08:08

@joshkh that is exactly what you should do. I demo’ed this myself and it’s magnificent. I’m working on a writeup, but I need to set up the right blogging system with work first.

joshkh 17:08:58

cheers! this seems to be exactly what i'm looking for. very much looking forward to the tutorial. 🙂

Joe Lane 18:08:16

Haha, cool, y’all found my twitter account.

eoliphant 13:08:59

Sweet. I actually have a couple of upcoming user stories for exactly this lol

Petrus Theron 10:08:23

Is there a way to access the Datalog internals so I can block queries that touch certain attributes? Basically, I want to whitelist attributes for querying or pulling.

Petrus Theron 10:08:50

It would be great if there was an intermediate Datalog parser step that returned a list of "compiled attributes" for a given query before being run, so that I could block the query. For wildcards, I'd like to constrain the list of pulled attributes.
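For illustration, a minimal sketch of such a pre-flight check, written against plain query data rather than any real Datomic parser hook (allowed-attrs and assert-allowed! are made-up names, the check deliberately over-approximates, and it does not address the wildcard-narrowing part of the request):

(def allowed-attrs #{:user/name :user/email})

;; over-approximates: any namespaced keyword anywhere in the query form
;; (including lookup refs) is treated as a potential attribute
(defn query-attrs [query]
  (->> (tree-seq coll? seq query)
       (filter #(and (keyword? %) (namespace %)))
       (into #{})))

(defn assert-allowed! [query]
  (when-let [bad (seq (remove allowed-attrs (query-attrs query)))]
    (throw (ex-info "query touches non-whitelisted attributes"
                    {:blocked (set bad)})))
  query)

;; usage: (d/q (assert-allowed! '[:find ?n :where [?e :user/name ?n]]) db)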

joshkh 13:08:15

i remember seeing someone post a keep-alive wrapper for the datomic SOCKS proxy script. anyone have a link?

jaret 16:08:39

I just copied out the advice given earlier in slack (which I was able to find on an archive)

jaret 16:08:52

Please feel free to add any experience reports to that thread ^

joshkh 17:08:09

cheers, thanks Jaret

markbastian 15:08:43

I'm having an issue transacting a large amount of data: I keep getting the exception "db.error/transactor-unavailable Transactor not available". I have on the order of ~100k objects being transacted with transact-async, each one resulting in about 5-10 datoms. I'm partitioning the data so that I'm performing a large number of small transactions, and I've tried a variety of sizes for "small", such that the transaction report queue shows txdata sizes of roughly 100-10000 datoms depending on how I partition. It looks like the transactor gets saturated, then once it recovers it starts writing again. I've tried setting a high timeout (-Ddatomic.txTimeoutMsec=120000) and that doesn't seem to help. The log does give a warning: "2018-08-22 07:58:07.869 WARN default o.a.activemq.artemis.core.server - AMQ222183: Blocking message production on address 'test-98bbb5cd-6025-4228-aee8-f0ee886d5a82.tx-submit'; size is currently: 263,836 bytes; max-size-bytes: 262,144." Could the problem be an Artemis configuration? If so, how would I configure that for Datomic? I can't seem to find any information on it. Any ideas? Thanks!

markbastian 15:08:41

Additional data: I'm using datomic-pro-0.9.5697 with a postgresql backing store configured basically the way the datomic site says to configure it. My current transactor template file has the "recommended settings" for production enabled and write-concurrency=8.

marshall 16:08:28

@markbastian https://docs.datomic.com/on-prem/capacity.html#plan-for-back-pressure If you’re hitting backpressure you need to slow your import and/or wait

marshall 16:08:01

Datomic OnPrem doesn’t provide “flow control” in the form of a queue or work limiting in the peer library; if you need that kind of upstream arbitration you need to supply it

markbastian 16:08:08

Should transact-async just queue up the data in the Artemis queues to get picked up eventually?

markbastian 16:08:35

That was my understanding when I read that before, but I am still pretty new to datomic, especially the transactor.

markbastian 16:08:44

Ok, I think I figured something out. If I call transact-async and let it do its thing, it works very well. I can write hundreds of thousands of datoms in not much more time than it takes to process the data. The issue is when I dereference the future it returns. I want to be able to catch any exceptions in the transaction, and it seems like the only way to do this is to eventually deref the future. I tried putting that in another future, but that still seems to block the transactor (or does something else that hurts performance).

markbastian 16:08:34

So, I think my only issue is how I can catch possible exceptions on the transaction without turning it into a blocking operation.

conan 16:08:12

What's the easiest way of reliably scheduling a call to backup-db for a datomic cluster built using the CloudFormation template?

markbastian 16:08:33

To put it quite clearly, I can do this just fine and it is very performant:

(datomic/transact-async conn [[:my "data"]]) ;Assume for the moment that data may or may not be valid.
But if I do this it performs terribly:
(future
  (try
    @(datomic/transact-async conn [[:my "data"]])
    (catch Exception e (.printStackTrace e))))
Should I be using core.async as shown here: https://docs.datomic.com/on-prem/best-practices.html#pipeline-transactions? It still seems like this would dereference the future and you'd have the same issues.
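A sketch of the pipelining pattern that doc describes, assuming core.async is on the classpath (transact-all is a made-up name, and the batch shape is illustrative). pipeline-blocking derefs each future on one of a bounded pool of worker threads, so failures are caught per transaction without a single blocking deref serializing submission:

(require '[clojure.core.async :as a])

(defn transact-all [conn in-flight batches]
  (let [in  (a/to-chan batches)   ; channel of tx-data batches, closes when done
        out (a/chan 100)]
    ;; at most in-flight transactions pending at once = built-in backpressure
    (a/pipeline-blocking
      in-flight
      out
      (map (fn [batch]
             (try
               @(datomic/transact-async conn batch)
               (catch Throwable t {:error t :batch batch}))))
      in)
    out))

;; drain `out`: results containing an :error key are failed transactions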

marshall 16:08:38

if you don’t deref the future it’s fire and forget

marshall 16:08:20

if you need to know whether the transaction failed or not you either have to keep the future and deref it (only tells you if it succeeded, not if it potentially failed) or query for the things later to ensure they got in there

markbastian 17:08:53

Yeah, my general case is fire and forget and it works very well. I just would like some facility to report an exception if and only if one occurs. I suppose I could put the futures on a queue and occasionally poll them to see if they've been realized and then check for exceptional behavior. Alternatively, if I suspect a problem I could enable derefing the future only in those cases. Thanks for the tips!

eraserhd 19:08:40

Hey, wait... Datomic is reordering query clauses???

jaret 17:08:41

There is no feature to re-order clauses. Clause order is up to you. If you’re seeing something unexpected here, please feel free to log a support case to [email protected]. https://docs.datomic.com/on-prem/query.html#clause-order
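The clause-order point from that doc in miniature (the :item attributes here are made up): the same query can be slow or fast depending on which clause binds ?e first.

;; slower: binds ?e from every :item/name datom, then narrows by ?id
'[:find ?name :in $ ?id
  :where [?e :item/name ?name]
         [?e :item/id ?id]]

;; faster: the selective :item/id lookup binds ?e first
'[:find ?name :in $ ?id
  :where [?e :item/id ?id]
         [?e :item/name ?name]]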

eraserhd 18:08:21

Ok. It's specifically in calling a user function. We have a work-around, but I'll report there.

eraserhd 19:08:43

Is this new?

kenji_signifier 23:08:01

Hi, I’m looking at Ions to implement a business-specific layer on top of Datomic to support operations such as money transfers and stock trading (transferring share amounts between accounts). It seems to be a good fit, but I’d also like to publish post-commit events to Kafka topics. I think it could be achievable with datomic.Connection.txReportQueue in the Peer API, but it doesn’t seem to exist in the Client API. Are there alternatives to achieve this in Datomic Cloud?

stuarthalloway 01:08:46

not at present, but you could poll
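A rough sketch of that polling approach, assuming the Client API's tx-range (which returns maps with :t and :data); poll-tx-log! and publish! are hypothetical names, with publish! handing each transaction's datoms to a Kafka producer:

(require '[datomic.client.api :as d])

(defn poll-tx-log! [conn start-t publish!]
  (loop [t start-t]
    (let [txs (seq (d/tx-range conn {:start t}))]
      (doseq [{:keys [data]} txs]
        (publish! data))
      (Thread/sleep 1000)  ; loops forever; run it on its own thread
      ;; resume from just past the last transaction seen
      (recur (if txs (inc (:t (last txs))) t)))))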

Dustin Getz 15:08:10

Can you write to a queue in a transaction function? Rich was talking about this at Conj party '18. (Be careful not to block in the transactor)

kenji_signifier 03:08:15

@U09K620SG thx for the idea, and I read the Ions launch day questions as well. In this case I’d like to capture post-tx events, so a transaction function may not be the best place, as I need to consider multiple invocations and the tx failure case. @U072WS7PE Is it possible to connect to a Datomic Cloud DB via the Peer API if I run it from a VPC-peered network to get around the security restrictions? Supposing it’s possible, would it be considered “at your own risk” or a proper approach?

eoliphant 13:08:52

@U1QA1G3UH we’re using http://www.onyxplatform.org/ to stream out transactions and do other stuff with them. @U0509NKGK has a great blog post on how they did it https://www.stuttaford.me/2016/01/15/how-cognician-uses-onyx/ The preferred approach to accessing your datomic endpoint from another VPC is using AWS’ VPC endpoint feature https://docs.datomic.com/cloud/operation/client-applications.html#separate-vpc

kenji_signifier 20:08:24

Hi @U380J7PAQ, thx for the info. The Onyx Datomic plugin had only Peer API support until I contributed the PR to add support for the Datomic Client API and Datomic Cloud in February, so I presume you’re using the Peer API 🙂 https://github.com/onyx-platform/onyx-datomic/pull/31 BTW, I decided to use Kafka Streams instead. (And now Distributed Masonry is part of Confluent!) My question is whether there is a way to access Datomic Cloud via the Peer API. Yes, I know I can VPC peer, but I’d like to know if accessing Datomic Cloud via the Peer API is a) possible, b) supported.