#datomic
2015-07-20
robert-stuttaford07:07:11

@timothypratley: yup. it’s incredibly powerful. what makes it doubly so is that you can do the same with time!

robert-stuttaford12:07:56

@stuartsierra: have you perhaps solved the issue of how to stop listening to `datomic.api/tx-report-queue` in the context of your component pattern?

robert-stuttaford12:07:17

right now i’ve got a listener in a core.async thread, but i’m struggling to find a way to cleanly stop it
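The pattern under discussion might be sketched as follows: a listener on a core.async thread draining the tx-report-queue. This is a minimal sketch; `start-listener!` and `handle-tx` are hypothetical names, and `conn` is assumed to be a Datomic connection:

```clojure
(require '[datomic.api :as d]
         '[clojure.core.async :as async])

(defn start-listener!
  "Consume the tx-report-queue on a background thread."
  [conn handle-tx]
  (let [queue (d/tx-report-queue conn)]
    (async/thread
      (loop []
        ;; .take blocks until a transaction report arrives, which is
        ;; exactly why this loop is hard to stop cleanly.
        (let [report (.take ^java.util.concurrent.BlockingQueue queue)]
          (handle-tx report)
          (recur))))))
```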

stuartsierra12:07:04

@robert-stuttaford: I've found the 'Component' Lifecycle really only works for application start-up / shut-down. Anything with a shorter or different lifespan needs to be handled separately.

robert-stuttaford12:07:48

so you no longer use the tools.namespace/reset thing?

stuartsierra12:07:25

@robert-stuttaford: No, no, I use that for everything that should be started and stopped as a whole with the rest of the application.

robert-stuttaford12:07:33

this is for app start/stop, but for the development workflow, i’m doing this many times over. of course, it’s all fine if i restart the jvm. but, for obvious reasons, i don’t want to do that

robert-stuttaford12:07:47

yes. ok. in my case, this is true for the tx-report-queue as well

stuartsierra12:07:20

@robert-stuttaford: For development, I would typically use a test DB with a unique generated name.

robert-stuttaford12:07:55

ok. so there’s no clean way to repeatedly listen/unlisten to the tx-report-queue that you know of?

robert-stuttaford12:07:18

i get the test db thing, but again, in my case, i’m working with a large production database and working on a stats processing system that works with that data

robert-stuttaford12:07:13

i know why this isn’t supported, but man, it would *rock* if datomic’s api provided a tx-report chan natively.

stuartsierra12:07:37

@robert-stuttaford: Not sure what you're really trying to do here. You can use remove-tx-report-queue to disconnect the queue.

robert-stuttaford12:07:55

-sigh-. of course. thank you for being the voice of reason, Stuart. i suppose that will cause the next .take to return nil or something?

stuartsierra12:07:02

Or just create and manage the tx-report-queue outside of the component / reset.

robert-stuttaford12:07:11

i’ll see how remove-tx-.. works out

stuartsierra12:07:29

@robert-stuttaford: The Tx report queue is a BlockingQueue, so I expect once you call remove-tx-report-queue it will block forever.
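One way around the forever-blocking `.take`: poll with a timeout so the loop can periodically re-check a stop flag. A sketch, assuming a `running?` atom (a hypothetical name) holding a boolean:

```clojure
(import '[java.util.concurrent BlockingQueue TimeUnit])

(defn listen-until-stopped!
  "Drain the tx-report-queue until running? is reset to false."
  [^BlockingQueue queue running? handle-tx]
  (loop []
    (when @running?
      ;; poll returns nil after the timeout instead of blocking forever,
      ;; giving the loop a chance to re-check the stop flag.
      (when-let [report (.poll queue 100 TimeUnit/MILLISECONDS)]
        (handle-tx report))
      (recur))))

;; To stop cleanly: (reset! running? false)
;; and then (datomic.api/remove-tx-report-queue conn)
```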

robert-stuttaford12:07:55

that’s disappointing.

robert-stuttaford12:07:15

we’re using the tx-report-queue as an input for Onyx

robert-stuttaford12:07:35

i remember from the Pedestal talk at Clojure/West 2013, one of the architecture diagrams had Datomic doing this as well. i distinctly remember Tim Ewald speaking very highly of the capability. would be great if it were a little easier to start and stop listening to it in a repeatable way.

stuartsierra13:07:15

@robert-stuttaford: Should be easy enough to ignore, just close your channel.

lowl4tency13:07:55

Guys, I have a launch group with a Datomic transactor. If I want 2 transactors for HA purposes, which endpoint address should I use? Can I put up a load balancer and balance the requests? I mean an AWS ElasticLoadBalancer

lowl4tency13:07:17

@bkamphaus: sup

stuartsierra13:07:24

@lowl4tency: The Transactors don't need to be load-balanced. The active Transactor writes its location into Storage and the Peers get it from there.

stuartsierra13:07:08

You can't have two active Transactors for the same database. That's the point 🙂

lowl4tency13:07:21

stuartsierra: thanks a lot. Do you have any experience with AWS and Datomic? I'm trying to get the Datomic endpoint from a ScalingGroup with CloudFormation, but I don't see any GetAtt method for the private IP of the instances in the LaunchGroup

lowl4tency13:07:00

One method I have is to use the AWS CLI and get the list of instances, but that feels like a clumsy workaround

stuartsierra13:07:32

@lowl4tency: Not sure I understand your question. The only "Datomic endpoint" that matters is the Storage URI. Peers will automatically use that to find the Transactor. The AWS set-up scripts in the Datomic distribution manage the rest.
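To illustrate the point: a peer only needs the storage URI; the transactor's address never appears in it. A hypothetical DynamoDB-backed URI (region, table, and db name are placeholders):

```clojure
(require '[datomic.api :as d])

;; Placeholder values: region, table, and db name are illustrative.
(def uri "datomic:ddb://us-east-1/my-datomic-table/my-db")

;; The peer reads the active transactor's location from storage itself.
(comment (d/connect uri))
```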

lowl4tency13:07:17

stuartsierra: by endpoint I mean the address and port of the Datomic transactor EC2 instance

lowl4tency13:07:41

so I can pass it later to an application

lowl4tency13:07:06

So, let me clarify: I have a CloudFormation stack which runs the Datomic transactor as an EC2 instance (I got the template from the Datomic generator), and I have a stack with the application. I want to pass the Datomic transactor's address to the application.

lowl4tency13:07:16

The Datomic Transactor is running in an AutoScalingGroup, so I'm not able to get the EC2 instance's private address

stuartsierra13:07:01

@lowl4tency: I'm not sure how to get that. But you don't need it just to use Datomic. It is all handled automatically by the Transactor, Storage, and the Peer Library.

lowl4tency13:07:47

stuartsierra: I don’t need it if I just run a transactor. 🙂

lowl4tency16:07:08

hm, so, what about failover? What if one transactor fails?

lowl4tency16:07:22

the one I'm connected to

bhagany16:07:03

@lowl4tency: failover is coordinated through the storage, so you don't have to worry about it

bhagany16:07:46

the transactors store a heartbeat in storage. when the primary fails, the secondary notices and takes over transparently

bhagany16:07:14

there will be a brief window in which transact will fail, though

lowl4tency16:07:42

bhagany: but how will my app know that I have a new transactor?

bhagany16:07:16

peers know what transactor to use because they look at the storage too

lowl4tency16:07:22

Look, my transactor failed, autoscaling killed the old one and ran a new one, but the app is still configured to the old transactor

bhagany16:07:29

no, it's not

bhagany16:07:36

that information is in the storage

lowl4tency16:07:06

so, I don’t need to pass the datomic URI?

bhagany16:07:17

the datomic URI specifies the storage

lowl4tency16:07:58

if I have Postgres RDS as the backend, do I need to pass the RDS endpoint?

lowl4tency16:07:08

as datomic uri I mean

bhagany16:07:20

I haven't used postgres as a storage, but I'm pretty sure, yes
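For reference, the SQL-storage URI shape embeds the JDBC URL, so a Postgres RDS endpoint would go inside it. A sketch with placeholder host, db name, and credentials throughout:

```clojure
;; Shape: datomic:sql://<db-name>?<jdbc-url>&user=...&password=...
;; All values below are placeholders, not real endpoints.
(def uri
  (str "datomic:sql://my-db"
       "?jdbc:postgresql://my-rds-endpoint.rds.amazonaws.com:5432/datomic"
       "&user=datomic&password=secret"))
```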

lowl4tency16:07:48

bhagany: that clarifies everything 🙂

bhagany16:07:54

excellent 🙂

lowl4tency16:07:03

bhagany: do you use dynamodb ?

bhagany16:07:33

I will be, I'm currently developing our first datomic-backed service

bhagany16:07:04

but I haven't gone through all prod deployment stuff yet

lowl4tency16:07:49

bhagany: I'm almost done with Datomic on AWS 🙂

lowl4tency16:07:02

It’s really awesome

bhagany16:07:10

exciting! I hope it goes well

lowl4tency16:07:01

bhagany: it’s basically finished already

lowl4tency16:07:32

Running an app and going to test updates and other related processes

bhagany16:07:08

I see. Mine is a user-facing design system

bhagany16:07:22

for self-serve e-commerce

ghadi21:07:59

Design question: Given a process that is stepping through the Log, how to identify all changes that happened to entities in a particular partition?

ghadi21:07:26

I'd like to ignore datoms related to schema, as well as other partitions I don't care about

ghadi21:07:18

Looking for mainly the appearance of entities, or the transaction of datoms related to entities

stuartsierra21:07:51

@ghadi: You can get the partition from an entity ID with d/part, then resolve it to a keyword with d/ident.
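That suggestion might be sketched like this; `in-partition?`, `:my.partition/stats`, and `tx-report` are illustrative assumptions, not from the conversation:

```clojure
(require '[datomic.api :as d])

(defn in-partition?
  "True when entity id eid lives in the partition named part-kw."
  [db part-kw eid]
  (= part-kw (d/ident db (d/part eid))))

;; e.g. keep only datoms whose entities live in the partition of interest
(comment
  (filter #(in-partition? db :my.partition/stats (:e %))
          (:tx-data tx-report)))
```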

ghadi21:07:28

thanks, stuartsierra . Is there a better approach than filtering through the log?

ghadi21:07:05

like using d/q and making ad hoc queries against a db and its successor/predecessor?

ghadi21:07:34

(I'm trying to broadcast to other systems changes that happen to certain types of entities)

stuartsierra21:07:55

@ghadi: It really depends on the specifics of your entities and your queries.

ghadi21:07:22

Yeah. I don't care what the change actually is, I'm fine with re-publishing the entire representation of the entity (as opposed to publishing a specific delta)

stuartsierra21:07:37

The tx-report-queue and the Log, together, give you a guarantee you'll see every change when it happens.

stuartsierra21:07:09

Then it's up to you what you want to do with that information.

ghadi21:07:15

stuartsierra: I just grokked this: `:where [(tx-ids ?log ?t1 ?t2) [?tx ...]] [(tx-data ?log ?tx) [[?e]]]`. I think that will be sufficient if I filter on ?e
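Expanded into a full query, that fragment might read as follows; `conn`, `t1`, and `t2` are assumed to be in scope:

```clojure
(require '[datomic.api :as d])

(comment
  ;; All entities touched by transactions in the range [t1, t2).
  (d/q '[:find ?e ?tx
         :in $ ?log ?t1 ?t2
         :where
         [(tx-ids ?log ?t1 ?t2) [?tx ...]]
         [(tx-data ?log ?tx) [[?e]]]]
       (d/db conn) (d/log conn) t1 t2))
```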

ghadi22:07:30

follow up -- using query to join the d/log against a particular d/db.

ghadi22:07:35

works swell.