2015-07-20
@timothypratley: yup. it’s incredibly powerful. what makes it doubly so is that you can do the same with time!
@stuartsierra: have you perhaps solved the issue of how to stop listening to datomic.api/tx-report-queue in the context of your component pattern?
right now i’ve got a listener in a core.async thread, but i’m struggling to find a way to cleanly stop it
@robert-stuttaford: I've found the 'Component' Lifecycle really only works for application start-up / shut-down. Anything with a shorter or different lifespan needs to be handled separately.
so you no longer use the tools.namespace/reset thing?
@robert-stuttaford: No, no, I use that for everything that should be started and stopped as a whole with the rest of the application.
this is for app start/stop, but for the development workflow, i’m doing this many times over. of course, it’s all fine if i restart the jvm. but, for obvious reasons, i don’t want to do that
yes. ok. in my case, this is true for the tx-report-queue as well
@robert-stuttaford: For development, I would typically use a test DB with a unique generated name.
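A minimal sketch of the generated test-DB pattern being suggested here, assuming an in-memory storage URI; the helper name is illustrative, not from this conversation:

```clojure
(require '[datomic.api :as d])

;; Assumed pattern: a fresh, uniquely named in-memory DB per dev session.
;; The URI scheme and helper name are illustrative only.
(defn scratch-conn
  "Create a connection to a brand-new, uniquely named in-memory database."
  []
  (let [uri (str "datomic:mem://" (d/squuid))]
    (d/create-database uri)
    (d/connect uri)))
```

Each call yields an isolated database that can be thrown away with d/delete-database when the REPL session is reset.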
ok. so there’s no clean way to repeatedly listen/unlisten to the tx-report-queue that you know of?
i get the test db thing, but again, in my case i'm working against a large production database, building a stats processing system over that data
i know why this isn’t supported, but man, it would *rock* if datomic’s api provided a tx-report chan natively.
@robert-stuttaford: Not sure what you're really trying to do here. You can use remove-tx-report-queue to disconnect the queue.
-sigh-. of course. thank you for being the voice of reason, Stuart. i suppose that will cause the next .take to return nil or something?
Or just create and manage the tx-report-queue outside of the component / reset.
i’ll see how remove-tx-.. works out
@robert-stuttaford: The Tx report queue is a BlockingQueue, so I expect once you call remove-tx-report-queue it will block forever.
that’s disappointing.
we’re using the tx-report-queue as an input for Onyx
i remember from the pedestal talk at Clojure/West 2013, one of the architecture diagrams had Datomic doing this as well. i distinctly remember Tim Ewald speaking very highly of the capability. would be great if it were a little easier to start and stop listening to it in a repeatable way.
@robert-stuttaford: Should be easy enough to ignore, just close your channel.
Hi all
Guys, I have a launch group with the Datomic transactor. If I want 2 transactors for HA purposes, which endpoint address should I use? Can I put up a load balancer and balance the requests? I mean an AWS Elastic Load Balancer
@lowl4tency: The Transactors don't need to be load-balanced. The active Transactor writes its location into Storage and the Peers get it from there.
You can't have two active Transactors for the same database. That's the point
stuartsierra: thanks a lot. Do you have any experience with AWS and Datomic? I'm trying to get the Datomic endpoint from the ScalingGroup with CloudFormation, but I don't see any way to GetAtt the private IP of the instances in the LaunchGroup
One method I do have is to use the AWS CLI and get the list of instances, but that feels like reinventing the wheel
@lowl4tency: Not sure I understand your question. The only "Datomic endpoint" that matters is the Storage URI. Peers will automatically use that to find the Transactor. The AWS set-up scripts in the Datomic distribution manage the rest.
stuartsierra: by endpoint I mean the address and port of the Datomic transactor EC2 instance
so I can pass it later to an application
So, let me clarify: I have a CloudFormation stack which runs the Datomic transactor as an EC2 instance (I got the template from the Datomic generator), and I have a stack with the application. I want to pass the Datomic transactor's address to the application.
The Datomic transactor is running in an AutoScalingGroup, so I'm not able to get the EC2 instance's private address
@lowl4tency: I'm not sure how to get that. But you don't need it just to use Datomic. It is all handled automatically by the Transactor, Storage, and the Peer Library.
stuartsierra: I don’t need it if I just run a transactor.
hm, so, what about failover? What if the transactor I'm connected to fails?
@lowl4tency: failover is coordinated through the storage, so you don't have to worry about it
the transactors store a heartbeat in storage. when the primary fails, the secondary notices and takes over transparently
bhagany: but how will my app know that I have a new transactor?
Look, my transactor failed, autoscaling killed the old one and started a new one, but the app is still configured to point at the old transactor
so, I don’t need to pass the datomic URI?
if I have postgres RDS as the backend, do I need to pass the RDS endpoint?
as the datomic URI, I mean
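For illustration, a sketch of what the storage URI looks like with a SQL (Postgres) backend; the database name, RDS endpoint, and credentials below are placeholders. The peer connects with this URI only and discovers the active transactor through storage:

```clojure
(require '[datomic.api :as d])

;; Placeholder values throughout: db name, RDS endpoint, credentials.
;; The peer only ever sees this storage URI; the active transactor's
;; address is read from storage, not configured in the app.
(def uri
  "datomic:sql://my-db?jdbc:postgresql://my-rds-endpoint:5432/datomic?user=datomic&password=datomic")

(def conn (d/connect uri))
```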
bhagany: that clarifies it all
bhagany: do you use dynamodb?
bhagany: I'm almost done with datomic on aws
It’s really awesome
bhagany: it’s in a finished state already
Running an app and going to test updates and other related processes
Design question: Given a process that is stepping through the Log, how do I identify all changes that happened to entities in a particular partition?
I'd like to ignore datoms related to schema, as well as other partitions I don't care about
Mainly looking for the appearance of entities, or transactions of datoms related to those entities
@ghadi: You can get the partition from an entity ID with d/part, then resolve it to a keyword with d/ident.
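A rough sketch of that suggestion, filtering a batch of datoms down to one partition; the partition keyword and db value are assumptions:

```clojure
(require '[datomic.api :as d])

;; Sketch: keep only datoms whose entity lives in a given partition.
;; :my.part/orders and `db` are illustrative names, not from the conversation.
(defn datoms-in-partition
  [db part-kw datoms]
  (filter (fn [datom]
            (= part-kw (d/ident db (d/part (:e datom)))))
          datoms))

;; e.g. (datoms-in-partition db :my.part/orders (:data tx))
```

Applied to the :data of each transaction from d/tx-range, or to the :tx-data of a tx-report, this keeps only datoms whose entities were created in the given partition.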
(I'm trying to broadcast to other systems changes that happen to certain types of entities)
@ghadi: It really depends on the specifics of your entities and your queries.
Yeah. I don't care what the change actually is, I'm fine with re-publishing the entire representation of the entity (as opposed to publishing a specific delta)
The tx-report-queue and the Log, together, give you a guarantee you'll see every change when it happens.
Then it's up to you what you want to do with that information.
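A hedged sketch of that combination: subscribe to the queue first, catch up from the Log, then follow live reports. The handler and starting point are assumptions, and the handler should tolerate the occasional duplicate at the boundary:

```clojure
(require '[datomic.api :as d])

;; Sketch only: Log for catch-up, tx-report-queue for live changes.
;; `handle-tx-data` and `last-seen-t` are assumptions for illustration.
(defn follow-changes
  [conn last-seen-t handle-tx-data]
  ;; Subscribe before catching up so nothing is missed in between.
  (let [^java.util.concurrent.BlockingQueue queue (d/tx-report-queue conn)
        log (d/log conn)]
    ;; Replay everything transacted since last-seen-t.
    (doseq [{:keys [data]} (d/tx-range log last-seen-t nil)]
      (handle-tx-data data))
    ;; Then follow live transactions; duplicates are possible at the
    ;; boundary, so the handler should be idempotent.
    (while true
      (handle-tx-data (:tx-data (.take queue))))))
```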