This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-01-19
Channels
- # beginners (34)
- # boot (111)
- # cider (37)
- # clara (57)
- # cljsjs (1)
- # cljsrn (22)
- # clojure (156)
- # clojure-austin (2)
- # clojure-mke (7)
- # clojure-russia (9)
- # clojure-spec (221)
- # clojure-uk (47)
- # clojurescript (42)
- # code-reviews (4)
- # community-development (9)
- # core-async (3)
- # cursive (50)
- # datomic (81)
- # emacs (12)
- # events (5)
- # hoplon (1)
- # jobs (2)
- # lein-figwheel (4)
- # leiningen (1)
- # luminus (3)
- # mount (2)
- # off-topic (1)
- # om (94)
- # om-next (3)
- # onyx (33)
- # re-frame (23)
- # reagent (41)
- # remote-jobs (9)
- # rum (30)
- # slack-help (2)
- # specter (1)
- # untangled (20)
- # yada (17)
@erichmond are you worried about latency or throughput? Keep in mind that part of the latency does not depend on the number of updates, namely the peer-transactor roundtrip
This one should be between 10 and 100 ms I'd say, depending on your network I guess
If I'm only interested in all entities with a certain attribute (and not its value), will AVET be quicker, or AEVT? Or will they be exactly the same?
@val_waeselynck Yeah, I meant non-network latency. Thanks for the answer!
hmm, as i work on seed data for the DB, i realized it is just data. so instead of manually crafting and amending the dataset or contorting regexes… i can just define it in the repl, map over it, grab the result and put it into a conformity norm file. works great with two caveats:
1) ordering and formatting are disturbed, but i can live with that…
2) reading in #db/id[:db.part/user] evals to #db/id[:db.part/user -1020063], and that one worries me.
Any ideas how to get around reader macro expansion?
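For context, that expansion happens because #db/id is a data reader that allocates a fresh tempid at read time. A minimal sketch of one workaround, assuming you round-trip the norms file with clojure.edn and supply your own reader for the tag (the reader function here is illustrative, not part of Datomic):

```clojure
(require '[clojure.edn :as edn])

;; With Datomic's data readers active, #db/id[:db.part/user] expands
;; at read time into a tempid carrying a fresh negative number.
;; Supplying our own reader for the 'db/id tag preserves the tagged
;; form instead of expanding it:
(def preserved
  (edn/read-string
   {:readers {'db/id (fn [v] (list 'db/id v))}}
   "{:db/id #db/id[:db.part/user], :price/currency 1000}"))

preserved
;; => {:db/id (db/id [:db.part/user]), :price/currency 1000}
```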
@karol.adamiec not sure I understand what you're trying to do, but regarding your questions: 1) disturbed compared to what? 2) what's the problem with that?
ignore 1, it's just that my keys are out of order and the formatting is not as nice as handcrafted.
about 2 well, i have a lot of conformity norms
and i do process them independently
it worries me that there might be conflicts?
but saying that aloud i realize it is tempids, scoped per transaction
so it is probably fine?
as long as you don't have more than 1M tempids per transaction, you should be okay 🙂
so just to put to bed my worries, the magical numbers in conformity norms are absolutely fine, no risks whatsoever… ?
magical numbers like -1020063 you mean?
i am 99% sure of that, but a confirmation would be cool 😄
oh, you mean that they appear in your edn file, correct?
yeah, handcrafted norms are nice and tidy, automated ones do include nasty -10234 identifiers
nice one is
{:db/id #db/id[:db.part/user]
:price/currency 1000
:price/country :GB}
I see, weird indeed
after going through the repl mapping:
[{:db/id #db/id[:db.part/user -1020062], :price/currency 1000, :price/country :GB}]
i then grab the value and paste into file
Just edit out the numbers in #db/id, as long as they aren't used twice.
@stuartsierra but am i right assuming that as long as the numbers are unique in a transaction they will not do harm? other than visual nastiness?
I can't see how they would hurt.
In fact, with the latest Datomic releases, you don't even need :db/id.
need to upgrade then 🙂
Is it ok to configure the transactor’s host to its external ip and the alt-host to the loopback interface?
Had it the other way around at first, but the peer printed error messages when starting. I figured that by swapping them it’d try the publicly accessible one first, which worked
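For reference, a minimal sketch of the relevant transactor properties file under that arrangement; the addresses below are placeholders, and the "external first" ordering is the behavior reported above rather than something asserted from the docs:

```
protocol=dev
# externally reachable address (placeholder)
host=203.0.113.10
# loopback, for peers running on the same machine
alt-host=127.0.0.1
port=4334
```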
how can i find which version of the transactor is running? i suspect i have an older version on AWS than what i specified in my automation scripts… ;/
ActiveMQNotConnectedException AMQ119007: Cannot connect to server(s). Tried with all available servers. org.apache.activemq.artemis.core.client.impl.ServerLocatorImpl.createSessionFactory (ServerLocatorImpl.java:799)
downgrading to peer library 0.9.5394
helps to alleviate the problem
when you upgrade the transactor (assuming you’re using our cloudformation scripts), you need to specify the new version in the template CF file then run ensure-cf again before starting the stack
yes, i located the s3 logs. version is not what i expected 🙂. thx
the whole problem stemmed from the fact that the update to the autoscaling group did not bring down the old transactor… once i brought it down by hand, the new one started correctly
when I upgrade I tend to stand up a whole new stack (2 transactors) with a new name (i.e. alternate between stackName-left and stackName-right)
then once the new set are up and i see metrics from them i remove the entire old stack
@souenzzo No, d/pull only supports navigation among entities through :db.type/ref attributes.
So @stuartsierra, is there any (simple) way to turn (nested) maps into EAV?
@souenzzo It's not part of the Datomic API. It's not that hard to write a recursive function to do it. If you want to use Datomic query (`d/q`) over collections of maps, you could transact them into a temporary, in-memory Datomic database.
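To make the in-memory route concrete, a hedged sketch (it needs the Datomic peer library on the classpath; the URI and attribute names are made up for illustration):

```clojure
(require '[datomic.api :as d])

(def uri "datomic:mem://scratch")  ; in-memory, discarded with the JVM
(d/create-database uri)
(def conn (d/connect uri))

;; minimal schema for the attributes the maps use
@(d/transact conn
  [{:db/id (d/tempid :db.part/db)
    :db/ident :price/country
    :db/valueType :db.type/keyword
    :db/cardinality :db.cardinality/one
    :db.install/_attribute :db.part/db}])

;; transact the collection of maps, then query with d/q
@(d/transact conn [{:db/id (d/tempid :db.part/user)
                    :price/country :GB}])

(d/q '[:find ?e ?c :where [?e :price/country ?c]]
     (d/db conn))
```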
surely there's a fn somewhere inside of Datomic that does it. is it not possible to run internal fns? (or is there some licensing issue?)
@devth @souenzzo There are some subtleties here. E.g., should tx function invocations be expanded; you need the db to access attribute schema; are you ok that the db might change; do you want to eagerly resolve lookup refs or not; should string tempids be converted to numeric tempids; should we auto-create tempids if they're missing (with partition inference)
so datomic is internally making those decisions when you call transact with a list of maps, right?
yes, but it only cares about the final expansion, and lots of that can remain internal impl details
making it a proper public api function would require setting the output contract, and there can be significant variation in what people want
that said, you can write a very simple (but naive) map to datom expander if you don't need to handle the full range of possible input
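A minimal sketch of such a naive expander, assuming every map (including nested ones) already carries its own :db/id, and ignoring schema, lookup refs, tempid allocation, and tx functions — exactly the subtleties listed above:

```clojure
(defn map->eav
  "Naively flatten entity map m into [e a v] triples.
  Nested entity maps become a ref triple plus their own triples."
  [m]
  (let [e (:db/id m)]
    (mapcat
     (fn [[a v]]
       (cond
         ;; nested entity map -> ref triple, then recurse
         (map? v)
         (cons [e a (:db/id v)] (map->eav v))

         ;; collection of nested entity maps (cardinality-many refs)
         (and (coll? v) (seq v) (every? map? v))
         (mapcat #(cons [e a (:db/id %)] (map->eav %)) v)

         ;; collection of scalars (cardinality-many values)
         (coll? v)
         (map (fn [x] [e a x]) v)

         ;; plain scalar value
         :else
         [[e a v]]))
     (dissoc m :db/id))))
```

For example, `(map->eav {:db/id 1 :price/country :GB :price/of {:db/id 2 :item/sku "A1"}})` yields the triples `[1 :price/country :GB]`, `[1 :price/of 2]`, and `[2 :item/sku "A1"]`.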
sounds awesome. out of curiosity, what are potential use cases for needing tx to be in List form rather than Map? mine is authorization of transactions.
we wanted to break up big txes semi-automatically, but that requires extra cross-tx tempid tracking
@devn @favila For example, https://github.com/stuartsierra/mapgraph is a trivial "database" that flattens nested maps. https://github.com/tonsky/datascript must contain a similar procedure.
@stuartsierra that looks great, been meaning to implement something like mapgraph for a while now
Have you considered an api where the graph implements the associative interfaces, so you could just use clojure.core/update-in et al and it’d automatically follow the links?
@jfntn No. That would greatly increase the scope of the library. The point was to make something simple. update-in isn't really needed: you can pull the entity you want, update it like an ordinary map, and add it back into the graph. I considered implementing an interface similar to Datomic's d/entity, but even that is probably more complexity than I want to deal with.
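For what it's worth, a sketch of that pull/update/add round trip against mapgraph; the function names here are recalled from the project README, so double-check them against the actual library:

```clojure
(require '[com.stuartsierra.mapgraph :as mg]
         '[clojure.string :as str])

(def db
  (-> (mg/new-db)
      (mg/add-id-attr :user/id)  ; declare the unique identity attribute
      (mg/add {:user/id 1
               :user/name "Pat"
               :user/friend {:user/id 2 :user/name "Reese"}})))

;; pull the entity out as a plain map...
(def pat (mg/pull db [:user/id :user/name] [:user/id 1]))

;; ...update it like any map, and add it back into the graph
(def db' (mg/add db (update pat :user/name str/upper-case)))
```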
Right, I was mentioning update-in because we have use cases where we want to perform updates at a path deep in the graph without incurring the cost of a normalization round-trip
I'm sure you could write something for that specific use case. It wouldn't work generally because entity references may be inside collections.