#datomic
2017-06-12
bbqbaron00:06:27

yes, i believe you can convert datoms into a stream of some sort. one moment…

bbqbaron00:06:41

i’ve never had to use it, but something like that?

az02:06:02

Great, thank you. Any ideas on a realtime strategy for Datomic?

misha07:06:30

@chelsey how is "not requiring you to pack every possible query-result modification into the query itself" painful? kappa I'd rather `->>`/`map`/`filter` all the things than build monstrous queries (there is no execution planner to optimize your queries, btw)

misha08:06:25

find ids, pull some attributes, aggregate or whatever, :beach_with_umbrella:
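For example, a minimal sketch of that query-then-process style (attribute names like `:person/name` and `:person/email` are just illustrative):

```clojure
(require '[datomic.api :as d])

;; Keep the Datalog small (just find the ids), then shape the
;; result with ordinary seq fns instead of a monstrous query.
(->> (d/q '[:find [?e ...]
            :where [?e :person/name]]
          db)
     (map #(d/pull db [:person/name :person/email] %))
     (filter :person/email)
     (sort-by :person/name))
```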

bbqbaron11:06:44

@aramz what did you mean by “realtime strategy?” assuming you’re not referring to Starcraft, of course 😛

joshg18:06:49

If you’re in the DFW area tomorrow, we’re talking about Om Next and Datomic at the Clojure Meetup: https://www.meetup.com/DFW-Clojure/events/239721041/

az18:06:38

@bbqbaron haha sorry, yes that is correct, I meant to say a strategy for building realtime apps with Datomic.

az18:06:23

Is there some way to have the notion of change streams like RethinkDB? Would there be a reasonable way to create a map of queries that gets looked up on new queries, and invalidated when a new transaction would alter the result?

az18:06:12

Basically, how could we know if the query `[?e :person/name ?name]` would have a new result after adding a new person to the database?

az18:06:58

Then theoretically, could we have peers spin up and manage connections, streaming changes for a list of client queries?

az19:06:41

Or would a naive approach work better: simply rerun each live query on each peer? Do I understand correctly that those will likely run against the cache anyway?

jdkealy19:06:34

are there any examples of how to call `tx-pipeline`? I see the function in the docs, but I'm confused about how to use it.
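A minimal sketch of that pipelining pattern, assuming a peer `conn` and a seq of transaction batches; the names here are illustrative, not the exact example from the docs:

```clojure
(require '[datomic.api :as d]
         '[clojure.core.async :as a])

;; Keep up to `conc` d/transact-async calls in flight by
;; deref-ing them on pipeline-blocking's worker threads.
(defn tx-pipeline [conn conc tx-batches]
  (let [from-ch (a/to-chan! tx-batches)
        to-ch   (a/chan conc)]
    (a/pipeline-blocking conc
                         to-ch
                         (map (fn [batch] @(d/transact-async conn batch)))
                         from-ch)
    ;; drain results, counting completed transactions; a real
    ;; version would also surface transaction errors
    (a/<!! (a/reduce (fn [n _] (inc n)) 0 to-ch))))
```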

uwo19:06:04

when y’all write data migrations, do you stick with the typical approach of always having a revert procedure for each migration?

Lambda/Sierra19:06:30

@aramz Datomic does not have "streaming queries" in that sense. Each peer receives a stream of all new transactions in the database, but it is up to the application to figure out what to do with that. For a small number of simple queries, it might be OK to re-run them all on each transaction. For more complex scenarios, you will need to write custom code to examine each transaction and decide if it affects the data you are interested in.
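For example, a minimal sketch of that examine-each-transaction approach, assuming a peer `conn`: `d/tx-report-queue` hands the application every transaction report, and the query from earlier (`[?e :person/name ?name]`) is only re-run when a datom touches `:person/name`.

```clojure
(require '[datomic.api :as d])

(defn watch-person-names
  "Calls on-change with fresh query results whenever a transaction
  touches :person/name. Minimal sketch; no shutdown handling."
  [conn on-change]
  (let [queue (d/tx-report-queue conn)]   ; blocking queue of tx reports
    (future
      (loop []
        (let [{:keys [db-after tx-data]} (.take queue)
              name-attr (d/entid db-after :person/name)]
          ;; tx-data holds this transaction's datoms; re-run the query
          ;; only when one of them asserts or retracts :person/name
          (when (some #(= name-attr (:a %)) tx-data)
            (on-change (d/q '[:find ?e ?name
                              :where [?e :person/name ?name]]
                            db-after)))
          (recur))))))
```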

Lambda/Sierra19:06:21

@uwo Generally, no. The "revert migration" doesn't really make any sense in Datomic: Schema is added, but never removed (in production). See https://github.com/rkneufeld/conformity as an example of a Datomic schema migration strategy.
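A minimal sketch of the conformity pattern, assuming a `conn` and a `resources/migrations.edn` file holding named norms:

```clojure
(require '[io.rkn.conformity :as c])

;; resources/migrations.edn — each named norm is transacted at most once:
;; {:my-app/add-person-schema
;;  {:txes [[{:db/ident       :person/name
;;            :db/valueType   :db.type/string
;;            :db/cardinality :db.cardinality/one}]]}}

(def norms-map (c/read-resource "migrations.edn"))

;; safe to call at every app startup; already-conformed norms are skipped
(c/ensure-conforms conn norms-map [:my-app/add-person-schema])
```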

uwo19:06:31

@stuarthalloway does the answer change if it's only about data migrations, not schema migrations? I know I don't need to remove attributes

Lambda/Sierra19:06:07

@uwo Still no. You can never truly "go back to the way it was" before the migration. You can only add new data, which might include retractions of data added in previous migrations.

Lambda/Sierra19:06:34

There is a difference in how we tend to work with Datomic in development versus production. In development, it is common to delete and recreate a database many times until you are satisfied with the schema. But once you have real data in production, you can only add.

uwo19:06:37

is there no such thing as forward data migrations?

uwo19:06:03

right, we’re trying to figure out how data migrations will work in production right now

Lambda/Sierra19:06:40

In production, you only ever add new information (which may include retractions or schema alterations).
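Concretely, a hypothetical forward data migration is just one more transaction (`person-id` is illustrative): the old value is retracted, not deleted, and stays visible in history.

```clojure
(require '[datomic.api :as d])

;; "Fixing" data is itself new information: retract the current
;; value and assert the corrected one. History shows both.
@(d/transact conn
             [[:db/retract person-id :person/email "old@example.com"]
              [:db/add     person-id :person/email "new@example.com"]])
```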

Lambda/Sierra19:06:35

Every "migration" is just transacting new things into the database. Conformity is one way to manage this.

uwo19:06:17

thanks @stuartsierra we’re wrestling with some misconceptions

nwjsmith20:06:12

I think you've got the wrong stu