#datomic
2016-03-10
greywolve12:03:43

assuming I have a database value which has a certain entity id present within it, and I then use (as-of db some-point-in-time-before-that-entity-id-exists), and I do (d/with ...) on that as-of db, with a tx like [:db/add entity-id some-attr some-value], why does that still succeed? surely it should blow up with an invalid entity-id, since it technically doesn't exist at that point in time?
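
A rough repro sketch of the scenario being described; db, t-before-entity, entity-id, and :some/attr are all hypothetical placeholders:

```clj
(require '[datomic.api :as d])

;; entity-id exists in db, but not yet at t-before-entity
(let [past-db (d/as-of db t-before-entity)]
  ;; intuitively this should fail, since entity-id "doesn't exist"
  ;; in past-db, yet it succeeds:
  (d/with past-db [[:db/add entity-id :some/attr "some-value"]]))
```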

lmergen12:03:39

man, it's so tempting to start using Datomic for a project I'm working on right now, because I saw the video about resource ownership/authorization, and it just felt so elegant to solve it that way (transform the database value into a new database that only contains objects that are "owned" by the current user)
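
A minimal sketch of that idea, assuming a hypothetical :entity/owner ref attribute; d/filter returns a database value containing only the datoms for which the predicate is true:

```clj
(defn visible-to [db user-eid]
  (d/filter db
            (fn [db' datom]
              (let [owner (:db/id (:entity/owner (d/entity db' (:e datom))))]
                ;; keep unowned datoms (e.g. schema) and the user's own data
                (or (nil? owner) (= owner user-eid))))))

;; Queries against (visible-to db current-user) then behave as if
;; other users' data never existed.
```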

lmergen12:03:52

but I feel like it would introduce so much complexity at the same time

val_waeselynck13:03:40

@greywolve: weird indeed, are you positive about the entity id not existing before that?

val_waeselynck13:03:01

@lmergen: what complexity are you worried about?

lmergen13:03:11

operational complexity

lmergen13:03:21

i see that i need to maintain & operate several services

lmergen13:03:39

right now we do $everything in AWS and the ecosystem it provides

val_waeselynck13:03:54

biggest impediment is the limit on the number of processes IMO

val_waeselynck13:03:12

I do deploy it on AWS, and it's OK operations-wise

val_waeselynck13:03:21

and we're only a 2-devs startup

val_waeselynck13:03:38

but you do have to plan and accommodate for it

lmergen13:03:13

but it seriously goes against my gut feeling right now... it's a beautiful thing, but it would be awesome if there were some Datomic-as-a-service on AWS or so

greywolve13:03:23

val_waeselynck: yeah i set the entity id to about 2 years back, so pretty sure šŸ˜› when i restore the entire db back to a point in time just before that entity existed, then d/with correctly blows up when i try that tx

lmergen13:03:24

then I would be using it without even thinking about it

robert-stuttaford13:03:32

apps + 2 transactors + ddb

lmergen13:03:44

also, i wonder how well Datomic scales on a write level

robert-stuttaford13:03:55

it's not that big of an issue. there are details, but it's not rocket surgery :simple_smile:

val_waeselynck13:03:22

@lmergen: official stance on this is that your write volume must be consistent with the upper limit on your database size (10 billion datoms)

lmergen13:03:42

oh then that's not going to work anyway... shite

val_waeselynck13:03:25

@lmergen: having said that, it can be interesting to use a hybrid architecture with some NoSQL database for collecting most of your data and Datomic to post-process it

lmergen13:03:38

the thing is

lmergen13:03:50

we have a huge data warehouse (think 10 billion records / month)

lmergen13:03:59

and we have some relational database

lmergen13:03:39

and I'm trying to find a solution to query our data warehouse, mix & match it with our relational database, and not have to worry about the slow query time of our data warehouse

lmergen13:03:42

in other words, i want magic!

lmergen13:03:53

and it needs to scale up to the moon too :simple_smile:

robert-stuttaford13:03:08

worrying about running two transactors is the least of your problems, then :simple_smile:

lmergen13:03:32

yeah, I'm trying to explore whether Datomic is a solution for this

lmergen13:03:58

someone pointed me in this direction, to "pre-warm" the pipelines with our data warehouse data...

lmergen13:03:11

but I don't think it will work

val_waeselynck13:03:44

I'm not an expert in this kind of deployment šŸ˜• maybe you should just contact the guys at Cognitect

lmergen13:03:27

yeah I'm already in contact with one of their sales people, but I think what I'm after is actually some consultancy on this matter.. hmm

lmergen13:03:53

well if anyone is reading this, and thinks they can help us with this, send me a pm

Lambda/Sierra13:03:39

I think Datomic aims to fulfill the role of a traditional relational database (with better scaling characteristics than most SQL databases) more than the role of a "big data" solution.

jonahbenton14:03:58

@lmergen yeah, Datomic is not a solution to this problem, it plays in the OLTP space. Likely in your situation your SQL database will need to be able to utilize pre-calculated historical aggregations done on warehouse data, in the appropriate shapes, so that you can use SQL across both datasets. In the warehouse/SQL world this is an ETL problem: one needs to orchestrate pushes from the warehouse into some version (slave/standby/replica) of the OLTP system. This is usually very messy and complicated. What's appealing about Datomic from an architectural perspective is being able to look at this problem from a pull perspective, with layers of declaratively-defined caches.

lmergen14:03:46

yep, this is exactly the path we're going to be taking

lmergen14:03:09

as in, I don't think it's possible to do this in real time, we simply need to schedule periodic jobs

dm315:03:01

people are doing all sorts of fancy stuff to make results appear faster, e.g. https://www.mapr.com/developercentral/lambda-architecture, but all of this brings huge amounts of complexity

Ben Kamphaus17:03:14

@greywolve: branching off of an as-of point with with isn't supported.

Ben Kamphaus17:03:19

with works against the db without the filter. as-of dbs will filter out prospective transactions from with when queried against. At present, prospective branching from points in the past isn't supported.
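
A rough illustration of that explanation; eid, :some/attr, and t are hypothetical:

```clj
;; `with` succeeds even though the as-of filter hides eid at t...
(let [{db-after :db-after} (d/with (d/as-of db t)
                                   [[:db/add eid :some/attr "v"]])]
  ;; ...but querying db-after still applies the as-of filter, so the
  ;; prospective datom doesn't show up:
  (d/q '[:find ?v :in $ ?e :where [?e :some/attr ?v]]
       db-after eid))
;; => #{}  (per the explanation above)
```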

greywolve18:03:56

bkamphaus: thanks for the clarification :simple_smile:

arthur.boyer21:03:47

Hi all, I've been examining the schema of a Datomic database, using this kind of query:

```clj
(d/q '[:find ?attr
       :where
       [_ :db.install/attribute ?e]
       [?e :db/ident ?attr]]
     db)
```
But I get retracted attributes as well. Does anyone know how to filter out retracted attributes?

hiredman21:03:19

what makes you think you are getting back retracted attributes?

arthur.boyer21:03:04

I'm getting back attributes like

:customer-account-state/customer-username_retracted5632a067-4048-4186-8610-8e5286596ebe

arthur.boyer21:03:38

I've inherited this code, with no handover, so there's a possibility that this is some non-standard craziness and I just haven't found the place where it comes from.

Ben Kamphaus21:03:27

@arthur.boyer: You can't retract attributes. That looks like renaming (with a retracted+sha convention) only, and renaming is a typical way to deprecate an attribute. In general, retracted values will only appear in history databases, and you have to bind the fifth position of the datom (the ?added portion) to see if it was an add or retract op.
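
For example, a history-db query binding that fifth position might look like this (attribute and eid are hypothetical):

```clj
;; On a history db, the fifth position of the datom pattern (?added)
;; distinguishes assertions (true) from retractions (false).
(d/q '[:find ?v ?added
       :in $ ?e
       :where [?e :customer/username ?v _ ?added]]
     (d/history db) eid)
```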

arthur.boyer21:03:43

Ok that makes sense. I've run queries binding the fifth position and they came back true. I'll do some more digging and see if there's a way I can clean them up. Thanks.

Ben Kamphaus21:03:23

if you don't want those values returned by a schema check, I guess a regex against the string rep of the ident would be fine (assuming those are only cases of "retracted")
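
One way to apply that suggestion, filtering in Clojure after the schema query from earlier; the #"_retracted" pattern assumes the naming convention seen above:

```clj
(->> (d/q '[:find [?attr ...]
            :where
            [_ :db.install/attribute ?e]
            [?e :db/ident ?attr]]
          db)
     ;; drop idents whose name part carries the _retracted marker
     (remove #(re-find #"_retracted" (name %))))
```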

arthur.boyer21:03:27

That's what I've been doing, but it feels like a nasty hack, and not what I think idiomatic Datomic should look like.

arthur.boyer22:03:22

So, what do you do if you do want to retract an attribute? Do you retract all the datoms that use it?

ethangracer22:03:20

hey all, I have a question on the pull syntax BNF grammar (copied from the Pull API site):

```
pattern            = [attr-spec+]
attr-spec          = attr-name | wildcard | map-spec | attr-expr
attr-name          = an edn keyword that names an attr
wildcard           = "*" or '*'
map-spec           = { ((attr-name | limit-expr) (pattern | recursion-limit))+ }
attr-expr          = limit-expr | default-expr
limit-expr         = [("limit" | 'limit') attr-name (positive-number | nil)]
default-expr       = [("default" | 'default') attr-name any-value]
recursion-limit    = positive-number | '...'
```
I just tried putting a default-expr in as the first item in a map-spec and it worked. so I'm thinking the doc on the site is incomplete? or am I missing something?

Ben Kamphaus22:03:54

a more common pattern is to migrate data to a new Datomic database (e.g. by replaying the log with filter/transform) with a finalized schema. That works if you want to drop a lot of the initial modeling learning process, say. But I think the possibility of retracting attributes could introduce more operational complexity than keeping them and removing them from queries.
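
A very rough sketch of that log-replay approach; source-conn, target-conn, keep-datom?, and datom->tx-data are hypothetical pieces you'd write for your own schema:

```clj
;; Replay every transaction from the source log, filtering/transforming
;; datoms before transacting them against the new database.
(doseq [{:keys [data]} (d/tx-range (d/log source-conn) nil nil)]
  (let [tx-data (->> data
                     (filter keep-datom?)      ;; drop deprecated attrs, etc.
                     (map datom->tx-data))]    ;; e.g. rewrite idents/eids
    (when (seq tx-data)
      @(d/transact target-conn tx-data))))
```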

Ben Kamphaus22:03:52

@ethangracer: do you have a repro or just a code example of the specific expression that works and violates the grammar? I can take the repro case to the team and see whether the grammar or the behavior is correct.

arthur.boyer22:03:00

@bkamphaus: Thanks for that, I think the practical upshot now is that I can safely ignore things named retracted and we can consider a more radical solution later.

ethangracer22:03:26

@bkamphaus: sure, for this sample the idea is that I want to load todos from the server. but maybe I haven't sent any todos to the server yet. In which case I want to return an empty vector (to distinguish between loaded / not loaded from the server). So the BNF-compatible query is

```clj
(d/pull db [{:todos [:id :title :is-completed]}] __id-for-todo-list__)
```
which returns nil if that todo-list has no todos. If instead, I write:
```clj
(d/pull db [{(default :todos []) [:id :title :is-completed]}] __id-for-todo-list__)
```
then the pull returns [], even though this is not documented by the BNF

Ben Kamphaus22:03:15

@ethangracer: thanks, I'll investigate and get back to you.

ethangracer22:03:19

@bkamphaus: sounds good, thanks