#datomic
2017-07-15
souenzzo03:07:03

(d/transact @conn [{:db/ident       :foo/bar
                    ;:db/cardinality :db.cardinality/one
                    :db/valueType   :db/ident}])
=> ":db.error/invalid-install-attribute Error: {:db/error :db.error/schema-attribute-missing, :attribute :db/cardinality, :entity #:db{:id 65, :ident :foo/bar, :valueType 10}}",
(d/transact @conn [{:db/ident       :foo/bar
                    :db/cardinality :db.cardinality/one
                    :db/valueType   :db/ident}])
=> ":db.error/invalid-install-attribute Error: {:db/error :db.error/schema-attribute-missing, :attribute :db/cardinality, :entity #:db{:id 65, :ident :foo/bar, :valueType 10}}",
=> ":db.error/not-a-value-type Not a value type: :db/ident",
- How does Datomic know that an attribute is missing? - How does Datomic know that :db/ident is not a valueType? Is there some tool for this, like schema/`spec`?

favila04:07:28

Look at entity 0

favila04:07:59

(The :db.part/db entity)
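
(A minimal sketch of what "look at entity 0" means in practice, assuming a peer with datomic.api required as d and a connected conn: entity 0 is :db.part/db, and its :db.install/* attributes list the installed attributes and legal value types.)

(require '[datomic.api :as d])

;; Entity 0 is :db.part/db. Touching it shows the :db.install/* attributes,
;; e.g. :db.install/valueType (the legal value types) and
;; :db.install/attribute (the attributes installed so far).
(let [db (d/db conn)]
  (d/touch (d/entity db 0)))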

mss12:07:16

hello all, new to datomic and considering trying to use it as a datastore on a project. for my specific use case, there’s a set of attributes I’d like to store that might exhaust the 10 billion datom soft limit relatively quickly, and jamming that data into another datastore is obviously un-ergonomic. I was looking for some clarification around how db/noHistory works. it seems from my tests that facts are still accumulated (I’m testing off the memory version of datomic), as opposed to something resembling an update-in-place operation when a new value is transacted for an attribute. is that actually the case? beyond that, is excision an option if I don’t have a particularly critical retention window? anecdotally, excision seems to put tremendous pressure on the db and doesn’t seem like a long-term solution either. appreciate any input or suggestions

hmaurer13:07:36

@mss I assume you read this bit from the doc: > The purpose of :db/noHistory is to conserve storage, not to make semantic guarantees about removing information. The effect of :db/noHistory happens in the background, and some amount of history may be visible even for attributes with :db/noHistory set to true. ?

hmaurer13:07:05

Beyond that I am too much of a newb to help you at the moment

mss13:07:53

yep, that’s what I’m looking at

mss13:07:54

seems to suggest that facts don’t accrete, and that only some amount of mostly recent facts is stored. my experience using the mem transactor/storage was that all the facts were retained. wondering whether that’s actually the case in a production setup, and if so whether there’s another solution I might be missing
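
(A sketch of the kind of check being described, with a hypothetical entity id and attribute name: querying the history db shows whether superseded values of a :db/noHistory attribute are still visible.)

;; some-entity-id and :example/no-history-attr are placeholders.
(d/q '[:find ?v ?tx ?added
       :in $ ?e ?a
       :where [?e ?a ?v ?tx ?added]]
     (d/history (d/db conn))
     some-entity-id
     :example/no-history-attr)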

val_waeselynck14:07:09

@mss I believe db/noHistory takes effect when recent datoms are compacted into an index segment: it does not change the fact that db values are immutable
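
(For reference, :db/noHistory is just a flag in the attribute's schema; a minimal sketch with a hypothetical attribute, whose superseded values get dropped at indexing time rather than at transaction time:)

(d/transact conn
  [{:db/ident       :sensor/latest-reading ; hypothetical attribute
    :db/valueType   :db.type/double
    :db/cardinality :db.cardinality/one
    :db/noHistory   true}])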

val_waeselynck14:07:40

@mss if you have too much data for Datomic, I suggest you try and figure out if some of the data could go to a complementary data store (e.g S3 or a KV store)

mss14:07:55

yep that def makes sense

mss14:07:23

and I’m leaning away from datomic for that specific set of attrs, just wanted to make sure I wasn’t missing something obvious. still wrapping my head around the tech

val_waeselynck14:07:55

We typically have 5% of our data and 95% of our schema in Datomic

hmaurer16:07:30

@val_waeselynck so datomic is essentially a database of pointers to external storage for you?

hmaurer16:07:40

I assume you then have to ensure that this external storage is also immutable?

hmaurer16:07:50

I mean, that you interact with it in an immutable fashion

hmaurer16:07:28

In that sense, Datomic is an index for your data, and you only store in it the attributes that you might want to query/filter upon

val_waeselynck16:07:02

@hmaurer no, it is mainly a regular database to me - the use of external storage is marginal, it just happens to cover a lot of bytes. And yes, the external storage is treated immutably

hmaurer16:07:11

@val_waeselynck what do you use as your external storage? and do you enforce its immutability through permissions? (e.g. if you use S3 there might be a way to make it insert-only with IAM permissions)

hmaurer16:07:35

(out of curiosity)

val_waeselynck16:07:52

S3 with public but secure object names, and no I don't believe so

schmee16:07:31

I want to find the campaign with the highest number of creatives; this is what I’ve got so far:

(let [db (d/db conn)]
  (->> (d/q
         '[:find ?campaign (count ?creative)
           :where
           [?campaign :campaign/id]
           [?creative :creative/campaign ?campaign]]
         db)
       (d/q
         '[:find ?campaign (max ?count)
           :in $ [[?campaign ?count]]]
         db)))

schmee16:07:42

but this gives me back every campaign and its count

schmee16:07:55

what am I missing?

val_waeselynck17:07:52

There is no 'max-by' aggregation in Datomic unfortunately
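
(One common workaround, assuming the same :campaign/id and :creative/campaign attributes as in the query above: aggregate per campaign in Datalog, then take the maximum in plain Clojure rather than with a second query.)

(let [db (d/db conn)]
  (->> (d/q '[:find ?campaign (count ?creative)
              :where
              [?campaign :campaign/id]
              [?creative :creative/campaign ?campaign]]
            db)
       ;; d/q returns a set of [campaign count] tuples;
       ;; max-key picks the tuple with the largest count.
       (apply max-key second)))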