#datomic
2018-09-19
rhansen08:09:09

Will datomic cloud upgrade to the t3 instances for solo topology? I'm thinking of reserving soon, so it would be nice to know 🙂

mping08:09:50

I’m having trouble thinking in datomic terms; suppose I have a schema with datoms that have two instants (start and end) and I want to do a histogram across a time period (e.g. a count per day, where the day is between the datom’s start and end). Some of my data may not have an entry for a given day. Is this kind of query possible in datomic? Or would I have to query for each day?

steveb8n09:09:20

@mping can’t you fill in the gaps using a fn in the Datomic client system?

mping09:09:24

@steveb8n I was reading about that just now. I guess I can, but to fill the gaps I’d have to do a query to know where the gaps are. I can live without the gaps filled right now

steveb8n09:09:00

or just query for all the data and put it in a map by date. then, in a client fn, “get” each value in a loop that generates all dates, using a default value of zero in the get
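A minimal sketch of that approach (fill-gaps, counts-by-day, and the date handling here are hypothetical, not from this thread):

(defn fill-gaps
  "Given counts-by-day (a map of java.util.Date -> count, e.g. built from a
   Datomic query result), return [day count] pairs for every day from start
   to end inclusive, defaulting missing days to 0."
  [counts-by-day start end]
  (let [day-ms   (* 1000 60 60 24)
        next-day (fn [d] (java.util.Date. (+ (.getTime d) day-ms)))
        days     (take-while (fn [d] (not (.after d end)))
                             (iterate next-day start))]
    (map (fn [day] [day (get counts-by-day day 0)]) days)))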

clarice11:09:36

Please can someone help me? I am following the docs (https://docs.datomic.com/on-prem/query.html#calling-java) and came across the Java method calls that I can't seem to get working. I get "FileNotFoundException Could not locate System__init.class or System.clj on classpath. clojure.lang.RT.load (RT.java:463)" when I run

[:find ?k ?v
 :where [(System/getProperties) [[?k ?v]]]]
I did try using java.lang.System/getProperties but it did not work. I created a question on StackOverflow (https://stackoverflow.com/questions/52378601/using-system-getproperties-in-a-datomic-query-throws-a-filenotfoundexception) with more information.

stuarthalloway11:09:11

hi @UCM1FJA4E! Are you running on-prem or cloud?

stuarthalloway11:09:02

I believe that is a bug introduced with classpath function support. Datomic is incorrectly treating that symbol as a Clojure name

stuarthalloway11:09:30

the workaround is to give it a Clojure symbol to work with, e.g.

(defn get-props [] (System/getProperties))

(d/q '[:find ?k ?v
       :where [(user/get-props) [[?k ?v]]]])

stuarthalloway11:09:45

thanks for the report!

clarice11:09:28

Awesome! Thanks Stuart. That works.

Roman Tsopin12:09:45

Hi there, is there a way to use Datomic ions with websockets or other real-time communication?

henrik13:09:10

I have seen people talking about using AWS IoT for websocket stuff. I’m not familiar with the details, though.

Joe Lane13:09:28

yeah, use aws IoT topics. I haven’t had the time to do a writeup yet (had to shave a few yaks) but since a lambda function can be a consumer and / or a producer to an IoT topic you can call an Ion whenever a message is published to an IoT topic (from a web browser or an app, for example)

eoliphant13:09:23

+1 to this. IoT's websocket support makes this super easy. I've actually spent more time figuring out the best approach to spitting out events. Been playing around with Onyx, as well as just having ions read the log and publish events, in conjunction with lambda scheduling

Roman Tsopin14:09:45

Thanks! Will check IoT

johnj14:09:38

Is it possible to create a new system as a query group only? i.e. for having a system with just a t2.medium

johnj14:09:53

yeah, saw that but was not completely sure, thanks.

mping14:09:43

Hi, what’s the syntax for 2-arity query functions on a peer? I have the following but it spits out a stacktrace:

(da/q '[:find (sql-events.datomic/occupation ?s ?e) 
        :where 
        [_ :booking/start ?s]
        [_ :booking/end ?e]]
      (da/db api-conn))

mping14:09:00

(defn occupation [s e] ...)

mping14:09:41

never mind, it’s an aggregation and it can only have one variable
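For the record, a sketch of one workaround: bind the 2-arity fn’s result to a single variable in :where, then aggregate over that. duration here is a hypothetical fn returning one value; :with keeps duplicate durations from collapsing.

(da/q '[:find (sum ?dur)
        :with ?b
        :where
        [?b :booking/start ?s]
        [?b :booking/end ?e]
        [(sql-events.datomic/duration ?s ?e) ?dur]]
      (da/db api-conn))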

grzm15:09:24

(Moving to #datomic from #ions-aws) When naming ion.cast/metric values, here's the behavior I'm seeing:

;; following the metric example 
(cast/metric {:name :MyMetricName ,,,}) ;; => Cloudwatch metric: Mymetricname
;; following the keys example for events: 
(cast/metric {:name :my-metric-name ,,,}) ;; => Cloudwatch metric: My-metric-name
In at least one of these cases I'd expect to get a Cloudwatch metric name of MyMetricName. How should I adjust my expectations?

stuarthalloway16:09:20

@U0EHU1800 are you running on Solo?

stuarthalloway16:09:19

Could be "In order to reduce cost, the Solo Topology reports only a small subset of the metrics listed above: Alerts, Datoms, HttpEndpointOpsPending, JvmFreeMb, and HttpEndpointThrottled." -- https://docs.datomic.com/cloud/operation/monitoring.html

grzm16:09:44

I'm running production (2 × i3.large) (on 441)

grzm16:09:17

I'm not missing metrics: they appear, they're just not spelled the way I'd expect looking at the transformation rules in the documentation.

grzm16:09:59

{:name :MyMetricName}   -> Mymetricname
{:name :my-metric-name} -> My-metric-name

grzm16:09:46

(Considering now two people have misinterpreted my explanation, clearly I should have explained it better. My apologies.)

stuarthalloway17:09:36

are you seeing those metric names printed in the AWS CloudWatch log, or as metric names in the metrics console, or both?

grzm18:09:11

Those are the metric names in the Metrics (sub) console in CloudWatch. I actually don't see them when searching the CloudWatch log groups (using ${.Type="Metric"} as my filter; ${.Type="Event"} and ${.Type="Alert"} return expected log messages).

grzm18:09:20

Should I expect metrics to be reported in the CloudWatch log messages?

stuarthalloway18:09:10

the report in the log is an operational report about sending the messages

stuarthalloway18:09:36

and cannot be trusted on this issue because it has its own necessary to-json transformation

stuarthalloway18:09:52

will investigate further

jaret19:09:06

@U0EHU1800 Stu has asked that I investigate this further. I am opening a case/ticket to track this. I would like to add you as a contact on the ticket. Could you PM me a good e-mail to add to the ticket so that I can notify you of our findings?

madstap17:09:45

What is the most convenient way, given a t value, to find the previous t? The use-case is to query the history db as-of just before transaction x happened.

favila17:09:16

you don't usually need the exact previous t, you just need a t less than the target

favila17:09:39

so you can just subtract one (from t or tx) and use that
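A minimal sketch, assuming the peer API and a tx (or t) in hand:

;; any t below the target works, so t-1 does
(d/as-of (d/db conn) (dec tx))             ; db as of just before tx
(d/history (d/as-of (d/db conn) (dec tx))) ; history up to just before tx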

madstap17:09:12

Perfect, thanks!

idiomancy17:09:44

could someone help me understand the difference between :db.unique/identity and :db.unique/value? I'm just having a little trouble wrapping my head around it right now. Didn't get much sleep 😅

favila17:09:55

the only difference is what happens in txs like this:

favila17:09:03

{:db/id "new-id"
 :my-unique-value-attr "foo"}

favila17:09:36

if "foo" already exists on a db id somewhere, this tx will throw an exception

favila17:09:54

{:db/id "new-id"
 :my-unique-identity-attr "foo"}

favila17:09:17

this, however, will substitute "new-id" with the entity id of whatever has the "foo" assertion on it

favila17:09:40

this is what they mean by "upsert" in the datomic docs

idiomancy17:09:41

ohhhh, okay, that's what it means. the value is unique regardless of the key it's associated with

favila17:09:57

yes, but they both share that

favila17:09:27

:db.unique/identity goes the extra step of inferring a db/id when a transaction doesn't have one

favila17:09:58

:db.unique/value will not do that, you must explicitly say what entity you are writing to

idiomancy17:09:15

huh... so what would be a valid transaction using a value?

favila17:09:34

the first one is valid if foo does not exist

favila17:09:42

if it does exist, just don't use a tempid

favila17:09:01

e.g. {:db/id [:my-unique-value-attr "foo"] :some-other-attr "bar"}

favila17:09:23

but you have to do a read before you write to determine which case it is

favila17:09:33

creating vs updating

favila17:09:44

:db.unique/identity you can treat both the same
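A sketch of that read-before-write dance for :db.unique/value, reusing favila's hypothetical attribute names (peer API):

(if-let [eid (ffirst (d/q '[:find ?e :in $ ?v
                            :where [?e :my-unique-value-attr ?v]]
                          db "foo"))]
  ;; "foo" already exists: update the existing entity explicitly
  @(d/transact conn [{:db/id eid :some-other-attr "bar"}])
  ;; "foo" doesn't exist yet: safe to create with a tempid
  @(d/transact conn [{:db/id "new-id"
                      :my-unique-value-attr "foo"
                      :some-other-attr "bar"}]))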

idiomancy17:09:45

interesting... the docs give social security number as an example use case, but I don't see how that doesn't fall under unique/identity 😕

favila17:09:10

you probably don't want to upsert on social security

favila17:09:14

if you are intending to add a new person record, and that person happens to have the same ssn as an existing person, it's unlikely that the desired behavior is to update the existing record

favila17:09:22

the desired behavior is probably "freak out"

idiomancy17:09:47

interesting. I keep going back and forth conflating each concept with the other. I should treat myself to more sleep. for quick reference right now, and to check my understanding: I'm going to be associating crypto-currency balances with crypto-currency account addresses, so the account address, the long uuid associated with the crypto account, is probably going to be a unique/value, right?

idiomancy17:09:37

because no other account can ever be added with that same address, and that address will never be updated in any way..

johnj18:09:43

this is an identity case, you want to identify accounts by their address (uuid)

idiomancy18:09:32

so, I like the way you expressed that thought in terms of the verb "to identify". I want to identify an account by its address. Cool, I can get behind that. would you happen to have a similar phrasing in mind for the value case? because I think that's the bit I'm missing

johnj18:09:27

the opposite 😉

johnj18:09:17

when you want a unique identifier that doesn't need to identify an entity

favila18:09:31

:db.unique/identity is precisely to identify the entity

favila18:09:41

where upsert behavior is never a sign of trouble

johnj18:09:51

yeah, I was talking about :db.unique/value

favila18:09:59

:db.unique/value is merely that the value must be unique

idiomancy18:09:09

why would I ever have a unique identifier that doesn't identify an entity?

favila18:09:09

the value may not identify the entity at all

favila18:09:33

e.g. other system's identifiers

favila18:09:20

or, values that may change on an entity, but you don't care; you just want to make sure two entities don't have the same one at any time

favila18:09:56

e.g. perhaps the value represents some limited resource

favila18:09:05

that the entity is "using" by asserting

idiomancy18:09:10

^^ you're talking about ident or value right now?

favila18:09:25

> why would I ever have a unique identifier that doesn't identify an entity?

favila18:09:30

I am answering that question with examples

idiomancy18:09:33

I see, wow, okay

idiomancy18:09:48

the resource example might be the thing that does it for me

idiomancy18:09:56

I think that is starting to make it clear

idiomancy18:09:00

so it's just a rule that two things can't have the same value at the same time. it can be moved, removed, or changed, provided it doesn't change to a value that something else is already associated with.

johnj18:09:21

maybe you have some range of numbers you have to use to label something but you can only use a number from the range once

idiomancy18:09:10

yeah, that's interesting. considering it in terms of a small pool of unique values that need to be shared is a good way of making it concrete for me

idiomancy18:09:46

thanks @U4R5K5M0A, @U09R86PA4, this was really helpful

favila18:09:43

it may be better just to think of it in terms of desired behavior

favila18:09:15

:db.unique/value: the a+v assertion must be unique db-wide; asserting it on a different entity will throw

favila18:09:45

:db.unique/identity: the a+v assertion must be unique db-wide; asserting it on a tempid will resolve to the existing "owning" entity if there is one, otherwise it will create a new one

favila18:09:14

the only difference between them is how transactions that assert the attribute on a temporary entity id will respond

idiomancy18:09:26

yeah, okay, that makes sense.

favila18:09:41

for value, they will throw (tx rejected); for identity, they will "upsert" (resolve the tempid to the existing entity id)
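A sketch of both behaviors using speculative d/with (peer API), assuming "foo" is already asserted on some existing entity:

;; :db.unique/value + tempid: conflicts with the existing assertion
(d/with db [{:db/id "new-id" :my-unique-value-attr "foo"}])
;; => throws (unique conflict), nothing is transacted

;; :db.unique/identity + tempid: "new-id" resolves to the existing entity
(d/with db [{:db/id "new-id"
             :my-unique-identity-attr "foo"
             :some-other-attr "bar"}])
;; => upsert: :some-other-attr "bar" lands on the existing entity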

idiomancy18:09:45

value is about asserting distinct cases, identity is about resolving to the same case

johnj19:09:05

@U09R86PA4 do you use datomic for public facing web apps?

johnj19:09:26

curious if you have run into response time/latency issues that you couldn't handle easily (on the datomic side)

favila19:09:38

not really?

johnj19:09:55

it's my number one fear: response times increasing as the userbase grows

favila19:09:07

response for what?

favila19:09:28

what kind of queries?

johnj19:09:28

just business data, say 100K rows

johnj19:09:45

what storage do you use most of the time?

favila19:09:52

we use google cloud mysql

johnj19:09:00

interesting, I hear everywhere that dynamo is the most polished storage for datomic, and more performant since reads and writes don't fight each other for resources

johnj19:09:10

I guess it depends on your needs, thanks

favila19:09:12

we had other constraints, it's not a great choice at all

favila19:09:42

but memcached makes read speed nearly irrelevant

favila19:09:02

mysql traffic is nearly all writes

idiomancy19:09:39

how does dynamo overcome the transactor write bottleneck?

favila19:09:57

it doesn't

favila19:09:07

the transactor itself, not the storage, is the bottleneck

idiomancy19:09:28

huh, yeah, then I don't see the advantage, considering peers can be scaled horizontally, right?

favila19:09:48

the advantage is operational

favila19:09:06

you don't have to size any storage, you don't have to run regular maintenance

favila19:09:05

mysql innodb has terrible garbage problems with datomic's workload

favila19:09:23

and it doesn't have an on-line space reclaimer like postgres vacuum

favila19:09:30

"optimize tables" locks

idiomancy19:09:26

mm, yeah I could see that

favila19:09:28

but memcached provides basically unlimited read-scalability

favila19:09:45

and the write load is always limited by the transactor anyway

johnj19:09:55

@U09R86PA4 your data doesn't fit on the peers?

favila19:09:57

and the queries are dead simple

favila19:09:40

even if it did, it's got to get to the peer

favila19:09:03

many peers reading one mysql is worse than reading memcaches

idiomancy19:09:54

there's a hot cache of frequently accessed indexed data, and whenever you reach beyond that it has to pull more from storage. One's entire data set is rarely ever present on the peer in its entirety

idiomancy19:09:12

so the docs tell me, anyway

favila19:09:20

yeah there are two levels

favila19:09:34

the object cache, which is datoms-as-java-instances, after decoding from storage

favila19:09:26

then there's the encoded blocks, up to 60-ish k in size I think, containing potentially thousands of datoms; those are what is in storage and cacheable by memcached

favila19:09:52

the storage is always just a key-value store from datomic's perspective

idiomancy19:09:18

gotcha. I'd not really understood the object cache

favila19:09:25

In fact I don't understand why they didn't offer any explicitly key-valuey storages

favila19:09:40

bdb for example would be a great fit

favila19:09:12

when they say the entire db "fits in peer memory" that means that all blocks can be fully decoded into instances and fit in the object cache

favila19:09:53

that's considerably more memory than what storage uses because what's in storage is compressed and encoded

johnj19:09:30

ah good point

johnj19:09:35

have you seen http://myrocks.io/ ? may help you

johnj19:09:50

oh never mind, I remembered you are on google cloud

favila19:09:02

that's another pain point

favila19:09:08

datomic is very aws-centric

favila19:09:16

we can't even do backups to buckets

johnj19:09:00

is their mysql backup not enough?

johnj19:09:30

or you prefer at the datomic level?

johnj19:09:46

haven't read anything about datomic's backup yet

favila19:09:50

mysql backup can only restore to mysql

johnj19:09:59

looks like dynamo is the least headache from an operational point of view

favila19:09:10

datomic backups are storage-agnostic

favila19:09:32

we do both

johnj20:09:16

ok, backing up to a file system instead of to some cloud storage thing doesn't sound nice if you are already in the cloud

idiomancy18:09:37

phew, alrighty then. good to know I'm correct in assuming that I'm still wrong.

souenzzo18:09:43

when I retract a noHistory value, will it stay there forever or will it be "cleaned"?

iambrendonjohn22:09:59

I’m looking at migrating a production system to Datomic over time.

iambrendonjohn22:09:24

Does anyone have recommended reading from people that have done this before? I’m hoping to read about how they validated and justified the migration to their team, and how the migration went.

iambrendonjohn22:09:30

Really, I’m wanting to read some critical thinking and reflection on the decisions that were made.

marshall22:09:56

@iambrendonjohn I’d be happy to discuss what we’ve seen from customers/users WRT that kind of migration. You can email me at marshall @ http://cognitect.com

iambrendonjohn23:09:31

Thanks Marshall, will do :thumbsup:

matthew.gretton22:09:32

Hi - I'm coming back to Datomic after a year or so, and am interested in building an app using datomic cloud. Given there is no in-mem peer available, how do you generally test the validity of transaction data? Writing unit tests using the in-mem peer implementation meant that in the past I could test a large amount of my app's functionality through unit tests.

marshall22:09:42

You can use d/with

marshall22:09:56

Could also use a tear off db in cloud

marshall23:09:06

I tend to do both

matthew.gretton23:09:37

I assume I'd still need to connect to an external client to use d/with?

marshall23:09:00

Which is why I tend to use tear off db

marshall23:09:04

I.e. uuid name

marshall23:09:14

Use it for the tests then delete it
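A sketch of that tear-off idea (client API; client, schema, and fixtures are assumed to exist):

(let [db-name (str "test-" (java.util.UUID/randomUUID))]
  (d/create-database client {:db-name db-name})
  (try
    (let [conn (d/connect client {:db-name db-name})]
      ;; transact schema + fixtures, then run the tests against conn
      )
    (finally
      (d/delete-database client {:db-name db-name}))))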

matthew.gretton23:09:49

Ok - But still creating and deleting the database externally?

marshall23:09:33

If you need isolation, you can run a separate solo system for that

marshall23:09:03

Should be fine for functional tests. Spin up a prod system if you need to test perf

matthew.gretton23:09:07

So they're somewhere between unit and integration tests in the traditional sense.

marshall23:09:28

Probably so, yea

marshall23:09:50

Get a bit more "reality" than unit

marshall23:09:06

By using the actual db

matthew.gretton23:09:56

Yup - That was the great thing about the in-mem impl. It gave you "reality" in a true unit test.

matthew.gretton23:09:30

Given there is an on-prem version still on offer, I suppose you could still use the in-mem for testing, are they compatible from a transaction data standpoint?

marshall23:09:03

Mostly. There are some differences

matthew.gretton23:09:01

Thanks. Will take a look at the link, think it's been added since the last time I looked.

matthew.gretton23:09:46

Thanks - Have you had any problems with the tear off db approach? E.g. increased time it takes for tests to run in-mem vs over the wire? Network flakiness breaking tests, etc...

matthew.gretton23:09:06

Anyway, thanks for your answers, they have been really helpful

kenny16:09:18

@U6M20CPK2 Because we wanted the ability to run offline by running in-mem dbs only, we have been using this https://github.com/ComputeSoftware/datomic-client-memdb to run our unit tests.

marshall23:09:48

Haven't really seen much issue with that sort of thing

marshall23:09:50

Mem is obviously super quick, but most tests I use the tear off db stuff for aren't "huge", so it's been fine