This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-09-19
Will datomic cloud upgrade to the t3 instances for solo topology? I'm thinking of reserving soon, so would be nice to know 🙂
I’m having trouble thinking in Datomic terms; suppose I have a schema where entities have two instant attributes (start and end) and I want to build a histogram across a time period (e.g. a count per day, where the day falls between the entity’s start and end). Some of my data may not have an entry for a given day. Is this kind of query possible in Datomic, or would I have to query for each day?
@steveb8n I was reading about that just now. I guess I can, but to fill the gaps I’d have to do a query to know where the gaps are. I can live without the gaps being filled right now
or just query for all the data and put it in a map by date, then in a client fn “get” each value in a loop that generates all dates, using a default value of zero in the get (see the sketch below)
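A minimal sketch of that approach, assuming hypothetical names (fill-gaps, counts-by-date, all-dates) that are not from the thread:
;; Assume the query result has already been put into a map from date to
;; count, e.g. {#inst "2018-09-01" 3, #inst "2018-09-03" 7}, and that
;; all-dates is the full sequence of dates in the period of interest.
(defn fill-gaps [counts-by-date all-dates]
  (map (fn [date]
         {:date  date
          :count (get counts-by-date date 0)}) ; default of zero fills the gaps
       all-dates))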
Please can someone help me? I am following the docs (https://docs.datomic.com/on-prem/query.html#calling-java) and came across the Java method calls that I can't seem to get working. I get a FileNotFoundException Could not locate System__init.class or System.clj on classpath. clojure.lang.RT.load (RT.java:463) when I run
[:find ?k ?v
:where [(System/getProperties) [[?k ?v]]]]
I did try using java.lang.System/getProperties, but it did not work.
I created a question on StackOverflow (https://stackoverflow.com/questions/52378601/using-system-getproperties-in-a-datomic-query-throws-a-filenotfoundexception) with more information.
hi @UCM1FJA4E! Are you running on-prem or cloud?
I believe that is a bug introduced with classpath function support. Datomic is incorrectly treating that symbol as a Clojure name
the workaround is to give it a Clojure symbol to work with, e.g.
(defn get-props [] (System/getProperties))

(d/q '[:find ?k ?v
       :where [(user/get-props) [[?k ?v]]]])
thanks for the report!
Hi there, is there a way to use Datomic Ions with websockets or other real-time communication?
I have seen people talking about using AWS IoT for websocket stuff. I’m not familiar with the details, though.
yeah, use aws IoT topics. I haven’t had the time to do a writeup yet (had to shave a few yaks) but since a lambda function can be a consumer and / or a producer to an IoT topic you can call an Ion whenever a message is published to an IoT topic (from a web browser or an app, for example)
+1 to this. IoT's websocket support makes this super easy. Have actually spent more time on figuring out the best approach to spitting out events. Been playing around with Onyx, as well as just having Ions read the log and publish events in conjunction with lambda scheduling
Thanks! Will check IoT
Is it possible to create a new system as a query group only? For having a system with just a t2.medium
no, query groups extend a production system https://docs.datomic.com/cloud/whatis/architecture.html#query-groups
Hi, what’s the syntax for 2-arity query functions on a peer? I have the following but it spits out a stacktrace:
(da/q '[:find (sql-events.datomic/occupation ?s ?e)
        :where
        [_ :booking/start ?s]
        [_ :booking/end ?e]]
      (da/db api-conn))
(Moving to #datomic from #ions-aws) When naming ion.cast/metric values, here's the behavior I'm seeing:
;; following the metric example
(cast/metric {:name :MyMetricName ,,,}) ;; => Cloudwatch metric: Mymetricname
;; following the keys example for events:
(cast/metric {:name :my-metric-name ,,,}) ;; => Cloudwatch metric: My-metric-name
In at least one of these cases I'd expect to get a CloudWatch metric name of MyMetricName. How should I adjust my expectations?
@U0EHU1800 are you running on Solo?
Could be "In order to reduce cost, the Solo Topology reports only a small subset of the metrics listed above: Alerts, Datoms, HttpEndpointOpsPending, JvmFreeMb, and HttpEndpointThrottled." -- https://docs.datomic.com/cloud/operation/monitoring.html
hm, better link at https://docs.datomic.com/cloud/ions/ions-monitoring.html#metrics
I'm not missing metrics: they appear, they're just not spelled the way I'd expect looking at the transformation rules in the documentation.
(Considering now two people have misinterpreted my explanation, clearly I should have explained it better. My apologies.)
are you seeing those metric names printed in the AWS CloudWatch log, or as metric names in the metrics console, or both?
Those are the metric names in the Metrics (sub)console in CloudWatch. I actually don't see them when searching the CloudWatch log groups (using ${.Type="Metric"} as my filter; ${.Type="Event"} and ${.Type="Alert"} return expected log messages).
the report in the log is an operational report about sending the messages
and cannot be trusted on this issue because it has its own necessary to-json transformation
will investigate further
@U0EHU1800 Stu has asked that I investigate this further. I am opening a case/ticket to track this. I would like to add you as a contact on the ticket. Could you PM me a good e-mail to add to the ticket so that I can notify you of our findings?
What is the most convenient way, given a t value, to find the previous t? The use-case is to query the history db as-of just before transaction x happened.
could someone help me understand the difference between :db.unique/identity and :db.unique/value?
I'm just having a little trouble wrapping my head around it right now. Didn't get much sleep 😅
this, however, will substitute "new-id" with the entity id of whatever has the "foo" assertion on it
ohhhh, okay, that's what it means. the value is unique regardless of the key it's associated with
:db.unique/identity goes the extra step of inferring a db/id when a transaction doesn't have one
:db.unique/value will not do that; you must explicitly say what entity you are writing to
interesting... the docs give social security number as an example use case, but I don't see how that doesn't fall under unique/identity
😕
if you are intending to add a new person record, and that person happens to have the same ssn as an existing person, it's unlikely that the desired behavior is to update the existing record
interesting. I keep going back and forth conflating each concept with the other. I should treat myself to more sleep.
for quick reference right now, and to check my understanding, I'm going to be associating crypto-currency balances to crypto-currency account addresses, so the account address, the long uuid associated with the crypto account, is probably going to be a unique/value
right?
because no other account can ever be added with that same address, and that address will never be updated in any way..
so, I like the way you expressed that thought in terms of the verb "to identify". I want to identify an account by its address. Cool, I can get behind that. would you happen to have a similar phrasing in mind for the value case? because I think that's the bit I'm missing
or, values that may change on an entity, but you don't care; you just want to make sure two entities don't have the same one at any time
so it's just a rule that two things can't have the same value at the same time. it can be moved, removed, or changed, provided it doesn't change to a value that something else is already associated with.
maybe you have some range of numbers you have to use to label something but you can only use a number from the range once
yeah, that's interesting. considering it in terms of a small pool of unique values that need to be shared is a good way of making it concrete for me
thanks @U4R5K5M0A, @U09R86PA4, this was really helpful
:db.unique/value: must be unique a+v assertion db-wide; asserting on a different entity will throw
:db.unique/identity: must be unique a+v assertion db-wide; asserting on a tempid will assert on the existing "owning" entity, otherwise it will create a new one
the only difference between them is how transactions that assert the attribute on a temporary entity id will respond
for value, they will throw (tx rejected); for identity, they will "upsert" (resolve the tempid to the existing entity id)
value is about asserting distinct cases, identity is about resolving to the same case
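To make the difference concrete, a hedged sketch using the client API and a made-up :account/address attribute (the attribute, values, and conn are illustrative, not from the thread):
(require '[datomic.client.api :as d])

;; Illustrative schema; the uniqueness setting is the only thing that
;; changes between the two behaviours described above.
(def address-schema
  [{:db/ident       :account/address
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}]) ; or :db.unique/value

;; With :db.unique/identity this upserts: the map's implicit tempid
;; resolves to the entity that already owns "0xabc123", so the new
;; datoms land on the existing account.
;; With :db.unique/value the same transaction is rejected, because
;; "0xabc123" is already asserted on a different entity; you would have
;; to name the entity explicitly, e.g. via the lookup ref
;; [:account/address "0xabc123"].
;; (:account/label schema omitted for brevity)
(d/transact conn {:tx-data [{:account/address "0xabc123"
                             :account/label   "hot wallet"}]})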
@U09R86PA4 do you use datomic for public facing web apps?
curious if you have come across response time/latency issues that you couldn't handle easily (from the Datomic side)
interesting, I hear everywhere that Dynamo is the most polished storage for Datomic and more performant, since reads and writes don't fight each other for resources
huh, yeah, then I don't see the advantage, considering peers can be scaled horizontally, right?
@U09R86PA4 your data doesn't fit on the peers?
there's a hot cache of frequently accessed index data, and whenever you reach beyond that it has to pull more from storage. One's entire data set is rarely ever present on the peer in its entirety
then there's the encoded blocks, up to 60-ish k in size I think, containing potentially thousands of datoms; those are what is in storage and cacheable by memcached
when they say the entire db "fits in peer memory" that means that all blocks can be fully decoded into instances and fit in the object cache
that's considerably more memory than what storage uses because what's in storage is compressed and encoded
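For reference, a hedged note on the knobs involved on an on-prem peer: the object cache size and the optional memcached layer are configured as JVM system properties, so a sketch with illustrative values looks like:
;; Typically passed on the peer's command line, e.g.
;;   -Ddatomic.objectCacheMax=4g
;;   -Ddatomic.memcachedServers=host1:11211,host2:11211
;; From a REPL you can check what the running peer picked up:
(System/getProperty "datomic.objectCacheMax")
(System/getProperty "datomic.memcachedServers")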
have you seen http://myrocks.io/ ? may help you
ok, backing onto a file system instead of some cloud storage thing doesn't sound nice if you are already in the cloud
when I retract a noHistory value, will the old value stay there forever or will it be "cleaned" up?
I’m looking at migrating a production system to Datomic over time.
Does anyone have recommended readings of people that have done this before? I’m hoping to read about how they validated and justified the migration to their team and how the migration went.
Really, I’m wanting to read some critical thinking and reflection on the decisions that were made.
@iambrendonjohn I’d be happy to discuss what we’ve seen from customers/users WRT that kind of migration. You can email me at marshall @ http://cognitect.com
Thanks Marshall, will do :thumbsup:
Hi - I'm coming back to Datomic, after a year or so, and am interested in building an app using datomic cloud. Given there is no in-mem peer available, how do you generally test the validity of transaction data? Writing unit tests using the in-mem peer implementation meant that in the past I could test a large amount of my apps functionality through unit tests.
I assume I'd still need to connect to an external client to use d/with?
Ok - But still creating and deleting the database externally?
So they're somewhere between unit and integration tests in the traditional sense.
Yup - That was the great thing about the in-mem impl. It gave you "reality" in a true unit test.
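A rough sketch of that speculative flow against a Cloud system using the client API (the speculate helper and the example attribute are placeholders, not from the thread):
(require '[datomic.client.api :as d])

;; Speculatively apply tx-data without committing anything: take a
;; with-db from a real connection, run the transaction through d/with,
;; and hand the resulting :db-after to the test's queries/assertions.
(defn speculate [conn tx-data]
  (:db-after (d/with (d/with-db conn) {:tx-data tx-data})))

;; e.g. in a test, assuming conn points at a dev/test database whose
;; schema covers the attributes in tx-data:
;; (let [db (speculate conn [{:account/address "0xabc123"}])]
;;   (d/q '[:find ?e :where [?e :account/address]] db))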
Given there is an on-prem version still on offer, I suppose you could still use the in-mem implementation for testing; are they compatible from a transaction data standpoint?
This might be illuminating as well https://docs.datomic.com/cloud/operation/planning.html
Thanks. Will take a look at the link, think it's been added since the last time I looked.
Differences are noted here https://docs.datomic.com/on-prem/moving-to-cloud.html
Thanks - Have you had any problems with the tear-off db approach? E.g. the increased time it takes for tests to run over the wire vs in-mem? Network flakiness breaking tests, etc...
Anyway, thanks for your answers, they have been really helpful
@U6M20CPK2 Because we wanted the ability to run offline by running in-mem dbs only, we have been using this https://github.com/ComputeSoftware/datomic-client-memdb to run our unit tests.