2019-02-15
Channels
- # adventofcode (13)
- # aleph (5)
- # announcements (8)
- # beginners (87)
- # calva (9)
- # cider (102)
- # cljs-dev (71)
- # cljsrn (2)
- # clojure (198)
- # clojure-dev (28)
- # clojure-europe (3)
- # clojure-italy (27)
- # clojure-nl (3)
- # clojure-spec (1)
- # clojure-uk (43)
- # clojurescript (121)
- # component (11)
- # cursive (20)
- # data-science (13)
- # datascript (2)
- # datomic (102)
- # dirac (4)
- # duct (5)
- # emacs (14)
- # figwheel-main (7)
- # fulcro (37)
- # hoplon (11)
- # jackdaw (3)
- # jobs (2)
- # leiningen (16)
- # nrepl (2)
- # off-topic (51)
- # pathom (34)
- # pedestal (12)
- # perun (10)
- # portkey (1)
- # re-frame (6)
- # reitit (1)
- # shadow-cljs (21)
- # spacemacs (8)
- # tools-deps (2)
- # vim (2)
Hi! Can someone recommend an example project demonstrating unit testing of functions that write to and read from datomic, please? I read a blog post about Yeller's testing approach, which seems fine, but as a beginner I would benefit from actual code to study (they refer to a function empty-db which is not included).
Clients or Peers?
Hi! I am still a bit unsure about this terminology, as a peer server, iirc, sits between a traditional client and the datomic server. The functions I need to test boil down to calls to "(d/transact (get-conn) {:tx-data ...data..})" and "(d/q some-query (get-db))". I thought it could be a suitable unit test to verify that the program can assert and read back stuff using these functions as expected.
But perhaps it is better to refactor these functions to leave out the mutable calls, and write tests only for the immutable bits (that run before the actual datomic calls).
Still, if it were feasible to create unit tests that could use datomic (without actually destroying anything in your real datomic instance), it would be possible to do testing of functions that compose a number of datomic transactions.
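A tiny sketch of that idea, just to make it concrete (the names here are made up): keep the tx-data construction pure so it can be unit tested with no connection at all.
(require '[clojure.test :refer [deftest is]])

(defn user->tx-data [user]
  [{:user/name  (:name user)
    :user/email (:email user)}])

(deftest user->tx-data-test
  (is (= [{:user/name "Ada" :user/email "ada@example.com"}]
         (user->tx-data {:name "Ada" :email "ada@example.com"}))))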
Consider using d/with as well
Thanks! Yes, d/with seems like the way to go. If I understand it correctly, d/with can make safe unit testing possible, if you provide a connection to an actual datomic instance. It could also be handy to be able to set up and tear down throwaway in-memory databases, so your main instance does not need to be available to run the tests.
You want Peers to do that
https://vvvvalvalval.github.io/posts/2016-01-03-architecture-datomic-branching-reality.html
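For reference, a rough sketch of the throwaway in-memory database plus d/with combination suggested above, assuming the Peer API (datomic.api); the schema and names are illustrative.
(require '[datomic.api :as d])

(def uri "datomic:mem://test")                       ; throwaway, in-memory

(defn fresh-conn []
  (d/delete-database uri)
  (d/create-database uri)
  (let [conn (d/connect uri)]
    @(d/transact conn [{:db/ident       :user/name
                        :db/valueType   :db.type/string
                        :db/cardinality :db.cardinality/one}])
    conn))

;; d/with "transacts" against a db value without touching the connection,
;; so nothing is written to your real storage.
(let [db    (d/db (fresh-conn))
      after (:db-after (d/with db [{:user/name "Ada"}]))]
  (d/q '[:find ?n . :where [_ :user/name ?n]] after))
;; => "Ada"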
It seems like Datomic Ions grabs clojure.tools.logging, which prevents userspace code (my ion function) from using normal logging. What is the recommended way to do logging from an Ion function?
@tlaudeman what I've done is use cast/event
😕 I know it's not exactly what it's meant for, but cast/dev can't currently be shown in the cloudwatch logs
if there's a more official answer I'd like to know. by "not what it's meant for" I mean that I've been using it to occasionally troubleshoot certain issues, not log actual events to cloudwatch
FWIW I find the biggest struggle with ions atm is how opaque the system can be when things aren't quite working
is the issue in the lambda? in the api gateway? in the Ion? where do I get the feedback for it?
Having a way to direct cast/dev somewhere would be an improvement
ah, it has been once I figured it out... just per the docs it seemed like not what it was meant for
> An event is an ordinary occurrence that is of interest to an operator, such as start and stop events for a process or activity
vs.
> Dev is information of interest only to developers, e.g. fine-grained logging to troubleshoot a problem during development
Soo... I guess we should have a thin logging abstraction layer. I'm bringing over legacy code that is rife with (log/infof "Important thing happened to id: %s" my-id). There are also implications for local unit tests outside of Datomic Cloud.
I wouldn’t hesitate to use cast/event for that purpose; you can always ‘switch’ it on the env or parameters
so e.g. for some reason my api stops working after I deploy the change. in reality, it's a problem with coercing the response payload properly.
I'd like to log the response payload. I'd initially reach for cast/dev since that sounds more applicable, but in actuality I want cast/event.
i’ll look at how we might improve the docs
i would consider dev for things you’d never want to end up in your prod system; events are for anything that you would want there, either for monitoring or troubleshooting
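One way the thin logging abstraction mentioned above could look, purely as a sketch: the namespace, the in-cloud? flag, and the info fn are all made-up names, assuming datomic.ion.cast is available on Cloud and clojure.tools.logging locally.
(ns my-app.log
  (:require [datomic.ion.cast :as cast]
            [clojure.tools.logging :as log]))

;; flip this however you detect the environment (env var, config, etc.)
(def in-cloud? (atom false))

(defn info
  "Operator-visible event: cast/event in Cloud, log/info locally."
  [msg data]
  (if @in-cloud?
    (cast/event (assoc data :msg msg))
    (log/info msg data)))

;; e.g. instead of (log/infof "Important thing happened to id: %s" my-id):
;; (info "Important thing happened" {:id my-id})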
hey all, beginner datomic/clojure related question. I have an attribute I want to use to do a lookup, a :user/id that’s a datomic uuid. schema looks like:
{:db/ident :user/id
:db/valueType :db.type/uuid
:db/unique :db.unique/value
:db/cardinality :db.cardinality/one}
when actually using the attr to do a pull, I’m running into an issue where datomic can’t recognize the uuid as such without the #uuid reader literal, but using the reader literal breaks the compiler because the actual value is a var. rough example here:
(defn my-var-fn []
  "users-uuid")

(let [user-id (my-var-fn)]
  (datomic/pull (datomic/db conn)
                '[*]
                [:user/id user-id]))          ;; => nil

(let [user-id (my-var-fn)]
  (datomic/pull (datomic/db conn)
                '[*]
                [:user/id #uuid user-id]))    ;; => compiler error

(datomic/pull (datomic/db conn)
              '[*]
              [:user/id #uuid "users-uuid"])  ;; => pulls data properly
any tips for how to get around this?
(UUID/fromString ...) instead of using the reader tag, that would at least solve your issue 😛
exactly what I was looking for. knew there had to be some dead simple solution to this like that 😂
reader literals work… well only with literals ^^
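Spelled out, the suggested fix would look roughly like this (still assuming the my-var-fn placeholder above actually returns a valid UUID string):
(import 'java.util.UUID)

(let [user-id (UUID/fromString (my-var-fn))]   ; parse the string at runtime
  (datomic/pull (datomic/db conn) '[*] [:user/id user-id]))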
Trying to upgrade my datomic cloud system. Currently have three stacks, two of which are nested (compute and storage stacks under a common ancestor). When upgrading the compute stack I get a warning telling me I should upgrade the parent instead. Does it matter?
are backups created by backup-db compressed? I'm seeing my dynamo backing store usage at 236.94MB but the s3 backup created with bin/datomic backup-db ... is only 50MB. Is this normal?
there shouldn't be any garbage in this system. i care about retaining the full history
to expand on what he said, backup-db follows the roots to the branches so it doesn't have garbage
so... after running gcStorage, my dynamo db is still: 189.30MB yet the backup is 50MB. is this normal?
hmm, what is "garbage" in this context? if i only ever write new values, will garbage still accumulate?
ah ok. so do i need to explicitly invoke gcStorage periodically in order to free space on the dynamo or will it happen automatically at some point under just normal operation?
i assume not based on: "Garbage collection does not lock, block, nor even read any of the segments representing live data."
and you should run it regularly (weekly or daily) with an "ago time" that is generous enough to encompass any peer's last d/db call
e.g. if you have a peer still using a db from a d/db call it made a week ago, your gcstorage ago time should be more than one week
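As a sketch of that advice, assuming the Peer API (datomic.api), an existing conn, and the one-week window used as the example above:
(require '[datomic.api :as d])

(let [one-week-ms (* 7 24 60 60 1000)
      older-than  (java.util.Date. (- (System/currentTimeMillis) one-week-ms))]
  ;; collects storage garbage created before older-than; run this on a
  ;; regular schedule (weekly or daily) as suggested
  (d/gc-storage conn older-than))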
I am getting a dependency conflict on
{:deps
 {com.cognitect/transit-java #:mvn{:version "0.8.311"},
which conflicts with other libraries. Is there a way to tell datomic cloud to use a more up-to-date version of transit?
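One possible approach, purely as an assumption and not official Datomic guidance: in tools.deps a top-level dependency wins over a transitive one, so pinning the transit version you want in your own deps.edn might look like this (versions left as placeholders):
{:deps {com.datomic/client-cloud   {:mvn/version "..."}             ; your existing Datomic dep
        com.cognitect/transit-java {:mvn/version "<newer-version>"}}}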