2018-02-13
I’m running into a transactor failure after trying to add indexes to some entities; I’m getting a “no such file or directory” error creating a tempfile
where is it trying to create a temp file, and why is it failing? This is a fairly vanilla Ubuntu system the transactor is running on
ok, found this: https://docs.datomic.com/on-prem/deployment.html#misconfigured-data-dir. I’m reasonably sure the data-dir config is correct, but that’s what I’ll check
hm, I set data-dir=/data in my transactor properties, but it appears to use data, ignoring the absolute path.
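(For reference, a minimal sketch of the relevant transactor properties, assuming the dev storage protocol; everything other than data-dir=/data is illustrative:)
protocol=dev
host=localhost
port=4334
# use an absolute path that exists and is writable by the transactor
# process; see the misconfigured-data-dir troubleshooting doc linked above
data-dir=/data
log-dir=/data/log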
Hi, I worked on adding Datomic Cloud support to the Onyx datomic plugin over the last couple of weekends, and finally managed to get the tests passing on both the peer and client APIs. https://github.com/onyx-platform/onyx-datomic/pull/29 I was wondering if there is a good solution for testing the code against Datomic Cloud in CI environments. Two concerns: 1) it would be very nice if there were a free option for OSS products; I’m more concerned with account/license management than the actual amount out of my pocket 🙂 2) running a SOCKS proxy as a helper process in CI. Any suggestions welcome.
@chris_johnson thanks, I will keep pushing on the transactor part then; still no clue what is keeping it from connecting to DynamoDB
hi @kenji_signifier: if CI is running in AWS you don’t need or want the SOCKS proxy; just use ordinary AWS VPC mechanisms to give CI access to Datomic Cloud, see e.g. https://docs.datomic.com/cloud/operation/client-applications.html
Thx @U072WS7PE, they use CircleCI, and I’m going to look into whether VPC peering is doable. Otherwise, I’ll consider running a different CI in AWS.
PSA: If you run Cassandra with Datomic, don't upgrade your Java to the latest version (1.8.0_161); it will break Cassandra (it won't start anymore).
@mpenet https://stackoverflow.com/questions/48597284/cassandra-does-not-start-cause-of-an-abstractmethoderror-with-jdk-to-8u161?rq=1
Had to manually download an old JDK and do update-alternatives on Ubuntu. That fixed it.
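(Roughly what that looks like, assuming the older JDK was unpacked under /usr/lib/jvm; the jdk1.8.0_152 path and the priority are illustrative:)
# register the older JDK as an alternative, then select it interactively
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.8.0_152/bin/java 100
sudo update-alternatives --config java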
@chris_johnson thanks again for responding. I managed to fix it in the meantime. I can now confirm that I have the transactor running on FARGATE as well.
@gerstree That’s great! I may come ask you for help at some point. 😉
I did a quick write-up with the most important pieces. Hope this helps others. The blog post will come online shortly and I will send you the link
https://www.avisi.nl/blog/2018/02/13/how-to-run-the-datomic-transactor-on-amazon-ecs-fargate/
I will write up a blog post about it and I'm happy to share our CloudFormation template
hey, is there a way to view peer metrics on one that's running, but doesn't have a metrics callback configured? will setting datomic.metricsCallback on a running peer do anything?
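(For context, the callback is normally supplied as a JVM system property before the peer starts; a minimal sketch, assuming, per the custom-monitoring docs, a fn of one map argument. my.metrics/callback is a hypothetical name:)
;; peer JVM started with: -Ddatomic.metricsCallback=my.metrics/callback
(ns my.metrics)

(defn callback
  "Called by the peer with a map of metric data."
  [metric]
  (println "peer metric:" metric))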
Is Datomic bothered if I create another table (or several) in the same database on MySQL?
the transactor and gc-deleted-dbs only need SELECT, INSERT, UPDATE, and DELETE; peers only need SELECT
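(As a sketch, assuming the default datomic_kvs table from the SQL storage setup; the datomic schema name and the user names are hypothetical:)
GRANT SELECT, INSERT, UPDATE, DELETE ON datomic.datomic_kvs TO 'datomic-transactor'@'%';
GRANT SELECT ON datomic.datomic_kvs TO 'datomic-peer'@'%';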
General n00b question - when transacting loads of datoms into Datomic in line with a schema, are people building their maps by getting the db/ident keys from the schema, or are you holding your key-names in config separately (for convenience)?
I'm not sure I understand? Attributes are usually just literals in your transacting code; there is no need to get ident keys or hold ident keys somewhere.
So, I have a schema that looks like this:
[{:db/ident       :crop/id
  :db/valueType   :db.type/uuid
  :db/cardinality :db.cardinality/one
  :db/doc         "The unique identifier for a Crop"
  :db/unique      :db.unique/value}
 {:db/ident       :crop/common-name
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc         "The Common Name for a Crop"}
 {:db/ident       :crop/itis-name
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc         "The ITIS Name for a Crop"}
 {:db/ident       :crop/itis-tsn
  :db/valueType   :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc         "The ITIS TSN (Taxonomic Serial Number) for a Crop"}]
I am expecting to have a vector of maps that look like this:
{:crop/id #uuid "bd7c5850-051f-5305-88a7-076e53822448"
:crop/common-name "Cocoa"
...
}
So when I take in the data I need to build a map using the keys: :crop/id :crop/common-name :crop/itis-name :crop/itis-tsn
I was wondering if people generally hold them in config elsewhere as a simple list / vector, or if they munge the schema as an input
(defn crop-csv->map [[^String id common-name itis-name tsn]]
  {:crop/id          (java.util.UUID/fromString id)
   :crop/common-name common-name
   :crop/itis-name   itis-name
   :crop/itis-tsn    tsn})

(with-open [csv (clojure.java.io/reader the-csv-file)]
  (->> (clojure.data.csv/read-csv csv)
       (map crop-csv->map)
       (partition-all 100) ; batch into transactions of 100 entity maps
       (run! #(deref (datomic.api/transact-async conn %)))))
there's a contract between the code and the database it is transacting against, namely that both have the same idents with compatible schemas
there are some datomic libs that will allow your code to essentially make "schema preconditions" to ensure that code and db have compatible schema
I see… OK, but that means a separate function for each data source. I am trying to write a generic “put datoms in Datomic” function that gets the db/idents either from a separate config or from the schema at use time.
I was just curious whether people generally keep a separate config value for convenience or just write a function to pull them out of the schema
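(A minimal sketch of the “pull them out of the schema” option with the on-prem peer API; idents-in-ns is a hypothetical helper:)
(require '[datomic.api :as d])

;; collect the idents of all installed attributes in a given namespace,
;; straight from the database instead of from separate config
(defn idents-in-ns [db kw-ns]
  (->> (d/q '[:find [?ident ...]
              :where [_ :db.install/attribute ?a]
                     [?a :db/ident ?ident]]
            db)
       (filter #(= kw-ns (namespace %)))
       vec))

(idents-in-ns (d/db conn) "crop")
;; => e.g. [:crop/id :crop/common-name :crop/itis-name :crop/itis-tsn]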
I'm still not sure what you are doing that would make pulling schema a useful exercise
Personally the value is: 1) why have 25 functions when I can have 3, and 2) the thought exercise of it.
The schema is right at hand and is a “place” where the db/ident attributes are enumerated.
in fact, the convenience of having them already extracted into config runs the risk of the schema changing and the config not being updated.
unless the entire transform is data driven, in the end you have to write some code that knows what an attribute means
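(For what it’s worth, a data-driven sketch of the earlier crop-csv->map, with column order and per-attribute parsers held as plain data; the names are hypothetical. Even here, each parser is the piece of code that knows what its attribute means:)
;; one vector describes the CSV columns: attribute plus parser, in order
(def crop-columns
  [[:crop/id          #(java.util.UUID/fromString %)]
   [:crop/common-name identity]
   [:crop/itis-name   identity]
   [:crop/itis-tsn    identity]])

(defn row->entity
  "Zip a CSV row with the column spec, parsing each value."
  [columns row]
  (into {} (map (fn [[attr parse] v] [attr (parse v)]) columns row)))

;; illustrative values
(row->entity crop-columns
             ["bd7c5850-051f-5305-88a7-076e53822448" "Cocoa" "Theobroma cacao" "12345"])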