#datomic
2018-02-13
csm01:02:43

I’m running into a transactor failure after trying to add indexes to some entities; I’m getting a “no such file or directory” error creating a tempfile

csm01:02:25

where is it trying to create a temp file, and why is it failing? This is a fairly vanilla Ubuntu system the transactor is running on

csm01:02:54

ok, found this — https://docs.datomic.com/on-prem/deployment.html#misconfigured-data-dir — I’m reasonably sure the datadir config is correct, but that’s what I’ll check

csm01:02:27

hm, I set data-dir=/data in my transactor properties, but it appears to use data, ignoring the absolute path.

kenji_signifier07:02:46

Hi, I worked on adding Datomic Cloud support to the Onyx datomic plugin over the last couple of weekends, and finally managed to pass tests on both the peer and client APIs. https://github.com/onyx-platform/onyx-datomic/pull/29 I was wondering if there is a good solution for testing the code against Datomic Cloud in CI environments. Two concerns: 1) it would be very nice if there were a free option for OSS products; I'm more concerned with account/license management than the actual amount out of my pocket. 🙂 2) Running a SOCKS proxy as a helper process in CI. Any suggestions welcome.

gerstree09:02:30

@chris_johnson thanks, I will keep pushing on the transactor part then; still no clue what is keeping it from connecting to DynamoDB

stuarthalloway12:02:22

hi @kenji_signifier — if CI is running in AWS you don’t need or want the socks proxy, just use ordinary AWS VPC mechanisms to give CI access to Datomic Cloud, see e.g. https://docs.datomic.com/cloud/operation/client-applications.html

kenji_signifier21:02:28

Thx @U072WS7PE, they use CircleCI, and I’m gonna look into whether VPC peering is doable. Otherwise, I’ll consider running a different CI in AWS.

rauh14:02:47

PSA: If you run Cassandra with Datomic, don't upgrade your Java to the latest version (1.8.0_161), which will break Cassandra (it won't start anymore)

mpenet14:02:35

just curious what's the error?

rauh14:02:45

Had to manually download an old JDK and run update-alternatives on Ubuntu. That fixed it.
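For anyone hitting the same problem, a sketch of that workaround on Ubuntu (the JDK version and install path here are illustrative assumptions, not from the original message):

```shell
# Register a previously working JDK with the alternatives system
# (path, version, and priority are examples)
sudo update-alternatives --install /usr/bin/java java /opt/jdk1.8.0_152/bin/java 100

# Interactively pick which registered java to use
sudo update-alternatives --config java

# Confirm the active version
java -version
```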

gerstree15:02:55

@chris_johnson thanks again for responding. I managed to fix it in the meanwhile. I can now confirm that I have the transactor running in FARGATE as well.

Chris Bidler15:02:38

@gerstree That’s great! I may come ask you for help at some point. 😉

gerstree20:02:15

I did a quick write-up with the most important pieces. Hope this helps others. The blog post will come online shortly and I will send you the link

gerstree15:02:40

I will write up a blog post about it and I'm happy to share our CloudFormation template

jjfine15:02:40

hey, is there a way to view peer metrics on one that's running, but doesn't have a metrics callback configured? will setting datomic.metricsCallback on a running peer do anything?
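For context, the metrics callback is normally set as a JVM system property at peer startup, naming a Clojure function to invoke; a typical invocation looks like this (the namespace and function name are illustrative assumptions):

```shell
# Peer JVM started with a metrics callback configured
java -Ddatomic.metricsCallback=my.app.metrics/handler -cp my-app.jar my.app.main
```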

laujensen16:02:55

Is Datomic bothered if I create another table (or several) in the same database in MySQL?

favila16:02:23

No. There's actually no way to even make it use a table name other than datomic_kvs

favila16:02:49

you can restrict the user permissions if you are paranoid

favila16:02:50

the transactor and gc-deleted-dbs only need SELECT, INSERT, UPDATE, DELETE; peers only need SELECT
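A sketch of what those grants could look like in MySQL (the database, user, and host names are assumptions; the `datomic_kvs` table name is per the message above):

```sql
-- transactor (and gc-deleted-dbs) account: full DML on the one table
GRANT SELECT, INSERT, UPDATE, DELETE ON datomic.datomic_kvs TO 'transactor'@'%';

-- peer account: read-only
GRANT SELECT ON datomic.datomic_kvs TO 'peer'@'%';
```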

maleghast16:02:23

General n00b question - when transacting loads of datoms into Datomic in line with a schema, are people building their maps by getting the db/ident keys from the schema, or are you holding your key-names in config separately (for convenience)?

favila16:02:31

I'm not sure I understand? Attributes are usually just literals in your transacting code; there is no need to get ident keys or hold ident keys somewhere.

favila16:02:45

Are you doing something unusual?

maleghast16:02:57

So, I have a schema that looks like this:

[{:db/ident :crop/id
  :db/valueType :db.type/uuid
  :db/cardinality :db.cardinality/one
  :db/doc "The unique identifier for a Crop"
  :db/unique :db.unique/value}
 {:db/ident :crop/common-name
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc "The Common Name for a Crop"}
 {:db/ident :crop/itis-name
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc "The ITIS Name for a Crop"}
 {:db/ident :crop/itis-tsn
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one
  :db/doc "The ITIS TSN (Taxonomic Serial Number) for a Crop"}]

maleghast16:02:11

I am expecting to have a vector of maps that look like this:

{:crop/id #uuid "bd7c5850-051f-5305-88a7-076e53822448"
 :crop/common-name "Cocoa"
...
}

maleghast16:02:06

So when I take in the data I need to build a map using the keys: :crop/id :crop/common-name :crop/itis-name :crop/itis-tsn

maleghast16:02:42

So, if I want a function to build that map, I need the keys from somewhere.

favila16:02:58

yes, in your function that builds the map

favila16:02:21

e.g., let's say your input data is a CSV file

maleghast16:02:25

I was wondering if people generally hold them in config elsewhere as a simple list / vector, or if they munge the schema as an input

favila16:02:40

munge the schema?

favila16:02:16

here's how I would write a function to make a transactable map of data

favila16:02:00

e.g. if the input is a CSV

favila16:02:15

this function takes a csv row and turns it into a transactable map

favila16:02:19

(defn crop-csv->map [[^String id common-name itis-name tsn]]
  {:crop/id (java.util.UUID/fromString id)
   :crop/common-name common-name
   :crop/itis-name itis-name
   :crop/itis-tsn  tsn})

favila16:02:32

then transacting is just collecting these maps:

favila16:02:39

(with-open [csv (clojure.java.io/reader the-csv-file)]
  (->> (clojure.data.csv/read-csv csv)
       (map crop-csv->map)
       (partition-all 100)
       (run! #(deref (datomic.api/transact-async conn %)))))

favila16:02:43

(very simple example)

favila16:02:01

there's no looking up of idents anywhere; they're literals in your code

favila16:02:45

there's a contract between the code and the database it is transacting against, namely that both have the same idents with compatible schemas

favila16:02:36

there are some datomic libs that will allow your code to essentially make "schema preconditions" to ensure that code and db have compatible schema

favila16:02:02

but in 5 years of production datomic I haven't found a need for anything like this

maleghast16:02:04

I see… OK, but that means a separate function for each data source. I am trying to write a generic “put datoms in Datomic” function that gets the db/idents either from a separate config or the schema on use.

favila16:02:09

the db is the source of truth

maleghast16:02:27

Which is what I assumed anyone would do.

favila16:02:55

this function is so simple; what is the value in abstracting it?

maleghast16:02:06

I just was curious as to whether or not people use a separate config value for convenience or just write a function to pull them out of the schema

favila16:02:26

I'm still not sure what you are doing that would make pulling schema a useful exercise

favila16:02:41

are the transformations expressed as data?

maleghast16:02:43

Personally, the value is: 1) why have 25 functions when I can have 3, and 2) the thought exercise of it.

maleghast16:02:15

The Schema is to hand and is a “place” where the db/ident attributes are enumerated.

maleghast16:02:26

So pulling them from the schema is reliable.
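A minimal sketch of what pulling attribute idents from the live schema could look like, assuming the peer API; the helper name `attr-idents` is invented, and it needs a real connection's db value to run:

```clojure
(require '[datomic.api :as d])

;; Hypothetical helper: all installed attribute idents in a given namespace,
;; e.g. (attr-idents (d/db conn) "crop")
;; => (:crop/id :crop/common-name :crop/itis-name :crop/itis-tsn)
(defn attr-idents [db ns-str]
  (->> (d/q '[:find [?ident ...]
              :where
              [?a :db/ident ?ident]
              [?a :db/valueType]]   ; only entities with :db/valueType are attributes
            db)
       (filter #(= ns-str (namespace %)))))
```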

favila16:02:40

uh, sure, but how does the code know the transformation to make?

maleghast16:02:06

In fact, the convenience of having them already extracted into config runs the risk of the schema changing and the config not being updated.

favila16:02:10

unless the entire transform is data driven, in the end you have to write some code that knows what an attribute means

favila16:02:25

(not just what its schema is, but what it means)

favila16:02:00

if you are contemplating such an approach I would definitely put everything you need in the db

maleghast16:02:08

I think that I see what you mean, thanks

favila16:02:06

to make this useful you will probably need to annotate your schema entities with more attributes that are understood by your data-transformation-building layer
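That suggestion could look something like the following: install an annotation attribute first, then assert it on your domain attributes. The `:myapp.import/csv-column` name is an invented example, not a Datomic built-in:

```clojure
;; 1. Install the annotation attribute itself (name is illustrative)
[{:db/ident :myapp.import/csv-column
  :db/valueType :db.type/long
  :db/cardinality :db.cardinality/one
  :db/doc "Zero-based CSV column this attribute is populated from"}]

;; 2. Annotate domain attributes; the import layer can then query the db
;;    for :myapp.import/csv-column values to build its row->map transform
[{:db/ident :crop/common-name
  :db/valueType :db.type/string
  :db/cardinality :db.cardinality/one
  :myapp.import/csv-column 1}]
```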