2016-08-31
Upon further thought, that’s probably abysmal performance unless you had something to filter on before evaluating the matching of the slug (or the db is extremely small).
I am wondering what the best practice in Datomic is for doing autoincrementing sequences. Currently I have this implementation, using a db function to increment a noHistory field.
(def gen-id
  (d/function
    '{:lang "clojure"
      :params [db entity-type entity-id]
      :code (let [seq-ent (d/entity db [:sequence/name entity-type])
                  _ (cond
                      (empty? seq-ent)
                      (throw (Exception. (str "Sequence with name " entity-type " not found in database. Are the sequences initialized?")))
                      (nil? (:sequence/sequence seq-ent))
                      (throw (Exception. (str "Sequence with name " entity-type " is nil. Are the sequences initialized?"))))
                  new-value (inc (:sequence/sequence seq-ent))]
              [[:db/add (:db/id seq-ent) :sequence/sequence new-value]
               [:db/add entity-id entity-type new-value]])}))
It can then be used like this when creating the transaction data:
(let [tempid (d/tempid :db.part/user)]
  [[:fns/gen-id :task/task-id tempid]
   {:db/id tempid
    :task/status "PENDING"}])
where :fns/gen-id is the function and :task/task-id is the noHistory attribute that is incremented. My problem is that I want to add several incremented ids in a transaction, but adding several just means that they all see the same old value of the :task/task-id attribute and increment to the same new value, meaning that all the entities get the same id.
So I want to be able to create one transaction, that adds several entities, each of which is assigned a (different) incrementing id. Any ideas?
Just pinging @bkamphaus for when he gets up 🙂
Ben doesn’t work at Cognitect anymore, but perhaps @marshall can help
trying to set up the transactor on AWS proves to be problematic. The AutoScaling group keeps cycling the transactor. I followed the AWS setup docs and still get the same results (had a similar issue with terraform before). I see the DynamoDB table, roles, scaling group and launch configuration. But it keeps cycling transactors because it claims they fail health checks. When trying to connect using the shell:
datomic % uri = "datomic:ddb://eu-west-1/temporary/mydb";
datomic % Peer.createDatabase(uri);
// Error: // Uncaught Exception: bsh.TargetError: Method Invocation Peer.createDatabase : at Line: 2 : in file: <unknown file> : Peer .createDatabase ( uri )
Target exception: java.lang.IllegalArgumentException: :db.error/read-transactor-location-failed Could not read transactor location from storage
perhaps dynamo aws permission issues?
well roles are ensured
they do exist
see them in iam console
I mean maybe your peer (not the transactor) can't connect to dynamodb
i did open cidr 0.0.0.0 so i would expect it to work...
so basically i do have the dynamo table for the transactor, and want to connect from the datomic shell on my local machine
thing that worries me is constant cycling of transactors
seems like they do not initialize properly
so they do not drop information about transactor into dynamo so shell peer can not connect
on my ec2 transactor instance this is my log
user-data: inflating: datomic-pro-0.9.5394/LICENSE-CONSOLE
user-data: inflating: datomic-pro-0.9.5394/datomic-pro-0.9.5394.jar
user-data: inflating: datomic-pro-0.9.5394/README-CONSOLE.md
user-data: pid is 1557
user-data: ./startup.sh: line 26: kill: (1557) - No such process
/dev/fd/11: line 1: /sbin/plymouthd: No such file or directory
initctl: Event failed
Stopping atd: [ OK ]
Shutting down sm-client: [ OK ]
Shutting down sendmail: [ OK ]
Stopping crond: [ OK ]
it definitely is not looking good
it spins up and then a fatal error shuts instance down
and what is /sbin/plymouthd ?
and in the aws console, in my dynamo table, there is no data whatsoever… that is why the peer can not connect i suppose.
could someone take a peek at a correct transactor logs and paste what it does right after unpacking of datomic?
as a side note it is definitely not a license issue, i just mangled the license key on purpose and same results followed
same behaviour on us-west-1 region 😞
@karol.adamiec are you sure you are not exceeding the allotted number of processes for your license? Do you have transactor logs so I can verify it is failing on a health check to heartbeat? Finally, I assume you reviewed this page: http://docs.datomic.com/aws.html
@karol.adamiec other thoughts: Are the memory settings valid (e.g. does the heap fit within the instance size)? Test that the license key is valid by launching the transactor locally (not just garbling the key). Are you using a supported AWS instance type (writable file system)? Per the docs, supported instance types can be found in a Datomic-generated CloudFormation template under the key “AWSInstanceType2Arch”.
heartbeat is not on
t1.micro
apparently supported
will follow on rest of hints shortly 🙂, thx
If the transactor fails to write a heartbeat at launch, the docs indicate you should verify that the storage connection information Datomic has can actually be used to reach the storage.
i will clean all infra get fresh datomic zip and follow very carefully the docs. might be some detail somewhere
second time the charm 🙂
I think t1.micro is not officially supported. People have done it, and I am not sure what they configured to get it to work
sure thing
i hoped to shortcut the setup with terraform template but time to go elbows deep would come anyway….
@karol.adamiec Yes, @jaret is correct - t1.micro is not officially supported. Default settings won’t work with that instance type. i’d recommend getting it running on a larger (supported) instance type (i.e. these: http://docs.datomic.com/capacity.html#dynamodb) then you can go back and tweak configuration to get it running on the instance type that suits your needs
thanks @marshall will do and report back
Ok, will try that. Thanks!
CompilerException java.util.concurrent.ExecutionException: java.lang.IllegalArgumentException: :db.error/not-a-db-id Invalid db/id: #db/id[:db.part/mypartition -10100001], compiling:
Alternatively, change the ident in your partition definition to:
:db/ident :db.part/mypartition
Now it works, thanks! Which is more correct, to use “:mypartition” or to use “:db.part/mypartition”?
Ok, thx.
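For context, a minimal sketch of installing a partition with that ident. This follows the standard Datomic partition-install pattern and is not taken from this exchange:
;; sketch: transact this map to install the partition; afterwards
;; #db/id[:db.part/mypartition] and (d/tempid :db.part/mypartition) will resolve
{:db/id #db/id[:db.part/db]
 :db/ident :db.part/mypartition
 :db.install/_partition :db.part/db}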
i had success with us and eu regions for datomic. previous issues most likely due to instance type. now using m3.medium.
what is the cheapest supported transactor instance type so one can spin dev/qa environments?
@karol.adamiec Some discussion of this here: https://groups.google.com/forum/#!searchin/datomic/micro%7Csort:relevance/datomic/9q-HGWulKwo/U4GwZXI2DQAJ http://stackoverflow.com/questions/26102584/decrease-datomic-memory-usage/26108628#26108628
thanks @marshall very helpful links 🙂
has anyone tried @mrmcc3's terraform module? i would be glad for the syntax for the license. The one from the email text file pasted into the tf file complains about escaping. Are the trailing \ in the license needed?
hmpphhhhh. Long story short: most likely it was a license encoding issue. Would be nice to get some sensible output from transactor startup on AWS in such cases!!
@casperc regarding your incrementing function from this morning - if you’re calling the transaction function more than once (i.e. on multiple entities), the value you get from the current db (in your call to d/entity) will be the same for all those calls - the A in ACID means there is no ‘order’ within a transaction, it all happens at the same time. If you need to have dependent ordering, you could either write a transaction function that handles the multi-entity case manually (i.e. you resolve which of the entities gets what values) or break it up into multiple transactions
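To make the first option concrete, here is a minimal sketch of a transaction function that handles the multi-entity case itself. The name :fns/gen-ids, the multi-id parameter list, and the consecutive-values approach are assumptions for illustration, not code from this discussion:
;; sketch (assumed helper): reads the counter once, then hands out consecutive
;; values itself, so every entity in the transaction gets a distinct id
(def gen-ids
  (d/function
    '{:lang "clojure"
      :params [db entity-type entity-ids]
      :code (let [seq-ent (d/entity db [:sequence/name entity-type])
                  current (:sequence/sequence seq-ent)
                  _       (when (nil? current)
                            (throw (Exception. (str "Sequence " entity-type " not initialized"))))
                  n       (count entity-ids)
                  ;; values current+1 .. current+n, one per entity
                  values  (range (inc current) (+ current n 1))]
              (into [[:db/add (:db/id seq-ent) :sequence/sequence (+ current n)]]
                    (map (fn [eid v] [:db/add eid entity-type v]) entity-ids values)))}))
;; usage sketch: a single call covers all the new entities in one transaction
(let [t1 (d/tempid :db.part/user)
      t2 (d/tempid :db.part/user)]
  [[:fns/gen-ids :task/task-id [t1 t2]]
   {:db/id t1 :task/status "PENDING"}
   {:db/id t2 :task/status "PENDING"}])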
You probably want something like this
(sort (d/q '[:find [?ident ...]
             :where
             [?e :db/ident ?ident]
             [_ :db.install/attribute ?e]]
           db))
(taken from https://github.com/Datomic/day-of-datomic/blob/master/tutorial/schema_queries.clj)
Conceptually, your query is like saying: “Give me all the datoms in the database, and return a set of the attribute of each”.
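For illustration, a sketch reconstructing the shape of query being described (not the query actually posted): it touches every datom in the database and collects each attribute's ident, which is correct but much slower than driving the query from :db.install/attribute as above.
;; full-scan sketch: [_ ?a] matches every datom, binding only its attribute
(d/q '[:find [?ident ...]
       :where
       [_ ?a]
       [?a :db/ident ?ident]]
     db)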
Great way to look at the problem @jgdavey! Thanks.
I’m still new to Datomic, but I really like the design and the simplicity of it!
I’m a "SQL pro", but I need some more days to adjust the mindset to Datalog!
Thought the order only affected the performance?!
(d/q '[:find ?attr ?type ?card
       :where
       [_ :db.install/attribute ?a]
       [?a :db/valueType ?t]
       [?a :db/cardinality ?c]
       [?a :db/ident ?attr]
       [?t :db/ident ?type]
       [?c :db/ident ?card]]
     db)
=>
#{[:db.alter/attribute :db.type/ref :db.cardinality/many]
[:db.install/function :db.type/ref :db.cardinality/many]
[:db.install/valueType :db.type/ref :db.cardinality/many]
[:db.excise/attrs :db.type/ref :db.cardinality/many]
[:db/ident :db.type/keyword :db.cardinality/one]
[:db.excise/before :db.type/instant :db.cardinality/one]
[:db/index :db.type/boolean :db.cardinality/one]
[:db/fn :db.type/fn :db.cardinality/one]
[:db/fulltext :db.type/boolean :db.cardinality/one]
[:db/unique :db.type/ref :db.cardinality/one]
[:db.excise/beforeT :db.type/long :db.cardinality/one]
[:db.sys/partiallyIndexed :db.type/boolean :db.cardinality/one]
[:db/isComponent :db.type/boolean :db.cardinality/one]
[:db/lang :db.type/ref :db.cardinality/one]
[:db.sys/reId :db.type/ref :db.cardinality/one]
[:db.install/partition :db.type/ref :db.cardinality/many]
[:db/txInstant :db.type/instant :db.cardinality/one]
[:db/valueType :db.type/ref :db.cardinality/one]
[:db/cardinality :db.type/ref :db.cardinality/one]
[:db/excise :db.type/ref :db.cardinality/one]
[:db/doc :db.type/string :db.cardinality/one]
[:fressian/tag :db.type/keyword :db.cardinality/one]
[:db/noHistory :db.type/boolean :db.cardinality/one]
[:db.install/attribute :db.type/ref :db.cardinality/many]
[:db/code :db.type/string :db.cardinality/one]
[:country/name :db.type/string :db.cardinality/one]}
(d/q '[:find ?attr ?type ?card
       :where
       [?a :db/valueType ?t]
       [_ :db.install/attribute ?a]
       [?a :db/cardinality ?c]
       [?a :db/ident ?attr]
       [?t :db/ident ?type]
       [?c :db/ident ?card]]
     db)
=>
#{[:db.alter/attribute :db.type/ref :db.cardinality/many]
[:db.install/function :db.type/ref :db.cardinality/many]
[:db.install/valueType :db.type/ref :db.cardinality/many]
[:db.excise/attrs :db.type/ref :db.cardinality/many]
[:db/ident :db.type/keyword :db.cardinality/one]
[:db.excise/before :db.type/instant :db.cardinality/one]
[:db/index :db.type/boolean :db.cardinality/one]
[:db/fn :db.type/fn :db.cardinality/one]
[:db/fulltext :db.type/boolean :db.cardinality/one]
[:db/unique :db.type/ref :db.cardinality/one]
[:db.excise/beforeT :db.type/long :db.cardinality/one]
[:db.sys/partiallyIndexed :db.type/boolean :db.cardinality/one]
[:db/isComponent :db.type/boolean :db.cardinality/one]
[:db/lang :db.type/ref :db.cardinality/one]
[:db.sys/reId :db.type/ref :db.cardinality/one]
[:db.install/partition :db.type/ref :db.cardinality/many]
[:db/txInstant :db.type/instant :db.cardinality/one]
[:db/valueType :db.type/ref :db.cardinality/one]
[:db/cardinality :db.type/ref :db.cardinality/one]
[:db/excise :db.type/ref :db.cardinality/one]
[:db/doc :db.type/string :db.cardinality/one]
[:fressian/tag :db.type/keyword :db.cardinality/one]
[:db/noHistory :db.type/boolean :db.cardinality/one]
[:db.install/attribute :db.type/ref :db.cardinality/many]
[:db/code :db.type/string :db.cardinality/one]
[:country/name :db.type/string :db.cardinality/one]}
If you @jaret have a look at http://www.learndatalogtoday.org/chapter/4, tab ”2” at the bottom, and then click the link ”I give up”, then <Run Query> works just fine, but if you swap the first and the second where clause, then it doesn’t work. I have the same behavior in my database.
0.9.5372
datomic-pro
But the query should always return the same result, regardless of the order of the predicates?
that would be my expectation. Same set of results. Can you verify that you see the same count when you run against an empty database?
I got the same result on an empty database.
Ok, thanks!
@teng Were you running against a restore of a db of some kind? If so, I do have an explanation. Datomic 0.9.5206 included a fix for some bootstrap datoms not being in the VAET index. If you run the queries against a database created by a version older than that, the result you see is expected.
I will look into creating a new version of the mbrainz example database that doesn’t exhibit this behavior