
Anybody run into ion deploys failing silently?


At least I think silently; maybe I'm not looking in the right place, but nothing's showing up in the CloudWatch > datomic-<stack-name> log


hey all -- noob q -- is there a maximum length limit for a :db.type/string value? if so, what is it?


For Cloud, there seems to be a limit of 4096 characters. On-prem doesn't seem to have this limitation.


Thanks @U06B8J0AJ! Indeed we're on on-prem -- I'm guessing it's the size of a SQL blob. If someone could confirm, that would be awesome 🙂


Can I get a conn from a db ?


(d/db conn) using the peer lib

Joe Lane 14:09:15

I think he is asking the other way around.


yep, my bad


(let [db (d/db conn)]
  (d/connect (:id db)))


apparently (:id db) gets the URI


undocumented though, AFAIK


Do the recent Datomic transactor EC2 images enable enhanced networking?


is it a good idea to compose d/as-of with d/since to make queries over a "time range"? does it work with the client API too?


so I can do:
(users-that-changed-name db) => all users that changed their name
(users-that-changed-name (d/as-of db #inst "2018")) => users that changed their name before 2018
(users-that-changed-name (d/since (d/as-of db #inst "2018") #inst "2017")) => users that changed their name during 2017
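The composition above can be packaged as a helper. A minimal sketch, assuming the peer API; `db-between` is a hypothetical name, and the idea is that d/as-of drops everything after the end point while d/since drops everything before the start point, so together they form a window:

```clojure
(require '[datomic.api :as d])

;; Hypothetical helper: restrict a db value to datoms transacted
;; in the window [start, end) by composing as-of and since.
(defn db-between [db start end]
  (-> db
      (d/as-of end)     ; nothing transacted after `end`
      (d/since start))) ; nothing transacted before `start`

;; e.g. only changes made during 2017:
;; (users-that-changed-name (db-between (d/db conn) #inst "2017" #inst "2018"))
```

The client API exposes as-of and since on db values as well, so the same composition should carry over, though I'd verify it against a real connection.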


So, I'm thinking about a thing. It's probably crazy, and if someone can tell me off the bat "hey, that's crazy" I'd appreciate it. I want to analyze log files stored on S3. Amazon provides its own query analysis tool using a Redshift cluster, but you know what would be really cool? Using Datomic as the query engine


Let's assume that I have total flexibility over how the logfiles and S3 buckets are structured




This might be a bit of a pipe dream, but can everyone just stop for a second and consider how cool it would be to make datomic a log aggregator?


I already made a big-data product with Datomic (on-prem). I received some GB of CSV files plus a running SQL database, then I built a pipeline to transact this data into a temporary DB URI. Once I finished this "raw" import, I ran some queries against that db and inserted into the "real" db-uri, which is used by the application. This "real" db grows huge very quickly, so monthly I create a new one, and the application sometimes needs to query the current db plus all the older ones. A possible optimization is to use the CSV + SQL + older dbs to create the "current" db.
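For the "query the current plus all older dbs" step, one hedged sketch (peer API; the function and attribute names here are illustrative, not from the thread) is to run the same query against each db value and merge the results:

```clojure
(require '[datomic.api :as d])

;; Hypothetical: collect distinct :user/name values across the
;; current monthly db plus all of the older ones.
(defn names-across-dbs [dbs]
  (into #{}
        (mapcat (fn [db]
                  (d/q '[:find [?name ...]
                         :where [_ :user/name ?name]]
                       db)))
        dbs))

;; (names-across-dbs (cons current-db older-dbs))
```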


Huh, fascinating. It might take me a while to fully parse that, but I'm pretty sure that sounds like what I had in mind. Hmm. I think what I would do is possibly use an SNS event to trigger a transform on the raw S3 log files. It would reformat them into something that could be ingested by Datomic?


But either way, the basic idea is to have a datomic view of imported raw files


I mean, the model intuitively seems to map so well! It is, after all, a log aggregator by nature

Daniel Hines 20:09:32

I'd like to make what is essentially a to-do list app to coordinate daily jobs between a team of people, and I want to use Datomic, but I'm not sure how to model my scenario in Datomic. Every day, users need to check off whether they've completed each task on the list, but the list will evolve over time. When, say, an admin user adds another item to the daily to-do list, should I then update the schema to include that item as an attribute belonging to a to-do list, or is there a better way to model it?


No. List items are data, not schema. Your schema should define something like :todo/name as a string and when an admin adds another item, transact {:todo/name <another item>}.
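A minimal sketch of that schema, assuming the peer API; the :todo/* attribute names and the sample item are illustrative, not prescribed by Datomic:

```clojure
;; Schema defines the *shape* of a todo item, once.
(def todo-schema
  [{:db/ident       :todo/name
    :db/valueType   :db.type/string
    :db/cardinality :db.cardinality/one}
   {:db/ident       :todo/completed
    :db/valueType   :db.type/boolean
    :db/cardinality :db.cardinality/one}])

;; Adding an item is an ordinary transaction, not a schema change:
;; @(d/transact conn [{:todo/name      "Water the plants"
;;                     :todo/completed false}])
```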

Daniel Hines 04:09:43

Ok, thanks, that makes sense. So a list item would be an entity with a :todo/name string and a :todo/completed boolean, and a todo list could be another entity with a cardinality-many attribute pointing to its items, so that, given a todo list, I could pull all its items. I could also imagine doing a full-text search of all todos and wanting to know which list a given result belonged to; do I also need a cardinality-one attribute on the list item that points back to the list it belongs to?


Back references are just as fast as forward references. If you're querying, just ask [?list :list/todo ?todo]. With that in mind, if you want the constraint that each todo is a member of only one list, I would instead have a cardinality-one reference on the todo (`:todo/list`) and query for all todos pointing to a given list when you need it. If you're pulling, you can back-reference by saying (pull db [:list/name {:todo/_list [:todo/name :todo/completed]}] <list id>). Look up :as in the pull docs if you want to rename the back-ref. Are you planning on resetting the completed bools daily?
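Spelled out as code, the reverse-reference pull looks like this (a sketch; :todo/_list assumes the cardinality-one :todo/list reference suggested above, and `list-id` stands in for a real entity id):

```clojure
;; Pull a list's todos through the reverse reference.
(d/pull db
        [:list/name
         {:todo/_list [:todo/name :todo/completed]}]
        list-id)

;; The same pull with the back-ref renamed via :as.
(d/pull db
        [:list/name
         {[:todo/_list :as :items] [:todo/name :todo/completed]}]
        list-id)
```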


Say, is there a practical difference between '[:find (count ?e) . :in ...] and '[:find (clojure.core/count ?e) . :in ...]? I'm messing around with backtick, and it (rather irritatingly) namespace-resolves all my symbols


I'm assuming the aggregation function looks for the literal symbol count; just curious if anyone knows whether that's true or not
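One way to settle it empirically (a sketch, assuming the peer API with a `db` value in scope) is to run both forms against the same database and compare:

```clojure
;; Compare the built-in aggregate symbol with the fully qualified one.
;; :db/ident is used here only because every Datomic db has such datoms.
(let [q1 '[:find (count ?e) .
           :where [?e :db/ident]]
      q2 '[:find (clojure.core/count ?e) .
           :where [?e :db/ident]]]
  [(d/q q1 db) (d/q q2 db)])
```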