Fork me on GitHub

How can I query for the x most recently created entities (in my case, chat messages) without pulling in all of them, sorting them on the client, and then doing a take?


it's a tough problem, because you can't traverse the transaction log in reverse order - you can go back some arbitrary period and walk forwards, and keep taking chunks like that until you've found PAGE_SIZE
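a hedged sketch of that chunked walk, using d/log and d/tx-range (the function name, the window sizes, and the doubling strategy are my own assumptions, not from the thread):

```clojure
(require '[datomic.api :as d])

;; Hypothetical sketch: widen the look-back window until we have at
;; least page-size transactions, then keep only the newest ones.
(defn page-of-txes [conn page-size]
  (let [log (d/log conn)
        t   (d/basis-t (d/db conn))]
    (loop [window 1024]
      (let [start (when (> t window) (- t window))        ; nil = from the beginning
            txes  (vec (d/tx-range log start nil))]       ; nil end = up to now
        (if (or (>= (count txes) page-size) (nil? start))
          (take-last page-size txes)   ; newest page-size txes, oldest-first
          (recur (* 2 window)))))))
```

note this still walks forwards within each window -- it only bounds how much you read, which is the point robert is making.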


if you're modelling a new system with an empty db, then you can save an ever-decrementing value with all the things you want to walk backwards in this way, because then you can take advantage of d/datoms to walk that index in sorted (incrementing) order -- giving you a naturally reversed index to traverse
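for reference, such an attribute just needs to be an indexed long -- a minimal schema sketch (the :chat-event/id name is borrowed from code later in the thread; the classic list-form schema is an assumption about the Datomic version in use):

```clojure
;; minimal schema sketch for a decrementing, walkable id attribute
[{:db/id                 #db/id[:db.part/db]
  :db/ident              :chat-event/id
  :db/valueType          :db.type/long
  :db/cardinality        :db.cardinality/one
  :db/index              true                  ; required for d/datoms :avet walks
  :db.install/_attribute :db.part/db}]
```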


happy to discuss in more detail, @magnars, because it's an interesting problem that i'm wide open to solving better 🙂


ah, that's an interesting solution. I'll give that a shot. Thanks again, Robert. 🙂


i'd love to hear how it goes, if you're amenable!


I'll let you know. 🙂


@jared i now see logs in S3, but no transactor metrics in CloudWatch. There must be a step missing or something in the documentation. The only thing i can think of is that ensure-transactor is calling out to AWS and setting something up? I'm not using EC2; i have 'on premise' Cassandra storage and a transactor running with:

aws-s3-log-bucket-id=ice-dev-transactor
aws-cloudwatch-region=region=eu-west-1
aws-cloudwatch-dimension-value=ice-dev-transactor

The policies are set up as documented (PutObject, and PutMetricData, PutMetricDataBatch) in Security Credentials -> Policies. I see the logs in S3 and i see the S3 metrics in CloudWatch (PutMetricData and BucketSizeBytes), but no transactor metrics.


@drankard do you see a HeartbeatMsec metric in the transactor logs? or any of the metrics IN the transactor logs?


To see your Transactor's logs:
1. Go to the S3 console
2. Select your log bucket (see the transactor properties file output by the bin/datomic ensure-transactor command; it contains the bucket name)
3. Drill down in the directory hierarchy to find the .zip'd log files


nope, no metrics


i'm unable to run ensure-transactor, but i added the properties manually


ensure-transactor java.lang.IllegalArgumentException: No method in multimethod 'ensure-transactor*' for dispatch value: :cass at clojure.lang.MultiFn.getFn( at clojure.lang.MultiFn.invoke( ...


If you do not have metrics in your transactor logs, then you won't see any metrics in CloudWatch. That indicates to me that your transactor is not up. Additionally, wherever (in whatever environment) your transactor is running, it needs AWS access keys. Those access keys have to be for the user you have granted permissions to in AWS.


@robert-stuttaford Using an indexed attribute with a declining value to find the x last entities seems to be working fine. 🙂

(defn next-chat-event-id [db]
  (if-let [datom (first (d/datoms db :avet :chat-event/id))]
    (dec (nth datom 2))))

(defn get-recent-events [db num]
  (->> (d/datoms db :avet :chat-event/id)
       (take num)
       (map (fn [[eid _ _ tx]]
              (get-event (d/entity db eid)
                         (:db/txInstant (d/entity db tx)))))))
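a hedged usage sketch of the first function above -- conn and the :chat-event/text attribute are my own stand-ins, not from the thread:

```clojure
;; hypothetical usage: each new event gets the next (lower) id, so the
;; :avet walk of :chat-event/id naturally yields newest-first
@(d/transact conn
             [{:db/id           (d/tempid :db.part/user)
               :chat-event/id   (next-chat-event-id (d/db conn))
               :chat-event/text "hello"}])   ; :chat-event/text is a made-up attribute
```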


(d/datoms) is powerful stuff!


does anyone else find converting dates to instants to be a little tedious? Coming from ORMs, I'm used to passing a string value and having things be converted automatically. The case I have is a datepicker component on the frontend, and saving its value in Datomic. I'm tempted to just store it as a timestamp and have the type be a long... Since I will be living with this decision for years to come, is this a bad idea?


Is your datepicker on the frontend sending longs, tho? Wouldn't you have to convert the string either way?


well... converting is more convenient at the datepicker level than in some backend function that updates the database -- which may or may not have the date attribute passed to it


so... yes, in JavaScript, before the update API call is made, convert it to a long, and parse from long when setting its value


I think I would go for the proper data type in the db over a little convenience.


cool thanks for the advice. I'll go for the proper data type then 🙂
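for what it's worth, a hedged sketch of going from a datepicker string to a proper instant on the backend, using clojure.instant from the standard library (the date value is made up):

```clojure
(require '[clojure.instant :as inst])

;; "yyyy-MM-dd" string -> java.util.Date,
;; which Datomic stores as :db.type/instant
(inst/read-instant-date "2016-05-01")
;; => #inst "2016-05-01T00:00:00.000-00:00"
```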


fantastic, @magnars 🙂 i'm working on a tx-by-tx rebuild of our prod database (north of 40 million txes so far), and i'm definitely going to include a decrementing index with the new data where necessary


oh man! Are you worried about the 10 billion datom limit at all?


i am, a little. we need to make another ~165 copies of our current database to reach that


this is why i want a codebase that can rebuild the db, given rules for each transaction shape


so that we can shard things later on


Aye, makes sense.


admittedly, a lot of the data in our db right now is trash. we made so many beginner mistakes in the first couple years 🙈


which is another reason for the rebuild


Haha, I bet you're not alone in that.


i've been storing cookies in my datomic DB.. i guess it's time to rethink that


a selective database rebuild would be incredibly useful, for almost every production user of datomic


what's the preferred way to unparse the instants... i've been so confused about these different date classes, like Joda vs clojure.instant... Is there a simple way to take the instant and return "yyyy-MM-dd"? clj-time seems to want a Joda instance, so do you convert from instant to Joda, then use clj-time on it?


clj-time.coerce/from-date clj-time.coerce/to-date gets you 80% there
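a hedged sketch of the full round trip (the function name is my own; the pattern matches the question above):

```clojure
(require '[clj-time.coerce :as c]
         '[clj-time.format :as f])

;; Datomic instant (a java.util.Date) -> Joda DateTime -> "yyyy-MM-dd"
(defn unparse-date [^java.util.Date d]
  (f/unparse (f/formatter "yyyy-MM-dd") (c/from-date d)))
```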


perfect, exactly what i was looking for 🙂


I ran into an error with fulltext search. (fulltext $ :recall/search-text ?search) errors when ?search is RAGE3:10" (note the trailing " character). The " in the search criteria is the culprit. Is this a bug?