
Hi, any opinions on using fully namespaced spec'ed keywords as Datomic attributes? Like :org.domain.entity/attribute?


as opposed to what alternative?


just :entity/attribute.


One issue that I’ve found is trying to spec ref attributes, where I can have the actual entity or a lookup ref when transacting. It seems a bit weird to have the domain this close to a db spec.
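To illustrate the awkwardness (attribute names here are made up, not from the actual schema): at transaction time a ref-valued attribute can hold an entity id, a lookup ref, or a nested entity map, so a "domain" spec ends up encoding Datomic transaction-DSL concerns:

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical ref attribute :order/customer.
(s/def ::lookup-ref (s/tuple keyword? some?))
(s/def :customer/email string?)
(s/def ::customer-entity (s/keys :req [:customer/email]))

;; The value can be an entity id, a lookup ref, or a nested map:
(s/def :order/customer
  (s/or :eid        pos-int?
        :lookup-ref ::lookup-ref
        :entity     ::customer-entity))

(s/valid? :order/customer [:customer/email "a@b.com"]) ;; => true
```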


can u give a specific example of that spec situation? this sounds like an interesting problem. i was also wondering about what is a good naming scheme for specs. for now i just went with the problem-domain-entity-name/attr style, and my namespaces dealing with those attributes are <company-short-name>.<project-name>.data.<problem-domain-entity-name> or maybe don't even have the .data part if the project is simple enough.


eg:
company internet domain:
project name: rule-crib
domain entity terminology: financial transaction
domain entity name: txn (as opposed to tx, which we kept to refer to Datomic transactions)
NS: gini.rule-crib.txn
Datomic attribute: :txn/descr

I've also tried to just have gini.txn, but it's easy to get lost between projects within a monorepo, where different projects might deal with different aspects of the same domain entity...


Imo it’s not appropriate to spec txdata for d/transact. The spec for transaction data is the spec for the transaction DSL, not for its keys


So a transaction map is a s/map-of, not an s/keys


the spec for keywords should be what you would get in pulls
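A minimal sketch of that distinction, with made-up attribute names: the tx map is spec'ed generically as qualified keys to values, while the per-attribute specs describe the shape a pull would return:

```clojure
(require '[clojure.spec.alpha :as s])

;; Transaction data is a DSL: a tx map is qualified keys to values,
;; possibly including :db/id, so spec it as s/map-of, not s/keys...
(s/def ::tx-map (s/map-of qualified-keyword? some?))

;; ...while attribute specs describe what you'd get from a pull:
(s/def :txn/descr string?)
(s/def :txn/amount number?)
(s/def ::txn (s/keys :req [:txn/descr :txn/amount]))

(s/valid? ::tx-map {:db/id "tmp" :txn/descr "coffee" :txn/amount 3.5}) ;; => true
(s/valid? ::txn {:txn/descr "coffee" :txn/amount 3.5})                 ;; => true
```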


Cool, I’m sharing this opinion now. What do you think about the namespaces? There is a lot of boilerplate when converting between them; having the same attributes internally and in Datomic solves some of the issue.

Wire :attribute <-> Internal :org.domain.entity/attribute <-> Datomic :entity/attribute
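The conversion boilerplate in question looks roughly like this (the mapping and key names are illustrative, not from the actual codebase):

```clojure
(require '[clojure.set :as set])

;; Hypothetical mapping between internal and Datomic attribute names;
;; this is the kind of translation that sharing keywords would remove:
(def internal->datomic
  {:org.domain.customer/email :customer/email
   :org.domain.customer/name  :customer/name})

(defn ->datomic  [m] (set/rename-keys m internal->datomic))
(defn ->internal [m] (set/rename-keys m (set/map-invert internal->datomic)))

(->datomic {:org.domain.customer/email "a@b.com"})
;; => {:customer/email "a@b.com"}
```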


I prefer sharing keywords, but keyword length fatigue is real


I’d drop “org” though, unless you already have a strong code convention around that


If you can somehow maneuver your data keyword namespaces to be actual namespaces (maybe put specs there? idk) your life will be much better, or at least not have as much keyword typing in it


I’m doing that, the actual length is not an issue (only in repl things get polluted sometimes). Our convention is to wire unnamespaced between services, but it’s not hard to namespace all things, the issue is more between internal and datomic schemas.


About the org, we thought about using it because it would be clearer when it’s external data from a provider, for example.


:org.domain.otherorg.entity/attribute maybe?


Is there a way to get the database URI back from a datomic.Connection (or datomic.peer.Connection)?


If not, is there a reason for it? Eg. not to keep connection secrets for a DynamoDB connection around for long (when someone is using the not-recommended non-role based uri)? It would be convenient to obtain a connection URI back from a conn, when creating a Datomic component using some state management library, like component / mount / juxt/clip. It could simplify the stop operation, which can automatically clean up in-memory test databases with random names for example...


morning folks - just testing my thinking... if I accidentally deferred giving my Datomic compute instance an application name during template setup etc (working through the ion tutorial) - would I be right in thinking there should be a way to give a value to that parameter in the AWS console? Thanks.


It will automatically get the application name of the compute group, but you can also change it by doing an “update stack” in the cloudformation console


this may be a dumb question but if you're querying with a vector of values is there a way to count how many values in a multi-valued attribute you matched on?


you can in the result with distinct or count-distinct depending on how you aggregate


you can’t within the query easily without a function call or subquery


```clojure
[:find ?e (count-distinct ?v)
 :in $ [?v ...]
 :where [?e :attr ?v]]
```


I want to use Java 11 lambda ions in Datomic Cloud. My IDE and libraries are set up for Java 11 on my local machine, but it still deploys Java 8. Any suggestions?


"Lambda ions" don't actually run in AWS Lambdas


the lambdas are cookie-cutter forwarding proxies that talk to the compute group, which actually runs your code


that way your code runs with the full database essentially "local"


once a machine is hot, it scorches


Not sure what the timeline is on Java 11 support. I'm sure it's on the roadmap but not sure where


no problem


not sure if there is a diagram of how this all works somewhere


diagram near the top


incidentally I have been working on an AWS Lambda custom runtime for Clojure -- as a separate effort


can run Java 14 in the Lambda, where your function handler is an ordinary var that gets called with ordinary maps as arguments, and returns a map


can set clojure.core/merge as a valid lambda handler


it's like 100 LOC + 25MB for the JVM


no silly macros to do the silly java interop


@hadilsabbagh18 is there something from Java 11 that you really really want to use?


Personally I like the client being available. Cuts down on a lot of deps.


No, I was hoping to get a little more speed out of the JVM. Want to use the latest and greatest before going to Production.