
does anybody know how to express '[:find (?e ...)]' in the query as a map?


nm, forgot to wrap in a vector
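For anyone who hits the same thing later: in the map query form, the value of :find is a vector of find specs, so the collection binding itself has to be wrapped in that outer vector. A sketch (the :user/email attribute is made up):

```clojure
;; List form:
;;   [:find [?e ...] :where [?e :user/email]]
;;
;; Map form -- note the collection binding [?e ...] is itself
;; wrapped in the outer :find vector:
{:find [[?e ...]]
 :where [[?e :user/email]]}
```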


is anyone worried about Datomic Cloud's lack of excision in relation to GDPR taking effect next month? if so, any thoughts or tips for mitigating the situation?


@joshkh Best solution I can think of is 1. refactor your code to store sensitive data in a complementary store 2. manually export all the non-sensitive data from your Cloud deployment and import it into a new Cloud deployment (yes, that will require downtime). Will blog about 1 soon


ah yes, thank you. we've been exploring option 2 as a last resort. the whole situation is a bit frustrating in the sense that option 1 still requires a fair amount of dev work while keeping two databases in sync. just to be clear, you're suggesting that datomic only stores a reference to some user ID in another database, and that's where the personally identifiable information goes?


To be clear, to me these are not 2 options to choose between, but two steps to take in conjunction. Step 1 deals with future data, and step 2 deals with past data


> i've heard conflicting arguments that even maintaining some user id (even if it's only a reference to an external data source) is not within the "spirit" of GDPR.

I think whoever said that has not thought this through from an IT perspective (granted, this is common amongst lawyers). This really sounds like an unreasonable expectation, as it can prevent things like accounting from being done properly, and I'm taking the bet that GDPR will not get enforced to this extent.


ah. fortunately we don't have any past data. it's a brand new project so.. lucky us!


What about using noHistory on the attributes that contain personal information?


without knowing how datomic works under the hood, it's not an option. according to the documentation:

> The purpose of :db/noHistory is to conserve storage, not to make semantic guarantees about removing information. The effect of :db/noHistory happens in the background, and some amount of history may be visible even for attributes with :db/noHistory set to true.


thanks for the suggestion though


It was a question rather than a suggestion (because we're in the same situation), but you've answered it, thanks 🙂


i've heard conflicting arguments that even maintaining some user id (even if it's only a reference to an external data source) is not within the "spirit" of GDPR. anywho, i'd love to read your post when you've finished. where can i find your blog? i'll bookmark it in the mean time.


there has been a feature request since March for targeted excision in Datomic Cloud, but no updates since then. I don't understand why Datomic Cloud doesn't support it when on-prem does


Anyone trying to run dev against Datomic Cloud with the socks connection? For me it stops working all the time, generating timeout exceptions. Is there something I can do to have a better experience during dev?


i have the same problem and have learned just to deal with it

Alex Miller (Clojure team) 18:04:13

I use it all the time and it drops about once per day for me

Alex Miller (Clojure team) 18:04:44

I’m specifically talking about the socks proxy

Alex Miller (Clojure team) 18:04:10

are you saying you see timeouts sending queries / txns / etc?


I'm talking about the socks proxy. I get timeouts when running tests that create the client, create a new database, create a connection, and test stuff. But for me it's more like once every 10 minutes than once a day, which makes for a really bad experience. Restarting the proxy solves the problem, but I would like a smoother experience.


@mynomoto Some folks here discussed using a keep-alive tool of some sort for the proxy, but I can’t recall the specific one they mentioned. I’ll see if I can find it again


@marshall I would appreciate that, thanks!


I could put that script to test the socks proxy in a loop, but I'm not sure if that's a great idea.


i believe autossh was the tool that was discussed


and replacing the ssh command in the provided script with an analogous autossh command


I will check that out, thanks!


I believe `autossh -M 0 -o "ServerAliveInterval 5" -o "ServerAliveCountMax 3" -v -i $PK -CND ${SOCKS_PORT:=8182} [email protected]${BASTION_IP}` was the suggested replacement command. I can’t comment on its specific efficacy, and I wish I could recall who to credit for it


Trying it now, I will report the results later, thanks.


question about ongoing operation of on-prem: do you have AWS AMIs for the default transactor stack available in us-east-2?


That page covers how to run the stack. Using us-east-2 should work fine


I see, I was trying to run a build out of a local copy of datomic-pro-0.9.5372, which obviously predates the existence of us-east-2


symlinks: not always your friends


@marshall is Datomic Cloud just a “managed version” of Datomic on-prem? or is it actually a different product? (as in, a different codebase with some features only available in cloud)

Alex Miller (Clojure team) 19:04:52

it is a different product and mostly different code base with a totally different architecture


@hmaurer it is a different product. it uses the same data model, but has a strictly different architecture and use of storage (among other things)


@chris_johnson Ah, that would do it.


@alexmiller you type faster than i do


@marshall do you intend to keep the features available on on-prem the same as on Cloud? or could we end up in a situation where an application using Datomic Cloud cannot migrate to on-prem easily (and is therefore “locked in” to AWS)?


Also, from what I understand transaction functions are not available on Cloud (yet). Is there another way to safely enforce invariants?


@hmaurer feature dev will continue on both products, but we can’t guarantee every feature will come to both products


we do intend to support the same API (Client) for both


We are working on options for the problems solved with txn functions (invariants being one of those)


currently you can use the built-in cas functionality to force atomicity on certain kinds of updates
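To illustrate, a :db/cas op takes the entity, attribute, expected current value, and new value; the whole transaction fails if the comparison fails. A minimal sketch (conn, account-eid, and :account/balance are hypothetical):

```clojure
;; Atomically change a balance from 100 to 150. If some other
;; transaction changed the value first, this transaction aborts
;; and can be retried with a re-read value.
(d/transact conn
  {:tx-data [[:db/cas account-eid :account/balance 100 150]]})
```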


Great to hear. Also, can we expect to see peer support on Cloud in the near-ish future? And for txn functions, can we expect to hear more about this before the end of the year?


Yep, but cas is quite limiting from what I understand


it does work in some cases though


I don’t have a timeline for any features


Is peer support on Cloud something you plan to include at least?


We are interested in solving the problems that peer helps with (i.e. code/data locality), but have not determined if there will be “Peers” per se in Cloud


or if there will be other/preferable ways to achieve those goals


I see. Last but not least: do you allow querying the indexes directly in Cloud? And do you allow listening to the transaction log?


there is not currently a tx-listener feature (like the tx-report-queue) in Cloud. For most use cases polling should be totally fine, but we’re interested in feedback here as well


@marshall do you mean polling using tx-range?


potentially. depends what you’re looking for


if you want to know if anything has been updated, you could just inspect the basis-t


building a generic worker which listens to the transaction log and performs some task


e.g. keep an elasticsearch instance in sync


but, yes, you could use tx-range to get latest txns as well
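A rough sketch of such a poller with the Client API (process-tx! and the stored last-t are placeholders for whatever the worker does, e.g. syncing Elasticsearch):

```clojure
;; Fetch transactions after the last one we processed, hand
;; each to a worker fn, and return the new high-water mark.
(defn poll-log! [conn last-t process-tx!]
  (let [txs (d/tx-range conn {:start (inc last-t)})]
    (doseq [tx txs]
      (process-tx! tx))
    (if (seq txs)
      (:t (last txs))
      last-t)))
```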


Sorry, yet another question: is it possible to use Cloud from outside AWS? e.g. for an app on Heroku to connect to a Cloud setup


without heroku private spaces


yes, it is possible, but you will have to handle permissions/setup for the communication channel


by default Datomic Cloud runs in a private VPC in AWS


that doesn’t allow traffic in from outside (other than via the bastion)


so you’ll have to configure the communication channels to allow that yourself (with the associated risks of having your DB available on the internet)


the advantage of accessing from within AWS (same or different VPC) is you can use IAM roles and security groups to control that access very specifically


I see. Using it within AWS seems preferable indeed; I just wanted to know if the option was there to use it remotely.


Thank you 🙂


I'm doing a transaction and got a server error:

datomic.client.api/transact                      api.clj:  268
                                    datomic.client.api/ares                      api.clj:   52
                                       clojure.core/ex-info                     core.clj: 4739
clojure.lang.ExceptionInfo: Server Error
    data: {:datomic.client-spi/context-id "efd6cf48-685f-4055-b56e-1242be7ac557",
           :cognitect.anomalies/category :cognitect.anomalies/fault,
           :cognitect.anomalies/message "Server Error",
           :dbs [{:database-id "d91f432b-1821-4213-8b80-e8d59a4e7b8c",
                  :t 5, :next-t 6, :history false}]}


How do I find what is wrong?
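Since the client throws an ExceptionInfo, the anomaly map above is retrievable via ex-data; a sketch:

```clojure
;; Catch the client's ExceptionInfo and pull out the
;; cognitect.anomalies keys for logging/inspection.
(try
  (d/transact conn {:tx-data tx-data})
  (catch clojure.lang.ExceptionInfo e
    (let [anomaly (ex-data e)]
      (println (:cognitect.anomalies/category anomaly))
      (println (:cognitect.anomalies/message anomaly)))))
```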


autossh works great for me! No more timeouts since I started using it, which makes the development flow way better.


About the error above: it looks like you cannot delete a database and immediately create one with the same name. I was doing that in tests, and it caused the error. Adding a random suffix to the database name fixed the problem.


Correct, there is a small window during which you can't immediately reuse a db name. A random suffix is a good solution
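A sketch of the suffix approach (fresh-db-name is a made-up helper):

```clojure
;; Each test run gets a unique database name, so a name that
;; was just deleted is never immediately reused.
(defn fresh-db-name [base]
  (str base "-" (java.util.UUID/randomUUID)))

;; usage (client is a Datomic Cloud client):
;; (d/create-database client {:db-name (fresh-db-name "test")})
```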


@marshall I'm not sure if it's possible, but a more specific error message would be useful there.

Alex Miller (Clojure team) 21:04:42

it is particularly confusing because the create-database succeeds but subsequent transacts fail