This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-04-18
Channels
- # architecture (14)
- # beginners (89)
- # cider (336)
- # cljsrn (2)
- # clojure (181)
- # clojure-berlin (1)
- # clojure-dusseldorf (3)
- # clojure-finland (4)
- # clojure-germany (5)
- # clojure-italy (18)
- # clojure-norway (10)
- # clojure-spec (9)
- # clojure-uk (94)
- # clojurescript (84)
- # cursive (3)
- # data-science (4)
- # datomic (82)
- # emacs (2)
- # events (4)
- # figwheel (1)
- # fulcro (6)
- # graphql (2)
- # hoplon (46)
- # instaparse (24)
- # jobs (9)
- # lein-figwheel (2)
- # luminus (18)
- # lumo (3)
- # mount (1)
- # off-topic (14)
- # onyx (17)
- # parinfer (22)
- # planck (1)
- # protorepl (1)
- # re-frame (50)
- # reagent (7)
- # ring-swagger (6)
- # rum (4)
- # shadow-cljs (94)
- # spacemacs (9)
- # specter (7)
- # tools-deps (2)
- # uncomplicate (4)
- # vim (33)
is anyone worried about Datomic Cloud's lack of excision in relation to GDPR taking effect next month? if so, any thoughts or tips for mitigating the situation?
@joshkh Best solution I can think of is 1. refactor your code to store sensitive data in a complementary store 2. manually export all the non-sensitive data from your Cloud deployment and import it into a new Cloud deployment (yes, that will require downtime). Will blog about 1 soon
ah yes, thank you. we've been exploring option 2 as a last resort. the whole situation is a bit frustrating in the sense that option 1 still requires a fair amount of dev work while keeping two databases in sync. just to be clear, you're suggesting that datomic only stores a reference to some user ID in another database, and that's where the personally identifiable information goes?
To be clear, to me these are not 2 options to choose between, but two steps to take in conjunction. Step 1 deals with future data, and step 2 deals with past data
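One way to sketch step 1, assuming you keep only an opaque reference in Datomic and hold the personally identifiable data in a separate, erasable store (all attribute names here are hypothetical, not from Cognitect):

```clojure
;; Datomic keeps only an opaque external id; the PII lives in a
;; separate store keyed by that id, where it can be truly deleted.
(def user-schema
  [{:db/ident       :user/external-id
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity
    :db/doc         "Opaque key into an external PII store"}])
```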
> i've heard conflicting arguments that even maintaining some user id (even if it's only a reference to an external data source) is not within the "spirit" of GDPR. I think whoever said that has not thought this through from an IT perspective (granted, this is common amongst lawyers). This really sounds like an unreasonable expectation, as it can prevent things like accounting from being done properly, and I'm taking the bet that GDPR will not get enforced to this extent.
without knowing how Datomic works under the hood, that's not an option. according to the documentation: "The purpose of :db/noHistory is to conserve storage, not to make semantic guarantees about removing information. The effect of :db/noHistory happens in the background, and some amount of history may be visible even for attributes with :db/noHistory set to true."
It was a question rather than a suggestion (because we're in the same situation), but you've answered it, thanks 🙂
i've heard conflicting arguments that even maintaining some user id (even if it's only a reference to an external data source) is not within the "spirit" of GDPR. anywho, i'd love to read your post when you've finished. where can i find your blog? i'll bookmark it in the meantime.
there has been a feature request since March for targeted excision in Datomic Cloud, but no updates since then. I don't understand why Datomic Cloud doesn't support it when on-prem does
Anyone trying to run dev against Datomic Cloud with the socket connection? For me it stops working all the time, generating timeout exceptions. Is there something I can do to have a better experience during dev?
I use it all the time and it drops about once per day for me
I’m specifically talking about the socks proxy
are you saying you see timeouts sending queries / txns / etc?
I'm talking about the socks proxy. I get timeouts when running tests that create the client, create a new database, create a connection, and test stuff. But for me it's more like once every 10 minutes than once a day, which makes for a really bad experience. Restarting the proxy solves the problem, but I would like a smoother experience.
@mynomoto Some folks here discussed using a keep-alive tool of some sort for the proxy, but I can’t recall the specific one they mentioned. I’ll see if I can find it again
I could put that script to test the socks proxy in a loop, but I'm not sure if that's a great idea.
and replacing the ssh command in the provided script with an analogous autossh command
I believe
autossh -M 0 -o "ServerAliveInterval 5" -o "ServerAliveCountMax 3" -v -i $PK -CND ${SOCKS_PORT:=8182} ec2-user@${BASTION_IP}
was the suggested replacement command.
I can’t comment on the specific efficacy of it, and I wish I could recall who to credit about it
question about ongoing operation of on-prem: do you have AWS AMIs for the default transactor stack available in us-east-2?
I see, I was trying to run a build out of a local copy of datomic-pro-0.9.5372, which obviously predates the existence of us-east-2
symlinks: not always your friends
@marshall is Datomic Cloud just a “managed version” of Datomic on-prem? or is it actually a different product? (as in, a different codebase with some features only available in cloud)
it is a different product and mostly different code base with a totally different architecture
@hmaurer it is a different product. it uses the same data model, but has a strictly different architecture and use of storage (among other things)
@chris_johnson Ah, that would do it.
@alexmiller you type faster than i do
faster but worser
@marshall do you intend to keep the features available on on-prem the same as on cloud? or could we end up in a situation where an application using Datomic Cloud cannot migrate to on-prem easily? (and is therefore "locked" on AWS)
Also, from what I understand transaction functions are not available on Cloud (yet). Is there another way to safely enforce invariants?
@hmaurer feature dev will continue on both products, but we can’t guarantee every feature will come to both products
We are working on options for the problems solved with txn functions (invariants being one of those)
currently you can use the built in cas functionality to force atomicity on certain kinds of updates
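A minimal sketch of the built-in :db/cas (compare-and-swap) approach mentioned above; `conn`, the :account/* attributes, and the lookup ref are hypothetical, and this assumes an existing Datomic Cloud client connection:

```clojure
(require '[datomic.client.api :as d])

;; The transaction aborts unless :account/balance is currently 100,
;; so concurrent writers cannot silently clobber each other's updates.
;; [:db/cas entity attribute old-value new-value]
(d/transact conn
            {:tx-data [[:db/cas [:account/id "a-1"] :account/balance 100 75]]})
```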
Great to hear. Also, can we expect to see peer support on Cloud in the near-ish future? And for txn functions, can we expect to hear more about this before the end of the year?
We are interested in solving the problems that peer helps with (i.e. code/data locality), but have not determined if there will be "Peers" per se in Cloud
I see. Last but not least: do you allow querying the indexes directly in Cloud? And do you allow listening to the transaction log?
yes, there is direct index access (https://docs.datomic.com/cloud/query/raw-index-access.html)
there is not currently a tx-listener feature (like the tx-report-queue) in Cloud. For most use cases polling should be totally fine, but we’re interested in feedback here as well
if you want to know if anything has been updated, you could just inspect the basis-t
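A hedged sketch of that polling idea: watch the database's basis-t and, when it advances, read the new transactions from the log. `conn` is an assumed existing client connection; the poll interval and function name are made up:

```clojure
(require '[datomic.client.api :as d])

;; Returns the latest t plus any transactions committed after last-t.
;; Call this periodically (e.g. from a scheduled worker) to approximate
;; the on-prem tx-report-queue.
(defn poll-new-txs [conn last-t]
  (let [t (:t (d/db conn))]
    (if (> t last-t)
      {:t t :txs (d/tx-range conn {:start (inc last-t) :end (inc t)})}
      {:t last-t :txs []})))
```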
building a generic worker which listens to the transaction log and performs some task
Sorry, yet another question: is it possible to use Cloud from outside AWS? e.g. for an app on Heroku to connect to a Cloud setup
yes, it is possible, but you will have to handle permissions/setup for the communication channel
so you’ll have to configure the communication channels to allow that yourself (with the associated risks of having your DB available on the internet)
the advantage of accessing from within AWS (same or different VPC) is you can use IAM roles and security groups to control that access very specifically
I see. Using it within AWS seems preferable indeed; I just wanted to know if the option was there to use it remotely.
I'm doing a transaction and got a server error:
datomic.client.api/transact api.clj: 268
datomic.client.api/ares api.clj: 52
clojure.core/ex-info core.clj: 4739
clojure.lang.ExceptionInfo: Server Error
data: {:datomic.client-spi/context-id "efd6cf48-685f-4055-b56e-1242be7ac557", :cognitect.anomalies/category :cognitect.anomalies/fault, :cognitect.anomalies/message "Server Error", :dbs [{:database-id "d91f432b-1821-4213-8b80-e8d59a4e7b8c", :t 5, :next-t 6, :history false}]}
autossh works great for me! No more timeouts since I started using it, which makes the development flow way better.
About the error above: it looks like you cannot delete a database and immediately create one with the same name. I was doing that in tests and it caused the error. Adding a random suffix to the database name fixed the problem.
Correct, there is a small window when you can't immediately reuse a db name. Having a random suffix is a good solution
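The random-suffix workaround can be as simple as appending a UUID to the base name (the function name here is made up):

```clojure
;; Avoid reusing a recently deleted database name by appending
;; a random suffix; java.util.UUID is part of the JDK.
(defn unique-db-name [base]
  (str base "-" (java.util.UUID/randomUUID)))

(unique-db-name "test")
;; e.g. "test-8f14e45f-ceea-467f-9575-..."
```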
@marshall I'm not sure if it's possible, but a more specific error message would be useful there.
it is particularly confusing because the create-database succeeds but subsequent transacts fail