#datomic
2019-10-02
tatut11:10:05

Is there any way to permanently delete stuff in Cloud, other than deleting the whole db and recreating it without the offending datoms? If this has been discussed, I’d appreciate pointers to the relevant discussion, thanks.

tatut11:10:37

mostly for the rare(ish?) case of GDPR requests to remove some user’s information

timgilbert14:10:58

The feature you're looking for is called "excision" in on-prem: https://docs.datomic.com/on-prem/excision.html. No idea what the story is for Cloud though.
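
For reference, excision in On-Prem is itself just transaction data. A minimal sketch, assuming an On-Prem connection conn and a hypothetical entity id user-eid (the attribute names in the second form are made up):

(require '[datomic.api :as d])

;; Excise every datom about one entity.
@(d/transact conn [{:db/excise user-eid}])

;; Or excise only specific attributes of that entity.
@(d/transact conn [{:db/excise user-eid
                    :db.excise/attrs [:user/email :user/name]}])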

tatut14:10:08

yes, I know about excision and that it is not available in cloud

viesti05:10:19

I'm wondering what kind of process excision is in on-prem. Is it an internal rewrite of the database?

tatut06:10:03

Since :db/txInstant can be set (but must be increasing), I think it would be possible to copy a database to a new one while excising some entities… perhaps not feasible for huge databases

tatut06:10:18

but a once-a-month GDPR maintenance break (if needed), where we recreate the whole db, could work
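
A rough sketch of this "copy while excising" idea, On-Prem only since it uses the Log API: walk the transaction log, drop the offending datoms, and collect each transaction's :db/txInstant so it can be re-asserted in the copy. The names here are hypothetical, and actually feeding the result into a fresh database would still need entity-id remapping (source entity ids becoming tempids, tracked across transactions), which is omitted.

(require '[datomic.api :as d])

;; Lazy seq of [tx-instant tx-data] pairs from conn's log, with datoms matching
;; the excise? predicate removed and attribute ids resolved to idents.
(defn filtered-log [conn excise?]
  (let [db (d/db conn)]
    (for [{:keys [t data]} (d/tx-range (d/log conn) nil nil)
          :let [tx-eid  (d/t->tx t)
                tx-inst (some #(when (and (= (:e %) tx-eid)
                                          (= :db/txInstant (d/ident db (:a %))))
                                 (:v %))
                              data)
                datoms  (->> data
                             (remove #(= (:e %) tx-eid))   ; drop the tx entity's own datoms
                             (remove excise?)
                             (map (fn [dtm] [(if (:added dtm) :db/add :db/retract)
                                             (:e dtm) (d/ident db (:a dtm)) (:v dtm)])))]
          :when (seq datoms)]
      [tx-inst datoms])))

Replaying each batch into the new database would then pair it with {:db/id "datomic.tx" :db/txInstant tx-inst}, relying on the rule mentioned above: explicit :db/txInstant values are allowed as long as they keep increasing.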

Msr Tim19:10:42

My boss wants to know this too before we use this database at work.

dmarjenburgh21:10:46

Currently not possible in Cloud AFAIK. Our solution is to store only a unique id for each user entity in Datomic (you can add other non-personally-identifiable information as well) and store the other user data in DynamoDB under that key.

tatut04:10:14

that’s our current approach: just a uuid in Datomic for a user, with the actual data stored elsewhere
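
A minimal sketch of that split, with made-up table, attribute, and function names: Datomic (Cloud client API here) keeps only an opaque uuid per user, while the personal data lives in a mutable store keyed by that uuid, e.g. DynamoDB via Cognitect's aws-api. A GDPR erasure then only touches the mutable side.

(require '[datomic.client.api :as d]
         '[cognitect.aws.client.api :as aws])

;; Datomic only ever sees an opaque identifier.
(def user-schema
  [{:db/ident       :user/id
    :db/valueType   :db.type/uuid
    :db/cardinality :db.cardinality/one
    :db/unique      :db.unique/identity}])

(def ddb (aws/client {:api :dynamodb}))

(defn save-user! [conn user-id email]
  (d/transact conn {:tx-data [{:user/id user-id}]})
  (aws/invoke ddb {:op :PutItem
                   :request {:TableName "user-pii"
                             :Item {"user-id" {:S (str user-id)}
                                    "email"   {:S email}}}}))

(defn forget-user! [user-id]
  ;; Erasure request: Datomic keeps the bare uuid, the PII row is gone.
  (aws/invoke ddb {:op :DeleteItem
                   :request {:TableName "user-pii"
                             :Key {"user-id" {:S (str user-id)}}}}))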

mloughlin13:10:25

An approach I've heard about (but not used) is "crypto shredding": encrypt the data in the immutable DB and keep the key in a mutable DB. Delete the key when a GDPR request rolls in. (I AM NOT A LAWYER 😉 )
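
A sketch of crypto shredding in Clojure with plain javax.crypto, nothing Datomic-specific: one AES key per user in a mutable store (an atom stands in for it here), AES-GCM ciphertext in the immutable database, and "deletion" is discarding the key.

(import '(javax.crypto Cipher KeyGenerator)
        '(javax.crypto.spec GCMParameterSpec SecretKeySpec)
        '(java.security SecureRandom))

;; Mutable key store; in real life a separate database or KMS.
(defonce user-keys (atom {}))

(defn new-user-key! [user-id]
  (let [kg (doto (KeyGenerator/getInstance "AES") (.init 256))]
    (swap! user-keys assoc user-id (.getEncoded (.generateKey kg)))))

(defn encrypt ^bytes [user-id ^bytes plaintext]
  (let [iv     (byte-array 12)
        _      (.nextBytes (SecureRandom.) iv)
        cipher (doto (Cipher/getInstance "AES/GCM/NoPadding")
                 (.init Cipher/ENCRYPT_MODE
                        (SecretKeySpec. (get @user-keys user-id) "AES")
                        (GCMParameterSpec. 128 iv)))]
    ;; The iv is not secret; store it alongside the ciphertext forever.
    (byte-array (concat iv (.doFinal cipher plaintext)))))

(defn decrypt ^bytes [user-id ^bytes blob]
  (let [key-bytes (or (get @user-keys user-id)
                      (throw (ex-info "key shredded" {:user user-id})))
        iv        (byte-array (take 12 blob))
        ct        (byte-array (drop 12 blob))
        cipher    (doto (Cipher/getInstance "AES/GCM/NoPadding")
                    (.init Cipher/DECRYPT_MODE
                           (SecretKeySpec. key-bytes "AES")
                           (GCMParameterSpec. 128 iv)))]
    (.doFinal cipher ct)))

(defn shred! [user-id]
  ;; GDPR request: drop the key; the ciphertext in the immutable DB becomes unreadable.
  (swap! user-keys dissoc user-id))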

magnars11:10:21

I'm looking to retract all changes done in a transaction - how would I go about finding the changes in that transaction without doing a full table scan?

magnars11:10:50

(this is Datomic On-Prem)
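
The usual On-Prem answer is the Log API rather than a scan over the indexes. A sketch of the commonly shared "rollback" approach, assuming conn and the transaction's entity id tx; it naively ignores later re-assertions, noHistory attributes, and so on:

(require '[datomic.api :as d])

(defn rollback
  "Transact the inverse of every datom asserted or retracted in tx,
   leaving the original history intact."
  [conn tx]
  (let [{:keys [data]} (first (d/tx-range (d/log conn) tx nil))
        undo (->> data
                  (remove #(= (:e %) (:tx %)))   ; skip the tx entity's own datoms
                  (map #(vector (if (:added %) :db/retract :db/add)
                                (:e %) (:a %) (:v %))))]
    @(d/transact conn undo)))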

magnars12:10:12

Never mind, I am no longer convinced this is a good idea.

curtosis14:10:44

perhaps an obvious question (haven’t worked with reserved instances before), but is there anything special I’d need to do to use reserved instances for Datomic Cloud CF templates?

curtosis14:10:54

doing some preliminary price workups

Msr Tim17:10:18

Me too. Not sure if you can share your findings here. Thank you.

curtosis19:10:35

It’s pretty straightforward. My numbers show $1/day is an upper bound for Solo, at least in us-east. You can do significantly better with reserved instance pricing.

curtosis19:10:45

About double that if you want to use the new analytics gateway functionality… a non-nano instance jumps the price up.

curtosis19:10:23

The basic Production config is dominated by the primary i3.large instances.

curtosis19:10:54

roughly $4-5k/year depending on whether you want analytics and/or query groups.

curtosis19:10:22

it’s all public info — unfortunately the AWS Marketplace pricer is mostly useless.

hadils16:10:38

Hi, dumb question: I am trying to upgrade my Datomic Cloud compute stack to 512-8806. I have RTFM, but it is not working. Has anyone else had issues, or can anyone help me?

marshall16:10:07

@hadilsabbagh18 The latest release is a storage & compute upgrade

marshall16:10:12

you should update your storage stack first, then compute

hadils16:10:23

The storage upgrade was successful.

marshall16:10:34

what error do you see when doing the compute upgrade?

hadils16:10:04

Wait, let me double-check...

hadils16:10:15

Yes. It was updated today. The error I'm getting with the compute upgrade is:

AMI ami-05c81c69e00244cc9 is invalid: The image id '[ami-05c81c69e00244cc9]' does not exist (Service: AmazonAutoScaling; Status Code: 400; Error Code: ValidationError; Request ID: 6a302920-e531-11e9-9e8a-693b91fa55e0)

hadils16:10:14

us-west-2 -- Oregon

marshall16:10:27

ok give me one sec

marshall16:10:58

It appears that AWS Marketplace didn’t create that AMI for that region correctly. I will report it as an issue to them immediately. Sorry for the inconvenience; I’ll follow up when I hear back from them.

hadils16:10:19

Thanks again @marshall! I really appreciate it!

richhickey16:10:03

@curtosis there should be nothing special - credits for reservations you’ve made should be automatically applied to instance hours you consume

👍 4

souenzzo17:10:33

Can't find where to download CLI Tools

souenzzo17:10:23

Could the docs provide a simple working example, like https://github.com/Datomic/ion-starter ?

marshall17:10:55

the zip is now downloaded from that link

👍 4
hadils18:10:33

@marshall Is there any update? I am stuck right now...

marshall18:10:47

I haven't heard back from the marketplace team. You should be able to roll back to the prior version of the compute template.

hadils18:10:12

what about the storage template?

marshall18:10:21

You can leave it

marshall18:10:29

The update there won't hurt anything

hadils18:10:32

Thanks marshall. I appreciate your help.

marshall18:10:42

No problem

benoit20:10:43

Trying out the new analytics support. It works great! Should I blame Metabase for this strange formatting of the cardinality-many attribute ":asset-model/curated-content"?

plexus02:10:06

Yes, Metabase has some heuristics to prettify names, based on a list of English words and their relative frequencies. You can turn it off in the settings somewhere; I've often seen it create weird results.

benoit12:10:54

Great, thanks!

benoit20:10:04

I'm getting this error when trying to count rows:

clojure.lang.ExceptionInfo: [?start ?e] not bound in expression clause: [(>= ?e ?start)]
{:message "[?start ?e] not bound in expression clause: [(>= ?e ?start)]",
 :errorCode 65536,
 :errorName "GENERIC_INTERNAL_ERROR",
 :errorType "INTERNAL_ERROR",
 :failureInfo
 {:type "clojure.lang.ExceptionInfo",
  :message "[?start ?e] not bound in expression clause: [(>= ?e ?start)]",
  :suppressed [],
  :stack
  ["datomic.client.api.async$ares.invokeStatic(async.clj:58)"
   "datomic.client.api.async$ares.invoke(async.clj:54)"
   "datomic.client.api.sync$unchunk.invokeStatic(sync.clj:47)"
   "datomic.client.api.sync$unchunk.invoke(sync.clj:45)"
   "datomic.client.api.sync$eval11267$fn__11288.invoke(sync.clj:101)"
   "datomic.client.api.impl$fn__2619$G__2614__2626.invoke(impl.clj:33)"
   "datomic.client.api$q.invokeStatic(api.clj:351)"
   "datomic.client.api$q.invoke(api.clj:322)"
   "datomic.presto$split_count.invokeStatic(presto.clj:99)"
   "datomic.presto$split_count.invoke(presto.clj:86)"
   "datomic.presto$create_connector$reify$reify__2395.getRecordSet(presto.clj:247)"
   "io.prestosql.spi.connector.ConnectorRecordSetProvider.getRecordSet(ConnectorRecordSetProvider.java:27)"
   "io.prestosql.split.RecordPageSourceProvider.createPageSource(RecordPageSourceProvider.java:43)"
   "io.prestosql.split.PageSourceManager.createPageSource(PageSourceManager.java:56)"
   "io.prestosql.operator.TableScanOperator.getOutput(TableScanOperator.java:277)"
   "io.prestosql.operator.Driver.processInternal(Driver.java:379)"
   "io.prestosql.operator.Driver.lambda$processFor$8(Driver.java:283)"
   "io.prestosql.operator.Driver.tryWithLock(Driver.java:675)"
   "io.prestosql.operator.Driver.processFor(Driver.java:276)"
   "io.prestosql.execution.SqlTaskExecution$DriverSplitRunner.processFor(SqlTaskExecution.java:1075)"
   "io.prestosql.execution.executor.PrioritizedSplitRunner.process(PrioritizedSplitRunner.java:163)"
   "io.prestosql.execution.executor.TaskExecutor$TaskRunner.run(TaskExecutor.java:484)"
   "io.prestosql.$gen.Presto_316____20191002_195730_1.run(Unknown Source)"
   "java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)"
   "java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)"
   "java.lang.Thread.run(Thread.java:748)"]}}

marshall13:10:13

Can you share the SQL query you’re issuing, along with your schema and metaschema?

benoit14:10:06

Query from the stack trace seems to be: {:query "SELECT count(*) AS \"count\" FROM \"centriq\".\"asset_model\"", :params nil},

benoit14:10:54

Metaschema is basic:

{:tables
 {:user/id {}
  :asset-model/id {}
  :asset-tag/id {}
  :property/id {}}}
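
For readers unfamiliar with the analytics metaschema format: each key under :tables names a Datomic attribute, and entities possessing that attribute become rows of the corresponding table (asset_model, etc.). A hypothetical schema fragment behind the asset_model table might look like the following; the value types are guesses, and only the cardinality-many nature of :asset-model/curated-content is known from the thread:

[{:db/ident       :asset-model/id
  :db/valueType   :db.type/uuid
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}
 {:db/ident       :asset-model/curated-content
  :db/valueType   :db.type/ref
  :db/cardinality :db.cardinality/many}]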

marshall15:10:21

what is the Datomic schema?

benoit16:10:08

Sent in a direct message. Thanks.