
I seem to have the datomic-socks-proxy up and running - I can create a database successfully and establish a connection. However, when I transact the initial schema I get

{:cognitect.anomalies/category :cognitect.anomalies/forbidden,
 :datomic.client/http-result {:status nil, :headers nil, :body nil}}


This implies that my AWS credentials aren’t set up correctly - but I’m surprised I can create-database and delete-database without a similar exception.
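For context, a minimal sketch of the calls involved, per the Datomic Cloud client API. The system name, region, endpoint, db name, and schema below are placeholders, not the actual setup:

```clojure
;; Sketch of the failing sequence (all config values are placeholders)
(require '[datomic.client.api :as d])

(def client
  (d/client {:server-type :cloud
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"
             :proxy-port  8182}))

;; These succeed...
(d/create-database client {:db-name "my-db"})
(def conn (d/connect client {:db-name "my-db"}))

;; ...but this returns the :cognitect.anomalies/forbidden map above
(d/transact conn {:tx-data [{:db/ident       :movie/title
                             :db/valueType   :db.type/string
                             :db/cardinality :db.cardinality/one}]})
```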

Hendrik Poernama 06:01:30

Is Datomic Cloud available for Asia Pacific Regions? The estimator only shows US and EU.


is Datomic Cloud locked to any particular storage db?


@laujensen you don’t worry about storage with Cloud. It uses a combination of DDB, S3, and EFS internally, using the best of each for the most suitable cases


@robert-stuttaford - I only worry because Dynamo would incur an extra expense relative to the number of writes


they only use DDB for consistency roots, i believe; all the actual data is elsewhere (S3, EFS)


so DDB throughput would be much lower than on-prem usage


Great, thanks


@stuarthalloway is memcached in the mix with Cloud? if not, is that because the new design makes it redundant, or because it’s still coming?


@poernahi not yet for Asia Pacific. Datomic Cloud makes extensive use of AWS features, not all of which are available in all regions. We will be working with AWS to roll out to more regions over time.


@robert-stuttaford I recall Marshall and Stu talking about memcache not being used. Instead I think the cloud caching hierarchy is RAM, EFS then S3.


@robert-stuttaford Instead of memcache, let me introduce valcache. It implements a nice immutable subset of the memcache API, but is backed by SSDs. Latency like memcache but vastly cheaper capacity. This is one reason Production uses i3.large


Production dedicates most of the 475GB SSD to valcache, so if e.g. your entire set of dbs added up to 300GB, then all of it would end up in the valcache on all the primary compute nodes.


Looking at the AWS docs that is a #$@%! load of I/O: "... Designed for I/O intensive workloads and equipped with super-efficient NVMe SSD storage, these instances can deliver up to 3.3 million IOPS at a 4 KB block and up to 16 GB/second of sequential disk throughput. This makes them a great fit for any workload that requires high throughput and low latency including relational databases, NoSQL databases, search engines, data warehouses, real-time analytics, and disk-based caches..." @stuarthalloway @marshall - congratulations on the release, awesome news.


@malcolm.edgar it is a full time job keeping up with what all the different EC2 flavors can do 🙂


@donmullen that forbidden error seems weird to me too — did you get it sorted?


will valcache be OSS at some point, @stuarthalloway? (idle curiosity)


Or just available? That’s worth purchasing IMO.


interesting — note that the memcache API is not fully supported, only the good (immutable) parts


@laujensen Robert’s comments about DDB usage are spot on. Cloud handles all of that and uses DDB autoscaling as well. All together it will use significantly less DDB throughput than a comparable On-Prem system


Reassuring, thanks @marshall. It’s not always clear when we’re talking pennies vs thousands of dollars on Amazon 🙂


@laujensen staying around $1/day all-in is a real thing with Solo


it is amazing to watch DDB scale up for an import and have an “expensive” day that is $0.25 more 🙂


@stuarthalloway We might move SabreCMS to AWS and I expect that’ll be hammering out millions of hourly writes since every page view is at least 2 writes


well that will need Production 🙂


Yes it will 🙂


Do you always start with Solo, or is it non-trivial to convert to production?


if i guess, i’d say it’s a matter of adding an i3 and removing the t2. if you add then remove, no downtime


as S3 / DDB have full storage coverage


-waits for correction-


ah, yeah. more stuff 🙂


the upgrade only takes a few minutes, totally sensible to start with Solo and switch on need


@stuarthalloway - have not sorted out the credential issue yet. Will look at it this afternoon. Likely some AWS user error on my part - but it did seem strange to be able to create/delete and not transact.


@donmullen I don’t want to lose track of that, please let me know what you find. Maybe I will just mosey over there. 🙂


@donmullen I am looking into reproducing the issue you ran into. Were you running as an AWS admin or using the admin created by datomic?


@jaret my creds have full admin and I added the datomic policy to the group as well. Did not help. Will be back online around 1 EST.


@stuarthalloway better keep your distance. Just getting over flu and now at cvs minute clinic for strep test - though that’s unlikely. Fun times. EB had to get out in snow without me 😞


@donmullen are you certain that you created the DB and transacted in the same session? I can’t even create a DB without admin credentials.


@jaret - yes - same session


Hmm.. @jaret @stuarthalloway - just going through the movies example and was able to transact the schema.


And first-movies and query. Strange. Will let you know if I see what I was getting before.


I am wondering if your socks proxy ran into a broken pipe after DB create


@jaret I stopped / restarted proxy a few times - and went through the steps to create database and try a transaction - always was getting the “forbidden” error.


That would rule out that possibility. I am going to keep trying to reproduce it - please let us know if you run into it again.


AWS On-Prem operations question to which I think I know the answer, but any experiments I could do to prove myself wrong or right would be hopelessly blinded by my own mental model of the Datomic storage layer: if I have a Datomic database in a DynamoDB backend, and I stop the running transactor, copy the DDB table somewhere else (another region, say), and start a new transactor instance of the same version as the first one, pointed at that new table, does the new transactor have the “same” database in it (that is, the same schema and datoms as the original, such that queries will give the same results)? Follow-up: same question, but where the original transactor has a rock fall on it from space instead of being stopped gracefully.


@chris_johnson In theory yes, the DB would be “identical”. In practice a perfect copy of DDB is trickier than the same operation for, say, a SQL database - the docs discuss this somewhat further


if you can ensure a “consistent copy” (as defined there) ^ then storage-level backup/copy is acceptable
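For completeness, the storage-independent route is Datomic’s own backup/restore, which sidesteps the consistent-copy caveat. A sketch with placeholder table names, bucket, and regions:

```shell
# Back up the database to S3:
bin/datomic backup-db \
  "datomic:ddb://us-east-1/source-table/my-db" \
  "s3://my-backup-bucket/my-db"

# Restore into a different table/region, then point a transactor at it:
bin/datomic restore-db \
  "s3://my-backup-bucket/my-db" \
  "datomic:ddb://eu-west-1/target-table/my-db"
```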


while following I get [{:type clojure.lang.Compiler$CompilerException :message java.lang.RuntimeException: Unable to resolve symbol: halt-when in this context, compiling:(datomic/client/api.clj:57:11) :at [clojure.lang.Compiler analyze 6688]} {:type java.lang.RuntimeException :message Unable to resolve symbol: halt-when in this context :at [clojure.lang.Util runtimeException 221]}] when trying to get a repl via 'lein repl'


same result via cider in emacs, btw.


@macrobartfast I got that until I switched to Clojure 1.9
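For anyone hitting the same thing: halt-when was added to clojure.core in 1.9, and datomic.client.api uses it, so compiling against 1.8 fails with exactly this error. A project.clj sketch (project name and client library version are placeholders):

```clojure
(defproject my-app "0.1.0-SNAPSHOT"
  ;; halt-when requires Clojure 1.9+
  :dependencies [[org.clojure/clojure "1.9.0"]
                 [com.datomic/client-cloud "0.8.50"]])
```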


oh sweet let me try that then


bam! resolved. thanks!


of course, that triggers 1.9 related cider errors... nice.


instinctively went to file an issue on github for the 1.8/1.9 issue, then remembered you can't for datomic!


not even sure what you're supposed to use... haven't used proprietary stuff in so long.


searching on 'where to file a datomic issue' and so on brings up nothing readily.


sweet, thanks.


Yep. Love the username. We were discussing wanting pan galactic gargle blasters the other day


yes, downing one as we speak.


Mmmm. Lemon-wrapped gold brick


best drink in existence.


I know I shouldn't, but considering having a third blaster.


drats, running low on Fallian marsh gas.