#datomic
2018-01-19
donmullen01:01:41

I seem to have the datomic-socks-proxy up and running - and can create a database successfully and establish a connection. However, when I transact the initial schema I get

{:cognitect.anomalies/category :cognitect.anomalies/forbidden,
     :datomic.client/http-result {:status nil, :headers nil, :body nil}}

donmullen01:01:15

This implies that my AWS credentials aren’t set up correctly - but I’m surprised I can create-database and delete-database without a similar exception.
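
A minimal sketch of the flow being described, assuming the Datomic Cloud client API over the SOCKS proxy; the region, system, endpoint, and schema attribute below are placeholders:

(require '[datomic.client.api :as d])

;; Placeholder client config; :proxy-port routes requests through datomic-socks-proxy.
(def client
  (d/client {:server-type :cloud
             :region      "us-east-1"
             :system      "my-system"
             :endpoint    "http://entry.my-system.us-east-1.datomic.net:8182/"
             :proxy-port  8182}))

;; create-database and delete-database succeed...
(d/create-database client {:db-name "movies"})
(def conn (d/connect client {:db-name "movies"}))

;; ...but the first transact returns the :cognitect.anomalies/forbidden map above.
(d/transact conn {:tx-data [{:db/ident       :movie/title
                             :db/valueType   :db.type/string
                             :db/cardinality :db.cardinality/one}]})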

Hendrik Poernama06:01:30

Is Datomic Cloud available for Asia Pacific Regions? The estimator only shows US and EU.

laujensen08:01:50

is Datomic Cloud locked to any particular storage db?

robert-stuttaford08:01:40

@laujensen you don’t worry about storage with Cloud. it uses a combination of DDB, S3 and EFS internally; using the best of each for the most suitable cases

laujensen08:01:43

@robert-stuttaford - I only worry because Dynamo would incur an extra expense relative to the number of writes

robert-stuttaford08:01:35

they only use DDB for consistency roots, i believe; all the actual data is elsewhere (S3, EFS)

robert-stuttaford08:01:51

so DDB throughput would be much lower than on-prem usage

laujensen08:01:15

Great, thanks

robert-stuttaford08:01:56

@stuarthalloway is memcached in the mix with Cloud? if not, is that because the new design makes it redundant, or because it’s still coming?

stuarthalloway12:01:04

@poernahi not yet for Asia Pacific. Datomic Cloud makes extensive use of AWS features, not all of which are available in all regions. We will be working with AWS to roll out to more regions over time.

malcolm.edgar12:01:37

@robert-stuttaford I recall Marshall and Stu talking about memcache not being used. Instead I think the cloud caching hierarchy is RAM, EFS then S3.

stuarthalloway12:01:25

@robert-stuttaford Instead of memcache, let me introduce valcache. It implements a nice immutable subset of the memcache API, but is backed by SSDs. Latency like memcache but vastly cheaper capacity. This is one reason Production uses i3.large

stuarthalloway12:01:09

Production dedicates most of the 475GB SSD to valcache, so if e.g. your entire set of dbs added up to 300GB, then all of it would end up in the valcache on all the primary compute nodes.

malcolm.edgar12:01:45

Looking at the AWS docs https://aws.amazon.com/blogs/aws/now-available-i3-instances-for-demanding-io-intensive-applications/ that is a #$@! load of I/O: "... Designed for I/O intensive workloads and equipped with super-efficient NVMe SSD storage, these instances can deliver up to 3.3 million IOPS at a 4 KB block and up to 16 GB/second of sequential disk throughput. This makes them a great fit for any workload that requires high throughput and low latency including relational databases, NoSQL databases, search engines, data warehouses, real-time analytics, and disk-based caches..." @stuarthalloway @marshall - congratulations on the release, awesome news.

stuarthalloway12:01:34

@malcolm.edgar it is a full time job keeping up with what all the different EC2 flavors can do 🙂

stuarthalloway12:01:00

@donmullen that forbidden error seems weird to me too — did you get it sorted?

robert-stuttaford12:01:16

will valcache be OSS at some point, @stuarthalloway? (idle curiosity)

potetm12:01:20

Or just available? That’s worth purchasing IMO.

stuarthalloway13:01:11

interesting — note that the memcache API is not fully supported, only the good (immutable) parts

marshall14:01:01

@laujensen Robert’s comments about DDB usage are spot on. Cloud handles all of that and uses DDB autoscaling as well. All together it will use significantly less DDB throughput than a comparable On-Prem system

laujensen14:01:06

Reassuring, thanks @marshall. It’s not always clear when we’re talking pennies vs thousands of dollars on Amazon 🙂

stuarthalloway14:01:01

@laujensen staying around $1/day all-in is a real thing with Solo

stuarthalloway14:01:31

it is amazing to watch DDB scale up for an import and have an “expensive” day that is $0.25 more 🙂

laujensen14:01:50

@stuarthalloway We might move SabreCMS to AWS and I expect that’ll be hammering out millions of hourly writes since every page view is at least 2 writes

stuarthalloway14:01:03

well that will need Production 🙂

laujensen14:01:09

Yes it will 🙂

laujensen14:01:23

Do you always start with Solo, or is it non-trivial to convert to production?

robert-stuttaford14:01:04

if i guess, i’d say it’s a matter of adding an i3 and removing the t2. if you add then remove, no downtime

robert-stuttaford14:01:18

as s3 / ddb have full storage coverage

robert-stuttaford14:01:40

-waits for correction-

robert-stuttaford14:01:18

ah, yeah. more stuff 🙂

stuarthalloway14:01:20

the upgrade only takes a few minutes, totally sensible to start with Solo and switch on need

donmullen14:01:47

@stuarthalloway - have not sorted out the credential issue yet. Will look at it this afternoon. Likely some AWS user error on my part - but it did seem strange to do create/delete and not transact.

stuarthalloway14:01:35

@donmullen I don’t want to lose track of that, please let me know what you find. Maybe I will just mosey over there. 🙂

jaret15:01:41

@donmullen I am looking into reproducing the issue you ran into. Were you running as an AWS admin or using the admin created by datomic?

donmullen16:01:39

@jaret my creds have full admin and I added the datomic policy to the group as well. Did not help. Will be back online around 1 EST.

donmullen16:01:02

@stuarthalloway better keep your distance. Just getting over flu and now at cvs minute clinic for strep test - though that’s unlikely. Fun times. EB had to get out in snow without me 😞

jaret17:01:06

@donmullen are you certain that you created the DB and transacted in the same session? I can’t even create a DB without admin credentials.

donmullen18:01:34

@jaret - yes - same session

donmullen18:01:12

Hmm.. @jaret @stuarthalloway - just going through the movies example and was able to transact the schema.

donmullen18:01:22

And first-movies and query. Strange. Will let you know if I see what I was getting before.

jaret18:01:34

I am wondering if your socks proxy ran into a broken pipe after DB create

donmullen18:01:28

@jaret I stopped / restarted proxy a few times - and went through the steps to create database and try a transaction - always was getting the “forbidden” error.

jaret18:01:18

That would rule out that possibility. I am going to continue trying to re-create it; please let us know if you run into it again.

Chris Bidler19:01:48

AWS “on-Prem” operations question to which I think I know the answer, but also think that any experiments I would do to prove myself wrong or right would be hopelessly blinded by my own mental model of the Datomic storage layer: If I have a Datomic database in a DynamoDB backend, and I stop the running transactor, copy the DDB table somewhere else (another region, say), and start a new transactor instance of the same version as the first one, pointed at that new table, does the new transactor have the “same” database in it (that is, the same schema and datoms as the original, such that queries will give the same results)? Follow-up: same question, but in the case where the original transactor has a rock fall on it from space instead of being stopped gracefully.

marshall19:01:51

@chris_johnson In theory yes the DB would be “identical”. In practice a perfect copy of DDB is trickier than the same operation for, say, a SQL database. https://docs.datomic.com/on-prem/ha.html#other-consistent-copy-options discusses this somewhat further

marshall19:01:34

if you can ensure a “consistent copy” (as defined there) ^ then storage-level backup/copy is acceptable
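
A rough sketch of what that would look like from the peer side, assuming the standard datomic:ddb URI scheme; the table names, regions, and db name are placeholders, and a transactor of the same version must already be running against the copied table:

(require '[datomic.api :as d])

;; Original system (placeholder names)
(def original-uri "datomic:ddb://us-east-1/original-table/my-db")

;; After a consistent copy of the DynamoDB table (per the ha.html link above),
;; peers connect with a URI naming the new region and table:
(def copied-uri "datomic:ddb://eu-west-1/copied-table/my-db")

(def conn (d/connect copied-uri))

;; Given a consistent copy, queries against (d/db conn) should return the same
;; results as queries against the original database.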

macrobartfast22:01:28

while following https://docs.datomic.com/on-prem/getting-started/connect-to-a-database.html I get [{:type clojure.lang.Compiler$CompilerException :message java.lang.RuntimeException: Unable to resolve symbol: halt-when in this context, compiling:(datomic/client/api.clj:57:11) :at [clojure.lang.Compiler analyze Compiler.java 6688]} {:type java.lang.RuntimeException :message Unable to resolve symbol: halt-when in this context :at [clojure.lang.Util runtimeException Util.java 221]}] when trying to get a repl via 'lein repl'

macrobartfast22:01:40

same result via cider in emacs, btw.

fingertoe23:01:50

@macrobartfast I got that until I switched to Clojure 1.9
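
For reference, a minimal project.clj sketch of that fix; clojure.core/halt-when only exists from Clojure 1.9 onward, which is why datomic.client.api fails to compile on 1.8 (the project name is a placeholder):

;; Sketch only: the key change is the Clojure version.
(defproject datomic-getting-started "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.9.0"]  ; halt-when was added in Clojure 1.9
                 ;; plus the Datomic client dependency from the getting-started page
                 ])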

macrobartfast23:01:04

oh sweet let me try that then

macrobartfast23:01:33

bam! resolved. thanks!

macrobartfast23:01:12

of course, that triggers 1.9 related cider errors... nice.

macrobartfast23:01:02

instinctively went to file an issue on github for the 1.8/1.9 issue, then remembered you can't for datomic!

macrobartfast23:01:16

not even sure what you're supposed to use... haven't used proprietary stuff in so long.

macrobartfast23:01:35

searching on 'where to file a datomic issue' and so on brings up nothing readily.

macrobartfast23:01:02

sweet, thanks.

marshall23:01:50

Yep. Love the username. We were discussing wanting pan galactic gargle blasters the other day

macrobartfast23:01:40

yes, downing one as we speak.

marshall23:01:06

Mmmm. Lemon-wrapped gold brick

macrobartfast23:01:33

best drink in existence.

macrobartfast23:01:10

I know I shouldn't, but considering having a third blaster.

macrobartfast23:01:43

drats, running low on Fallian marsh gas.