#datomic
2016-01-05
peterromfeld04:01:49

My license is still good until May 11, 2016. I followed the gpg guide from https://my.datomic.com/account -> https://github.com/technomancy/leiningen/blob/master/doc/DEPLOY.md#authentication, created credentials.clj and then encrypted it into .gpg. However, when running lein deps I still get:

Could not transfer artifact com.datomic:datomic-pro:pom:0.9.5344 from/to  (): Failed to transfer file: . Return code is: 204, ReasonPhrase: No Content.
This could be due to a typo in :dependencies or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.

peterromfeld04:01:11

no proxy running

peterromfeld04:01:44

in project.clj:

...
:repositories {""
                 {:url   ""
                  :creds :gpg}}
...
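
For context, the stripped-out repository entry above normally looks something like this for the Datomic private repo (the repository name and URL here are assumptions, not recovered from the message):

:repositories {"my.datomic.com" {:url   "https://my.datomic.com/repo"
                                 :creds :gpg}}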

peterromfeld04:01:36

gpg-agent is asking for the pass-phrase and not complaining, so I guess that part is correct too
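
One way to confirm that independently (a suggestion following the same DEPLOY.md guide linked above, not something from the chat) is to check that gpg can decrypt the file by hand:

gpg --quiet --batch --decrypt ~/.lein/credentials.clj.gpg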

peterromfeld04:01:56

the unencrypted credentials.clj I just copied from http://my.datomic.com

peterromfeld04:01:02

{#"my\.datomic\.com" {:username “[email protected]"
                      :password “ “asdf-asdf-asdf-asdas-asdasd”}}
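
The encryption step itself, following the Leiningen DEPLOY.md guide linked earlier, is roughly this (a sketch of the documented command, not copied from the chat):

gpg --default-recipient-self -e ~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg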

peterromfeld04:01:40

I also did a bin/maven-install, which installed the peer libs into

~/b/datomic-pro-0.9.5344> ls -la ~/.m2/repository/com/datomic/datomic-pro/0.9.5344/
_maven.repositories   datomic-pro-0.9.5344.jar       datomic-pro-0.9.5344.pom
_remote.repositories  datomic-pro-0.9.5344.jar.sha1  datomic-pro-0.9.5344.pom.sha1
so I thought it wouldn't even have to try to install it..
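
For reference, the dependency being resolved would be declared along these lines in project.clj (a minimal sketch; only the coordinate is taken from the error message above):

:dependencies [[com.datomic/datomic-pro "0.9.5344"]]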

peterromfeld04:01:22

ok nvm 🙂 after rm -rf ~/.m2/repository/com/datomic/datomic-pro/0.9.5344/ it worked.. seems that if you install the peer libs via maven-install it somehow conflicts with lein deps

sdegutis16:01:12

Does setting the host key in datomic.properties to "localhost" prevent remote machines from being able to connect to it?

sdegutis16:01:40

Specifically in the Free transactor/protocol.
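
For reference, the host-related keys in the Free transactor's properties file look roughly like this (a sketch based on the sample template; the exact keys and values here are assumptions):

protocol=free
host=localhost
port=4334
# alt-host= can advertise an additional, externally reachable address
# (assumption: availability of alt-host may depend on the transactor edition)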

kschrader22:01:58

does the connect function key off of something other than the uri to do its caching?

kschrader22:01:27

we had a local backup of a database that we thought we were working with

kschrader22:01:43

but we had an earlier connection to production

kschrader22:01:53

in the same REPL

kschrader22:01:59

that it ended up using

kschrader22:01:32

I’ve verified that whichever storage we connect to first gets cached

Ben Kamphaus22:01:11

@kschrader: it's cached off of the db's unique identifier / name — which is the same between restored backups in different storages.
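
In other words, something like the following can happen (a minimal Clojure sketch; the URIs are hypothetical, and both are assumed to point at restores of the same database):

(require '[datomic.api :as d])

;; two different URIs, but restores of the same database share its internal id
(def prod-conn  (d/connect "datomic:dev://prod-host:4334/app"))
(def local-conn (d/connect "datomic:dev://localhost:4334/app"))

;; the connection cache can hand back the same object for both
(identical? prod-conn local-conn)   ;=> true in the scenario described above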

kschrader22:01:00

ok, the API docs say: Connections are cached such that calling datomic.api/connect multiple times with the same URI value will return the same connection object.

kschrader22:01:07

which is wrong

kschrader22:01:10

and dangerous 😔

Ben Kamphaus22:01:02

@kschrader: it’s not intended that a peer should talk to two instances of the same database. That said, I understand your concern, I’ll look at correcting the API documentation. Discussion of this behavior has come up before.

kschrader22:01:21

we were able to roll back the changes by looking at the DB in the past and reversing the transactions
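
Reversing a transaction that way can be sketched roughly like this (a hedged sketch, not the exact script used above; tx-id is assumed to be the entity id of the offending transaction):

(require '[datomic.api :as d])

(defn reversal-tx-data
  "Builds tx-data that undoes the datoms asserted/retracted by tx-id."
  [db tx-id]
  (->> (d/q '[:find ?e ?a ?v ?added
              :in $ ?tx
              :where [?e ?a ?v ?tx ?added]]
            (d/history db) tx-id)
       (remove (fn [[e _ _ _]] (= e tx-id)))          ; skip the tx entity's own datoms
       (map (fn [[e a v added]]
              [(if added :db/retract :db/add) e a v]))))

;; (d/transact conn (reversal-tx-data (d/db conn) tx-id))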

kschrader22:01:35

but it made for a more eventful afternoon than I would have liked

Ben Kamphaus22:01:54

@kschrader: Specifically, we’ve discussed making it throw. I.e. we want to prevent the “and dangerous” portion of that, but at present we don’t have any intention of supporting database forking (i.e. connecting from one peer to multiple forks of a previous database).

kschrader22:01:13

that would be fine

kschrader22:01:32

when I do a connect the second time it should fail

kschrader22:01:50

not silently keep the original connection open

Ben Kamphaus22:01:22

If it’s a different URI, right — that’s the change in behavior that’s been discussed. Nothing’s been slated for a release at present, but I’ll update the dev team with your experience and let you know how it will be addressed.
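
Until something like that ships, a defensive wrapper is one way to get the failure behavior described above (a minimal sketch; safe-connect and the atom are hypothetical names, not part of the Datomic API):

(require '[datomic.api :as d])

(defonce seen-uris (atom {}))   ; connection object -> URI it was first obtained with

(defn safe-connect
  "Like d/connect, but throws when the cached connection was first handed out
  for a different URI, i.e. two URIs silently resolved to the same database."
  [uri]
  (let [conn  (d/connect uri)
        prior (get @seen-uris conn)]
    (when (and prior (not= prior uri))
      (throw (ex-info "connect returned a connection cached for a different URI"
                      {:requested uri :cached-for prior})))
    (swap! seen-uris assoc conn uri)
    conn))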

kschrader22:01:21

it’s probably only something we’d run into from the REPL

kschrader22:01:33

but when it goes sideways it goes very sideways

kschrader22:01:15

is there any way to force-change the :db-id after a restore?

Ben Kamphaus22:01:39

@kschrader: db-id is locked in. As I mentioned, connect and the peer library in general aren’t intended to support the idea of having forks of the same database. If you do need a workaround, i.e. to take data you’ve tested in staging and push it to prod, or to update selectively, etc. you have to be outside of the same peer app. Separate REST API peers or your own endpoints in different peer apps, etc. can work (no collision in the connection cache in that case), but it’s still a bit outside expected use.

Ben Kamphaus22:01:49

I guess the question is, what’s the use case/goal of talking to both the db and restore from the same instance? A lot of the speculative transaction stuff is meant to be handled by e.g. with.
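
For that speculative-testing case, a d/with based run looks roughly like this (a minimal sketch; conn, the entity, and the tx data are assumptions):

(require '[datomic.api :as d])

(let [db      (d/db conn)                                   ; conn is assumed to exist
      eid     [:user/id "u-123"]                            ; hypothetical lookup ref
      tx-data [[:db/add eid :user/name "new name"]]         ; hypothetical script output
      {:keys [db-after]} (d/with db tx-data)]
  ;; inspect db-after to verify the effect; nothing is written to storage
  (d/pull db-after '[*] eid))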

kschrader22:01:52

specifically, what happened today was that we pulled a copy of our production DB locally, tested a script against it, and then ran the script against production

kschrader22:01:56

all from a repl

kschrader22:01:10

but the repl got restarted between steps one and two

kschrader22:01:20

so it had a connection to production

kschrader22:01:38

and then later in the day we ran a function that has a hardcoded URL to localhost

kschrader22:01:52

and it ran against our production data

kschrader22:01:13

if connect had thrown an exception it would have prevented it

kschrader22:01:18

we generally have our production DB firewalled as well, but it was a bit of a perfect storm

kschrader22:01:30

the behavior was still unexpected though

Ben Kamphaus22:01:22

@kschrader: got it. I understand and am sympathetic to the unexpected aspect of it, and will get the docs corrected and get the dev team focused on the standing request for an error there.

Ben Kamphaus22:01:23

Keeping a separation in your environments such that the same peer can’t talk to both prod and staging/test dbs is probably the best means to prevent anything similar in the meantime.

kschrader22:01:25

yep, that’s what we usually do

kschrader22:01:51

firewall reactivated