
My license is still good until May 11, 2016. I followed the gpg guide from -> created credentials.clj and then encrypted it into .gpg. However, when running lein deps I still get:

Could not transfer artifact com.datomic:datomic-pro:pom:0.9.5344 from/to  (): Failed to transfer file: . Return code is: 204, ReasonPhrase: No Content.
This could be due to a typo in :dependencies or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.


no proxy running


in project.clj:

:repositories {""
                 {:url   ""
                  :creds :gpg}}
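For comparison, the working form from the my.datomic.com instructions looks roughly like this (the repo name and URL below are what the Datomic docs commonly show; treat them as an assumption and verify against your account page):

```clojure
;; project.clj (sketch; repo URL assumed from the my.datomic.com instructions)
:repositories {"my.datomic.com" {:url   "https://my.datomic.com/repo"
                                 :creds :gpg}}
```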


gpg-agent is asking for the passphrase and not complaining, so I guess that part is correct too


the unencrypted credentials.clj I just copied from


{#"my\.datomic\.com" {:username "[email protected]"
                      :password "asdf-asdf-asdf-asdas-asdasd"}}
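For reference, the Leiningen gpg guide has you encrypt that file roughly like this (a sketch of the documented commands, assuming a working gpg key already exists):

```shell
# Encrypt the plaintext credentials to yourself, producing the .gpg file
# that Leiningen reads; then remove the plaintext copy.
gpg --default-recipient-self -e \
    ~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg
rm ~/.lein/credentials.clj
```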


I also ran bin/maven-install, which installed the peer libs into:

~/b/datomic-pro-0.9.5344> ls -la ~/.m2/repository/com/datomic/datomic-pro/0.9.5344/
_maven.repositories   datomic-pro-0.9.5344.jar       datomic-pro-0.9.5344.pom
_remote.repositories  datomic-pro-0.9.5344.jar.sha1  datomic-pro-0.9.5344.pom.sha1
so I thought it wouldn't even have to try to download it..


ok, never mind :simple_smile: after rm -rf ~/.m2/repository/com/datomic/datomic-pro/0.9.5344/ it worked.. it seems that installing the peer libs via maven-install somehow conflicts with lein deps


Does setting the host key to "localhost" prevent remote machines from being able to connect to it?


Specifically in the Free transactor/protocol.
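For context, the free transactor's properties file (e.g. the free-transactor-template sample) looks roughly like this; host is the address the transactor advertises to peers, so with host=localhost only peers on the same machine can reach it, while an externally resolvable hostname or IP allows remote peers to connect:

```properties
protocol=free
host=localhost
## free mode uses 3 consecutive ports starting with this one:
port=4334
```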


does the connect function key off of something other than the uri to do its caching?


we had a local backup of a database that we thought we were working with


but we had an earlier connection to production


in the same REPL


that it ended up using


I’ve verified that whichever storage we connect to first gets cached

Ben Kamphaus 22:01:11

@kschrader: cached off of the db's unique identifier / name, which is the same between restored backups in different storages.
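A sketch of that failure mode (the URIs and database name below are hypothetical; this assumes the datomic.api peer library and running transactors):

```clojure
(require '[datomic.api :as d])

;; Hypothetical URIs: production storage, and a local restore of a backup
;; of the very same database.
(def prod-uri  "datomic:free://prod-host/mydb")
(def local-uri "datomic:free://localhost/mydb")

;; Connections are cached by the database's internal unique id, which a
;; restored backup shares with the original. So after this:
(def prod-conn (d/connect prod-uri))

;; ...this second connect can return the already-cached production
;; connection, despite the different URI:
(def local-conn (d/connect local-uri))
```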


ok, the API docs say: Connections are cached such that calling datomic.api/connect multiple times with the same URI value will return the same connection object.


which is wrong


and dangerous 😔

Ben Kamphaus 22:01:02

@kschrader: it’s not intended that a peer should talk to two instances of the same database. That said, I understand your concern, I’ll look at correcting the API documentation. Discussion of this behavior has come up before.


we were able to roll back the changes by looking at the DB in the past and reversing the transactions


but it made for a more eventful afternoon than I would have liked

Ben Kamphaus 22:01:54

@kschrader: Specifically, we've discussed making it throw. I.e. we want to prevent the "and dangerous" portion of that, but at present we don't have any intention of supporting database forking (i.e. connection from one peer to multiple forks of a previous database).


that would be fine


when I do a connect the second time it should fail


not silently keep the original connection open
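Until connect throws on its own, one possible interim guard is a wrapper (a sketch, not an official API; the URI parsing below naively assumes the database name is the URI's last path segment):

```clojure
(require '[clojure.string :as str]
         '[datomic.api :as d])

(defonce connected-uris (atom {}))

(defn safe-connect
  "Like d/connect, but throws if this peer already connected to a database
  with the same name through a different URI."
  [uri]
  (let [db-name (last (str/split uri #"/"))
        prior   (get @connected-uris db-name)]
    (when (and prior (not= prior uri))
      (throw (ex-info "Database already connected via a different URI"
                      {:db-name db-name :prior-uri prior :new-uri uri})))
    (swap! connected-uris assoc db-name uri)
    (d/connect uri)))
```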

Ben Kamphaus 22:01:22

if it's a different URI, right. That's the change in behavior that's been discussed. Nothing's been slated for a release at present, but I'll update the dev team with your experience and let you know how it will be addressed.


it’s probably only something we’d run into from the REPL


but when it goes sideways it goes very sideways


is there any way to force-change the :db-id after a restore?

Ben Kamphaus 22:01:39

@kschrader: db-id is locked in. As I mentioned, connect and the peer library in general aren’t intended to support the idea of having forks of the same database. If you do need a workaround, i.e. to take data you’ve tested in staging and push it to prod, or to update selectively, etc. you have to be outside of the same peer app. Separate REST API peers or your own endpoints in different peer apps, etc. can work (no collision in the connection cache in that case), but it’s still a bit outside expected use.

Ben Kamphaus 22:01:49

I guess the question is, what’s the use case/goal of talking to both the db and restore from the same instance? A lot of the speculative transaction stuff is meant to be handled by e.g. with.
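For the speculative case, d/with applies tx-data to an in-memory database value without touching the connection at all (a sketch; conn, the :user/email attribute, and the tempid partition are assumptions):

```clojure
(require '[datomic.api :as d])

;; Given an existing connection `conn`, test a transaction speculatively:
(let [db     (d/db conn)
      result (d/with db [{:db/id      (d/tempid :db.part/user)
                          :user/email "someone@example.com"}])]
  ;; (:db-after result) includes the speculative datoms; query it to
  ;; exercise a script without ever transacting against the real db.
  (:db-after result))
```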


specifically, what happened today: we pulled a copy of our production DB locally, tested a script against it, and then ran the script against production


all from a repl


but the repl got restarted between step one and two


so it had a connection to production


and then later in the day we ran a function that has a hardcoded URL to localhost


and it ran against our production data


if connect had thrown an exception it would have prevented it


we generally have our production DB firewalled as well, but it was a bit of the perfect storm


the behavior was still unexpected though

Ben Kamphaus 22:01:22

@kschrader: got it. I understand and am sympathetic to the unexpected aspect of it; I'll get the docs corrected and bring the dev team's focus to the standing request for an error there.

Ben Kamphaus 22:01:23

Keeping a separation in your environments such that the same peer can't talk to both prod and staging/test DBs is probably the best way to prevent anything similar in the meantime.


yep, that’s what we usually do


firewall reactivated