This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-01-05
Channels
- # admin-announcements (183)
- # aws (30)
- # beginners (22)
- # boot (301)
- # cider (19)
- # cljs-dev (3)
- # cljsrn (23)
- # clojars (15)
- # clojure (136)
- # clojure-italy (8)
- # clojure-nl (4)
- # clojure-russia (19)
- # clojured (10)
- # clojurescript (134)
- # component (48)
- # cursive (7)
- # datavis (4)
- # datomic (50)
- # devcards (6)
- # events (9)
- # jobs (1)
- # ldnclj (10)
- # lein-figwheel (19)
- # leiningen (1)
- # luminus (16)
- # off-topic (5)
- # om (151)
- # proton (60)
- # re-frame (10)
- # reagent (25)
- # remote-jobs (1)
- # slack-help (3)
- # spacemacs (1)
- # vim (1)
My license is still good until May 11, 2016
I followed the gpg guide from https://my.datomic.com/account -> https://github.com/technomancy/leiningen/blob/master/doc/DEPLOY.md#authentication
created credentials.clj and then encrypted it into .gpg
however, when running lein deps
i still get:
Could not transfer artifact com.datomic:datomic-pro:pom:0.9.5344 from/to (): Failed to transfer file: . Return code is: 204, ReasonPhrase: No Content.
This could be due to a typo in :dependencies or network issues.
If you are behind a proxy, try setting the 'http_proxy' environment variable.
no proxy running
in project.clj:
...
:repositories {""
{:url ""
:creds :gpg}}
...
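For reference, a hedged sketch of what the two files typically look like for Datomic Pro, per the my.datomic.com account page and Leiningen's DEPLOY.md guide (the repo URL, email, and download key here are placeholders, not the poster's redacted values):

```clojure
;; project.clj (fragment) -- repository name/URL as documented for Datomic Pro
:repositories {"my.datomic.com" {:url   "https://my.datomic.com/repo"
                                 :creds :gpg}}

;; ~/.lein/credentials.clj, then encrypted to ~/.lein/credentials.clj.gpg
;; (DEPLOY.md shows: gpg --default-recipient-self -e \
;;                       ~/.lein/credentials.clj > ~/.lein/credentials.clj.gpg)
;; The map key is a regex that Leiningen matches against the repository URL.
{#"my\.datomic\.com" {:username "you@example.com"      ; placeholder
                      :password "your-download-key"}}  ; placeholder
```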
gpg-agent is asking for the passphrase and not complaining, so i guess that part is correct too
the unencrypted credentials.clj i just copied from http://my.datomic.com
{#"my\.datomic\.com" {:username "[email protected]"
                      :password "asdf-asdf-asdf-asdas-asdasd"}}
i also did a bin/maven-install
which installed the peer libs into
~/b/datomic-pro-0.9.5344> ls -la ~/.m2/repository/com/datomic/datomic-pro/0.9.5344/
_maven.repositories   datomic-pro-0.9.5344.jar       datomic-pro-0.9.5344.pom
_remote.repositories  datomic-pro-0.9.5344.jar.sha1  datomic-pro-0.9.5344.pom.sha1
so i thought it wouldn't even have had to try to install it.. ok nvm
after rm -rf ~/.m2/repository/com/datomic/datomic-pro/0.9.5344/
it worked.. seems that if you install peer libs via maven-install
it conflicts somehow with lein deps
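A plausible explanation for the conflict (my assumption, consistent with the ls output above): bin/maven-install writes the `_remote.repositories` / `_maven.repositories` marker files, which record where a cached artifact came from, and Aether can then insist on re-checking that origin when lein resolves the dependency. A narrower workaround than deleting the whole directory would be to remove just the marker file (path assumed from the listing above):

```shell
# Remove only Maven's origin-tracking metadata for the cached artifact;
# the jar/pom and their .sha1 files stay in place for local resolution.
# -f makes this a no-op if the file is already gone.
rm -f ~/.m2/repository/com/datomic/datomic-pro/0.9.5344/_remote.repositories
```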
Does setting the host key in datomic.properties to "localhost" prevent remote machines from being able to connect to it?
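It would, as far as I understand the transactor config: the host value is both the address the transactor binds to and the address it advertises to peers via storage, so with host=localhost remote peers would try to reach the transactor at their own localhost. A sketch of the relevant fragment (property names from the stock transactor templates; addresses are placeholders):

```properties
protocol=dev
# host: address the transactor binds to AND writes into storage for
# peers to discover. host=localhost means remote peers dial their own
# localhost and fail to connect.
host=localhost
# alt-host: optional additional address advertised to peers, e.g. an
# externally reachable IP when the transactor binds locally.
alt-host=192.168.1.10
port=4334
```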
@sdegutis: Possibly relevant mailing list discussion: https://groups.google.com/d/topic/datomic/wBRZNyHm03o/discussion
does the connect function key off of something other than the uri to do its caching?
@kschrader: it’s cached off the db’s unique identifier / name, which is the same for backups restored into different storages.
ok, the API docs say: Connections are cached such that calling datomic.api/connect multiple times with the same URI value will return the same connection object.
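A sketch of how that caching can bite (URIs hypothetical; assumes the classic datomic.api peer library):

```clojure
(require '[datomic.api :as d])

;; Two different URIs -- but if "app-copy" was restored from a backup of
;; "app", both databases carry the same internal db-id, and per the
;; discussion here the connection cache keys off that id, not the URI.
(def prod-conn  (d/connect "datomic:dev://prod-host:4334/app"))
(def local-conn (d/connect "datomic:dev://localhost:4334/app-copy"))

;; So this can unexpectedly be true, and writes intended for the local
;; copy can land on prod:
(identical? prod-conn local-conn)
```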
@kschrader: it’s not intended that a peer should talk to two instances of the same database. That said, I understand your concern, I’ll look at correcting the API documentation. Discussion of this behavior has come up before.
we were able to roll back the changes by looking at the DB in the past and reversing the transactions
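A rough sketch of that reversal technique (my reconstruction, not the poster's actual script): read the bad transaction's datoms from the log and re-transact each one with its assert/retract flipped. `conn` and `bad-tx` are assumed to exist, and the tx entity's own datoms (e.g. :db/txInstant) must be skipped.

```clojure
(require '[datomic.api :as d])

(defn rollback-tx
  "Reverse every datom of transaction `bad-tx` (a t or tx entity id)
  by transacting each one with its op flipped."
  [conn bad-tx]
  (let [datoms (:data (first (d/tx-range (d/log conn) bad-tx (inc bad-tx))))]
    (d/transact conn
      (for [[e a v txid added?] datoms
            :when (not= e txid)]  ; skip the tx entity's own datoms
        [(if added? :db/retract :db/add) e a v]))))
```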
@kschrader: Specifically we’ve discussed making it throw. I.e. we want to prevent the “and dangerous” portion of that, but at present don’t have any intention of supporting database forking (i.e. connecting from one peer to multiple forks of a previous database).
if a different URI, right. That’s the change in behavior that’s been discussed. Nothing’s been slated for a release at present but I’ll update the dev team with your experience, and I’ll let you know how it will be addressed.
@kschrader: db-id is locked in. As I mentioned, connect
and the peer library in general aren’t intended to support the idea of having forks of the same database. If you do need a workaround, i.e. to take data you’ve tested in staging and push it to prod, or to update selectively, etc. you have to be outside of the same peer app. Separate REST API peers or your own endpoints in different peer apps, etc. can work (no collision in the connection cache in that case), but it’s still a bit outside expected use.
I guess the question is, what’s the use case/goal of talking to both the db and the restore from the same instance? A lot of the speculative transaction stuff is meant to be handled by e.g. with.
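For context, d/with applies tx-data speculatively against an in-memory db value without touching the transactor, which covers the test-a-script use case without ever connecting to a restored copy. A minimal sketch (assumes an existing `conn`; the attribute name is hypothetical):

```clojure
(require '[datomic.api :as d])

(let [db     (d/db conn)
      ;; Apply the tx speculatively: nothing is written to storage.
      result (d/with db [{:db/id     (d/tempid :db.part/user)
                          :user/name "trial-user"}])  ; hypothetical attr
      db'    (:db-after result)]
  ;; Query the speculative value as if the tx had been committed.
  (d/q '[:find ?e . :where [?e :user/name "trial-user"]] db'))
```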
specifically what happened today was that we pulled a copy of our production DB locally, tested a script against it, and then ran the script against production
we generally have our production DB firewalled as well, but it was a bit of the perfect storm
@kschrader: got it. I understand and am sympathetic to the unexpected aspect of it; I’ll get the docs corrected and bring the dev team’s focus to the standing request for an error there.
Keeping a separation in your environments such that the same peer can’t talk to both prod and staging/test dbs is probably the best means to prevent anything similar in the meantime.