2015-12-10
I've got a Cassandra cluster in a data center setup for Datomic (using the provided CQL scripts) and a locally running transactor connected to it.
Launching with Java options -server -Xms1g -Xmx1g -XX:+UseG1GC -XX:MaxGCPauseMillis=50
Starting datomic:cass://<IP-ADDRESS>:9042/datomic.datomic/<DB-NAME>?user=iccassandra&password=369cbbab59f6715bfde80cce13cde7cc&ssl= ...
System started datomic:cass://<IP-ADDRESS>:9042/datomic.datomic/<DB-NAME>?user=iccassandra&password=369cbbab59f6715bfde80cce13cde7cc&ssl=
But when I try to launch the web console with
bin/console -p 8080 staging datomic:cass://<IP-ADDRESS>:9042/?user=<username>&password=<password>&ssl=false
I get this error in the browser
Cannot support TLS_RSA_WITH_AES_256_CBC_SHA with currently installed providers trying to connect to datomic:cass://<ip address>:9042/?user=<username>&password=<password>&ssl=false, make sure transactor is running
Has anyone seen this error before?
If I try to programmatically connect it says Caused by: java.lang.IllegalArgumentException: Cannot support TLS_RSA_WITH_AES_256_CBC_SHA with currently installed providers in a stack trace.
@domkm: if you're trying to throw informative exceptions from a transaction function, just use clojure.core/ex-info
@paxan: ex-info doesn't differentiate between state and argument exceptions, but those Datomic exception classes do.
Fair point @domkm. I've settled on just using ex-info in our txn functions, based on the recommendation from one of the Datomic people.
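A minimal sketch of that approach, not taken from the thread: a transaction function that throws ex-info when a precondition fails. The :account/* attributes, the :db/ident, and the error-map keys are all illustrative.

(require '[datomic.api :as d])

;; Illustrative transaction function: throws an informative ex-info instead of
;; a bare exception when the precondition fails.
(def debit-fn
  (d/function
    '{:lang   :clojure
      :params [db account-id amount]
      :code   (let [balance (or (:account/balance (datomic.api/entity db account-id)) 0)]
                (if (< balance amount)
                  (throw (ex-info "Insufficient balance"
                                  {:error   :account/insufficient-balance
                                   :account account-id
                                   :balance balance
                                   :amount  amount}))
                  [[:db/add account-id :account/balance (- balance amount)]]))}))

;; installed by transacting it as the :db/fn of an entity, e.g.
;; @(d/transact conn [{:db/id (d/tempid :db.part/user) :db/ident :account/debit :db/fn debit-fn}])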
Can Datomic be a good solution for an ecommerce solution made in Clojure? Or an accounting app made in Clojure?
hell yes, and hell yes
both require a full audit trail to be sound. datomic excels at that
okay, then I have to search for a good tutorial / book to learn Datomic and how I should write the queries
see the 3 pinned items in here
there’s no book yet
but there are great videos from the folks who make Datomic
open the People list and then click pinned items just above all the people in the list
👍 also a bunch of recipes in the clojure cookbook here: https://github.com/clojure-cookbook/clojure-cookbook/tree/master/06_databases, 6-10 through 6-15
Q: we are evaluating Datomic for a social network app, and we are storing large bodies of text.
it is. you can use a blob store (DynamoDB or similar) to store the actual text bodies, and store the keys in Datomic, and so benefit from all Datomic’s capabilities until the point where you need to retrieve this text
actually, clarification is needed. Datomic’s performance is pressured when it is given large strings, but it’s totally capable of dealing with many, small strings
which case describes your problem?
@robert-stuttaford: Hi Robert, these days I am working on importing 20 million records from MySQL to Datomic, and I've applied the advice about pipelining and batching, but I keep getting a java.util.concurrent.ExecutionException: clojure.lang.ExceptionInfo: :db.error/transactor-unavailable Transactor not available {:db/error :db.error/transactor-unavailable} error
@joseph, for import jobs, you need to tweak your transactor memory settings. what storage are you using?
your import is crushing the transactor
what storage are you using? dynamo or something else?
the other issue is that threshold. it’s going to index to storage whenever it reaches that threshold. so you might want to increase that quite a bit
ok. try increasing that threshold to 128 or even 256mb
you should also give it time between batches to catch up with itself
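The threshold being discussed is memory-index-threshold in the transactor properties file; a sketch with illustrative values only — the keys are standard transactor settings, but the numbers should be tuned for your own storage, heap, and import load:

memory-index-threshold=256m
memory-index-max=512m
object-cache-max=1g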
ok, and we have around 120 variables, and each variable has around 100,000 datoms; should I do the request-index after importing each variable's data?
that’s a wise idea
import one variable’s datoms, request index, wait for it to go back to sleep
are you batching the transactions for those 120,000?
so 2000 datoms at a time, say
ok. if the values are small, then you can go higher than that
when you were saying "wait for it to go back to sleep", do you mean I should wait for the return of request-index?
you can also tag the transaction itself if you want to keep track of which source values you're transacting - e.g. [:db/add (d/tempid :db.part/tx) :source-range "variable-A__4001-6000"]
wait for its CPU usage to die down
i’ve never used request-index, reading docs
hey vijay
Hi Robert!
yeah request-index returns immediately. you can wait for the deref of http://docs.datomic.com/clojure/#datomic.api/sync-index to return
(do (d/request-index conn) @(d/sync-index conn (d/basis-t (d/db conn))))
something like that
let me know how it goes, i am curious to learn from your use-case
testing now, but I am a little unclear about the reason to increase the index threshold instead of decreasing it
well, if you’re manually controlling when you index, then doing so is only necessary to prevent it from indexing before you’re ready
it might start indexing before you’re done transacting all the datoms for a variable
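A rough sketch of that import loop, under the assumptions above: conn is an open connection, datoms-for-variable is a hypothetical seq of assertions for one variable, :source-range is the illustrative tx attribute mentioned earlier, and the batch size of 2000 follows the suggestion above.

(require '[datomic.api :as d])

(defn import-variable!
  "Transact one variable's datoms in batches of 2000, tagging each transaction,
   then request an index and block until indexing has caught up."
  [conn variable-name datoms-for-variable]
  (doseq [batch (partition-all 2000 datoms-for-variable)]
    (let [tx-tag [:db/add (d/tempid :db.part/tx) :source-range variable-name]]
      @(d/transact conn (conj (vec batch) tx-tag))))
  ;; ask the transactor to index now...
  (d/request-index conn)
  ;; ...and wait for the background index job to complete
  @(d/sync-index conn (d/basis-t (d/db conn))))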
stacktrace?
i prefer slow and correct to fast and incorrect
getting to fast and correct is a matter of tuning, which might not be worth the time investment if you achieve your goal before you get there
i say that as someone who’s been there XD
yes, of course correctness is most important. I used to batch around 150 datoms; that also works, at almost the same speed as now...
i recommend you reach out to @michaeldrogalis in #C051WKSP3, i think they might be working on some sort of SQL->Datomic ETL tool. your case might just be a great test case for them if that happens to be true
@robert-stuttaford: I'm refactoring a legacy publishing system, and I'm in the midst of experimenting with Datomic for it, hence the large text requirement.
so it is large text blobs?
how big is your biggest string?
10s of kb? 100s of kb? 1s of mb?
oh you can stick that in Datomic no problem
we’re using strings of that size and it’s totally fine
how many records are you talking?
ok. so, the pressure that large strings put on the system is that indexing takes longer, and fewer datoms are stored per index segment, which means more have to be retrieved from storage when satisfying queries
@bkamphaus and @luke can both comment with more detail than that
so it’s not like it’s a boolean GOOD or BAD; it’s a slow degradation as your size and volume increases
personally i think you might consider spending a day writing a migration, putting a whole bunch of data in, writing some queries, and seeing how it all feels.
I have never heard of people trying to build something like http://medium.com backed by Datomic
my biased opinion is that the benefits Datomic will bring you will far outweigh any perf costs you might pay. and, there are ways to deal with it if you do find that the perf pressure is too great (put strings in KV store, store keys in Datomic)
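A sketch of that KV-store pattern, with hypothetical put-blob! and get-blob functions standing in for whatever blob store is used (S3, DynamoDB, Mongo, ...), and illustrative :post/title and :post/body-key attributes:

(require '[datomic.api :as d])

;; Hypothetical blob-store functions -- swap in your actual storage client.
(declare put-blob! get-blob)

(defn save-post!
  "Store the large text body in the blob store and keep only its key in Datomic.
   :post/title and :post/body-key are illustrative schema attributes."
  [conn title body-text]
  (let [body-key (str (d/squuid))]
    (put-blob! body-key body-text)
    @(d/transact conn [{:db/id         (d/tempid :db.part/user)
                        :post/title    title
                        :post/body-key body-key}])))

(defn post-body
  "Look up the key in Datomic, then fetch the text from the blob store."
  [db post-eid]
  (get-blob (:post/body-key (d/entity db post-eid))))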
then again, I have not really come across datomic being positioned as some kind of all-purpose backend storage
we use it for everything
interesting. My plan B is to keep storing the large text in Mongo docs, with Datomic referring to them by key/id
we use DDB as a Datomic backend only
no other storages or direct-use dbs, aside from Redis as a post-query cache for some hot pages
we used memcached as a 2nd tier cache for Datomic as well
100%, happy to assist
@robert-stuttaford: I am a little bit confused about adding more peers, because I ran into a situation where I had to read around 1 million datoms' values from Datomic and the query failed every time; I figured that's because the result is too big and runs out of memory. So I am wondering whether more peers would help?
the result set of a Datalog query has to be able to fit into memory
the datoms under consideration do not have to, but if they don’t, you’ll have cache churn as it cycles index segments in and GC cleans up
you can, however, lazily walk over the datoms yourself, building up some sort of a result
this talk has stuff about doing that: http://www.infoq.com/presentations/datomic-use-case
the relevant api is http://docs.datomic.com/clojure/#datomic.api/datoms
this way you can lazily walk your million datoms, performing functional transformations (filtering, mapping, reducing, etc), and either arrive at an end result, which still has to fit into ram, or do some sort of processing and commit results to some sort of I/O so that your results no longer need to fit into ram
i hope all that makes some sense
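A sketch of that lazy walk with d/datoms; :post/body is an illustrative attribute, and the reduction (counting bodies over 10,000 characters) is just an example of arriving at a small result:

(require '[datomic.api :as d])

(defn count-long-bodies
  "Walk every datom for an attribute lazily via the AEVT index and reduce
   to a small result, so the full set never has to fit in memory at once.
   :post/body is an illustrative attribute."
  [db]
  (reduce (fn [n datom]
            (if (> (count (:v datom)) 10000) (inc n) n))
          0
          (d/datoms db :aevt :post/body)))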
yes, that's also what I am thinking of; the limited RAM is one problem, but these days I've read some info about coordination among peers, and I'm confused about whether more peers can help...
the fact that you can hold on to a database value indefinitely solves the timing problem
doesn’t matter how long the query phase takes, you don’t have to worry about the database changing on you
this allows you to perform all the work on a single peer, in a lazy-sequence fashion, perhaps parallelising some of the work along the way
i point you at http://onyxplatform.org again, as it’s built for precisely this sort of work coordination
btw, the experiment fails for the same reason, and the strange thing is there is neither an error nor a warning in the log
transactor unavailable?
<logger name="datomic.transaction" level="DEBUG"/>

<!-- uncomment to log transactions (peer side) -->
<logger name="datomic.peer" level="DEBUG"/>

<!-- uncomment to log the transactor log -->
<logger name="datomic.log" level="DEBUG"/>

<!-- uncomment to log peer connection to transactor -->
<logger name="datomic.connector" level="DEBUG"/>

<!-- uncomment to log storage gc -->
<logger name="datomic.garbage" level="DEBUG"/>

<!-- uncomment to log indexing jobs -->
<logger name="datomic.index" level="DEBUG"/>
check the transactor's logs - anything in there?
i would uncomment the last one, indexing jobs
i have to go. good luck