This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-05-17
Channels
- # aws (16)
- # beginners (82)
- # boot (29)
- # cider (43)
- # cljs-dev (90)
- # cljsrn (14)
- # clojure (79)
- # clojure-dev (12)
- # clojure-greece (4)
- # clojure-italy (12)
- # clojure-russia (81)
- # clojure-shanghai (1)
- # clojure-spec (39)
- # clojure-uk (28)
- # clojurescript (159)
- # consulting (1)
- # cursive (16)
- # data-science (6)
- # datomic (18)
- # devops (3)
- # emacs (22)
- # figwheel (1)
- # graphql (15)
- # hoplon (3)
- # jobs (1)
- # jobs-discuss (8)
- # leiningen (1)
- # luminus (6)
- # lumo (1)
- # off-topic (18)
- # om (6)
- # onyx (38)
- # pedestal (30)
- # perun (3)
- # re-frame (38)
- # reagent (8)
- # ring-swagger (2)
- # rum (2)
- # sql (2)
- # unrepl (14)
- # untangled (1)
- # vim (8)
Anyone know a good, cheaper EC2 instance that works for an AWS-deployed Datomic transactor? The default is c3.large, which is a bit expensive for my little project.
I looked around and it sounds as though some EC2 instances won't work, but I couldn't quite determine what the requirements are.
ezmiller77: I think it's about access to the local filesystem
@ezmiller77 It's possible to run a Transactor on an EC2 micro instance, though only for tiny workloads. A 'small' instance works for light load; you just have to set the heap and cache sizes correctly on the transactor.
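The advice above about tuning heap and cache sizes for a small instance might look something like the transactor properties sketch below. This is illustrative only: the table name is hypothetical, and the exact values would need to be tuned for the actual instance; the key idea is that `java-xmx`, the memory index settings, and the object cache together must fit in the instance's RAM.

```properties
# Hypothetical transactor.properties fragment for a small/cheap instance
# (values illustrative, not a recommendation).
protocol=ddb
aws-dynamodb-table=my-datomic-table

# Keep the JVM heap small enough for the instance; on a ~2 GB box,
# 1g heap leaves headroom for the OS.
java-xmx=1g

# Shrink the memory index and object cache so they fit in the smaller heap.
memory-index-threshold=16m
memory-index-max=64m
object-cache-max=128m
```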
stuartsierra: Thanks for the response. My little project just serves results to a single blog (my own), which is not exactly getting hit by a lot of requests 😉. I suppose that qualifies as a tiny workload, no?
hey @marshall + @jaret – were you able to find out any more info on the issue where tx-range can potentially produce a clojure.lang.Delay? referring to: https://clojurians-log.clojureverse.org/datomic/2016-10-04.html#inst-2016-10-04T18:16:46.002099Z
I'm seeing this consistently using onyx-datomic on a fresh datomic-pro 0.9.5561 db with schema installed and about 500 generated entities.
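A defensive workaround for the behavior described above might look like the sketch below: force any `clojure.lang.Delay` in a transaction's `:data` before handing it to a downstream consumer such as onyx-datomic. The helper itself is plain Clojure; the commented usage assumes the Datomic Log API (`d/log` / `d/tx-range`) and is untested.

```clojure
(defn force-tx-data
  "If a log entry's :data arrives as a clojure.lang.Delay, deref it;
  otherwise pass it through unchanged."
  [tx]
  (update tx :data #(if (delay? %) (deref %) %)))

;; Hypothetical usage against the Datomic Log API:
;;   (require '[datomic.api :as d])
;;   (map force-tx-data (d/tx-range (d/log conn) nil nil))
```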
we just had an issue where we forgot to set some attributes to be fulltext-indexed, and apparently you can't alter fulltext-indexing status through a migration.
So our current plan is to rename the ident for that attribute to something else, then reinstall that ident with fulltext indexing set, then port the data we currently have over from the old renamed ident to the new one.
yedi: why not simply create a new attribute, migrate the data over to it, and change the client code to use that one? Seems easier operationally
@U06GS6P1N we're only in alpha currently and don't have much data for this attribute, so I'm thinking preserving the name is worth it as long as there are no major operational issues with this method
well I'm not sure what you suggest is feasible without an interruption of service
and a breaking change in your database, e.g. you won't be able to go back to previous versions of the code.
and 2. we have other fields that we might want to make fulltext-indexed, but we're not currently sure about them yet. Since it isn't trivial to make them fulltext-indexed down the line, we're considering just making them fulltext-indexed from the get-go. What are the performance trade-offs for having a bunch of fulltext-indexed attributes?
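The rename-and-reinstall plan discussed above could be sketched roughly as the three transactions below. This is an untested sketch, not a tested migration: `:post/body` is a hypothetical attribute name, `conn` is assumed to be an open Datomic connection, and the value type/cardinality would need to match the real attribute. Step 2 works because `:db/fulltext` can be set when an attribute is first installed, which is exactly why the rename is needed.

```clojure
(require '[datomic.api :as d])

;; 1. Rename the existing ident out of the way:
@(d/transact conn [{:db/id :post/body :db/ident :post/body-old}])

;; 2. Install a fresh attribute under the original name, fulltext-enabled:
@(d/transact conn [{:db/ident       :post/body
                    :db/valueType   :db.type/string
                    :db/cardinality :db.cardinality/one
                    :db/fulltext    true}])

;; 3. Port existing values from the renamed attribute to the new one:
(let [db (d/db conn)]
  @(d/transact conn
     (for [[e v] (d/q '[:find ?e ?v :where [?e :post/body-old ?v]] db)]
       [:db/add e :post/body v])))
```

Note that after step 3 the old datoms still exist under `:post/body-old`; whether to retract them is a separate decision.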