This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-07-26
Channels
- # bangalore-clj (1)
- # beginners (12)
- # boot (48)
- # cider (56)
- # clara (1)
- # cljs-dev (15)
- # clojure (455)
- # clojure-austin (2)
- # clojure-dev (33)
- # clojure-italy (26)
- # clojure-nl (6)
- # clojure-poland (10)
- # clojure-russia (23)
- # clojure-spec (33)
- # clojure-uk (62)
- # clojurescript (37)
- # code-art (2)
- # cursive (12)
- # datomic (48)
- # funcool (1)
- # juxt (16)
- # leiningen (13)
- # off-topic (12)
- # om (23)
- # onyx (16)
- # other-lisps (5)
- # parinfer (2)
- # pedestal (28)
- # re-frame (60)
- # reagent (8)
- # ring (1)
- # ring-swagger (15)
- # spacemacs (5)
- # specter (53)
- # test-check (2)
- # unrepl (8)
- # vim (14)
I'm trying to use environ, but using it at compile time (our build machine has all the env vars). If I write (def ^:const x (env :x))
it works: x will be compiled to a constant. But (def ^:const x (Integer/parseInt (env :x)))
will not work.
@doglooksgood are you doing AOT compiling? Or is this ClojureScript?
@bfabry : why should delete be more expensive than assoc? can't we just 'mark it with a tombstone' ?
@danielcompton I'm doing AOT compilation in Clojure.
@qqq A look at the source for assoc
on vectors should convince you that delete
would be more expensive -- assoc
doesn't change any of the indices for the vector so only the segment containing the "changed" item needs to be updated. delete
would cascade a change of indices across the segments of the vector. As I understand how vectors are implemented in Clojure -- happy to be corrected.
@doglooksgood Aside from all the myriad problems associated with AOT and why you should avoid it... ahem ...could you be a bit more specific about "will not work"? What exactly is the error/behavior you get?
@seancorfield : I failed to consider indexing. Does vector maintain the invariant "the only non-full 32-item block is the last block" ?
I'm not sure about the internal implementation, but vectors guarantee constant time index lookup. seems hard to have both that guarantee and a fast delete-at-index op
vectors guarantee a (log_32 num-elements) time lookup, it's stored as a btree with branching factor of 32
I think it's possible to store "how many elements does this subtree have", which also allows fast lookup
you're right, log32n. so if you allowed delete you'd have to rebalance the tree right? honestly I haven't thought about data structure algorithmic performance since uni, but I just kind of trust that this probably isn't feasible. I'm sure the fact that the data structures are persistent makes it even more complicated
disclaimer: I haven't read the actual source code. So now we both agree that a persistent vector is a tree with depth log_32 n. This means that assoc is NOT O(1), but O(log_32 n): when we update a node, we have to update all ancestors until we get to the root
since vectors are persistent => assoc has to create a new node => but then it has to create a new block for every ancestor from the thing we want to update all the way up to the root
so then the question is: can we do delete! in O(log_32 n) time, while making it easy to index -- and I don't know, but I'm leaning towards yes
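A small REPL sketch of the structural-sharing point above (this only demonstrates the observable behaviour, not the internal tree): assoc returns a new vector that differs only at the updated slot, and the original is untouched.

```clojure
;; assoc on a persistent vector: the original is unchanged, the copy
;; differs only at the updated index. Internally only the ~log_32 n
;; nodes on the path to that index are copied; the rest is shared.
(def v  (vec (range 100)))
(def v2 (assoc v 0 :changed))

(first v)                      ;; => 0 (original unchanged)
(first v2)                     ;; => :changed
(= (subvec v 1) (subvec v2 1)) ;; => true (everything else identical)
```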
if I use (def ^:const x (env :x))
and :x in env is "hello", it will be compiled to (def ^:const x "hello")
I think, because our build machine has environment variables and the production machine doesn't, this is the way we want it. If I use (def ^:const x (Integer/parseInt (env :x)))
, x will not be compiled to a literal constant. So at runtime, this (Integer/parseInt (env :x))
will run again.
^:const has a very small, very specific use, and that is not it; it only happens to "work" for (env :x), but that could change at any time
(I'm a bit surprised (def ^:const x (env :x))
works, to be honest)
^:const causes the value of def to be inlined at the invocation site, it doesn't care about how the value is produced
const doesn't mean that the literal expression passed to def
will be inlined, (def ^:const x (do (println "foo") 2))
the println will only ever be run once
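A quick sketch of that point: the init expression is evaluated exactly once, when the def form runs; what ^:const inlines at call sites is the resulting value, not the expression.

```clojure
;; The do-block (and its println) runs once, at def time.
(def ^:const two (do (println "computing...") 2))

;; Call sites compile against the value 2; calling this repeatedly
;; never re-runs the println.
(defn plus-two [n] (+ n two))

(plus-two 40) ;; => 42
```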
I'm trying to implement a special macro (:: a t-sig) where it always evals to a, regardless of where the :: is located
@qqq Not sure I'm understanding you but Clojure does not support "reader macros"...
@seancorfield : I also thought it might be impossible, but then I read: https://stackoverflow.com/questions/20677055/define-my-own-reader-macro-in-clojure
Tagged literals begin with #
-- like #inst
and #uuid
-- but that's not "reader macros".
Tagged literals have #
, a namespace-qualified symbol, and a regular Clojure expression. The regular Clojure expression is read, then passed to the function associated with that symbol.
@qqq We use tagged literals in our configuration library at work so we can define values in "special" ways.
@seancorfield : (I know nothing about tagged literals / reader macros) -- so what you're saying is that (1) what tagged literals get is after macro expansion and (2) it's basically a function call with a single argument ?
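Roughly, yes. A minimal sketch with clojure.edn (the my/celsius tag is made up for illustration): the reader reads the following form as ordinary data, then passes it to the function registered for that tag. Note this happens at read time, before any macro expansion.

```clojure
(require '[clojure.edn :as edn])

;; #my/celsius 25 : the reader reads 25 as a normal long, then calls
;; the function registered under 'my/celsius with it. Effectively a
;; one-argument function call at read time; no macros involved.
(edn/read-string
  {:readers {'my/celsius (fn [c] {:kelvin (+ c 273)})}}
  "#my/celsius 25")
;; => {:kelvin 298}
```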
How come tools.deps.alpha
is using a map (which is merged with the default deps)? Doesn't that mean I can't specify the order of the deps? Which, IMO is crucial on the JVM.
This is discussed in my talk, but if order matters, then your system is already broken. That is, you should never have two different versions of the same class on your classpath - if you do, then something bad has already happened.
Just tried out Compojure API (2.0). https://github.com/metosin/compojure-api
Paranormal
@rauh. Do you mean the order of maven coordinates in project.clj? I wasn't aware that order mattered there
@pesterhazy No I mean the new tools.deps.alpha
project that just got released. In maven all of the order matters, which makes projects predictable. Though there is a bug report for leiningen too: https://github.com/technomancy/leiningen/issues/2283
It's going to seemingly work for projects that have <=8 dependencies and then all of a sudden the dependencies will be semi-random and even change if you just change a version number.
it's prolly worth raising the issue with @alexmiller & co
@rauh, I was aware of tools.deps but didn't know order influenced the maven algorithm
Maybe just order apathetically?
Whoops alphabetically
A wonderful typo though
> Dependency mediation - this determines what version of a dependency will be used when multiple versions of an artifact are encountered. Currently, Maven 2.0 only supports using the "nearest definition" which means that it will use the version of the closest dependency to your project in the tree of dependencies. You can always guarantee a version by declaring it explicitly in your project's POM. Note that if two dependency versions are at the same depth in the dependency tree, until Maven 2.0.8 it was not defined which one would win, but since Maven 2.0.9 it's the order in the declaration that counts: the first declaration wins. https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html
At least it would make it predictable
But yeah deviating from maven here doesn't seem like a good idea
I imagine deps could support a vector of "tuples" and a map transparently: use the former if you care about order, the latter if not (but since maps make ordering unpredictable I am not sure it even makes sense to support them at all)
Yeah I wish the new t.d.alpha
was leaning heavily on maven and would just provide a hiccup wrapper around their XML config. That'd be neat.
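For what it's worth, the vector-of-tuples idea could look something like this (purely hypothetical syntax, not anything tools.deps.alpha actually accepts):

```clojure
;; Hypothetical: an ordered deps declaration, one tuple per coordinate.
;; A vector preserves declaration order, so Maven's "first declaration
;; wins" mediation (>= 2.0.9) would stay reproducible.
[[org.clojure/clojure     {:mvn/version "1.9.0-alpha17"}]
 [com.google.guava/guava  {:mvn/version "21.0"}]]
```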
to be fair I've never been in a situation where I had to re-order maven coordinates in clojure to fix dependencies. Have you?
Can't you accomplish the same thing through exclusions?
imho it doesn't really matter; if the aim is to have a simple tool that kind of maps 1-1 with maven without too much indirection, that ordering issue could be considered a flaw
It's seldom, and only matters if you have the same files in two jars, which you shouldn't have. But then, it's not clear how often this has been the cause of user-reported problems where the user never figured out what the issue was.
@pesterhazy we have had to do this many times when using xml
deps conflicts is quite common, ex between guava versions in transitive deps it happens to me all the time
IMO predictability is crucial, so deps should neither be put into a set (as leiningen does currently) nor into a map.
hi everyone, I want to ask: with sqlkorma, how do I set date and datetime for MySQL in GMT timezone format?
@rauh, wait why is it a set? do you mean internally?
@rauh my (limited) understanding is that you're meant to resolve all dependency conflicts completely, so it would actually be ok to store them as a map
@danielcompton how would that work given that there are transitive dependencies, java dependencies, etc.?
do you mean a complete spec in the style of yarn.lock?
I thought the idea was that you completely resolved all of the transitive dependencies explicitly
@danielcompton ahh? didn't understand it that way.
yeah I think you're closer @mpenet
but I thought the idea is that there are no conflicts
sort of like lein pedantic
I see, that would make sense, and actually be an improvement over raw maven definitions (arguably, since it's way more verbose)
@danielcompton My issue isn't only about transitive deps, but about the non-predictable ORDER of the jar's on the classpath. (See my leiningen ticket)
hmm, yeah
Does the order matter for you because you have several JARs which provide the same namespaces/classes?
alex mentioned the classpath order in the talk here in berlin
iirc, he said that the order is useful for some things (like overriding packages a la carte) but is not a great fit for other things, like making sure the right dependencies are picked up
so it sounded to me like tools.deps is intended to take classpath order out of the equation somehow
well... probably someone, just ask, but maybe in the #boot channel...
what could be an easy way to make a random choice given some probabilities
I just need a function that given something like this:
{:a 1/2 :b 1/4 :c 1/4}
returns me a random element from [:a :b :c] respecting the probabilities for each of them
yeah I thought about that, so I guess there is no easier way?
wouldn't repeating each choice according to its probability and then using rand-nth
do the trick?
that would work in a simple case like this one
but in the general case I would need to find the LCD of all the fractions
and generate a massive vector with all the repeated stuff
Something like this gives me the ranges I can then check
user> (def probs {0 1/2 1 1/4 2 1/4})
#'user/probs
user> (into {} (map-indexed
(fn [idx [el prob]]
[el
[(apply + (take idx (vals probs)))
(apply + (take (inc idx) (vals probs)))]])
probs))
{0 [0 1/2], 1 [1/2 3/4], 2 [3/4 1N]}
quite sure it can be improved a lot though
nice thanks
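For the record, the range idea above can be folded into a single function. This is just a sketch along the lines discussed (the name weighted-rand-choice is made up):

```clojure
;; Draw r uniformly from [0, total), then walk the entries,
;; subtracting each weight until r falls inside one of the ranges.
(defn weighted-rand-choice [m]
  (let [total (reduce + (vals m))]
    (loop [[[k w] & more] (seq m)
           r              (rand total)]
      (if (< r w)
        k
        (recur more (- r w))))))

(weighted-rand-choice {:a 1/2 :b 1/4 :c 1/4}) ;; => :a about half the time
```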
I am pondering how I would implement a simple compiler and/or interpreter in Clojure. In particular, I am wondering what would be the best way to model the abstract syntax tree. The first (and most obvious) option would be to represent AST nodes as maps, tagged with their type/label (e.g. a :type key). I would then define the various functions that operate on the AST as multimethods using the node's type as the dispatch key. Another option would be to represent each type of node as a record, and to then have them all implement a protocol (e.g. IExpr) which would declare all the functions operating on AST nodes. This comes with the added benefit that AST nodes could implement existing protocols too. Which of these options would be most idiomatic in Clojure? Or do you have another one to suggest?
Which parts in particular? I am not looking to write a parser, just to understand what the best way to model an AST in clojure would be
I mentioned instaparse because if you use it, you don't have to think about the AST representation yourself
https://github.com/clojure/tools.analyzer https://github.com/clojure/tools.analyzer.jvm
thanks! so from that I gather that raw maps with a :type field seems to be the way to go?
Another, completely unrelated question… Does anyone have advice/examples on how to structure application business logic? This is a fairly general question (not specific to Clojure), but there might be Clojure-specific idioms coming into play. Roughly speaking I am wondering if I should take a CQRS-y road and try to have a clear distinction between functions that modify the app’s state and functions that read that state. The application I am currently working on is a fairly simple, CRUD-y Clojure+Datomic web app, but I would like to use it as a learning ground for good practices
I’d start with the general concept of event sourcing, which is more general and more usable than CQRS
@noisesmith right, sorry, I think my question was misphrased. Right now I am more concerned about how to structure my code at the application level (how to organise functions and manage side effects)
event sourcing is a strategy for this
how so @noisesmith ?
the concept is that events are immutable and describe the actual domain data
then, your db describes state, and is created via a reduce across the events (literally or conceptually)
@noisesmith i mean, I am familiar with what event sourcing is, but I don’t see how it would help with this
because now your state is just a query across your events
it’s an optimization
so you have “all events”, from that you derive a postgres db, or a datomic db, or even a mongo db - via looping over the events, maybe with reduce
that’s your state of the world
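A toy sketch of "state is a reduce across the events" (the event shapes here are made up for illustration):

```clojure
;; Each event is immutable data; current state is just a fold over them.
(defn apply-event [state {:keys [type amount]}]
  (case type
    :deposit  (update state :balance + amount)
    :withdraw (update state :balance - amount)))

(def events
  [{:type :deposit  :amount 100}
   {:type :withdraw :amount 30}
   {:type :deposit  :amount 5}])

(reduce apply-event {:balance 0} events)
;; => {:balance 75}
```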
Right, but Datomic already gives a fairly event-sourced model (except that transactions don't represent domain actions, but they can be reified with a key that does represent a domain action)
now, if your events are set up properly (strictly ordered, immutable) - all instances of your app have access to the same time series of immutable states
queries are reads of the current state, “modifications” are insertions into the stream of events (you must loop back up to that level)
right, right
so you are on the right track if using datomic, you don’t need CQRS for this
My concern is more: how do I nicely separate my “business logic” (functions that perform transactions, authorisation and reify transactions with additional infos) from my API
I use protocols to describe my domain / API level abstractions
what should the interface for those functions be to make them easy to think about and deal with, etc
then for my implementation, I use functional code over vanilla data structures implementing those protocols
the protocols are used as a signal to a reader of the code / user of the library that these names describe domain level concepts, they are the big picture organization of the code
I’ll have to see if I have a good open source implementation of it…
so protocols are your “public API”, and all other functions are implementation details
right - and the protocol methods are expected to take and return hash maps, vectors, keywords, numbers
so we don’t pile on mountains of OO, we use modeling tools on the boundaries as a line in the sand, so to speak - to declare organizational intention
how do you pass context around (db, conn and auth)? and how do you perform authorisation / reified transactions, if you do so?
I use component, so that each subsystem gets the parts of the app that it needs passed in on initialization
absolutely - found it!
protocol definition https://github.com/noisesmith/ludic/blob/master/src/clj/org/noisesmith/ludic/protocol.clj
implementation https://github.com/noisesmith/ludic/blob/master/src/clj/org/noisesmith/ludic.clj
oh man - that project is in a slightly weird state, there’s two GameBoard protocols that should have different names, and some protocols that need to be moved to the proto namespace
sorry ! it’s still in heavy development
the record would expect the ref, yes
And if I want to reify every transaction with, say, the access token of the current user, how would you do this? Would you provide your own “transact” function, wrapping Datomic’s?
that’s a usage of reify that doesn’t match what I thought the word meant
do you mean parameterize? I thought reify meant “make an abstract thing into a concrete one”
ahh! now I get it, thanks
so yeah, I would make an object that represents that reifications including the user id
this is getting deeper into datomic than my working knowledge of it - I’ve taken a workshop but not gotten far with it in real usage
@noisesmith haha ok, no worries. In general though, would you consider it “good code” if I wrapped all access to Datomic inside a protocol (e.g. IDatomicReaderWriter or something) so as to control how every transaction is made, and add some data to the transactions as I see fit?
I’d first see if this is an abstraction datomic itself allows
unless your goal is to be able to swap in another database (which probably means forgoing a bunch of the features that make datomic worth it?)
I wouldn’t bother abstracting things that pragmatically wouldn’t be worth replacing ever
I wouldn’t want to swap out Datomic. The only thing I would want is “intercept” calls to datomic/transact
to add some data to the transactions
I’d say make your own function over transact that adds the data, probably parameterized with a hash map so you can generalize and introspect
so that I don’t have to do this manually everywhere I call datomic/transact
in my application
one thing to avoid is opaque wrappers (whether partial or an Object with hidden state - which btw is what a partial or closure is) - use a record implementing datomic’s own protocol if possible, but parameterized by keys you can introspect on the record and access in context
@noisesmith is there a repl command to get the list of all protocols implemented by an object?
supers
oh, supers needs the class, but that’s easy enough
=> (supers (class {}))
#{clojure.lang.IKVReduce clojure.lang.IFn clojure.lang.IMapIterable java.io.Serializable java.lang.Object clojure.lang.IObj clojure.lang.IMeta java.lang.Runnable clojure.lang.MapEquivalence clojure.lang.IHashEq clojure.lang.ILookup clojure.lang.IPersistentMap clojure.lang.Counted clojure.lang.IEditableCollection clojure.lang.Associative java.lang.Iterable clojure.lang.IPersistentCollection clojure.lang.AFn java.util.Map java.util.concurrent.Callable clojure.lang.Seqable clojure.lang.APersistentMap}
wef-backend.core=> (supers (type (get-conn)))
#{#<Class@35fc6dc4 java.lang.Object>
#<Class@e7b265e clojure.lang.IType>
#<Class@6b337969 datomic.Connection>}
cool - so you can make a defrecord that implements Connection - the others come free with defrecord
this is the point where I end up reading source code usually, heh
I bet it’s documented … somewhere
I’ll ask @U0509NKGK , I am sure he has insights on this
cool - thanks for asking about this, I learned a couple of things in trying to find your answer
@noisesmith Glad to hear that! I was afraid I wasted a bit too much of your time
well - they do it slightly differently than I suggested, because the user-id org-id and tx-data are totally hidden once you call defn
err, I mean once you call transact-wrapper
the return value of that function doesn’t expose any of those things as data
@noisesmith do you think that’s an issue? it seems to simplify the life of the caller, especially if he doesn’t care about those
cluttering every function that calls transact with auth data that it doesn’t care about sounds problematic
@hmaurer until you are trying to debug code using the wrapper (in my experience) - it's not always necessary to use the alternative of a record to store the data instead of a closure, but what this gains is quick access to what the thing actually encompasses
you don’t clutter - the record itself is something you can call if you implement IFn - or you just expect people to use a protocol method with it as the first arg (also fairly reasonable but less fancy)
@noisesmith sorry, I think I am missing something. Can you give me a code example of what you mean?
OK - I was just looking at this right now, sorry about the distracting details, but the big picture structure should be illustrative of what I am saying
(defrecord Transmitter [transmit from user-data creator to journey routing]
IFn
(call [this] (.invoke this [nil this]))
(run [this] (.invoke this [nil this]))
(applyTo [this coll] (.invoke this (first coll)))
(invoke [this [routing-override message]]
(let [updated (into this message)
routing (or routing-override routing (first journey))]
(.invoke this routing updated)))
(invoke [_ routing-override message]
(let [{:keys [transmit generic]} message
message (dissoc message :transmit :generic :routing :from :to)
{:keys [journey routing message]}
(if generic
{:journey [routing-override]
:routing :generic/reply
:message (assoc message :generic-raw [routing-override message])}
{:journey (rest journey)
:routing routing-override
:message message})
{:keys [request-id birth-time]} user-data
final-message (assoc message :journey journey :mediary to)]
(when-let [schema-error (check-schema from routing final-message)]
(log/error ::Transmitter
"for routing"
(pr-str routing)
(pr-str {:journey journey})
(pr-str schema-error)))
(log/trace ::simple-kafka-transmit routing "to" request-id "from"
birth-time "-" (pull-transmit-info message))
(transmit from routing final-message))))
the transmitter has all this incidental data - who is sending? who is the target? what data did the initiator of the request expect to get back with any responses? what is the path the overall task should take through the system?
But that’s basically the approach I was suggesting with a “DatomicWriter” protocol, no?
v1 wrapped this in calls to partial
the difference is that this returns an object that acts like a function
maybe I misunderstood what @(d/transact ...)
is in the transact-wrapper function
right, but so instead of doing something like
(transact transmitter conn tx-data)
you would do
(transmitter conn tx-data)
right
@noisesmith (d/transact ...) returns a promise I think, and @ dereferences it
and if you look at transmitter it shows you all the data it has inside
it acts like a hash-map
that’s the key thing to me
what was the question 🙂
@U0509NKGK Hi! 76 messages to read 😄
I want to add some attributes to every transaction in my sytem (e.g. for audit purposes; things like the current user ID) and I am wondering how I should do it
your question suddenly sounds very focused and pragmatic
e.g. should I wrap all access to Datomic behind a protocol that proxies most calls but does some stuff to transact’s tx-data before proxying
@noisesmith sorry 😄
we use an explicit function wrapping d/transact and d/transact-async
@U0509NKGK do you pass the auth context to all your “business functions” and then pass it explicitly to your function wrapping d/transact every time you call it?
no; we use middleware and binding
with a dynamic var
and if the var has a value, we annotate
we have repl helpers that do the binding so that we also use it when manually altering the db at the repl
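A rough sketch of that binding approach (all names made up; a real version would call d/transact with the annotated tx-data, but the annotation step itself needs no Datomic):

```clojure
;; If *current-user* is bound, tack an audit datom onto the transaction
;; entity; otherwise pass the tx-data through untouched.
(def ^:dynamic *current-user* nil)

(defn annotate-tx [tx-data]
  (cond-> (vec tx-data)
    *current-user* (conj {:db/id "datomic.tx"
                          :audit/user *current-user*})))

(binding [*current-user* :user-42]
  (annotate-tx [{:db/id "x" :person/name "Ada"}]))
;; => [{:db/id "x" :person/name "Ada"}
;;     {:db/id "datomic.tx" :audit/user :user-42}]
```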
do you somehow ensure that no intern inadvertently uses d/transact instead of your wrapper?
no. but in 5 years, it’s not been a problem. we had ONE case where someone retracted more than they should have. it was easy to fix
I do live in fear of an accidental d/delete-database though
Ok, thanks a lot. One other thing: how do you handle security? Specifically, do you use d/filter?
we can’t use d/filter ; we have too much sharing going on
d/filter is nice if you have very strict boxes. we don’t.
hah. try continuously
although I wonder what would happen if the db got deleted, re-created, then a backup was run
we replicate our backups to off-AWS places, so we have some protection against that
access control what, the backups, or the repl access?
oh, that’s basically normal queries in middleware
@U0509NKGK middleware as in you filter the query after executing it?
no, simpler than that - we explicitly use the viewing user in queries
very often we use datalog to find valid entities, map d/entity, and go from there
sometimes we d/entity on a lookup ref, query to validate access, and continue
right, what I mean is that d/entity will let you traverse the data tree without access control
I am using GraphQL on my app and d/entity could cause issues if I tried to use it directly
as it could potentially go any level deep in the data tree, reaching data that should not be accessible to the current user
oh right, yes. we totally control the query. we don’t allow arbitrary query from clientside
in that case, d/filter is a far safer approach
@U0509NKGK ok, thanks. Sorry, my questions are not very clear today
it’s ok 🙂 hth
A link to an open-source application that you consider well-structured would be helpful, or advice / link to blog posts / resources
@hmaurer have you seen the docs for re-frame? He talks extensively about CQRS-y architecture (even though it’s cljs not clj) https://github.com/Day8/re-frame
There’s a pretty detailed readme doc right up front, but don’t be fooled, the real meat is in the docs/ folder
@manutter51 ah, let me check it now, thanks! By the way, something I should have specified: I am looking for a way to structure my code while keeping it fairly straightforward. E.g. no crazy async at the moment; ideally I would like to keep everything synchronous
When I do some calculation with pmap and want to use another pmap on the result, how can I wait until the first pmap is done?
To clarify: your intention is to load the entirety of the output of the first pmap into memory, then consume it once it's realised?
If that's the case simply wrap the first pmap in a doall
(pmap #(println "b" %)
(doall (pmap (fn [x] (println "a" x) x)
(range 1000))))
Thanks, that helped me
Does anyone know of a clean way to assoc in a field only if the field isn't null; otherwise return the map unmodified?
Thanks, @dpsutton
merge will overwrite existing keys in the original map... this may (or may not) be the expected behaviour.
I'm aware. That is what I want. It's just an optional field that should be tacked on only if it's not nil. (An authorization http header)
right 🙂
I don't suppose there's a merge that would do (merge {:a 1} {:b nil})
and return {:a 1}
?
if it's just top level keys you could (into {} (map (fn [[k v]] (when v [k v])) my-map))
would probably work
All great ideas.
I've gone with (merge xs (when a {:a a}))
.
Sorry, I'm mixing names. xs
doesn't have a :a
yet. There's no possibility of clobbering.
@rauh I quite like that one. :thumbsup:
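Another stdlib option worth noting (my addition, not from the thread): cond-> threads through the assoc only when the test is truthy.

```clojure
;; assoc k v only when v is non-nil, otherwise return m unchanged.
(defn assoc-some [m k v]
  (cond-> m (some? v) (assoc k v)))

(assoc-some {:a 1} :auth nil)      ;; => {:a 1}
(assoc-some {:a 1} :auth "token")  ;; => {:a 1, :auth "token"}
```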
Is there a better way to do (symbol *ns* 'symbol)
?
Huh, nevermind. Apparently I can do syntax quoting for the dispatch value of a multimethod. Not sure why I thought I couldn't earlier.
@ghopper Not sure if it would help, but https://github.com/nathanmarz/specter has some really nice 'navigators' to do all kinds of transformations on data structures. (Maybe not worth the effort for easy navigation, although I think it may be if you have at least 2 steps to navigate, including predicates 🙂 )
@kurt-o-sys Huh, that's cool. Definitely overkill for this though. 🙂
@ghopper for associng a value if not null, with specter it's (setval [:a some?] my-value amap)
@kurt-o-sys are those basically lenses?
for associng only if field exists, it would be (setval (must :a) my-val amap)
will be way more performant than merge as well
I didn't use specter much yet - only found about it about a month ago - but it's definitely cool. Also for small cases.
Style question: say I have three items (def a ["a"]) (def b [["b" "c"] ["d" "e"]]) (def c "f")
, and I want to put them into a list like so:
`[~a ~@b ~c]
but without using syntax-quote/unquote/splicing-unquote... how would you do it as concisely as possible (without losing generality)? The middle one is unpacked
one option is (concat [a] b [c])
- I admit that looks weird
@noisesmith actually, that's not nearly as bad as what I was doing
Does suck a bit that you have to pack the elements you don't want unpacked just to...well...yeah
@jballanc depending on your actual problem, you could use something like (defn prepacked [x] (if (and (coll? x) (vector? (first x))) x [x]))
(mapcat prepacked [a b c])
hah...yeah, that probably won't work because some of these are multiple levels nested, and only some of the levels need to be unpacked
my personal preference is to "say what you mean" with syntax-quote, but understandably the team is worried about maintainability...
=> (def Person {:person-id "person-1" :category "customer" :purchase ["p1"] :dates ["d1"]})
#'user/Person
=> (update :purchase Person [])
IllegalArgumentException Key must be integer clojure.lang.APersistentVector.invoke (APersistentVector.java:292)
(and you want assoc not update, but go ahead and read the docs for both assoc and update)
by docs I meant the docstring you access using doc
in the repl, but you may also want to start here https://clojure.org/reference/data_structures#Maps
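For reference, what the two docstrings boil down to with the map above: the map comes first, and update takes a function to apply to the old value, where assoc takes the new value itself.

```clojure
(def Person {:person-id "person-1" :category "customer"
             :purchase ["p1"] :dates ["d1"]})

;; assoc replaces the value outright.
(:purchase (assoc Person :purchase []))          ;; => []

;; update applies a function to the old value.
(:purchase (update Person :purchase conj "p2"))  ;; => ["p1" "p2"]
```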
Hi! I am getting a cryptic error which I do not understand. Here’s the piece of code: https://gist.github.com/hmaurer/8e786bfd507798393c8be45ffb3a1b46. The error is in the comments. Could someone take a look please?
The error is Can't let qualified name
, but I don’t get why the gensym’ed variable gets qualified by the syntax quote
I would double check to make sure the code that is being run actually matches the code you are reading (restart your jvms) and then I would suspect this is actually coming from some other macro
I would put a println at the start of the macro to print out ctx-binding because I suspect it is already being passed in a fully qualified symbol
which matches how I am using the macro, e.g.
(deftest test-authentication
(with-scratch-ctx ctx
...
Still throwing :cause "Can't let qualified name: wef-backend.test-util/conn# "
though
I never got that error with uri# or datomic# by the way, it only occurred when I added conn#
you could fix it immediately by replacing most of the macro with a function that returns the context, and have the macro expand in to invoking that and then deleting it
(let*
[uri__16604__auto__
(clojure.core/str "datomic:mem://hello-test-" (java.util.UUID/randomUUID))
datomic__16605__auto__
(integrant.core/init-key :wef-backend/datomic {:uri uri__16604__auto__})
wef-backend.test-util/conn#
(:conn datomic__16605__auto__)
ctx
{:auth nil :conn conn__16606__auto__ :db (datomic.api/db conn__16606__auto__)}]
(try (+ 1 2) (finally (datomic.api/delete-database uri__16604__auto__))))
I just checked and none of the characters from the snippet I copy/pasted had character 160
there is actually an open jira issue that has had some recent activity about changing how clojure handles unicode whitespaces
Probably what happened is that OS X and/or Atom (the editor I am using) has some shortcut or some way to enter this type of whitespace, and I fat-fingered the shortcut
If Clojure’s reader threw an error saying “unknown character at position X” it would have been easy/easier to debug, but it considered it as part of the symbol
wef-backend.core=> (def 42)
#'wef-backend.core/
wef-backend.core=>
42
wef-backend.core=>
wef-backend.core=> (def hello world this is a long variable 99)
#'wef-backend.core/hello world this is a long variable
wef-backend.core=>
What is a fast way to do this without repeating the nested path?
(-> state
(assoc-in [:a :b :b/field0] "")
(assoc-in [:a :b :b/field1] "")
(assoc-in [:a :b :b/field2] ""))
Fast as in, using standard library. I don't want to add any external libs or anything.
Performance isn't a concern.
Sorry, I shouldn't be using that word. 😛
(let [nav [:a :b]]
(-> {:a {:b {}}}
(assoc-in (conj nav :b/field0) "")
(assoc-in (conj nav :b/field1) "")))
{:a {:b {:b/field0 "", :b/field1 ""}}}
@dpsutton That's what I thought I was going to need to do. @bfabry I'm not sure why I didn't think of update-in... That should work nicely.
Now the question is update-in with assoc or merge. Is there any performance difference?
I figured they would be. Anyways, thanks all!
merge is implemented in terms of conj
so yes I'd say identical https://github.com/clojure/clojure/blob/clojure-1.9.0-alpha14/src/clj/clojure/core.clj#L3022
(let [nav [:a :b]]
(reduce (fn [m [k v]] (assoc-in m (conj nav k) v))
{:a {:b {}}}
[[:b/field0 ""]
[:b/field1 ""]
[:b/field2 ""]]))
{:a {:b {:b/field0 "", :b/field1 "", :b/field2 ""}}}
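And the update-in + merge variant mentioned above, for comparison:

```clojure
;; One update-in at the shared path, merging all the fields at once.
(update-in {:a {:b {}}} [:a :b]
           merge {:b/field0 "" :b/field1 "" :b/field2 ""})
;; => {:a {:b {:b/field0 "", :b/field1 "", :b/field2 ""}}}
```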
@hmaurer You mean the jar files?
jar files are just zip files and you can unzip them and the .clj files will be inside, if that matters to you
@hmaurer lein runs 2 jvms, lein's jvm and your project's jvm, the clojure runtime that lein uses is part of the lein jar and lives in ~/.lein, the clojure runtime for your project is Just Another Jar(tm)
lots of interesting jar files in ~/.m2/repository
too
a silly way to do it
=> (into #{} (comp cat (take 2)) (clojure.data/diff #{1 2 3} #{2 3 4}))
#{1 4}
is it possible to use clojure.java.shell to invoke a shell command like tr < infile.txt -d '\000' > outfile.txt
?
@aaelony in order to use things like <
you need to invoke /bin/sh
clojure.java.shell/sh, despite the name, is a raw system command that doesn’t use sh
of course that’s not portable, and then you’ve created code that only works on a *nix system or reasonable facsimile
yeah, @noisesmith there is a thorny translation that tr
handles well at the command line, which I'd like to shell out to from within a clojure program (that does a whole laundry list of things). Haven't yet found the right syntax for calling tr
via clojure.java.shell/sh though
@aaelony you send a normal shell command to sh (clojure.java.shell/sh "/bin/sh" "-c" "tr < infile.txt -d '\000' > outfile.txt")
wow, thank-you. That works. I tried a lot of other combinations but without the "/bin/sh" "-c"
right, sh -c says “find and run the sh executable, tell it to run this”
that class of example should probably be added to the other examples (https://github.com/clojure/clojure/blob/master/src/clj/clojure/java/shell.clj#L130-L142)
np, glad I could help, I wonder if it would be worth submitting a patch to JIRA for adding another println to that comment block
@aaelony there is an example of using sh -c as an arg to sh on the clojuredocs page https://clojuredocs.org/clojure.java.shell/sh
true enough, perhaps a few words there describing why it is useful/needed would work too
@noisesmith thanks for your help yesterday, you were correct !!!!