This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-02-28
Channels
- # aws (7)
- # beginners (69)
- # boot (67)
- # cider (9)
- # cljs-dev (159)
- # cljsrn (2)
- # clojars (25)
- # clojure (345)
- # clojure-austin (9)
- # clojure-berlin (1)
- # clojure-dusseldorf (10)
- # clojure-italy (3)
- # clojure-nl (1)
- # clojure-portugal (1)
- # clojure-spec (73)
- # clojure-uk (59)
- # clojurescript (163)
- # clojurewerkz (1)
- # component (26)
- # core-matrix (2)
- # cursive (20)
- # datascript (32)
- # datomic (15)
- # dirac (16)
- # emacs (3)
- # hoplon (35)
- # jobs-discuss (87)
- # jobs-rus (95)
- # luminus (15)
- # om (135)
- # om-next (3)
- # onyx (47)
- # pedestal (67)
- # perun (74)
- # play-clj (4)
- # portland-or (1)
- # proton (4)
- # re-frame (13)
- # reagent (18)
- # remote-jobs (17)
- # rum (20)
- # specter (11)
- # untangled (101)
- # yada (18)
i stumbled across some code that uses arity overloading on fn without defn… i forgot that was even possible - does anybody actually make consistent use of that? i’d be curious to hear about it
they so desperately wanted the . operator to compose lenses, that it’s really difficult to tell wtf is happening
i realize that it’s too late to make clojure.core/comp defined on a protocol, but coulda just called it pipe or something
but thanks for the thoughts 🙂 appreciated b/c i was totally failing to recall those things
well it’s not so much overloading as it is bending the abstraction to match the existing signature so that overloaded behavior can be accomplished without overloaded dispatch
It also has to do in part with the way conj (and + ) works
@kgofhedgehogs in an effort to use core libs at all cost 😉 , presenting another solution for your "fill with zeroes":
(defn zero-vec
[v]
(vec (for [e v]
(if (vector? e)
(zero-vec e)
0))))
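For reference, a quick REPL check of the sketch above (input shape assumed: arbitrarily nested vectors of numbers; definition repeated so the example is self-contained):

```clojure
;; replace every non-vector element with 0, recursing into nested vectors
(defn zero-vec
  [v]
  (vec (for [e v]
         (if (vector? e)
           (zero-vec e)
           0))))

(zero-vec [1 [2 3] [4 [5]]])
;; => [0 [0 0] [0 [0]]]
```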
right, but + has always worked that way: (+) => 0, (+ 1) => 1, (+ 1 2) => 3
because of that, reduce was "updated" (renamed transduce) to call (x) and (x acc), and the rest kind of fell out of that.
I agree, I'd prefer protocols, but from what I can tell, that was the development path
i’m not sure i follow - can you pass a transducer “xform” as the “f” parameter to transduce?
the arity overloading of the xform arg is unrelated to the arity overloading to the f arg, or am i mistaken about that?
@bbloom the first thing transduce does is (xform rf)
and then uses that as the rf in a reduce
the function returned by xform and the 'f' need to match, because xform is X -> X and its argument is 'f'
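A small sketch of that mechanism: the transducer wraps the reducing function before any reducing happens.

```clojure
;; (map inc) is X -> X: it takes a reducing function and returns one
(def rf ((map inc) +))

;; the wrapped rf behaves like +, but incs each input first
(rf 10 5)
;; => 16, i.e. (+ 10 (inc 5))

;; transduce does exactly this wrapping, then runs an ordinary reduce
(transduce (map inc) + 0 [1 2 3])
;; => 9
```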
using existing transducers is considerably easier, but not without similar failure modes
well, the lack of reification means the parts don't come with ready made labels, so it is hard to talk about them
it's a bit of a learning curve....I didn't understand them until I got a 3hr brain-dump from Rich on them.
"hey, I heard about transducers, they seem cool; but I can't find them, just a bunch of function composition"
@tbaldridge did you reify a protocol in pixie? i seem to recall you did “transducers at the bottom"
@pandeiro If you’re using log4j you can change the reporting threshold for the namespaces in that library via log4j.properties.
thanks @seancorfield - hadn't included a logging lib dependency but i will try that route
I can’t remember what tools.logging uses by default but I suspect you can control that too.
@qqq in the second example ?title is a collection of values that form a logical "or"
@tbaldridge: thanks; realized that in a later part of the datalog query language
in datascript/datomic, in a
(d/q '[:find .... :in .... :where .... ] db)
is there a way to run a "project" over the results? or do I have to call map after getting the queries?
in terms of performance, how “negative” is using partial over #( … )? or is the difference negligible?
depends, with partial as soon as you go over 3 args it calls apply, otherwise it's just as fast if not faster
so it's down to personal preference, really?
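A sketch of the equivalence being discussed; partial has dedicated arities for up to three captured arguments, beyond which it falls back to apply:

```clojure
;; both produce a function that adds 3 to its argument
(def f1 (partial + 1 2))
(def f2 #(+ 1 2 %))

(f1 10) ;; => 13
(f2 10) ;; => 13

;; with more than three captured args, partial goes through apply
(def f3 (partial + 1 2 3 4))
(f3 10) ;; => 20
```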
Having this code
(def n (rand))
(defn calc [v] (* v v))
(def do-iter (atom true))
(defn my-loop []
(when @do-iter
(Thread/sleep 1000)
(println (str "calc: " (calc n)))
(recur)))
(def f (future (my-loop)))
@tsulej, my-loop is "redefined", but the new function is not used by the thread currently running (calling the function previously bound to my-loop). You can extract the meat out of my-loop into a new var, e.g. my-fn, and call that from my-loop.
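A sketch of that extraction, using the names from the discussion (my-fn is the hypothetical var holding the redefinable logic; recur keeps the stack flat while the var indirection through my-fn picks up REPL redefinitions):

```clojure
(def n (rand))
(def do-iter (atom true))

;; redefine this at the REPL to change behaviour while the loop is running
(defn my-fn [v] (* v v))

(defn my-loop []
  (when @do-iter
    (Thread/sleep 1000)
    (println (str "calc: " (my-fn n)))
    (recur)))  ; recur doesn't grow the stack; my-fn is looked up via its var

(def f (future (my-loop)))
```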
@tsulej Use (my-loop) instead of (recur), though be aware that this will grow your stack
@mpenet the idea is to enable live coding, I want to rebind/change values externally
@henriklundahl right, good advice, thank you
@rauh I believe it's the opposite. (recur) in tail position is optimized by Clojure ("Note that recur is the only non-stack-consuming looping construct in Clojure.")
If you use my-loop instead of recur, you can change the behaviour during "runtime" but you'll be growing the stack every second.
@tsulej Happens 🙂 . If you want to avoid that (ie growing your stack) you can pack the future inside of the my-loop function.
I considered this solution too, but my function is called 25 times per second and running a new thread for every call is too much overhead (I suppose)
but the solution given by @henriklundahl is enough for my case
anybody from China? I wanted a small test done: is the following accessible: http://www.materiall.com? FYI this is a full stack Clojure app
pls ignore, i got a reply to my query
@compro if it's the http://clojure.org site, this is the repo: https://github.com/clojure/clojure-site
@compro https://github.com/clojuredocs is where the repos live I think. I don't think it's an official Clojure project out of Cognitect, but one of the Cognitect team is in the GitHub org.
I'd guess this is the file you need: https://github.com/clojuredocs/guides/blob/master/articles/tutorials/emacs.md
Was following the page and now I am stuck at lein test, which doesn't seem to run the tests successfully and prints a long output.
java.lang.Exception: No namespace: command-line-args.core
likely means you've got a file missing, named incorrectly, or haven't required it.
If that fails, paste the results of find . -name '*.clj' in the project.
./project.clj
./src/command_line_args/core.clj
./test/command_line_args/core.clj
./test/command_line_args/core_test.clj
Files in both src and test will be considered by Clojure/Leiningen when you require something. The src and test bits are left off, resulting in two files that could be loaded when you require command-line-args.core.
Got it. But now having new issues.
lein test command-line-args.core-test
lein test :only command-line-args.core-test/pairs-of-values
FAIL in (pairs-of-values) (core_test.clj:9)
expected: (not (= {:server "localhost", :port "8080", :environment "production"} (parse-args args)))
actual: (not (not true))
Ran 1 tests containing 1 assertions.
1 failures, 0 errors.
Tests failed.
./src/command_line_args/core.clj:
(ns command-line-args.core)
(defn parse-args [args]
(into {} (map (fn [[k v]] [(keyword (.replace k "--" "")) v])
(partition 2 args))))
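For reference, what that parse-args produces (definition repeated so the example is self-contained):

```clojure
;; pair up args, strip the "--" prefix from each key, keywordize it
(defn parse-args [args]
  (into {} (map (fn [[k v]] [(keyword (.replace k "--" "")) v])
                (partition 2 args))))

(parse-args ["--server" "localhost" "--port" "8080"])
;; => {:server "localhost", :port "8080"}
```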
./test/command_line_args/core_test.clj:
(ns command-line-args.core-test
(:require [clojure.test :refer :all]
[command-line-args.core :refer :all]))
(deftest pairs-of-values
(let [args ["--server" "localhost"
"--port" "8080"
"--environment" "production"]]
(is (not (= {:server "localhost"
:port "8080"
:environment "production"}
(parse-args args))))))
@compro fixing the indentation and using clojure.core/not= makes it a bit easier to see what the test is trying to do:
(deftest pairs-of-values
(let [args ["--server" "localhost"
"--port" "8080"
"--environment" "production"]]
(is (not= {:server "localhost"
:port "8080"
:environment "production"}
(parse-args args)))))
Why are you testing that the value is not equal? Don't you want it to be equal?
And check out https://github.com/clojure/tools.cli 😉
Hi, I'm looking for a way to parallelise I/O while consuming a lazy-seq. My current implementation looks like this
(doseq [element lazy-seq]
(some-io element))
Is there a simple way to parallelise this operation without realising the entire seq or producing a seq (throw any return values away), that is safe to perform side effects in?
Simply put, a parallelised doseq
Is core.async a good option for this?
@mbutler no, core.async has nothing to do with parallelism; rather, it is concerned with making asynchronous events appear synchronous. When you say “without realizing the entire seq” — what determines, in your case, how much of the sequence you want to realize?
I guess i would want to realise each element as it is consumed
and produce nothing as a result, similar to the behaviour of doseq. It can be slightly eager; each element is not large but the total sequence may be millions in length
you seem to be wanting to make a sequential decision (as it’s consumed), while executing things in parallel ...
Yes i had considered pmap but unfortunately it builds a seq out of the return values of each iteration and returns it
leaving me with a giant seq in memory, which i was hoping to avoid
I don't know how much memory a million element seq of nils uses
or if it's a problem
ok, then consider a java.util.concurrent executor service:
(import '(java.util.concurrent Executors))

(def lazee-seq (range 8))
(def tp (Executors/newFixedThreadPool 8))

(defn do-some-work [n]
  (Thread/sleep 1000)
  (println "Done with task " n))

(run! (fn [element]
        (.submit tp #(do-some-work element)))
      lazee-seq)
That looks pretty good 🙂
so you might accumulate all those closures in the executor’s queue if the lazee-seq evaluates much faster than the processing
Is this a problem because there is a limit to the size of the queue? @dm3
The I/O work (http request) is what takes the majority of the time currently, which is why i was hoping to parallelise it 🙂
(I’m not a big lazy seq producer, maybe there’s a way to do that properly with them too :))
I do that quite often with core.async: an input (chan) + a dotimes over N go blocks for parallelism that take on the main chan from these blocks, and an output chan for results. It's kind of a "light" version of core.async/pipeline(-async). You get backpressure, bounded buffers, low resource usage, etc.
@mpenet that sounds like roughly what I imagined
the bookkeeping when disposing of the whole pipeline is important tho, be aware of that
like closing chans with pending takes and so on (or not continuing to send jobs on closed chans)
I actually have an example of this in an oss lib https://github.com/mpenet/spandex/blob/master/src/clj/qbits/spandex.clj#L390
I will give that a read with eager eyes 🙂
Thanks a bunch
:thumbsup:
I think pipeline-blocking is a perfect match for this. https://clojuredocs.org/clojure.core.async/pipeline-blocking. Put each element from the doseq onto the from channel and use a sliding-buffer or dropping-buffer as the to channel
Something like
(pipeline-blocking 10 (chan (dropping-buffer 1)) (map some-io) input-chan)
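Expanding that one-liner into a runnable sketch; some-io here is a stand-in for the real blocking I/O call, and the channel sizes are illustrative:

```clojure
(require '[clojure.core.async :as async])

;; stand-in for the real blocking I/O operation
(defn some-io [x]
  (Thread/sleep 10)  ; pretend this is an HTTP request
  x)

(def input  (async/chan 10))
(def output (async/chan (async/dropping-buffer 1)))

;; 10 worker threads pull from input, run some-io; results are dropped
(async/pipeline-blocking 10 output (map some-io) input)

;; the producer blocks on >!! when the pipeline is saturated: back pressure
(doseq [e (range 100)]
  (async/>!! input e))
(async/close! input)
```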
@tap awesome, ill give this a go too, thanks :thumbsup:
@mbutler was playing around a little more with this — while core.async gives you the backpressure support you need, it also adds a lot of overhead for what you’re trying to do, which is just limit the producer. Just dug this up I used a month ago or so for the same purpose, shown as an example here:
(import '(java.util.concurrent Semaphore))

;; tp is the fixed thread pool executor from the earlier example
(let [sema (Semaphore. 50)]
  (doseq [e (range 100)]
    (.acquire sema)
    (println "permits remaining: " (.availablePermits sema))
    (.submit tp #(do
                   (do-some-work e)
                   (.release sema)))))
@tap wouldn't a sliding buffer cause a slow down upstream to make entries start dropping?
dominicm: yeah, you’re right. The operation in sliding buffer is unnecessary. Dropping buffer is better
this lets you still use an unbounded queue in the executor if you want, but limits the number of items in the queue to 50, as it will wait to acquire the semaphore before submitting the task. then, in the task itself, release the semaphore. works great I think for what you need, lets you still doseq over the elements, and is far simpler
That does seem nice and simple, really appreciate you playing around, ill give that a go 🙂 :thumbsup:
@mbut: when working with core.async + real back pressure don't forget buffered channels have a limit after which they'll start to throw on put!s: https://github.com/clojure/core.async/blob/master/src/main/clojure/clojure/core/async/impl/channels.clj#L152-L156
no worries — dealing with j.u.c primitives will turn off some who want as little java interop as possible, but j.u.c has an amazing set of easy to use tools that are faster and more flexible than what clojure offers in its core library
by "real" back pressure I mean data that is consumed from other systems (i.e. message queue, topic..)
@tolitius yes, but he would likely be >!!-ing and >!-ing, parking and blocking, instead of async put!ting, I would imagine … should not run into that; if so, an error in construction
core.async won't help you with the real network back pressure which is built in in TCP/IP..
@joshjones I think people fear the j.u.c primitives as they're not blessed as "safe" necessarily. I know I fear using anything except the occasional atom.
@joshjones Most concurrency primitives (in life, not necessarily just in j.u.c) I've come to consider as tools with many trade-offs. Locks are one such thing, fine if always used in X way. If those trade-offs aren't made extremely clear, then they tend to be used incorrectly.
well, clojure implements most/all of its safe concurrency features right on top of j.u.c … for example, clojure.lang.Atom has a single field, state, which is a java.util.concurrent.atomic.AtomicReference
@joshjones: perhaps. I just find it quite often to be a misconception when talking about core.async and back pressure. network is a really important part in things we build, but since "back pressure" term is thrown around, it is always good to keep in mind the "real" back pressure (i.e. blocking, congestion control, etc.)
but I understand what you mean — a layer of abstraction removes some uncertainty and adds convenience @dominicm
@joshjones also the blessing from Rich makes me feel (perhaps incorrectly) like they're "safe" to use. They've been given the simplicity stamp of approval.
I didn't know that about atom @joshjones, that's great 😆
Thanks for the advice @tolitius, ill bear that in mind if i have reason to use core.async 🙂
yes, I get it — but concurrent programming of any kind requires a special attention to its logic, or it quickly can become a nightmare, so best to be careful always 🙂
can also highly recommend http://jcip.net/ which should alleviate some of the fear you might have about the j.u.c 🙂
@tolitius I think people often misunderstand that limit of 1024 pending puts....
it has nothing to do with the buffer size, it's the number of threads that can be in a parked/blocked state on the channel, and it's there because in most (all?) other queues, blocking puts/takes are an unbounded queue.
There's nothing stopping you from doing (chan (* 1000 1000 1000)) and then having one billion go blocks put one billion items into the channel.
it's just that you can't have more than 1024 blocks parked at one time. And the argument could be made that if you are trying to do something like that you have a design flaw in your program
the only time I’ve seen someone hit that is when their program was broken to begin with
There's a great talk by @ghadi about the trade-offs and pitfalls of core.async. It has very few views, which is a real shame because it's some of the best stuff you'll find on the Internetz (aside from the excellent material coming from @tbaldridge). https://www.youtube.com/watch?v=DPpEZ3_XowU
Would anyone perhaps know of a good knowledgebase/article/howto system built on Clojure?
eslachance: Here's one: https://github.com/danielsz/system-dependency-injection/tree/master
Hmm. Well I'm more looking for something with a front-end, if possible.
Thanks for the suggestions guys, I'll look into it!
Open-source if possible obviously so I can muck around in the code 😉
=> (def c (async/chan))
=> (doseq [i (range 1024)] (async/put! c i))
=> (async/put! c 42)
java.lang.AssertionError: Assert failed: No more than 1024 pending puts are allowed on a single channel. Consider using a windowed buffer.
(< (.size puts) impl/MAX-QUEUE-SIZE)
a windowed buffer would "work" by dropping messages once the 1024 pending "parks" is reached.
I see three issues with it:
1. Naming: "put" is overloaded with "put into the channel's buffer" and "put into the park queue"
1.1 Error message suggests that the buffer that is currently used does not work (it throws with this buffer, consider using another buffer)
2. 1024 is an arbitrary number. Why can't I have more than 1024 pending jobs parked? Saying that if I do have more, I am doing it wrong is.. wrong. Although I certainly may be doing it wrong.
3. In order to avoid the #2 limitation I usually build my own "throttle" mechanism, otherwise the high frequency data, which has the luxury of using the true TCP/IP congestion control, falls apart (`Assert failed: No more than 1024 pending puts are allowed`) once it comes in to the system.
If 1024 is not the number, what should it be? It seems that if one is doing lots of async puts onto a channel whose buffer is full, then there is an issue that needs to be dealt with
I suppose one answer could be, “you can’t put! on a channel whose buffer is full, at all"
1024 was considered to be high enough, that if you see it, you are probably doing something wrong
@joshjones yes, the way tcp/ip deals with this issue it lets the producers know to slow down and the congestion window becomes much smaller, and then grows slowly
@alexmiller that could be true, but this is hardly an argument for 1024. what is this number based on?
it is not based on anything
it was considered “sufficiently big” to proceed with working on the rest of the implementation and deferring further work on it
the thought being that with time we would have more experience with whether that number is good/bad/or should be configurable
certainly there is a strong belief that there should be a limit as unbounded queues are bad
I agree, it does make sense to deal with the unboundedness of the queue. making it configurable with a good default (seems like 1024 works? for most) would probably be better. another approach, which would be really useful, but I think would not fly with the async nature of core.async and its fixed thread pool size, is to (have a config to) block, throttle when the queue limit is reached.
which (in case of the external systems) would delegate the back pressure to the network
but what would you block? put! is async
Right, async tcp plus core.async seems like a good fit.
> put! is async
it is until it throws. so it could block instead when the MAX-QUEUE-SIZE is reached. not by default of course, but something like:
(async/throttle number (async/put! is async))
just thinking out loud. I build my own threshold and I stop putting things on a channel when N jobs are pending there (usually based on file handles and other things). But working with data that is coming in from the network it proved to be really useful (to me).
I think making put! blocking breaks the rest of the design
right, that is why "I think it would not fly with the async nature of core.async and its fixed thread pool size" 🙂
if you want blocking/parking, use >!!/>!
if you want feedback without blocking, you can use offer!
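A small sketch of that offer! behaviour: it never blocks and never queues a pending put, it just reports whether the value was accepted.

```clojure
(require '[clojure.core.async :as async])

(def c (async/chan 1))

(async/offer! c :a) ;; => true, the buffer had room
(async/offer! c :b) ;; logical false: buffer full, nothing was queued
```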
@hiredman I am not, they are recommended https://github.com/clojure/core.async/wiki/Go-Block-Best-Practices
So if you want 10000 tcp connections why not use a fixed size buffer?
Put is pretty much the only good way to interface with async libs
Sure if you want to hang the async libs dispatch thread
No you can limit input as well. Think of a callback based async http lib.
@hiredman >! has the same MAX-QUEUE-SIZE limitation that put! does
@alexmiller yes, I do usually use >!! to consume from external systems. however the problem is not simply consuming messages from the pipe, but also doing the additional work within go blocks down the road. those go blocks (loops) read as fast as they can from these channels.
@danboykis it suggests put! as a replacement for (go (>! ch x)), but once you have a go block like that you have already lost
hiredman if I am understanding you correctly you can create a new chan, do a put! to it in the callback, and then read from said channel using <!
put! is a low level thing, and when you use it you are saying "hey, I am going to ignore the feedback mechanism that comes with channels because I have thought this through" and that is basically never a good default state
@alexmiller the problem comes down to marrying a "non async" consumption (read from a message broker, a tcp socket, etc..) with the system that internally uses go blocks. throttling is one way to deal with it. there could be better ways of course.
@danboykis >!! is not put!
Wat no it's not. Put is just a channel op plus a callback
Sure it will
And the callback will be executed when it completes
It's all the same code just a different ui
So pick the cleanest ui
sure, you can build something that communicates back pressure on top of put!, but it is easier to actually use the existing constructs that communicate back pressure, and I have yet to see anyone actually communicate back pressure when using put!, they just put! in their callback and move on
for the naive case, when you haven't thought about how back pressure is handled, you will get a better behaving system using the blocking/parking ops. Assuming you don't do something silly like (go (>! ...)) which, again, abandons backpressure
here's some pseudo code
(let [c (a/chan)] (a/go (http-call-with-callback #(a/put! c %)) (a/<! c)))
you have a core.async system, you want to pump messages in to from a message queue. you set up a thread that pulls messages from the queue and pushes them in to the core.async system
using put! in that thread you will hit the 1024 limit; using >!! the thread pulling messages will be blocked until the message is "in" the core.async system
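A sketch of that bridge; take-message and backlog below are hypothetical stand-ins for a blocking message-queue client:

```clojure
(require '[clojure.core.async :as async])

;; hypothetical stand-in for a blocking message-queue take;
;; returns nil once the backlog is drained
(def backlog (atom (vec (range 10))))
(defn take-message []
  (let [msgs @backlog]
    (when (seq msgs)
      (swap! backlog subvec 1)
      (first msgs))))

(def in (async/chan 4))

;; one real thread pulls from the queue and pushes into core.async;
;; >!! blocks this thread when `in` is full, giving back pressure for free
(async/thread
  (loop []
    (if-let [msg (take-message)]
      (do (async/>!! in msg)
          (recur))
      (async/close! in))))

;; drain the channel on the consuming side
(def result
  (async/<!! (async/go-loop [acc []]
               (if-some [v (async/<! in)]
                 (recur (conj acc v))
                 acc))))
;; => [0 1 2 3 4 5 6 7 8 9]
```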
Or leverage the message queue's ack API to not have more than X message in flight. Or only ack it (or request the next message) inside the put!s callback
No reason to tie up a OS thread for that.
But it does depend on the MQs interface
I am saying if you do something without completely thinking it through (which as programmers we never do, of course) >!! is more likely to result in a better behaved system
Clojure: begging people to think about their software designs since 2007
@tbaldridge
> Or leverage the message queue's ack API to not have more than X message in flight
yep, that is one way to throttle, but it relies on the ack / X messages in flight feature of the producer. sometimes this is just a tcp socket, for which a throttle can also be built once the packets are defragmented.. but that requires a whole other layer of throttle. It would be really helpful if core.async had an optional throttle built in, since it knows "what the message is" + "what the limit is"
To the original question though every time I've hit the 1024 issue, it's been because of a lack of back pressure or a queue size that's undefined
ztellman had an alternate version: https://gist.githubusercontent.com/ztellman/fb64e81d1d7f0b261ccd/raw/9478c90957ce431b3ead716ca01404c425d68285/foami.clj
sure, which is sort of similar to how >!! builds a back pressure mechanism on top of put!
So I don't get it, aren't tcp sockets buffered with fixed size buffers. If you stop reading when a channel is full, won't that push pressure back onto the other end of the tcp socket?
yes, assuming the pile of abstractions between you and tcp doesn't break that somehow
@tolitius I must be missing something here, how are you hitting 1024 puts if you are properly handling backpressure with put!
https://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0217.html apparently at some point some browsers didn't properly communicate back pressure for websockets
Seems like >!! or put + nio select should work
Yeah httpkit is ugly that way too.
@tbaldridge I am not hitting 1024 since I don't put more than X number of messages on the channel (I throttle it). But without throttling there are ways:
socket => >!! => (go-loop [msg (<! ..)] ... (ask-another-system-something-async callback))
in case a producer (stream from a socket) is faster than the "another-system", the jobs queue size grows.
So stop reading from the socket?
Hi, got another thing where I'm sure I'm just missing an obvious function from the standard lib. I have a seq of pairs [[:a 1] [:a 2] [:b 3] [:c 4]] and I want to construct a map from the first value to a set of the second values: {:a #{1 2} :b #{3} :c #{4}}. I thought (zipmap) might do it but it doesn't seem quite right. I can get it working with loop/recur, but I'm wondering if someone can help me find a more simple/elegant solution
Ah, I'm thinking merge-with and conj might be the way
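The fnil/reduce idea referenced later in the thread can be sketched like this (value-sets is just an illustrative name):

```clojure
;; build a map from first elements to sets of second elements;
;; (fnil conj #{}) supplies an empty set the first time a key is seen
(defn value-sets [pairs]
  (reduce (fn [m [k v]] (update m k (fnil conj #{}) v))
          {}
          pairs))

(value-sets [[:a 1] [:a 2] [:b 3] [:c 4]])
;; => {:a #{1 2}, :b #{3}, :c #{4}}
```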
Well, it's not remotely pretty, but I did come up with a (group-by) solution...
(defn value-sets
  "Given a seq of pairs, return a map from the first element of each pair to a set of
  all the last elements.
  (value-sets [[:a 1] [:a 2] [:b 3] [:c 4]])
  ;; => {:a #{1 2} :b #{3} :c #{4}}"
  [pairs]
  (let [pair-values (fn [pairs]
                      (into #{} (map last pairs)))]
    (as-> pairs $
      (group-by first $) ; {:a [[:a 1] [:a 2]], :b [[:b 3]], :c [[:c 4]]}
      (zipmap (keys $)
              (map pair-values (vals $))))))
@timgilbert not standard lib but for all group-by related stuff there's my xforms lib: (into {} (x/by-key (x/into #{})) your-coll)
Oh, that looks awesome @cgrand, will look into it. And it's cross-platform too! Thanks!
@jr anytime you have the pattern (reduce (fn [m [k v]] ...)) you probably want to use (reduce-kv (fn [m k v] ...)) instead
(->> [[:a 1] [:a 2] [:b 3] [:c 4] [:c 4]]
     (group-by first)
     (map (fn [[x y]]
            [x (set (map second y))]))
     (into {}))
;; => {:a #{1 2}, :b #{3}, :c #{4}}
(just a random performance tidbit)
Thanks for the suggestions, these are great! I think I favor @jr's concise reducer, I always forget how useful fnil is
FWIW, transforming that into reduce-kv is slightly cumbersome because my input is a literal seq of pairs, not a map per se
yeah I wouldn't suggest doing that since the data structure we are reducing isn't associative in the way reduce-kv expects
yeah reduce-kv isn't a good optimization for this case imho
For my use-case I'm I/O bound and expecting small inputs anyways, but would you make the accumulator map a transient?
I have found that benchmarking stuff like this does not always match my expectations (so it’s worth doing if you care)
based on what you’ve said, I would pick the one that reads the best and ignore the perf difference though
is http://docs.datomic.com down?
read it on http://news.ycombinator.com this morning, was not expecting it to affect me
cloudfront seems to be masking the problem on http://clojure.org and http://clojurescript.org atm
https://twitter.com/ian_surewould/status/836645989972918272 - presenter doing a talk about S3 realizes S3 is down during the talk
ever have one of those days?
https://web.archive.org/web/20170206025048/http://docs.datomic.com/
The thing that sucks about the outage is trying to explain it to a client, and explaining that you can't just swap out storage to "something else".
looks like some stuff is coming back
transients would help the code get more confusing to read 🙂 Only optimize after profiling, and save yourself some time and code complexity 😛
gotta backup the data to a separate availability zone?
yeah, that's a royal PITA...
installing boot on another machine; it worked, via s3; now wondering if it's safe or if it's mitm-ed
what's a good way to traverse over a queue, call a boolean function, and remove some items from the queue, without taking the items from one queue and putting them into another? Queue can of course be any list operation here. I think using reduce maybe slows this down, because the data is moving from a to b, while I'd like to just keep them in the same queue that I'm traversing through.
remove?
yes, that came to my mind too. But wanted to ask if someone knows some tricks. The only thing I don't like about remove is that I'm not always removing; I could call a function on some conditions. But I think remove will do it for sure...
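A sketch of that approach, assuming a persistent queue and a predicate: pour the kept items back into an empty queue of the same type.

```clojure
;; a FIFO persistent queue with 1..5
(def q (into clojure.lang.PersistentQueue/EMPTY [1 2 3 4 5]))

;; remove returns a seq; rebuild a queue from the survivors
(def q' (into clojure.lang.PersistentQueue/EMPTY (remove odd? q)))

(seq q')
;; => (2 4)
```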
Today's puzzler:
dev=> (clojure.core.unify/unify [#:a{:b {:c/d 42, :db/id "foo"}} {:e/f 43, :x/y "foo"}] '[#:a{:b {:db/id ?x :c/d 42}} {:e/f 43 :x/y ?x}])
nil
dev=> (clojure.core.unify/unify [#:a{:b {:db/id "foo", :c/d 42}} {:e/f 43, :x/y "foo"}] '[#:a{:b {:db/id ?x :c/d 42}} {:e/f 43 :x/y ?x}])
{?x "foo"}
oh,
dev=> (clojure.core.unify/unify {:a '?x, :c :d} {:c :d, :a :b})
nil
dev=> (clojure.core.unify/unify {:a '?x, :c :d} {:a :b, :c :d})
{?x :b}
hah, on clojure 1.5.1 which is what you get if you clone the project and run lein repl
the breakage happens between clojure 1.6 and 1.7, which is where I think clojure made some changes to hashing, which changed things like the iteration order of maps
Ah, it seems that it is supposed to work, but it's untested: http://dev.clojure.org/jira/browse/UNIFY-6
UNIFY-9 created. I'll look into fixing this and attaching a patch, 'cause this is kind of up my alley.
@alexmiller Are fixes to core.unify welcome and/or likely to be merged?
Hey all. I'm getting the following error when trying to access a service that has a Let's Encrypt cert: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target. I'm using the latest version of clj-http with the :insecure? true option set. Am I missing something?
java -version
openjdk version "1.8.0_121"
OpenJDK Runtime Environment (build 1.8.0_121-8u121-b13-0ubuntu1.16.04.2-b13)
OpenJDK 64-Bit Server VM (build 25.121-b13, mixed mode)
https://yogthos.net/posts/2016-07-15-JavaSSLWorkaround.html this can hack around it
@eraserhd sure, I’d take a look
@petr I believe this is typical when the root cert is not in the default store for your jdk
I've hit that cert error with LetsEncrypt too.
you can certainly add it yourself manually
I assume you have to update the Java certificate store, which is a hassle but doable.
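If you go the manual route, the usual tool is keytool against the JDK's cacerts store; the store path, alias, and PEM filename below are assumptions for illustration (the store location varies by JDK install):

```shell
# import a root certificate into the default JDK trust store
# (default store password is "changeit")
keytool -importcert -trustcacerts \
  -keystore "$JAVA_HOME/jre/lib/security/cacerts" \
  -storepass changeit \
  -alias letsencrypt-root \
  -file isrgrootx1.pem
```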