This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-08-07
Channels
- # bangalore-clj (2)
- # beginners (53)
- # boot (30)
- # cider (27)
- # clara (1)
- # cljs-dev (18)
- # cljsrn (16)
- # clojure (153)
- # clojure-brasil (1)
- # clojure-dusseldorf (5)
- # clojure-italy (20)
- # clojure-losangeles (3)
- # clojure-spec (4)
- # clojure-uk (177)
- # clojurescript (115)
- # component (4)
- # core-logic (1)
- # datomic (29)
- # emacs (9)
- # figwheel (2)
- # gorilla (1)
- # graphql (36)
- # hoplon (4)
- # jobs (1)
- # jobs-discuss (3)
- # juxt (2)
- # keechma (22)
- # lumo (4)
- # off-topic (1)
- # onyx (17)
- # parinfer (96)
- # protorepl (10)
- # re-frame (31)
- # reagent (14)
- # ring-swagger (17)
- # spacemacs (32)
macros -- for () and [], the meta field gives us line/column; we do not get it for ints/strings/symbols - is there any compiler option which also allows macros to see the meta info for symbols?
I have here a ridiculous function I made for generating random dates:
(def simple-formatter (f/formatter "yyyy-MM-dd"))
(defn generate-simple-date []
  (f/unparse simple-formatter (c/from-long (math/round (* (rand) 9999999999999)))))
(it uses clj-time and math.numeric-tower)
How do I turn it into a generator for use with clojure.spec?
(clojure.spec.gen.alpha/generate
(clojure.spec.gen.alpha/gen-for-pred inst?))
;;=> #inst"1970-01-01T00:00:00.024-00:00"
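To turn the date function above into a spec with its own generator, one option is spec's gen helpers. A sketch: ::simple-date and the 13-digit upper bound are my assumptions, and sampling requires org.clojure/test.check on the classpath.

```clojure
;; Sketch: a spec for instants whose generator reuses the
;; random-milliseconds idea from the function above.
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.gen.alpha :as gen])

(s/def ::simple-date
  (s/with-gen inst?
    ;; with-gen takes a no-arg fn returning a generator;
    ;; fmap turns random longs into java.util.Date instances
    #(gen/fmap (fn [ms] (java.util.Date. (long ms)))
               (gen/choose 0 9999999999999))))

(gen/sample (s/gen ::simple-date) 3) ; three random dates
```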
Hey folks, I want to give a talk at a conference about Clojure and want to encourage people to use it. Do you have any reading or watching material to help me with the talk?
if I have a "pure" lib, which is to be used with mount/component/etc. how do I arrange using dynamic binding(s) in my implementation? Is there a way to "detach" dynamic binding var from the top level of the file?
There is an atom, and an update function f, which can be called outside and inside of the add-watch function. If it is called inside of the add-watch function, I want it to behave differently than if called outside.
More concrete example: make the same function put an event on one queue inside the atom if the event is external, and put the event on another queue if the event is emitted by an event handler.
Since both call sites will be implemented by the user, I'd like to exclude the possibility of user error, e.g. calling (emit-external!) inside a handler.
I'm using clojure.test and one of my tests passes the value of *ns* in to a function (I expect it to resolve to the unit test namespace). This works fine at a REPL; however, in one of my namespaces *ns* resolves as user, not the test namespace, when running lein test. Has anyone run into this problem/bug?
My test namespace is pretty standard… it has a typical ns form at the top - nothing fancy. It looks like it might be a bug in either Clojure (1.9) or Leiningen. I have some other namespaces that use the same pattern, and *ns* is bound correctly even when run via lein test.
@rickmoynihan *ns* is only certain to be bound the way you expect during compile- and macro-expansion time, not at runtime. So if you want to use its value at runtime, you might need to write and use a macro to capture its value and put it somewhere you can later find it at runtime.
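For example, a small macro along these lines (a sketch; the name is made up) bakes the namespace in at expansion time:

```clojure
;; Sketch: *ns* is reliably bound while a macro expands, so embed
;; its name as a literal symbol in the expansion for use at runtime.
(defmacro current-ns []
  `(quote ~(ns-name *ns*)))
```

A call site like (current-ns) then returns the symbol naming the namespace it appears in, even when evaluated later under lein test.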
@chouser: interesting… can you explain why, and where this is documented?
that’s very much not what I was expecting
@misha I don't think I understand, but it sounds interesting. Do you already have a way to determine if code is being called by add-watch vs not?
@rickmoynihan Well, anyone can set *ns* whenever they want, so its holistic behavior isn't (and probably can't be) documented. However, generally people leave it alone except by calling ns and in-ns, which are documented to set *ns*. The rest of what I describe follows logically, if somewhat cryptically, from that.
ok, yes that makes a lot of sense actually
basically it's a global set at read-time… if another namespace is evaluated it will stomp it for the duration of that namespace's evaluation
Being thread local will prevent threads stomping on each other if namespaces are loaded in parallel; but as you say run time is after the fact.
so all bets are off.
so it's pretty safe to rely on the ns value at macro-expand time, because you generally do your (ns ...) at the top and then don't use it or in-ns again.
I'll need to wrap my function in a macro; a bit of a PITA but no big deal.
You should only need a macro if you want different namespaces to be able to call your common thing, and that common thing needs to know the ns of the caller.
yeah, I had thought of that… but it’s not a one off; it’s a pattern my tests are using to find some sidecar files based on the namespace name… I’d rather users had to just use the one function macro, rather than remember to def the ns-name.
yeah you kinda repeated me - I do want that 🙂
Anyway, thanks for your help… I should’ve realised what was happening - not sure why I thought this would ever work… I guess the fact it works at a REPL threw me.
I clearly imagined some magic where there was none
@chouser yes, sort of. Lib api: 1) creates watch for you, and 2) provides slots for event handlers (think re-frame, I guess).
So I "know" when an event is emitted by an event handler (if the handler happens to emit an event), because I can call the handler within a binding.
(if the same handler is called outside of the lib's event-loop – it will lack the binding around it, and can be flagged as an external emitter (or rather "not flagged as an internal one"))
if I give the user 2 fns, emit-external and emit-internal – using emit-internal outside of the event-loop might lead to business-state inconsistency, unless I throw based on something
Initially, I wanted a single emit function, because 2 event queues are an implementation detail which the user should not really think/know/care about.
So, say I went with bindings. The lib would look like:
1. (make-state) gives you an atom with a state, with an added event-loop watcher. You can have many of those.
2. a global dynamic var {:state-atom-id false}, which would be updated with binding depending on which state's event-loop is executing its handlers
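A sketch of that binding-based design (all names hypothetical, single flag instead of per-state ids for brevity):

```clojure
;; Hypothetical sketch: a dynamic flag set only while the event-loop
;; runs handlers, so emit! can route events to the right queue.
(def ^:dynamic *in-event-loop?* false)

(defn emit! [state event]
  (if *in-event-loop?*
    (swap! state update :internal-q conj event)   ; emitted by a handler
    (swap! state update :external-q conj event))) ; external emitter

(defn run-handler! [state handler event]
  (binding [*in-event-loop?* true]
    (handler state event)))
```

The user only ever calls emit!; the library alone establishes the binding around handler calls.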
@leonoel can you suggest something instead? I don't like the whole ^:dynamic idea from the get go, but not sure what are the alternatives
1) I'm writing it for cljs in the first place, but cljc is the end goal. 2) I have not really thought through how it should behave in a multithreaded env (although FSMs have some concurrent things going on)
the problem is, if you rely on a dynamic var to test whether you're inside the event loop, suppose that a handler makes a call to future - because of binding conveyance the future will believe it's still in the event loop
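That conveyance is easy to demonstrate:

```clojure
;; Futures inherit (convey) the dynamic bindings of the thread
;; that creates them, so the flag "leaks" into the future.
(def ^:dynamic *in-loop?* false)

(binding [*in-loop?* true]
  @(future *in-loop?*)) ;=> true, despite running on another thread
```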
I can't control handlers implementation, and can't/don't want to force those to be pure either
a threadlocal would work; declare it globally and make your call to the handler inside a try / finally to make sure the flag is reset at the end
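A sketch of that threadlocal approach (JVM-only; the names are made up):

```clojure
;; Hypothetical sketch: a ThreadLocal flag, reset in finally so it
;; never leaks, and not conveyed to futures or other threads.
(def ^ThreadLocal in-loop-flag (ThreadLocal.))

(defn in-loop? []
  ;; .get returns nil on a thread that never set the flag
  (boolean (.get in-loop-flag)))

(defn run-handler! [handler state event]
  (.set in-loop-flag true)
  (try
    (handler state event)
    (finally
      (.set in-loop-flag false))))
```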
so if I allow I/O, an emitted event might be one of the side effects. And I need to prioritize events emitted from handlers, if any
I'm not saying that a threadlocal is better; I'm saying dynamic vars have binding conveyance, which in this particular case makes them inappropriate
@misha Would it be good to provide all three? emit-external and emit-internal for users who are doing crazy or unexpected things with threads, and emit that tries to do the right thing based on the sensed calling context? If so, then it would probably also be helpful to give the user access to the sense function, like internal? or some such.
Kinda weird -- I've never seen a situation where dynamic binding is almost right but you don't want conveyance.
the actual problem I am solving: I need to prioritize events emitted from event handlers, to implement: "run to completion" UML FSM requirement
I am very inclined to proceed with dynamics and just add a "don't" clause. But it might legitimately limit some use cases
or even with just emit-internal and emit-external, but that leaks implementation detail
On the other hand, ThreadLocal is pretty easy: (def x (ThreadLocal.)) (.get x) (.set x 42)
and it should be visible from any thread, right? with thread local, there would be multiple projections of ThreadLocal's value, one per thread, right? and since thread can do 1 thing at a time, it cannot emit event both inside and outside the loop at the same time, right?
so all sync emit calls from within a loop would see "in a loop", and all async emit calls from within a loop would see "not in the loop".
I would say, any mutable container such as atom or volatile! will behave like a threadlocal
@misha my take on this stuff is that whenever possible a library should provide a pure function that takes all relevant data as args (including a params map full of stuff specific to that lib, if needed) - that's the only thing that easily generalizes to dynamic binding, atoms, all the system state management libraries, etc. etc.
it’s easy to build all of those things given a pure function, it’s not easy to do any of the other conversions (dynamic binding to global atom, global atom to system management lib, etc. are all hard things to do right)
@noisesmith I understand that, and can do a pure implementation for everything but this emit use case.
and the only 100% pure solution I have so far is 2 separate functions: emit-internal and emit-external
be aware that if you use threadlocals you are ensuring that your lib can’t be used inside go blocks
at least, not without a lot of pain and complication
but, if someone uses emit-internal either outside of the loop, or inside the loop in a future - it will mess up the FSM's semantics, so I need to guard against that somehow
@noisesmith ouch, core.async integration was next on the todo list
what about using core.async to manage event ordering - reifying the emits as a series of events coming from multiple channels, so that a single block can ensure proper ordering
@misha Would it be too ridiculous to have the user's functions return the things to emit, rather than calling emit as a side-effect? I think you ruled it out above for the general case, but even providing it as an option can be helpfully clarifying.
I mean, it would be an option to make a “core async wrapper” that always launches a fresh thread and establishes the correct bindings and ensures your code never runs on a thread owned by go… but that feels a bit hairy too
I think anyone who'd care to "return what to emit" would use emit-internal/external just fine. or maybe I did not quite understand the solution
if you use something other than threads to carry context (and explicitly carry it instead of implicitly) this stuff becomes a lot easier
it’s reasonable to make an api that looks like a side effect to a client, but really returns data to your own managing context that does the real side effects
I thought that's what I'm trying to do here :) basically I need to know whether a function was called inside or outside the loop
@noisesmith The user still has to be careful to return the thing in that case, right? An extra (prn ...) in the tail position ruins everything?
well, what matters is what they provide to that api
right now, I keep the FSM in an atom, and expect the user to pass that atom as an argument to (emit). This way the same function can be called inside/outside the loop, but I cannot put any context in it to identify the caller.
maybe my picture of this is wonky
@misha that way the client is in charge of ordering
doesn’t this get easier if there’s another layer that can fix the ordering?
if the client ultimately decides on the ordering, all you can say is “you did your emit in the wrong order, it’s a problem in your code”
but if you have some indirection between a client saying “this is data to emit” and the actual emission process, there can be something that knows “I have to wait for x before y can come out” or whatever
During event handling (FSM transition between states), there might be 2 event sources: outside (usual: on-click, on-response, etc.), and inside (events emitted as a side effect of FSM transition/entry/exit behaviors). To satisfy the "run until completion" requirement, I need to ignore any outside events while a transition is in progress, but prioritize and execute inside events.
I implemented it with 2 queues of events. When the machine is in a stable state (idle), only the outside q accepts events, but when there is a transition in progress the second q might receive some events. And the only way I can tell where to put an event is by knowing whether it was emitted by a handler from inside the event-loop (calling the same handler outside the loop, even at the same time but from a different call site, has to put the event in the outside q, where it'll wait for the idle machine state).
right, that’s easy with a couple of queues and an agent or go block between the user and the atom
I want to avoid baking in core.async at this point; I will try to wrap it later as a separate integration, but have not thought about it yet.
So what if the only way to get events into the high-priority queue were for user event handler functions to return data that meant "here are events to queue". Then the only public 'emit' side-effect function would always add to the low-priority queue.
that sounds about right, yes
only the internals of the lib need to access the higher priority queue (I have something just like this in the code I should be working on right now…)
not every event handler (actually, it is a transition/entry/exit behavior) will emit an event
Actually, wait, those behaviors have to return the db (state), I guess. Hm, I need to think about that. If there is an inherent constraint on the return value - then yes, emitting priority events will be done through a separate pure fn only.
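That pure-return shape might look like this (keys and names are hypothetical):

```clojure
;; Hypothetical sketch: behaviors are pure and return the new db plus
;; events to emit; only the library touches the high-priority queue.
(defn behavior [db event]
  {:db   (assoc db :last-event event)
   :emit [[:follow-up]]})

(defn step [state event]
  (let [{:keys [db emit]} (behavior (:db state) event)]
    (-> state
        (assoc :db db)
        (update :internal-q into emit))))
```

The public emit side-effect function would then always feed the low-priority queue, and returned events go to the high-priority one.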
thank you @noisesmith @chouser
Does anyone have a suggestion or example of building a multipart form submission in clojure? I've been trying with http-kit unsuccessfully. I need the first part to be json and the next part to be a file. To be clear, this is effectively an upload POST call, but server to server.
I think the first thing I'd do is look for Ring middleware to handle multipart form submissions.
@chrishacker Oh, you mean acting as a client?
Thanks jeff, but I'm sending, not receiving.
I already am using ring to receive file uploads.
At a later point in the lifecycle of the file, I need to submit it to a 3rd party and they require a multipart form.
meant to is the right phrase
it builds a multipart post that only has one type allowed. I need a more complex body, though pretty standard
Authorization: Bearer OvwkFQIqMsi5vTGq+tTAAAAAxKM=
Accept: application/json
Content-Type: multipart/form-data; boundary=01ead4a5-7a67-4703-ad02-589886e00923
--01ead4a5-7a67-4703-ad02-589886e00923
Content-Type: application/json
Content-Disposition: form-data
{
"status":"sent",
....
}
--01ead4a5-7a67-4703-ad02-589886e00923
Content-Type: application/pdf
Content-Disposition: file; filename="test1.pdf";documentid=1
[PDF binary document content not shown]
--01ead4a5-7a67-4703-ad02-589886e00923--
If http-kit can't do it, I'd probably use apache libs with Java interop directly, and follow a stack overflow example. shrug
yeah, was worth asking.
what I want.
It won't do the content in the first boundary set
(or I'm missing something )
javax.mail is a horrible, horrible library that I've successfully used to create non-trivial MIME constructions. You may want to look at it.
Thanks for sharing your pain 😉
I'll take a look
It's a little unusual to send form data in the first part as JSON rather than url-encoded, isn't it?
Not my call - docusign API
https://github.com/dakrone/clj-http#post did you try this library?
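For reference, clj-http's :multipart option takes a vector of part maps with :name, :content, and :mime-type, which matches the request shown earlier. A sketch, with the URL, token, and JSON body as placeholders:

```clojure
(require '[clj-http.client :as http]
         '[clojure.java.io :as io])

;; Placeholders: the endpoint, token, and envelope JSON are made up.
(def request
  {:headers   {"Authorization" "Bearer <token>"}
   :accept    :json
   :multipart [{:name      "envelope"
                :content   "{\"status\":\"sent\"}"
                :mime-type "application/json"}
               {:name      "document"
                :content   (io/file "test1.pdf")
                :mime-type "application/pdf"}]})

;; (http/post "https://example.com/upload" request)
```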
What graphics library is best to "blit" an 80x25 monospace font to screen? My clojure data structure represents an 80x25 array of characters; they are all monospaced, no ligatures / fancy font rendering; I need to blit them to screen to display. What Java API will provide the fastest way of doing this?
if you can use any kind of display, ncurses in a terminal might work pretty well (dependent on how good your user's terminal is of course)
"What java API will provide the fastest way for doing this?" Vertex buffers on the GPU. Upload the font to the GPU using a large bitmap, set the U/V of each vertex to match the corners of each letter in the bitmap.
Bonus points if you generate the UV coordinates via a vertex shader. So you upload text, and the GPU does everything else.
Funny enough, that's about 20 lines of GLSL ^^