Xforms is fantastic.


I went off specter because learning a query language is an investment.


Do you find that xforms and specter lead to comparable performance improvements?


Depends on what you're doing ;)


My hand-crafted code outperformed specter, and we needed performance.


Makes sense


Xforms is a pretty sane default in my opinion
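For context, the two styles being compared look roughly like this. A minimal sketch with made-up data; the Specter call is shown as a comment since it needs `com.rpl.specter` on the classpath, while the hand-crafted equivalent uses only core fns and a transducer:

```clojure
;; Made-up sample data: orders with nested line items.
(def orders
  [{:id 1 :items [{:price 10} {:price 20}]}
   {:id 2 :items [{:price 5}]}])

;; With Specter (needs com.rpl.specter on the classpath) this is a
;; one-line navigation:
;;   (sp/transform [sp/ALL :items sp/ALL :price] inc orders)

;; Hand-crafted equivalent: explicit nested updates via a transducer.
(defn bump-prices [orders]
  (into []
        (map (fn [order]
               (update order :items
                       (partial mapv #(update % :price inc)))))
        orders))

(bump-prices orders)
;; => [{:id 1 :items [{:price 11} {:price 21}]}
;;     {:id 2 :items [{:price 6}]}]
```

The hand-written version is more verbose but has no query-language learning curve, which is the trade-off being discussed above.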


@rickmoynihan I mentioned this idea a while ago, and you mentioned that you had a similar idea too:


@U09LZR36F you wanted to create the same lib?


Yeah, but for clojure. I had way less of an idea how.


I thought it would be really neat to have something built on core.async, and see how it performed.


You can do basic optimizations when you can do partial parsing like this.
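A minimal sketch of the core.async idea, with an invented event vocabulary (the `[:headers …]` / `[:body-chunk …]` / `[:end]` tags and shapes here are assumptions, not any existing library's API). Because headers arrive as their own event, a consumer can reject a request before reading any of the body, which is the kind of partial-parsing optimization mentioned above:

```clojure
(require '[clojure.core.async :as a])

(defn consume-request
  "Reads request events from ch. Headers arrive as their own event,
  so we can reject an oversized request before reading any body."
  [ch max-len]
  (let [[tag headers] (a/<!! ch)]
    (assert (= :headers tag))
    (if (> (get headers :content-length 0) max-len)
      :rejected-early
      ;; otherwise drain the body chunks until the :end event
      (loop [body []]
        (let [[tag chunk] (a/<!! ch)]
          (if (= :end tag)
            {:headers headers :body (apply str body)}
            (recur (conj body chunk))))))))

;; Simulate a producer (e.g. a parser over a socket) feeding a channel:
(def ch (a/chan 8))
(a/>!! ch [:headers {:content-length 4}])
(a/>!! ch [:body-chunk "ab"])
(a/>!! ch [:body-chunk "cd"])
(a/>!! ch [:end])

(consume-request ch 1024)
;; => {:headers {:content-length 4} :body "abcd"}
```

The producer side could equally be driven by blocking sockets, NIO, or manifold, which is where the pluggable-backend idea comes in.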


That sounds pretty nice actually


Sort of like manifold?


You could build it on manifold too, yeah. That would be the power of the library :)


Do you want green threads, or do you want some funky thread-pool thing?


Interesting… Yeah, I toyed with similar things maybe 10 years ago. It was pre-core.async, back when MS had just released Rx, which I was using to work with XMPP and pubsub stuff… I had a similar idea about representing protocols as event-streams/marble-diagrams and was doing so for XMPP/STOMP, and felt it should be done for HTTP too. I never thought about plugging in different backends, as at the time I was just trying to plumb in Netty / byte-buffers. I did build a small experimental clojure lib for it, but I never got around to extracting it from the other code I was writing… Also, a month or two later Stuart Sierra released cljque, which was pretty similar to what I was doing, though I felt mine was a fair bit better… then the company I was working for tanked.


Still I think there’s mileage in this kind of approach


Though I’m really not sure of the benefit of plugging in backends — at least not initially. If you parse HTTP with sync blocking I/O and byte buffers but expose an async API over the whole thing, you’ll just get more overhead, not less. If you want sync I/O and threads, I think you really want to stay in a sync model (and marshal yourself if needed), rather than having to buffer everything and then back-fill events for the individual bits, or block the client whilst you handle them. Having said this, I guess there is some value in at least sending the headers all together, separately from the body, during a request; it means you don’t need to read the whole thing before killing the request. I think the big problem in bridging these two worlds though is that there’s an impedance mismatch between the coarseness of events you want and their efficiency. i.e. for portability, a sync implementation still needs to recreate the finer granularity of events from the async implementation, which is essentially a chunk of bytes on the wire. So your sync implementation is going to end up with more overhead than a more classical approach… or you’re just going to have to raise different sets of events in each implementation, and break portability.


Oh yeah. Replace the socket with the NIO equivalent, or maybe the Jetty TCP server, etc.


The important thing in this world, as far as I’m concerned, is representing both the wire protocol and the more abstract network protocol (i.e. client request, server response for HTTP) in the same way and viewing them as one and the same. For HTTP this isn’t that big a deal, as it’s mainly just request/response, though pipelining changes that slightly; but for other bi-directional protocols, such as XMPP etc., it’s a bigger win I think.
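To make the "one and the same" idea concrete, here's a small data-only sketch (the event tags are invented for illustration, not any real library): both directions of an HTTP exchange become streams of the same tagged-event shape, so generic consumers work on either.

```clojure
;; Hypothetical unified event vocabulary: a client request and a
;; server response are both just streams of tagged events.
(def request-stream
  [[:start-line {:method :get :uri "/index.html" :version "HTTP/1.1"}]
   [:header [:host "example.org"]]
   [:headers-end]
   [:end]])

(def response-stream
  [[:start-line {:version "HTTP/1.1" :status 200 :reason "OK"}]
   [:header [:content-length "5"]]
   [:headers-end]
   [:body-chunk "hello"]
   [:end]])

;; The same generic consumer works on either direction:
(defn headers [events]
  (into {}
        (comp (filter #(= :header (first %)))
              (map second))
        events))

(headers request-stream)   ;; => {:host "example.org"}
(headers response-stream)  ;; => {:content-length "5"}
```

For a bi-directional protocol like XMPP you'd just have two such streams flowing at once, which is where this representation pays off.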


Good Morning All!