
So I finally figured out how to send ClojureScript forms without their sources (only the JS under the var) over to a remote REPL. For instance, to wrap inc, just put it in a var with a js* inside, like so: (str "(def avar (js* \"" (str inc) "\"))"). Evaluate that on the other side and apply the result to either remote state or state you sent over with it.
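A sketch of the trick described above. `inc` compiles to a plain JS function, so (str inc) yields its JS source, which js* can splice back in on the remote side. (`remote-eval!` is a hypothetical transport function, not part of any library.)

```clojure
;; Build a form string that re-defines the compiled JS under a new var.
(def form-str
  (str "(def avar (js* \"" (str inc) "\"))"))

;; Ship `form-str` to the remote REPL and evaluate it there; `avar` then
;; holds the same compiled function. (`remote-eval!` is hypothetical.)
;; (remote-eval! form-str)
```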


might fail under advanced compilation though


what can I call on a parameter at runtime to determine if it is a non-literal form, like #object[... ???


I guess just #(not (coll? %))?


there are plenty of things that are readable but not collections


dates, uuids, records...


but maybe (fn? %) would help in the case you care about?
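For the case in question, fn? does distinguish function values (which won't round-trip through the reader) from readable data:

```clojure
(fn? inc)                 ;; => true
(fn? {:a 1})              ;; => false
(fn? #inst "2017-01-01")  ;; => false
```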


@john Take a look at the JavaScript you see for multi-arity functions—it refers to other variables that don't show up in the string form.


specifically, I want to know if the var is js underneath, that can be executed by js*, or not


@mfikes are you referring to a stricter mode of printing strings?


I read something about strict mode


Take a look at what you get for (str f) when f is defined by:

(defn f
  ([x] :single-argument)
  ([x y] :two-arguments))


Specifically, the keywords in the above definition don't appear. The first arity ends up under cljs.user.f.cljs$core$IFn$_invoke$arity$1


are keywords stored on a table?


Well, the example above isn't specifically about keywords. It is to illustrate that the string form of a function value isn't all of the JavaScript that defines the function.


where are those keywords stored? the js form will still return those keywords, right?


Of course, you run into a similar thing if a function refers to another var, but the multi-arity example shows that the generated code includes more than what you see when applying str to the value.


Yeah, I was noticing other things disappearing too




To answer your question about the keywords in the example above, the auxiliary JavaScript referenced by the "primary" function constructs the keywords (at least in :none mode)


cljs.user=> cljs.user.f.cljs$core$IFn$_invoke$arity$1
#object[Function "function (x){
return new cljs.core.Keyword(null,"single-argument","single-argument",-784978075);
}"]


so single Clojure functions are literally being broken down into multiple JavaScript functions, not all of which are captured by the var's string form when you pass it along as a parameter.


printed and returned are different


@john The current implementation of the compiler emits multiple JavaScript assignment statements for multi-arity functions.


it prints something, but that's not the object, it's a string that doesn't reflect every property of the object


An easy way to experiment with this and see what is going on is to put your ClojureScript REPL into verbose mode and evaluate some defns and see what code gets emitted.
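For instance, with verbose mode on (e.g. {:verbose true} in the REPL options), evaluating a multi-arity defn shows the several assignment statements mentioned earlier. Roughly (exact output varies by compiler version and optimization mode):

```clojure
(defn g
  ([x] :one)
  ([x y] :two))
;; Emitted JS, approximately:
;; cljs.user.g = function(var_args){ ... }             ;; dispatch fn
;; cljs.user.g.cljs$core$IFn$_invoke$arity$1 = ...     ;; one-arg body
;; cljs.user.g.cljs$core$IFn$_invoke$arity$2 = ...     ;; two-arg body
```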




yeah, shuffling "compiled" cljs code between environments sounds pretty hard.


In some sense, that's how many ClojureScript REPLs work. With the difference being that they easily have access to the emitted JavaScript because they just produced it.


right, staying synced up?


(When you evaluate a form in a ClojureScript REPL, it compiles it to JavaScript, and then evaluates that JavaScript in the target JavaScript environment, which is usually remote.)


so the guts to make it work might be there. But you get into messy business when you want to differentiate the two environments and run some code on one and some on another.


There's also the overhead question of synchronizing state between multiple webworkers or remote repls


Do you have the ClojureScript source for the code you'd like to ship to the remote environment?


so i've got a self-hosted repl up in a webworker and I'm shuttling datastructures back and forth via butler. I'm experimenting with some concurrency ideas. Currently working on an agent implementation.


The most sophisticated example I've seen along those lines is crepl


so I'm experimenting with the different ways of sending code over to the webworker


yeah, I noticed that. Didn't realize they were solving that problem too. I'll definitely have to check it out.


crepl is also self-hosted, and it is syncing code between two environments remote from each other, as well as atom state


do you think it's a good idea to go after that compiled code sharing goal, for the purposes of an agent? Sending pure cljs forms over the wire to be evaluated on the remote end is working at the moment.


though you could probably be a lot lighter weight on the webworker end if you didn't need a compiler on that end.


In my opinion, attempting to extract and sync the compiled JavaScript by tracing through vars and references seems like it would be challenging.


Yes, the self-hosted compiler has a big footprint


Well it's pretty fun to experiment with 🙂 thanks for all your advice!


Yeah, not claiming it is impossible. Experimenting with this stuff is truly fascinating.


what about patching the self-hosting compiler so it has the option to compile to localstorage? then you could get the string to send from there? maybe that's crazy


with localstorage, it would also be immediately accessible within the same browser if the same site
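A rough sketch of that idea using the self-hosted compiler (cljs.js): compile-str hands its callback the generated JavaScript as a string, which could be stashed under a localStorage key. The key and name here are illustrative, and note that another page on the same origin could read it, though Web Workers themselves don't have localStorage access.

```clojure
(require '[cljs.js :as cljs])

(defonce st (cljs/empty-state))

(defn compile-to-local-storage! [k source]
  (cljs/compile-str st source "shipped" {}
    (fn [{:keys [value error]}]
      (if error
        (js/console.error error)
        ;; `value` is the compiled JS string
        (.setItem js/localStorage k value)))))

;; Another page on the same origin could then load and run it:
;; (js/eval (.getItem js/localStorage "shipped-code"))
```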


@noisesmith I think the cljs functions are still decomposed into separate functions in the source files


right but you tell the other js to load the whole localstorage document


(maybe that doesn't fit your usage...)


when passing a compiled function, the caller then calls the others from the environment


short of sending the whole environment, you'd have to do diffs, or trace out JS function dependencies and send just the dependencies


yeah, that's tricky


and if you have two different threads banging on one datastructure that is synchronized between locations, most threads will be waiting on synchronization


well, localstorage is keyed - they can each claim their "output key" without conflict


I mean - I'd assume the browser could handle that since it's keyed; we could find out it would still conflict, I guess


but probably best to use the communications mechanisms that webworkers already have I'm sure


the replikativ folks, with their CRDTs and Konserve, sound similar


I was just thinking that if you are generating the functions at runtime, you have the self hosted compiler (or some subset of it) so with some patching you should be able to output to a usable textual location in the js vm


they're working on DataScript on localStorage that synchronizes p2p-style over CRDTs


yeah- I could imagine a crdt implementation where each producer claims a key, guaranteeing that order of writes are not important between producers (as long as you get each individual data producer's order right)
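The key-per-producer idea can be sketched as a grow-only log per producer: each producer appends only to its own key, so writes from different producers commute, and merging replicas just takes the longer log per key. (All names here are illustrative, not from any CRDT library.)

```clojure
(defonce store (atom {}))

;; Each producer appends only under its own id, so producers never
;; contend for the same key.
(defn record! [producer-id value]
  (swap! store update producer-id (fnil conj []) value))

;; Merging two replicas: each producer's own log only grows, so the
;; longer log per key wins.
(defn merge-replicas [a b]
  (merge-with (fn [xs ys] (if (>= (count xs) (count ys)) xs ys)) a b))
```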


Yeah, I'm using keys to synchronize agent actions. It's pretty hairy


@mfikes You were correct ^^


How long does it take yall to run this?


@john for this code V8 appears to be an order of magnitude faster than JavaScriptCore


(V8 takes about a minute)


@mfikes Yup, that's about what I'm seeing. I'm creating MetaFns to carry around the function's source, but they're going roughly 8 times slower. Once they compile down on the remote side, they compile to regular functions. So I thought my webworker was going 8 times faster than my main thread 😂


Turns out the main thread is going about 20% faster


and there's about a 50 ms round trip overhead for sending and receiving to the worker.


for small data


I've implemented a round-robin worker pool now though, so I can queue up 20 (fib 40)s and all four of my cores get pegged 🙂 runs 25% to 50% faster than it would have on one thread, by my back-of-the-napkin measurements


and that's about the most naive agent pool implementation you could make. Could probably be improved a lot.
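That naive round-robin dispatch might look something like this (the worker script name and function names are hypothetical):

```clojure
;; Fixed pool of Web Workers; jobs are posted to each in turn.
(def pool (vec (repeatedly 4 #(js/Worker. "agent-worker.js"))))

(def next-idx (atom -1))

(defn dispatch! [form]
  ;; Advance the index modulo pool size, then post the serialized form.
  (let [i (swap! next-idx #(mod (inc %) (count pool)))]
    (.postMessage (nth pool i) (pr-str form))))

;; e.g. (dotimes [_ 20] (dispatch! '(fib 40)))
```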


klipse has a nice interface to quickly see what JS comes out:


@danbunea Going back a bit for this, but... I'm also just dipping my toes into the waters of generative/fuzzy testing, but I've been inspired by some examples I've seen (even if some of them are a bit toylike, they seem to point in an interesting direction. For instance: and (the latter not toylike, but actually useful at finding bugs in bash with real security implications.)


I've always been dissatisfied with testing- what I want is proofs, what I have is tests. I did spend a few years working on a system where we actually did include some proofs, at varying levels of formality, in both the dev process and the documentation, but... that was only practical for certain relatively isolated pieces of the software.


So, given that what I have is tests, and that they are expensive to write and maintain relative to their effectiveness in finding non-trivial bugs... well the issue with measuring coverage is that lines of code covered is a bad metric. It's paths through code that matters (as well as the input space.) I think the idea of testing systems that can learn how to take new paths through code is pretty interesting.


(Though in a deep sense the distinction between paths through code and the input space may be pretty closely related.)


hello guys, is there a reason keywords don’t extend IMeta & IWithMeta?


because then they wouldn't be internable


or at least we would need a very different interning system


that jpeg discovery thing looks really cool


Is there a way to use externs provided by the closure-compiler project without copying them?


@jfntn they should be in the classpath


@ashnur The learning to write valid bash scripts part seems even cooler to me (but I have always been a bit parser-oriented, I guess.)


@ashnur But what seems cool to us is not even the cool part, I think... it's what we notice.


@ashnur the cool part is learning to exercise as many paths through the code as possible- that has always been the fly in the ointment.


@anmonteiro hmm that’s strange then, I’m seeing "$svg$$inline_4211$$.$createSVGPoint$ is not a function” when invoked on #object[SVGSVGElement [object SVGSVGElement]] in an advanced build


@ashnur learning to generate jpegs and valid bash scripts is an epiphenomenon, albeit one we find significant.


@jfntn last time I checked Closure didn't bundle SVG externs


has that changed in the meantime?


oh you’re right!


that is quite the gotcha


I'm inclined to think that we're less than ten years away from a limited form of generative programming, but... I think this sort of generative testing is the first step.


fly in the ointment? epiphenomenon? i am not following you, sorry :-S


i like the idea that there is a tool to explore specific spaces efficiently


Hmm- well what I mean by 'fly in the ointment' is that as programmers we mostly guess at correctness. We can cover every line of code with tests, but in complicated cases we can't cover every path through our code. The best we can do is try to isolate things (the point of functional programming) but we can't do that completely, so our tests shouldn't really reassure us. There are, in many reasonably complicated modern software systems, trillions of possible paths through the code- and that is likely an understatement.


tagore: it won't be too terribly long before most of us are replaced by machines. better keep an eye on languages that include proof engines. they can't replace interactive code, but for the functional bits they will disemploy a lot of programmers.


This is why programming is still more art than engineering. A structural engineer can measure the load a beam can carry, and be pretty sure that if it can easily carry 1000 lbs, and can easily carry 2000 pounds, it can easily carry any weight between the two. An analogous software system might see the beam buckle at exactly 1.436785 pounds, but at no other value.


This is the fly in the ointment.


When I say epiphenomenon what I mean is that while we're oohing and ahing at a simple testing system learning to do huffman encoding, or write valid bash scripts, that's just an incident of something more important happening that is harder for us to see, and thus ooh and ahh about. Code learning to exercise code is more significant, to my limited mind, and different pieces of code learning to adapt to each other seems even more significant, though I doubt I'll be able to understand anything not epiphenomenal about the upshot of that.


i see that this topic is important for you 🙂


i find it interesting for the same reason, only difference is that i don't think this would help so much to make programming a science, mostly because programming just like politics or economics is also about values that are not necessarily quantifiable. at least not with our current understanding.


but i do think it can help with building more general tools


any opinions on using vue.js with clojurescript?


if you like to experiment with stuff. but i see no reason to use it in production. there are probably better tools. more mature stuff, based on datascript and everything


@ashnur was that in response to my question?


@ashnur I suppose so, and I'm inclined to think that the fly in the ointment bit should be important to everyone who programs. It's just so very difficult to know how correct large systems are, and sometimes it's important that they be reasonably correct. As for the rest.. well, that's speculative, but it certainly seems like the outlines of whatever is slouching toward Bethlehem (for good, or for ill, or for both) have become a bit easier to discern lately. I wouldn't be surprised if Skynet grew out of a test framework, at this point 😉. However, this is not very specific to clojurescript, so...


well, trustware will fix all your problems, i am sure 😄


I'm not familiar with trustware. Sounds fishy to me though. I don't trust much, and I trust software only in strictly delimited fashion.