#clojure
2017-07-05
niquola09:07:49

Hi, could you point me to some HTTP/2 / streaming Clojure client?

madhuparna09:07:24

We are trying to add clojure.spec and metosin/spec-tools to our codebase, and it looks like clojure 1.9 is a prerequisite. We tried upgrading to 1.9-alpha17, but are currently facing a bunch of problems related to the upgrade. Our app currently does not start, with errors similar to the following:

Caused by: clojure.lang.ExceptionInfo: Call to clojure.core/defn- did not conform to spec:
In: [0] val: clj-tuple/conj-tuple fails spec: :clojure.core.specs.alpha/defn-args at: [:args :name] predicate: simple-symbol?
A number of these are in libs and stuff. We’ve been upgrading a few of them, or fixing the error, which typically causes a new error to appear. Is there any way to switch off clojure.spec validating the entire codebase + libs? Possibly get it to stick to a few namespaces only? Is this happening because we are trying to upgrade to an alpha?

bronsa09:07:40

there's no way to turn off spec checks on macroexpansion

bronsa09:07:34

your only option is to upgrade to versions of the libraries that include the fixes you need

jonpither10:07:12

hi - is there a way to get all resources on the classpath - whether they are inside jars or on the filesystem?

Rachel Westmacott10:07:24

@jonpither I’m sure there must be but I don’t know what it is. eg. iirc Spring does a bunch of classpath scanning
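
For reference, a minimal sketch of one approach using only JDK interop (no extra library; it only inspects the java.class.path property, so entries added by custom classloaders are not covered, and directory entries come back as file paths rather than relativized resource names):

(import '(java.io File)
        '(java.util.jar JarFile JarEntry))

(defn classpath-resources
  "Seq of resource names found in classpath directories and jars."
  []
  (mapcat (fn [^String entry]
            (let [f (File. entry)]
              (cond
                ;; directory entry: walk it and keep the files
                (.isDirectory f)
                (map #(.getPath ^File %) (filter #(.isFile ^File %) (file-seq f)))

                ;; jar entry: list the jar's contents
                (and (.isFile f) (.endsWith entry ".jar"))
                (map #(.getName ^JarEntry %)
                     (enumeration-seq (.entries (JarFile. f))))

                :else [])))
          (.split (System/getProperty "java.class.path")
                  (System/getProperty "path.separator"))))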

igrishaev12:07:49

Hi! I wrote an article about PostgreSQL to Datomic migration. Hope you’ll find it useful. http://grishaev.me/en/pg-to-datomic

hmaurer12:07:20

igrishaev: thank you for writing this up; very useful!

hmaurer12:07:34

there is a #datomic channel, you might want to post it there too

igrishaev12:07:55

thank you, makes sense

octahedrion12:07:44

does the describe of a spec always have a similar structure to the data it specifies? Can you give a counterexample?

hmaurer12:07:51

@octo221 I am not overly familiar with clojure specs, but map specs are represented as vectors I think

hmaurer12:07:57

well, as a map with a :req vector

octahedrion12:07:24

yes,

(keys :req etc)
-- but that has basically the same nesting structure

octahedrion12:07:00

I'm trying to think of a case where you describe a nested structure of collections by a flat spec

joshkh14:07:59

does clojure have a representation of infinity that can work with mathematical operators? something like (<= 1 2 Infinity) ?

octahedrion14:07:26

Double/POSITIVE_INFINITY

octahedrion14:07:32

or

Number.POSITIVE_INFINITY
in CLJS
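
For reference, usage on both platforms (the CLJS forms below use interop access rather than the bare Number property):

(<= 1 2 Double/POSITIVE_INFINITY)            ;; => true on the JVM
;; ClojureScript equivalents:
;; (<= 1 2 js/Infinity)
;; (<= 1 2 (.-POSITIVE_INFINITY js/Number))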

joshkh14:07:04

!! thanks 🙂

cycle33714:07:09

what's your preferred method of checking for nil before assignment?

cycle33714:07:01

i just realized I am repeating (:username response) when I write (if-not (nil? (:username response)) (:username response) fallback-value)

cycle33714:07:21

isn't there some syntactic sugar for this?

octahedrion14:07:37

(or maybe-nil-thing default-value)

bronsa14:07:11

assuming you also want to exclude false

octahedrion14:07:25

(assuming that)

cycle33714:07:54

(-> good sounds)

pseud14:07:20

@avabinary I end up writing functions like (?apply pred f val & args) for these purposes. If the predicate fails, then pass the value along

octahedrion14:07:22

but

(if (nil? maybe-nil-thing) default-value maybe-nil-thing)

bronsa14:07:49

if you specifically want to exclude nil and not false, there's if-some

bronsa14:07:15

(if-some [u (:username foo)] u default)

cycle33714:07:28

there-s also (some->

cycle33714:07:41

which short-circuits when nil

cycle33714:07:43

I think points for brevity go to (or value fall-back)

cycle33714:07:03

but yeah, some? is nice

jrahme14:07:21

There are always when-some and if-some as well, if you need to bind the value or branch accordingly when nil

potetm15:07:39

@avabinary For your particular case, (:username response fallback-value) will work as you want.

bronsa15:07:58

not if username is set but nil

cycle33715:07:32

true ! good catch
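
To summarise the difference for the case bronsa raised (a hypothetical response map where :username is present but nil):

(def response {:username nil})

(or (:username response) "guest")             ;; => "guest" (nil and false both fall through)
(:username response "guest")                  ;; => nil     (key exists, so the default is ignored)
(if-some [u (:username response)] u "guest")  ;; => "guest" (only nil falls through)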

bcbradley15:07:35

this might be a dumb question but how do i get org.clojure/algo.graph on lein?

bcbradley15:07:41

i have [org.clojure/algo.graph "0.1.0"]

bcbradley15:07:44

the repl complains that it doesn't exist in clojars or maven

bcbradley15:07:38

i believe the github page https://github.com/clojure/algo.graph is out of date so i looked inside clojure contrib and it seems the namespace is actually clojure.contrib.graph here https://github.com/richhickey/clojure-contrib though the project name is 'algo.graph' from here https://github.com/clojure/algo.graph/blob/master/project.clj

bcbradley15:07:07

but unfortunately that isn't working for me either

seancorfield16:07:04

@bcbradley The clojure.contrib stuff was for Clojure 1.2. It was discontinued when Clojure 1.3 came out.

seancorfield16:07:44

clojure.algo.graph is as recent as it gets in the "modern contrib" libs. Let me check Maven Central for a version.

seancorfield16:07:49

Confirmed -- no releases have been made to Maven Central. I'll bring it up in the dev channel...

manutter5116:07:42

@bcbradley I haven’t done enough graphs work to be sure this is related, but I remembered seeing a recent announcement about https://github.com/Engelberg/ubergraph, perhaps that might have some of what you’re looking for in algo.graph?

bcbradley16:07:04

i considered loom and ubergraph, but i really don't need to pull in all of that

bcbradley16:07:27

i really just need a topological sort, preferably one that partitions appropriately into sets
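
A minimal sketch of a layered topological sort, assuming (as in the edges map shown later) that edges maps each node to the set of nodes it depends on; names here are illustrative:

(require '[clojure.set :as set])

(defn topo-layers
  "Partitions nodes into an ordered vector of sets; every node in a layer
   depends only on nodes in earlier layers. Throws if a cycle remains."
  [nodes edges]
  (loop [remaining (into #{} nodes) layers []]
    (if (empty? remaining)
      layers
      (let [ready (into #{} (filter #(empty? (set/intersection (get edges % #{}) remaining))
                                    remaining))]
        (when (empty? ready)
          (throw (ex-info "cycle detected" {:remaining remaining})))
        (recur (set/difference remaining ready)
               (conj layers ready))))))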

manutter5116:07:10

Fair enough.

bcbradley16:07:59

right now i'm just ripping out algo's source

seancorfield17:07:22

bcbradley: Just make sure you respect the EPL license and copyright!

bcbradley17:07:22

if i got the gist of it, the EPL basically says i have to EPL my library if i have any EPL protected code in my library, right?

seancorfield17:07:39

IANAL (I Am Not A Lawyer) so I'd have to go research it. At a bare minimum, with all OSS, you must retain the copyright notice (along with any additional copyright you add).

seancorfield17:07:21

Several OSS licenses are incompatible with other licenses so, yes, you are often "forced" to release your modified version of the code under the same license.

seancorfield17:07:45

Unless you are doing this totally for an internal project and not planning to release it outside your company in any form.

seancorfield17:07:54

(I had several, depressingly long meetings with the Legal Team when I worked at Adobe about OSS license compliance and compatibility... and the upshot of one of those meetings was that we had to ask one OSS project to re-license their code under a more friendly license for us to be able to continue to use it!)

bcbradley16:07:13

if sean comes back with any info on it being added to maven

bcbradley16:07:17

i'll depend on that instead

bcbradley16:07:32

thanks for the link, i haven't seen this before

bcbradley16:07:56

that's not exactly what i wanted but it's nice to keep in mind in case i need transitive dependencies

hmaurer16:07:58

Hi! I have two “beginner” questions on clojure.spec: (1) would it make sense to use clojure.spec to validate input API payloads? (2) in that case, how would you work around the restriction that clojure.spec‘s map validation works by using namespaced keywords for key&spec names?

hmaurer17:07:08

just found an answer to (2); only (1) remains 🙂
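
For anyone else hitting (2): s/keys has :req-un / :opt-un variants that match unqualified keys in the data while the specs themselves stay namespaced. A small sketch (spec namespace as in the 1.9 alphas after the spec split):

(require '[clojure.spec.alpha :as s])

(s/def ::name string?)
(s/def ::age pos-int?)

(s/def ::payload (s/keys :req-un [::name] :opt-un [::age]))

(s/valid? ::payload {:name "Ada" :age 36})   ;; => true
(s/valid? ::payload {:age 36})               ;; => false, :name missing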

hmaurer18:07:20

Hi. I am getting the following error when trying to run lein repl with Clojure 1.9.0-alpha17. Could anyone please tell me what’s going on? https://gist.github.com/hmaurer/47dfa107025f51ec71966170c537a578

english20:07:57

sorry missed this, did you try a lein clean? could be some stale .class files lying around

hmaurer21:07:53

@U06A9614K Someone helped me out (see later messages in #clojure). Basically it was a dependency issue

hmaurer18:07:09

> Caused by: java.lang.IllegalStateException: Attempting to call unbound fn: #'clojure.future/ident?

noisesmith18:07:46

oh - here’s a fun one @bcbradley - stuartsierra/component comes with a toposort (used for figuring out which order to start components in)

noisesmith18:07:05

oh wait someone mentioned that already

hmaurer18:07:43

I fixed the issue by reverting to Clojure 1.8.0 and adding [clojure-future-spec "1.9.0-alpha17"] to my dependencies, which is what lacinia (a library I rely upon) specifies as a dependency. However I would still like to know exactly what the issue was, if possible 🙂

noisesmith18:07:13

at a certain point spec was taken out of the clojure 1.9 alphas so that it could be updated independently of the clojure code

noisesmith18:07:39

code that was made for a 1.9 alpha version prior to the split won’t have that dep

noisesmith18:07:17

so if you are using the newer alpha17 or later, you need to provide that dep

hmaurer18:07:36

Oh I see. How are deps resolved by Lein, by the way? What if a library I rely upon depends on a different version of a library that I also use?

noisesmith18:07:55

you can see the actual versions used via lein deps :tree

hmaurer18:07:58

Will it just pick the most recent version? Or both?

noisesmith18:07:02

earlier in the list beats later in the list

hmaurer18:07:39

oh, so it flattens all dependencies and sub-dependencies in a list, and the first one wins?

noisesmith18:07:58

also you can use :exclusions to say “don’t use this dep to get this other dep” and avoid versions it asks for
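
A hypothetical project.clj fragment illustrating both points (library names and versions here are only placeholders):

:dependencies [[org.clojure/clojure "1.9.0-alpha17"]
               ;; pin the version we want at the top level...
               [org.clojure/core.async "0.3.443"]
               ;; ...and/or stop a library from pulling in its own copy
               [some/lib "1.2.3" :exclusions [org.clojure/core.async]]]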

noisesmith18:07:14

@hmaurer it’s more complicated than that - it’s an ordered tree

hmaurer18:07:18

lein deps :tree is awesome, thank you 🙂

noisesmith18:07:20

but it does use the first one it finds

hmaurer18:07:46

how does the ordered tree behaviour differ from what I was saying, if you don’t mind elaborating?

noisesmith18:07:27

because it’s not flattened anywhere afaik - it traverses the tree and adds items to a set of deps if you don’t have a version already

hmaurer18:07:44

oh right, ok, but the behaviour is similar

noisesmith18:07:48

I guess conceptually that difference doesn’t matter - it’s as if you flattened and took the first you found

hmaurer18:07:20

Will lein report if dependencies conflict? e.g. if I try to use a version of a dependency that introduced breaking changes, and a library I later require uses an old version of that dependency?

noisesmith18:07:09

no, lein will not complain, but lein deps :tree will report the conflict and tell you which it picked

hmaurer18:07:47

great, thank you 🙂 that clarifies some things

noisesmith18:07:50

you can use lein pedantic (a plugin) to make it bail out with an error https://github.com/xeqi/lein-pedantic

noisesmith18:07:18

oh wait, that’s deprecated and lein has the pedantic feature built in… never mind!
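
The built-in equivalent is set in project.clj (use :warn to only report instead of failing):

:pedantic? :abort   ;; fail the build when dependency versions conflict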

noisesmith18:07:23

@hmaurer see also lein help deps

qqq18:07:27

Is there a way to create a 'struct' that IS CONSTRUCTABLE ... but is NOT UPDATEABLE? {} does not work since it's updateable via assoc. Is there some way via deftype / defrecord to create something where (1) I have a constructor and (2) it can't be updated via assoc (or if it is updated, it becomes a plain map and is no longer a type / record)?

noisesmith19:07:55

sounds like you want deftype - you can’t assoc or conj etc. a deftype (unless you extend to those protocols of course)

dpsutton19:07:04

user> (deftype Trial [a b])
user.Trial
user> (def t (Trial. 1 2))
#'user/t
user> t
#object[user.Trial 0x5e0c24f4 "user.Trial@5e0c24f4"]
user> (assoc t :a 2)
ClassCastException user.Trial cannot be cast to clojure.lang.Associative  clojure.lang.RT.assoc (RT.java:792)
user> 

dpsutton19:07:42

from the deftype page in the clojure docs: deftype provides no functionality not specified by the user, other than a constructor

ghadi19:07:50

qqq: {} is not updateable

ghadi19:07:09

persistent data structure

qqq19:07:21

sorry, by 'update', I meant create new obj with same 'type' as old obj

qqq19:07:31

I'm concerned with 'invalid objects'

qqq19:07:39

that claim to satisfy certain constraints, but in reality do not

qqq19:07:42

due to assoc

qqq19:07:08

@dpsutton: defrecord does not work, as assoc 'preserves the record type'; with deftype, how do I read the 'a' and 'b'?

dpsutton19:07:31

user> (.a t)
1

ghadi19:07:04

> The XY problem is asking about your attempted solution rather than your actual problem

qqq19:07:17

@dpsutton: this was not part of the original request ... but is there a way to have this also work with cljs + optimizations advanced

qqq19:07:31

@ghadi : I agree that my question was poorly formulated.

ghadi19:07:43

invalid data is a gating problem -- you validate at the edge of a subsystem, then you operate with the data

dpsutton19:07:51

what was not part of the original request?

dpsutton19:07:59

you asked how do you read the fields of the deftype

qqq19:07:06

I want field access to also work with cljs advanced optimizations

qqq19:07:14

(.a t), I suspect, will not work as cljs advanced optimization renames the .a

ghadi19:07:53

IMHO this is not a data structure problem / impl detail but a program design problem

noisesmith19:07:59

it would rename your accesses too if it did that - that’s kind of why advanced compilation ever works…

noisesmith19:07:53

or would it? now I’m unsure

qqq19:07:17

@noisesmith : advanced optimizations has me so scared I just use (aget ... "field-name") everywhere

ghadi19:07:22

also if you suspect that something is generating invalid data, that is something clojure.spec + instrument is really good at catching

dpsutton19:07:29

but if you're scared of bad data getting into your data i think you're working on the wrong side

qqq19:07:41

here's the thing; I'm dynamically checking some type constraints

qqq19:07:50

but if I ask that question, ppl tell me to use spec or core.typed 🙂

dpsutton19:07:59

so what happens if the types don't align?

dpsutton19:07:04

and who would introduce the bad data

qqq19:07:04

throw an assertion

qqq19:07:16

bad data introduced by my own programming bugs

noisesmith19:07:20

@qqq

(ins)dev:cljs.user=> (deftype Foo [a b])
cljs.user/Foo
(ins)dev:cljs.user=> (aget (Foo. 1 2) "b")
2

qqq19:07:35

@noisesmith : nice; thanks!

dpsutton19:07:10

well that looks dangerous

dpsutton19:07:46

that looks like "b" may not track what the fields get renamed to in aggressive compilation

ghadi19:07:05

i still don't understand why you can't filter bad data at the edge

qqq19:07:38

@ghadi: the problem is not bad user input

noisesmith19:07:40

just spitballing: system that isn’t designed with the right edges in place?

qqq19:07:51

the problem is that I, the programmer, introduce bugs

qqq19:07:05

these are bugs that a static type checker would catch at compile time

qqq19:07:13

but in clojure, the next best thing I can do is to catch them at runtime

qqq19:07:23

the input data is fine; it's internal functions that manip the data that violate type constraints

bcbradley19:07:39

which of these two implementations would you prefer and why?

(defn inversion [nodes edges]
  (transduce
    (mapcat (fn [[k v]] (map #(vector % k) v)))
    (completing (fn [acc [k v]] (update acc k conj v)))
    (zipmap nodes (repeat #{}))
    edges))
(defn inversion [nodes edges]
  (reduce-kv
    (fn [acc k v] (reduce (fn [acc i] (update acc i conj k)) acc v))
    (zipmap nodes (repeat #{}))
    edges))
expected shape of nodes is like
[:framebuffer-0 :framebuffer-1 :framebuffer-2 :texture-0 :texture-1 :texture-2 :texture-3 :program-0 :program-1]
expected shape of edges is like
{:framebuffer-0 #{:texture-0 :texture-1 :program-0 :framebuffer-1}
 :framebuffer-1 #{:framebuffer-2 :texture-2 :texture-3 :program-1}
 :framebuffer-2 #{:texture-0 :program-1}}
correct output for those specific inputs:
{:framebuffer-0 #{}
 :framebuffer-1 #{:framebuffer-0}
 :framebuffer-2 #{:framebuffer-1}
 :texture-0 #{:framebuffer-2 :framebuffer-0}
 :texture-1 #{:framebuffer-0}
 :texture-2 #{:framebuffer-1}
 :texture-3 #{:framebuffer-1}
 :program-0 #{:framebuffer-0}
 :program-1 #{:framebuffer-1 :framebuffer-2}}

noisesmith19:07:42

@qqq I’d think accidentally calling an accessor and constructor would be a lot less likely than accidentally calling assoc on the wrong object

qqq19:07:54

@noisesmith : I can make the constructor private, then have a function (which calls the constructor) do checks beforehand
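
A minimal sketch of that idea (illustrative names; note that deftype always emits a public positional constructor and ->Interval factory, so the checked constructor is a convention rather than something the compiler enforces):

(deftype Interval [lo hi])

(defn make-interval
  "The intended way to build an Interval; validates before constructing."
  [lo hi]
  (assert (number? lo) "lo must be a number")
  (assert (number? hi) "hi must be a number")
  (assert (<= lo hi)   "lo must be <= hi")
  (Interval. lo hi))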

mfikes19:07:34

@qqq For accessing object properties (as opposed to array elements), consider goog.object/get instead of aget.

ghadi19:07:02

@qqq creating custom types that have their own field accessors completely negates the value of generic data access from the clojure std library

ghadi19:07:56

your code will essentially be a non-reusable DSL

ghadi19:07:20

I firmly believe you are going down the wrong path, and that the advice you're getting about specific implementation is misguided

ghadi19:07:32

you should test against invalid / incorrect data (even from your own code). Have you considered writing generative tests for your datastructures? (either from test.check or from spec)

ghadi19:07:38

test against bad data while allowing bad data to exist

ghadi19:07:29

Lots of things that are common in other languages (e.g. privileged data) are the exact wrong approach in Clojure.

ghadi20:07:27

when I say 'generic data access' here is an example of what I mean:

ghadi20:07:38

(get-in order [:line-items 2 :product :picture :url])
If you make a custom type, you'll need to do:

ghadi20:07:20

(-> order
  (get-in [:line-items 2 :product])
  (.-picture)
  (get :url))

ghadi20:07:41

You will infect the codebase with specificity

ghadi20:07:16

and negate a whole lot of the benefits of "the clojure way"

csm20:07:19

is it possible to tell if you’re already in a clojure.core.async/go block inside a defmacro?

dpsutton20:07:27

you could call <!

dpsutton20:07:33

the code for that is literally to throw an error when it's used outside a go block

dpsutton20:07:10

the go macro totally rewrites everything in it. so if that throws an error then you are not in a go block. otherwise have it take from a channel that has a single value ready to give

dpsutton20:07:21

super hacky

csm20:07:29

our devs went a bit bonkers with nested gos

hiredman20:07:12

the go macro actually macro expands everything before it does its thing, and the way it macro expands has some differences from the way the compiler does it

hiredman20:07:58

so you could try observing differences in the &env map

hiredman20:07:12

really, if you are in that place, I would just write everything to assume it is being run from a go block in the async threadpool

noisesmith20:07:19

it seems like an assert-go macro would be good, or even assert-not-go, but code that changes behavior in/out of go seems like it’s asking for maintenance issues and subtle bugs

tbaldridge20:07:44

@csm what problem are you trying to solve?

csm20:07:31

it’s stuff like “inject tracking of metrics in async calls”

csm20:07:54

in retrospect, it probably should have just been a fn instead

csm20:07:47

And a few mistakes/misunderstandings (like “wrap in a go call to run this in a different thread”)

csm20:07:41

it only became clear after updating core.async to a newer version, since we started getting method too large errors

tbaldridge20:07:07

So unless you're mixing core.match with core.async you shouldn't hit those problems often.

tbaldridge20:07:40

I mean 65k instructions is fairly large for a method, you might just be able to pull a few functions out the body of the go and reduce the method size a bit.

dpsutton20:07:59

i'm not able to get that hacky way to work. tried inlining the function, not sure

dpsutton20:07:14

it might need to be a macro for the async compiler to do its thing?

bcbradley20:07:28

does anyone know if anyone has tried making futures as efficient as goroutines?

bcbradley20:07:46

as far as i know futures spin up another thread, but I don't think they are pooled

bcbradley20:07:05

maybe someone has their own implementation of futures?

bcbradley20:07:24

i'm only asking because I really dislike core async's approach to concurrency (not that it is worse than most of the alternatives)

csm20:07:33

after thinking about it, it seems like the urge to “hack how this macro works to handle go” would better be served by “rewrite some stuff as fns, and split things up a little”

noisesmith20:07:42

@bcbradley futures are pooled, the fundamental thing is that threads are OS level and OS level context switching can’t be as fast as in-process context switches

bcbradley20:07:08

ah well i'm misinformed about the whole kit and caboodle then

bcbradley20:07:36

do you think it would be possible to implement futures using in process context switching rather than os level switching?

noisesmith20:07:29

then you require either 1) an interrupt and pre-emptive context switch or 2) cooperative multitasking

noisesmith20:07:57

1 will perform a lot worse - probably as bad as OS task switches, and introduce massive complexity

noisesmith20:07:03

2 is core.async 😄

tbaldridge20:07:06

Yeah, and futures aren't slow at all, they're just memory heavy

tbaldridge20:07:21

@bcbradley have you looked at agents?

bcbradley20:07:50

i thought agents were pooled, and that's the reason i figured futures weren't

bcbradley20:07:58

probably shouldn't assume though xd

tbaldridge20:07:06

futures will re-use threads though

bcbradley20:07:31

well thats reassuring

bcbradley20:07:36

i didn't know that, thanks for that

tbaldridge20:07:10

They use a Java Executors.newCachedThreadPool, which means that once created, the threads stick around a while in case they need to be re-used. In my experience “a while” is something around a minute.
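
You can see the reuse from the REPL; futures run on Clojure's send-off/cached pool, so the thread name tends to repeat (exact names and timing are implementation details):

@(future (.getName (Thread/currentThread)))
;; => "clojure-agent-send-off-pool-0" (for example)
@(future (.getName (Thread/currentThread)))
;; => usually the same name, because the idle thread was cached and reused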

tbaldridge20:07:20

but that's undefined

bcbradley20:07:26

i don't want to start ranting, but the reason i dislike core async so much is because it is pretty much just imperative programming

bcbradley20:07:42

i'm not suggesting hardware threads plus semaphore hell is better

tbaldridge20:07:55

Nope, I agree with you 🙂 and I ranted on that subject at some length at Clojure/West

bcbradley20:07:27

i feel like futures + promises are the cleanest concurrency mechanism anyone has come up with so far

noisesmith20:07:31

@tbaldridge btw thanks for that talk, I often find myself linking the youtube for people who are clearly confused about how to use core.async properly

bcbradley20:07:34

it's clean, simple, and doesn't surprise you

tbaldridge20:07:50

But the answer is almost always: move the async, non-deterministic, bits to the edge of your system

dpsutton20:07:51

from the most recent clojure/west?

ghadi20:07:02

rich hickey elaborated on the problems of promises in a strange loop talk

bcbradley20:07:22

ooh can you link me that?

noisesmith20:07:29

@bcbradley I’ve still seen surprises, e.g. someone writes code that creates futures inside doseq - eventually they get a collection big enough that the VM goes nuts

noisesmith20:07:06

if they were doing channel ops inside a doseq, the reason for the failure would be more clear and the mitigation strategy would be more straightforward

bcbradley20:07:36

i can pretty much sum up my stance on the matter with one simple comparison

bcbradley20:07:08

core async and the go-lang tradition of concurrency involves imperative programming on stateful (chans are queues) entities

bcbradley20:07:17

promises are simply unrealized values

bcbradley20:07:29

clojure is a language of values, so i would prefer to use promises

bcbradley20:07:41

and futures allow you to realize something concurrently

bcbradley20:07:56

seems like a clean separation of responsibilities to me

tbaldridge20:07:26

Nothing stopping you from doing this: (<!! (thread ...))

noisesmith20:07:35

@bcbradley having worked on a large system which was using futures inside idiomatic fp constructs as a pervasive pattern, give me stateful queues of some kind any day - at least queues give you a straightforward mechanism for flow control and backpressure

noisesmith20:07:07

you might prefer agents with their implicit queue to core.async of course - but you’ll eventually need a queue of some type if you need concurrency

tbaldridge21:07:07

@bcbradley promises encourage a request/response style of code. This is fairly limited. As @noisesmith mentioned, CSP focuses on data flow, pipelines and queues. It's a "steps of processing" model vs a "call and wait for a response".

tbaldridge21:07:21

At least it should be, people use core.async as a request/response system, but it doesn't work out very well.

bcbradley21:07:43

i feel as though steps of processing could be implemented clearly with just chains of promises

potetm21:07:44

> people use core.async as a request/response system, but it doesn't work out very well

bcbradley21:07:21

i do think backpressure is an issue though

tbaldridge21:07:29

@potetm we've all done it 🙂 no condemnation from me.

michaellindon21:07:02

is there a neat way to remove an element from a vector at index i

dpsutton21:07:29

michaellindon: (apply concat ((juxt (partial take 3) (partial drop (inc 3))) [1 2 3 4 5 6]))

noisesmith21:07:58

you could use a finger-tree instead of a vector, they are designed for such things https://github.com/clojure/data.finger-tree
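
A common subvec-based idiom for this, still O(n) in the vector size like any removal from the middle of a persistent vector:

(defn remove-nth [v i]
  (into (subvec v 0 i) (subvec v (inc i))))

(remove-nth [1 2 3 4 5 6] 3)   ;; => [1 2 3 5 6]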

michaellindon21:07:23

thanks for the suggestion, ill take a look 🙂

bcbradley21:07:40

@mccraigmccraig wow i'll have to put that in a tab and read through it

swizzard21:07:42

is core.match ok? it’s been on 0.3.0-alpha4 for the last 3 years

seancorfield21:07:59

It's pretty stable and hasn't needed much maintenance I suspect.

dpsutton21:07:55

spoke with dnolen at conj last year. the way the decision tree is made is fine in java but removes some optimizations in cljs since it uses exceptions for backtracking.

dpsutton21:07:19

if i remember correctly, this was what he wanted to sort out for a non-alpha release

seancorfield21:07:27

Versions aren't much of an indicator in Clojure-land, and a contrib library that has gone without updates for a while doesn't indicate it's been abandoned either. Which, yeah, can make it hard to tell the difference between very-stable-ware and abandon-ware 😐

swizzard21:07:53

if the official status is Don’t Worry About It, then all the better

swizzard21:07:05

i just really like it and was worried something bad had happened

seancorfield21:07:23

Someone just pointed out that algo.graph never actually got a release on Maven (and it hasn't had an update in four years).

swizzard21:07:24

that’s bad, right?

urbank21:07:20

trying to use clojure 1.9 from emacs with cider, but when I run cider I get

dpsutton21:07:56

can you run a lein repl in this project without emacs?

noisesmith21:07:36

this is a known issue with clojure 1.9 and old core.async

noisesmith21:07:45

upgrade to a newer core.async and it goes away

nathanmarz21:07:28

@michaellindon with specter:

(setval (nthpath 2) NONE [1 2 3 4 5])
;; => [1 2 4 5]

nathanmarz21:07:53

also works with lists

urbank21:07:23

@noisesmith Hm... now to find out which dependency has the wrong core.async version...

noisesmith21:07:29

you can specify your own core.async version and the deps will have to use it

noisesmith21:07:39

or use lein deps :tree to see where it comes in

tbaldridge21:07:03

@nathanmarz Isn't that O(n) on the size of the new collection though? Why not just (into [] (filter-nth ...) input)

noisesmith21:07:09

(that is, if your deps is earlier in the deps list it overrides)

nathanmarz21:07:52

@tbaldridge yea the underlying impl is O(n), but your snippet looks like O(n) as well

tbaldridge21:07:47

but the whole idea behind conj and the like is that they are not O(n) on the size of the new collection

tbaldridge21:07:11

so I would flag both of our approaches in a code review for being a possible bottleneck

nathanmarz21:07:30

I don't think you can do better than O(n) for that task on a vector

tbaldridge21:07:00

that's why we suggested finger trees

nathanmarz21:07:40

performance and just being able to do the task elegantly are two separate things

nathanmarz21:07:03

sometimes you need to remove an element from the middle of a vector and it's not a bottleneck

nathanmarz21:07:24

unreasonable to require every operation to be O(1)

tbaldridge21:07:36

even then I'd be tempted to use take + drop to do the same thing

dpsutton21:07:43

(apply concat ((juxt (partial take 3) (partial drop (inc 3))) [1 2 3 4 5 6]))

tbaldridge21:07:48

or split-at or whatever it's called

bronsa21:07:10

core.rrb-vector could be used as well to keep performance sub-linear

nathanmarz21:07:12

nthpath can encapsulate the optimal method

nathanmarz21:07:18

whatever it is

bronsa21:07:25

has log subvec & log vector cat

nathanmarz21:07:54

I haven't investigated the optimal implementation for this particular task, but whatever it is it can be encapsulated behind the abstraction

nathanmarz21:07:16

specter is optimal for most of its functionality

urbank21:07:27

@noisesmith Thanks, that worked! What a relief! Nothing more annoying than debugging the dev environment when in a hurry 🙂

tbaldridge21:07:30

on that we'll agree to disagree

nathanmarz21:07:44

what are we disagreeing on?

bronsa21:07:08

I think nathan was talking about performance & tim about api?

tbaldridge21:07:37

yeah, I mean the problem is that specter in this case abstracts so much away that I really don't know what it's going to do

nathanmarz21:07:12

you don't know what it will do semantically or performance-wise?

tbaldridge21:07:58

performance-wise, there's an underlying assumption in specter that it will do "the right thing" while maintaining the data type. Many times I don't care about the datatype, instead I care about performance.

tbaldridge22:07:29

so our definitions of "optimal" differ

nathanmarz22:07:17

the original questioner asked about removing an element from a vector, which sounds like he cared about maintaining the datatype

tbaldridge22:07:04

perhaps, but sadly I've run into too much code already that falls apart because someone did (nth ...) on a seq.

nathanmarz22:07:33

I really don't understand what you're asserting there

tbaldridge22:07:43

And what does "the same type" mean in the face of PersistentArrayMap? (I'm actually wondering on this one)

nathanmarz22:07:50

MAP-VALS or ALL on a PersistentArrayMap output another PersistentArrayMap

nathanmarz22:07:05

even one that's already above the threshold

nathanmarz22:07:15

which is surprisingly possible

tbaldridge22:07:50

and if I do something that grows it?

nathanmarz22:07:02

you can't grow it with those operations

tbaldridge22:07:24

what about ops that you can grow with

nathanmarz22:07:49

they'll convert to PersistentHashMap

nathanmarz22:07:51

maintaining the literal type isn't the goal of specter, rather to maintain a type with the same expected semantics

nathanmarz22:07:23

PersistentHashMap and PersistentArrayMap are implementation details

nathanmarz22:07:04

that specter maintains PersistentArrayMap on ALL and MAP-VALS is because that's the most performance optimal way to do the transformation

dpsutton22:07:15

I've used an array map to maintain ordering for writing out columnar data; a PersistentHashMap would make my site more dynamic than i like.

nathanmarz22:07:44

yea there are isolated cases like that where it matters but not for 99% of use cases

dpsutton22:07:10

i hate some of the subtleties of the clojure datatypes

dpsutton22:07:33

like conj being beginning of a vector and end of a seq

dpsutton22:07:42

or backwards sorry

tbaldridge22:07:36

I also hate some of the side effects of these datatypes, and that's why I've backed off a bit from specter, hiding all these complexities behind an abstraction is nice, but it leaks in performance. How fast is "remove nth"? Well it depends...

tbaldridge22:07:48

Same is true of recursive concat, conj, etc.

dpsutton22:07:05

what do you mean "side effects of these datatypes"?

tbaldridge22:07:12

bad wording, sorry

tbaldridge22:07:29

"subtleties of these datatypes"

dpsutton22:07:32

figured. but wanted to know what you meant

nathanmarz22:07:43

actually for most uses of specter it's extremely difficult to outperform it

tbaldridge22:07:47

So yeah, back to the original problem, I think we should educate users of Clojure to say: "What you are trying to do isn't supported natively by the datatype, you can fake it in these ways, but it's going to have performance problems with larger collections".

nathanmarz22:07:02

for most programmers I would say it's impossible because it requires too much internal knowledge of clojure

nathanmarz22:07:55

that's 60% faster than next best method for transforming every value of a small map

nathanmarz22:07:39

as for "remove nth", specter is probably not currently optimal but that's only because the work hasn't been put into it

nathanmarz22:07:54

the abstraction can be optimal

tbaldridge22:07:03

optimal given the input data type...that's the catch

nathanmarz22:07:27

how is that a catch?

nathanmarz22:07:34

it can run different code for different data types

tbaldridge22:07:31

remove-nth will always be O(n) on a vector. No way to improve that. However, by educating users as to how the underlying collections work, maybe the'll reach for a different more optimal datatype.

nathanmarz22:07:03

I 100% agree it's the programmer's responsibility to understand the data types they're using and the impacts of that, but that's completely orthogonal to specter

nathanmarz22:07:50

specter lets you manipulate your data way more elegantly, especially compound or recursive data, and in many cases with far better performance

nathanmarz22:07:11

I completely reject characterizing it like some magic library with performance "leaks"

qqq22:07:35

I don't think @tbaldridge was blaming specter for the "leaks" -- but rather, unless you already have a mental model of how the nested datatype looks, you can have a single piece of Specter code that is (1) very fast for certain structures and (2) very slow for other structures, because some operations are O(log n) or O(n) depending on the underlying data structure

nathanmarz22:07:07

whatever the underlying types are, specter will do the operation in the fastest way

nathanmarz22:07:38

it's the responsibility of the programmer to choose the most appropriate types for their app

nathanmarz22:07:10

criticizing specter for a programmer choosing inappropriate types doesn't make sense

qqq22:07:44

1. I've studied specter a bit, even tried to implement a mini one myself. 2. I don't think I could do a better job myself. 3. I think "leaky" here just means -- as a programmer, you have to keep track of the underlying data structures, i.e. it's "leaky" in that you can't ignore the underlying details; not "leaky" as in space/time leakage.

qqq22:07:05

I think this is the standard definition of "leaky abstraction."

nathanmarz22:07:19

I wouldn't call that leaky

nathanmarz22:07:52

"leaky" more appropriately refers to details that you have to worry about that should be encapsulated

qqq22:07:24

Quoting wikipedia: In software development, a leaky abstraction is an abstraction that exposes details and limitations of its underlying implementation to its users that should ideally be hidden away. But here, Specter queries (or any other queries for that matter) are 'leaky' in that different data structures have different runtimes for different ops, and the programmer has to keep them in mind, so this isn't really abstracted away from the programmer. [I don't know a way to do this better.] [I think this 'leakiness' problem can not be solved -- i.e. any attempt to build a DSL that allows easy manipulation of heterogeneous data structures will have to deal with this.]

qqq22:07:17

Just to be clear, I don't know of a way to improve Specter -- I think it's hit a local optimum -- and this 'leakiness' is a fundamental problem due to different data structures having different runtimes.

nathanmarz22:07:27

data structures are never a detail that should be hidden away

nathanmarz22:07:15

we're just quibbling over terminology, I think we agree on the underlying principle

qqq22:07:02

To someone who expects specter code to be "what, not how" it is leaky, because they have to consider underlying data structures. To someone who expects to always keep data structures in mind, it's not leaky. 🙂

qqq22:07:28

Let's argue over something else,

qqq22:07:43

like ... where can i get a good set of exercises for learning how to write a nanopass compiler 🙂

qqq22:07:00

I'm watching the 2013 clojure conj https://www.youtube.com/watch?v=Os7FE3J-U5Q talk ... and I really want to try this out.

nathanmarz22:07:45

we can agree on that 🙂

tbaldridge22:07:17

@qqq records for the AST, postwalk for the passes, run till fixpoint, about all there is to it
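
A toy sketch of that shape (maps for AST nodes, a postwalk pass, run to fixpoint; the node keys are made up for the example):

(require '[clojure.walk :as walk])

(defn constant-fold
  "One 'pass': collapse {:op :add} nodes whose operands are literal numbers."
  [ast]
  (walk/postwalk
    (fn [node]
      (if (and (map? node)
               (= :add (:op node))
               (number? (:left node))
               (number? (:right node)))
        (+ (:left node) (:right node))
        node))
    ast))

(defn run-to-fixpoint [pass ast]
  (let [ast' (pass ast)]
    (if (= ast ast') ast (recur pass ast'))))

(run-to-fixpoint constant-fold
  {:op :add :left 1 :right {:op :add :left 2 :right 3}})
;; => 6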

tbaldridge22:07:37

or hashmaps even for the ast, whatever you prefer

noisesmith22:07:15

or you could get creative and event source a queue of characters and fold over it to generate a projection representing your compiled code

noisesmith22:07:20

just kidding, don’t do that

hiredman22:07:34

depending on your source and target you can do it without an ast

hiredman22:07:05

if you say your target is a superset of your source, it is macroexpansion

hiredman22:07:12

(or pretty close)

tbaldridge22:07:34

true, but working with order-dependent types is unpleasant

tbaldridge22:07:07

may be better with spec, but {:keys [fn-name body]} is easier than [_ fn-name _ body]

hiredman22:07:27

https://github.com/hiredman/qwerty "macroexpands" a lisp in to something like go in parens, then the emitter strips the parens so you can feed it to the go compiler

hiredman22:07:31

I dunno if it rates an exclamation point, it was fun to fiddle with it for a while, then I gave up on it

qqq22:07:36

hmm, and if I write the passes as transducers, can I easily get a monolithic compiler out of this?

tbaldridge22:07:04

you can, but you'll quickly find that some passes need to be run more than once, or need to be run before/after other passes

tbaldridge22:07:30

tools.analyzer is a nano-pass compiler imo.

bronsa22:07:33

and in different traversal orders

tbaldridge22:07:51

yeah that too.

tbaldridge22:07:39

@bronsa does tools.analyzer.jvm still walk the AST backwards for locals clearing? Maybe I'm mis-remembering, but I thought that was cool the first time I saw it.

bronsa22:07:21

makes the algorithm much simpler than walking forward & collecting usage points

bronsa22:07:45

I did it that way just because I couldn't understand the forward algorithm implemented in Compiler.java TBH :)

tjscollins23:07:25

I'm struggling to wrap my head around clj-oauth. All the examples use twitter, but I can't seem to get it to work with Google. Google and twitter's terminology doesn't seem to be the same, and it's confusing the heck out of me. Are there any examples using google's api I can look at?

dealy23:07:14

I'm having trouble with core.async pub/sub. It seems like there must be something I'm not understanding. I can see that my system sometimes is publishing a lot of events really close together. Say 20 within 2 seconds. There are times that all but one of my subscribe loops (event listeners) aren't doing anything during this burst of activity. Then a while later (a couple of min sometimes) during another burst of publish activity the subscribe loops come to life and grab some data and push on to the subscriber channels.

dealy23:07:44

I started out by not using any buffering on the channels. I'm not really clear when additional buffering makes sense or not

dealy23:07:03

It really seems like some of my published events are being lost

hiredman23:07:52

the first thing to do is check that your topic-fn is returning values from the set of things you are subscribing to

dealy23:07:53

when just using (chan) for pub should my source threads block if the subscribers are idle?

dealy23:07:15

yea, I have lots of debug messages that all look pretty good

hiredman23:07:25

how sure are you?

dealy23:07:35

well I currently only have one topic

hiredman23:07:40

like, are we talking strings you know are byte for byte the same?

dealy23:07:05

my topic is just a keyword

hiredman23:07:34

the next thing to log would be the identity (pr-str prints out the identity hash if I recall) of each thing involved

hiredman23:07:17

to make sure you are creating the pub/sub on the same channel you are publishing to, and you are subscribing to the same pub/sub you are publishing to

hiredman23:07:20

if the channel you are publishing to isn't being consumed for some reason, your publishes will block if there is no buffer, and will block once the buffer is full

hiredman23:07:47

if I recall, a pubsub will consume everything and just ignore messages it has no subscribers for

dealy23:07:58

right, that's the thing, it appears that my publishers never block

hiredman23:07:34

so I would double check your topic-fn, make sure it returns what you think it does on the inputs to the channel

dealy23:07:51

when no buffer is supplied to the channel, does that mean it will block after the first put, until the first take?

hiredman23:07:28

e.g. if your topic-fn is a keyword, and your messages are maps, calling a keyword on a map that doesn't contain it would just return nil
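
A small sketch of that failure mode (channel and key names are made up):

(require '[clojure.core.async :as a])

(def in  (a/chan))
(def p   (a/pub in :topic))   ;; topic-fn is just the keyword :topic
(def out (a/chan 8))
(a/sub p :order/created out)

;; message without a :topic key -> topic-fn returns nil -> no subscriber,
;; so the pub consumes and silently drops it
(a/>!! in {:type :order/created :id 1})

;; correctly shaped message reaches the subscriber
(a/>!! in {:topic :order/created :id 2})
(a/<!! out)   ;; => {:topic :order/created, :id 2}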

dealy23:07:02

yes, that is how my events are structured, just as maps

hiredman23:07:36

so invoke your topic-fn on one of the maps to see what it returns

dealy23:07:28

that's the thing: it all works fine when I do it by hand, things get weird once there is lots of simultaneous activity

dealy23:07:56

based on my logging I can see all the topics that are published and those that are received by the listener loops

hiredman23:07:57

how do you know your publishers don't block?

dealy23:07:48

I say that just based on my threads that push out lots of messages saying that they're publishing.... hmm wait you might be on to something

dealy23:07:10

my log message happens right before I push not after, I might be misinterpreting whats going on

dealy23:07:24

push=publish

dealy23:07:41

ok, made some changes to logging and restarting everything,

dealy23:07:18

so could you help clarify when it is appropriate to specify a buffer in these scenarios?

noisesmith23:07:23

If things lock up under lots of activity, and you aren't buffering, and lots is < thousands, I'd double check for blocking ops in your go blocks.

dealy23:07:04

yea I'm not into thousands of events, generally should be less than 50/sec