#clojure
2016-04-02
igortn03:04:04

I have a Java class defined with deftype. It’s happily resolved dynamically and works in the REPL. Also, by using (:gen-class) in the namespace and :aot in the project file, I have compiled it into the library jar. I have this lib as a dependency of another Java project. IntelliJ recognizes this dependency and the corresponding imports. The code looks all right - everything is properly resolved. But javac cannot resolve this class, and compilation breaks. Can anybody shed light on this?

shriphani04:04:13

Hi, I’ve got a ring project here with a dependency and I get this flaky error: Exception in thread "main" java.lang.NoClassDefFoundError: schema/utils/PSimpleCell

shriphani04:04:15

the tests all pass locally but don’t on CI.

shriphani04:04:21

has anyone seen something like it ?

igortn04:04:31

Regarding my previous post from 11:51 PM - I was able to get it working by switching to defining my Java class with the gen-class macro instead of using deftype. Now the class is properly resolved from the library. I am fine now, but if anybody can share their experience, please do!
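For reference, a sketch of the gen-class route igortn describes. All names here are hypothetical, and the namespace must be AOT-compiled (listed under :aot in project.clj) so the class file actually exists for javac:

```clojure
(ns mylib.point
  (:gen-class
    :name mylib.Point                   ; the class name javac will see
    :init init
    :state state
    :constructors {[Object Object] []}
    :methods [[getX [] Object]
              [getY [] Object]]))

;; -init returns [superclass-ctor-args, state]
(defn -init [x y]
  [[] {:x x :y y}])

(defn -getX [this] (:x (.state this)))
(defn -getY [this] (:y (.state this)))
```

Compiled with something like `:aot [mylib.point]` in project.clj, `mylib.Point` can then be imported from Java like any other class.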

shriphani04:04:02

well igortn your post gives me an idea.

shriphani04:04:20

for my problem so I feel that’s fairly useless 😄

kwladyka09:04:38

What solutions do you use for OAuth 2? ( server app, client app, resource app)

slipset11:04:24

Cemerick/friend

abtv13:04:38

Could anyone help me with a transducer problem? I use a lazy-seq and several chained maps and filters. Every second I receive a message and write it to the console. When I replaced it with (sequence x-form my-lazy-seq) it started behaving strangely: it doesn't print anything for 30 or more seconds, then prints all the values, then freezes and prints again, and so on. What could be the problem?

lopalghost14:04:28

When you say print, do you mean the function print?

lopalghost14:04:53

It's probably an issue with chunking--how lazy sequences are realized

lopalghost14:04:19

If you want to return the entire collection at once, you should use into instead of sequence

abtv14:04:44

I receive data from a core.async channel: 1 message every second. It's an infinite process, that's why I need a sequence. @lopalghost

abtv14:04:57

I write to the console when I receive a message from a channel. It writes every second. But the processing writes every 30 seconds...

lopalghost14:04:23

If you're calling print on items in a lazy sequence, it will print a chunk at a time

lopalghost14:04:56

For side effects, since you're consuming a channel, you should probably just use a go block

abtv14:04:19

I use (doseq [x xs] (prn x)) inside async.thread

abtv14:04:54

xs is (sequence x-form my-lazy-seq)

abtv14:04:48

I can't understand why it works in such a manner.

lopalghost14:04:47

Why not just apply the transducer to a channel and consume from the channel in a thread loop?

lopalghost14:04:01

I think that would be idiomatic
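A sketch of lopalghost's suggestion, with a hypothetical transducer standing in for abtv's x-form: put the transducer on the channel and consume with a blocking loop in a thread.

```clojure
(require '[clojure.core.async :as a])

;; stand-in for the real x-form
(def xform (comp (map inc) (filter even?)))

;; the transformation runs as values pass through the channel
(def ch (a/chan 16 xform))

;; consume in a thread loop; <!! blocks, nil means the channel closed
(a/thread
  (loop []
    (when-some [v (a/<!! ch)]
      (prn v)
      (recur))))
```

With this in place, (a/>!! ch 1) makes the consumer print 2 almost immediately: no chunking is involved.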

abtv14:04:23

yes, I think it will work, but I don't understand why my code doesn't work. It's really strange

abtv14:04:35

I think I'm missing something

abtv14:04:49

The question: what am I doing wrong? 🙂

lopalghost14:04:55

I'm afraid I can't explain why your way doesn't work--I understand transducers well enough to use them, but it's difficult to explain how they work :)

lopalghost14:04:59

I'm pretty sure it has to do with the fact that sequence returns a lazy sequence, though

abtv14:04:32

hm, I use (doseq... Isn't that enough to consume it?

lopalghost14:04:12

What if you used into with prn in a transducer?

lopalghost14:04:20

Just curious

abtv14:04:37

Will into work with lazy seq?

lopalghost14:04:47

Oh nvm, you said it's infinite

abtv14:04:58

exactly 🙂

lopalghost14:04:44

Well, pretty sure sequence applies transducers as the seq is consumed, so doseq receives xs in chunks

abtv14:04:06

yes, it looks like what you said

lopalghost14:04:19

If you composed (map prn) with xform and just tried to return the sequence in a thread, that might work

lopalghost14:04:29

But again, wouldn't be idiomatic

lopalghost14:04:04

No, actually that might not work either. But maybe if you called first on it in a loop

abtv14:04:19

I will try to use dorun...

abtv14:04:03

yes, dorun works! you gave me a good idea about chunks! thanks 🙂
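The pattern abtv landed on, sketched with a stand-in transducer and input: dorun walks the transduced seq purely for its side effects, returning nil and not retaining the head.

```clojure
;; stand-ins for the real x-form and message seq
(def xform (map prn))

(dorun (sequence xform (range 5)))  ; prints 0..4, returns nil
```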

abtv14:04:55

interesting... is it ok to have side effects inside a map function? I think it's a bit non-idiomatic, but why not? my map would return just a seq of nils... Any opinions?

lopalghost14:04:42

Oh cool, forgot about dorun

lopalghost14:04:42

Ehh... Because map returns a lazy sequence, and you want to avoid the kind of problems you had

lopalghost14:04:43

As for using map to make a transducer... I'm actually not sure whether side effects are ok inside transducers

abtv14:04:21

yep, it's bad

abtv14:04:43

so, I'll put the transducer on the channel

kwladyka15:04:04

slipset: thx.

cky16:04:01

I ❤️ that map is lazy. It makes a clear case for why people should not abuse map for side-effecting stuff. 😛

cky16:04:27

Scheme’s map is not lazy (since Scheme doesn’t have built-in laziness, except via delay/`force`), but it does say that the function is not called on the list elements in any specific order (and in practice, right-to-left is a common ordering), so using it for side-effecting stuff is also not recommended. 😛
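A quick illustration of cky's point: side effects in map don't run until something realizes the sequence, which is exactly how they get silently skipped or delayed.

```clojure
(def v (map println [1 2 3]))  ; prints nothing yet -- map is lazy
(dorun v)                      ; now 1, 2 and 3 are printed
```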

jjttjj16:04:09

Anyone ever interact with a system outside your control which requires an ID, and it really should be a UUID but the outside system only allows ints, so you just use a secure random int and hope for the best? Pretty sure I'm comfortable with the tradeoff of the chance of clashes vs having to manage unique IDs manually in my particular scenario, but it feels weird

macrobartfast17:04:01

working on seesaw for fun ( https://gist.github.com/daveray/1441520 ) in LightTable... (-> f pack! show!) isn't producing a window in LightTable; any insights? (feel free to direct me to the right place if this isn't it).

macrobartfast17:04:30

I got it working in the repl directory (not in LightTable)... but I do like LightTable at times.

macrobartfast17:04:17

(-> f pack! show!) did produce #object[seesaw.core.proxy$javax.swing.JFrame... in LightTable... but not a swing window. I'm sure the word swing is making everyone ill right now (but I've always kinda liked it).

josh.freckleton17:04:24

so, clojure as a backend deployed on heroku is abominably slow, and I notice from the heroku logs that every time the server boots up (I'm on the free tier), it has to retrieve hundreds of libs and takes a few minutes before it'll be up and running. How do you correctly deploy to heroku? Package a jar, and make Heroku treat it as a Java app instead of Clojure? Something else?

2016-04-02T17:40:35.829911+00:00 heroku[web.1]: Unidling
2016-04-02T17:40:35.830275+00:00 heroku[web.1]: State changed from down to starting
2016-04-02T17:40:40.228772+00:00 heroku[web.1]: Starting process with command `lein with-profile production trampoline run`
2016-04-02T17:40:42.480333+00:00 app[web.1]: Setting JAVA_TOOL_OPTIONS defaults based on dyno size. Custom settings will override them.
2016-04-02T17:40:42.484769+00:00 app[web.1]: Downloading Leiningen to .lein/leiningen-2.6.1-standalone.jar now...
2016-04-02T17:40:43.057991+00:00 app[web.1]: Picked up JAVA_TOOL_OPTIONS: -Xmx350m -Xss512k -Dfile.encoding=UTF-8
2016-04-02T17:40:52.793194+00:00 app[web.1]: Retrieving lein-cljsbuild/lein-cljsbuild/1.1.1/lein-cljsbuild-1.1.1.pom from clojars
2016-04-02T17:40:53.262799+00:00 app[web.1]: Retrieving lein-cljsbuild/cljs-compat/1.0.0-SNAPSHOT/cljs-compat-1.0.0-20151218.091126-41.pom from clojars
2016-04-02T17:40:53.434015+00:00 app[web.1]: Retrieving org/clojure/clojure/1.5.1/clojure-1.5.1.pom from central
2016-04-02T17:40:53.483088+00:00 app[web.1]: Retrieving org/sonatype/oss/oss-parent/5/oss-parent-5.pom from central
2016-04-02T17:40:53.618584+00:00 app[web.1]: Retrieving fs/fs/1.1.2/fs-1.1.2.pom from clojars
2016-04-02T17:40:53.766365+00:00 app[web.1]: Retrieving org/clojure/clojure/1.3.0/clojure-1.3.0.pom from central
2016-04-02T17:40:53.889886+00:00 app[web.1]: Retrieving org/apache/commons/commons-compress/1.3/commons-compress-1.3.pom from central
... and many dozens more rows like this

codefinger17:04:55

@josh.freckleton: yea, you need to package an uberjar, otherwise lein will try to download the dependencies on boot

josh.freckleton17:04:14

thanks @codefinger, haha it takes sooo long! so if I make an uberjar, will it still be a "clojure" app on heroku, or should it just be a java one?

codefinger17:04:53

@josh.freckleton: it's really up to you. some people prefer to deploy with Git, and let Heroku run lein to package the uberjar. Others prefer to package the uberjar on CI and deploy with heroku-deploy or lein-heroku: https://github.com/heroku/heroku-deploy https://github.com/heroku/lein-heroku

codefinger17:04:07

i maintain both. so feel free to ask me questions 🙂

codefinger17:04:57

also, if the uberjar is big (like > 100mb), it's usually fast to Git push. so that's the most common solution for dev work

codefinger17:04:08

s/fast/faster/

codefinger17:04:14

also... i think there's a way to include the lein deps in the slug, so that you can run your app with lein and it doesn't download the deps. but i forget....

codefinger17:04:38

i really encourage folks not to run with lein in production (or maven or any other build tool)

josh.freckleton17:04:42

oh awesome, thanks! The uberjar should hopefully be small for my project: not a ton of deps, not a ton of code. Which method would you recommend?

josh.freckleton17:04:57

too many options in clojure scare me!

codefinger17:04:51

for small jars, i really like lein-heroku

codefinger17:04:07

but the most common (by a very large number) is Git push

codefinger17:04:55

@josh.freckleton: g2g. if you have trouble with uberjar or anything else, ping me on here. i lurk.

josh.freckleton17:04:48

thanks @codefinger! I'll try to figure out the common way you mention, with git push. I assume you mean that people git push heroku master, and then in project.clj, something tells Heroku's lein how to compile an uberjar. If not, when you get a chance, I'd love a ping back, but otherwise I'll just be working on this 🙂

codefinger18:04:39

@josh.freckleton: yea, if Heroku sees "uberjar" in your project.clj it will attempt to build an uberjar and everything should "just work" (assuming everything works locally when you run lein uberjar). depending on your application, it can be easy or hard to convert from lein run or whatever to an uberjar. so yea, give it a try and hit me up if you have problems

josh.freckleton18:04:29

thanks @codefinger, this is a big help! Have a good weekend 🙂

codefinger18:04:16

actually. to clarify, Heroku needs to see "uberjar-name" in your project.clj. otherwise, try to follow the steps here https://github.com/heroku/heroku-buildpack-clojure#uberjar
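A minimal project.clj sketch of what the Heroku Clojure buildpack looks for; the project and jar names here are placeholders:

```clojure
(defproject myapp "0.1.0-SNAPSHOT"
  :uberjar-name "myapp-standalone.jar"   ; the key Heroku checks for
  :profiles {:uberjar {:aot :all}})
```

With that in place, a Procfile along the lines of `web: java $JVM_OPTS -jar target/myapp-standalone.jar` runs the jar directly, skipping lein (and the dependency download) at boot.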

abtv18:04:08

Suppose we use a core.async channel with a transducer. What thread will perform the transducer transformations? The same thread which puts data on the channel, right?

hiredman18:04:00

it is not specified

abtv18:04:05

so, it can be another thread from thread pool, no?

hiredman18:04:08

for channels where the buffer is not full, I think it is the thread doing the put, but if the buffer is full so the put is queued up until there is room in the buffer, it could be anything from the pool

hiredman18:04:22

but I am not 100% on that, best not to care about it

abtv18:04:26

As I understand it, if the buffer is full, we wait until we can put something there. So it blocks (in the sense of waiting), right?

hiredman18:04:31

for go blocks with >! or <! it logically blocks, for threads with <!! and >!! it actually blocks

hiredman18:04:37

what I mean by logically blocks is, from the perspective of the code it blocks and continues once the action completes, but in reality the code is sort of suspended and shoved off to the side until the action completes, so it isn't actually blocking a thread
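The distinction in a tiny sketch:

```clojure
(require '[clojure.core.async :as a])

(def ch (a/chan))

;; >! inside go "logically blocks": the block is parked and resumed
;; later, so no thread is held while it waits
(a/go (a/>! ch :hello))

;; <!! really blocks the calling thread until a value arrives
(a/<!! ch)   ; => :hello
```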

abtv18:04:20

what about the put! fn? for example, I receive data in callbacks (say, some legacy code) and call put! there. If the channel is full, what happens to the callback? Will it wait until we can put something on the channel?

hiredman18:04:29

yeah, those get queued up somewhere waiting for the channel to empty, if you queue up enough of them core.async will throw errors
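The error hiredman mentions is concrete: core.async allows at most 1024 pending puts per channel, after which put! throws. A sketch:

```clojure
(require '[clojure.core.async :as a])

(def c (a/chan))                  ; no buffer, no consumer
(dotimes [_ 1024] (a/put! c :x))  ; each put! returns immediately, queued
;; one more pending put throws:
;; (a/put! c :x)  ; AssertionError: No more than 1024 pending puts ...
```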

hiredman18:04:27

the only place it is reasonable to use put! is in callbacks in javascript, and even then you should, within reason, avoid it

abtv19:04:10

hm, I use the twitter streaming api in Clojure. There is a callback for receiving data. How can I avoid it?

hiredman19:04:34

use the real blocking versions <!! and >!!

abtv19:04:28

not sure I understand. There is a callback which triggers when new data is available. How can I use >!! there?

abtv19:04:49

the callback is outside the thread block

hiredman19:04:07

>!! and <!! are not tied to thread

hiredman19:04:18

you can use them on any real thread

abtv19:04:35

ah, ok. and is that best practice?

abtv19:04:57

to put data with >!! from a callback?

hiredman19:04:23

yes, using put! is bad

abtv19:04:42

I thought put! was designed for such cases as interop... Do you mean that >!! is always better than put!?

Lambda/Sierra19:04:06

put! is truly asynchronous. It returns immediately, never blocks. Therefore it's typically used in callbacks where you don't control the calling context.

Lambda/Sierra19:04:37

>!! is blocking, typically used when you do control the calling context and want it to block the current thread.

Lambda/Sierra19:04:10

If some API is putting messages on a channel faster than you can consume them, use sliding or dropping buffers to shed the excess.
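A sketch of the sliding-buffer approach: the producer never blocks, and a slow consumer always sees the newest values.

```clojure
(require '[clojure.core.async :as a])

;; keeps the 3 newest values; when full, new puts evict the oldest
(def c (a/chan (a/sliding-buffer 3)))

(doseq [i (range 10)] (a/>!! c i))  ; never blocks; 0..6 are dropped
(a/close! c)
(a/<!! (a/into [] c))               ; => [7 8 9]
```

a/dropping-buffer is the mirror image: when full it discards the newest put instead, keeping the oldest values.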

abtv20:04:03

oh, great advice about the sliding buffer! I really don't know how these callbacks are implemented, so that means I should use put! and a sliding-buffer, right? @stuartsierra

abtv20:04:40

and I won't use dropping-buffer because it doesn't fit my case

tungsten21:04:35

(defn dumb [a] (a)) (dumb [5])

tungsten21:04:39

why doesn't that work?

d-side21:04:34

@bfast you're effectively evaluating ([5]), i.e. invoking the vector [5] as a function with no arguments.

d-side21:04:22

Looks like you're looking for (defn dumb [a] a)
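Spelling out what happens in each version:

```clojure
(defn dumb [a] (a))   ; (a) invokes the argument as a function
;; (dumb [5]) throws ArityException: a vector is a function of its
;; indices, e.g. ([5] 0) => 5, but here it's called with no arguments

(defn fixed [a] a)    ; just return the argument
(fixed [5])           ; => [5]
```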

d-side21:04:12

This may be a better fit for #C053AK3F9 channel though. Cheers!

tungsten21:04:23

i don't really understand though

macrobartfast21:04:11

sweet... got seesaw working from lighttable! amazing.

macrobartfast21:04:39

can't wait to show mom.