#clojure
2016-11-03
fenton10:11:10

I'm trying to cider instrument for debugging a function in om.next.impl.parser but i get the error: Namespace not found. any thoughts?

baptiste-from-paris12:11:07

hey guys, is it possible to create a defmethod in a macro?

moxaj12:11:29

@baptiste-from-paris sure it is. Expand to whatever you will, if it looks like a defmethod, it is a defmethod

baptiste-from-paris12:11:22

ok so basically I can generate defmethod and use it just after in my macro

Pablo Fernandez12:11:02

Are there any clojure/clojurescript implementation of challenge response authentication?

moxaj12:11:16

@baptiste-from-paris yes, it will be exactly the same as if you typed that defmethod by hand, except the macro did it for you
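
To illustrate, a minimal sketch of a macro that expands to a `defmethod` (the multimethod and helper names here are hypothetical):

```clojure
;; dispatch on the :type key of a map
(defmulti shape-area :type)

;; a macro whose expansion is an ordinary defmethod
(defmacro defarea [type-kw argv & body]
  `(defmethod shape-area ~type-kw ~argv ~@body))

;; each of these expands to (defmethod shape-area ...)
(defarea :square [{:keys [side]}] (* side side))
(defarea :rect   [{:keys [w h]}]  (* w h))

(shape-area {:type :square :side 3}) ;; => 9
```

Exactly as moxaj says: once expanded, these behave the same as hand-written defmethods.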

mpenet13:11:43

any recommendation for a job scheduling library with persistence ?

kurt-yagram14:11:40

well, I use https://github.com/jarohen/chime for job scheduling. Pretty straightforward to add persistence, I think?

mpenet14:11:42

not so sure, state is basically in channel queues

kah0ona14:11:44

Question: I have a webapp, and when i compile my cljs to .js using leiningen for use in an uberjar, it compiles to app-min.js. But now users sometimes have to press CTRL+F5 or CMD+SHIFT+R or similar to prevent viewing the old version. I thought about compiling to app-min-<projectversionnumber>.js, but how do I do that in a project.clj file? Can I get the project version string somehow from the project.clj file?

kah0ona14:11:53

during uberjar building that is

dominicm15:11:05

@mpenet There's also https://github.com/juxt/tick which takes care of scheduling for you

dominicm15:11:39

You just pass it a function to call back

mpenet15:11:01

nice, it's closer to what I had in mind

dominicm15:11:33

Disclaimer: I do work for JUXT, and am sat next to tick's creator

dominicm15:11:56

@mpenet what did you have in mind?

mpenet15:11:39

lazy-seq of instants optionally fed to c.async channels via go/timeouts (acting as the poor man's clock)
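
A rough sketch of that idea, assuming `org.clojure/core.async` is on the classpath (all names hypothetical):

```clojure
(require '[clojure.core.async :as a])
(import '(java.time Instant Duration))

;; infinite lazy seq of instants, one per period
(defn instant-seq [^Instant start ^Duration period]
  (iterate #(.plus ^Instant % period) start))

;; feed each instant onto a channel when its time arrives —
;; a go/timeout loop acting as the poor man's clock
(defn clock-chan [instants]
  (let [out (a/chan)]
    (a/go-loop [[t & more] instants]
      (let [ms (- (.toEpochMilli ^Instant t) (System/currentTimeMillis))]
        (when (pos? ms)
          (a/<! (a/timeout ms)))
        (when (a/>! out t)
          (recur more))))
    out))
```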

mpenet15:11:58

so tick has the lazy-seq part it seems, half the work is done

mpenet15:11:28

the rest is straightforward

shdzzl15:11:46

@kah0ona I assume you require app-min.js via a script tag in a html document? If so, how is that html document served? Is it generated dynamically?

shdzzl15:11:44

The answer is likely going to be that you want to randomly generate some number and add it as a query param to the url you're requiring in that script tag.

mpenet15:11:07

@dominicm I need to be able to express correct "every wednesday at 2pm" sort of things too

kah0ona15:11:12

@shdzzl, it's actually a static file right now, but come to think of it that can easily be changed. Is adding a random parameter reliable?

mpenet15:11:29

Really nice lib, I like what I see. will use it.

dominicm15:11:49

@mpenet tick is well designed for this. For example, these are easter sundays https://github.com/juxt/tick/blob/master/src/tick/core.clj#L102-L112

shdzzl15:11:07

@kah0ona There are other solutions. You can look into cache busting strategies if you want. But, yes, that works as long as the parameter is sufficiently random; alternatively, you could use a timestamp.

dominicm15:11:43

Or you can filter days using filter like normal in combination with periodic-seq, depends how complicated you want to go
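
For example, "every Wednesday at 2pm" as a filtered periodic seq — a hedged sketch using plain `java.time` rather than tick's own helpers:

```clojure
(import '(java.time LocalDateTime DayOfWeek))

;; daily instants starting 2016-11-01 at 14:00
(def daily
  (iterate #(.plusDays ^LocalDateTime % 1)
           (LocalDateTime/of 2016 11 1 14 0)))

;; keep only the Wednesdays
(def wednesdays-2pm
  (filter #(= DayOfWeek/WEDNESDAY (.getDayOfWeek ^LocalDateTime %)) daily))

;; first two occurrences: Nov 2 and Nov 9 2016, both at 14:00
(take 2 wednesdays-2pm)
```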

mpenet15:11:42

that should do, I can't think of a thing it couldn't handle in my use case

jrheard15:11:40

(the thing about using a random number is that it makes the file uncacheable - you want the query-string parameter to have some relationship with the file’s contents, so if the file doesn’t change for a week, users are able to use the cached version during that week)

jrheard15:11:59

you often see people use a hash of the file as the query-string parameter for this reason
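
A small sketch of that approach (function names hypothetical): hash the compiled file's bytes and use the digest as the query param, so the URL only changes when the contents do:

```clojure
(import '(java.security MessageDigest))

;; hex-encode the MD5 digest of some bytes
(defn md5-hex [^bytes data]
  (->> (.digest (MessageDigest/getInstance "MD5") data)
       (map #(format "%02x" %))
       (apply str)))

;; in whatever renders the page, build the script tag from the
;; compiled JS file's bytes — same contents, same URL, so it stays cacheable
(defn script-tag [js-bytes]
  (str "<script src=\"/js/app-min.js?v=" (md5-hex js-bytes) "\"></script>"))
```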

shdzzl15:11:40

^ This is correct.

kah0ona16:11:00

yeah i figured that. thanks for mentioning though

shayanjm18:11:56

Does anyone here use redis in a cluster setup? Preferably using redis-sentinel? I know carmine doesn’t have sentinel support yet so wondering what people do as their workaround

seancorfield18:11:07

We’ve just switched from Carmine to Jedis to support Redis Cluster.

seancorfield18:11:21

We’re still using Nippy for serialization (like Carmine).

seancorfield18:11:02

Be aware that in Redis Cluster mode you cannot run MULTI/EXEC, so you need to use Lua scripts or some other approach for those use cases.

seancorfield18:11:26

(but we’re too early in the process to give you much guidance beyond that @shayanjm )

shayanjm18:11:53

awesome, @seancorfield, thanks so much

jcsims18:11:22

@shayanjm we moved to Jedis as well. I don't have a lot more context than that at the moment 🙂

shayanjm18:11:46

@jcsims @seancorfield : are you guys using the Java library directly? I found a clj-jedis wrapper library

shayanjm18:11:58

not sure if it’s worth introducing yet another point of failure

ghadi18:11:40

I had a former team that also moved from carmine to Jedis, much happiness

jcsims18:11:00

@shayanjm we use the Java Jedis lib directly

ghadi18:11:20

(we used Jedis directly too -- java interop is smooth)

shayanjm18:11:22

Cool. Additional question: this’ll be my first time playing with clustered redis instances. What sort of consistency guarantees do you get with them? My specific usecase is that I need data to be made available immediately to my API users. My understanding is that unless you’re reading and writing from master (which defeats the purpose of the clustering) - you have no guarantees that you’re not just reading stale data

shayanjm18:11:46

is my intuition/understanding correct?

tom19:11:12

Using test selectors in leiningen, is there a way to exclude multiple selectors from :default?

tom19:11:20

I was reading lein help test

tom19:11:19

Here they use complement :integration as the example. But how do you use complement in the context of project.clj to check multiple selectors?

tom19:11:50

Just gonna do (fn [s] (not (or ... )))
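
In `project.clj` that could look something like this (the `:integration` and `:slow` selector names are hypothetical):

```clojure
;; project.clj fragment — :default excludes every tagged selector
:test-selectors {:default     (fn [m] (not (or (:integration m) (:slow m))))
                 :integration :integration
                 :slow        :slow}
```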

agigao19:11:38

Any hints about text summarization library for Clojure? :)

dpsutton19:11:03

I'm using jdbc, specifically db-do-commands

dpsutton19:11:10

the documentation says:

(defn- execute-batch
  "Executes a batch of SQL commands and returns a sequence of update counts.
  (-2) indicates a single operation operating on an unknown number of rows.
  Specifically, Oracle returns that and we must call getUpdateCount() to get
  the actual number of rows affected. In general, operations return an array
  of update counts, so this may not be a general solution for Oracle..."

dpsutton19:11:33

However, i'm getting back (-1)

dpsutton19:11:40

Not really sure what's going on here

seancorfield19:11:09

@shayanjm We’re using a small protocol layer over Jedis (so we can handle operations seamlessly for both clustered and non-clustered usage), no wrapper.

seancorfield19:11:02

@dpsutton And you’re using Oracle?

dpsutton19:11:18

no i'm using MS Sql Server

dpsutton19:11:22

i just quoted the documentation

dpsutton19:11:33

was just surprised to see -1 but -2 was mentioned in the docs

dpsutton19:11:44

but i'm guessing that just means the same but in MS speak?

seancorfield19:11:51

I’ve not seen MS SQL Server return -1 so I don’t know what that means.

seancorfield19:11:22

My tests against MS SQL Server have always seen it return update counts (0, 1, 2, etc).

seancorfield19:11:43

What driver are you using? I test with both the jTDS and Microsoft Type 4 JDBC drivers.

dpsutton20:11:34

i'm using the microsoft type 3 jdbc driver

dpsutton20:11:45

i've been meaning to try using the jTDS one though

dpsutton20:11:02

i'm frustrated by MS not distributing the type 4 one

seancorfield20:11:25

I downloaded it and added it to our local Archiva repo 🙂

seancorfield20:11:46

But I can start testing java.jdbc with the type 3 driver as well for better coverage.

seancorfield20:11:14

Where do I find the Type 3 driver?

seancorfield20:11:39

I can only find the Type 4 driver...

dpsutton20:11:50

clojars is where i'm pulling it

dpsutton20:11:57

or am i mistaken in version numbers

dpsutton20:11:02

perhaps type 4 version 3?

dpsutton20:11:18

[com.microsoft/sqljdbc4 "3.0"]

Pablo Fernandez20:11:23

I need to implement key-based authentication, something similar to what ssh does, where the server is Clojure and the client is ClojureScript. Any ideas of any libraries that can help me?

dpsutton20:11:29

i think i was confusing my terminology

tom20:11:37

I'm trying to use core test for integration tests with a database and compojure-api. I want to know the best way to control the order in which tests occur? For example, A depends on B in the database. I can use a POST request to set up A and test the post route at the same time. Then I can run B after.

tom20:11:28

seems nice and economical. But I presume, using test selectors or not, lein test will just run the deftests in any random order.

sveri20:11:27

@tom Tests, no matter the language or the type, should always be self-contained and independent of others. Every test should make sure it creates the environment it needs.

tom20:11:08

@sveri Ok, just looking for a way to avoid having to keep an up to date test database or other similar monstrosities.

sveri20:11:59

@tom What I usually do, is to setup every test with the needed entries in the database.

tom20:11:55

@sveri I see, how do you like to organize the code tests? Everything inside deftest or in functions that can be shared?

sveri20:11:40

It depends, but, what can be shared, will be shared. And I try to test every business function alone with some testdata without looking into the db. This way I only have very few that interact with the db

seancorfield21:11:02

@tom I would definitely encourage you to try to separate out your code so you have a DB query function, a pure business logic function, and a DB update function (if needed), so your route endpoint calls the query function, then calls the business function passing in all the data it needs, then calls the update function with the result (if needed) — that lets you unit test your business logic independently of a database.

seancorfield21:11:12

The query function (and update function) should be essentially trivial and shouldn’t require much testing at that point. You can do an integration level test on your API if you still feel you need it, but try to do as much testing on the pure business logic as possible.
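
A toy sketch of that separation, with an atom standing in for the database (all names hypothetical):

```clojure
;; stand-in "database"
(def orders (atom {1 {:id 1 :total 100.0}}))

;; trivial DB query — not much to test here
(defn fetch-order [db id]
  (get @db id))

;; pure business logic — unit test this with plain data, no database
(defn apply-discount [order]
  (update order :total * 0.9))

;; trivial DB update
(defn save-order! [db {:keys [id] :as order}]
  (swap! db assoc id order)
  order)

;; the route endpoint just wires the three together
(defn discount-endpoint [db id]
  (->> (fetch-order db id)
       apply-discount
       (save-order! db)))
```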

devn21:11:12

What do people use for logging? Java logging bums me out.

seancorfield21:11:28

We switched to Timbre… it’s… OK.

devn21:11:39

With timbre, do you still wind up with a log4j.properties file or a logback.xml?

devn21:11:50

like if you want to tune log levels for some particular dep

seancorfield21:11:45

We route all the log4j and other stuff through Timbre (via an add-on library) and then control everything through Timbre itself.

devn21:11:47

Other query is, what about adjusting log levels for tests

seancorfield21:11:17

Let me look it up...

seancorfield21:11:59

com.fzakaria/slf4j-timbre

hiredman21:11:43

my preference for logging during testing is to have a log4j2.xml on the classpath for tests that redirects all logging to a log file

seancorfield21:11:26

I expect we’ll wrap Timbre in a Component and set it up one way for testing and a different way for production, at some point.

hiredman21:11:52

and then when building a jar to deploy either ship with a different log to console or whatever you want in production, or have that supplied by whoever is deploying it

hiredman21:11:02

(same thing if you are using a log4j.properties)

hiredman21:11:10

the reason to direct logging to a log file during test runs is all you see is your tests passing unless something goes wrong

hiredman21:11:01

https://github.com/hiredman/raft/blob/master/project.clj is an example of my old logback copy and paste setup, but in a new project I was playing with some parts used log4j2, which hated being forced to log to logback

sveri21:11:57

I switched to clojure tools logging from timbre. I do not like that timbre does that macro magic with its info / error / debug / ... functions. They look ugly as unresolved symbols in intellij

hiredman21:11:16

I am not very sympathetic to complaints about logging, because generally it is something you set up once and then never touch again. My last job had pretty much the same log4j.properties for the entire five years I was there. And it was a similar setup, a testing log4j.properties and a production log4j.properties

hiredman21:11:51

so to me timbre solves a non-problem, which annoys me 🙂

sorenmacbeth22:11:05

anybody here level 40 wizard at dealing with type hints (metadata i guess) in macros?

sorenmacbeth22:11:27

to avoid reflection

sorenmacbeth22:11:39

concrete example:

sorenmacbeth22:11:49

will that, in fact, apply a type hint that mimics (.iterator ^java.lang.Iterable (function-that-returns-an-iterable-collection foo))

bfabry22:11:20

@sorenmacbeth let's test it out!

bfabry22:11:40

probably not?

user=> (defmacro hint-thing [& body] `(with-meta (do ~@body) {:tag java.lang.Iterable}))
#'user/hint-thing
user=> (.iterator (hint-thing (a)))
Reflection warning, /Users/bfabry/Code/repl/target/09c3093ab7781029f5c68b951342f5ad52a0c32a-init.clj:1:1 - reference to field iterator can't be resolved.
#object[clojure.lang.PersistentVector$2 0x31000a10 "clojure.lang.PersistentVector$2@31000a10"]

bfabry22:11:26

but maybe it will work if it was a symbol

bfabry22:11:56

oh I'm putting the tag on the wrong thing altogether

bfabry22:11:12

clearly not a wizard, I give up

bfabry22:11:28

I swear I made that work once before

sorenmacbeth22:11:13

thanks for the try

sorenmacbeth23:11:17

this seems to work

sorenmacbeth23:11:15

feels like there should be a nicer looking way to cons that bit around the body than what I'm doing but whatever
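
One approach that does work — a sketch, not necessarily what sorenmacbeth ended up with — is to attach the `:tag` at macroexpansion time to the emitted form itself, rather than calling `with-meta` at runtime the way the earlier REPL attempt did:

```clojure
(defmacro hint-iterable [& body]
  ;; with-meta runs during expansion here, so the *form* the macro
  ;; returns carries the :tag — exactly like writing ^java.lang.Iterable
  ;; in front of the expression by hand
  (with-meta `(do ~@body) {:tag 'java.lang.Iterable}))

;; expands to ^java.lang.Iterable (do [1 2 3]),
;; so the .iterator call resolves without reflection
(.iterator (hint-iterable [1 2 3]))
```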

jasonjckn23:11:09

@sorenmacbeth have you done any talks/blogs on your latest stack? you're no longer on cascalog, it's all flambo + ?

jasonjckn23:11:27

i've been out of the loop for 1-2 years on streaming computation & batch, would like to learn about the current cutting edge

sorenmacbeth23:11:36

no talks recently

sorenmacbeth23:11:08

but it's all just storm (via our dsl marceline) + flambo

jasonjckn23:11:31

your stack is spark and storm now? no hadoop, no spark streaming?

sorenmacbeth23:11:40

no hadoop, no spark-streaming

jasonjckn23:11:02

cool, so spark streaming didnt meet some of your requirements then?

jasonjckn23:11:07

or it's just how things evolved

sorenmacbeth23:11:30

haven't really had a need/opportunity to put it through its paces

sorenmacbeth23:11:44

when we need streaming, we use the storm infra we already have in place

jasonjckn23:11:50

there was some talk of cascalog onto trident a while ago, but I guess nobody put in the effort yet

jasonjckn23:11:02

marceline looks nice

bfabry23:11:40

fwiw we're no longer using cascalog at zendesk either. a lot of work on google dataflow currently. we have our own little clojure wrapper which I haven't got round to making open yet =/

sorenmacbeth23:11:03

problem with spark-streaming is that you have to do all the hard work of keeping track of kafka offsets and things that the storm kafka spouts take care of for you

bfabry23:11:07

sorry, we're no longer using cascalog for anything new. we have some rather large things still running on cascalog that are being eol'd

jasonjckn23:11:19

@bfabry ah nice, yah i would enjoy seeing that wrapper published even if just rough example

sorenmacbeth23:11:45

and i haven't seen any projects or anything that makes that as easy as it is in storm

jasonjckn23:11:58

@sorenmacbeth are the majority of your topologies trident?

sorenmacbeth23:11:19

i think we're one of the few

sorenmacbeth23:11:24

that really use trident

sorenmacbeth23:11:35

I don't ever hear much about it on the mailing lists, etc

jasonjckn23:11:48

yah I think that's right

sorenmacbeth23:11:48

as a java api it's pretty clunky

bfabry23:11:57

@sorenmacbeth any dataflow connectors I've seen manage all the offsets etc for you as well, though the kafka connectors are pretty new

sorenmacbeth23:11:59

marceline makes it easy to write

bfabry23:11:19

(we primarily use pubsub)

sorenmacbeth23:11:59

our storm stuff and flambo stuff are very separate

sorenmacbeth23:11:39

I could see perhaps looking at spark-streaming seriously if I need to apply part of a streaming computation directly in a batch thing or vice-versa

sorenmacbeth23:11:00

but we haven't had that need either

sorenmacbeth23:11:26

when we need to communicate across that boundary, shared state in a DB or kafka topic has worked fine

jasonjckn23:11:47

then there's onyx too

jasonjckn23:11:52

haven't used it yet though

sorenmacbeth23:11:58

haven't heard good or bad about it

sorenmacbeth23:11:08

but I'm skeptical out of the gate

sorenmacbeth23:11:38

distributed computation is hard and has so many sharp corners

sorenmacbeth23:11:10

building something from scratch seems risky/unnecessary

jasonjckn23:11:27

yah they have dynamic topologies; as I understand it, that's got to add a lot of complexity / edge cases

jasonjckn23:11:39

very excited about it regardless

sorenmacbeth23:11:43

I'd be interested to hear about your experience if you end up using it in production

jasonjckn23:11:14

nods i'll let you know if we do