This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2016-11-03
Channels
- # beginners (20)
- # boot (407)
- # cider (17)
- # cljs-dev (29)
- # cljsrn (33)
- # clojure (169)
- # clojure-greece (17)
- # clojure-russia (47)
- # clojure-spec (40)
- # clojure-uk (81)
- # clojurescript (64)
- # clr (3)
- # copenhagen-clojurians (3)
- # core-async (1)
- # cursive (28)
- # datomic (26)
- # editors-rus (4)
- # emacs (10)
- # events (1)
- # figwheel (1)
- # funcool (1)
- # hoplon (82)
- # jobs (1)
- # klipse (10)
- # lein-figwheel (26)
- # leiningen (1)
- # off-topic (2)
- # om (153)
- # overtone (2)
- # pedestal (15)
- # proton (1)
- # re-frame (6)
- # ring-swagger (1)
- # rum (1)
- # slack-help (4)
- # untangled (56)
- # vim (24)
- # yada (2)
I'm trying to use CIDER's instrument to debug a function in om.next.impl.parser,
but I get the error: Namespace not found.
Any thoughts?
hey guys, is it possible to create a defmethod in a macro?
create/generate
@baptiste-from-paris sure it is. Expand to whatever you will; if it looks like a defmethod, it is a defmethod
ok so basically I can generate a defmethod and use it right after in my macro
Are there any clojure/clojurescript implementation of challenge response authentication?
@baptiste-from-paris yes, it will be exactly the same as if you typed that defmethod by hand, except the macro did it for you
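A minimal sketch of the idea (the multimethod `area` and the shape keywords are made-up names, not from the thread): the macro just expands to an ordinary `defmethod` form, which behaves exactly as if written by hand.

```clojure
(defmulti area :shape)

(defmacro def-area
  "Expands to a plain defmethod on `area`. A sketch; all names are hypothetical."
  [shape-kw args & body]
  `(defmethod area ~shape-kw ~args ~@body))

;; These expand to ordinary (defmethod area :square ...) forms:
(def-area :square [{:keys [side]}] (* side side))
(def-area :circle [{:keys [r]}] (* Math/PI r r))

(area {:shape :square :side 3}) ;=> 9
```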
well, I use https://github.com/jarohen/chime for job scheduling. Pretty straight forward to add persistence, I think?
Question: I have a webapp, and when I compile my cljs to .js using Leiningen for use in an uberjar, it compiles to app-min.js. But now users sometimes have to press CTRL+F5 or CMD+SHIFT+R or similar to avoid seeing the old version. I thought about compiling to app-min-<projectversionnumber>.js, but how do I do that in a project.clj file? Can I get the project version string somehow from the project.clj file?
@mpenet There's also https://github.com/juxt/tick which takes care of scheduling for you
lazy-seq of instants optionally fed to c.async channels via go/timeouts (acting as the poor man's clock)
@kah0ona I assume you require app-min.js via a script tag in a html document? If so, how is that html document served? Is it generated dynamically?
The answer is likely going to be that you want to randomly generate some number and add it as a query param to the url you're requiring in that script tag.
@dominicm I need to be able to express correct "every wednesday at 2pm" sort of things too
@shdzzl, it's actually a static file now, but come to think of it, that can easily be changed. Is that reliable, adding a random parameter?
@mpenet tick is well designed for this. For example, these are easter sundays https://github.com/juxt/tick/blob/master/src/tick/core.clj#L102-L112
@kah0ona There are other solutions. You can look into cache busting strategies if you want. But, yes, that works as long as the parameter is sufficiently random, you could use a timestamp alternatively.
Or you can filter days using filter like normal, in combination with periodic-seq; depends how complicated you want to go
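The filter-over-a-periodic-seq idea for the "every Wednesday at 2pm" case above can be sketched with plain java.time, with no scheduling library at all (function and variable names here are made up for illustration):

```clojure
(import '(java.time ZonedDateTime DayOfWeek ZoneId))

(defn wednesdays-at-2pm
  "Lazy seq of Wednesday-14:00 times, one per week, starting from `start`.
  (If `start` is a Wednesday after 2pm, the first element is in the past;
  a real scheduler would drop past times.)"
  [^ZonedDateTime start]
  (->> (-> start (.withHour 14) (.withMinute 0) (.withSecond 0) (.withNano 0))
       (iterate (fn [^ZonedDateTime t] (.plusDays t 1)))   ; the periodic seq
       (filter (fn [^ZonedDateTime t]                      ; the filter
                 (= DayOfWeek/WEDNESDAY (.getDayOfWeek t))))))

(take 3 (wednesdays-at-2pm (ZonedDateTime/now (ZoneId/of "UTC"))))
```

A library like tick or chime would then feed such a seq to core.async timeouts, as described above.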
(the thing about using a random number is that it makes the file uncacheable - you want the query-string parameter to have some relationship with the file’s contents, so if the file doesn’t change for a week, users are able to use the cached version during that week)
you often see people use a hash of the file as the query-string parameter for this reason
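A sketch of that content-hash approach in plain Clojure (the helper name and the example path are hypothetical): the query param only changes when the file's bytes change, so caching keeps working between deploys.

```clojure
(require '[clojure.java.io :as io])

(defn file-hash
  "Hex MD5 of a file's contents, for use as a cache-busting query param."
  [path]
  (let [md  (java.security.MessageDigest/getInstance "MD5")
        out (java.io.ByteArrayOutputStream.)]
    (io/copy (io/file path) out)
    (format "%032x" (BigInteger. 1 (.digest md (.toByteArray out))))))

;; e.g. in whatever emits the script tag (hypothetical path):
;; (str "/js/app-min.js?v=" (file-hash "resources/public/js/app-min.js"))
```

Computing this once at startup (rather than per request) is usually enough, since the file only changes on deploy.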
Does anyone here use redis in a cluster set up? Preferably using redis-sentinel? I know carmine doesn’t have sentinel support yet so wondering what people do as their workaround
We’ve just switched from Carmine to Jedis to support Redis Cluster.
We’re still using Nippy for serialization (like Carmine).
Be aware in Redis Cluster mode, you cannot run MULTI/EXEC so you need to use Lua scripts or some other approach for those use cases.
(but we’re too early in the process to give you much guidance beyond that @shayanjm )
awesome, @seancorfield, thanks so much
@shayanjm we moved to Jedis as well. I don't have a lot more context than that at the moment 🙂
@jcsims @seancorfield : are you guys using the Java library directly? I found a clj-jedis wrapper library
Cool. Additional question: this'll be my first time playing with clustered redis instances. What sort of consistency guarantees do you get with them? My specific use case is that I need data to be made available immediately to my API users. My understanding is that unless you're reading and writing from master (which defeats the purpose of the clustering), you have no guarantees that you're not just reading stale data
Using test selectors in Leiningen, is there a way to exclude multiple selectors from :default?
Here they use (complement :integration) as the example. But how do you use complement in the context of project.clj to check multiple selectors?
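One common pattern (a sketch; `:integration` and `:slow` are assumed selector names) is to make `:default` a predicate over the test var's metadata map, so several keys can be excluded at once instead of using complement:

```clojure
;; in project.clj — :default rejects any test tagged with one of the
;; excluded selector keywords (assumed names):
:test-selectors {:default     (fn [m] (not (some m [:integration :slow])))
                 :integration :integration
                 :slow        :slow}
```

With that, `lein test` skips tests tagged `^:integration` or `^:slow`, while `lein test :integration` still runs the integration tests on demand.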
the documentation says:
> (defn- execute-batch
>   "Executes a batch of SQL commands and returns a sequence of update counts.
>   (-2) indicates a single operation operating on an unknown number of rows.
>   Specifically, Oracle returns that and we must call getUpdateCount() to get
>   the actual number of rows affected. In general, operations return an array
>   of update counts, so this may not be a general solution for Oracle..."
@shayanjm We’re using a small protocol layer over Jedis (so we can handle operations seamlessly for both clustered and non-clustered usage), no wrapper.
@dpsutton And you’re using Oracle?
I’ve not seen MS SQL Server return -1 so I don’t know what that means.
My tests against MS SQL Server have always seen it return update counts (0, 1, 2, etc).
What driver are you using? I test with both the jTDS and Microsoft Type 4 JDBC drivers.
I downloaded it and added it to our local Archiva repo 🙂
But I can start testing java.jdbc with the type 3 driver as well for better coverage.
Where do I find the Type 3 driver?
I can only find the Type 4 driver...
I need to implement key-based authentication, something similar to what ssh does, where the server is Clojure and the client is ClojureScript. Any ideas of any libraries that can help me?
I'm trying to use clojure.test for integration tests with a database and compojure-api. I want to know the best way to control the order in which tests run. For example, A depends on B in the database. I can use a POST request to set up A and test the POST route at the same time. Then I can run B after.
seems nice and economical. But I presume that, test selectors or not, lein test will just run the deftests in some arbitrary order.
@tom Tests, no matter the language or the type, should always be self-contained and independent of each other. Every test should make sure it creates the environment it needs.
@sveri Ok, just looking for a way to avoid having to keep an up to date test database or other similar monstrosities.
@tom What I usually do, is to setup every test with the needed entries in the database.
@sveri I see; how do you like to organize the test code? Everything inside deftest, or in functions that can be shared?
It depends, but what can be shared will be shared. And I try to test every business function on its own with some test data, without looking into the db. This way I only have very few tests that interact with the db
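A sketch of that per-test setup with clojure.test fixtures; an atom stands in for the real database here (all names hypothetical), since the shape of the fixture is the point:

```clojure
(require '[clojure.test :refer [deftest is use-fixtures]])

;; a fake "db" — in real code this would seed and truncate actual tables
(def db (atom nil))

(defn with-seeded-db
  "Each test starts from a known state and cleans up after itself."
  [f]
  (reset! db {:a {:id 1 :name "A"}})   ; create what the test needs
  (try (f)
       (finally (reset! db nil))))      ; tear down regardless of outcome

(use-fixtures :each with-seeded-db)

(deftest b-can-rely-on-a
  ;; A exists because the fixture made it, not because another test ran first
  (is (= "A" (get-in @db [:a :name]))))
```

Because `:each` fixtures wrap every deftest, run order stops mattering.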
@tom I would definitely encourage you to try to separate out your code so you have a DB query function, a pure business logic function, and a DB update function (if needed), so your route endpoint calls the query function, then calls the business function passing in all the data it needs, then calls the update function with the result (if needed) — that lets you unit test your business logic independently of a database.
The query function (and update function) should be essentially trivial and shouldn’t require much testing at that point. You can do an integration level test on your API if you still feel you need it, but try to do as much testing on the pure business logic as possible.
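A toy sketch of that query / pure-logic / update split (all names are hypothetical, and an atom of a map stands in for the database): only the pure function carries the logic worth unit testing.

```clojure
;; fake "database" for the sketch
(def orders* (atom {1 {:id 1 :total 100.0}}))

(defn fetch-order [id]                         ; trivial DB read
  (get @orders* id))

(defn apply-discount [order pct]               ; pure business logic
  (update order :total #(* % (- 1 (/ pct 100.0)))))

(defn save-order! [order]                      ; trivial DB write
  (swap! orders* assoc (:id order) order))

;; the endpoint just threads the three together:
(defn discount-endpoint [id pct]
  (-> (fetch-order id) (apply-discount pct) (save-order!)))
```

`apply-discount` can be tested with plain maps, no database or HTTP in sight; the read and write functions stay too simple to need much testing.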
We switched to Timbre… it’s… OK.
We route all the log4j and other stuff through Timbre (via an add-on library) and then control everything through Timbre itself.
@seancorfield what add-on?
Let me look it up...
https://gist.github.com/anonymous/066e28fb8936822ecf934fc78fd154fb is my current copy and paste logging setup
com.fzakaria/slf4j-timbre
my preference for logging during testing is to have a log4j2.xml on the classpath for tests that redirects all logging to a log file
I expect we’ll wrap Timbre in a Component and set it up one way for testing and a different way for production, at some point.
and then when building a jar to deploy either ship with a different log to console or whatever you want in production, or have that supplied by whoever is deploying it
the reason to direct logging to a log file during test runs is all you see is your tests passing unless something goes wrong
https://github.com/hiredman/raft/blob/master/project.clj is an example of my old logback copy and paste setup, but in a new project I was playing with some parts used log4j2, which hated being forced to log to logback
I switched to clojure tools logging from timbre. I do not like that timbre does that macro magic with its info / error / debug / ... functions. They look ugly as unresolved symbols in intellij
I am not very sympathetic to complaints about logging, because generally it is something you set up once and then never touch again. My last job had pretty much the same log4j.properties for the entire five years I was there. And it was a similar setup: a testing log4j.properties and a production log4j.properties
anybody here a level-40 wizard at dealing with type hints (metadata, i guess) in macros?
to avoid reflection
specifically
concrete example:
will that, in fact, apply a type hint that mimics (.iterator ^java.lang.Iterable (function-that-returns-an-iterable-collection foo))?
@sorenmacbeth let's test it out!
probably not?
user=> (defmacro hint-thing [& body] `(with-meta (do ~@body) {:tag java.lang.Iterable}))
#'user/hint-thing
user=> (.iterator (hint-thing (a)))
Reflection warning, /Users/bfabry/Code/repl/target/09c3093ab7781029f5c68b951342f5ad52a0c32a-init.clj:1:1 - reference to field iterator can't be resolved.
#object[clojure.lang.PersistentVector$2 0x31000a10 "clojure.lang.PersistentVector$2@31000a10"]
thanks for the try
this seems to work
feels like there should be a nicer looking way to cons that bit around the body than what I'm doing but whatever
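For reference, one approach that does avoid the reflection warning (a sketch, not necessarily the exact code used above, which isn't in the log) is to put the `:tag` metadata on the form the compiler sees at macroexpansion time, rather than wrapping the runtime value in with-meta as in the failed attempt:

```clojure
(set! *warn-on-reflection* true)

(defmacro hinted-iterable
  "Hint the expansion itself: metadata goes on the syntax, so the compiler
  sees ^java.lang.Iterable (do ...) and resolves interop calls statically."
  [& body]
  (with-meta `(do ~@body) {:tag 'java.lang.Iterable}))

;; compiles without a reflection warning:
(defn first-via-iterator [coll]
  (.next (.iterator (hinted-iterable coll))))
```

The failed version called with-meta on the *value* at runtime; type hints only work when the metadata is on the form at compile time.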
@sorenmacbeth have you done any talks/blogs on your latest stack? you're no longer on cascalog, it's all flambo + ?
i've been out of the loop for 1-2 years on streaming & batch computation, and would like to learn about the current cutting edge
no talks recently
but it's all just storm (via our dsl marceline) + flambo
no hadoop, no spark-streaming
haven't really had a need/opportunity to put it through its paces
when we need streaming, we use the storm infra we already have in place
there was some talk of cascalog onto trident a while ago, but I guess nobody put in the effort yet
fwiw we're no longer using cascalog at Zendesk either. a lot of work on Google Dataflow currently. we have our own little clojure wrapper which I haven't got round to open-sourcing yet =/
problem with spark-streaming is that you have to do all the hard work of keeping track of kafka offsets and things that the storm kafka spouts take care of for you
sorry, we're no longer using cascalog for anything new. we have some rather large things still running on cascalog that are being eol'd
@bfabry ah nice, yeah, I would enjoy seeing that wrapper published, even if it's just a rough example
and i haven't seen any projects or anything that makes that as easy as it is in storm
@sorenmacbeth are the majority of your topologies trident?
i think we're one of the few
that really use trident
I don't ever hear much about it on the mailing lists, etc
as a java api it's pretty clunky
@sorenmacbeth any dataflow connectors I've seen manage all the offsets etc. for you as well, though the kafka connectors are pretty new
marceline makes it easy to write
our storm stuff and flambo stuff are very separate
I could see perhaps looking at spark-streaming seriously if I need to apply part of a streaming computation directly in a batch thing or vice-versa
but we haven't had that need either
when we need to communicate across that boundary, shared state in a DB or kafka topic has worked fine
haven't heard good or bad about it
but I'm skeptical out of the gate
distributed computation is hard and has so many sharp corners
building something from scratch seems risky/unnecessary
yah, they have dynamic topologies; as I understand it, that's got to add a lot of complexity / edge cases
I'd be interested to hear your experience if you end up using it in production