This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-09-08
Channels
- # architecture (8)
- # aws (25)
- # babashka (9)
- # beginners (57)
- # calva (16)
- # cider (16)
- # clj-kondo (3)
- # cljdoc (13)
- # cljsrn (6)
- # clojure (272)
- # clojure-europe (36)
- # clojure-losangeles (1)
- # clojure-nl (8)
- # clojure-poland (3)
- # clojure-spec (4)
- # clojure-uk (8)
- # clojuredesign-podcast (9)
- # clojurescript (92)
- # code-reviews (1)
- # conjure (8)
- # core-async (1)
- # cursive (13)
- # datalog (1)
- # datascript (35)
- # datomic (76)
- # duct (10)
- # emacs (5)
- # events (7)
- # figwheel-main (1)
- # fulcro (35)
- # graalvm (20)
- # graphql (6)
- # jobs (3)
- # klipse (1)
- # london-clojurians (1)
- # malli (3)
- # off-topic (223)
- # pathom (2)
- # pedestal (13)
- # portal (1)
- # reitit (6)
- # remote-jobs (1)
- # shadow-cljs (21)
- # specter (2)
- # sql (63)
- # tools-deps (85)
- # tree-sitter (4)
- # xtdb (6)
A very simple but long-running, loop-based function manages to make all 8 CPU cores busy.
How come? Is it GC or something else? I don't do anything in there but churn numbers and change a single transient vector.
And I'm not exactly sure, but I think the first time I ran it, it used just 2 cores. The second time it started using all 8 cores, and it took quite a bit more time to complete. Same input, no randomness anywhere.
What the hell. It failed with OutOfMemoryError, but the first time it worked just fine.
that's a strong indicator that it was gc using the cores
perhaps it is leaking resources - creating data that remains accessible via some scope outside the loop, or creating something that cannot be collected?
and it isn't eg. creating new lambdas via eval?
or otherwise creating new classes?
@U2FRKM4TW check for reflection warnings. Sometimes, reflection can do bizarre things...
Thanks. Yeah, maybe it was reflection. I enabled the warnings later on, but only after some drastic modifications that by themselves resulted in nicer memory usage.
Initially, I was using a vector of vectors in a very tight loop that gets and sets nested double values. The first run would finish just fine, the second would crash with OOM. So another candidate is boxing, perhaps.
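A minimal sketch of those two suggestions (the function here is hypothetical, not the code from the thread): enable the warnings, then swap the vector of vectors for a primitive array so the doubles stay unboxed.
(set! *warn-on-reflection* true)        ; warn at every reflective call site
(set! *unchecked-math* :warn-on-boxed)  ; warn when arithmetic boxes primitives

;; double-array plus primitive hints keeps the doubles unboxed,
;; unlike a vector of vectors of boxed Doubles:
(defn sum-row ^double [^doubles row]
  (areduce row i acc 0.0 (+ acc (aget row i))))

(sum-row (double-array [1.0 2.0 3.0])) ;; => 6.0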
@ahmed1hsn Picking up your question here as it's somewhat of a thread stealer. I scanned the slide deck on zio. It seems to be about taking the type system in scala and making it (I don't understand how) more about algebraic types. Which, I infer from the slides, are about making functions that behave according to algebraic properties (associativity, etc.). That's a massive shift. I'm not sure it would still be clojure at the end.
as a matter of opinion / design, I personally think the biggest problem with scala is trying to simultaneously work with the JVM (very weak type system, generics as a compile-time fiction only) and use the best features of modern type safety (very strong type system, implicit and inferred types). This is exacerbated by a pattern of breaking code compatibility between compiler versions.
the jvm type system, as much as it has one, is very compatible with lisp + interfaces, which is what clojure uses, and much of clojure's elegance and simplicity comes from embracing this
simultaneously trying to be compatible for interop with the jvm, and enforcing typing concepts the vm isn't capable of representing directly, is IMHO inevitably going to make things worse not better
type checking is in most cases / can be a compile-time feature (outside of optimisations and the like coming from type annotations); you don't really need a cooperating vm
but that's where scala gets extremely messy
sure, it could be done right, I don't see real world evidence of it happening
equivalent problem: make a type safe language with inference, that allows inline machine specific assembly
and yeah, rust might count for my example :D
this reminds me, I do want to learn rust
though I might just learn zig instead
zig is an odd thing. I'd prefer rust, but personally I have no use case to learn either
zig is what people wish (and often delusionally pretend) C would be
it's simple in the way a lisp is (maximal utility from minimal features), close to "the metal"
right, I'd use rust or zig for DSP, where using an allocator could mean your RT constraints fail
@mpenet that's everyone else's fault for not making that happen sooner IMHO
I like simple things: I find fennel-lang for instance quite nifty, it has no bells & whistles, but is very pragmatic and fits the bill in some of the use cases I encountered when clojure was not an option.
I attended the first fennel conf (it was four of us at a bar for a few hours), and hosted the second one (about 10 of us in a rented conference room)
then, sure it's lua with (lisp) lipstick, so aging community & not as many options as the jvm, but it's very usable nonetheless
I implemented the first version of quote / quasiquote / unquote also, but I don't think my grubby hand prints are on that code any more
it self hosts now!
sorry, I'm really not used to other people knowing anything about fennel and its features, I guess I'll have to get used to it :D
I'm slowly migrating my awesome wm config to fennel
you might be able to automate that; if I recall, technomancy used a script to port part of fennel's lua code with it
and it looks like fennel will get checked in as a part of the neovim repo soon (to be used in the tree-sitter impl)
right, antifennel
antifennel takes lua and makes equivalent fennel code, I should be able to migrate all my lua code, and just leave a fennel bootstrap stub in its place
seems doable
did you find a better WM?
emacs 🙂 I basically just use a browser and emacs and I rely on boring gnome, I stay full screen with one or the other all the time. My workflow with awesome was quite simple/similar.
aha, yeah, awesome + tabs in neovim is my version, but on multiple screens when possible
also fennel: the ability to compile with bundled lua interpreter resulting in very small files is quite cool
yeah, I had some wacky ideas of how to do that, but luckily Phil found a simpler and more reliable way
there's been talk about bundling byte code instead of source for the fennel code bundled as well, making it slightly more brittle or harder to debug(?) but smaller
it's basically a free feature, since it's what lua was designed for
I've long threatened to bundle fennel into a general purpose kafka tool, maybe I'll find time for it during my current sabbatical
it's super easy to ship a single binary with everything including the ability to make it repl'able
and there's even a lib for nrepl :D
(that is, to make it an nrepl server, usable from those few client impls that don't take clojure on the server for granted...)
I'd go all in on fennel and skip my recent arm64 exploration and planned zig exploration, except I strongly suspect that I want to do things in DSP that lua can't quite hack directly
learning 64 bit arm assembly is extremely humbling, I thought I was so much more clever than I am
though I think I've fully internalized 2's complement and little endian as formats, which I guess is fun trivia though it's sad I even need to care
yeah, that sounds more involved than what I can afford with the little free time I have these days.
I had a somewhat embarrassing accident with hallucinogens (never again, I swear), which left me hospitalized for a week with no internet, no hardcover books, no privacy. Working out some basic architectural stuff like 2's complement encoding on paper with a pen (no pencils allowed) was actually a nice way to pass the time
better would have been not to land myself there in the first place of course
apologies if that's TMI
yeah, that's a lot of work
what we need is a "clojure-junior" that you can teach kids so they can be your offshore for tedious parts of your projects
if their work passes the unit tests, just drop it in
I showed scratch to my girlfriend and her children, but it's definitely a "lead a horse to water" type situation
logo is just a lisp subset that is simple enough to not need parens any more
If you want to do statically typed functional programming on the JVM, then Scala is a great option.
I cross my fingers that Eta becomes usable (with full featured interop) - the way scala works with the vm is kind of messy
Scala is trying to move away from Haskell's abstractions e.g. Cats, ScalaZ. With ZIO and zio-prelude they are trying to do FP in more native way to Scala idioms.
you can use your favorite idioms in your code, but libraries will force you to use the lib authors favorite idioms
eg. a friend was doing a data graph based project, they knew about the pitfalls of implicits and avoided them, but they were cornered into implicits and all the problems they bring because they needed a graph lib that used them everywhere
as a random example (said friend refuses to use scala any more)
Statically typed language folks argue that static typing helps in refactoring large code bases. How true is that claim, and how does this work out with respect to Clojure, for example?
@ahmed1hsn I'm obviously biased, but I find clj-kondo and rg/grep help me refactor quite well
That is an advantage with static languages. The Clojure solution tends to be using good system boundaries with well defined interfaces, but it's definitely an art not a science. I do miss how easy refactoring was in OCaml for example.
Automated refactorings can also yield garbage code, because the amount of thinking is reduced
@borkdude but a good type system can guide a dev through a manual refactor - in OCaml, if I change a module interface, just running :make in nvim takes me to the needed edits; when I stop getting errors, I know I'm done
there's no real way to have that in Clojure (though static tools and unit tests help a lot)
that's true. there was a nice tweet about this from Stuart H.: https://twitter.com/stuarthalloway/status/1234261008560115712
I often enjoy the glib version: "tooling is a language smell"
that is, the kind of tools users of $LANG consider indispensable is a peek into the things the language itself is poorly suited to handle
I've seen bad refactors that got further than they should have because specs were defined and naively trusted
oops, nothing actually checked or enforced said specs, so they were about as useful as comments declaring return / arg types
in a static lang, the type is a property of the code, in clojure a spec is not a property of the code or the data alone, it's a checkable assertion about the data seen by specific code in a specific run time context
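A minimal sketch of that failure mode (the spec and function names are made up): a spec that nothing checks catches nothing.
(require '[clojure.spec.alpha :as s]
         '[clojure.spec.test.alpha :as st])

(s/def ::age pos-int?)
(s/fdef set-age :args (s/cat :age ::age))
(defn set-age [age] age)

;; Nothing enforces the spec, so bad data sails through:
(set-age -1) ;; => -1

;; The spec only bites when something actually checks it:
(s/valid? ::age -1)      ;; => false
(st/instrument `set-age)
;; (set-age -1) now throws a spec error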
What's the situation with core.typed / Typed Clojure?
@ahmed1hsn I think core.typed is still pretty much a research topic?
@ahmed1hsn I don't know those tools / variants but on the topic of architecture, even with a strongly typed language the types can't extend past a single VM without making a big brittle mess. Your individual app might be strongly typed but your microservice can't be. Eventually, at some level, once you have two computers running, you have the same typing guarantees as Clojure (that is, your data can be checked at runtime for validity but nothing can be known about it statically).
The attempts I've seen to act like data between vms / processes is typed only make the problems worse
(the OCaml approach is cute: you can share typed data between processes, but it's an instant failure / abort if the two processes aren't running literally the same compiled binary)
better :D no side effecting constructors, and it bails out if a trivial assertion isn't successful
it allows what java gets by having multiple threads in one process, but without shared memory, which is kind of cool actually
and it's network transparent
but it's still "cute" rather than "awesome" - it's dealing with a fundamental problem that nothing handles perfectly (though I suspect erlang has found a better local optimum than most)
point being, once you leave the realm of "everything in one process", you get all of Clojure's problems anyway
Does this hold true when using something like Avro? I’ve never used it, but it appears to be “types on the wire”.
but you can't use the same types in your application code
and IMHO trying to combine the avro type with your program types makes things worse not better (based on the attempts I've seen)
Ah yea. It appears similar to the keyword conversion problem I find at the borders of my clj systems.
erlang gets away with it via good pattern matching, records, no nil, named tuples and semi-decent static analysis. All in all it's a good combo
plus a good infrastructure for IPC with retries / monitoring
I haven't yet used it in anger though, I do intend to fix that
@borkdude CircleCI used core.typed but they moved to Prismatic Schema: https://www.mail-archive.com/[email protected]/msg73423.html https://circleci.com/blog/why-we're-supporting-typed-clojure/ https://circleci.com/blog/why-were-no-longer-using-core-typed/
Beyond that I haven't seen core.typed in production use.
@ahmed1hsn Yes. I still think Schema is convenient, but it's also still runtime only, like spec. Btw, I think clj-kondo could pick up on Schema's defn, but since there are now at least 3 or 4 libs (schema, spec, malli) doing their own thing, I don't know if this has any priority.
(some work I'm doing related to spec: https://gist.github.com/borkdude/c0987707ed0a1de0ab3a4c27c3affb03#gistcomment-3444982)
One perceived flaw of Schema was that its schemas are closed by default. But it's pretty easy to make them open: {:foo s/Str s/Any s/Any}?
To be honest, I think the opposite is the flawed approach. You can make a closed schema open, but you can't do the opposite.
Also, you can use closed schemas as security measures, like "don't allow a non-admin user to change the status of a user in this REST path". Rails decided that everything was open by default, then changed the approach (breaking lots of code in the process) because of lots of bugs and security issues in production code - even github suffered from that flaw, I believe
the approach in spec 2 is that "closing" is a property of the check, not of the spec
something you can enable during validation, conform, etc if needed
yeah, in practice I added s/Any s/Any to every hash-map in schema, and 98% of my schemas were for hash-maps
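For reference, a small sketch of both shapes in Prismatic Schema (the keys are invented for illustration):
(require '[schema.core :as s])

;; Closed (the default): unknown keys fail validation.
(def Closed {:foo s/Str})
(s/check Closed {:foo "a" :bar 1}) ;; => {:bar disallowed-key}

;; Opened with s/Any s/Any: unknown keys are allowed.
(def Open {:foo s/Str s/Any s/Any})
(s/check Open {:foo "a" :bar 1})   ;; => nil (valid)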
tangent: I wonder if an entropy detecting tool would be useful for catching mismatched components. My idea being if you have a "state" or "component" map tree with the same data repeated many times at many levels of branching, that could indicate an app that's growing faster than it's being designed
the open-ness part could be another namespaced key, that would get validated even if not specified in the s/keys
yes, different approaches. one assumes an attribute is always the same thing, the other not
malli has much of that flexibility too and possibly more, maybe it will be schema.next in terms of community impact
Personally, I am not a fan of having both approaches for something that should aim to become a standard. I tend to prefer the Spec style, but that's just me
we're circling back to the Scala discussion 🙂 imho enabling many different styles to deal with something that should be shared by the community at large is not great
I don't follow. This is the same as spec? You don't have to use a local registry, it's an edge case that is supported optionally
> Example to use a registered qualified keyword in your map. If you don't provide a schema to this key, it will look in the registry. I wonder how you register a qualified keyword associated with a schema globally
I guess you can define a single registry that you'd use everywhere (if you follow that rule)
malli is immutable by default but supports global mutable registries. Just not by default. But, it's pre-alpha, feedback welcome.
@ikitommi So the feedback / question is: if lib A defines a malli spec for keyword :foo/bar, how can you enforce that library B will validate this as intended? (did I say that right @mpenet)?
afaik you cannot enforce it. You have to know where to get the info from the registry of the lib, merge it with yours potentially and then do the same for every lib that has specs registered (if any)
currently, the lib needs to have its own registry (map) and you can compose it with your app registry. Could poll if people want a global mutable thing.
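A sketch of that composition, assuming malli's explicit registry option (the lib schema here is invented):
(require '[malli.core :as m])

;; lib A ships its schemas as a plain map...
(def lib-a-registry {:foo/bar int?})

;; ...and the app merges it with the defaults, passing the result explicitly.
(def app-registry (merge (m/default-schemas) lib-a-registry))

;; :foo/bar has no inline schema, so it is looked up in the registry:
(m/validate [:map :foo/bar] {:foo/bar 1} {:registry app-registry})
;; => true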
one could ask the question if this is a common thing: validating maps from other libs
you can always force an immutable (and serializable) registry inside your app, for some parts where that matters
take that in the context of a haskell-like type system: is it useful to have the guarantee that a Thing is always assumed to be the same?
@mpenet it's probably the other libs responsibility to create those maps, which can from then on be assumed to be true Things?
Unless you like to create constructors for all your data 🙂 and guarantee that any producer will use these
But maybe it's quite personal, some people like it one way or the other I imagine. Right now I tend to prefer attributes to have a strong identity (like in datomic or graphql), but I guess that might not be the case of everybody, clojure enables many ways to do the same thing.
If I want to expose a clojure map/data oriented api as JSON, but I use qualified keywords, would the "default" approach just be to stringify the keys, skipping the colon, ie :my.qualified/key to "my.qualified/key"? Just want to make sure I'm not missing some json convention or issues or something
This is a long-discussed topic, and as far as I know there are no conventions.
- https://andersmurphy.com/2019/05/04/clojure-case-conversion-and-boundaries.html
- https://vvvvalvalval.github.io/posts/clojure-key-namespacing-convention-considered-harmful.html
- https://juxt.pro/blog/idiomatic-integration
Here is one discussing the “considered harmful” post above: https://clojureverse.org/t/clojures-keyword-namespacing-convention-considered-harmful/6169/6 I don’t think that’s the one I had in mind tho :thinking_face:
Ah here it is. I think it was the precursor to Valentin’s blog post actually: https://clojureverse.org/t/should-we-really-use-clojures-syntax-for-namespaced-keys/1516
Those are both essentially the same article, both by the same author, two years apart.
(and I still think he's mostly wrong 🙂 )
The discussion itself is what I wanted to link to, because there’s quite a bit of disagreement in this area.
I remember seeing it and not agreeing, and now, a month later, I need some JSON compatibility and see it in a whole new light haha
I'm in the camp of use qualified keys wherever you can and convert as appropriate if needed at the edges.
Given that you can go in and out of the database with qualified keys (if you're using Datomic, or a JDBC setup with next.jdbc), you can stay in qualified keyword land for pretty much everything except JSON interactions -- and even then some systems will happily accept / in key strings...
I work daily in a system where Clojure is the minority, interfacing with both Ruby and JS. The pain of keyword conversion is real, but I sympathize with both sides of the argument. Clojure best practice can be argued as a general best practice, but it definitely creates friction elsewhere. I liked option 2 in the last link I shared, which was deemed "all terrain keys": especially a conversion that somehow preserves the namespace portion, allowing for systematic conversion back into clj without loss of data.
I actually tried having some keywords in kebab case while the "remote" keys remained snake case, and it was extremely confusing. Ended up reverting to converting at the edges.
Both Cheshire and c.d.j offer controls for preserving or dropping qualifiers when generating JSON, and for adding a qualifier or not when parsing JSON, so it's trivial to transform at the edges tho'...
The snake_case / kebab-case thing is something you have to think about with next.jdbc too, if your DB has tables or columns in snake_case. Hence the built-in support for the camel-snake-kebab library if you have it on your classpath 🙂
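A hedged sketch of that built-in support (the datasource ds and the table are made up): with camel-snake-kebab on the classpath, next.jdbc offers kebab-case result-set builders.
(require '[next.jdbc :as jdbc]
         '[next.jdbc.result-set :as rs])

;; snake_case columns come back as qualified kebab-case keywords:
(jdbc/execute! ds
               ["select user_id, first_name from user_profile"]
               {:builder-fn rs/as-kebab-maps})
;; => e.g. [#:user-profile{:user-id 1, :first-name "Ada"}]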
Unfortunately, we have some legacy DB stuff written in headlessCamelCase (both tables -- which are case sensitive -- and columns -- which are not, in MySQL at least).
(and somewhere along the way we passed through nodelimitercase in our transition from headlessCamelCase to snake_case in MySQL... argh!)
Ha sounds familiar. The issue with preserving namespaces is less that it’s difficult and more that your coworkers give you funny looks, ie social friction. It can result in some pretty long keys which some people find unappealing (which admittedly is not a great technical reason, but alas). We have good support for them in Clojure at least, other langs aren’t so lucky...
I'd agree that outside my app namespaces aren't very useful, and inside my app I want them. If you don't have control of how data gets into your app, and it doesn't happen in a small number of easy-to-intervene places, namespacing keys is only one of many problems you are going to have
I'd use well placed ingestion / dispersion middleware, not a special "encoding" of the namespace into the key
another thing to consider: {:foo/bar 1 :foo/baz 2 :quux/OK 3} -> {foo: {bar: 1, baz: 2}, quux: {OK: 3}}
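That transformation is easy enough to sketch (the function name is made up):
;; Group map entries by keyword namespace; namespaces become outer keys.
(defn nest-by-namespace [m]
  (reduce-kv (fn [acc k v]
               (assoc-in acc [(namespace k) (name k)] v))
             {} m))

(nest-by-namespace {:foo/bar 1 :foo/baz 2 :quux/OK 3})
;; => {"foo" {"bar" 1, "baz" 2}, "quux" {"OK" 3}}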
For some reason, that doesn't sit well with me. I think maybe that's the only point in Val's article that I agree with: don't treat qualified keywords as something structural. I think it's because, typically, at the edges you're not going to be mapping to/from a nested (unqualified) structure.
I prefer to explicitly map qualified names <-> unqualified. We usually do not control external resources. I made this lib for that: https://github.com/souenzzo/eql-as
@jjttjj Both Cheshire and clojure.data.json allow you to specify how keys are turned into JSON keys (strings) so you can choose to drop the qualifier or keep it.
user=> (require '[cheshire.core :as ches] '[clojure.data.json :as dj])
nil
user=> (ches/generate-string {:a/b 1 :c/d 2})
"{\"a/b\":1,\"c/d\":2}"
user=> (ches/generate-string {:a/b 1 :c/d 2} {:key-fn name})
"{\"b\":1,\"d\":2}"
user=> (dj/write-str {:a/b 1 :c/d 2})
"{\"b\":1,\"d\":2}"
user=> (dj/write-str {:a/b 1 :c/d 2} :key-fn (comp str symbol))
"{\"a\\/b\":1,\"c\\/d\":2}"
user=>
@jjttjj /cc @U6GFE9HS7 c.d.j escapes / by default, but that can be turned off.
good to know! wouldn't have guessed that with the escaped slashes. Haven't settled on a json lib yet
(their defaults are opposite, BTW)