#off-topic
2020-09-08
p-himik08:09:22

A very simple but long-running loop-based function manages to make all 8 CPU cores busy. How come? Is it GC or something else? I don't do anything there but churn numbers and change a single transient vector.

p-himik08:09:20

And I'm not exactly sure, but I think the first time I ran it, it used just 2 cores, while the second time it started using all 8 cores and took quite a bit more time to complete. Same input, no randomness anywhere.

p-himik08:09:07

What the hell. It failed with OutOfMemoryError, but the first time it worked just fine.

noisesmith11:09:22

that's a strong indicator that it was gc using the cores

noisesmith11:09:57

perhaps it is leaking resources - creating data that remains accessible via some scope outside the loop, or creating something that cannot be collected?

p-himik11:09:28

I really doubt that - the function is pure and I immediately discard its result.

noisesmith11:09:48

and it isn't eg. creating new lambdas via eval?

noisesmith11:09:01

or otherwise creating new classes?

mauricio.szabo01:09:56

@U2FRKM4TW check for reflection warnings. Sometimes, reflection can do bizarre things...

p-himik07:09:51

Thanks. Yeah, maybe it was reflection. I enabled the warnings later on, but only after some drastic modifications that by themselves resulted in much nicer memory usage. Initially, I was using a vector of vectors in a very tight loop that gets and sets nested double values. The first run would finish just fine, the second would crash with OOM. So another candidate is boxing, perhaps.
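(A hedged sketch of how those two suspects surface, not the original code; the function names and data are made up. Clojure can warn about both reflection and boxed math via two standard dynamic vars.)

```clojure
;; Illustrative sketch only. Both vars below are standard Clojure and
;; surface the two suspects discussed above.
(set! *warn-on-reflection* true)        ; warn on each reflective interop call
(set! *unchecked-math* :warn-on-boxed)  ; warn on each boxed arithmetic op

;; Boxed: with no hints, every + goes through java.lang.Double,
;; allocating garbage on each iteration of a tight loop.
(defn sum-boxed [xs]
  (reduce (fn [acc x] (+ acc x)) 0.0 xs))

;; Primitive: a double-array plus areduce keeps the accumulator unboxed.
(defn sum-prim ^double [^doubles xs]
  (areduce xs i acc 0.0 (+ acc (aget xs i))))

(sum-prim (double-array [1.0 2.0 3.0])) ;=> 6.0
```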

Drew Verlee08:09:16

@ahmed1hsn Picking up your question here as it's somewhat of a thread stealer. I scanned the slide deck on zio. It seems to be about taking the type system in scala and making it (I don't understand how) more about algebraic types, which, I infer from the slides, are about making functions that behave according to algebraic properties (associativity, etc.). That's a massive shift. I'm not sure it would still be clojure at the end.

👍 3
noisesmith11:09:06

as a matter of opinion / design, I personally think the biggest problem with scala is trying to simultaneously work with the JVM (very weak type system, generics as a compile-time fiction only) and use the best features of modern type safety (very strong type system, implicit and inferred types). This is exacerbated by a pattern of breaking code compatibility between compiler versions

noisesmith11:09:23

the jvm type system, as much as it has one, is very compatible with lisp + interfaces, which is what clojure uses, and much of clojure's elegance and simplicity comes from embracing this

noisesmith11:09:22

simultaneously trying to be compatible for interop with the jvm, and enforcing typing concepts the vm isn't capable of representing directly, is IMHO inevitably going to make things worse not better

mpenet11:09:37

type checking in most cases is / can be a compile-time feature (outside of optimisations & co coming from type annotations); you don't really need a cooperating vm

noisesmith11:09:09

but that's where scala gets extremely messy

mpenet11:09:11

but sure, it's simple to do less and stay close to the vm if/when possible

noisesmith11:09:23

sure, it could be done right, I don't see real world evidence of it happening

mpenet11:09:51

typed racket & gleam come to mind

mpenet11:09:58

hell, even rust

noisesmith12:09:05

equivalent problem: make a type safe language with inference, that allows inline machine specific assembly

noisesmith12:09:18

and yeah, rust might count for my example :D

mpenet12:09:46

It's hard, sure. And difficult to do it right

noisesmith12:09:03

this reminds me, I do want to learn rust

noisesmith12:09:12

though I might just learn zig instead

mpenet12:09:08

zig is an odd thing. I'd prefer rust, but personally I have no use case to learn either

noisesmith12:09:55

zig is what people wish (and often delusionally pretend) C would be

mpenet12:09:31

I'd only use rust to work with embedded devices & co

noisesmith12:09:34

it's simple in the way a lisp is (maximal utility from minimal features), close to "the metal"

mpenet12:09:54

I know, I just find it odd for a new lang in 2020 🙂

noisesmith12:09:57

right, I'd use rust or zig for DSP, where using an allocator could mean your RT constraints fail

noisesmith12:09:13

@mpenet that's everyone else's fault for not making that happen sooner IMHO

mpenet12:09:16

I'd prefer to work in rust in these cases, but that's just me

mpenet12:09:50

but I really don't want to learn rust again

mpenet12:09:56

and likely won't

mpenet12:09:06

(used it quite a bit in the 0.3.x days)

mpenet12:09:54

I like simple things: I find fennel-lang, for instance, quite nifty. It has no bells & whistles, but it's very pragmatic and fits the bill in some of the use cases I encountered when clojure was not an option.

noisesmith12:09:38

I attended the first fennel conf (it was four of us at a bar for a few hours), and hosted the second one (about 10 of us in a rented conference room)

parens 9
mpenet12:09:45

then again, sure, it's lua with (lisp) lipstick, so an aging community & not as many options as the jvm, but it's very usable nonetheless

noisesmith12:09:29

I implemented the first version of quote / quasiquote / unquote also, but I don't think my grubby hand prints are on that code any more

noisesmith12:09:33

it self hosts now!

mpenet12:09:48

since 0.5.x

mpenet12:09:05

no more gnarly lua to deal with (almost)

noisesmith12:09:20

sorry, I'm really not used to other people knowing anything about fennel and its features, I guess I'll have to get used to it :D

noisesmith12:09:42

I'm slowly migrating my awesome wm config to fennel

mpenet12:09:13

you might be able to automate that; if I recall, technomancy used a script to port part of fennel's lua code with it

noisesmith12:09:16

and it looks like fennel will get checked in as a part of the neovim repo soon (to be used in the tree-sitter impl)

noisesmith12:09:26

right, antifennel

noisesmith12:09:56

antifennel takes lua and makes equivalent fennel code, I should be able to migrate all my lua code, and just leave a fennel bootstrap stub in its place

noisesmith12:09:02

seems doable

mpenet12:09:15

I did the same a long time ago

mpenet12:09:20

but I am no longer using awesome

mpenet12:09:32

I did it manually tho

noisesmith12:09:58

did you find a better WM?

mpenet12:09:09

emacs 🙂 I basically just use a browser and emacs and I rely on boring gnome, I stay full screen with one or the other all the time. My workflow with awesome was quite simple/similar.

mpenet12:09:18

terminals in emacs+libvterm

noisesmith12:09:30

aha, yeah, awesome + tabs in neovim is my version, but on multiple screens when possible

mpenet12:09:59

also fennel: the ability to compile with a bundled lua interpreter, resulting in very small files, is quite cool

noisesmith12:09:33

yeah, I had some wacky ideas of how to do that, but luckily Phil found a simpler and more reliable way

noisesmith12:09:08

there's been talk about bundling byte code instead of source for the bundled fennel code as well, making it slightly more brittle or harder to debug(?) but smaller

noisesmith12:09:35

it's basically a free feature, since it's what lua was designed for

noisesmith12:09:03

I've long threatened to bundle fennel into a general purpose kafka tool, maybe I'll find time for it during my current sabbatical

mpenet12:09:06

it's super easy to ship a single binary with everything including the ability to make it repl'able

mpenet12:09:13

quite cool, really

noisesmith12:09:29

and there's even a lib for nrepl :D

noisesmith12:09:52

(that is, to make it an nrepl server, usable from those few client impls that don't take clojure on the server for granted...)

mpenet12:09:54

as I said, no need to mess with rust, I'd be busy enough with other toys

noisesmith12:09:53

I'd go all in on fennel and skip my recent arm64 exploration and planned zig exploration, except I strongly suspect that I want to do things in DSP that lua can't quite hack directly

mpenet12:09:29

when you are that close to the metal there are not many options sure

noisesmith12:09:49

learning 64 bit arm assembly is extremely humbling, I thought I was so much more clever than I am

noisesmith12:09:38

though I think I've fully internalized 2's complement and little endian as formats, which I guess is fun trivia though it's sad I even need to care

mpenet12:09:41

yeah, that sounds more involved than what I could afford with the little free time I have these days.

noisesmith12:09:31

I had a somewhat embarrassing accident with hallucinogens (never again, I swear), which left me hospitalized for a week with no internet, no hardcover books, no privacy. Working out some basic architectural stuff like 2's complement encoding on paper with a pen (no pencils allowed) was actually a nice way to pass the time

noisesmith12:09:34

better would have been not to land myself there in the first place of course

noisesmith12:09:51

apologies if that's TMI

mpenet12:09:53

kids+covid here

noisesmith12:09:26

yeah, that's a lot of work

mpenet12:09:29

I mean, we didn't get covid, just small kids@home

noisesmith12:09:59

what we need is a "clojure-junior" that you can teach kids so they can be your offshore for tedious parts of your projects

noisesmith12:09:16

if their work passes the unit tests, just drop it in

mpenet12:09:51

a bit too early for them, stuff like logo/scratch in time

noisesmith12:09:27

I showed scratch to my girlfriend and her children, but it's definitely a "lead a horse to water" type situation

noisesmith12:09:55

logo is just a lisp subset that is simple enough to not need parens any more

Ahmed Hassan12:09:34

If you want to do statically typed functional programming on JVM, then Scala is great option.

noisesmith12:09:40

I cross my fingers that Eta becomes usable (with full featured interop) - the way scala works with the vm is kind of messy

mpenet12:09:44

scala pretty much supports any style of programming 🙂

mpenet12:09:06

which is part of the problem (kidding, kind of)

Ahmed Hassan12:09:19

Scala is trying to move away from Haskell's abstractions, e.g. Cats, ScalaZ. With ZIO and zio-prelude they are trying to do FP in a way more native to Scala idioms.

noisesmith12:09:54

you can use your favorite idioms in your code, but libraries will force you to use the lib author's favorite idioms

noisesmith12:09:07

eg. a friend was doing a data graph based project, they knew about the pitfalls of implicits and avoided them, but they were cornered into implicits and all the problems they bring because they needed a graph lib that used them everywhere

noisesmith12:09:32

as a random example (said friend refuses to use scala any more)

Ahmed Hassan13:09:35

Statically typed language folks argue that static typing helps with refactoring large code bases. How true is this statement, and how does this compare with Clojure, for example?

borkdude13:09:36

@ahmed1hsn I'm obviously biased but I find clj-kondo and rg/grep helps me refactoring quite well

noisesmith13:09:02

That is an advantage with static languages. The Clojure solution tends to be using good system boundaries with well defined interfaces, but it's definitely an art not a science. I do miss how easy refactoring was in OCaml for example.

borkdude13:09:14

Automated refactorings can also yield garbage code, because the amount of thinking is reduced

noisesmith13:09:14

@borkdude but a good type system can guide a dev through a manual refactor - in OCaml if I change a module interface, just running :make in nvim takes me to the needed edits, until I don't get errors, then I know I'm done

noisesmith13:09:34

there's no real way to have that in Clojure (though static tools and unit tests help a lot)

borkdude13:09:42

that's true. there was a nice tweet about this from Stuart H.: https://twitter.com/stuarthalloway/status/1234261008560115712

noisesmith13:09:15

I often enjoy the glib version: "tooling is a language smell"

noisesmith13:09:04

that is, the kind of tools users of $LANG consider indispensable is a peek into the things the language itself is poorly suited to handle

borkdude13:09:15

funny no-one mentioned spec so far ;)

noisesmith13:09:47

I've seen bad refactors that got further than they should have because specs were defined and naively trusted

noisesmith13:09:09

oops, nothing actually checked or enforced said specs, so they were about as useful as comments declaring return / arg types

noisesmith13:09:25

in a static lang, the type is a property of the code, in clojure a spec is not a property of the code or the data alone, it's a checkable assertion about the data seen by specific code in a specific run time context

💯 9
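(A minimal sketch of that distinction, with a made-up ::age spec: defining the spec changes nothing about the code by itself; the check is something you run against specific data, on demand.)

```clojure
(require '[clojure.spec.alpha :as s])

;; Hypothetical spec, for illustration.
(s/def ::age pos-int?)

;; The spec is not a property of this function; nothing here consults it.
(defn next-year [age] (inc age))
;; (next-year "forty") throws ClassCastException; the spec never ran.

;; The check is a runtime assertion about specific data, made explicitly:
(s/valid? ::age 41)       ;=> true
(s/valid? ::age "forty")  ;=> false
```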
Ahmed Hassan13:09:26

@noisesmith refactoring workflow in Scala is similar to OCaml

👍 3
Ahmed Hassan13:09:43

How is the situation of core.typed or TypedClojure?

borkdude13:09:08

Btw #lsp also has some nice refactorings nowadays

borkdude13:09:28

@ahmed1hsn I think core.typed is still pretty much a research topic?

noisesmith13:09:01

@ahmed1hsn I don't know those tools / variants but on the topic of architecture, even with a strongly typed language the types can't extend past a single VM without making a big brittle mess. Your individual app might be strongly typed but your microservice can't be. Eventually, at some level, once you have two computers running, you have the same typing guarantees as Clojure (that is, your data can be checked at runtime for validity but nothing can be known about it statically).

✔️ 3
noisesmith13:09:33

The attempts I've seen to act like data between vms / processes is typed only make the problems worse

noisesmith13:09:22

(the OCaml approach is cute: you can share typed data between processes, but it's an instant failure / abort if the two processes aren't running literally the same compiled binary)

borkdude13:09:12

is that kinda like serialized objects in Java but worse?

noisesmith13:09:40

better :D no side effecting constructors, and it bails out if a trivial assertion isn't successful

noisesmith13:09:03

it allows what java gets by having multiple threads in one process, but without shared memory, which is kind of cool actually

noisesmith13:09:16

and it's network transparent

noisesmith13:09:00

but it's still "cute" rather than "awesome" - it's dealing with a fundamental problem that nothing handles perfectly (though I suspect erlang has found a better local optimum than most)

noisesmith13:09:33

point being, once you leave the realm of "everything in one process", you get all of Clojure's problems anyway

✔️ 3
cjsauer14:09:20

Does this hold true when using something like Avro? I’ve never used it, but it appears to be “types on the wire”.

noisesmith14:09:07

but you can't use the same types in your application code

noisesmith14:09:35

and IMHO trying to combine the avro type with your program types makes things worse not better (based on the attempts I've seen)

cjsauer14:09:28

Ah yea. It appears similar to the keyword conversion problem I find at the borders of my clj systems.

mpenet13:09:40

erlang gets away with it via good pattern matching, records, no nil, named tuples and semi-decent static analysis. All in all it's a good combo

mpenet13:09:16

message passing is first class too, so it's optimized for that, but that's not free

noisesmith13:09:18

plus a good infrastructure for IPC with retries / monitoring

noisesmith13:09:41

I haven't yet used it in anger though, I do intend to fix that

mpenet13:09:43

generally working with erlang codebases is less stressful than clojure codebases imho

mpenet13:09:45

when it comes to refactoring I mean.

Ahmed Hassan13:09:43

Beyond that I haven't seen core.typed in production use.

borkdude13:09:10

@ahmed1hsn Yes. I still think Schema is convenient, but it's also still runtime-only, like spec. Btw, I think clj-kondo could pick up on Schema's defn, but since there are now at least 3 or 4 libs (schema, spec, malli) doing their own thing, I don't know if this has any priority.

borkdude13:09:57

It would be great if the clojure community settled on one thing eventually

borkdude13:09:14

right now it's a bit of a scattered ecosystem

borkdude13:09:42

Honestly, Schema was pretty great and simple (in the sense of easy :P)

ikitommi13:09:37

#malli wip with clj-kondo

🚀 3
👏 3
borkdude13:09:45

Some perceived flaw of schema was that their schemas were closed by default. But it's pretty easy to make them open: {:foo s/Str s/Any s/Any}?
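(A small sketch of that pattern with prismatic/schema; the schema names are made up. `s/check` returns nil on success and an error structure otherwise.)

```clojure
(require '[schema.core :as s])

(def ClosedUser {:name s/Str})              ; closed: extra keys are rejected
(def OpenUser   {:name s/Str s/Any s/Any})  ; opened with a catch-all entry

(s/check ClosedUser {:name "a" :role :admin}) ; non-nil: :role is a disallowed key
(s/check OpenUser   {:name "a" :role :admin}) ;=> nil (valid)
```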

mauricio.szabo01:09:24

To be honest, I think the opposite is the flawed approach. You can make a closed schema open, but you can't do the opposite.

mauricio.szabo01:09:37

Also, you can use closed schemas as security measures, like "don't allow a non-admin user to change the status of a user in this REST path". Rails did decide that everything was open by default, then changed the approach (breaking lots of code in the process) because of lots of bugs and security issues on production code - even github suffered from that flaw, I believe

Alex Miller (Clojure team)04:09:13

the approach is spec 2 is that "closing" is a property of the check, not of the spec

Alex Miller (Clojure team)04:09:36

something you can enable during validation, conform, etc if needed

👍 3
noisesmith14:09:22

yeah, in practice I added s/Any s/Any to every hash-map in schema, and 98% of my schemas were for hash-maps

noisesmith14:09:00

tangent: I wonder if an entropy detecting tool would be useful for catching mismatched components. My idea being if you have a "state" or "component" map tree with the same data repeated many times at many levels of branching, that could indicate an app that's growing faster than it's being designed

mpenet14:09:44

the open-ness part could be another namespaced key, that would get validated even if not specified in the s/keys

mpenet14:09:00

so s/Any doesn't work the same way (lack of global registry is the real issue)

mpenet14:09:19

different approaches

borkdude14:09:18

you can just programmatically merge schemas though which makes it less of a problem

mpenet14:09:45

yes, different approaches. one assumes an attribute is always the same thing, the other not

borkdude14:09:02

malli has much of that flexibility too and possibly more, maybe it will be schema.next in terms of community impact

mpenet14:09:21

I am not a fan of having both approaches for something that should aim to become a standard personally. I tend to prefer Spec style, but that's just me

borkdude14:09:31

what did you mean with "both approaches"?

mpenet14:09:43

we're circling back to the Scala discussion 🙂 imho enabling many different styles to deal with something that should be shared by the community at large is not great

borkdude14:09:29

yes, I intended to ask which styles you meant

mpenet14:09:44

global registry vs first class specs

borkdude14:09:23

I think malli also has a global registry, local is optional

mpenet14:09:34

yeah that's what I mean

borkdude14:09:14

I don't follow. This is the same as spec? You don't have to use a local registry, it's an edge case that is supported optionally

mpenet14:09:33

the value of a (namespaced) key in a map will always validate the same way

mpenet14:09:45

that's not true with Schema/malli

mpenet14:09:00

it can be, but not necessarily; it depends on the style used

borkdude14:09:08

So @ikitommi, malli doesn't have something like s/def ?

borkdude14:09:43

> Example to use a registered qualified keyword in your map. If you don't provide a schema to this key, it will look in the registry.

I wonder how you register a qualified keyword associated with a schema globally

mpenet14:09:54

I think it does since it has registries

mpenet14:09:16

I guess you can define a single registry that you'd use everywhere (if you follow that rule)

borkdude14:09:29

in that case, just use the global registry right

mpenet14:09:52

it works only if you control all the code you work with.

mpenet14:09:06

lib author A can define its own, or not

ikitommi14:09:30

malli is immutable by default but supports global mutable registries. Just not by default. But, it's pre-alpha, feedback welcome.

ikitommi14:09:03

no fdefs yet, but will be, soon.

borkdude14:09:38

@ikitommi So the feedback / question is: if lib A defines a malli spec for keyword :foo/bar, how can you enforce that library B will validate this as intended? (did I say that right @mpenet?)

mpenet14:09:25

afaik you cannot enforce it. You have to know where to get the info from the registry of the lib, merge it with yours potentially and then do the same for every lib that has specs registered (if any)

mpenet14:09:59

that's where imho having too many options (or styles) hurts

mpenet14:09:20

but maybe I missed the latest developments

ikitommi14:09:33

currently, the lib needs to have its own registry (map) and you can compose it with your app registry. Could poll if people want a global mutable thing.

borkdude14:09:12

one could ask the question if this is a common thing: validating maps from other libs

ikitommi14:09:53

you can always force a immutable (and serializable) registry inside your app, for some parts where that matters

mpenet14:09:05

take that in the context of a haskell like type system, is it useful to have the guarantee a Thing is always assumed to be the same ?

mpenet14:09:28

imo yes, if you draw the parallel between an attribute (ns'ed key) and a Type

mpenet14:09:48

then outside of ns'ed keys it's the wild west

borkdude14:09:10

@mpenet it's probably the other libs responsibility to create those maps, which can from then on be assumed to be true Things?

mpenet14:09:22

Not necessarily

mpenet14:09:16

Unless you like to create constructors for all your data 🙂 and guarantee that any producer will use them

mpenet14:09:39

But maybe it's quite personal; some people like it one way or the other, I imagine. Right now I tend to prefer attributes to have a strong identity (like in datomic or graphql), but I guess that might not be the case for everybody; clojure enables many ways to do the same thing.

jjttjj23:09:54

If I want to expose a clojure map/data oriented api as JSON, but I use qualified keywords, would the "default" approach just be to stringify the keys, skip the colon, ie :my.qualified/key to "my.qualified/key"? Just want to make sure I'm not missing some json convention or issues or something

cjsauer23:09:43

Trying to find the forum post where this is discussed at length...

cjsauer23:09:57

Here is one discussing the “considered harmful” post above: https://clojureverse.org/t/clojures-keyword-namespacing-convention-considered-harmful/6169/6 I don’t think that’s the one I had in mind tho :thinking_face:

cjsauer23:09:32

Ah here it is. I think it was the precursor to Valentin’s blog post actually: https://clojureverse.org/t/should-we-really-use-clojures-syntax-for-namespaced-keys/1516

seancorfield23:09:53

Those are both essentially the same article, both by the same author, two years apart.

seancorfield23:09:29

(and I still think he's mostly wrong 🙂 )

cjsauer23:09:34

The discussion itself is what I wanted to link to, because there’s quite a bit of disagreement in this area.

jjttjj23:09:50

I remember seeing it and not agreeing, and now a month later I need some JSON compatibility and now see it in a whole new light haha

seancorfield00:09:10

I'm in the camp of use qualified keys wherever you can and convert as appropriate if needed at the edges.

seancorfield00:09:46

Given that you can go in and out of the database with qualified keys (if you're using Datomic, or a JDBC setup with next.jdbc), you can stay in qualified keyword land for pretty much everything except JSON interactions -- and even then some systems will happily accept / in key strings...

cjsauer00:09:17

I work daily in a system where Clojure is the minority, interfacing with both Ruby and JS. The pain of keyword conversion is real, but I sympathize with both sides of the argument. Clojure best practice can be argued as a general best practice, but it definitely creates friction elsewhere. I liked option 2 in the last link I shared, which were deemed “all terrain keys”, especially a conversion that somehow preserves the namespace portion, allowing for systematic conversion back into clj without loss of data.

cjsauer00:09:51

I actually tried having some keywords as kebab, and the “remote” keys remain snake, and it was extremely confusing. Ended up reverting back to converting at the edges.

seancorfield00:09:06

Given that both Cheshire and c.d.j offer controls on preserving or dropping qualifiers when generating JSON and on adding a qualifier or not when parsing JSON means that it's trivial to transform at the edges tho'...

seancorfield00:09:02

The snake_case / kebab-case thing is something you have to think about with next.jdbc too, if your DB has tables or columns in snake_case. Hence the built-in support for the camel-snake-kebab library if you have it on your classpath 🙂

seancorfield00:09:51

Unfortunately, we have some legacy DB stuff written in headlessCamelCase (both tables -- which are case sensitive -- and columns -- which are not, in MySQL at least).

seancorfield00:09:40

(and somewhere along the way we passed through nodelimitercase in our transition from headlessCamelCase to snake_case in MySQL... argh!)

cjsauer00:09:28

Ha sounds familiar. The issue with preserving namespaces is less that it’s difficult and more that your coworkers give you funny looks, ie social friction. It can result in some pretty long keys which some people find unappealing (which admittedly is not a great technical reason, but alas). We have good support for them in Clojure at least, other langs aren’t so lucky...

noisesmith14:09:23

I'd agree that outside my app namespaces aren't very useful, and inside my app I want them. If you don't have control of how data gets into your app, and it doesn't happen in a small number of easy-to-intervene places, namespacing keys is only one of many problems you are going to have

noisesmith14:09:28

I'd use well placed ingestion / dispersion middleware, not a special "encoding" of the namespace into the key

noisesmith14:09:44

another thing to consider: {:foo/bar 1 :foo/baz 2 :quux/OK 3} -> {foo: {bar: 1, baz: 2}, quux: {OK: 3}}
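(That transformation is easy to sketch; `nest-by-ns` is a hypothetical helper, not a library function.)

```clojure
;; Group qualified keys by namespace into one level of nesting.
(defn nest-by-ns [m]
  (reduce-kv (fn [acc k v]
               (assoc-in acc [(keyword (namespace k)) (keyword (name k))] v))
             {}
             m))

(nest-by-ns {:foo/bar 1 :foo/baz 2 :quux/OK 3})
;=> {:foo {:bar 1, :baz 2}, :quux {:OK 3}}
```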

seancorfield16:09:03

For some reason, that doesn't sit well with me. I think maybe that's the only point in Val's article that I agree with: don't treat qualified keywords as something structural. I think it's because, typically, at the edges you're not going to be mapping to/from a nested (unqualified) structure.

souenzzo17:09:57

I prefer to explicitly map qualified names <> unqualified names. We usually do not control external resources. I made this lib: https://github.com/souenzzo/eql-as

seancorfield23:09:52

@jjttjj Both Cheshire and clojure.data.json allow you to specify how keys are turned into JSON keys (strings) so you can choose to drop the qualifier or keep it.

seancorfield23:09:59

user=> (require '[cheshire.core :as ches] '[clojure.data.json :as dj])
nil
user=> (ches/generate-string {:a/b 1 :c/d 2})
"{\"a/b\":1,\"c/d\":2}"
user=> (ches/generate-string {:a/b 1 :c/d 2} {:key-fn name})
"{\"b\":1,\"d\":2}"
user=> (dj/write-str {:a/b 1 :c/d 2})
"{\"b\":1,\"d\":2}"
user=> (dj/write-str {:a/b 1 :c/d 2} :key-fn (comp str symbol))
"{\"a\\/b\":1,\"c\\/d\":2}"
user=> 
@jjttjj /cc @U6GFE9HS7

seancorfield23:09:16

c.d.j escapes / by default but that can be turned off.

jjttjj23:09:38

good to know! wouldn't have guessed that with the escaped slashes. Haven't settled on a json lib yet

seancorfield23:09:13

(their defaults are opposite, BTW)
