#clojure
2021-05-19
kosengan01:05:27

any thoughts on http://flur.ee anyone ? "Open source semantic graph database that guarantees data integrity, facilitates secure data sharing, and powers connected data insights." "Fluree is more than a database and more than a blockchain. It is a data management platform that merges the analytic power of a modern graph..." it's written in clojure

🤯 16
🍬 4
didibus01:05:07

Looks pretty nice from the front page

borkdude07:05:06

Never heard of it, but it looks interesting

Aron07:05:05

don't want to be that guy again, but it doesn't appeal to me
• the look is a very generic template/style
• the page is mostly filled with buzzwords
• there are statements all over the place that raise serious questions about security, scalability, and maintenance costs that are not answered immediately
• a JSON-based query language? why do they have their own if they support others? Not saying there is anything bad there, but I am curious about the reasons
• a lot of stuff seems to be reimplemented for no obvious reason https://docs.flur.ee/docs/1.0.0/schema/functions

✔️ 4
Trey Botard21:05:36

thanks for bringing fluree up, @UJMU98QDC. I'm a dev advocate there, so I can try to answer any questions if needed.

Trey Botard21:05:14

@U0VQ4N5EE we have a JSON-based query language to facilitate easy interop with other languages and with query/transaction calls via HTTP, and yes, our marketing site is somewhat buzzwordy, but we've got some pretty good stuff under the hood.

Aron02:05:08

Well that's just it: 'easy interop' sounds like something to sell with, not to build upon 😞 Really easy interop is when you don't even need to learn a new DSL, no?

jeroenvandijk08:05:03

@U0516F690 Interesting! I’m curious about the origin of the db. It’s a technical product, but I can’t find any technical founders; is this correct? I’m looking at this page https://flur.ee/about-us/

👀 4
borkdude08:05:47

The fact that it supports RDF/SPARQL is a big plus for us, as we use this format internally and it's a standard

👍 2
Trey Botard13:05:56

@U0FT7SRLP Brian Platz is the technical founder and CEO

👌 2
Trey Botard13:05:43

@U0VQ4N5EE that's why we also support GraphQL, SQL, a subset of SPARQL, and you can call directly via Clojure. But if you don't know Clojure and you're familiar with JavaScript or Python, writing JSON is something you are more than likely familiar with, and it gets you some capabilities the other query languages don't support, namely time-based queries. If that is something your app needs, then using FlureeQL in JSON or Clojure is necessary.

Aron07:05:27

I spent over a year deep-diving blockchain tech, and in the end my conclusion was: 1. it's a useful technology in certain situations, e.g. when multiple transport companies want to share a single depot, putting a blockchain on the system provides an audit trail, and a new company only needs to set up the tech and can integrate immediately without any further costs; 2. it still requires integration with the law and everything else, like everything else. What blockchain is not good for, for physical and philosophical reasons that I am very happy to expand on if anyone is interested, is implementing general solutions (e.g. a programming language or a database).

em22:05:59

Would love to hear some thoughts on the blocking aspects of blockchain with regard to general programming languages or databases - a lot of projects seem to try to do this, perhaps like the recently discussed Fluree DB on reddit https://github.com/fluree/db

kosengan01:05:31

Yes, happy to hear further on this. More on "what blockchain is not good for" and "implementing general solutions (e.g. a programming language or a database)"

Aron02:05:26

@em not 100% sure which "blocking aspect" you mean. 'block' in the blockchain is not about blocking. 🙂

Aron02:05:55

@UJMU98QDC I am saying blockchain is not good for general solutions because the whole point of a blockchain system is that there is a distributed ledger that can be used by anyone on the network. The overhead of building a system that uses such a ledger and integrating it with the rest of the company is huge; if, after adopting such a system, you still have to develop another custom solution, it is wasteful for everyone on the network. It makes much more sense for any solution that uses blockchain data structures to provide auditability to be built as lightweight as possible, to keep the cost of transactions down. That's the physical argument: it's simply more effective (less cost, less risk, less time consumed) in the end not to have any superfluous incidental complexity.

The other side is about time. Blockchain, as the name says, progresses link by link, each block built on the previous one. Note that this is decidedly not how actual distributed systems work. A truly distributed system is distributed in space AND in time: transactions in Timbuktu don't have to wait for transactions in Vanuatu to finish. I am oversimplifying, because it's less about waiting for others than about stepping in unison, but the picture is more or less the same: your local transaction depends on the global system, not the other way around. This provides strong automatic tools for handling failures, but it's costly. Because of this limitation, blockchains are either slow or require additional tricks to speed them up to anything useful in a modern economic setting.

I can go on, but I'll stop here and see if what I've written so far makes sense to you 🙂

kosengan03:05:00

• The main selling point of eth is that blockchain can have a broader and more general scope than just crypto, beyond its source idea: bitcoin
• It even came with the latest buzz: the "world computer" or the "internet computer"
• The holistic concept of an open and decentralised web (web 3.0), which includes cryptocurrencies but also websites, apps, and basically most kinds of software
• A couple of other examples: http://onflow.org and http://dfinity.org are implementing blockchains; http://radicle.xyz is implementing ethereum for peer-to-peer software versioning and collaboration like GitHub; http://status.im is implementing ethereum for messaging; and there are a few more, which shows that the main selling point of ethereum is actually viable
• So, in your point of view, do these ideas and projects hold any real value in terms of what they are promising?

Aron08:05:02

• yes, ethereum is based on this idea, but it's more like a public research project than something commercially viable; see the list of biggest apps: https://www.stateofthedapps.com/rankings/platform/ethereum
• In my honest opinion, yes, they hold lots of value, but not necessarily in what they promise. I think most of these projects are scams to bring in investor money and then run with it. The developers building most of these projects are there because the work is interesting and the pay is good. I am speaking from experience: we implemented quite a few POCs for ethereum, and I was involved in a couple of more serious projects as well. Open-ended projects tend to go on longer.

Roman Petrov10:05:54

Hello! I'm looking for a good Clojure/Java developer in Russia. Do they exist? Please contact me directly for details.

caleb.macdonaldblack11:05:04

Can I destructure keywords ignoring the namespace? For example, I have a generic function that accepts a name key and handles them all the same regardless of the namespace

delaguardo11:05:51

no, destructuring needs the namespace for fully qualified keywords

caleb.macdonaldblack11:05:26

Ah no worries. Thank you

caleb.macdonaldblack11:05:51

I could just strip the namespace but maybe there is a feature in destructuring for this

Alex Miller (Clojure team)13:05:37

The ns is stripped by destructuring because local bindings are always unnamespaced
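A small illustration: the key must be fully qualified in the destructuring pattern, but the local it binds is unqualified:

(let [{:keys [person/name]} {:person/name "Rich"}]
  name)
;; => "Rich"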

andrewboltachev11:05:17

Hello. If I need to send EDN data on the wire (from my web app's backend to frontend), is a function like this a way to go?

andrewboltachev11:05:21

(require '[cognitect.transit :as transit])
(import [java.io ByteArrayInputStream ByteArrayOutputStream])

(defn to-edn-str [data]
  (let [out (ByteArrayOutputStream. 4096)
        writer (transit/writer out :json)]
    (transit/write writer data)
    (.toString out)))

(to-edn-str [:abc 1 2])

thheller11:05:57

that's transit, not edn? for EDN just pr-str
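For comparison, the plain-EDN round trip with core functions:

(require '[clojure.edn :as edn])

(pr-str [:abc 1 2])            ;; => "[:abc 1 2]"
(edn/read-string "[:abc 1 2]") ;; => [:abc 1 2]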

andrewboltachev11:05:34

yes. Still not getting 100% of the difference (and the reasoning)

thheller11:05:23

transit is better for sending stuff over the wire, so that is fine. but calling it to-edn-str is rather confusing, since what you get is a transit JSON string

andrewboltachev11:05:59

better in which sense btw? is that because JSON can be "gzipped" (or something), making it more optimal to send than EDN, which for the browser is mere text/plain?

thheller11:05:08

no, both are just text strings. transit is just a little faster to parse and a little smaller overall

thheller11:05:29

gzip works for all, no difference there

andrewboltachev11:05:55

ah, yes, so that's the browser's/server's parsing algorithm

thheller11:05:28

no, as far as the browser is concerned it's just a string. it has no notion of transit or EDN

andrewboltachev11:05:16

I mean, when it tackles that transit JSON and later walks the tree (or sth) to turn it into proper CLJS objects

andrewboltachev11:05:26

as opposed to parsing the EDN string

thheller11:05:43

"it" doesn't do that. YOUR code does that. either via the transit reader or the EDN reader.

andrewboltachev11:05:07

ok. agree. thanks

pinkfrog12:05:40

Might be a vague question. I am going to implement a system with several modules, where each module communicates with the others through core.async channels. I haven't touched this part before. Is there any example code/project for reference? I am mostly interested in the coordination and message passing (pub/sub) between these modules.

andrewboltachev13:05:12

@i is a module something abstract? i.e. will they still be spawned by a single process?

pinkfrog14:05:49

yup. still spawned by a single process.

noisesmith14:05:28

if they were in separate jvms then core.async wouldn't help at all. also please don't use lein as a prod process launcher; lein is a build tool, and the run task is a convenience for developer iteration
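For reference, a minimal in-process pub/sub sketch with core.async (the topic name and handler here are made up):

(require '[clojure.core.async :as a])

(def events (a/chan 16))                 ;; shared event bus
(def publication (a/pub events :topic))  ;; dispatch on :topic

(defn start-module! [topic handle]
  (let [in (a/chan 16)]
    (a/sub publication topic in)         ;; route this topic to the module's channel
    (a/go-loop []
      (when-let [msg (a/<! in)]
        (handle msg)
        (recur)))
    in))

(start-module! :order #(println "order module got:" %))
(a/>!! events {:topic :order :id 1})
;; prints: order module got: {:topic :order, :id 1}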

Ben Sless13:05:40

Does anyone here have experience with Jackson serializers? I'm trying to get it to use an IterableSerializer instead of a CollectionSerializer for a LazySeq with jsonista

noisesmith15:05:33

for a while I've been avoiding jackson because of the brittle, version-sensitive deps, and using clojure.data.json instead. ymmv, but JSON encoding never turned out to be my perf bottleneck
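For reference, clojure.data.json is pure Clojure, with no Jackson on the classpath:

(require '[clojure.data.json :as json])

(json/write-str {:a 1 :b [2 3]})            ;; => "{\"a\":1,\"b\":[2,3]}"
(json/read-str "{\"a\":1}" :key-fn keyword) ;; => {:a 1}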

pinkfrog15:05:24

@U04V70XH6 I remember you were doing some work removing Jackson from a codebase. How’s that going?

pinkfrog15:05:55

My concern is that Jackson might be indirectly referenced by other libs, so it still gets used.

noisesmith15:05:31

sure, but the problem with jackson is the version change brittleness, so each time you remove a usage of jackson you are mitigating that problem

noisesmith15:05:00

it's not a question of "use it anywhere" vs. "don't ever use it", it's a strategy of reducing the number of places it's used to reduce the brittleness that its usage introduces

Ben Sless15:05:00

I have some use cases where a large chunk of my CPU is wasted in Jackson

noisesmith15:05:14

be careful with that analysis - for example, if jackson is consuming a lazy seq, the profiler will describe the work done realizing that seq as jackson's CPU usage

seancorfield15:05:39

@i We got to the point where we pin the Jackson version for just one subproject now (to 2.8.11, because 2.9.0 introduced a breaking change around null handling; at least we’ve tracked down why it causes failures). All the other projects just ignore the issue now and let deps bring in whatever version of Jackson they want (mostly 2.10.x as I recall).
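For illustration, one way to pin in a deps.edn project: a top-level dependency takes precedence over any transitive version (coordinates shown are just an example):

;; deps.edn -- top-level deps win over transitive ones,
;; effectively pinning Jackson for the whole project
{:deps {com.fasterxml.jackson.core/jackson-core     {:mvn/version "2.8.11"}
        com.fasterxml.jackson.core/jackson-databind {:mvn/version "2.8.11"}}}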

Ben Sless16:05:02

Yeah, I know, and this whole thing started because I saw that lazy seqs are consumed twice because the CollectionSerializer calls .size() first

Ben Sless16:05:49

What I was hoping to do was avoid intermediate allocations as much as possible; it's a very large stream
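For what it's worth, a minimal sketch of streaming straight to an OutputStream with jsonista, which at least avoids an intermediate String (default mapper assumed):

(require '[jsonista.core :as j])
(import '[java.io ByteArrayOutputStream])

(let [out (ByteArrayOutputStream.)]
  (j/write-value out {:a 1})   ;; writes directly to the stream
  (str out))
;; => "{\"a\":1}"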

Ben Sless16:05:01

This analysis still holds

noisesmith16:05:43

lazy seqs are cached though - that would cause heap pressure but not CPU (except indirectly via more GC work)
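A quick demonstration of the caching:

(def xs (map (fn [x] (println "computing" x) x) (range 3)))
(dorun xs) ;; prints computing 0, computing 1, computing 2
(dorun xs) ;; prints nothing: the realized elements are cached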

Ben Sless16:05:08

It is an extremely garbage intensive piece of code

ghadi15:05:55

+1 to all of that, and when I do use jackson, it's not the ObjectMapper ORM-ey stuff

Ben Sless15:05:23

Jsonista is faster so I'm trying to work with that

kenny15:05:05

I updated from data.json 1.1.0 to 2.3.0 and am getting some very odd results back. I'm not sure exactly what this is, but in Cursive one of the decoded strings gets printed in the REPL as a series of NULs (see attached screenshot). I'm also not sure how to repro this, since it appears to have something to do with where the input stream originates. I am calling a GCP API with the Java 11 HTTP client and getting back an input stream. I'm then calling json/read on the result of that.

(def resp
  (java-http-clj.core/send
    my-req
    {:client http-client
     :as     :input-stream}))

(with-open [rdr (io/reader (:body resp))]
  (json/read rdr))

The last form is the one returning the oddly decoded JSON. If I spit the input stream to a file and run the same code with a reader created from that file, the decoded result is correct (no NULs).

(with-open [rdr (io/reader (io/file "test.json"))]
  (json/read rdr))

Seems like this is an issue with the 2.x data.json versions. I will revert to 1.1.0 for now. Happy to provide more info if the maintainers are interested.

Alex Miller (Clojure team)15:05:08

@kenny would be great to learn more about what's up so we can fix if needed - we have a channel #data_json if you could isolate something

4
magra16:05:18

Hi, I would like to point a new clojurian to this slack but I forgot where I got the invitation from.

dpsutton16:05:03

i think http://clojurians.net will help in this case

magra16:05:56

@dpsutton Thanx! That worked!

👍 4
borkdude19:05:40

Has anyone here ever used a different arity than the 2-arity transit/write-handler? If so, could you explain to me why?
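For context, a minimal sketch of the common 2-arity (a tag fn and a rep fn); as I understand it, the extra arities add a string-rep fn and a verbose-mode handler:

(require '[cognitect.transit :as transit])
(import '[java.io ByteArrayOutputStream])

(deftype Point [x y])

(def point-handler
  ;; tag fn and rep fn
  (transit/write-handler
   (fn [_] "point")
   (fn [^Point p] [(.x p) (.y p)])))

(let [out (ByteArrayOutputStream.)
      w   (transit/writer out :json {:handlers {Point point-handler}})]
  (transit/write w (->Point 1 2))
  (str out))
;; => "[\"~#point\",[1,2]]"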

dpsutton19:05:20

without committing to any official policy, is there a ballpark number of votes on http://ask.clojure.org that gets tickets added to a roadmap or release candidate?

Alex Miller (Clojure team)19:05:55

no, though I do look at them from the top down when pulling things into consideration

Alex Miller (Clojure team)19:05:22

most have ≤ 1, so more than that is noticeable :)

dpsutton19:05:27

haha. yeah. was just wondering if my fourth vote might hit some threshold 🙂

Alex Miller (Clojure team)19:05:52

even then, this is just one of many things serving as fodder for attention

dpsutton19:05:26

makes sense. thanks for the info

ghadi21:05:57

oh I thought it was 6 votes. I guess I can stop bribing folks!

andy.fingerhut04:05:56

(scoffs, offended he was never offered a bribe. I won't TAKE one to vote on a Clojure issue I do not care about, but just the fact that you didn't think to try bribing me 🙂)

Rob Haisfield21:05:18

Best autocomplete for Clojure?

indy22:05:45

How do I make the following function handle 'sequency' collections, i.e. sets, lists, and vectors, properly? It feels like I have to deal with multiple specificities. For example, (conj nil x) returns a list, so seq-init is not the right init value, because I'm doing a conj that adds the element at the start of the coll.

(require '[clojure.test :refer [is]]) ;; needed for the :test metadata below

(defn deep-remove-fn
  {:test
   (fn []
     (is (= ((deep-remove-fn empty?) {}) nil))
     (is (= ((deep-remove-fn empty?) []) nil))
     (is (= ((deep-remove-fn empty?) '()) nil))
     (is (= ((deep-remove-fn empty?) #{}) nil))
     (is (= ((deep-remove-fn nil? boolean? keyword?)
             [:a {:c true} 9 10 nil {:k {:j 8 :m false}}])
            [{} 9 10 {:k {:j 8}}]))
     (is (= ((deep-remove-fn false? zero?)
             {:a 90 :k false :c {:d 0 :e 89}})
            {:a 90, :c {:e 89}}))
     (is (= ((deep-remove-fn empty?)
             {:a 90 :k {:m {}} :c {:d 0 :e #{}}})
            {:a 90 :c {:d 0}}))
     (is (= ((deep-remove-fn empty?)
             [#{7 8 9} [11 12 13] '(15 14)])
            [#{7 8 9} [11 12 13] '(15 14)]))
     (is (= ((deep-remove-fn empty?)
             {:a {:b {} :c [[]]} :k #{#{}}})
            nil))
     (is (= ((deep-remove-fn nil?)
             {:a {:b {} :c [[]]}})
            {:a {:b {} :c [[]]}}))
     (is (= ((deep-remove-fn nil? empty?)
             {:a {:b {} :c [[]] :k #{#{}}}})
            nil)))}
  [& remove-fns]
  (let [remove-fns (for [remove-fn remove-fns]
                     #(try
                        (remove-fn %)
                        (catch Exception _
                          nil)))
        removable? (apply some-fn remove-fns)
        map-init   (if (removable? {}) nil {})
        seq-init   (if (removable? []) nil [])]
    (fn remove [x]
      (when-not (removable? x)
        (cond
          (map? x) (reduce-kv
                    (fn [m k v]
                      (if-let [new-v (remove v)]
                        (assoc m k new-v)
                        m))
                    map-init
                    x)
          (seq? x) (reduce
                    (fn [acc curr]
                      (if-let [new-curr (remove curr)]
                        (conj acc new-curr)
                        acc))
                    seq-init
                    x)
          :else x)))))

indy22:05:52

Maybe this isn’t even close to the way I should be going about solving this problem, in which case, please suggest what you think might be a better approach

noisesmith15:05:50

I think clojure.walk/postwalk would make this code much simpler
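For illustration, a minimal sketch of that approach (a hypothetical single-predicate deep-remove; it doesn't reproduce the multi-fn or nil-when-everything-removed semantics of the original):

(require '[clojure.walk :as walk])

(defn deep-remove [pred form]
  (walk/postwalk
   (fn [x]
     (cond
       ;; postwalk hands map entries to the fn too; pass them through
       (map-entry? x) x
       ;; drop entries whose value matches
       (map? x)       (into {} (remove (comp pred val) x))
       ;; `into` on a list reverses order, so rebuild lists explicitly
       (list? x)      (apply list (remove pred x))
       (coll? x)      (into (empty x) (remove pred x))
       :else x))
   form))

(deep-remove nil? {:a 1 :b nil :c [1 nil 2] :d #{nil 3}})
;; => {:a 1, :c [1 2], :d #{3}}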

indy15:05:31

Yeah, should try it with postwalk

noisesmith15:05:44

also you might consider a multimethod / some multimethods on type, rather than inline conditionals everywhere

noisesmith15:05:40

that way, to understand what is done with a type I can look at its method(s) instead of finding the relevant line in each condition

indy15:05:02

Yup makes sense. That way I could easily extend it too

dsp22:05:06

hi! i just spent several hours tracking down a really strange bug in my code: an end-to-end pipeline that splits a file, applies FEC, encrypts, persists to the db, writes a header, then round-trips back the other way. i read from a file initially, but to reduce the initial code i just read the whole thing into memory to do the compare (blake2b hash on source and end result). finally managed to track the culprit down after adding logging to my all-nighter mess of a personal codebase 😄

enki.buffers> (byte-array 3145728000)
Execution error (NegativeArraySizeException) at enki.buffers/eval43087 (form-init18270525509685357804.clj:12).
-1149239296
enki.buffers> (. clojure.lang.Numbers byte_array 3145728000)
Execution error (NegativeArraySizeException) at enki.buffers/eval43089 (form-init18270525509685357804.clj:15).
-1149239296
from: https://github.com/clojure/clojure/blob/b1b88dd25373a86e41310a525a21b497799dbbf2/src/jvm/clojure/lang/Numbers.java#L1394
@WarnBoxedMath(false)
static public byte[] byte_array(Object sizeOrSeq){
	if(sizeOrSeq instanceof Number)
		return new byte[((Number) sizeOrSeq).intValue()];
obviously the issue is 3145728000 > integer max size, so it's overflowing.
(defn byte-array
  "Creates an array of bytes"
  {:inline (fn [& args] `(. clojure.lang.Numbers byte_array ~@args))
   :inline-arities #{1 2}
   :added "1.1"}
  ([size-or-seq] (. clojure.lang.Numbers byte_array size-or-seq))
  ([size init-val-or-seq] (. clojure.lang.Numbers byte_array size init-val-or-seq)))
there's nothing obvious in the docstring, nor any warning on clojuredocs, about a max size for byte arrays. is this a JVM limitation? (i know it's extremely bad practice, but it was the quick-and-dirty way to test my functionality and i have plenty of RAM. i'll of course rewrite it to use some other method.) maybe at least the docstring should be modified, or maybe Numbers.java could be extended, i dunno. what do you think? at least it's a hidden footgun.

hiredman22:05:27

it is a jvm limitation

hiredman22:05:43

e.g. arrays are indexed by integers
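Concretely, the size is narrowed to a 32-bit int, which wraps for values above Integer/MAX_VALUE:

Integer/MAX_VALUE       ;; => 2147483647
(.intValue 3145728000)  ;; => -1149239296, the size in the error above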

dsp22:05:28

yep, just read up on it

dsp22:05:29

obvious when one knows the limitations of the underlying platform, but it was a nightmare to discover (i calculate the array size from a custom binary datastructure by summing block sizes, so i assumed i had a mistake somewhere; of course, noticing that it only blew up above max int narrowed the scope somewhat...). gone midnight here, but after some sleep i might see if i can add a note somewhere as a suggestion. (64-bit sbcl spoiled me.)