
any thoughts, anyone? "Open source semantic graph database that guarantees data integrity, facilitates secure data sharing, and powers connected data insights." "Fluree is more than a database and more than a blockchain. It is a data management platform that merges the analytic power of a modern graph..." it's written in Clojure

🤯 16
🍬 4

Looks pretty nice from the front page


Never heard of it, but it looks interesting


don't want to be that guy again but it doesn't appeal to me
• the look is a very generic template/style
• the page is mostly filled with buzzwords
• there are statements all over the place that raise serious questions about security, scalability, and maintenance costs, and those questions are not answered immediately
• JSON-based query language? why do they have their own if they support others? Not saying there is anything bad there, but I am curious about the reasons
• seems like a lot of stuff is reimplemented for no obvious reason

✔️ 4
Trey Botard21:05:36

thanks for bringing fluree up @UJMU98QDC I'm a dev advocate there, so i can try to answer any questions, if needed.

Trey Botard21:05:14

@U0VQ4N5EE we have a json based query language to facilitate easy interop with other languages and via query/transaction calls via http, and yes our marketing site is somewhat buzzwordy, but we've got some pretty good stuff under the hood.


Well that's just it, 'easy interop' sounds like something to sell with not to build upon 😞 Really easy interop is when you don't even need to learn a new DSL, no?


@U0516F690 Interesting! I’m curious about the origin of the db. It’s a technical product, but I can’t find any technical founders, is this correct? I’m looking at this page

👀 4

The fact that it supports RDF/SPARQL is a big plus for us, as we use this format internally and it's a standard

👍 2
Trey Botard13:05:56

@U0FT7SRLP Brian Platz is the technical founder and CEO

👌 2
Trey Botard13:05:43

@U0VQ4N5EE that's why we also support GraphQL, SQL, a subset of SPARQL, and you can call directly via Clojure. But if you don't know Clojure and you're familiar with Javascript or Python, writing JSON is something you're more than likely familiar with, and it gets you some features the other query languages don't support, namely time-based queries. If that is something needed by your app, then using FlureeQL in JSON or Clojure is necessary.


I spent over a year deep-diving blockchain tech and in the end my conclusion was that:
1. it's a useful technology in certain situations, e.g. when multiple transport companies want to use a single deposit, putting a blockchain on the system gives auditability, and a new company only needs to set up the tech and can integrate immediately without any further costs.
2. it still requires integration with the law and everything else, like everything else.
What blockchain is not good for, for physical and philosophical reasons (which I am very happy to delve into if anyone is interested), is implementing general solutions (e.g. a programming language or a database).


Would love to hear some thoughts on the blocking aspects of blockchain with regard to general programming languages or databases - a lot of projects seem to try to do this, perhaps like the recently discussed Fluree DB on reddit


Yes, happy to hear further on this. More on "what blockchain is not good for", and "to implement general solutions(e.g. like a programming language or a database)"


@em not 100% sure which "blocking aspect" you mean. 'block' in the blockchain is not about blocking. 🙂


@UJMU98QDC I am saying blockchain is not good for general solutions because the whole point of blockchain systems is that there is a distributed ledger that can be used by anyone on the network. The overhead of building a system that uses such a ledger and integrates it with the rest of the company is huge; if after adopting such a system you still have to develop another custom solution, it will be wasteful for everyone using the network. It makes much more sense for any solution that uses blockchain data structures to provide auditability to be built as lightweight as possible, to keep the cost of transactions down. That's the physical argument: it's simply more effective (less cost, less risk, less time consumed) in the end not to have any superfluous incidental complexity.

The other side is about time. Blockchain, as the name says, progresses link by link, one block built on the previous one. Please note that this is decidedly not how actual distributed systems work. A truly distributed system functions distributed in space AND in time. Transactions in Timbuktu don't have to wait for transactions in Vanuatu to finish. I am oversimplifying, because it's less about waiting for others than stepping in unison, but the picture is more or less the same: your local transaction depends on the global system, not the other way around. This provides strong automatic tools for handling failures, but it's costly. Because of this limitation, blockchains are either slow or require additional tricks to speed them up to anything that would be useful in a modern economic setting.

I can go on, but I think I'll stop here and see if what I wrote so far makes sense to you 🙂


• The main selling point of eth is that blockchain can have a broader and more general scope than just crypto, as implemented in its source idea: bitcoin
• It even came with the latest buzz: the "world computer" or the "internet computer"
• The holistic concept of an open and decentralised web (web 3.0), which includes cryptocurrencies but also websites, apps, and basically most kinds of software.
• a couple of other examples are and which are implementing blockchains, and which is implementing ethereum for peer-to-peer software versioning and collaboration like GitHub, which is implementing ethereum for messaging, and there are a few which show that the main selling point of ethereum is actually viable.
• So in your point of view, how do these ideas and projects hold any real value in terms of what they are promising?


• yes, ethereum is based on this idea, but it's more like a public research project than something that's commercially viable, checking the list of biggest apps:
• In my honest opinion, yes, they hold lots of value, but not necessarily in what they promise. I think most of these projects are scams to bring in investor money and then run with it. The developers building most of these projects are there because the work is interesting and the pay is good. I am speaking from experience: we have implemented quite a few POCs for ethereum and I was involved in a couple of more serious projects as well. Open-ended projects tend to go on longer.

Roman Petrov10:05:54

Hello! I'm looking for a good Clojure/Java developer in Russia. Do they exist? Please contact me directly for details.


Can I destructure keywords ignoring namespace? For example I have a generic function that accepts a name key and will handle them all the same regardless of the namespace


no, destructure needs namespace for fully qualified keywords


Ah no worries. Thank you


I could just strip the namespace but maybe there is a feature in destructuring for this

Alex Miller (Clojure team)13:05:37

The ns is stripped by destructuring bc local bindings are always unnamespaced
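A small sketch of what Alex describes (the key `:user/name` and the map here are illustrative): destructuring a fully qualified keyword binds an unnamespaced local.

```clojure
;; :keys accepts a namespaced symbol; the local binding drops the namespace
(let [{:keys [user/name]} {:user/name "Ada"}]
  name)
;=> "Ada"
```

So even though you must spell out the namespace in the destructuring form, the resulting binding is just `name`.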


Hello. If I need to send EDN data on the wire (from my web app's backend to frontend), is a function like this a way to go?


(require '[cognitect.transit :as transit])
(import [java.io ByteArrayInputStream ByteArrayOutputStream])

(defn to-edn-str [data]
  (let [out (ByteArrayOutputStream. 4096)
        writer (transit/writer out :json)]
    (transit/write writer data)
    (.toString out)))

(to-edn-str [:abc 1 2])


that's transit, not EDN? for EDN just use pr-str


yes. Still not getting 100% of difference (and reasoning)


transit is better for sending stuff over the wire, so that is fine. but calling it to-edn-str is rather confusing, since what you get is a transit JSON string


better in which sense btw? is that 'cause JSON can be "gzipped" (or something) that is more optimal to send than EDN, which for the browser is mere plain/text?


no, both are just text strings. transit is just a little faster to parse and a little smaller overall


gzip works for all, no difference there


ah, yes, so that's the browser's/server's parsing algorithm


no, as far as the browser is concerned its just a string. it has no notion of transit or EDN


I mean, when it tackles that transit JSON and later walks the tree (or sth) to turn it into proper CLJS objects


as opposed to parsing the EDN string


"it" doesn't do that. YOUR code does that. either via the transit reader or the EDN reader.


ok. agree. thanks
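For contrast with the transit snippet above, a plain EDN round trip is just pr-str on the way out and clojure.edn on the way back (values here are illustrative):

```clojure
(require '[clojure.edn :as edn])

;; pr-str emits an EDN string; clojure.edn reads it back safely
(def s (pr-str {:a 1 :b [2 3]}))
(edn/read-string s)
;=> {:a 1, :b [2 3]}
```

No extra library is needed; the trade-off versus transit is slightly larger payloads and slower parsing.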


Might be a vague question. I am going to implement a system with several modules. Each module communicates with other through core.async channel. Haven’t touched this part before. Is there any example code/project for reference? I am mostly interested in the coordination and message passing (pub/sub) between these modules.


@i is a module something abstract? i.e. they still will be spawned by a single process?


yup. still spawned by a single process.


if they were in separate jvms then core.async wouldn't help at all. also please don't use lein as a prod process launcher, lein is a build tool and the run task is a convenience for developer iteration
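Not a full example project, but a minimal core.async pub/sub sketch along the lines asked about (the `bus` channel and `:order` topic are made-up names for illustration):

```clojure
(require '[clojure.core.async :as a])

;; one shared bus channel; messages are routed by their :topic key
(def bus (a/chan))
(def topics (a/pub bus :topic))

;; a "module" subscribes a channel to the topics it cares about
(def orders (a/chan))
(a/sub topics :order orders)

;; another module publishes onto the bus
(a/>!! bus {:topic :order :id 1})

;; the subscriber receives only matching messages
(def received (a/<!! orders))
received ;=> {:topic :order, :id 1}
```

Each module only needs a reference to the pub (or the bus), which keeps the modules decoupled from each other.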

Ben Sless13:05:40

Anyone here has experience with Jackson serializers? I'm trying to get it to use an IterableSerializer instead of a CollectionSerializer for a LazySeq with jsonista


for a while I've been avoiding jackson because of the brittle version-sensitive deps and using instead, ymmv, but JSON encoding never turned out to be my perf bottleneck


@U04V70XH6 I remember you are doing some removing Jackson work from a codebase. How’s that going?


My concern is, Jackson might be indirectly referenced by other libs. So it still gets used.


sure, but the problem with jackson is the version change brittleness, so each time you remove a usage of jackson you are mitigating that problem


it's not a question of "use it anywhere" vs. "don't ever use it", it's a strategy of reducing the number of places it's used to reduce the brittleness that its usage introduces

Ben Sless15:05:00

I have some use cases where a large chunk of my CPU is wasted in Jackson


be careful with that analysis - for example, if jackson is consuming a lazy seq, the profiler will describe the work done realizing that seq as jackson's CPU usage


@i We got to the point where we pin the Jackson version for just one subproject now (to 2.8.11, because 2.9.0 introduced a breaking change around null handling, so at least we’ve tracked down why it causes failures). All the other projects just ignore the issue now and let deps bring in whatever version of Jackson they want (mostly 2.10.x as I recall).

Ben Sless16:05:02

Yeah, I know, and this whole thing started because I saw that lazy seqs are consumed twice because the CollectionSerializer calls .size() first
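A tiny demonstration of that effect (the counter is illustrative, not jsonista itself): a `.size()`/`count`-style call has to walk, and therefore realize, the entire lazy seq before any element is serialized.

```clojure
;; each element bumps a counter when it is realized
(def realized (atom 0))
(def xs (map (fn [x] (swap! realized inc) x) (range 5)))

@realized  ;=> 0  (nothing realized yet)
(count xs) ;=> 5  (count walks the whole seq)
@realized  ;=> 5  (every element got realized)
```

The elements are cached after the first pass, so the second traversal is cheap on CPU, but the whole stream must be held in memory, which is what hurts for very large streams.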

Ben Sless16:05:49

What I was hoping to do was avoid intermediate allocations as much as possible, it's a very large stream

Ben Sless16:05:01

This analysis still holds


lazy seqs are cached though - that would cause heap pressure but not CPU (except indirectly via more GC work)

Ben Sless16:05:08

It is an extremely garbage intensive piece of code


+1 to all of that, and when I do use jackson, it's not the ObjectMapper ORM-ey stuff

Ben Sless15:05:23

Jsonista is faster so I'm trying to work with that


I updated from data.json 1.1.0 to 2.3.0 and am getting some very odd results back. I'm not sure exactly what this is but in Cursive, one of the decoded strings gets printed in the REPL as a series of NULs (see attached screenshot). I'm also not sure how to repro this since it appears to have something to do with how the inputstream is originating. I am calling a GCP API with the Java 11 HTTP client and getting back an inputstream. I'm then calling json/read on the result of that.

(def resp
  {:client http-client
   :as     :input-stream})

(with-open [rdr (io/reader (:body resp))]
  (json/read rdr))

The last form is the one returning the oddly decoded JSON. If I spit the input stream to a file and run the same code with a reader created from the file, the decoded result is correct (no NUL):

(with-open [rdr (io/reader (io/file "test.json"))]
  (json/read rdr))

Seems like this is an issue with the 2.x data.json version. I will revert to 1.1.0 for now. Happy to provide more info if the maintainers are interested.

Alex Miller (Clojure team)15:05:08

@kenny would be great to learn more about what's up so we can fix if needed - we have a channel #data_json if you could isolate something


Hi, I would like to point a new clojurian to this slack but I forgot where I got the invitation from.


i think will help in this case


@dpsutton Thanx! That worked!

👍 4

Has anyone here ever used a different arity than the 2-arity transit/write-handler? If so, could you explain to me why?


without committing to any official policy, is there a ballpark number of votes that gets tickets added to a roadmap or release candidate?

Alex Miller (Clojure team)19:05:55

no, I look at them from top down though for pulling into consideration

Alex Miller (Clojure team)19:05:22

most have ≤ 1, so more than that is noticeable :)


haha. yeah. was just wondering if my fourth vote might hit some threshold 🙂

Alex Miller (Clojure team)19:05:52

even then, this is just one of many things serving as fodder for attention


makes sense. thanks for the info


oh I thought it was 6 votes. I guess I can stop bribing folks!


(scoffs, offended he was never offered a bribe. I won't TAKE one to vote on a Clojure issue I do not care about, but just the fact that you didn't think to try bribing me 🙂)

Rob Haisfield21:05:18

Best autocomplete for Clojure?


How do I make the following function handle 'sequency' collections (sets, lists, vectors) properly? It feels like I have to deal with many special cases, e.g. (conj nil x) returns a list, so seq-init isn't the right initial value because conj adds the element at the start of the coll.

(defn deep-remove-fn
  [& remove-fns]
  (let [remove-fns (for [remove-fn remove-fns]
                     (fn [x]
                       (try (remove-fn x)
                            (catch Exception _ false))))
        removable? (apply some-fn remove-fns)
        map-init   (if (removable? {}) nil {})
        seq-init   (if (removable? []) nil [])]
    (fn remove [x]
      (when-not (removable? x)
        (cond
          (map? x) (reduce-kv
                    (fn [m k v]
                      (if-let [new-v (remove v)]
                        (assoc m k new-v)
                        m))
                    map-init
                    x)
          (seq? x) (reduce
                    (fn [acc curr]
                      (if-let [new-curr (remove curr)]
                        (conj acc new-curr)
                        acc))
                    seq-init
                    x)
          :else x)))))

and these are the tests it should pass:

(is (= ((deep-remove-fn empty?) {}) nil))
(is (= ((deep-remove-fn empty?) []) nil))
(is (= ((deep-remove-fn empty?) '()) nil))
(is (= ((deep-remove-fn empty?) #{}) nil))
(is (= ((deep-remove-fn nil? boolean? keyword?)
        [:a {:c true} 9 10 nil {:k {:j 8 :m false}}])
       [{} 9 10 {:k {:j 8}}]))
(is (= ((deep-remove-fn false? zero?)
        {:a 90 :k false :c {:d 0 :e 89}})
       {:a 90, :c {:e 89}}))
(is (= ((deep-remove-fn empty?)
        {:a 90 :k {:m {}} :c {:d 0 :e #{}}})
       {:a 90 :c {:d 0}}))
(is (= ((deep-remove-fn empty?)
        [#{7 8 9} [11 12 13] '(15 14)])
       [#{7 8 9} [11 12 13] '(15 14)]))
(is (= ((deep-remove-fn nil?)
        {:a {:b {} :c [[]]}})
       {:a {:b {} :c [[]]}}))
;; (is (= ((deep-remove-fn empty?)
;;         {:a {:b {} :c [[]]} :k #{#{}}})
;;        ...))
;; (is (= ((deep-remove-fn nil? empty?)
;;         {:a {:b {} :c [[]] :k #{#{}}}})
;;        ...))


Maybe this isn’t even close to the way I should be going about solving this problem, in which case, please suggest what you think might be a better approach


I think clojure.walk/postwalk would make this code much simpler
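A rough sketch of that postwalk approach (a hypothetical `deep-remove`, much simpler than the original: a single predicate, handling maps, vectors, and sets, and leaving other seqs untouched):

```clojure
(require '[clojure.walk :as walk])

(defn deep-remove [pred data]
  (walk/postwalk
   (fn [x]
     (cond
       ;; map entries are walked too; leave them alone so the map branch decides
       (map-entry? x) x
       (map? x)       (into {} (remove (fn [[_ v]] (pred v))) x)
       (vector? x)    (into [] (remove pred) x)
       (set? x)       (into #{} (remove pred) x)
       :else x))
   data))

(deep-remove nil? [:a nil {:b nil :c 1}])
;=> [:a {:c 1}]
```

Because postwalk visits the innermost forms first, each collection type only needs one branch, and the `map-entry?` guard avoids the classic gotcha of map entries being treated as two-element vectors.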


Yeah, should try it with postwalk


also you might consider a multimethod / some multimethods on type, rather than inline conditionals everywhere


that way, to understand what is done with a type I can look at its method(s) instead of finding the relevant line in each condition


Yup makes sense. That way I could easily extend it too


hi! i just spent several hours trying to track down a really strange bug in some of my code: an end-to-end splitting of a file, FEC, encrypt, persist to db, write header, then round-trip back the other way. i read from a file initially, but to reduce my initial code i just read the whole thing into memory to do the compare (blake2b hash on source and end result). finally managed to track the culprit down after adding logging to my all-nighter mess of a personal codebase 😄

enki.buffers> (byte-array 3145728000)
Execution error (NegativeArraySizeException) at enki.buffers/eval43087 (form-init18270525509685357804.clj:12).
enki.buffers> (. clojure.lang.Numbers byte_array 3145728000)
Execution error (NegativeArraySizeException) at enki.buffers/eval43089 (form-init18270525509685357804.clj:15).
static public byte[] byte_array(Object sizeOrSeq){
	if(sizeOrSeq instanceof Number)
		return new byte[((Number) sizeOrSeq).intValue()];
obviously the issue is 3145728000 > integer max size, so it's overflowing.
(defn byte-array
  "Creates an array of bytes"
  {:inline (fn [& args] `(. clojure.lang.Numbers byte_array ~@args))
   :inline-arities #{1 2}
   :added "1.1"}
  ([size-or-seq] (. clojure.lang.Numbers byte_array size-or-seq))
  ([size init-val-or-seq] (. clojure.lang.Numbers byte_array size init-val-or-seq)))
there's nothing obvious in the docstring nor warnings on clojuredocs about a max size for byte arrays. is this a JVM limitation? (i know it's extremely bad practice, but it was the quick-and-dirty way to test my functionality and i have plenty of RAM. i'll of course rewrite it to use some other method.) maybe at least the docstring should be modified, or maybe it can be extended, i dunno. what do you think? at the least it is a hidden footgun.
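The overflow itself is easy to see; this sketch applies the same 32-bit narrowing that `.intValue()` performs on the requested size:

```clojure
;; 3145728000 > Integer/MAX_VALUE (2147483647), so narrowing to
;; a 32-bit int wraps it around to a negative number
(unchecked-int 3145728000)
;=> -1149239296
```

That negative value is what `new byte[...]` receives, hence the NegativeArraySizeException rather than a clearer "array too large" error.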


it is a jvm limitation


e.g. arrays are indexed by integers


yep, just read up on it


obvious when one knows the limitations of the underlying platform, but was a nightmare to discover (as i calculate the array size from a custom binary datastructure and summing block sizes, so assumed i had a mistake somewhere. of course, upon discovering it only blew up > max int, narrowed the scope somewhat...) gone midnight here, but after some sleep i might see if i can add a note somewhere as a suggestion. (64bit sbcl spoiled me.)