Fork me on GitHub

Nice and nice


And at this point we're almost finished with Season 2...


morning 😼 mogge


🌞 Goodbye UK


Where are you off to, @dominicm?


@yogidevbear gran canaria :)


I've literally spent 6-7 hours over the last week filling out prescreening forms for a role I don't really want to take but it's the best I've got on offer ATM, and they've just sent me more paperwork to complete. So I'm in a really foul mood! :thunder_cloud_and_rain: 😠 😖


For hols or work?


@agile_geek sounds horrible, good luck though.


or "sterkte" as we would say in Dutch. (Which translates as strength)


thomas: I won't start typing in Afrikaans to you 😉


Apparently it's like children's Dutch 😂


I've another Dutch friend who says Flemish is the equivalent of a west country bumpkin speaking English.


@yogidevbear I can kinda understand Afrikaans, though there might be a few words I don't know.


and yes, Flemish is different in its own way as well. But still (somehow) qualifies as Dutch


@yogidevbear holiday. Then back in time for euroclj


Anyone else headed to euroclj? I’ll be about from Wednesday afternoon/evening and looking for friendly faces.


chrjs: I wish I was


I'm not going and if I'm lucky my face won't be there either


I'll be there. Arriving Wednesday afternoon.


@otfrom we'll let you know bruce


I think he'll figure it out if he looks in the mirror and sees something is missing


I’ll be sure to alert @otfrom to the presence of his face.


@U052852ES, I guess you’ll probably be the only lego person present. Easy to identify, but probably hard to spot.


@U4E5W80P7 yes... I am only about 4 cm tall... so easy to miss 😉


I'm there from Tuesday :)


I spent a lot of the weekend working with Clojure and I have to say that once again I am reminded of how much fun coding can be when you have powerful / interesting tools at hand…


Morning Everyone!


As an idle question, while I am waiting for an email to send (yes, my connection is _that_ slow): has anyone else had a fair bit of trouble re-using functions that were put together to build maps for adding data to Datomic when inserting data into more “traditional” databases..? I mean with specific reference to how maps are not “order guaranteed”, so you can’t just call

(vals coll)
on them and be sure that the data is in the right order for the query..?


I ended up writing a little function that reduces over a vector of keys so that I could build a vector of values in the right order… Does this sound like an idiomatic thing to do, or is there a built-in that I am missing that would have solved my issue for me..? (in my defence I was coding without the safety net of an active internet connection, therefore clojuredocs and stack overflow were unavailable to me)


TL;DR: Can one order a map based on a vector of keys without reducing over said vector to produce an ordered collection of values?
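For what it's worth, a Clojure map can itself be called as a function of its keys, so pulling values out in an arbitrary order doesn't need an explicit reduce. A minimal sketch (names invented for illustration):

```clojure
;; A map is a function of its keys, so mapv over an ordered key
;; vector yields the values in exactly that order.
(def key-order [:b :a :c])
(def m {:a 1 :b 2 :c 3})

(mapv m key-order)
;; => [2 1 3]
```

Missing keys would come back as `nil`, so this assumes every key in the ordered vector is present in the map.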


Side note… My functions for preparing data to push into Datomic make use of

(-> ...)
(update ...)
quite a lot in order to turn string-based dates into instants and to turn string representations of numbers into yer actual numbers… Is this something that I _could_ be using schema or indeed clojure.spec to do in a more elegant fashion..? Example of current situation:
(-> (reduce
     (fn [acc vecpair]
       (conj acc vecpair))
     (zipmap (:gsod-measurement-keys ingest-cfg)
             (map gsod-replace-zero-placeholders (map gsod-trim-flag gsod-rest)))
     (zipmap (:gsod-frshtt-keys ingest-cfg)
             (gsod-frshtt-to-standard-length gsod-frshtt-value)))
    (update :gsod-measurement/date #(instant/read-instant-timestamp (noaa-date-to-rfc3339 %)))
    (update :gsod-measurement/tempaverage #(Double/parseDouble %))
    (update :gsod-measurement/tempaveragenumments #(Double/parseDouble %))
    (update :gsod-measurement/dewpoint #(Double/parseDouble %))
    (update :gsod-measurement/dewpointnumments #(Double/parseDouble %))
    (update :gsod-measurement/sealevelpressure #(Double/parseDouble %))
    (update :gsod-measurement/sealevelpressurenumments #(Double/parseDouble %))
    (update :gsod-measurement/stationpressure #(Double/parseDouble %))
    (update :gsod-measurement/stationpressurenumments #(Double/parseDouble %))
    (update :gsod-measurement/visibility #(Double/parseDouble %))
    (update :gsod-measurement/visibilitynumments #(Double/parseDouble %))
    (update :gsod-measurement/windspeedaverage #(Double/parseDouble %))
    (update :gsod-measurement/windspeednumments #(Double/parseDouble %))
    (update :gsod-measurement/windspeedmax #(Double/parseDouble %))
    (update :gsod-measurement/gustspeedmax #(Double/parseDouble %))
    (update :gsod-measurement/tempmax #(Double/parseDouble %))
    (update :gsod-measurement/tempmin #(Double/parseDouble %))
    (update :gsod-measurement/precipitation #(Double/parseDouble %))
    (update :gsod-measurement/snowdepth #(Double/parseDouble %)))


Depending on what you're doing you can also look at `array-map` (`hash-map` is just the default associative data structure; there are other options, but you need to create them explicitly)


@agile_geek - I had no idea that there was a sorted version of map… facepalm I will have a look at array-map as well…


there are similar variants for other data structures - sorted-set is very useful, especially if diff'ing structures where dups don't matter


There are clearly far more varied and “interesting” options than I was aware of…


You could use spec or schema to conform or coerce your data BTW. Or you could map a generic fn over each k-v pair which takes a mapping of key and conversion fn and applies the fn to the appropriate key's value.


Oooh, I like the idea of that… Although I always assume that map means I have to touch each value / kv pair… How does one tell the map which ones to act on and which ones to ignore?


You can just write your fn to only convert the ones that you've told it to in your conversion mapping and pass the other k-v pairs thru untouched (basically an `if` based on whether the key exists in your conversion mapping)


Ah I see - yeah I can wrap my head around that…


If you close over the conversion mapping (using a higher-order 'factory' fn that takes your conversion mapping and returns a function that does the conversion) you can easily test the concept and make it generic and reusable.
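A minimal sketch of that factory idea (all names here are invented for illustration, not from the code above):

```clojure
;; Hypothetical 'factory' fn: closes over a conversion mapping and
;; returns a fn that converts only the keys named in the mapping,
;; passing every other k-v pair through untouched.
(defn make-converter [conversions]
  (fn [m]
    (reduce-kv (fn [acc k v]
                 (assoc acc k (if-let [f (conversions k)] (f v) v)))
               {}
               m)))

((make-converter {:a #(Double/parseDouble %)}) {:a "1.5" :b 2})
;; => {:a 1.5, :b 2}
```

Because the returned fn is a plain map-to-map transform, it composes cleanly with `map` over a collection of rows.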


I’ve gone and looked at sorted-map / sorted-set… They are interesting, but they are “sorted” on the keys, where I need to coerce an arbitrary order, so unless I create my DB columns in alphabetic order they are not going to be what I am looking for…


@agile_geek - Thanks for that. I am not going to try it _right now_ as I have stuff I need to get done, and the above situation, while unwieldy, does work, but I am going to put this, as an approach, into my refactor notes… 🙂


Why do you need to align positionally on the database? Do you not have column names you can map to?


I may have missed some context here BTW


If you can build the relational query using explicit column names you could use the same kind of conversion to map datomic keywords to col names?


@agile_geek - I am using yesql, so my INSERT query is in the queries.sql file thus:

-- name: populate-gsod-measurements!
-- INSERTS GSOD measurement data into the database
INSERT into gsod_measurements
(stationid, wbanid, mdate, tempaverage, tempaveragenumments, dewpoint, dewpointnumments,
sealevelpressure, sealevelpressurenumments, stationpressure, stationpressurenumments,
visibility, visibilitynumments, windspeedaverage, windspeednumments, windspeedmax,
gustspeedmax, tempmax, tempmin, precipitation, snowdepth, fog, rain, snow, hail,
thunder, tornado)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)


As such I need the values to be in the right order when I execute:

(sql/populate-gsod-measurements! {:? (ordered-values-vector-from-map (:gsod-canonical-db-columns ingest-cfg) (first (gsod-prepare-measurements-data row)))}))))


So you could either specify the columns in alphabetical order in the insert and use sorted-map


or you can reconstruct your map entry using reduce-kv using a conversion function that builds the entry in the correct order but the sorted-map option is a lot easier



(gsod-prepare-measurements-data row ...)
creates a map of attribute names and values for datomic, and because it’s longer than 8-10 entries and built using (zipmap …) Clojure “chooses” hashmap and so the order is arbitrary


Yeah, I went with option 2:

(defn ordered-values-vector-from-map
  [ordered-headings unordered-map]
  (reduce (fn [output hdg]
            (conj output (hdg unordered-map)))
          []
          ordered-headings))

except that it would appear that I created my own reduce-kv…


but you have control over the order that the col names appear in the insert? You can either dynamically construct the insert in the same order that hashmap uses or you can sort the map into alphabetical order and hard code the insert in that order?


@agile_geek - I could, but not and carry on using yesql, at least not “as intended” as the library is designed to completely separate SQL queries from Clojure, and the queries themselves can’t be dynamically created - at least not that I have ever seen…


But you can write your Yesql insert with the column names in whatever order you want


so if you write them in alphabetical order and use sorted-map?


It’s also worth bearing in mind that I am re-using a function that builds datomic inserts / transactions, so the column-names are not _quite_ the same 😉


@agile_geek - Yes I could, but having them in table-order makes the DB and Queries a little more self-documenting 😉


I guess that I am happy with the solution I have, just wondering if there is a way to impose order on a map using a vector of keys that are in a specific order…


Not sure why that matters. I don't normally care (or know) what order columns are physically stored in inside the RDBMS. Even if I specified them in an order most RDBMS won't guarantee they're physically stored that way.


i thought yesql supported named params, or is that engine specific?


It matters to me / my brain @agile_geek


@glenjamin It does, but doing an INSERT on that basis is painful compared to using bound vars…


i always liked that MySQL allows INSERT INTO table SET a=1, b=2 ..., and the nodejs client let you auto-expand a map into that format


Very useful for WHERE clauses


If it really matters use reduce taking a conversion fn that has a mapping of column names - conversion fn's in the required order and reorders the key value pairs.


@maleghast but surely that slight pain in writing the query is better than the pain of messing with sorted maps and having to repeat the ordered list in the code?


@agile_geek - That’s what I did (see code above)


or maybe for simple inserts like that, you’d be better off generating the query instead of writing it


@glenjamin - I am not repeating the list, I re-use it out of config…


but it’s written in the .sql file?


to make that available as a vector to sort your map, it must be repeated or parsed out of SQL?


Don’t get me wrong, these are all helpful / valid points, but what I was asking is “is there a way to coerce an arbitrary order on a map using an ordered collection of keys?”


yeah, sorted-map-by


on something like `#(- (.indexOf key-order %1) (.indexOf key-order %2))`, with `key-order` being your ordered vector of keys
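A sketch of that `sorted-map-by` approach, assuming `key-order` is the ordered vector of keys (names invented for illustration):

```clojure
;; Comparator that orders map keys by their position in an ordered
;; vector; .indexOf works because Clojure vectors implement
;; java.util.List.
(def key-order [:c :a :b])

(defn by-key-order [a b]
  (compare (.indexOf key-order a) (.indexOf key-order b)))

(vals (into (sorted-map-by by-key-order) {:a 1 :b 2 :c 3}))
;; => (3 1 2)
```

Note that `.indexOf` is a linear scan, so for wide rows a precomputed key→index map would be cheaper, but for a few dozen columns it hardly matters.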


@glenjamin - No, in this case I have a config file that has the Datomic attributes in a vector as the function was originally just building a map to make a Datomic transaction. The table was “built” with the same “column” order so I can reduce over the vector of attribute names to get an ordered collection of the values in the right order


unless i’m missing something, that still requires the SQL file and the config file to contain the same list of columns in the same order?


but i guess the point is that the config file has to exist anyway?


@glenjamin - you are missing something 🙂 Here is the query:

INSERT into gsod_measurements
(stationid, wbanid, mdate, tempaverage, tempaveragenumments, dewpoint, dewpointnumments,
sealevelpressure, sealevelpressurenumments, stationpressure, stationpressurenumments,
visibility, visibilitynumments, windspeedaverage, windspeednumments, windspeedmax,
gustspeedmax, tempmax, tempmin, precipitation, snowdepth, fog, rain, snow, hail,
thunder, tornado)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
Here is the configured list (well vector) of Datomic attributes:


(I realise that the “column” list is the same apart from the repetition of “:gsod-measurement/” but that’s to help my brain, not out of any kind of necessity)


so the list is currently repeated twice, if you used named params it’d be 3 times. If you generated the SQL from the datomic list you could get it down to only being in one place


However I don’t know of any way to use yesql and dynamically generate queries, so I would have two SQL implementations if I did the latter.


(I am not saying it’s not possible, just that I don’t know how to do it 🙂 )


i suppose you could do it statically


build the SQL files via a task


(and I want to have my other queries “managed” via yesql - I like the way it separates SQL from Clojure and yet at the same time allows me to “use” the queries in Clojure in a very Clojuric manner)


@glenjamin - Yeah that would work. I do think that it would be better to be able to impose an arbitrary order on a map, say using an ordered collection of keys, when it is useful to be able to do so.


oh, a record


records guarantee that iirc


Now there’s an interesting idea…


If I can use a record to transact data into Datomic AND a record does preserve order of properties then that would be a winner, in this case…


Instead of building a map of attribute names -> values I could define a record and push the values into it and then they would come out in the correct order for the SQL query as well.


Record might work but array-map maintains insertion order of keys too iirc


@agile_geek - I think you are right, but I am using zipmap to create my map(s) and Clojure “decides” for you which implementation to use - if there are more than 8 kv pairs it “chooses” hash-map (according to clojuredocs, I’ve not done my own testing)
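A quick REPL check of that behaviour (a sketch; the 8-entry cutoff is an implementation detail of `PersistentArrayMap` on current Clojure versions, not a documented guarantee):

```clojure
;; Small maps built by zipmap come back as array-maps, which keep
;; insertion order; past 8 entries the result is a hash-map and the
;; iteration order becomes arbitrary.
(type (zipmap [:a :b :c] [1 2 3]))
;; => clojure.lang.PersistentArrayMap

(type (zipmap (map (comp keyword str) "abcdefghij") (range 10)))
;; => clojure.lang.PersistentHashMap
```

This is also why `assoc`-ing onto a small literal map can silently lose its ordering once it grows: relying on map order is only safe with the explicitly ordered structures (`array-map`, `sorted-map`, records).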


Yeah you would have to convert. Record is probably the way to go or a function to reorder which is what you would have to do to use array-map anyway in your case.


(def maps
  [{:a 1 :c 2 :d 3 :e 4 :f 5}
   {:a 11 :c 12 :d 13 :e 14 :f 15}
   {:a 21 :c 22 :d 23 :e 24 :f 25}])

(defrecord OrderedColumns
    [c d e a f])

(map map->OrderedColumns maps)

;; => (#try_clojure.core.OrderedColumns{:c 2, :d 3, :e 4, :a 1, :f 5} #try_clojure.core.OrderedColumns{:c 12, :d 13, :e 14, :a 11, :f 15} #try_clojure.core.OrderedColumns{:c 22, :d 23, :e 24, :a 21, :f 25})


@agile_geek - I LIKE that 🙂


If you want to do conversion too you could create your own record constructor function that takes a map entry and creates a record but also converts values using a lookup conversion map as I mention ☝️


Right-oh 🙂


As previously stated, I will stick with the method I have _right now_ as it works, but I will be coming back to this code in the coming weeks specifically to improve it, and I am going to try this approach out as an alternative to my little reduce function, which is fine while I have an ordered list of keys available to me, but there may be times when I don’t…


(I have saved your example to my code notebook 😉 )


@maleghast just for completeness...

(def maps
  [{:a "1" :c 2 :d 3 :e 4 :f 5}
   {:a "11" :c 12 :d 13 :e 14 :f 15}
   {:a "21" :c 22 :d 23 :e 24 :f 25}])

(defrecord OrderedColumns
    [c d e a f])

(defn create-ordered-columns
  [conversion-mapping]
  (fn [x]
    (let [convertor (fn [m k v]
                      (if-let [conversion-fn (k conversion-mapping)]
                        (assoc m k (conversion-fn v))
                        (assoc m k v)))]
      (map->OrderedColumns (reduce-kv convertor {} x)))))

(def conv-mapping
  {:a #(Double/parseDouble %)})

(map (create-ordered-columns conv-mapping) maps)

;; => (#try_clojure.core.OrderedColumns{:c 2, :d 3, :e 4, :a 1.0, :f 5} #try_clojure.core.OrderedColumns{:c 12, :d 13, :e 14, :a 11.0, :f 15} #try_clojure.core.OrderedColumns{:c 22, :d 23, :e 24, :a 21.0, :f 25})


Oh this is very cool 🙂 Thanks!


I need to re-visit the bits of the code that are turning all numbers into Doubles and switch out Long (datomic doesn’t have Int) for the non floating-number attributes


When I do that I will look at this ^^ as a refactor, so that I can put the conv-mapping into config (I’m using Aero for config, which is very convenient), and this will eliminate the need for the reduce I am doing to put the values in order for SQL usage, as the map(s) will be in order when I call (vals ordered-map) 🙂


Nice one - really, thanks so much 🙂


And here's the same thing using array-map instead of Record...

(defn convert-using-array-map
  [col-conversion-mapping]
  (fn [x]
    (apply array-map (mapcat (fn [[k f]] [k (f (k x))]) col-conversion-mapping))))

(def col-conv-mapping
  [[:c identity]
   [:d identity]
   [:e identity]
   [:a #(Double/parseDouble %)]
   [:f identity]])

(map (convert-using-array-map col-conv-mapping) maps)
;; => ({:c 2, :d 3, :e 4, :a 1.0, :f 5} {:c 12, :d 13, :e 14, :a 11.0, :f 15} {:c 22, :d 23, :e 24, :a 21.0, :f 25})




Anyone know if there is a new version of Clojure Applied coming out? Seem to remember an ebook beta version being mentioned about a week ago


@agile_geek - That version using array-map is EVEN NICER!


maleghast: You're welcome


@yogidevbear I thought it had been out for ages...I read it in prerelease well over a year ago.

Is that the same version you're referring to?


In fact it must have been 2 years ago as I used some of the material around core.async pipelines in June 2015 for a bit of code for Mastodon C that @otfrom ended up rewriting!


Coolio 👍 It's next on my list to buy so didn't want to buy an out-of-date version if a newer one was in the pipeline


I haven't heard about Alex and Ben writing a new version and Alex seems to have been busy with the latest version of


That's probably what I'm thinking of


Yeah, that's the one


I think the major revisions to that book are around Spec (from the TOC)


I must admit I am beginning to suspect @alexmiller is either an AI construct or he just never sleeps.


I think it comes down to having kids or going through a similar life shocking experience 😉


I don't sleep much anymore and tend to do a lot of work from morning until late at night


(Not that this is necessarily a good thing, mind you)


I have kids and I used to sleep whenever I could...never had energy for side projects!


Just saw this story... That's just half a mile or so from where I was working at style


I hear you @agile_geek - parenthood has led to a severe downturn in my overall productivity


Oh no… 😞


Sad for the stall holders but I can't see any mention of casualties which is good


Yeah, that does sound good compared to what might have been…


…but that’s going to be the end of that market, I would expect.


I hope not.... it was really cool


According to that story it only affected one building and the market was open again after the fire.


@seancorfield is a good example of little sleep and lots of side projects. And he has 50 cat children 😉


There are only five cats in my "cat-free" office! 😸