#off-topic
2020-02-04
danie12:02:41

If I have an hour at a meetup, with curious developers, and I want to do a Clojure code-a-long: What fun problem/thing to work with comes to mind?

hobosarefriends12:02:02

When we did dojos at our local one, we sometimes did some of the Advent of Code things

hobosarefriends12:02:03

another time we attempted to make a very basic cli version of the 0h n0 mobile game.

danie12:02:45

Thank you for the suggestions Mno

dharrigan13:02:16

How experienced are the curious devs?

danie15:02:55

Varies quite a bit, skewed towards “have done things worth doing in other languages”

dharrigan16:02:11

I would do something a bit more realistic, i.e., show how easy it is to wire up a very very simple API that responds to a RESTful request

dharrigan16:02:44

and not having to worry a lot about rigid data types 😉
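A minimal sketch of the kind of demo dharrigan describes: a tiny HTTP API in Clojure. The namespace, route, and port here are all invented for illustration; it assumes `ring/ring-jetty-adapter` is on the classpath (nothing in the chat specifies a library), and the JSON body is built by hand so the handler itself has no dependencies.

```clojure
(ns demo.api)

(defn handler [request]
  ;; Any request gets a 200 with a small JSON body -- plain maps in,
  ;; plain maps out, no rigid type declarations required.
  {:status  200
   :headers {"Content-Type" "application/json"}
   :body    (str "{\"hello\":\"world\",\"path\":\"" (:uri request) "\"}")})

(defn -main []
  ;; requiring-resolve defers the ring dependency to run time;
  ;; :join? false returns immediately so you can poke it from a REPL.
  ((requiring-resolve 'ring.adapter.jetty/run-jetty)
   handler {:port 3000 :join? false}))
```

Because the handler is just a function from a map to a map, the audience can call it directly at the REPL before a server is even running.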

lady3janepl16:02:10

given the large amount of sequence and map processing, maybe a simple data processing pipeline with several datasets in JSON

lady3janepl16:02:23

I have a half-finished project where I was merging some data exported from wikimedia (xml, human-input) and a govt-provided JSON dataset with keywords I looked up inside xml, people seemed to like it
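A sketch of that "merge two datasets" dojo idea in core Clojure. The records and keys below are invented stand-ins (the real project used wikimedia XML and a government JSON dataset); it assumes the files have already been parsed into vectors of maps, e.g. with cheshire for the JSON side.

```clojure
;; Stand-ins for the two parsed datasets, joined on a shared :id.
(def wiki-records
  [{:id 1 :name "Alpha" :summary "from wikimedia"}
   {:id 2 :name "Beta"  :summary "also wikimedia"}])

(def govt-records
  [{:id 1 :keyword "river"}
   {:id 3 :keyword "mountain"}])

(defn merge-by-id
  "Index both collections by :id, then merge maps that share an id.
  Records present on only one side pass through unchanged."
  [xs ys]
  (let [index (fn [coll] (into {} (map (juxt :id identity)) coll))]
    (vals (merge-with merge (index xs) (index ys)))))

(merge-by-id wiki-records govt-records)
;; => three maps; the :id 1 record carries both :summary and :keyword
```

It is a nice dojo exercise precisely because the whole join is `juxt`, `into`, and `merge-with` over plain maps.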

lady3janepl16:02:56

or generate a site map? etc

idiomancy17:02:31

any data scientists/engineers here know anything about serving user-specific queries directly from columnar representations? I know that columnar data / parquet etc. is generally used for analytical workloads, and end-user-specific queries are typically served from some other, more row-oriented datastore -- however, it seems to me that projects like Apache Arrow and Calcite are making progress on the idea of "maybe we don't need copies of the data for everything that wants to consume it". Are there any useful working patterns for a primarily analytical data store that also has decent performance on user-specific queries? E.g. a user wants to retrieve their activity history in an app where they submit their current mood and retrieve some information about what their mood has been historically. That's an app that's mostly used for aggregating information across the user population, but that secondarily needs to serve user-level requests for their own information.

idiomancy17:02:45

I want to build a kind of "land and expand" analytical MVP app that stores user-generated information for analytics, without dealing with the complexity of multiple data stores

p-himik17:02:03

We have experimented with different kinds of storage for GWAS data, which seems somewhat similar to what you need. The best experience and performance that we could get was with ClickHouse.

idiomancy17:02:31

huh, now that's an interesting analogy

idiomancy17:02:03

nice, I'm gonna check this out!

p-himik17:02:21

Apart from the analogy, ClickHouse was initially created to handle time series, which is exactly what user activity is. Although by now it has become a bit more generic, in a good way.

idiomancy17:02:32

nice, that's great! I had heard the name ClickHouse before but always assumed it was a proprietary cloud service

idiomancy17:02:58

thanks for the tip!

idiomancy18:02:22

oh man, @p-himik this "last point" query concept is perfect
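The "last point" idea (most recent row per entity, straight out of a time-series table) can be sketched with ClickHouse's `LIMIT n BY` clause. The `moods` table and its columns here are hypothetical, made up to match the mood-tracking app described above:

```sql
-- Hypothetical table: one row per mood submission.
-- "Last point" per user = the newest row for each user_id.
SELECT user_id, mood, ts
FROM moods
ORDER BY ts DESC
LIMIT 1 BY user_id;
```

`LIMIT 1 BY` is ClickHouse-specific; the `argMax(mood, ts)` aggregate with a `GROUP BY user_id` is another common way to express the same query.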

p-himik18:02:01

Nice! :) There are some idiosyncrasies in the DBMS, so be sure to read the documentation a bit more. And if you think about really investing your time in it, make sure to look around in its GitHub issues. But overall, for us it has definitely been worth it.

idiomancy18:02:25

very cool. will do!

dominicm22:02:08

Is there a name for aux cables with 3 black lines vs 2? Just bought some useless cables. :(

p-himik22:02:24

You probably mean 3.5mm combo jack.

p-himik22:02:36

Or not 3.5mm. But combo. :)

sogaiu00:02:32

maybe the following has some relevant info? there are images that look pertinent: https://en.wikipedia.org/wiki/Phone_connector_(audio)#General_use

dominicm07:02:26

I imagine. TS to TRRRS being like the crunchy nut tiger getting more and more excited by it...

dominicm07:02:32

Thanks for the pointers :+1: