2022-07-01
Given a tablecloth dataset like
(tc/dataset [{:name "alice"} {:name "bob"} {:name "bob"}])
| :name |
|-------|
| alice |
| bob |
| bob |
how can I count the number of times that each value appears in the :name column?
The output I want is
| alice | 1 |
| bob   | 2 |
In SQL terms, I am trying to do
SELECT column, count(*)
FROM table
GROUP BY column
This works
(-> (tc/dataset [{:name "alice"}
                 {:name "bob"}
                 {:name "bob"}])
    (tc/group-by [:name])
    (tc/aggregate {:name-count (fn [coll] (tc/row-count coll))}))
_unnamed [2 2]:
| :name | :name-count |
|-------|------------:|
| alice | 1 |
| bob | 2 |
but it is too slow for my real use-case.
My dataframe has 50 columns and 2,333,409 rows. Is this more than what tablecloth/tech.ml.dataset can handle?
Too slow means that it did not complete within 10 minutes.
@U03A8HUF1C2 - The reductions namespace (https://techascent.github.io/tech.ml.dataset/tech.v3.dataset.reductions.html#var-group-by-column-agg) has high-performance reductions for larger datasets that avoid intermediate dataset creation and take sequences of datasets. What you are looking for is something like:
tech.v3.dataset.reductions-test> (def ds (ds/->dataset "test/data/stocks.csv"))
#'tech.v3.dataset.reductions-test/ds
tech.v3.dataset.reductions-test> (ds-reduce/group-by-column-agg "symbol" {:num-sym (ds-reduce/row-count)} [ds])
symbol-aggregation [5 2]:
| symbol | :num-sym |
|--------|---------:|
| AAPL | 123 |
| IBM | 123 |
| AMZN | 123 |
| MSFT | 123 |
| GOOG | 68 |
Note that for larger datasets it may be better to either subsample or perhaps keep them as a sequence of datasets and use something like eduction along with various operations.
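A minimal sketch of that sequence-of-datasets approach, assuming a recent tech.ml.dataset that provides tech.v3.dataset.io.csv/csv->dataset-seq; the file name, the "id" column, and the eduction transform are placeholders, not part of the original thread:
(require '[tech.v3.dataset :as ds]
         '[tech.v3.dataset.io.csv :as csv]
         '[tech.v3.dataset.reductions :as ds-reduce])

;; Stream the CSV as a lazy sequence of smaller datasets instead of
;; materializing all 2.3M rows at once (chunk size is configurable via options).
(def ds-seq (csv/csv->dataset-seq "big-file.csv"))

;; group-by-column-agg accepts a sequence of datasets, so the whole file is
;; never resident in memory at the same time. An eduction can interpose
;; per-chunk transforms (here, keeping only the grouping column).
(ds-reduce/group-by-column-agg
 "id"
 {:id-count (ds-reduce/row-count)}
 (eduction (map #(ds/select-columns % ["id"])) ds-seq))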
Also, parquet will automatically chunk your large dataset into many large-ish chunks within the same parquet file. For arrow you need to do the chunking yourself before writing the dataset sequence out to a file.
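A hedged sketch of writing such a dataset sequence out, reusing the ds-seq from the previous sketch and assuming the tech.v3.libs.parquet and tech.v3.libs.arrow namespaces (and their underlying parquet/arrow dependencies) are on the classpath; the exact writer names and argument order are assumptions worth checking against the TMD docs, and the file names are placeholders:
(require '[tech.v3.libs.parquet :as parquet]
         '[tech.v3.libs.arrow :as arrow])

;; Parquet: each dataset in the sequence is written as its own row group
;; within a single file, so the chunking comes for free.
(parquet/ds-seq->parquet "big-file.parquet" ds-seq)

;; Arrow streaming format: pass the already-chunked dataset sequence;
;; each dataset becomes one record batch in the stream.
;; NOTE: arities/argument order here are assumptions; see the
;; tech.v3.libs.parquet and tech.v3.libs.arrow documentation.
(arrow/dataset-seq->stream! "big-file.arrow" ds-seq)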
TC might be slow in certain cases; I'll check if there is something to do about it (however, 2M rows is not too much imho). TMD has many more options, as Chris described.
Thank you for the detailed responses. The reductions approach outlined above also does not work, but for a different reason. I start getting a null reference exception the moment I try to perform the aggregation on 64793 rows rather than 64792 rows (see the snippet below).
(ds-reduce/group-by-column-agg "id" {:id (ds-reduce/row-count)} (ds/head ds 64792))
;; Instantly returns the results
(ds-reduce/group-by-column-agg "id" {:id (ds-reduce/row-count)} (ds/head ds 64793))
;; Instantly fails with a null reference exception
Interestingly, this limit does not have to do with the total size of the dataset. I say this because after dropping 45 out of 50 columns, the breaking point is still 64793 rows.
The stack trace for the null reference exception can be found here: https://pastebin.com/qX68G6f2
I think that the issue may be that my source CSV contains commas in some columns, but also uses comma as a separator.
The issue was indeed the input data. With commas properly escaped, everything works smoothly.
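For reference, a minimal illustration of the quoting fix (made-up data, not from the thread): a field that contains the separator must be wrapped in double quotes, and the CSV parser behind tc/dataset then keeps it as a single value. Passing the CSV as an input stream with an explicit :file-type option is an assumption about the API being used here.
(require '[tablecloth.api :as tc])

;; Made-up CSV where the address field itself contains commas; the double
;; quotes keep each address from being split into extra columns.
(def quoted-csv
  "id,address\n1,\"12 Main St, Springfield\"\n2,\"7 Oak Ave, Shelbyville\"\n")

(tc/dataset (java.io.ByteArrayInputStream. (.getBytes quoted-csv))
            {:file-type :csv})
;; => a 2x2 dataset whose address values still contain commas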