This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2021-03-03
Channels
- # announcements (2)
- # babashka (154)
- # beginners (63)
- # calva (4)
- # cider (2)
- # clara (19)
- # clj-kondo (94)
- # cljfx (8)
- # cljs-dev (6)
- # clojars (2)
- # clojure (82)
- # clojure-australia (1)
- # clojure-europe (134)
- # clojure-italy (4)
- # clojure-nl (5)
- # clojure-serbia (11)
- # clojure-taiwan (1)
- # clojure-uk (39)
- # clojurescript (83)
- # community-development (108)
- # conjure (10)
- # cursive (32)
- # data-oriented-programming (1)
- # datomic (22)
- # defnpodcast (9)
- # depstar (4)
- # docker (3)
- # events (3)
- # figwheel-main (2)
- # funcool (9)
- # graalvm (19)
- # honeysql (23)
- # jackdaw (4)
- # jobs (4)
- # jobs-discuss (2)
- # kaocha (24)
- # leiningen (1)
- # lsp (12)
- # membrane (6)
- # off-topic (21)
- # pathom (13)
- # polylith (1)
- # releases (7)
- # remote-jobs (2)
- # reveal (8)
- # ring (7)
- # sci (2)
- # shadow-cljs (9)
- # sql (10)
- # tools-deps (21)
månmån! (morning!)
moin moin (German: morning!)
Morrrrgen! (German: "Morning!")
@chokheli I realize that the Georgian script (Mkhedruli, right?) is liked by a lot of people. I wonder if there are different variations, like sans-serif or something like that. To me it seems like there is only that single “rounded” one.
Interesting! Found this: https://fonts.ge/en/
Something I never really thought about when choosing fonts: that some alphabets are not supported.
And be careful when choosing, publishing and redistributing fonts, because fonts have licenses too.
@javahippie that’s an interesting collection. It seems to me that the Western scripts have a wider variety of styles. But that might also be because computers and electronic fonts are clearly biased toward Western scripts.
(Or even just ASCII)
If you type some letters from the Georgian alphabet into Google Fonts, you have a really hard time finding any that work. I also guess that this is because most of IT is Western-centric
Even Noto Sans only works in bold:
@U054UD60U Mkhedruli is exactly the rounded one; some font variations exist, but that’s the standard. On the other hand, previous “versions” of the Georgian alphabet were quite different, more squared - https://ka.wikipedia.org/wiki/ქართული_დამწერლობა#ანბანი
Oh nice. The German page is missing that kind of comparison. But is this used like a font (“Times” vs. “Helvetica”)?
For a thing I am writing, I’d like to persist EDN to a database in a way that I can search for (and index) certain values. Time series support is a plus. I’ve managed to avoid this topic until now; do you have any pointers and experiences? It needs to be open source, as the software I am writing will be, too. The first thing coming to mind for me is Datahike.
@javahippie I would go with datalevin probably
But that doesn't support history, if you need that, datahike probably works. Datalevin works with GraalVM which is a plus for me :)
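[Editor's sketch: a minimal example of the Datalevin route suggested above, based on its README. The path, schema, and attribute names here are invented for illustration.]

```clojure
(require '[datalevin.core :as d])

;; Only attributes that need typing/indexing have to be declared.
(def schema {:sim/id    {:db/valueType :db.type/string
                         :db/unique    :db.unique/identity}
             :sim/value {:db/valueType :db.type/double}})

;; Opens (or creates) an LMDB-backed store at the given path.
(def conn (d/get-conn "/tmp/edn-store" schema))

;; Arbitrary EDN maps can be transacted as entities...
(d/transact! conn [{:sim/id "run-1" :sim/value 0.42}])

;; ...and then searched (using the indexes) via Datalog queries.
(d/q '[:find ?v .
       :in $ ?id
       :where [?e :sim/id ?id]
              [?e :sim/value ?v]]
     (d/db conn) "run-1")

(d/close conn)
```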
Interesting, I didn’t know Datalevin. I also see Loom graph protocols on the roadmap, which is great as I use Loom. From first skimming their page, it seems like it needs LMDB as a backend technology, right? That’s not too common
It’s also still under development, and having a clusterable database underneath would be important. But still interesting, I will definitely take a look at it!
Definitely a cool project!
i had my hopes up for a moment there that i could add a datalog db to our mobile app... i don't think the clojurey bit will work though
will datalevin & asami eventually support similar use cases? Datalevin is getting loom graph protocols https://github.com/juji-io/datalevin#earth_americas-roadmap and asami durable storage https://github.com/threatgrid/asami/compare/storage
what about Crux on a local RocksDB (or LMDB)?
The ongoing friendly competition between Asami, Datalevin and Datahike really excites me, but I wish I could fast-forward about 1 year to know which one to pick for new projects.
Dang, I wasn’t even aware of all of these cool things. Sucks that Datomic didn’t work out for us back then. I’m happy to be back in SQL land, and I wonder (after Cassandra) whether some basic (for me) SQL stuff is missing from these alternatives.
Like it was in Cassandra, ugh
I mean, their briefs are all good
I’ve had to learn a lot about RDF and the Semantic Web at work and could see some clear parallels with the Datalog scene in Clojure, so I documented what I could find here: https://github.com/simongray/clojure-graph-resources#datalog
BTW Søren, you might be interested in this https://www.youtube.com/watch?v=zoOXCaZ3M2Y
I am 🙂 Saw it on reddit. Haven’t had time to see it yet
Hard to distribute what little time I have among interesting things, there are so many. “Do I really need to see another video about rules engines? I already saw one and grasped the overall concept. Will this one bring anything new to the table?” I’m constantly making decisions like this, and mostly choosing the boring, uninformative and most importantly, time-saving option. Parent life, eh!
I see that you have an easy child 😛
Crux is also a good candidate! to be honest, the architecture diagrams on their site scare me 😅
FWIW Datalevin is used in production within Juji (company) and Asami is used within Cisco
@mkvlr not sure the directions are so different. AFAIK Asami, Datalevin and Datahike are all forks of Datascript.
I've been using datahike for a while now, and it seems to work as promised (history support was important to me), the datascript/datalog query language is really quite interesting!
That language is directly based on Datomic's query language which has been around at least since 2013 (which is when I first used it) and that is based on Datalog in turn.
which is in turn a subset of prolog, which has been around for a very long time 😛
Horn clauses 😛 ?
alternatively you can also use datascript itself and read/write the EDN in a serialization format of your choosing, with a watcher, as a form of persistence; but whether that makes sense really depends on the size of the data
That works better than I want to admit, actually.
How deeply nested are the data structures you have worked with (and been happy with)?
I don’t mind too much about depth as more about the connectedness (how interconnected a graph is)
Just asking because we’ve got some fairly deep structures at work, and they seem a hassle to deal with in the query language. They’re heavy to query as well, so ideally you’d put indices in a lot of places in those structures, I guess. I was thinking that maybe it’s a dream better suited for lighter structures.
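[Editor's note: for deep structures, datascript's pull syntax can read a nested selection back out declaratively. A sketch with invented attribute names; the schema marks refs and a unique key so a lookup ref works.]

```clojure
(require '[datascript.core :as ds])

(def conn
  (ds/create-conn {:org/name     {:db/unique :db.unique/identity}
                   :org/teams    {:db/valueType   :db.type/ref
                                  :db/cardinality :db.cardinality/many}
                   :team/members {:db/valueType   :db.type/ref
                                  :db/cardinality :db.cardinality/many}}))

;; Nested maps are flattened into entities on transact.
(ds/transact! conn
  [{:org/name  "Acme"
    :org/teams [{:team/name    "Data"
                 :team/members [{:person/name "Ada"}]}]}])

;; Pull the nested shape back out without hand-written joins.
(ds/pull @conn
         [:org/name {:org/teams [:team/name
                                 {:team/members [:person/name]}]}]
         [:org/name "Acme"])
```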
Indeed! I’ve had map fatigue, it’s very real, and I really liked the O’Doyle presentation that @U4P4NREBY linked to in the Clojure Reddit recently for opening my eyes to triples. (I wasn’t mature enough to see the light when I used Datomic early in my Clojure career 🙂 )
I think Domain Modeling with Datalog (https://www.youtube.com/watch?v=oo-7mN9WXTw) is what really sparked my interest.
I think the simplicity of modeling using tuples as well as the ability to apply both graph theory and logic programming is what is drawing me to it. SQL is fine and a known quantity, but it’s not without its flaws.
It’s hard to move SQL out of the backend database, whereas the Clojure Datalog paradigm is much more universally applicable and portable.
I like the idea of datalog in the frontend.... Where it makes sense.
Also change based communication with the backend allows for a nice synchronisation story and even concurrent editing.
I mean, when you source in events from the server, you can then build your local projected data structures in whatever way lets you query them best. Like CQRS does on the server side.
Yup, something like that is the dream. I feel that there is a lot of momentum in the Clojure ecosystem towards creating that kind of thing.
like the whole Fulcro framework, but also lots of smaller libraries. And if you can decompose your data into EAV tuples, you then get to recompose it with lots of things.
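[Editor's sketch of the "decompose into EAV tuples, recompose via query" idea above, using datascript. Attribute names are invented.]

```clojure
(require '[datascript.core :as ds])

(def conn
  (ds/create-conn {:person/friends {:db/valueType   :db.type/ref
                                    :db/cardinality :db.cardinality/many}}))

;; Each map decomposes into [entity attribute value] datoms.
(ds/transact! conn [{:db/id -1 :person/name "Ada" :person/friends [-2]}
                    {:db/id -2 :person/name "Grace"}])

;; Recompose via a graph-shaped Datalog query over those triples.
(ds/q '[:find ?friend-name
        :where [?p :person/name "Ada"]
               [?p :person/friends ?f]
               [?f :person/name ?friend-name]]
      @conn)
;; => #{["Grace"]}
```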
What I like about Crux and Datahike is that you are able to plug in a JDBC datasource for persistence. The software is intended to be hosted on site, and choosing your own persistence provider, even one that supports JDBC and is clusterable on its own, would be great
yeah it can run through konserve https://github.com/replikativ/konserve
yes, jdbc, the whole storage story is abstracted so in theory it's quite easy to use on any k-v store
I've heard some performance concerns about datahike vs crux btw, also good to keep in mind
not sure datahike is optimised (for the write path at least) yet, that will come eventually
I'll second that from experience, the datahike transactions are very slow in comparison with both the leveldb and the file based backend
queries are generally very fast though
with crux, for writes you essentially benchmark Kafka, so yeah, it's fast; read-after-write on both would be more interesting
both are really cool projects, and quite different too again. I am not sure raw performance is a good metric to compare them tbh
the nice thing is that they're pretty much drop in replacements of each other from the query/transaction perspective, so as the projects mature more you can try all of them with relatively little effort
well, the core features I guess, history/loom protocols/replication etc will probably remain project dependent
nice, this is from the datalevin readme: > If you are interested in using the dialect of Datalog pioneered by Datomic®, here are your current options: > If you need time travel and rich features backed by the authors of Clojure, you should use https://www.datomic.com. > If you need an in-memory store that has almost the same API as Datomic®, https://github.com/tonsky/datascript is for you. > If you need an in-memory graph database, https://github.com/threatgrid/asami is fast. > If you need features such as bi-temporal graph queries, you may try https://github.com/juxt/crux. > If you need a durable store with some storage choices, you may try https://github.com/replikativ/datahike. > There was also https://github.com/Workiva/eva/, a distributed store, but it is no longer in active development. > If you need a simple and fast durable store with a battle tested backend, give https://github.com/juji-io/datalevin a try.
what I like about crux is that it solves a problem I have a lot in financial services and other things where the application date is more important than the transaction insertion date
I wish datomic (on-prem) was oss, it's really impressive and I really think it would be a huge boost to clj.
yeah, I'd be happier to have it fl/oss and pay for support. Had too many proprietary dbs disappear
looking at it, hitchhiker-tree seems like a useful thing for me. I do a lot of "sort these things by date under an id and then reduce the events"
tho some of the work with arrow, clojure and memory mapping being done by the http://tech.ml crew looks interesting too for different sizes of data
So I have many things to look into now, thanks 😅
I do store the raw simulations as compressed transit. I think I might go over to arrow and http://tech.ml.dataset for it in the future though https://github.com/techascent/tech.ml.dataset
@otfrom excel local files or excel stored on sharepoint/onedrive/that-office-online-thing?
Wonder if this works with Excel on S3 Buckets https://www.cdata.com/drivers/excel/jdbc/
excel makes me 😿 - it has been the source of so many data issues with its dubious habit of silently changing values in CSV files
@mccraigmccraig isn’t the problem more likely the under-standardized CSV format?
@mccraigmccraig I've started working more w/excel directly to avoid some of the csv export issues
yeah, if CSV specified type information, then the issues would go away - but they would also go away if Excel didn't make type assumptions and convert values into an assumed type. either way it's painful
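[Editor's sketch of one way to sidestep the guessing problem discussed above: read every cell as a string and convert only the columns you explicitly choose. Uses clojure.data.csv (an extra dependency); the helper names are invented.]

```clojure
(require '[clojure.data.csv :as csv]
         '[clojure.java.io :as io])

;; clojure.data.csv makes no type assumptions: every cell comes back
;; as a string, so values like "00123" or "SEPT2" survive untouched.
(defn read-rows [path]
  (with-open [r (io/reader path)]
    (doall (csv/read-csv r))))

;; Opt in to parsing per column; Long/parseLong throws on malformed
;; input instead of silently substituting a guessed value.
(defn parse-count [s]
  (Long/parseLong s))
```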
yeah, all the source data I deal with is in Excel, so mostly I'm trying to limit the number of changes they are making
Isn’t there something in the context of scientific data, R or the like, that is a robust data format, more flexible than CSV but not an “application format” like Excel? (No, not JSON)
My wife works as a statistician, and most of what they are using is CSV / other text formats
Poor souls. :)
CDF maybe?