2023-07-12
Channels
- # announcements (2)
- # babashka (22)
- # babashka-sci-dev (15)
- # beginners (62)
- # calva (2)
- # cider (8)
- # clj-kondo (33)
- # clojure (52)
- # clojure-europe (46)
- # clojure-losangeles (1)
- # clojure-norway (5)
- # clojure-spec (7)
- # clojurescript (31)
- # conjure (20)
- # data-science (4)
- # datalevin (16)
- # fulcro (28)
- # hyperfiddle (71)
- # introduce-yourself (3)
- # lsp (50)
- # off-topic (16)
- # polylith (8)
- # portal (3)
- # practicalli (1)
- # reitit (1)
- # releases (2)
- # tools-build (22)
- # vim (8)
- # xtdb (17)
Does Datalevin have a restriction of 2^20 attributes like Datomic, or is it like XTDB, where each entity can have as many attributes as it wants?
[1] What are the best practices for scaling dtlv once we exceed the total data size limit?
[2] If I'd like to store relational/tuple/record data on the EAV model for its elasticity, what is the performance gap compared with a relational DB?
[3] The dtlv README says: "The total data size of a Datalevin database has the same limit as LMDB’s, e.g. 128TB on a modern 64-bit machine that implements 48-bit address spaces." But that seems to refer to the OS's virtual memory. What is the actual size limit if I have, say, a server with a 30GB SSD and 8GB of memory? Thanks.
2. Currently there is a large gap. That's the issue we will solve with the query engine rewrite; our plan is to bring query speed on par with an RDBMS. You can read about the plan and WIP in the query branch.
3. Less than 30GB in your case. We use LMDB, which uses mmap, so the data size limit is the OS virtual memory size if disk space is unlimited; otherwise it is whichever of the two is reached first.
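Regarding [2], here is a rough sketch of what storing a relational-style row as EAV attributes can look like in Datalevin. The attribute names, data, and on-disk path are invented for illustration; the connection, transact, and query calls follow the Datomic-flavored API shown in the Datalevin README.

```clojure
(require '[datalevin.core :as d])

;; Hypothetical schema: one "row" of an employees table becomes one entity,
;; with each column stored as an attribute (EAV). Names are illustrative only.
(def schema
  {:employee/id   {:db/valueType :db.type/long
                   :db/unique    :db.unique/identity}
   :employee/name {:db/valueType :db.type/string}
   :employee/dept {:db/valueType :db.type/string}})

(def conn (d/get-conn "/tmp/datalevin/demo" schema))

;; Transact two "rows".
(d/transact! conn
             [{:employee/id 1 :employee/name "Ada"   :employee/dept "Eng"}
              {:employee/id 2 :employee/name "Grace" :employee/dept "Eng"}])

;; A SELECT-style Datalog query over those attributes.
(d/q '[:find ?name
       :where
       [?e :employee/dept "Eng"]
       [?e :employee/name ?name]]
     (d/db conn))
;; => #{["Ada"] ["Grace"]}

(d/close conn)
```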
That's fine. What informal comparison would you subjectively draw on read and write performance?
It really depends on data size and query. For small datasets the difference wouldn't be large; for large datasets and complex queries, it will be problematic.
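To make that "simple vs. complex" distinction concrete, here is a hedged sketch contrasting a direct attribute lookup with a query that joins across entities, which is the shape where the performance gap relative to an RDBMS tends to show up on large data. The schema and sample data are invented for illustration.

```clojure
(require '[datalevin.core :as d])

;; Illustrative schema with a ref attribute so entities can be joined.
(def schema
  {:order/id      {:db/valueType :db.type/long
                   :db/unique    :db.unique/identity}
   :order/product {:db/valueType :db.type/ref}
   :product/name  {:db/valueType :db.type/string}})

(def conn (d/get-conn "/tmp/datalevin/orders" schema))

;; Negative :db/id acts as a tempid so the order can reference the product.
(d/transact! conn
             [{:db/id -1 :product/name "Widget"}
              {:order/id 100 :order/product -1}])

;; Cheap shape: a direct lookup on a single attribute.
(d/q '[:find ?o :where [?o :order/id 100]] (d/db conn))

;; Costlier shape: joining across entities; with large datasets and more
;; clauses like this, query planning and join order start to matter.
(d/q '[:find ?id ?pname
       :where
       [?o :order/id ?id]
       [?o :order/product ?p]
       [?p :product/name ?pname]]
     (d/db conn))

(d/close conn)
```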