This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2024-02-05
Thanks for the reply Paula. The reason I asked is because I’m in the middle of trying to figure out how to use SQLite as in-process storage for a Datomic database. It’s a struggle, at least for me. The rationale is to start with simple, in-process storage, and if the app I’m working on scales, then I can switch storage to a distributed approach. Your documentation is certainly much better than Datomic’s, and it is much easier to get started with Asami in production. So I’m wondering if I should just use Asami.
Well, if you’re in memory, then storage is actually pretty robust. There are various tweaks, but it’s just a set of nested hashmaps. It’s hard to get that wrong.
If querying were to have a bug in it, you can still just pull data out with simpler interfaces at any time.
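To make the “nested hashmaps” idea concrete, here is a toy sketch of an entity-attribute-value index as plain Clojure maps. This is an illustration only, not Asami’s actual internals, but it shows why that kind of storage is hard to get wrong and why data can always be pulled out with simple lookups:

```clojure
;; A toy EAV index as nested maps: entity -> attribute -> set of values.
;; NOT Asami's real implementation, just an illustration.
(def eav-index
  {:node-1 {:name #{"Asami"}
            :type #{:graph-db}}
   :node-2 {:name #{"Mulgara"}
            :type #{:graph-db}}})

;; "Pulling data out with a simpler interface" is then just a map lookup,
;; no query engine involved:
(get-in eav-index [:node-1 :name])
;; => #{"Asami"}

;; Adding a triple is an update into the nested structure:
(defn add-triple [index e a v]
  (update-in index [e a] (fnil conj #{}) v))

(add-triple eav-index :node-1 :version "2.3")
```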
OK… that’s a different risk profile, but I think it’s pretty good. There are faster systems, but it’s been kept simple, which is why I think it works well.
It shares many of its features with http://mulgara.org/, which was used for nearly 20 years in many commercial and government applications, but Asami’s approach to persistence makes it simpler and faster.
I’m building an app to finance climate solutions. It might scale, it might not. If it scales, and Asami can no longer handle the load (I have no idea at this point … just trying to plan ahead) what could be done to deal with that?
So… I know I’m being immodest, but in terms of storage and retrieval, I do think it’s robust
At that point you’ll be looking for a different storage mechanism. I do have plans to do more there, but this is where the lack of commercial support could hurt. What I would do then, is port the data to a new graphdb. But that’s actually easy to do. (IMO)
Though, I personally would ensure the data was RDF compliant and move to a big RDF system at that point
How do I ensure the data is RDF compliant? I’m a part-time programmer, part-time engineer, part-time sales lead, part-time everything …
Is it just a question of triples? Entity-attribute-value? Or are there some data types that are not compliant?
You can use vectors as IDs, which isn’t compliant. Most other things will work, with varying amounts of effort
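As a sketch of what fixing that up could look like, here is one way to rewrite vector-valued IDs into keyword IDs before exporting triples. The helper names and the naming scheme are assumptions for illustration, not part of Asami’s API:

```clojure
(require '[clojure.string :as string])

;; Hypothetical helper: turn a vector ID into an RDF-friendly keyword by
;; deriving a stable name from its contents. Any deterministic, unique
;; naming scheme would do; this one is just an assumption.
(defn compliant-id [id]
  (if (vector? id)
    (keyword "id"
             (string/join "-" (map #(if (keyword? %) (name %) (str %)) id)))
    id))

;; Walk a seq of [entity attribute value] triples, remapping any
;; vector IDs in the entity or value position.
(defn remap-triples [triples]
  (for [[e a v] triples]
    [(compliant-id e) a (compliant-id v)]))

(remap-triples [[[:user 42] :name "Paula"]])
;; => ([:id/user-42 :name "Paula"])
```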
This is actually a good point… maybe I should build a tool to automate this for people :thinking_face:
On the other hand… I’m the only one maintaining it, and I’m doing 20 things at the same time right now. That makes me nervous.