#asami
2024-02-05
quoll 21:02:04

Good question! On one hand… Cisco Systems thought it was 🙂

nando 10:02:39

Thanks for the reply Paula. The reason I asked is that I’m in the middle of trying to figure out how to use SQLite as in-process storage for a Datomic database. It’s a struggle, at least for me. The rationale is to start with simple, in-process storage and, if the app I’m working on scales, then switch storage to a distributed approach. Your documentation is certainly much better than Datomic’s, and it is much easier to get started with Asami in production. So I’m wondering if I should just use Asami.

quoll 11:02:02

Well, if you’re in memory, then storage is actually pretty robust. There are various tweaks, but it’s just a set of nested hash maps. It’s hard to get that wrong.
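
For reference, here is a minimal sketch of that in-memory setup, using Asami’s documented API (the database name and the sample entities are arbitrary examples):

```clojure
(require '[asami.core :as d])

;; In-memory databases sit behind an asami:mem:// URI; the storage
;; itself is the nested-map structure described above.
(def db-uri "asami:mem://example")
(d/create-database db-uri)
(def conn (d/connect db-uri))

;; transact returns a future; deref it to wait for completion.
@(d/transact conn {:tx-data [{:db/ident :alice :name "Alice"}
                             {:db/ident :bob   :name "Bob"}]})

(d/q '[:find ?name :where [_ :name ?name]] (d/db conn))
;; => (["Alice"] ["Bob"])  (order not guaranteed)
```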

quoll 11:02:20

If querying were to have a bug in it, you can still just pull data out with simpler interfaces at any time.
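As an example of such a simpler interface, Asami’s entity function pulls a whole entity back without going through the query engine. A sketch, reusing conn and the :db/ident values from the block above:

```clojure
(require '[asami.core :as d])

;; Look up an entity by its :db/ident, bypassing Datalog entirely.
(d/entity (d/db conn) :alice)
;; => {:name "Alice"}
```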

quoll 11:02:40

(Also, please tell me and I’ll fix the bug!)

nando 11:02:16

I’d be working with storage to disk.

quoll 11:02:45

OK… that’s a different risk profile, but I think it’s pretty good. There are faster systems, but it’s been kept simple, which is why I think it works well.
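Switching from memory to disk is just a change of connection URI; a minimal sketch (finance-db is a made-up name):

```clojure
(require '[asami.core :as d])

;; asami:local:// keeps the graph in files on disk, in a directory
;; named after the database (the name here is made up).
(def db-uri "asami:local://finance-db")
(d/create-database db-uri)
(def conn (d/connect db-uri))

@(d/transact conn {:tx-data [{:db/ident :project-1
                              :name     "Example project"}]})

;; After a restart, connecting to the same URI reopens the
;; persisted indexes and the data is still there.
(d/q '[:find ?n :where [_ :name ?n]] (d/db conn))
```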

quoll 11:02:07

It shares many of its features with http://mulgara.org/, which was used for nearly 20 years in many commercial and government applications, but its persistence is simpler and faster.

nando 11:02:13

I’m building an app to finance climate solutions. It might scale, it might not. If it scales, and Asami can no longer handle the load (I have no idea at this point … just trying to plan ahead) what could be done to deal with that?

quoll 11:02:24

So… I know I’m being immodest, but in terms of storage and retrieval, I do think it’s robust

quoll 11:02:38

At that point you’ll be looking for a different storage mechanism. I do have plans to do more there, but this is where the lack of commercial support could hurt. What I would do then is port the data to a new graph DB. But that’s actually easy to do (IMO).

quoll 11:02:00

Though, I personally would ensure the data was RDF compliant and move to a big RDF system at that point

quoll 11:02:30

Because there are lots of commercial vendors who offer a lot of support there.

quoll 11:02:00

This is an easy change to make, because I built it all around RDF principles.
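To make the porting step concrete: every statement can be pulled out with a wildcard query, which is the raw material an RDF exporter would serialize. A sketch, reusing conn from the earlier blocks:

```clojure
(require '[asami.core :as d])

;; All statements in the graph as entity/attribute/value tuples.
;; An exporter would map these to RDF terms and write them out,
;; e.g. as N-Triples.
(def all-triples
  (d/q '[:find ?e ?a ?v
         :where [?e ?a ?v]]
       (d/db conn)))
```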

nando 11:02:19

How do I ensure the data is RDF compliant? I’m a part-time programmer, part-time engineer, part-time sales lead, part-time everything …

nando 11:02:22

Is it just a question of triples? Entity, attribute, value? Or are there some data types that are not compliant?

quoll 11:02:43

You can use vectors as IDs, which isn’t compliant. Most other things will work, with varying amounts of effort

quoll 11:02:47

e.g., if you use a lot of keywords, you can map them to URIs
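A sketch of that kind of mapping, in plain Clojure; the base URI is an arbitrary placeholder, not something Asami defines:

```clojure
;; Hypothetical keyword -> URI mapping for an RDF export.
(def base-uri "http://example.org/vocab/")

(defn kw->uri
  "Turns a keyword like :project/name into a URI string under base-uri."
  [kw]
  (str base-uri
       (when-let [ns (namespace kw)]
         (str ns "/"))
       (name kw)))

(kw->uri :name)         ;; => "http://example.org/vocab/name"
(kw->uri :project/name) ;; => "http://example.org/vocab/project/name"
```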

quoll 11:02:27

This is actually a good point… maybe I should build a tool to automate this for people 🤔

nando 12:02:48

The data in my case will be simple. Thanks for your help Paula. I appreciate it.

quoll 12:02:07

You’re welcome

quoll 21:02:40

On the other hand… I’m the only one maintaining it, and I’m doing 20 things at the same time right now. That makes me nervous.

quoll 21:02:48

But I try to respond to bugs quickly

quoll 21:02:16

In general, I’ve been surprised at how robust it has proven to be. So that’s good. And I should remember that no production system is perfect.

quoll 21:02:30

But with a couple of exceptions it’s almost entirely my personal code, and I feel like I’m letting my ego get out of control if I say that it’s great!

❤️ 1