#rdf
2017-02-14
nblumoe07:02:16

Thanks for the information @rickmoynihan ! I will hopefully have a deeper look at grafter soon. At the moment I have to spend too much time designing an ontology (mostly using Protege, but I had a look at Tawny-OWL too). I am not so happy with the tooling for that, I have to say. Especially a good, interactive visual representation seems to be lacking.

nblumoe07:02:04

So grafter might become more important once we put more effort into actually processing data. The tabular -> rdf aspect might not be needed but I am curious about the RDF processing you mentioned.

nblumoe07:02:30

About Incanter: I think your assessment is fair, unfortunately. Every once in a while I consider taking some action on this and driving it forward. But then I just end up using R for the stuff I would be using Incanter for (algorithm development and data exploration). For the things where I did some actual analysis with Clojure I am using core.matrix directly. So I do not have such a strong need for Incanter myself, the (former) contributors became quite inactive, and IMHO Incanter could use a good amount of refactoring and splitting up. I have always wondered about Incanter's usage: how actively is it being used, and what are people doing with it?

rickmoynihan09:02:15

@nblumoe: seems like we’re entirely in agreement about incanter then. For a while I had thoughts about moving the tabular side of grafter onto core.matrix - and implementing some lazy versions of the Dataset protocols… However if we were to make a radical shift on the ETL side of things I’d be much more inclined to build on top of transducers - i.e. viewing an ETL job as a transduction; and I doubt that would be compatible with the core.matrix protocols.
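[A minimal sketch of the "ETL job as a transduction" idea, using only Clojure core — the step names and sample rows here are made up for illustration, not grafter APIs:]

```clojure
;; Each ETL step is an ordinary transducer; the whole job is their
;; composition, applied once to any source of rows.
(def etl-xform
  (comp
   (map #(update % :age inc))              ; transform a column
   (filter #(> (:age %) 18))               ; drop rows that fail a predicate
   (map #(select-keys % [:name :age]))))   ; project the columns we keep

(def rows
  [{:name "a" :age 17 :city "x"}
   {:name "b" :age 20 :city "y"}])

;; "Load" is just the reducing context -- here, into a vector:
(into [] etl-xform rows)
;; => [{:name "b", :age 21}]
```

[The appeal is that the same `etl-xform` works unchanged over lazy seqs, channels, or `transduce` with any output sink — which is exactly what makes it hard to reconcile with the core.matrix Dataset protocols.]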

rickmoynihan10:02:50

Regarding the grafter/RDF processing side of things… it’s quite cleanly separated from the rest (just not at the project.clj dependency level), so 0.9 or 0.10 will likely be broken into separate Clojars artifacts. If you have any questions about it I can answer them… but we have pretty good support for all Sesame repositories (native disk-stored ones, memory repos (great for testing), & remote SPARQL repos too), plus reading/writing RDF in basically any serialisation format, and, as I said, coercions of data from Sesame RDF types to native Clojure/Java ones — with a protocol layer that basically makes native types all behave like RDF types, e.g. you can do (datatype-uri 10) and get the proper URI back, etc… also coercions for #inst’s (java Date’s/DateTime’s etc…).
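[A toy sketch of the protocol-layer idea described above — the protocol and method names here are hypothetical stand-ins, not grafter’s actual API. The point is just that extending a protocol to native Java types lets plain values answer RDF questions directly:]

```clojure
;; Hypothetical mini-protocol: native types report their XSD datatype URI.
(defprotocol IRDFLiteral
  (datatype-uri [x] "Return the XSD datatype URI for this value."))

(extend-protocol IRDFLiteral
  Long
  (datatype-uri [_] "http://www.w3.org/2001/XMLSchema#long")
  String
  (datatype-uri [_] "http://www.w3.org/2001/XMLSchema#string")
  java.util.Date
  (datatype-uri [_] "http://www.w3.org/2001/XMLSchema#dateTime"))

;; Plain Clojure/Java values now behave like RDF literals:
(datatype-uri 10)
;; => "http://www.w3.org/2001/XMLSchema#long"
```

[Grafter’s real layer covers more types and round-trips through Sesame’s value classes; this only illustrates the extension-via-protocol shape.]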

rickmoynihan10:02:16

longer term I have plans to improve the shape of the API (core.rdf — currently WIP, and not updated for 7 months) and implement it as a generic protocol layer that’s extended to both RDF4j and Sesame… Since then @stain has independently started a similar project with the same aim — and we’ve spoken about potentially collaborating on something, but so far no movement on that.

rickmoynihan10:02:13

Anyway, we have lots of plans to improve things in future versions, and we’re making incremental improvements/updates all the time — but we have to move relatively slowly because we have a lot of code built on top of it.

rickmoynihan10:02:18

0.8.x-SNAPSHOT is the latest btw — it’ll be merged to master soon — maybe in the next month or so… and it’s basically stable for anything you’re likely to use it for.