2017-01-26
I was wondering if it's possible to pass optional arguments to a Datomic query, as in this example: https://gist.github.com/jdkealy/174741e33b88b09b66e6f0281e3cd6ca. I saw this Google Groups question, which was pretty similar to mine, and the answer (though fairly complex) showed that yes, you can have optional arguments, like optional attribute names, in a Datomic query: https://groups.google.com/forum/#!topic/datomic/20hHmzXK3PE. My query has a certain structure to it, and while the gist above works, it unnecessarily duplicates code.
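A minimal sketch of one way to avoid the duplication (not taken from the gist; db, :item/name and :item/category are illustrative names): build the query map conditionally, so each optional input only contributes an :in entry and a :where clause when a value is actually supplied.

(require '[datomic.api :as d])

;; Hypothetical example: find items, optionally constrained by name and/or category.
(defn find-items [db {:keys [name category]}]
  (let [query (cond-> '{:find  [?e]
                        :in    [$]
                        :where [[?e :item/name]]}
                name     (-> (update :in conj '?name)
                             (update :where conj '[?e :item/name ?name]))
                category (-> (update :in conj '?category)
                             (update :where conj '[?e :item/category ?category])))
        args  (cond-> [db]
                name     (conj name)
                category (conj category))]
    ;; returns a set of [?e] tuples
    (apply d/q query args)))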
@dominicm why do you want the field to be gone?
@val_waeselynck it's incredibly unperformant to query details about that field.
We are using the pull syntax to retrieve nested data structures (quite big ones sometimes). Now we want to add auditing, so that for every value that pull returns we also want to retrieve who was adding the fact and when. We are free to model the schema to solve this problem, because we are not in production yet. We need a simple solution and we own all the code that is writing to the database. It seems like the pull syntax does not have support to also retrieve the tx-id for every attribute, for example not only first-name and last-name but also first-name-tx-id and last-name-tx-id, which would otherwise solve our problem, because then we could use the log database.
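Pull itself can't return the transaction per attribute, but a datalog query over the raw datoms can; a minimal sketch (db, person-eid and the attribute names are illustrative), which also pairs naturally with the reified-transaction approach mentioned below for recording who made the change:

;; Sketch: for one entity, return each attribute alongside the transaction
;; that asserted it and that transaction's wall-clock time.
(d/q '[:find ?attr ?value ?tx ?when
       :in $ ?e
       :where
       [?e ?a ?value ?tx]
       [?a :db/ident ?attr]
       [?tx :db/txInstant ?when]]
     db person-eid)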
@dominicm so why do you query it? You can just stop referring to this attribute in your queries, right ?
@val_waeselynck it seems to me under the terms of Stu's recommendations (never destroy a field), queries must maintain references to deprecated attributes. Or am I misreading?
I don't think that's what he meant. You don't delete a deprecated attribute from the schema, but you can totally stop writing it and querying it.
@robert-stuttaford may I ask what drove your choice to compute-optimized vs general-purpose or memory-optimized?
We see this occasionally when a nightly job tries to open a datomic connection: HornetQException[errorType=INTERNAL_ERROR message=HQ119001: Failed to create session
What does this message mean?
@val_waeselynck we don’t do any of our own caching; we generate html afresh for every request. thank you Datomic peer library.
@val_waeselynck if the intention is to grow schema & never break, then no longer writing to the old attribute seems like it would break existing queries which are looking for new data based on that field?
All the examples online (including datomic docs) seem to have murky or incomplete examples (another acceptable answer is that I am a moron :D)
@erichmond Have you seen the reified transactions talk? http://www.datomic.com/videos.html
@erichmond tl;dr: {:db/id “datomic.tx” :your/thing “here”}
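Expanded slightly, a minimal sketch of annotating the transaction entity (:person/first-name and :audit/user are made-up attributes that would need to exist in the schema):

;; "datomic.tx" is the temp id of the transaction being processed,
;; so these facts land on the transaction entity itself.
@(d/transact conn
   [{:db/id "new-person" :person/first-name "Ada"}
    {:db/id "datomic.tx" :audit/user "jane@example.com"}])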
@dominicm, well I'm assuming that if it's deprecated then at some point you actually stop using it 🙂
@val_waeselynck I think I'm struggling to tease out when "stop using it" is different from removing it, in cases where you're no longer adding entities with that field, but you will still be breaking old programs, as they can no longer get-latest-xyz
@casperc have you tried simply (count (seq (d/datoms db :eavt)))
?
@dominicm ah, so there are some clients that you don't control which are accessing a field, and you'd like to change the implementation of reading from that field?
@val_waeselynck This is slightly into the territory of theory now, I'll admit. But if you control all clients, is the only reason to "grow" schema to minimize the impact of refactorings?
@casperc In addition to Robert's approach, you can get a count of datoms from the Datoms metric line in the log.
@dominicm ease of deployment and developer environment management is also a concern.
@val_waeselynck not sure how growth plays into those?
@robert-stuttaford I'm sorry, I think I'm missing a step in your reasoning here.
could you elaborate please?
@val_waeselynck well, remember that we have mature apps that Do Lots Of Stuff for a fair number of users. we’re using java apps that love to make threads, so we decided to give the jvm plenty of compute to handle that
having said that, we were on t2.mediums for a good long while
@robert-stuttaford I see. Funny, I would have thought a Datomic Peer would have more pressure on memory and IO, what with the Object Cache and the loading of segments
we haven’t got any of our own caching code (e.g. for html view fragments; we’ve been leaning on the fact that Datomic caches datoms in the peer). so, trading paying for cpu over writing and maintaining and working with custom caching code
keeps our code flexible and our ability to reason less burdened
at the cost of more AWS bill 🙂
We do use our own caching for aggregations, but I guess you guys have Onyx for that 🙂
@jaret: I have looked in the transactor log, but I am not finding it. Can you give an example of what I should be looking for?
2017-01-12 16:44:24.642 INFO default datomic.update - {:IndexSegments 477, :FulltextSegments 71, :Datoms 1113239, :IndexDatoms 2228416, :event :update/adopted-index, :id "my-test-db", :pid 8009, :tid 12}
Does datalog support any notion of GroupBY?
@jaret: And Datoms is the one I want to watch out for, so that it is not bigger than 10 billion?
Yeah you'll want to watch Datoms. If you think you are approaching 10 billion we should arrange a call to discuss.
@drewverlee Datalog itself does not have aggregation built in, but Datomic's query supports aggregate functions in the :find clause, and you can control what gets grouped and counted via :with
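A minimal sketch of the grouping behaviour (attribute names are illustrative): the non-aggregated find variable acts as the GROUP BY key, and :with keeps duplicate values from collapsing under set semantics before the aggregate runs.

;; Hypothetical example: total population per country.
;; ?city is included via :with so two cities with equal populations
;; are not collapsed into one row before summing.
(d/q '[:find ?country (sum ?population)
       :with ?city
       :where
       [?city :city/country ?country]
       [?city :city/population ?population]]
     db)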
@jaret thanks!
@jaret: Last question: I have been looking for documentation on the events in the datomic log, but have come up with nothing. Do you have some around that I am not finding?
Sorry to repeat this, does anyone know under what conditions this message is shown (datomic pro)? HornetQException[errorType=INTERNAL_ERROR message=HQ119001: Failed to create session
@casperc I am sure you found the monitoring section of the docs. http://docs.datomic.com/monitoring.html What events are you specifically talking about?
@pesterhazy is this in the peer?
@jaret, yes, we get this occasionally when it tries to connect
in periodic nightly jenkins jobs
@jaret: Well taking the :update/adopted-index event as an example. Some explanation of what it means would be useful.
@pesterhazy I would want to see what is going on with the transactor at the time of these exceptions. A metrics report before and after if nothing stands out around the time stamp
HQ exceptions are a bit general and might not indicate a real problem as long as HQ eventually connects
good point, I'll check the transactor's log on s3
@robert-stuttaford haha thanks!
@casperc I agree, I think this would be a useful addition to the documentation. I am going to look through the events we report and see if we can create a table in the docs with a basic definition. I know the rule of thumb was to make the name as self-evident as possible, so the adopted-index event, for instance, represents the adoption of a new index after indexing has completed.
@jaret, in the transactor log I see a couple of warnings like this around the time of the connection failure
@val_waeselynck Nice post. Wanted to ask
As we've seen, adding an attribute (the equivalent of adding a column or table in SQL) is straightforward. You can just reinstall your whole schema at deployment time. Same thing for database functions.
When we add an attribute, how do we ensure type safety when querying that attribute as of an older point in time, when the attribute did not exist? I assume the application code must be full of null checks? Am I right in thinking so? And
Modifying an attribute (e.g. changing the type of :person/id from :db.type/uuid to :db.type/string) is more problematic, and I suggest you do your best to avoid it. Try to get your schema right in the first place; experiment with it in the in-memory connection before committing it to durable storage. If you have committed it already, consider versioning the attribute (e.g. :person.v2/id).
Isn’t there any better approach to this rather than just trying to avoid it?
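A minimal sketch of the attribute-versioning route suggested above, assuming a :person/id attribute originally typed as uuid and a peer recent enough that schema maps can omit :db/id: install the new attribute alongside the old one, then backfill it from the existing values.

;; 1. Install the new, string-typed attribute next to the old uuid one.
@(d/transact conn
   [{:db/ident       :person.v2/id
     :db/valueType   :db.type/string
     :db/cardinality :db.cardinality/one
     :db/unique      :db.unique/identity}])

;; 2. Backfill: copy every existing :person/id value into :person.v2/id.
(let [db (d/db conn)]
  @(d/transact conn
     (for [[e old-id] (d/q '[:find ?e ?id :where [?e :person/id ?id]] db)]
       [:db/add e :person.v2/id (str old-id)])))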
bin\run -m datomic.peer-server -p 8998 -a obviouslynotmykey,mysecret -d firstdb,datomic:
is there anything wrong in this command?
there's this error happening
n: [:db "firstdb" 1] val: nil fails spec: :datomic.peer-server/non-empty-string at: [:db 1] predicate: string?
In: [:auth "obviouslynotmykey" 1] val: nil fails spec: :datomic.peer-server/non-empty-string at: [:auth 1] predicate: string?
@vinnyataide I just checked your command on Windows and in iTerm on macOS. It worked in both cases
jbin at Jarets-MacBook-Pro in ~/Desktop/Jaret/Tools/releasetest/5554/datomic-pro-0.9.5554
$ bin/run -m datomic.peer-server -p 8998 -a obviouslynotmykey,mysecret -d firstdb,datomic:
Serving datomic: as firstdb
I ran inside a datomic-pro folder
latest 0.9.5554
lol, it worked with this string
but not with my actual key
wtheck
So the key you use when starting the peer is not the license key you get from http://my-datomic.com
it has to be a non-empty string and there may be other requirements which caused you to have an error
ok the issue appears when using powershell only
sorry for bothering
that causes whatever you’re putting in for the key to get sent as something that fails the “must be a string” check
it is resolved now; I just didn't know the architecture yet
probably
We switched to using a single :db/uuid attribute for all our entities' identity. For integration purposes I’m now considering using distinct :<entity-type>/uuid attrs instead. However I’m not grasping the relative trade-offs of each approach. Is anyone familiar with this?
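For reference, a sketch of the two shapes being compared (all attribute names here are illustrative, including :entity/uuid as a stand-in for the shared attribute); the per-type variant gives you per-type uniqueness and a cheap signal of what kind of entity an id refers to, at the cost of one attribute per entity type.

;; Option A: one shared identity attribute for every entity type.
[{:db/ident       :entity/uuid
  :db/valueType   :db.type/uuid
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]

;; Option B: one identity attribute per entity type.
[{:db/ident       :customer/uuid
  :db/valueType   :db.type/uuid
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}
 {:db/ident       :order/uuid
  :db/valueType   :db.type/uuid
  :db/cardinality :db.cardinality/one
  :db/unique      :db.unique/identity}]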
So Datomic can use Cassandra, which gets used frequently for IoT data because it's a column store. But I don’t see much “buzz” around Datomic as a “timeseries” database. Why is that?
It seems possible to express similar queries through Datomic-on-Cassandra as I could envision with the Kairos client (https://kairosdb.github.io/), but I don’t know where to begin comparing the two.
@seantempesta need to see the query to figure out what’s going on.
(regarding my previous question) ah never mind. looks like it was just a sample data issue.
Bumping a question from last week: When running datomic/bin/maven-install I saw this error message toward the end; is it anything to worry about?
[WARNING] Some problems were encountered while building the effective model for com.datomic:datomic-pro:jar:0.9.5544
[WARNING] 'dependencies.dependency.exclusions.exclusion.artifactId' for com.datastax.cassandra:cassandra-driver-core:jar with value '*' does not match a valid id pattern. @ line 125, column 18
[WARNING]
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
@abhir00p the issue you raised is one of the reasons why querying historical dbs in datomic is much more limited than people think. I recommend to simply not do it in application code.
what happened to datomic.api?
clojure.lang.ExceptionInfo: Could not locate datomic/api__init.class or datomic/api.clj on classpath.
file: "om_learn\\db.clj"
line: 1
Hi. I have a question that might be simple for someone who knows more than me… How do you bind an entity to itself in a datalog where clause? Here’s a contrived example. I want a query that, given a parent, returns the whole nuclear family (including the passed-in parent):
:find ?p2
:in $ ?p1
:where
(or
[?p1 :person/children ?p2]
[?p1 :person/partner ?p2]
[(= ?p1 ?p2)])
This throws an exception: :db.error/insufficient-binding [?p2] …
@bballantine do you mean "or"?
I think I mean or… either ?p2 is a child or a spouse or the parent herself
to "rename" a var (for binding purposes) use identity
, example here: https://groups.google.com/d/msg/datomic/_YiBRnBkeOs/0Gd-6lJmDwAJ
alright, let me play with that.. thanks @favila
@favila it works, thanks — although I have to admit I don’t quite understand it.
If I just have [?p1 ?p2], it also works.
After chatting about it with my colleague, I get why your example works. Thanks @favila
@bballantine care to elaborate? not sure i do.. =)
@spieden - someone should correct me if I’m wrong, but [(identity ?p1) ?p2] is the syntax to bind something to the result of a function. So this binds ?p2 to the value of (identity ?p1).
In this snippet, (and [?p1] [(identity ?p1) ?p2]), the [?p1] part ensures that ?p1 isn’t nil
(I think)
yeah @spieden I think that’s right
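Putting the thread together, the full query would look roughly like this (a sketch; db and parent-eid are assumed bindings):

;; Sketch: return the parent's children, partner, and the parent itself.
;; The (and [?p1] [(identity ?p1) ?p2]) branch unifies ?p2 with ?p1 without
;; triggering the insufficient-binding error that [(= ?p1 ?p2)] produced.
(d/q '[:find ?p2
       :in $ ?p1
       :where
       (or
         [?p1 :person/children ?p2]
         [?p1 :person/partner ?p2]
         (and [?p1] [(identity ?p1) ?p2]))]
     db parent-eid)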