#datomic
2017-01-26
jdkealy01:01:07

I was wondering if it's possible to pass optional arguments to a Datomic query, as in this example: https://gist.github.com/jdkealy/174741e33b88b09b66e6f0281e3cd6ca. I saw this Google Groups question, which was pretty similar to mine: https://groups.google.com/forum/#!topic/datomic/20hHmzXK3PE. The answer there was pretty complex, but it showed that yes, you can have optional arguments (like optional attribute names) in a Datomic query. My query has a particular structure, though, and while the gist above works, it duplicates code unnecessarily.
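[One common pattern for this (a sketch with hypothetical attributes, not taken from the gist) is to build the query map programmatically, adding a `:where` clause only for each argument that is actually present:]

```clojure
;; Sketch: make "optional arguments" a matter of query construction.
;; The attributes :item/name, :item/status and :item/owner are hypothetical.
(defn search-query [{:keys [status owner]}]
  (cond-> '{:find  [?e]
            :in    [$]
            :where [[?e :item/name _]]}
    status (update :where conj ['?e :item/status status])
    owner  (update :where conj ['?e :item/owner owner])))

;; usage sketch:
;; (d/q (search-query {:status :active}) db)
```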

val_waeselynck03:01:41

@dominicm why do you want the field to be gone?

dominicm07:01:06

@val_waeselynck it's incredibly unperformant to query details about that field.

casperc09:01:32

Anyone know what the best way to count the total number of datoms in a database is?

casperc09:01:00

Is it output via some metric in the transactor logs?

tengstrand09:01:31

We are using the pull syntax to retrieve nested data structures (quite big ones sometimes). Now we want to add auditing: for every value that pull returns, we also want to retrieve who added the fact and when. We are free to model the schema to solve this problem, because we are not in production yet; we need a simple solution, and we own all the code that writes to the database. It seems the pull syntax has no support for also retrieving the tx-id for every attribute (for example, not only first-name and last-name but also first-name-tx-id and last-name-tx-id), which would otherwise solve our problem, because then we could use the log database.
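[For what it's worth, one way to get per-attribute provenance (a hedged sketch, not discussed further in this thread) is a plain query instead of pull: the fourth component of every datom is its transaction, which you can bind and join to `:db/txInstant`. `person-eid` and `:person/first-name` below are placeholders.]

```clojure
(require '[datomic.api :as d])

;; Sketch: bind the tx of the datom asserting attribute ?a on ?e,
;; then look up when that transaction happened.
(d/q '[:find ?v ?tx ?inst
       :in $ ?e ?a
       :where
       [?e ?a ?v ?tx]
       [?tx :db/txInstant ?inst]]
     db person-eid :person/first-name)
```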

val_waeselynck09:01:27

@dominicm so why do you query it? You can just stop referring to this attribute in your queries, right ?

dominicm09:01:27

@val_waeselynck it seems to me under the terms of Stu's recommendations (never destroy a field), queries must maintain references to deprecated attributes. Or am I misreading?

val_waeselynck09:01:19

I don't think that's what he meant. You don't delete a deprecated attribute from the schema, but you can totally stop writing it and querying it.

val_waeselynck09:01:50

@robert-stuttaford may I ask what drove your choice to compute-optimized vs general-purpose or memory-optimized?

pesterhazy10:01:55

We see this occasionally when a nightly job tries to open a datomic connection: HornetQException[errorType=INTERNAL_ERROR message=HQ119001: Failed to create session

pesterhazy10:01:18

What does this message mean?

robert-stuttaford10:01:27

@val_waeselynck we don’t do any of our own caching; we generate html afresh for every request. thank you Datomic peer library.

dominicm10:01:20

@val_waeselynck if the intention is to grow schema & never break, then no longer writing to the old attribute seems like it would break existing queries which are looking for new data based on that field?

erichmond12:01:15

Does anyone have a good solid example of adding additional data to transactions?

erichmond12:01:46

All the examples online (including datomic docs) seem to have murky or incomplete examples (another acceptable answer is that I am a moron :D)

danielstockton12:01:47

@erichmond Have you seen the reified transactions talk? http://www.datomic.com/videos.html

erichmond12:01:41

watching now! thanks !

robert-stuttaford12:01:17

@erichmond tl;dr: {:db/id “datomic.tx” :your/thing “here”}
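[Expanded a little (a hedged sketch; `:audit/user` is a hypothetical attribute you would first install in your schema): the `"datomic.tx"` tempid resolves to the current transaction's own entity, so extra facts in that map land on the transaction itself.]

```clojure
(require '[datomic.api :as d])

;; Sketch: annotate the transaction entity alongside a normal assertion.
@(d/transact conn
   [{:db/id "datomic.tx"            ; or (d/tempid :db.part/tx)
     :audit/user "erich"}
    {:db/id (d/tempid :db.part/user)
     :person/name "Ada"}])
```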

jdkealy14:01:50

@casperc i have the same question. let me know if you find the answer!

val_waeselynck14:01:47

@dominicm, well I'm assuming that if it's deprecated then at some point you actually stop using it 🙂

dominicm14:01:20

@val_waeselynck I think I'm struggling to tease out when "stop using it" is different from removing it, in cases where you're no longer adding entities with that field, but you will still be breaking old programs, as they can no longer get-latest-xyz

robert-stuttaford14:01:27

@casperc have you tried simply (count (seq (d/datoms db :eavt))) ?
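[A note on that approach (a sketch, same idea as Robert's one-liner): `d/datoms` is a lazy view over the whole `:eavt` index, so counting it touches every index segment; it's fine as an occasional offline check, and reducing avoids building up a seq.]

```clojure
(require '[datomic.api :as d])

;; Sketch: count every datom by reducing over the :eavt index.
(defn datom-count [db]
  (reduce (fn [n _] (inc n)) 0 (d/datoms db :eavt)))
```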

val_waeselynck14:01:00

@dominicm ah, so there are some clients that you don't control which are accessing a field, and you'd like to change the implementation of reading from that field?

dominicm14:01:01

@val_waeselynck This is slightly into territory of theory now I'll admit. But if you control all clients, is the only reason to "grow" schema is to minimize impact of refactorings?

jaret14:01:24

@casperc In addition to Robert's approach, you can get a count of datoms from the Datoms metric line in the log.

val_waeselynck14:01:53

@dominicm ease of deployment and developer environment management is also a concern.

dominicm14:01:02

@val_waeselynck not sure how growth plays into those?

val_waeselynck14:01:47

@robert-stuttaford I'm sorry, I think I'm missing a step in your reasoning here.

val_waeselynck14:01:38

could you elaborate please?

robert-stuttaford14:01:04

@val_waeselynck well, remember that we have mature apps that Do Lots Of Stuff for a fair number of users. we’re using java apps that love to make threads, so we decided to give the jvm plenty of compute to handle that

robert-stuttaford14:01:20

having said that, we were on t2.mediums for a good long while

val_waeselynck14:01:32

@robert-stuttaford I see. Funny, I would have thought a Datomic Peer would have more pressure on memory and IO, what with the Object Cache and the loading of segments

robert-stuttaford14:01:56

we haven’t got any of our own caching code (e.g. for html view fragments; we’ve been leaning on the fact that Datomic caches datoms in the peer). so, trading paying for cpu over writing and maintaining and working with custom caching code

robert-stuttaford14:01:24

keeps our code flexible and our ability to reason less burdened

robert-stuttaford14:01:33

at the cost of more AWS bill 🙂

val_waeselynck14:01:52

We do use our own caching for aggregations, but I guess you guys have Onyx for that 🙂

casperc14:01:03

@jaret: I have looked in the transactor log, but I am not finding it. Can you give an example of what I should be looking for?

jaret16:01:19

@casperc Datoms gets reported when a new index is adopted. It will look like this:

jaret16:01:30

2017-01-12 16:44:24.642 INFO  default    datomic.update - {:IndexSegments 477, :FulltextSegments 71, :Datoms 1113239, :IndexDatoms 2228416, :event :update/adopted-index, :id "my-test-db", :pid 8009, :tid 12}

drewverlee16:01:53

Does datalog support any notion of GroupBY?

casperc16:01:14

@jaret: And Datoms is the one I want to watch out for, so that it is not bigger than 10 billion?

casperc16:01:26

Or is it IndexDatoms?

jaret16:01:13

Yeah you'll want to watch Datoms. If you think you are approaching 10 billion we should arrange a call to discuss.

jaret16:01:07

@drewverlee Datalog itself has no explicit group-by. However, in Datomic, aggregates in the :find clause group implicitly by the remaining variables, and you can control grouping via :with
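[A sketch of what that looks like (hypothetical schema): the aggregate groups by the non-aggregated `:find` variables, and `:with` keeps a variable in play so equal values are not deduplicated before aggregation.]

```clojure
;; Sketch: total price per order. Without :with ?item, two items on the
;; same order with the same price would collapse into one row before
;; the sum, giving a wrong total.
(d/q '[:find ?order (sum ?price)
       :with ?item
       :where
       [?order :order/item ?item]
       [?item :item/price ?price]]
     db)
```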

casperc16:01:14

@jaret: Last question: I have been looking for documentation on the events in the datomic log, but have come up with nothing. Do you have some around that I am not finding?

pesterhazy16:01:34

Sorry to repeat this, does anyone know under what conditions this message is shown (datomic pro)? HornetQException[errorType=INTERNAL_ERROR message=HQ119001: Failed to create session

jaret16:01:17

@casperc I am sure you found the monitoring section of the docs. http://docs.datomic.com/monitoring.html What events are you specifically talking about?

jaret16:01:28

@pesterhazy is this in the peer?

pesterhazy16:01:49

@jaret, yes, we get this occasionally when it tries to connect

pesterhazy16:01:19

in periodic nightly jenkins jobs

casperc16:01:50

@jaret: Well taking the :update/adopted-index event as an example. Some explanation of what it means would be useful.

jaret16:01:21

@pesterhazy I would want to see what is going on with the transactor at the time of these exceptions. A metrics report before and after if nothing stands out around the time stamp

jaret16:01:47

HQ exceptions are a bit general and might not indicate a real problem as long as HQ eventually connects

jaret16:01:59

for instance you would get these exceptions if you ran out of memory

pesterhazy16:01:03

good point, I'll check the transactor's log on s3

jaret16:01:09

@casperc I agree, I think this would be a useful addition to the documentation. I am going to look through the events we report and see if we can create a table in the docs with a basic definition. I know the rule of thumb was to make the names as self-evident as possible; the adopted-index event, for instance, represents the adoption of a new index after indexing has completed.

pesterhazy16:01:49

@jaret, in the transactor log I see a couple of warnings like this around the time of the connection failure

abhir00p17:01:14

@val_waeselynck Nice post. Wanted to ask about two passages:

"As we've seen, adding an attribute (the equivalent of adding a column or table in SQL) is straightforward. You can just reinstall your whole schema at deployment time. Same thing for database functions."
When we add an attribute, how do we ensure type safety when querying that attribute at an older time instant, when the attribute did not yet exist? I assume the application code must be full of null checks? Am I right in thinking so? And:
"Modifying an attribute (e.g. changing the type of :person/id from :db.type/uuid to :db.type/string) is more problematic, and I suggest you do your best to avoid it. Try to get your schema right in the first place; experiment with it in the in-memory connection before committing it to durable storage. If you have committed it already, consider versioning the attribute (e.g. :person.v2/id)."
Isn't there any better approach to this than just trying to avoid it?

vinnyataide18:01:14

bin\run -m datomic.peer-server -p 8998 -a obviouslynotmykey,mysecret -d firstdb,datomic:
is there anything wrong in this command? there's this error happening
In: [:db "firstdb" 1] val: nil fails spec: :datomic.peer-server/non-empty-string at: [:db 1] predicate: string?
In: [:auth "obviouslynotmykey" 1] val: nil fails spec: :datomic.peer-server/non-empty-string at: [:auth 1] predicate: string?

jaret18:01:27

@vinnyataide I just checked your command on Windows OS and on iterm on mac. It worked in both cases

jaret18:01:42

jbin at Jarets-MacBook-Pro in ~/Desktop/Jaret/Tools/releasetest/5554/datomic-pro-0.9.5554
$ bin/run -m datomic.peer-server -p 8998 -a obviouslynotmykey,mysecret -d firstdb,datomic:
Serving datomic: as firstdb

vinnyataide18:01:55

I ran inside a datomic-pro folder

jaret18:01:02

on Windows OS?

jaret18:01:07

which version of Datomic?

vinnyataide18:01:45

latest 0.9.5554

jaret18:01:52

ok let me try that

jaret18:01:04

I am on a virtual machine so it takes me a bit to get it over there

vinnyataide18:01:55

lol, it worked with this string

vinnyataide18:01:01

but not with my actual key

jaret18:01:57

So the key you use when starting the peer is not the license key you get from http://my-datomic.com

jaret18:01:13

The key is used to connect to the peer from the client

jaret18:01:31

it has to be a non-empty string and there may be other requirements which caused you to have an error

vinnyataide18:01:51

ok the issue appears when using powershell only

marshall18:01:19

it may be that powershell does some kind of non-standard character escaping

vinnyataide18:01:23

sorry for bothering

marshall18:01:39

that causes what you’re putting in for the key to get sent as something that fails the “must be a string” check

vinnyataide18:01:57

it's resolved now; I just didn't know the architecture yet

jaret18:01:18

no bother. We're all in the same boat 🙂

jfntn19:01:09

We switched to using a single :db/uuid attribute for all our entities' identity. For integration purposes I’m now considering using distinct :<entity-type>/uuid attrs instead. However I’m not grasping the relative trade-offs of each approach. Is anyone familiar with this?

drewverlee19:01:57

So Datomic can use Cassandra, which gets used frequently for IoT data because it's a column store. But I don’t see much “buzz” around Datomic as a “timeseries” database. Why is that?

drewverlee19:01:14

It seems possible to express, through Datomic on Cassandra, queries similar to what I could envision with the Kairos client (https://kairosdb.github.io/). But I don't know where to begin comparing the two.

jaret19:01:10

@seantempesta need to see the query to figure out what's going on.

seantempesta19:01:11

(regarding my previous question) ah never mind. looks like it was just a sample data issue.

limist20:01:34

Bumping a question from last week: When running datomic/bin/maven-install I saw this error message toward the end; is it anything to worry about?

[WARNING] Some problems were encountered while building the effective model for com.datomic:datomic-pro:jar:0.9.5544
[WARNING] 'dependencies.dependency.exclusions.exclusion.artifactId' for com.datastax.cassandra:cassandra-driver-core:jar with value '*' does not match a valid id pattern. @ line 125, column 18
[WARNING] 
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.

val_waeselynck21:01:37

@abhir00p the issue you raised is one of the reasons why querying historical dbs in Datomic is much more limited than people think. I recommend simply not doing it in application code.

vinnyataide21:01:39

what happened to datomic.api?

vinnyataide21:01:15

clojure.lang.ExceptionInfo: Could not locate datomic/api__init.class or datomic/api.clj on classpath.
    file: "om_learn\\db.clj"
    line: 1

bballantine22:01:40

Hi. I have a question that might be simple for someone who knows more than me… How do you bind an entity to itself in a datalog where clause? Here’s a contrived example. I want a query that, given a parent, returns the whole nuclear family (including the passed-in parent):

:find ?p2
:in $ ?p1
:where
(or
  [?p1 :person/children ?p2]
  [?p1 :person/partner ?p2]
  [(= ?p1 ?p2)])
This throws an exception: :db.error/insufficient-binding [?p2] …

favila22:01:30

@bballantine do you mean "or"?

bballantine22:01:57

I think I mean or… either ?p2 is a child or a spouse or the parent herself

favila22:01:39

to "rename" a var (for binding purposes) use identity, example here: https://groups.google.com/d/msg/datomic/_YiBRnBkeOs/0Gd-6lJmDwAJ

favila22:01:41

so change [(= ?p1 ?p2)] to [(identity ?p1) ?p2] at least

favila22:01:07

to (and [?p1] [(identity ?p1) ?p2]) to be safe
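[Putting favila's fix into the full query (a sketch based on the contrived example above): every `or` branch now binds both `?p1` and `?p2`, which resolves the insufficient-binding error.]

```clojure
;; Sketch: nuclear family of ?p1, including ?p1 herself via the
;; identity-rebinding trick.
(d/q '[:find ?p2
       :in $ ?p1
       :where
       (or [?p1 :person/children ?p2]
           [?p1 :person/partner ?p2]
           (and [?p1] [(identity ?p1) ?p2]))]
     db parent-eid)
```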

bballantine22:01:31

alright, let me play with that.. thanks @favila

bballantine22:01:47

@favila it works, thanks, although I have to admit I don’t quite understand it.

bballantine22:01:54

If I just have [?p1 ?p2], it also works.

bballantine22:01:10

After chatting about it with my colleague, I get why your example works. Thanks @favila

spieden23:01:14

@bballantine care to elaborate? not sure i do.. =)

bballantine23:01:46

@spieden - someone should correct me if I’m wrong, but [(identity ?p1) ?p2] is the syntax to bind something to the result of a function. So this binds ?p2 to the value of (identity ?p1).

spieden23:01:51

ah ok, so it differentiates the syntax i guess

bballantine23:01:54

In this snippet (and [?p1] [(identity ?p1) ?p2]), the [?p1] part ensures that ?p1 isn’t nil

bballantine23:01:14

yeah @spieden I think that’s right

spieden23:01:18

cool thanks