#datomic
2015-11-20
danielcompton 00:11:37

On that note, does running the dashboard consume a license too?

Ben Kamphaus 03:11:18

Right, each peer takes up a process in the license - the REST and Console peers are both included in that. The Postgres comparison is tricky, though - Datomic peers are part of the database and directly access storage, cache segments, etc. You can still do the things a traditional relational database client can without being a peer and taking up a process (e.g. making a query and getting results via the REST API).
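
For reference, a minimal sketch of that last point - querying through the Datomic REST service from a plain HTTP client. The localhost:8001 endpoint, the "dev/mydb" alias, and the :user/name attribute are illustrative assumptions:

```
(require '[clj-http.client :as http])

;; POST a query to the REST service's /api/query endpoint.
;; The database is referenced by its storage-alias/db-name.
(http/post "http://localhost:8001/api/query"
           {:accept      "application/edn"
            :form-params {:q    (pr-str '[:find ?e ?name
                                          :where [?e :user/name ?name]])
                          :args (pr-str [{:db/alias "dev/mydb"}])}})
```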

danielcompton 04:11:13

@bkamphaus: the transactor consumes a license too?

Ben Kamphaus 04:11:44

yes, the simultaneous-process count in the license includes both peers and transactors

bplatz 14:11:07

I have an app that has a significant collaboration part to it. Datomic is a win for storing the core transactional data, but I'm not sure about the collab part.

bplatz 14:11:16

The collaboration/chat part is more like a very active event stream: virtually no updates and high volume.

bplatz 14:11:22

My concern is consuming significant Datomic peer memory for event-stream data, but perhaps I shouldn't be concerned. Anyone tackled anything similar?

bplatz 14:11:32

I've contemplated using DynamoDB + maybe S3 for archives, but keeping all data in one source would be very attractive.

paxan 18:11:36

About transaction fns. What's the best practice for returning failures from txn functions? Exception? We've been raising IllegalArgumentException

Lambda/Sierra 18:11:56

@paxan Throw ex-info and include data describing the failure.
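
A hedged sketch of what that can look like - the :account/balance attribute, the :account/debit name, and the balance check are illustrative, not from this thread:

```
(require '[datomic.api :as d])

;; A transaction fn that rejects a debit with ex-info + a data map.
(def debit-fn
  (d/function
    {:lang   "clojure"
     :params '[db account amount]
     :code   '(let [balance (:account/balance (datomic.api/entity db account))]
                (when (< balance amount)
                  (throw (ex-info "insufficient funds"
                                  {:account   account
                                   :balance   balance
                                   :requested amount})))
                [[:db/add account :account/balance (- balance amount)]])}))

;; Install, then invoke in a transaction:
;; @(d/transact conn [{:db/id    (d/tempid :db.part/user)
;;                     :db/ident :account/debit
;;                     :db/fn    debit-fn}])
;; @(d/transact conn [[:account/debit some-account-id 100]])
```

On the peer, the deref of the transact future throws, and the data map is generally recoverable via ex-data on the exception's cause.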

mattg 19:11:37

Silly question du jour. Assuming average commodity hardware (think laptop quality) and default tuning, and a model where consumers talk to peers but are not peers themselves: what's a rough ballpark for the tipping-point size of a dataset returned to a “realtime” frontend application using Datomic as the source of data? I'm being asked to estimate off the cuff, without being able to explore, measure, and get into details (“10-thousand-foot view”). The rough hypothetical use case is a service backed by Datomic that lets people pull back “large” datasets into memory and operate on them from languages like Ruby, PHP, Python, and Perl (not Java, not JVM-based). They want to know at what point pagination will be forced: 10k records? 500k? 1MM? Unfortunately, all of the meaty details I would ask about if I heard this question are unknowns, so I guess I'm just looking for anecdotal feedback. Rough context: think data warehouse trapped behind an API.

mattg 19:11:17

(Thinking out loud: I wonder if the peer can stream the data to the non-peer. Am I lucky enough to have that supported without significant custom development?)
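
For what it's worth, a hedged sketch of that shape: d/datoms is lazy, and most Ring adapters write a seq body incrementally, so a peer-side service can hand rows to non-JVM clients without realizing the whole set in memory at once. Ring, clojure.data.json, and the :user/name attribute are assumptions here, not something Datomic ships for this:

```
(require '[datomic.api :as d]
         '[clojure.data.json :as json])

;; Stream one JSON line per matching entity, walking the AEVT index
;; lazily instead of materializing a full query result.
(defn stream-users-handler [conn]
  (fn [_request]
    (let [db (d/db conn)]
      {:status  200
       :headers {"Content-Type" "application/x-ndjson"}
       :body    (map (fn [datom]
                       (str (json/write-str (d/pull db '[*] (:e datom))) "\n"))
                     (d/datoms db :aevt :user/name))})))
```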