
I'm thinking about storing time-series data in DynamoDB. This article suggests using a table per day. If I understand correctly, that's because in their example there is no partition key besides time, so using multiple tables lets you adjust the provisioned capacity separately for less frequently accessed days in the past; otherwise some of the provisioned capacity is effectively "wasted" (in a sense). But what if I do have another partition key that makes sense? For example, suppose I'm recording stock prices for 100 stocks with similar activity. Does it then make sense to use a single table, with the stock ticker as the partition key and time as the sort key?


is the number of data points actually different for each ticker?


Roughly. I suppose that could be dealt with by splitting the busier tickers across multiple partitions (i.e. AAPL#A / AAPL#B). I guess I'm mainly wondering whether DDB will stay efficient for range queries over this type of table, with ~100 partitions each holding a large number (millions) of points
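For what it's worth, that AAPL#A / AAPL#B idea is essentially DynamoDB's write-sharding pattern. A minimal Python sketch of how the shard suffix might be derived (the shard count, function names, and hash choice are all my own assumptions, not anything from the article):

```python
import hashlib

NUM_SHARDS = 4  # hypothetical: shards per hot ticker


def shard_key(ticker: str, timestamp_ms: int, num_shards: int = NUM_SHARDS) -> str:
    """Derive a partition key like 'AAPL#2' by hashing the timestamp.

    Hashing spreads writes evenly across the shards; the trade-off is
    that a range read must query every shard and merge client-side.
    """
    digest = hashlib.md5(str(timestamp_ms).encode()).hexdigest()
    return f"{ticker}#{int(digest, 16) % num_shards}"


def all_shard_keys(ticker: str, num_shards: int = NUM_SHARDS) -> list[str]:
    """Every shard key to query when reading a full range for one ticker."""
    return [f"{ticker}#{i}" for i in range(num_shards)]
```

The same timestamp always lands on the same shard, so point reads still work; only range queries pay the fan-out cost.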


I'd be really interested in what you find 🙂


I've been thinking about this kind of partitioning, by time plus other dimensions (like IoT devices), elsewhere too


I'd think other databases would have a hard time too if you put all the data into the same slot


maybe there could be two types of "indexes", one per time and one per ticker; that would mean more tables
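In DynamoDB the second "index" wouldn't necessarily mean another table: a global secondary index can project the same items under a different key. A hedged sketch of what the table parameters might look like, with a hypothetical day-bucket attribute so one day can be read across all tickers (table, index, and attribute names are mine, not from the thread):

```python
# Hypothetical single-table layout: base key is ticker + timestamp,
# plus a GSI keyed by a "day" bucket attribute for cross-ticker reads.
table_params = {
    "TableName": "prices",  # hypothetical name
    "AttributeDefinitions": [
        {"AttributeName": "ticker", "AttributeType": "S"},
        {"AttributeName": "ts", "AttributeType": "N"},
        {"AttributeName": "day", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "ticker", "KeyType": "HASH"},  # partition key
        {"AttributeName": "ts", "KeyType": "RANGE"},     # sort key
    ],
    "GlobalSecondaryIndexes": [
        {
            "IndexName": "by-day",
            "KeySchema": [
                {"AttributeName": "day", "KeyType": "HASH"},
                {"AttributeName": "ts", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    "BillingMode": "PAY_PER_REQUEST",
}
```

With on-demand billing there's also no per-day capacity tuning to do, which removes much of the original article's motivation for table-per-day.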


Yeah, really all I want, in Clojure data-structure terms, is a map of ticker symbol -> sorted map (by time), like

{"aapl" {1633094934884 {:price 500 :qty 1}}}