Drew Verlee 01:10:48

Code I wrote recently to get logs:

;; credit for lazy-concat goes to the juxt post on iteration
(defn lazy-concat
  "A concat version that is completely lazy and
  does not require the use of apply."
  [colls]
  (lazy-seq
   (when-first [c colls]
     (lazy-cat c (lazy-concat (rest colls))))))
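A quick sanity check (a sketch, no AWS involved) that `lazy-concat` only realizes "pages" as they are consumed. The page generator below is hypothetical, built unchunked on purpose so the realization count is honest; `lazy-concat` is repeated from above so the snippet runs on its own:

```clojure
;; lazy-concat repeated from above so this snippet is self-contained
(defn lazy-concat
  "A concat version that is completely lazy and
  does not require the use of apply."
  [colls]
  (lazy-seq
   (when-first [c colls]
     (lazy-cat c (lazy-concat (rest colls))))))

;; counts how many fake "pages" have actually been built
(def realized (atom 0))

;; an infinite, unchunked lazy seq of 3-element pages:
;; (0 1 2), (3 4 5), (6 7 8), ...
(defn pages-from [n]
  (lazy-seq
   (cons (do (swap! realized inc)
             (range (* 3 n) (* 3 (inc n))))
         (pages-from (inc n)))))

(def flat (lazy-concat (pages-from 0)))

(doall (take 2 flat)) ;; => (0 1)
@realized             ;; => 1, only the first page was realized
```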

(defn log-group-name->logs
  "given a `log-group-name` (e.g. \"staging\") returns the log events locally"
  [log-group-name]
  (->> (iteration
        (fn [token]
          (icc :DescribeLogStreams
               (cond-> {:logGroupName log-group-name} token (assoc :nextToken token))))
        :vf (fn [{:keys [logStreams]}]
              (reduce (fn [all-log-events log-stream]
                        (concat all-log-events
                                (:events (icc :GetLogEvents
                                              (-> log-stream
                                                  (select-keys [:logStreamName])
                                                  (assoc :logGroupName log-group-name))))))
                      []
                      logStreams))
        :kf :nextToken)
       ;; TODO make this lazy by using something other than apply concat
       (apply concat)))
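The `:vf`/`:kf` paging pattern above can be tried on plain data, with an in-memory map standing in for the CloudWatch API (the `fake-pages` data is made up for illustration; `iteration` needs Clojure 1.11+):

```clojure
;; each "response" carries a page of :items and, except for the last
;; page, a :nextToken pointing at the next one
(def fake-pages
  {nil  {:items [1 2 3] :nextToken "t1"}
   "t1" {:items [4 5]   :nextToken "t2"}
   "t2" {:items [6]}})

;; iteration starts with a nil token, extracts values with :vf,
;; and keeps going as long as :kf returns a token
(def all-items
  (vec (apply concat
              (iteration (fn [token] (get fake-pages token))
                         :vf :items
                         :kf :nextToken))))

all-items ;; => [1 2 3 4 5 6]
```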
And then i do this:
(defn get-codebuild-logs []
  (log-group-name->logs "/aws/codebuild/cacapi-training-codebuildproject"))

;; stop consuming events as soon as a FAILED message shows up.
;; NOTE: the `transduce` wrapper is assumed context for this snippet;
;; `halt-when`'s return value doesn't compose cleanly with `into`
(transduce
 (halt-when (fn [{:keys [message]}] (str/includes? message "FAILED"))
            (fn [completed-result input-triggered]
              {:trigger input-triggered
               :last-5-leading-upto-it (take-last 5 completed-result)}))
 conj []
 (get-codebuild-logs))
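The `halt-when` shape above can be exercised against plain data, no AWS needed (the sample messages are made up; `transduce` with `conj` supplies the reducing context):

```clojure
(require '[clojure.string :as str])

(def halted
  (transduce
   ;; when the predicate fires, the second fn receives the result
   ;; accumulated so far plus the input that triggered the halt,
   ;; and its return value becomes the return of the transduce
   (halt-when (fn [{:keys [message]}] (str/includes? message "FAILED"))
              (fn [completed-result input-triggered]
                {:trigger input-triggered
                 :last-5-leading-upto-it (take-last 5 completed-result)}))
   conj []
   [{:message "ok 1"}
    {:message "ok 2"}
    {:message "FAILED: boom"}
    {:message "never reached"}]))

halted
;; => {:trigger {:message "FAILED: boom"}
;;     :last-5-leading-upto-it ({:message "ok 1"} {:message "ok 2"})}
```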
It still takes about 15 seconds, which is 15 seconds longer than I would like. I mean, that seems really slow honestly. What's going on here? Do I have some huge area for improvement, or is AWS just throttling requests because they want me to use the console (I'm only half joking)?
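One possible area for improvement, sketched here but not verified against AWS: each `:GetLogEvents` call is an independent HTTP round trip, so fetching the streams sequentially pays the full latency for every stream. Running them in parallel with `pmap` may help. `fetch-events` below is a hypothetical stand-in for the `(icc :GetLogEvents ...)` call from the thread:

```clojure
(defn log-streams->events
  "Fetch events for each log stream in parallel.
  `fetch-events` is a hypothetical stand-in for the
  (icc :GetLogEvents ...) call; it should return a seq of events
  for one stream."
  [fetch-events log-streams]
  ;; pmap overlaps the per-stream calls instead of running them
  ;; back to back
  (apply concat (pmap fetch-events log-streams)))

;; e.g. with a stub fetcher (no network):
(def events
  (log-streams->events
   (fn [stream] [(str (:logStreamName stream) "-event")])
   [{:logStreamName "a"} {:logStreamName "b"}]))

events ;; => ("a-event" "b-event")
```

Whether this beats 15 seconds depends on where the time actually goes; if it is the sequential `:DescribeLogStreams` paging rather than the per-stream fetches, parallelizing the fetches won't help much.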


I have queried CloudWatch in the past and found it extremely slow. It depends on the amount of logs, I suppose. For some production log streams it took 2-5 minutes to query a time range of 15 hours or so, and limiting the time range sped it up significantly.
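Narrowing the window server-side is straightforward: `GetLogEvents` accepts `startTime` and `endTime` parameters in epoch milliseconds. A small helper to build such a request (the helper name and the stream name are mine; the request keys are the documented API parameters):

```clojure
(defn get-log-events-request
  "Build a :GetLogEvents request map restricted to the last
  `window-ms` milliseconds before `now-ms`. :startTime/:endTime
  are documented GetLogEvents parameters (epoch ms)."
  [log-group-name log-stream-name now-ms window-ms]
  {:logGroupName  log-group-name
   :logStreamName log-stream-name
   :startTime     (- now-ms window-ms)
   :endTime       now-ms})

;; e.g. only the last hour of a stream, using the icc helper from
;; the thread ("some-stream-name" is a placeholder):
;; (icc :GetLogEvents
;;      (get-log-events-request "/aws/codebuild/cacapi-training-codebuildproject"
;;                              "some-stream-name"
;;                              (System/currentTimeMillis)
;;                              (* 1000 60 60)))
```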

👍 1