This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2020-03-28
Channels
- # announcements (3)
- # babashka (36)
- # beginners (77)
- # boot (3)
- # chlorine-clover (10)
- # cider (27)
- # clj-kondo (1)
- # cljs-dev (4)
- # clojure (256)
- # clojure-belgium (1)
- # clojure-europe (9)
- # clojure-uk (18)
- # clojuredesign-podcast (9)
- # clojurescript (54)
- # cryogen (8)
- # cursive (3)
- # data-science (1)
- # datomic (2)
- # duct (31)
- # events (1)
- # exercism (3)
- # fulcro (116)
- # joker (20)
- # kaocha (5)
- # meander (2)
- # nrepl (4)
- # off-topic (10)
- # other-languages (15)
- # re-frame (18)
- # reagent (4)
- # shadow-cljs (44)
- # sql (14)
- # tools-deps (17)
anyone have suggestions for free blog software with good support for clojure syntax highlighting?
If you are wanting to do static site generation, cryogen and bootleg are good choices
if self-hosting isn’t a requirement and you’d like to try runnable code instead of just syntax highlighting, try http://nextjournal.com
I use codemirror and its syntax highlighter (via javascript) on my blog: http://thegeez.net/2017/04/28/data_transducers_machine_learning_clojure.html view source shows how it works
i think https://highlightjs.org/ has clojure support
prismjs has clojure support https://prismjs.com/#supported-languages i've used it in gatsby https://www.gatsbyjs.org/packages/gatsby-remark-prismjs/?=remark
hmm.... I actually started using Jekyll a long time ago and forgot about it. It's written in Ruby and generates static github pages.
and it looks like github uses pygments for syntax highlighting.
I use Octopress for my blog, hosted on GitHub, and it's based on Jekyll.
I have the following string after evaluating the expression (slurp (:body req)):
(prn (slurp (:body req)))
;; gives "{:foo \"abc\"}" -------- (1)
But when I try to read it into edn:
(prn (read-string (slurp (:body req))))
and wrap it in try/catch, I get the EOF while reading exception.
But when I copy paste result (1) directly into the repl and do
(read-string "{:foo \"abc\"}")
, it gives the expected map. Why is this?
Maybe the webserver you use only lets you read the body once. You could check if this works for you:
(let [body-str (slurp (:body req)) ;; only do one slurp on the body
      _ (prn body-str)
      body-edn (read-string body-str)
      _ (prn body-edn)]
  body-edn)
is there a go-to CQRS/ES library for Clojure?
Just toying with an interesting one for Elixir and wondered if there’s any for clojure
I’ve seen a couple but they seem heavily tied to the web component
Hi everyone. I need to add a type hint to avoid reflection in a clojure call. But the type I'm passing to the function is an array. How do I type hint a java array?
If it's a string array, it's like this:
"[Ljava.lang.String;"
Just change it to whatever array type you need
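A minimal sketch of what that looks like in practice (the function and names here are made up for illustration): the array class name string is used directly as the type hint.

```clojure
(set! *warn-on-reflection* true)

;; The JVM class name of String[] is "[Ljava.lang.String;" and can be
;; used as a string type hint to avoid reflection on the argument:
(defn first-element [^"[Ljava.lang.String;" arr]
  (aget arr 0))

(first-element (into-array String ["a" "b"]))
;; => "a"
```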
Anyone has experience with FreeTTS? I made a quick test and the quality isn't exactly stellar. It sounds like Kevin speaks under water. Is there anything I can do to get better quality? Here's the code:
(ns speak
  (:import
    java.util.Locale
    javax.speech.Central
    javax.speech.synthesis.Synthesizer
    javax.speech.synthesis.SynthesizerModeDesc))
(System/setProperty "freetts.voices" "com.sun.speech.freetts.en.us.cmu_us_kal.KevinVoiceDirectory")
(Central/registerEngineCentral "com.sun.speech.freetts.jsapi.FreeTTSEngineCentral")
(def synthesizer (Central/createSynthesizer (SynthesizerModeDesc. Locale/US)))
(.allocate synthesizer)
(.resume synthesizer)
(defn speak [text]
  (.speakPlainText synthesizer text nil)
  (.waitEngineState synthesizer Synthesizer/QUEUE_EMPTY))
;(.deallocate synthesizer)
(comment
  (speak "nine hundred eighty-seven billion six hundred fifty-four million three hundred twenty-one thousand one hundred twenty-three"))
Command query responsibility segregation and event sourcing @emccue
my gut feeling is that this is just an acronymization of separating db queries and writes to different places in the code
in which case, just having different sql files and different namespaces with something like
and event sourcing seems like it's just "call me with data off a stream" or "have a consumer read off a queue"
Well, this kind of library would generally provide a framework for each component, from the interfaces to the data flow
CQRS as a paradigm has a pretty set-in-stone flow, eg.
and to use the ‘Commanded’ Elixir library, this provides a lot of the scaffolding and you just implement behaviours (satisfy the interfaces) for your command handlers and define command & event structs to wire it all up (as an oversimplification)
Commanded is awesome, if I were working in Elixir I’d be building on top of it. I’ve been using its docs for inspiration in designing my own stuff
“I want event structs and handlers.” -> “I want maps and dispatch (i.e. multimethod).”
You can spec and do static checks to taste. (e.g. check the methods of the multimethod to make sure every command type has an entry)
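A minimal sketch of the "maps and dispatch" idea above, with made-up command names; commands are plain maps dispatched by a type key, and the multimethod's registry doubles as the static check:

```clojure
;; Commands are plain maps; dispatch on a :command/type key.
;; The :account/* names are hypothetical, just for illustration.
(defmulti handle-command :command/type)

(defmethod handle-command :account/open
  [{:keys [account-id]}]
  [{:event/type :account/opened :account-id account-id}])

(defmethod handle-command :account/deposit
  [{:keys [account-id amount]}]
  [{:event/type :account/deposited :account-id account-id :amount amount}])

;; Static-ish check: every known command type has a handler registered.
(def known-commands #{:account/open :account/deposit})
(assert (every? (set (keys (methods handle-command))) known-commands))

(handle-command {:command/type :account/deposit :account-id 1 :amount 100})
;; => [{:event/type :account/deposited :account-id 1 :amount 100}]
```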
that’s one of the conclusions I came to but I don’t know Clojure well enough to know if it’s the right path
but obviously that’s only a small part
I’m also super new to Clojure, so this might be a complete anti-pattern for Clojure 🤷
Not sure you'll find a library to do all of that, partially because Clojure is a small language with very few concepts, so you don't have to abstract it that much
I have found https://github.com/metosin/kekkonen but it seems to be very API-related
I mean, that’s the main focus of CQRS, but there are some predefined patterns which I thought might have been put into a library
From what I’ve seen so far, there’s less a focus on frameworks, but there are still the standouts like reagent/reframe, which provide the building blocks and data-flow patterns and you simply satisfy them
Look at Fulcro. It has a pattern of Queries and Mutations, with Idents for storing data in tables. But it solves very specific problems.
Maybe it’d be a learning experience for me to try and implement that as a library
not for the library as the end-result, but just to learn the pieces and what it’d look like in Clojure
Haven’t touched the likes of protocols yet so it’d be interesting to see
Records and protocols would be terrible for this use case. Commands will need to be serialized in the event store. Serializing records is not a fun game to play. (Trust me. I have some war stories.)
Ah, this is interesting. I assumed it’d just be something you could easily tack on
Guess I was thinking of them as decorated maps, but they’re actually Java classes right?
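Right, and that's roughly where serialization bites. A small sketch of the problem (assuming this is evaluated in the `user` namespace; the `Command` record is made up): a record prints with its class name as a tagged literal, so plain edn reading fails unless you register a reader, coupling stored events to the record's class and namespace.

```clojure
(require '[clojure.edn :as edn])

;; A hypothetical command record:
(defrecord Command [type payload])

(pr-str (->Command :open {:id 1}))
;; prints as #user.Command{:type :open, :payload {:id 1}}

;; Plain (edn/read-string ...) doesn't know the #user.Command tag and throws.
;; Reading it back requires a :readers entry tied to the record's namespace:
(edn/read-string {:readers {'user.Command map->Command}}
                 (pr-str (->Command :open {:id 1})))
```

Renaming or moving the record later means every stored event with the old tag needs migration, which is where the war stories come from.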
also, this seems like a perfect fit for Datomic
Definitely, you usually need much less - for starters, as @emccue suggested: try HugSQL + 2 JDBC connections, one of them being read only and see how it goes
@james662 FWIW I’m also working on CQRS/ES in Clojure, perhaps we can continue the conversation in #architecture? +1 for Fulcro as a tool for building complex frontends re: Datomic: https://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html
> Records and protocols would be terrible for this use case. Commands will need to be serialized in the event store. Serializing records is not a fun game to play. (Trust me. I have some war stories.)
Which is to say, don't do it, it's a terrible architecture that most likely leads to failed projects "Unless you know better"
Uuuuh, absent whatever context you’re thinking of, I’m gonna disagree pretty hard here.
That's my best practice advice absent any context. The cases where CQRS will be a win are the minority. So it's almost always best avoided. Obviously, my personal opinion.
If you’re thinking, “CQRS backed by a SQL data store is terrible and probably not worth the cost,” then yeah. That’s sane.
So yes. Big diffs. But the extremely important concept of “track everything that happens and don’t forget it,” is still there.
But CQRS is an architectural pattern. It's like an implementation. It's not really a guiding principle
You're kind of talking about use cases almost. I just find it concerning to associate the two.
CQRS isn't what you have to reach for if you want to have a history of your transactional changes.
But there still exists a different design which is different from how Datomic works, and some people call that CQRS as well, and they don't work the same way
And when I was more precise, you continue to make vague assertions about what “counts as CQRS.”
Ok, so maybe I don't know what you're saying. It seemed you were saying you think CQRS is great, and gave me Datomic as an example of why?
No. I said CQRS is better than the industry standard because of certain properties it has.
I'm saying, CRUD is best for the average project, and choosing to implement CQRS instead is often a big mistake.
I mean, I’m not gonna hard disagree, except many projects need event history. It’s not an uncommon requirement.
Or if you use, say, DynamoDB, it even supports publishing events automatically on a change. And I'm sure PostgreSQL probably has something similar
But, Datomic (and I might be wrong), is not really a log. In that, the log is used by the writer, but then data is just records that are indexed, from which you can query slices no?
But one of the data-removal strategies is to replay the log into a different db, removing data you don’t want anymore.
I was under the impression it was more simply an n-tuple store, with a log-based single-node writer to it
Super nice. And you can use it to, say, sync an external, eventually-consistent data store.
Being interested in it and trying to build it is very different from saying you're choosing to use Datomic or Crux or DynamoDB, etc as your database
Martin Fowler puts it well:
> CQRS is a significant mental leap for all concerned, so shouldn't be tackled unless the benefit is worth the jump. While I have come across successful uses of CQRS, so far the majority of cases I've run into have not been so good, with CQRS seen as a significant force for getting a software system into serious difficulties
> Being interested in it and trying to build it is very different from saying you’re choosing to use Datomic or Crux or DynamoDB, etc as your database
Yeah, I’d agree with all of that. If you’re uncertain, you should pretty universally just start w/ Postgres and figure the rest out. But for many apps, you need a reified log (which isn’t provided by most RDBMS’s). And that mental model is super powerful once you have it.
Curious what your DB experience has been with. Not sure how many DBs have event streams baked in.
Very interesting piece by Fowler. Thanks for highlighting that. I'd be interested to know if his thinking on this has changed any in the nearly ten years since he wrote that...
I've been working with AWS for the last 5 years, and mostly rely on DynamoDB. Though what we often do for audit/metrics/analytics is publish a separate log to SQS and batch-process those into S3 files, which we later query using AWS Athena. We also maintain the last 2 years of data in an AWS Redshift cluster.
We need to do this anyways because we have data in multiple places across multiple services and teams. And the business needs a unified history view of all that. So having a separate publisher, everyone can just make sure to integrate with it where required no matter what they use for managing their data
It does mean data can drift, since not all stores will guarantee to be 100% in sync at all times. You can try to make the publishing of events atomic with your data storage, but when the storage layer isn't designed from the ground up to support this, it's almost impossible
The best we do sometimes is perform a kind of Saga of events. Like publish the event, make the change to the DB, if the change to the DB failed, publish a redaction event. And if the first publish failed, we'd fail the request itself.
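A sketch of that Saga-style flow; `publish-event!` and `write-db!` here are hypothetical stand-ins (backed by an atom so the sketch is self-contained), not a real queue or database client:

```clojure
;; Stand-ins: in reality these would hit a queue and a database.
(def published (atom []))
(defn publish-event! [e] (swap! published conj e))
(defn write-db! [change]
  (when (:fail? change) (throw (ex-info "db write failed" {})))
  change)

(defn process-request [event db-change]
  ;; If this first publish throws, the exception fails the request itself.
  (publish-event! event)
  (try
    (write-db! db-change)
    (catch Exception e
      ;; DB write failed: publish a compensating redaction event.
      ;; Note this publish can itself fail, so a gap remains.
      (publish-event! {:type :redaction :redacts event})
      (throw e))))
```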
But you can see there's still point of failures here, like failing to publish the redaction
If I was modeling a transactional system moving money around, I'd probably want something which eliminates even those possible faults.
I’m playing with CQRS/ES in toy project(s), but the event-based pattern is what I’m aiming for as the entire project is essentially a timeline and needs an audit trail
Are you explicitly saying to stay away from CQRS or ES?
Which you could build on DynamoDB, since it gives you a log stream with all these guarantees. Or Datomics would be a good option for that too I think
And you can easily turn a simple business use case into an accidental complexity mess
The effort to properly implement CQRS and ES, and then build all your requirements over them, is huuge, compared to some alternatives
Like you said, these are toy projects so I’m not sinking business-paid dev time on them, but it’s fair warning, thanks.
Ya, I mean they're entirely valid approaches, but I think they're often used because I guess they're the architecture "du jour", and using it for the wrong use cases is an anti-pattern
I can see that’d be the case, but it’s also why I wondered if there was any kind of framework to make it a little more smooth sailing. I can see that CQRS/ES as an architecture probably benefits a certain scale and attitude, but I want to give it a go mostly out of curiosity
The in-built log stream is interesting, although I don’t use AWS
Datomics was one of the things that made me think Clojure would be a good fit
the project I’m playing with in Elixir uses Postgres, but Elixir brings its own benefits to this kind of architecture
Datomic has a free offering right?
I'm surprised PostgreSQL doesn't support some form of it. I wonder if any free open-source DB does
> 2 simultaneous peers and transactor-local storage only.
Honestly, I’d not thought of a transaction log as a side effect at the DB level
I’m really just toying with different architectures so I can get a bit of experience with them
But on the flip-side, I’ve spent a few days just figuring out the architecture and implementation, whereas I could have had half of the app running so far, haha.
Hoping it’s initial investment
I'm more interested in messing around than delivering anything of value, it seems, hehe
You'll have a separate table of logs which can record all insert, update, delete and truncate operations
This describes how: https://aws.amazon.com/blogs/database/stream-changes-from-amazon-rds-for-postgresql-using-amazon-kinesis-data-streams-and-aws-lambda/
Yeah, you can access the WAL, BUT (and it’s a big but) you’re consuming resources on the server for the duration. Postgres does not keep the entire WAL (not by a long shot). So if you’re, for example, disk constrained, and your log reader gets behind, you risk hosing your entire system.
That's not how I'm reading the docs. The trigger is on your regular table, and what the trigger does is write a log to an audit table
Ya, it's not ideal for replicating to a different DB. I guess you could then export the audit table and import it elsewhere
Reading from the WAL is an interesting approach
I know Kafka can consume the pg wal
unless you’re massively over-provisioned, so resources are guaranteed to be not constrained.
but I’m currently in a situation where getting behind on WAL reading would hose the whole app :derp:
thanks for your input
So you've got a big buffer. If you know your consumption rate, you can just plan and put some write limits to the DB.
Dunno if PG can be set to have a write limit. If not, you'd have to build an additional layer.
feels a bit circular though. you write to postgres then consume the WAL update and process that in your app.
I'd still contend these type of approaches I think are simpler (even if still complicated), then full on CQRS
Like, if you just want to model a timeline of some logical app domain entity. I wouldn't even do any of this. Just model it directly in your data model
There's kind of a difference between needing to build an audit-like feature in your app, and having to audit the entire application's data
Generally that's for things like government compliance, record keeping, reporting, business analytics, etc.
If all you care about is users being able to go into a view that shows their activity timeline, all of this is overkill
For that all you do is insert a new row for each new activity and then filter over the ones for your user sorted by date
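The simple version as pure data (an in-memory vector standing in for the table; the rows and field names are made up):

```clojure
;; In-memory stand-in for an activity table; in practice these are rows.
(def activities
  [{:user-id 1 :action :login   :at #inst "2020-03-01"}
   {:user-id 2 :action :comment :at #inst "2020-03-02"}
   {:user-id 1 :action :post    :at #inst "2020-03-03"}])

(defn timeline [user-id]
  (->> activities
       (filter #(= user-id (:user-id %)))
       (sort-by :at)   ;; oldest first
       reverse))       ;; most recent activity first

(map :action (timeline 1))
;; => (:post :login)
```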
There's an internal app we have where comments were modeled as ES, and it's the worst user experience ever. You type a comment, submit, you see your comment, refresh the page, the comment is gone. All because of what you described: ES is naturally eventually consistent. It's a huge roundabout. Your comment was first sent as an event: "Please create this comment." Only later does a reader consume it and create the comment in the queryable store, and only then can you refresh the app and see the comment
In the meantime, it's possible others commented before you, yet you were not yet seeing their comments
Nothing is more frustrating to a user than doing something and seeing it disappear and later reappear
Oh, and one last thing 😋: CQRS and ES come a bit from DDD (domain-driven design). I love DDD. And I just want to say: you can separate your queries from your writes in your app but still use the same DB for both. And that I'd say is a pretty nice pattern.
But even DDD, is a pretty complicated arrangement, and you need a complex enough domain to justify using it too
But all it means is that, you should have entities structured in terms of your writes. Store those in the DB. They will protect your data invariants, and define the level of atomicity required.
Then, you have separate queries, only for the purpose of displaying data, not to be modified
Those queries can choose to return the data in whatever structure or shape, with whatever level of processing applied to them, since they are read only
user=> (def names '#{take take-while drop drop-while map filter reduce partition partition-all partition-by drop-last take-nth interleave interpose})
#'user/names
user=> (->> (ns-publics 'clojure.core)
#_=> (filter (comp names key))
#_=> (map (comp last last :arglists meta val))
#_=> (frequencies))
{coll 12, colls 2}
All the fns above require a seqable as their last arg, and despite that, the arg name is coll. Is/Was there some reasoning behind this?