#clojure
2020-03-28
cpmcdaniel02:03:57

anyone have suggestions for free blog software with good support for clojure syntax highlighting?

Bobbi Towers05:03:21

If you want to do static site generation, Cryogen and bootleg are good choices

mkvlr09:03:46

if self-hosting isn’t a requirement and you’d like to try runnable code instead of just syntax highlighting, try http://nextjournal.com

thegeez13:03:18

I use CodeMirror and its syntax highlighter (via JavaScript) on my blog: http://thegeez.net/2017/04/28/data_transducers_machine_learning_clojure.html (view source shows how it works)

dpsutton02:03:44

i think https://highlightjs.org/ has clojure support

cpmcdaniel02:03:19

hmm.... I actually started using Jekyll a long time ago and forgot about it. It's written in Ruby and generates static sites for GitHub Pages.

cpmcdaniel03:03:32

and it looks like github uses pygments for syntax highlighting.

seancorfield03:03:21

I use Octopress for my blog, hosted on GitHub, and it's based on Jekyll.

Spaceman15:03:28

I have the following string after evaluating the expression (slurp (:body req)):

(prn (slurp (:body req)))
;; gives "{:foo \"abc\"}"         -------- (1)

But when I try to read it into edn with

(prn (read-string (slurp (:body req))))

wrapped in try/catch, I get the "EOF while reading" exception. Yet when I copy-paste result (1) directly into the REPL and do

(read-string "{:foo \"abc\"}")

it gives the expected map. Why is this?

thegeez16:03:42

Maybe the webserver you use only lets you read the body once. You could check if this works for you:

(let [body-str (slurp (:body req)) ;; only slurp the body once
      _ (prn body-str)
      body-edn (read-string body-str)
      _ (prn body-edn)]
  body-edn)
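(The underlying issue, assuming a Ring-style request whose :body is an InputStream, is that slurp consumes the stream, so a second slurp returns an empty string and read-string on "" throws the EOF error. A minimal illustration, with a made-up in-memory "body":)

(require '[clojure.java.io :as io])

(let [body (io/input-stream (.getBytes "{:foo \"abc\"}"))]
  (prn (slurp body))  ;; => "{:foo \"abc\"}"
  (prn (slurp body))) ;; => "" -- the stream is already consumed, so reading edn from it fails with EOF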

absolutejam16:03:10

is there a go-to CQRS/ES library for Clojure?

absolutejam16:03:22

Just toying with an interesting one for Elixir and wondered if there's any for Clojure

absolutejam16:03:36

I’ve seen a couple but they seem heavily tied to the web component

Crispin17:03:19

Hi everyone. I need to add a type hint to avoid reflection in a clojure call. But the type I'm passing to the function is an array. How do I type hint a java array?

Nir Rubinstein17:03:01

If it's a string array, it's like this: "[Ljava.lang.String;" Just change it to whatever array type you need
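(For reference, a hedged sketch of that hint in use; the functions here are illustrative, not from the conversation:)

(set! *warn-on-reflection* true)

(defn args->str [^"[Ljava.lang.String;" args]
  ;; Without the hint on args, this interop call would emit a reflection warning.
  (java.util.Arrays/toString args))

;; Primitive arrays have shorthand hints such as ^ints, ^longs, ^doubles:
(defn sum ^long [^longs xs]
  (areduce xs i acc 0 (+ acc (aget xs i))))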

Crispin17:03:06

awesome! That worked! Thanks 👍

👍 4
emccue17:03:18

@james662 just for my edification, what is CQRS/ES?

pez17:03:15

Anyone have experience with FreeTTS? I made a quick test and the quality isn't exactly stellar. It sounds like Kevin is speaking underwater. Is there anything I can do to get better quality? Here's the code:

(ns speak
  (:import
   java.util.Locale
   javax.speech.Central
   javax.speech.synthesis.Synthesizer
   javax.speech.synthesis.SynthesizerModeDesc))

;; Register the FreeTTS "Kevin" voice and the JSAPI engine implementation.
(System/setProperty "freetts.voices" "com.sun.speech.freetts.en.us.cmu_us_kal.KevinVoiceDirectory")
(Central/registerEngineCentral "com.sun.speech.freetts.jsapi.FreeTTSEngineCentral")

;; Create a synthesizer for US English, then allocate and resume it.
(def synthesizer (Central/createSynthesizer (SynthesizerModeDesc. Locale/US)))
(.allocate synthesizer)
(.resume synthesizer)

(defn speak [text]
  ;; Queue the text, then block until the synthesizer has finished speaking.
  (.speakPlainText synthesizer text nil)
  (.waitEngineState synthesizer Synthesizer/QUEUE_EMPTY))

;(.deallocate synthesizer)

(comment
  (speak "nine hundred eighty-seven billion six hundred fifty-four million three hundred twenty-one thousand one hundred twenty-three"))

absolutejam17:03:47

Command query responsibility segregation and event sourcing @emccue

emccue17:03:05

so what would a library for that do?

emccue17:03:06

I see some examples that draw lines across services

emccue17:03:18

and some that are just drawing lines across code

emccue17:03:47

my gut feeling is that this is just an acronymization of separating db queries and writes to different places in the code

emccue17:03:29

in which case, just having different sql files and different namespaces with something like

emccue17:03:39

would fit the bill
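(For illustration, a hedged sketch of that kind of split; the original snippet isn't preserved in this log, and the file paths and generated fn names below are made up, but hugsql.core/def-db-fns is the real HugSQL entry point:)

;; src/app/query.clj -- read side only
(ns app.query
  (:require [hugsql.core :as hugsql]))

(hugsql/def-db-fns "sql/queries.sql")  ;; e.g. generates get-user-by-id

;; src/app/command.clj -- write side only
(ns app.command
  (:require [hugsql.core :as hugsql]))

(hugsql/def-db-fns "sql/commands.sql") ;; e.g. generates insert-user!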

emccue17:03:03

and event sourcing seems like its just "call me with data off a stream" or "have a consumer read off a queue"

emccue17:03:23

(not to diminish anything at all - I just want to understand)

absolutejam17:03:54

Well, this kind of library would generally provide a framework for each component, from the interfaces to the data flow

absolutejam17:03:42

CQRS as a paradigm has a pretty set-in-stone flow, eg.

absolutejam17:03:46

and to take the ‘Commanded’ Elixir library as an example: it provides a lot of the scaffolding and you just implement behaviours (satisfy the interfaces) for your command handlers and define command & event structs to wire it all up (as an oversimplification)

adamfeldman18:03:24

Commanded is awesome, if I were working in Elixir I’d be building on top of it. I’ve been using its docs for inspiration in designing my own stuff

potetm18:03:15

“I want event structs and handlers.” -> “I want maps and dispatch (i.e. multimethod).”

potetm18:03:23

You can spec and do static checks to taste. (e.g. check the methods of the multimethod to make sure every command type has an entry)
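(A minimal sketch of that maps-plus-multimethod shape, including the kind of static check mentioned above; all names are illustrative:)

(require '[clojure.spec.alpha :as s])

(s/def :command/type keyword?)
(s/def ::command (s/keys :req [:command/type]))

(defmulti handle-command :command/type)

(defmethod handle-command :user/create
  [{:user/keys [name]}]
  [{:event/type :user/created :user/name name}])

(defmethod handle-command :user/rename
  [{:user/keys [id new-name]}]
  [{:event/type :user/renamed :user/id id :user/name new-name}])

;; The "static check to taste": every known command type has a handler.
(def known-command-types #{:user/create :user/rename})
(assert (= known-command-types (set (keys (methods handle-command)))))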

absolutejam21:03:48

that’s one of the conclusions I came to but I don’t know Clojure well enough to know if it’s the right path

absolutejam21:03:57

but obviously that’s only a small part

absolutejam17:03:34

I’m also super new to Clojure, so this might be a complete anti-pattern for Clojure 🤷

lukasz17:03:32

Not sure you'll find a library to do all of that, partially because Clojure is a small language with very few concepts, so you don't have to abstract it that much

absolutejam17:03:44

I have found https://github.com/metosin/kekkonen but it seems to be very API-related

lukasz17:03:52

To me it looks like strong separation between read and write paths

lukasz17:03:58

but what do I know ¯\_(ツ)_/¯

absolutejam17:03:57

I mean, that’s the main focus of CQRS, but there are some predefined patterns which I thought might have been put into a library

absolutejam17:03:39

From what I’ve seen so far, there’s less of a focus on frameworks, but there are still standouts like Reagent/re-frame, which provide the building blocks and data-flow patterns and you simply satisfy them

Ahmed Hassan18:03:04

Look at Fulcro. It has a pattern of Queries and Mutations, with Idents for storing data in tables. But it solves very specific problems.

emccue18:03:05

@james662 I think with Clojure things like that don't need to be libraries

emccue18:03:27

if that resonates with anyone else

emccue18:03:53

try out hugsql

emccue18:03:05

and try to emulate the pattern

absolutejam18:03:13

Maybe it’d be a learning experience for me to try and implement that as a library

absolutejam18:03:24

not for the library as the end result, but just to learn the pieces and what it’d look like in Clojure

absolutejam18:03:05

Haven’t touched the likes of protocols yet so it’d be interesting to see

potetm18:03:06

Records and protocols would be terrible for this use case. Commands will need to be serialized in the event store. Serializing records is not a fun game to play. (Trust me. I have some war stories.)
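(A quick illustration of that pain; the record and map here are made up:)

(require '[clojure.edn :as edn])

(defrecord CreateUser [name])

(pr-str (->CreateUser "Ada"))
;; => "#user.CreateUser{:name \"Ada\"}"  (the ns prefix depends on where it's defined)
;; Reading that back requires the record class on the classpath, and
;; clojure.edn/read-string rejects the tag outright unless you wire up readers.

;; A plain map round-trips with no such coupling:
(edn/read-string (pr-str {:command/type :user/create, :user/name "Ada"}))
;; => {:command/type :user/create, :user/name "Ada"}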

absolutejam21:03:45

Ah, this is interesting. I assumed it’d just be something you could easily tack on

potetm22:03:44

Yeah, no. Friends don’t let friends put records on the wire. Use maps.

absolutejam22:03:27

Guess I was thinking of them as decorated maps, but they’re actually Java classes right?

absolutejam18:03:15

also, this seems like a perfect fit for Datomic

lukasz18:03:57

Definitely, you usually need much less - for starters, as @emccue suggested: try HugSQL + 2 JDBC connections, one of them being read only and see how it goes

adamfeldman18:03:48

@james662 FWIW I’m also working on CQRS/ES in Clojure, perhaps we can continue the conversation in #architecture?
+1 for Fulcro as a tool for building complex frontends.
re: Datomic: https://vvvvalvalval.github.io/posts/2018-11-12-datomic-event-sourcing-without-the-hassle.html

absolutejam18:03:58

Read that post and it's awesome

👍 4
didibus18:03:33

By the way, CQRS is super complex

didibus18:03:46

And adds a ton of complexity to your app

didibus18:03:59

You really really got to be sure you'll benefit from it

didibus18:03:31

Which is to say, don't do it, it's a terrible architecture that most likely leads to failed projects "Unless you know better"

potetm18:03:12

Uuuuh, absent whatever context you’re thinking of, I’m gonna disagree pretty hard here.

potetm18:03:33

The model is much saner than the industry standard.

didibus18:03:06

That's my best practice advice absent any context. The cases where CQRS will be a win are the minority. So it's almost always best avoided. Obviously, my personal opinion.

potetm18:03:08

If you’re thinking, “CQRS backed by a SQL data store is terrible and probably not worth the cost,” then yeah. That’s sane.

potetm18:03:42

But datomic is basically CQRS embodied (depending on your definition).

potetm18:03:52

And it’s super nice to work w/.

didibus18:03:59

It's not CQRS at all

didibus18:03:11

I know lots of people make these huge leap trying to compare them

potetm18:03:17

it’s not a leap

didibus18:03:23

But it's just a very different model

potetm18:03:39

No. It’s different, but related in obvious ways.

didibus18:03:02

Related the same way a for loop relates to reduce in my mind

didibus18:03:29

The differences matter immensely here

potetm18:03:40

They’ve taken the work out of creating “aggregates” by doing it by default.

potetm19:03:04

You can debate whether you call txns “commands” or not. The end effect is the same.

potetm19:03:25

They’ve also made it trivial to revisit aggregate states.

potetm19:03:01

So yes. Big diffs. But the extremely important concept of “track everything that happens and don’t forget it,” is still there.

didibus19:03:14

That's not really anything to do with CQRS

potetm19:03:25

ok, not gonna debate semantics w/ you

didibus19:03:42

You could make a CQRS implementation that has that property

didibus19:03:08

But CQRS is an architectural pattern. It's like an implementation. It's not really a guiding principle

didibus19:03:49

You're kind of talking about use cases almost. I just find it concerning to associate the two.

potetm19:03:13

Yes. I’m sure it’s an architectural pattern devoid of any principles.

didibus19:03:20

CQRS isn't what you have to reach for if you want to have a history of your transactional changes.

didibus19:03:29

Just one option

didibus19:03:36

Datomic being another 😜

potetm19:03:02

There’s a reason people always tack on “ES.”

potetm19:03:15

apparently

potetm19:03:16

It’s so semantic police will not attack them.

didibus19:03:48

But I'm not playing with the word semantics here

didibus19:03:59

I'm talking about the semantics of the software design

didibus19:03:37

If you want to call the pattern Datomic follows CQRS go for it

potetm19:03:04

Did I say that?

didibus19:03:13

But there still exists a different design which is different from how Datomic works, and some people call that CQRS as well, and they don't work the same way

potetm19:03:30

You’re pretending I’m saying things just so you can argue.

potetm19:03:58

And when I was more precise, you continue to make vague assertions about what “counts as CQRS.”

didibus19:03:08

Ok, so maybe I don't know what you're saying. It seemed you were saying you think CQRS is great, and gave me Datomic as an example of why?

potetm19:03:10

So, yes, you are playing word games.

potetm19:03:43

No. I said CQRS is better than the industry standard because of certain properties it has.

potetm19:03:55

> The model is much saner than the industry standard.

didibus19:03:22

Oh, okay. So I guess we are in violent agreement then

didibus19:03:43

Wait, the CQRS model or Datomics?

didibus19:03:04

Maybe I'm confused now

potetm19:03:30

Both. They share certain properties. (But Datomic is a major improvement.)

didibus19:03:40

I'm saying, CRUD is best for the average project, and choosing to implement CQRS instead is often a big mistake.

didibus19:03:55

Using Datomic is what I'd consider a third option

didibus19:03:19

Which is a better choice than CQRS, and, depending on your requirements, than CRUD as well

potetm19:03:53

I mean, I’m not gonna hard disagree, except many projects need event history. It’s not an uncommon requirement.

didibus19:03:27

Ya but you don't need CQRS

potetm19:03:49

But saying, “prefer SQL to CQRS” is much softer than what you said initially 😛

didibus19:03:53

You can just publish some separate events to some data warehouse store

potetm19:03:29

Well, then you have another data store you gotta keep transactionally in-sync 😄

didibus19:03:51

Or if you use, say, DynamoDB, it even supports publishing events automatically on a change. And I'm sure Postgres probably has something similar

potetm19:03:05

To my knowledge, postgres does not.

potetm19:03:10

I wish it did.

potetm19:03:39

Reading a log (a la datomic) is by far preferred.

didibus19:03:18

But Datomic (and I might be wrong) is not really a log, in that the log is used by the writer, but then the data is just records that are indexed, from which you can query slices, no?

potetm19:03:31

No, it has a reified log.

potetm19:03:36

You can ask for it and troll it yourself.

didibus19:03:47

You can replay and stuff?

didibus19:03:16

I mean, okay fair enough if it keeps the writer log durably

✔️ 4
potetm19:03:23

In-memory sure. There’s no button for resetting the db back and re-playing.

potetm19:03:15

But one of the data-removal strategies is to replay the log into a different db, removing data you don’t want anymore.

didibus19:03:18

I was under the impression it was more simply an n-tuple store, with a log-based single-node writer to it

potetm19:03:45

It is. But the log is reified as data as well.
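(For concreteness, a hedged sketch of reading that reified log with the Peer API; the connection URI is made up, and the Datomic docs are the authority on the exact API:)

(require '[datomic.api :as d])

(let [conn (d/connect "datomic:dev://localhost:4334/app")]
  ;; d/log returns the log; d/tx-range gives the transactions between two
  ;; t values (nil meaning open-ended), each as a map with :t and :data (its datoms).
  (->> (d/tx-range (d/log conn) nil nil)
       (take 3)
       (map (juxt :t (comp count :data)))))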

didibus19:03:49

In which case, the writer and reader use the same object model

didibus19:03:59

Ya okay, that's nice

potetm19:03:44

Super nice. And you can use it to, say, sync an external, eventually-consistent data store.

didibus19:03:24

To be fair, I haven't used a DB that didn't have an event stream baked in in a while

didibus19:03:58

So maybe the "industry standard" is way behind on my assumptions

didibus19:03:34

I just feel I need to warn people venturing in CQRS

didibus19:03:25

Being interested in it and trying to build it is very different from saying you're choosing to use Datomic or Crux or DynamoDB, etc as your database

didibus19:03:18

Like the person asking for a library/framework for it

didibus19:03:28

I've seen this path crash and burn multiple times before

didibus19:03:30

Martin Fowler puts it well:
> CQRS is a significant mental leap for all concerned, so shouldn't be tackled unless the benefit is worth the jump. While I have come across successful uses of CQRS, so far the majority of cases I've run into have not been so good, with CQRS seen as a significant force for getting a software system into serious difficulties.

potetm20:03:13

> Being interested in it and trying to build it is very different from saying you’re choosing to use Datomic or Crux or DynamoDB, etc as your database

potetm20:03:00

Yeah, I’d agree with all of that. If you’re uncertain, you should pretty universally just start w/ Postgres and figure the rest out. But for many apps, you need a reified log (which isn’t provided by most RDBMS’s). And that mental model is super powerful once you have it.

potetm20:03:10

Curious what your DB experience has been with. Not sure how many DBs have event streams baked in.

seancorfield20:03:12

Very interesting piece by Fowler. Thanks for highlighting that. I'd be interested to know if his thinking on this has changed any in the nearly ten years since he wrote that...

didibus21:03:56

I've been working with AWS for the last 5 years, and mostly rely on DynamoDB. Though what we often do for audit/metrics/analytics is publish a separate log to SQS and batch-process those into S3 files, which we later query using AWS Athena. We also maintain the last 2 years of data in an AWS Redshift cluster.

didibus21:03:40

We need to do this anyways because we have data in multiple places across multiple services and teams. And the business needs a unified history view of all that. So having a separate publisher, everyone can just make sure to integrate with it where required no matter what they use for managing their data

didibus21:03:50

It does mean data can drift, since not all stores will guarantee to be 100% in sync at all times. You can try to make the publishing of the event atomic with your data storage, but when the storage layer isn't designed from the ground up to support this, it's almost impossible

didibus21:03:28

The best we do sometimes is perform a kind of Saga of events. Like publish the event, make the change to the DB, if the change to the DB failed, publish a redaction event. And if the first publish failed, we'd fail the request itself.
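(Roughly the shape of that flow, as a hedged sketch; publish-event! is a stand-in for whatever queue/SQS client is actually in use:)

(defn publish-event! [event]
  ;; Illustrative stub; in practice this would call the messaging client.
  (println "publishing" event))

(defn handle-change! [event write-fn]
  ;; 1. Publish first; if this throws, the request fails outright.
  (publish-event! event)
  (try
    ;; 2. Apply the change to the primary store.
    (write-fn)
    (catch Exception e
      ;; 3. The DB write failed, so publish a compensating "redaction" event.
      ;;    This publish can itself fail, which is the residual gap noted below.
      (publish-event! {:event/type    :event/redaction
                       :event/redacts (:event/id event)})
      (throw e))))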

didibus21:03:57

But you can see there are still points of failure here, like failing to publish the redaction

didibus21:03:18

We kind of just live with those

didibus21:03:36

They'll often just disappear in the wash.

didibus21:03:34

If I was modeling a transactional system moving money around, I'd probably want something which eliminates even those possible faults.

didibus21:03:51

And then you might need a more integrated solution

absolutejam21:03:57

I’m playing with CQRS/ES in toy project(s), but the event-based pattern is what I’m aiming for as the entire project is essentially a timeline and needs an audit trail

absolutejam21:03:34

Are you explicitly saying to stay away from CQRS or ES?

didibus21:03:56

Which you could build on DynamoDB, since it gives you a log stream with all these guarantees. Or Datomic would be a good option for that too, I think

didibus21:03:34

I'm kind of saying to stay away from both

didibus21:03:03

I mean, in toy projects it's irrelevant, if the point is to learn

didibus21:03:32

It's not that they don't have merits, but they are very complex systems

didibus21:03:50

And you can easily turn a simple business use case into an accidental complexity mess

didibus21:03:57

The effort to properly implement CQRS and ES, and then build all your requirements over them, is huuge, compared to some alternatives

absolutejam21:03:47

Like you said, these are toy projects so I’m not sinking business-paid dev time on them, but it’s fair warning, thanks.

didibus21:03:54

Ya, I mean they're entirely valid approaches, but I think they're often used because I guess they're the architecture "du jour", and using it for the wrong use cases is an anti-pattern

absolutejam21:03:54

I can see that’d be the case, but it’s also why I wondered if there was any kind of framework to make it a little more smooth sailing. I can see that CQRS/ES as an architecture probably benefits a certain scale and attitude, but I want to give it a go mostly out of curiosity

absolutejam21:03:12

The in-built log stream is interesting, although I don’t use AWS

didibus21:03:56

I guess Datomic also just gives you a history trail for free

absolutejam21:03:13

Datomic was one of the things that made me think Clojure would be a good fit

didibus21:03:13

But I guess both of those are paid $$$ solutions

absolutejam21:03:46

the project I’m playing with in Elixir uses Postgres, but Elixir brings its own benefits to this kind of architecture

absolutejam22:03:02

Datomic has a free offering right?

didibus22:03:10

I'm surprised PostgreSQL doesn't support some form of it. I wonder if any free open source DB does

absolutejam22:03:12

> 2 simultaneous peers and transactor-local storage only.

didibus22:03:24

Ya, but I think for non-commercial use only

didibus22:03:40

Oh that's nice. So a free tier then

absolutejam22:03:41

Honestly, I’d not thought of a transaction log as a side effect at the DB level

absolutejam22:03:14

I’m really just toying with different architectures so I can get a bit of experience with them

didibus22:03:30

Ya, then it's worth looking into

didibus22:03:37

Good learnings and experience for sure

absolutejam22:03:50

But on the flip-side, I’ve spent a few days just figuring out the architecture and implementation, whereas I could have had half of the app running so far, haha.

absolutejam22:03:57

Hoping it’s just an initial investment

didibus22:03:04

😅 ya, most of my personal toy project end up like that

didibus22:03:25

I'm more interested in messing around than delivering anything of value, it seems hehe

didibus22:03:48

Seems you kind of can do it with PG as well

didibus22:03:29

You'll have a separate table of logs which can record all insert, update, delete and truncate operations

didibus22:03:09

Seems you can even just access the PostgreSQL write-ahead log directly

didibus22:03:44

You don't need to use AWS for it. You can adapt the pattern.

potetm22:03:47

So, Audit triggers assume an audit table.

potetm22:03:00

If you’ve got everything recorded in a log, there’s no need for a trigger.

potetm22:03:13

Yeah, you can access the WAL, BUT (and it’s a big but) you’re consuming resources on the server for the duration. Postgres does not keep the entire WAL (not by a long shot). So if you’re, for example, disk constrained, and your log reader gets behind, you risk hosing your entire system.

didibus22:03:46

That's not how I'm reading the docs. The trigger is on your regular table, and what the trigger does is write a log to an audit table

potetm22:03:23

> write a log to an audit table

potetm22:03:52

ah, I see what you mean

potetm22:03:09

but yeah, basically a form of: store the log in a sql table

didibus22:03:44

Ya, it's not ideal for replicating to a different DB. I guess you could then export the audit table and import it elsewhere

didibus22:03:16

But if you don't care about that and just want an audit trail, it works

potetm22:03:37

yeah, postgres has a pretty robust WAL replication story already

potetm22:03:41

no need to side-car that

didibus22:03:12

Ya, the wal2json approach seems nicer to me

didibus22:03:35

You're right that you can get behind I guess

didibus22:03:53

Wonder if there's a way to handle that, and throttle the writes to the wal

didibus22:03:59

Or stop them in that case

absolutejam22:03:08

Reading from the WAL is an interesting approach

absolutejam22:03:18

I know Kafka can consume the pg wal

potetm22:03:33

yeah that seems bad news bears to me

didibus22:03:57

I'm guessing Datomic just starts failing on writes in that case

potetm22:03:02

unless you’re massively over-provisioned, so resources are guaranteed to be not constrained.

didibus22:03:12

Say its writer node runs out of disk space?

potetm22:03:24

uh, I dunno. Datomic doesn’t assume it can trim the log

didibus22:03:11

I mean, it's a scenario I feel shouldn't be too hard to plan for

potetm22:03:12

problem w/ postgres is: you’re interfering with its internal assumptions to a degree

didibus22:03:20

The WAL is on disk, correct?

potetm22:03:24

you know, you’d think that

potetm22:03:50

but I’m currently in a situation where getting behind on WAL reading would hose the whole app :derp:

😧 4
potetm22:03:00

g2g, dinner

absolutejam22:03:10

thanks for your input

didibus22:03:11

So you've got a big buffer. If you know your consumption rate, you can just plan and put some write limits to the DB.

didibus22:03:16

Dunno if PG can be set to have a write limit. If not, you'd have to build an additional layer.

didibus22:03:28

Eh, anyways

absolutejam22:03:29

feels a bit circular though. you write to postgres then consume the WAL update and process that in your app.

didibus22:03:58

I'd still contend these types of approaches are, I think, simpler (even if still complicated) than full-on CQRS

didibus22:03:31

Well, it depends on your app

didibus22:03:03

Like, if you just want to model a timeline of some logical app domain entity. I wouldn't even do any of this. Just model it directly in your data model

didibus22:03:26

Like if you want a list of X, just have a table for it

didibus22:03:06

There's kind of a difference between needing to build an audit-like feature in your app and having to audit the entire application's data

didibus22:03:21

I've been talking about the latter the whole time

didibus22:03:05

Generally that's for things like government compliance, record keeping, reporting, business analytics, etc.

didibus22:03:35

If all you care about is users being able to go to a view that shows their activity timeline, all of this is overkill

didibus22:03:49

For that all you do is insert a new row for each new activity and then filter over the ones for your user sorted by date

didibus22:03:02

In some activity table

didibus22:03:11

And done, you now have user activity audit

didibus22:03:25

Each user can go and see their history of activity
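(As a hedged sketch of that, using next.jdbc; the table and column names are made up:)

(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

(defn record-activity! [ds user-id activity]
  (sql/insert! ds :user_activity
               {:user_id    user-id
                :activity   (pr-str activity)
                :created_at (java.sql.Timestamp/from (java.time.Instant/now))}))

(defn user-timeline [ds user-id]
  (sql/query ds ["select * from user_activity
                  where user_id = ?
                  order by created_at desc"
                 user-id]))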

didibus22:03:18

There's an internal app we have where they modeled comments as ES, and it's the worst user experience ever. You type a comment, submit, see your comment, refresh the page, and the comment is gone. All because of what you described: ES is naturally eventually consistent. It's a huge roundabout. Your comment was first sent as an event: "Please create this comment." Then a reader later consumes it and creates the comment in the queryable store. Only then can you refresh the app and see the comment.

didibus22:03:37

In the meantime, it's possible others commented before you, yet you were not yet seeing their comment

didibus22:03:40

It's a mess

didibus22:03:09

Nothing is more frustrating to a user than doing something and seeing it disappear and later reappear

didibus22:03:22

Oh, and one last thing 😋: CQRS and ES come a bit from DDD (domain-driven design). I love DDD. And I just want to say: you can separate your queries from your writes in your app but still use the same DB for both. And that, I'd say, is a pretty nice pattern.

didibus22:03:50

CQRS takes this principle from DDD, and applies it to the storage layer as well

didibus22:03:24

But even DDD, is a pretty complicated arrangement, and you need a complex enough domain to justify using it too

didibus22:03:20

But all it means is that, you should have entities structured in terms of your writes. Store those in the DB. They will protect your data invariants, and define the level of atomicity required.

didibus22:03:45

Those entities you only retrieve for the purpose of modifying them

didibus22:03:25

Then, you have separate queries, only for the purpose of displaying data, not to be modified

didibus22:03:09

Those queries can choose to return the data in whatever structure or shape, with whatever level of processing applied to them, since they are read only

didibus22:03:36

But they could still be performed over the same datastore, same tables
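(A hedged sketch of that arrangement over a single datastore, again with next.jdbc; every table, column, and fn name here is illustrative:)

(require '[next.jdbc :as jdbc]
         '[next.jdbc.sql :as sql])

;; Write side: work with the whole entity, enforce its invariants, persist it.
(defn place-order! [ds order]
  (assert (seq (:order/lines order)) "an order needs at least one line")
  (sql/insert! ds :orders {:id   (str (:order/id order))
                           :data (pr-str order)}))

;; Read side: shape the result however the view needs -- same table underneath.
(defn recent-order-ids [ds]
  (map :orders/id
       (jdbc/execute! ds ["select id from orders order by id desc limit 50"])))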

jaihindhreddy19:03:58

user=> (def names '#{take take-while drop drop-while map filter reduce partition partition-all partition-by drop-last take-nth interleave interpose})
#'user/names
user=> (->> (ns-publics 'clojure.core)
  #_=>      (filter (comp names key))
  #_=>      (map (comp last last :arglists meta val))
  #_=>      (frequencies))
{coll 12, colls 2}
All the fns above require a seqable as their last arg, and despite that, the arg name is coll. Is/Was there some reasoning behind this?

didibus03:03:28

They also accept collections
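(For instance, nothing beyond clojure.core assumed:)

(map inc [1 2 3])       ;; a vector works
(filter odd? #{1 2 3})  ;; so does a set
(take 2 {:a 1 :b 2})    ;; and a map (you get its entries)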

didibus03:03:27

I don't think there was any reason beyond whatever the person who chose the name thought of calling it