This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-08-10
Channels
- # announcements (9)
- # aws (11)
- # babashka (37)
- # beginners (97)
- # biff (2)
- # calva (73)
- # clj-kondo (17)
- # cljfx (3)
- # clojure (89)
- # clojure-europe (45)
- # clojure-norway (12)
- # clojurescript (17)
- # datahike (8)
- # datomic (13)
- # deps-new (4)
- # figwheel-main (1)
- # graalvm (2)
- # hyperfiddle (8)
- # introduce-yourself (6)
- # leiningen (38)
- # lsp (57)
- # malli (13)
- # nbb (46)
- # off-topic (40)
- # pathom (3)
- # polylith (8)
- # rum (4)
- # shadow-cljs (14)
- # spacemacs (1)
- # sql (11)
- # xtdb (10)
When reading the source code of penpot: https://github.com/penpot/penpot/blob/develop/backend/src/app/msgbus.clj/#L82 I often see qualified keys on a map. I wonder about the pros and cons. For me, using qualified keys is: • good: I can use my IDE to jump to usages of the keys • bad: it looks ugly? (maybe). And it's hard for a user to specify the key
qualified keys make grepping and renaming across languages (templates, json) pretty easy
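For illustration, a minimal sketch of the two styles (the :app.msgbus/* key names here are made up):
(def msg {:app.msgbus/cmd :sub
          :app.msgbus/topic "events"})
;; vs. unqualified keys:
(def msg' {:cmd :sub :topic "events"})
;; destructuring supports both styles:
(let [{:app.msgbus/keys [cmd topic]} msg] [cmd topic]) ;=> [:sub "events"]
(let [{:keys [cmd topic]} msg'] [cmd topic])           ;=> [:sub "events"]
The qualified version is globally unambiguous and easy to grep; the unqualified one is shorter for users to type.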
Guys, I want to know how you are using Clojure in fintech. What frameworks did you choose, what is being built, and why didn't you choose a strongly typed programming language like Java instead of Clojure? I'd appreciate your answers.
I think Ed Wible’s blog about Clojure and Datomic at Nubank, written when Cognitect joined Nubank, remains an exceptionally clear statement of how Nubank became the fastest growing fintech in Latin America: https://building.nubank.com.br/welcoming-cognitect-nubank/
http://www.Crescent.app is all Clojure on the backend. Using: • Pedestal (web service libraries) • Lacinia (GraphQL client API) • Datomic (database and deployment stack on AWS) • Pathom (interservice queries, admin API) • etc. Clojure allows us to iterate quickly and Clojure Spec allows us to specify our functions and data to a much richer degree than any statically-typed language. Fwiw, Clojure is strongly typed.
i was shocked; why would they choose NoSQL databases? besides, i've heard about issues from CS theory, like the CAP-theorem problems of NoSQL databases.
I'm wondering whether Datomic follows database ACID properties and transaction management.
also, those points from IRC:
http://cryto.net/~joepie91/blog/2015/07/19/why-you-should-never-ever-ever-use-mongodb/
i'm wondering how Datomic (a NoSQL database) still survives in fintech.
Thank you for the article. It will give me insight and help me learn valuable things 🙂
In fact, it is more strict regarding consistency than most SQL DBs because it provides whole database read consistency at a point in time. Its awareness of time also enables horizontal read stability without sacrificing consistency (at a given time value).
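As a rough sketch of what that looks like in practice (the connection URI and attribute names are made up):
(require '[datomic.api :as d])
(def conn (d/connect "datomic:dev://localhost:4334/example"))
;; d/db captures an immutable database value: every query against it
;; sees the same consistent snapshot, however long we hold onto it
(def db (d/db conn))
;; d/as-of rewinds that value to an earlier basis-t or instant,
;; giving stable reads at a given time value
(def db-last-week (d/as-of db #inst "2022-08-03"))
(d/q '[:find ?name :where [?e :user/name ?name]] db-last-week)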
what's really shocking is how many businesses are built on SQL databases that routinely destroy information
@U087E7EJW i briefly looked into Pathom, but it wasn't obvious to me how i would use it. how has it been useful in your admin API?
@U0LAJQLQ1 It allows us to expose part of our domain via a graphql-like API (but with Clojure sensibilities), e.g., fully-qualified attribute names (namespaced) with global semantics (i.e., the attribute has the same semantics everywhere it is used, not context dependent like fields in GraphQL). This allows us to use the same attribute names that we use in our domain (e.g., in Datomic). Also, rather than calling named queries, you just ask for the data you want: in Pathom you define the input and output of resolvers in terms of what attributes they require or provide, and Pathom uses those definitions to build indexes that can be used to determine what resolver(s) need to be invoked to satisfy a particular query. This makes it much easier to build rich APIs from the bottom up, rather than having to define a bunch of rigid types and queries up front. It also makes the composition of resolvers implicit, rather than explicit like in GraphQL (i.e., field resolvers defined on object types). Mutations are much the same as in GraphQL or any REST API: a bag of attributes sent to a named mutation, but like in GraphQL you can provide a "query" against the result (query is kind of a misnomer in GraphQL and Pathom).
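To make that concrete, here's a minimal Pathom 3 resolver sketch (the attributes and the fake lookup are made up):
(require '[com.wsscode.pathom3.connect.operation :as pco]
         '[com.wsscode.pathom3.connect.indexes :as pci]
         '[com.wsscode.pathom3.interface.eql :as p.eql])
;; Pathom infers that this resolver requires :user/id and provides :user/name
(pco/defresolver user-name [{:user/keys [id]}]
  {:user/name (str "user-" id)})
(def env (pci/register [user-name]))
(p.eql/process env {:user/id 7} [:user/name]) ;=> {:user/name "user-7"}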
EQL (EDN Query Language, Pathom's query language) is similar to Datomic pull selectors, but adds parameters.
It's all definitely rough around the edges, but we have a team of Clojure devs that are comfortable fixing issues with the library as they arise.
@U0LAJQLQ1 sure, here is a simple query for two attributes:
[:user/id :user/date-of-birth]
here is the same query, but I'm passing a parameter to the :user/date-of-birth attribute:
[:user/id (:user/date-of-birth {:format-str "MMddYYYY"})]
@U0LAJQLQ1 here is a great resource with lots of comparisons to Datomic: https://edn-query-language.org
Regarding Datomic immutability in Fintech: It's stressful to even imagine having to cover all of our bases w.r.t. retaining history for auditability and troubleshooting without Datomic's "accumulate-only" model. Being able to build and evolve our data model without explicit schema migrations, plus having entity history, has allowed us to iterate more quickly than would be possible with SQL. It's amazing how often being able to answer the question "when was that attribute value changed/deleted?" has saved the day. We also annotate every transaction with provenance information (request-id, user-id, source, etc).
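A minimal sketch of that annotation, assuming hypothetical :audit/* attributes are installed in the schema:
(require '[datomic.api :as d])
@(d/transact conn
   [{:user/email "new@example.com"}
    ;; the string "datomic.tx" resolves to the transaction entity itself,
    ;; so these facts are recorded about the transaction
    {:db/id "datomic.tx"
     :audit/request-id "req-8f2c"
     :audit/user-id    "ops-admin"
     :audit/source     :admin-api}])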
syntactically, yes
after reading the params part of the link you gave me i'm left confused, but i think reading more about Pathom will probably make things clear
mutations are modeled similarly, but using symbols instead of keywords, e.g. [(my-mutation {:arg1 3 :arg2 5})]
That website I linked is a fantastic resource, I think everything you want to know about EQL can be found there.
Something amusing I learned about NoSQL some time back: it does not stand for "no SQL", but for "not only SQL". SQL is just a bad language for relational algebra. You can have other languages with ACID databases. Datomic's value proposition is greater than a simple SQL db's.
“nosql” is often used as a synonym for “not relational / not ACID”
Yeah, the term certainly ran away from the original intention. It's hanging out with object orientation at the bar
So is the concept of OOP completely thrown out the window when it comes to Clojure?
Clojure embraces polymorphism, but not concrete derivation (preferring composition of immutable data), or encapsulation (again, by exposing immutable data)
both multimethods and protocols allow for open extension of polymorphic calls, and with multimethods you are not tied to a single hierarchy of types - you can dispatch based on not just the type of args but on the values as well, and you can use not just one hierarchy but multiple if needed
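a small sketch of value-based dispatch, for example:
(defmulti area :shape) ; dispatches on the value of :shape, not on a type
(defmethod area :circle [{:keys [r]}] (* Math/PI r r))
(defmethod area :rect   [{:keys [w h]}] (* w h))
(area {:shape :circle :r 2.0})  ;=> 12.566370614359172
(area {:shape :rect :w 3 :h 4}) ;=> 12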
protocols leverage fast type-based single dispatch (highly performant in the JVM) but also support more dynamic means of extension such as via metadata
> but also support more dynamic means of extension such as via metadata
interesting, can you point to an example or doc?
https://clojure.org/reference/protocols#_extend_via_metadata
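from that page, the shape of it looks like this (Component here is a made-up example protocol):
(defprotocol Component
  :extend-via-metadata true
  (start [component]))
;; any value can carry an implementation in its metadata, keyed by the
;; fully-qualified method symbol; no type or record is required
(def server
  (with-meta {:port 8080}
    {`start (fn [component] (assoc component :started? true))}))
(start server) ;=> {:port 8080, :started? true}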
And you can always impl derivation or encapsulation locally to scratch an itch, if it makes the most sense for some niche problem. Clojure is very malleable and doesn't stop you from creating those kinds of things where they make sense.
It just doesn't need to be a core idiom because you don't need those things most of the time
I found this helpful: https://clojure.org/about/state When I was an imperative programmer and there was a buzz about Smalltalk in the mid-1980s, I tried to understand what OOP was. It was always presented as a list of characteristics: encapsulation, polymorphism, inheritance.... I was left asking, "AND? Why?". Objects try to simulate the 'life of an object'. OOP conflates the local process and state of a thing, then builds networks of things communicating by message. Clojure doesn't do that conflation, but you can still have some of the words on that list I didn't understand, and build networks of pipelines through which data flows.
Yeah, my understanding was that it was born for front end UI composition. Over time I've come to appreciate inheritance for that use case, but I wouldn't want to use it for everything in the general case
When anyone asks questions about Smalltalk on Quora, there's a reasonably high likelihood that Alan Kay will answer, so there's a very good archive there of what he's said about his intentions when he designed it. I know he wishes he'd emphasised messages rather than objects. He says that's the more important concept.
also, what does data driven development even mean? Code IS data no matter which programming language you use, it seems like this statement is just semantics. Why is Clojure considered to be "code as data"?
data driven development means that you start with representing your data as a composition of a small number of immutable collections (mostly maps and vectors) and basic types. this data can be directly manipulated, is immutable (safe to share across threads and method boundaries), and has value-based equality
your program is then largely a set of pure transformation functions that manipulate data from one immutable form to another, which is easy to test and reason about
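a tiny sketch of that shape of program (the order data is made up):
(def orders
  [{:id 1 :status :paid    :total 40}
   {:id 2 :status :pending :total 15}
   {:id 3 :status :paid    :total 25}])
;; a pure transformation from one immutable value to another
(defn paid-total [orders]
  (->> orders
       (filter #(= :paid (:status %)))
       (map :total)
       (reduce +)))
(paid-total orders) ;=> 65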
the "code as data" thing is related to the fact that Clojure code is read by the Clojure reader from a string into a Clojure data structure. the compiler takes clojure data structures, and compiles them to JVM bytecode. Because the code is represented in Clojure data, you can provide macros, which are simply functions that transform code (Clojure data) into other code (Clojure data). So macros are like compiler extensions, written in Clojure, to let you extend the language
see https://clojure.org/guides/learn/syntax for more info on that
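a quick sketch of that pipeline at the REPL:
;; the reader turns a string into ordinary Clojure data
(def form (read-string "(+ 1 (* 2 3))"))
(first form) ;=> + (a symbol)
(rest form)  ;=> (1 (* 2 3))
(eval form)  ;=> 7
;; a macro is just a function from code-as-data to code-as-data
(defmacro unless [test then] (list 'if test nil then))
(macroexpand '(unless false :ok)) ;=> (if false nil :ok)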
on Twitter, https://twitter.com/lexi_lambda and https://twitter.com/ShriramKMurthi (a professor at Brown) argue you are exactly correct. Most (all?) languages allow you to get an AST and do whatever you want. They argue that "homoiconicity" doesn't exist as a well-formed idea. In practice it is much nicer to manipulate the list forms of Clojure code (code as data) in a macro than whatever API you might have for modifying an AST-type structure. In Clojure, this can be seen in the difference between using defmacro versus an AST-style library like https://github.com/clojure/tools.analyzer
> data driven
adding a quote from @U050P0ACR (https://ericnormand.me/article/data-functions-macros-why):
> I've been coding in Lisp for a long time, so I've internalized this idea of data-driven programming. It's the main idea of http://www.amazon.com/exec/obidos/ASIN/1558601910, one of the best Lisp books out there, and a big influence on me. What the guideline of "Prefer data over functions" means to me is that when it's beneficial, one should choose data, even if functions are easy to write. Data is more flexible and more available at runtime. It's one of those all-things-being-equal situations. But all things are rarely equal. Data is often more verbose and error-prone than straight code. Data is more flexible than code. If you represent something as pure data, you can serialize it, send it around, inspect it, etc.
Sure, and all turing complete languages can do the same.
@U03P01XST0W Compare how hard it is to transform classes in Java to how easy it is to transform anything in Clojure. Java (as an example) puts a barrier between code and data that way. Our Clojure code is represented the same way as data, and can be treated the same way as data. It really is data in the same sense that any other data is. It might be that you don't fully agree with the phrasing, or perhaps you don't fully realize what difference is being pointed out here. Both of which are fine! I think you're raising an interesting question. But I hope you recognize what distinction between LISPs (like Clojure) and many other languages is being pointed to.
if it's all just semantics then why don't we all use the same programming language?
> Yes but isn't this just semantics? Your code performs some type of action and it is up to your data structures to be stored, inspected, sent around, serialized etc, just like with any other language An example: in clojure, I often prefer to model commands as data. Take this piece of Clojure code (using babashka/cli):
(require '[babashka.cli :as cli])
(cli/dispatch [{:cmds ["relations"] :fn relations}
{:cmds ["page"] :fn create-page :cmds-opts [:slug]}
{:cmds ["create-page"] :fn create-page :cmds-opts [:slug]}
{:cmds ["random-page"] :fn random-page}
{:cmds ["makefile"] :fn makefile}
{:cmds ["help"] :fn print-help}
{:cmds ["index-by-uuid"] :fn index-by-uuid}
{:cmds [] :fn print-help}]
args
{:coerce {;; relations
:from :keyword ;; page relation format
:to :keyword ;; page relation format
:dry-run :boolean
;; page
:title :string ;; Page title
:n :long ;; Count - eg random page count
:uuid :string}})
Compare that to this piece of Go code:
var (
    Cmd = &cobra.Command{
        Use:   "server",
        Short: "",
        Run: func(cmd *cobra.Command, args []string) {
            configfile, _ := cmd.Flags().GetString("config")
            if err := configloader.Load(configfile, &config); err != nil {
                panic(err)
            }
            logLevel, err := logrus.ParseLevel(config.App.Log.Level)
            if err != nil {
                panic(err)
            }
            logger.Init(logLevel, config.App.Log.PrettyPrint)
            log = logger.Get()
            if config.App.InfluxDB.Enabled {
                log.Infof("starting influxdb reporter at %s", config.App.InfluxDB.Host)
                go metrics.StartInfluxDBReporter(
                    logrus_adapter.AsLevelledLogger(log),
                    config.App.InfluxDB.Host,
                    config.App.InfluxDB.Port,
                    config.App.InfluxDB.Database,
                    "carbon",
                    config.App.Environment,
                    config.App.InfluxDB.Fields.Host,
                    config.App.InfluxDB.Fields.ClusterName,
                )
            }
            // ...
In the Clojure code, both the dispatch table and the "opt spec" / "coercion table" are just data. So if either needs to be reused, it can simply be pulled out.
In the go code, ... :thinking_face:
Well, actually, now that I think about it, we still create a cobra.Command, so that's "data-ish" too. But consider the argument coercion. Having a coercion table as data is data-driven.
You do have functions, and you do have data. But you can lean towards preferring data, or lean towards preferring code / functions / imperative instructions. Idiomatic Clojure leans towards data.
Yeah, the key thing that I want to point out here that's been alluded to but hasn't been directly said: the specific value of "code as data" in Clojure is not that you "can represent source code as data", since you can do that in any language with an AST, but rather that the data structures used to represent language source code are the same as the ones that we use to write other programs. This means that we can use the same skills we have in other areas of programming to help us write macros. It doesn't require an entirely new skillset like in Rust or Julia or Ruby or C++ or any number of other languages that provide you a way to do macros or similar forms of metaprogramming.
code as data is imo not one of the most important things about Clojure. For me the important things are: • focus on representing data as immutable with value-based semantics • common abstractions for manipulating data (seqs, but also other more subtle aspects of Clojure's collection and stdlib design, building on key ideas from both Lisp and Java collections) • polymorphism with open extension
"code as data" is merely one example of using those tools, and not one that's essential (although it is a great source of flexibility and leverage)
don't be distracted by people blathering on about "code as data" / homoiconic / macro blah blah blah, the important part is the ideas that make that easy and obvious
Right, those are some of the most important things about the user experience of Clojure and what makes it good, but the question was specifically about "code as data".
once you start representing stuff as data, and you have good tools for manipulating data, you want to do that with everything. code is just one example close to home
I talked to Gerry and Julie Sussman last year and they had an interesting aside that they regretted using the word "homoiconic" in SICP at all and have been trying to remove that from curricula that use it
That's interesting. Did they mention why? Was it because of the risk of it becoming a misunderstood/overvalued buzzword?
I think they just didn't think it was as elevated or even correct as they thought originally
Those aren't directly connected. You can represent your code as data that way just fine, but alex's point is about how having a data-driven programming language provides value to the programmer, and the reason "code as data" is repeated in the clojure community is because we don't need any special data to represent code. Just standard structures.
I don't even know what "data as code" means
https://www.expressionsofchange.org/dont-say-homoiconic/ is in the vein of what the Sussmans were talking about (this not from them, but sounds similar)
https://twitter.com/phenlix/status/1047654474683228160 This thread has lots of info
if you really want to meditate on this, it's interesting to think about the difference between what the reader (meant in the double sense of the Clojure Reader and also the reader of the code) reads and what is evaluated. macros (by allowing syntactic manipulation) create the space to make a new language that is made up of primitives but does not follow our intuition of how Clojure evaluation normally occurs. a small gap can be useful to allow for greater expressivity. as the gap widens between what macros can parse and the recursively expanded macro, we may wonder if what we are writing is still "Clojure".
Just chiming in here since I was tagged above. Arguments that bring up Turing completeness often leave out the idea that programming process matters. That is, sure, once you know exactly what you want, code it in assembly or C or Haskell or Clojure. It doesn't matter. The code/data thing doesn't matter. But if you don't know what you need, having a language that lets you cross meta-levels (such as the code/data distinction) freely as you explore can be helpful. And taken to its extreme, we never really know what we want, since requirements change, etc. And, also, I think there is something rather special about being able to build a new language inside your existing language. Sometimes the implementation of the language plus the solution to your problem written in that language is way smaller than the solution to your problem written in the host language. It's weird, but I've seen it many times. If you pick the right abstractions for your language, you get leverage. An easy example lots of people are familiar with is JSONSchema. Imagine writing custom data validation code in JS for a deeply nested JSON value. It's way more code than writing out the schema. This is possible in any language. Clojure and Lisps are not special. What makes them better suited to making these DSLs is that they were built in the same way they encourage you to write your DSL. There's a good reader (with a more convenient syntax than most data literal syntaxes), the language's literal syntax is the same, there must be good routines for manipulating the data, and there must be other constructs that help, like closures. Even if the language wasn't built that way to the core (like Clojure is largely built in Java), the spirit is definitely there.
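To gesture at the JSONSchema point in Clojure terms, here's a made-up, minimal validation DSL: the schema is plain data and the interpreter is a few lines:
(def user-schema
  {:name  :string
   :email :string
   :age   :int})
(defn valid? [schema m]
  (every? (fn [[k type]]
            (case type
              :string (string? (get m k))
              :int    (int? (get m k))))
          schema))
(valid? user-schema {:name "Ada" :email "ada@example.com" :age 36}) ;=> true
(valid? user-schema {:name "Ada" :email nil :age 36})               ;=> false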
> https://www.expressionsofchange.org/dont-say-homoiconic/ is in the vein of what the Sussmans were talking about (this not from them, but sounds similar)
Really enjoying this article. Here's a small excerpt where the author (Klaas van Schelven) quotes the TRAC paper (https://dl.acm.org/doi/pdf/10.1145/800197.806048):
> Before introducing the term itself, the importance of a single representation for viewing and manipulation by the user and interpretation by the computer is repeated no less than 5 times; numbered here in square brackets for clarity:
>
>> One of the main design goals was [1] that the input script of TRAC (what is typed in by the user) should be identical to the text which guides the internal action of the TRAC processor. In other words, [2] TRAC procedures should be stored in memory as a string of characters exactly as the user typed them at the keyboard. [3] If the TRAC procedures themselves evolve new procedures, these new procedures should also be stated in the same script. [..] [4] At any time, it should be possible to display program or procedural information in the same form as the TRAC processor will act upon it during its execution. [5] It is desirable that the internal character code representation be identical to, or very similar to, the external code representation.
This makes me think "have a single way to represent your program's data model" is the important aspect. It's about how you structure the programs you write, not about the properties of the language (the tool) you use.
I’m trying to include a Java (w/ Maven) dependency in my deps.edn. The upstream project has not made a release in a long time, so I can’t simply pull it in via Maven Central. I can of course just vendor the dep, but I’m wondering if anyone has any advice for the cleanest way to do that. I think I’m mostly happy with a git submodule; once I have that, I can build a jar and use :local/root to point to it. But happy to hear suggestions 🙂 (Will that also pull in deps if that jar has a pom.xml in it?)
In my experience, having a private Maven host and copying the jars over to it worked best, followed by vendoring
If it has a pom.xml, the local root on the jar should pull in dependencies.
Alternatively, if it is in a Maven repo other than Maven Central, you could add that repository to your deps.edn so it will be checked for dependencies.
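a sketch of both options in deps.edn (coordinates, paths, and the repo URL are made up):
{:deps {org.example/upstream-lib
        ;; option 1: point at a locally built jar; if the jar contains a
        ;; pom.xml, tools.deps uses it to pull in transitive deps
        {:local/root "vendor/upstream-lib.jar"}}
 ;; option 2: add another Maven repository to search
 :mvn/repos {"private" {:url "https://maven.example.com/releases"}}}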
clj git deps don't work with submodules fyi - possibly a concern for projects consuming the repo with the submodule as git dep, don't know
@U064X3EF3 ah, good point, though those in turn only work for clojure (and specifically deps.edn) projects, right? not just for random maven java projects?
only clojure source projects
Sente has a function signature like this (defn foo [& [{:keys [x]}]] x)
is this kind of thing done just so it won't throw arity errors?
Just ridiculous. Who can parse that?
it certainly took me a minute. This is the specific function http://ptaoussanis.github.io/sente/taoensso.sente.html#var-make-channel-socket-server.21 in case i'm missing something, which is more than half the time...
I think it's an older style where they wanted optional "options maps".
I think these days someone would just make two arities, one with the arg and one without.
Plus look how much they destructure! It loses its helpfulness after a dozen keys.
I think ericnormand is right that most people would have overloaded, but I think Clojure did make you have to choose a bit. Now at least it should almost always be [& {:keys [x]}], which gives you both.
I'd say it's pretty poor style. (defn foo [& [thing]] ...) is a way to allow thing to be optional, and it has the unfortunate side-effect that (foo bar quux) will "work" and just ignore quux (or any number of additional arguments).
I would definitely make it a multi-arity fn with the shorter arg list just delegating to the long one and passing nil (or {} to clearly suggest an empty options map in this case).
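A sketch of that multi-arity shape:
(defn foo
  ([] (foo {})) ; short arity delegates, passing an empty options map
  ([{:keys [x]}] x))
(foo)         ;=> nil
(foo {:x 42}) ;=> 42
;; extra arguments now fail loudly instead of being silently ignored:
;; (foo {:x 1} :oops) throws an ArityException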
I also agree with Eric about the crazy number of keys being destructured...
Devil's advocate here... the [& [{:keys [x]}]] formalism can be used to enforce the scenario where you want to take, as the first arg, nil or a map that has x as a key. Granted, you're locking yourself into a not-very-evolvable signature for the future
Sometimes, if there's an existing fn with a sig of [thing-a thing-b] and I want to add one more optional thing without breaking existing callers, a quick [thing-a thing-b & [thing-c]] is an easy way to add and destructure the new thing-c into the fn. It always feels dirty though, cause I know that if I want to add a thing-d one day, I probably should have just done [thing-a thing-b & other-things] in the first place so I don't have to keep changing the sig