2019-04-11
It’s great! I think the best part for me was how much I learned about PostgreSQL’s capabilities by using it. You seem to know them well already from your above comment, but those were mostly unknown to me at the time. Whether or not it is a good idea to build a whole application with a database as the sole backend, I think it’s definitely a fun process. Databases like postgres also do some things really well that are traditionally rather hard to achieve in applications. Access control is one of those things; Postgres is very declarative about it.
Windows folks... is anyone else testing the dev or canary channel versions of the new Chromium-based Microsoft Edge browser? Thoughts so far?
I need to check a couple of generated JSON files into git. They are data, not code. Does git lfs make sense here for an OSS project? Or are there any disadvantages? Maybe just check them in as normal files?
I guess that depends on how large they are/how much you expect them to change
and whether you care about the diff ^^
changes: don’t know, but I don’t expect them to change more than once every month or so
well, the worst thing in Git is having binaries in your repo, because every single version is kept (which makes checkouts slow). Since this is JSON, and frankly it isn’t that large, I guess putting it in the repo will be alright ^^. Reading up on Git LFS, it seems that your contributors need to have it installed as well
yeah, that’s what I’m worried about… inflicting some tech on others that’s not generally well established
ya, and an 86kb JSON file, I guess stock Git will track that just fine
(my opinion obviously!)
(does it make a clone of depth 1?)
@U09LZR36F “with this” is with vanilla git, or lfs?
@UDF11HLKC I don't believe it is shallow, because that would make it harder to checkout arbitrary shas.
ah right, that makes sense
Vanilla Git absolutely handles 86kb files without any problems. git lfs is really for binary files that change often
yeah. the only objection I would have is that it’s a bit ugly to have data, not code, in git. but I guess it’s fine
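For what it’s worth, the claim above is easy to sanity-check. A minimal sketch (the repo, filenames, and file contents below are made up for the demo) that commits a roughly 80kb generated JSON file to a fresh vanilla-git repo, “regenerates” it, and commits again:

```shell
#!/bin/sh
# Sketch: vanilla Git handling a small (~80kb) generated JSON file.
# The repo, file name, and contents are made-up stand-ins.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q .
git config user.email demo@example.com
git config user.name demo

# ~84kb of JSON-ish lines, standing in for the generated file
yes '{"generated": true}' | head -n 4000 > data.json
git add data.json
git commit -qm "add generated data"

# a later "monthly regeneration": small change, ordinary commit
printf '{"generated": false}\n' >> data.json
git add data.json
git commit -qm "regenerate data"

# Git stores full snapshots, but packing delta-compresses them,
# so near-identical versions cost very little on disk.
git gc -q
git count-objects -vH
```

With Git LFS you would additionally run something like `git lfs track "*.json"`, which is what forces every contributor to have LFS installed; for files this small it buys you nothing.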
https://martinfowler.com/articles/dblogic.html this seems really interesting. It’s sometimes hard to follow the full expected behaviour of something, because the business expectations are scattered across several layers. The flipside is that SQL is faster than doing a filter in the application, especially if you’d otherwise be loading all your data into memory first! I feel like Clojure encourages modelling data, then functions on that data, then figuring out how to pass data in. But I don’t see that so much in practice.
I had a similar issue at work recently (based on Mongo, but still the same principles). Writing Clojure code that just fetches all necessary data from Mongo and then does the calculations and aggregations manually was trivial: easy to test, reason about, etc. Writing the same functionality using Mongo’s “pipelines” took a month. Performance is better with Mongo, but not so much better.
Has anyone tried the Vert.x modular framework on Java? They are working on adding Clojure support. You can run a dozen different languages inside Vert.x. It looks incredible.
I have production experience. I think it can be amazing. But in our case it’s creating a lot more problems than it solves (with Java). And we don’t have a real reactive application, since we don’t use Vert.x in the front-end. Also, the Java code is a mess because it’s not so functional. But I did a little PoC of Vert.x with a more functional Java style, and that worked great.
oops.. thanks for threading… I forgot to ask for it.
is the Java code turning into spaghetti because of the async nature?
Just a small piece of code, as a proof of concept of how it could work, using Advent of Code. Still haven’t shown it to the colleagues. They kind of really hate Vert.x and are building a new piece with Spring Boot.
it says that separating the code… using the event bus versus the workers can be a mess?
the async code (event loop) versus the backend threading worker stuff
We are now calling it mud rather than spaghetti. But introducing async and laziness to Java has some complications if not set up properly. I would really like to see pure functions + endpoints on the database + some routing. I think it could work, but it takes some discipline. Spring Boot is much safer/easier in that sense.
if you are open to it… could you put your PoC on GitHub? the more functional style
I guess Vert.x seemed attractive because it’s also easier to set up than Spring
I followed you and starred it. Do I need to setup a maven project to use your repo?
I see the pom.xml file
not sure if it’s easier; it was a ‘new’ thing when they started the shared library, and it looks like they are coming back from it. Code in production runs differently than it does locally, and we’ve had several issues with that, like missing dependencies.
dependency problems with maven?
mvn exec:java -P vertx
should work I guess, it's a small front-end to do the advent of code 2018 questions
ok, let me try it, now
Not really to do with Maven, but with how dependencies are handled by the verticles. Locally you just have 1 JVM, but in production there are multiple, and if the scope is set incorrectly for transitive dependencies, the verticle will not load them.
java.lang.ClassNotFoundException: com.gklijs.adventofcode.MainVerticle
I did three steps… 1. brew install maven 2. cd into your gerard dir… 3. run your mvn command
I was hoping the mvn command started a vert x server on port 8080
It’s multiple things. It is something like an actor model, spinning up multiple JVMs depending on the configuration, but there are also a lot of methods to handle network traffic, like adding headers and validating requests.
the authors say that if the language can run on the jvm, then it can run inside vert.x
it uses jruby to run the ruby code, for example
that’s a very good question. I have no idea how it transpiles JavaScript to the JVM
this article uses npm to set up the JS stuff on Vert.x. But I am not sure if it’s actually using the Node.js runtime…
I guess you can also use it to serve static files. But when they started the project, it did not support redirects, which is why we have an additional layer of nginx. It would be much nicer/easier if we didn’t have that in between.
you couldn’t send a 301/302 redirect header to the browser?
mvn clean install
worked
I think the rationale is that you don’t need to worry about the common concurrency problems (deadlocks, race conditions, etc.) because you are using an actor-style message bus. But Vert.x can scale vertically or horizontally. And it doesn’t force you to use the async event loop for everything; it has a background worker pool for threading.
It’s also modular. It’s not a monolithic framework that takes over your code.
Yes, but the bad thing is that it doesn’t scale automatically. You really need to put in a lot of work to get high performance. Using Kubernetes for scaling is probably easier, and then it does auto-scale.
Does it require a lot of work to set up clustering with Hazelcast, so you can scale across multiple servers?
Not sure, as I don’t have any experience with Hazelcast. But the problem is you need to configure how many of each ‘verticle’ you want, which takes time and depends on the use. These days I think it’s much easier to use Kubernetes and auto-scale.
Ahhh. So there’s no easy built-in auto-scaling tool with Vert.x? One must use a 3rd-party auto-scaling tool? Although I think almost any language would need an auto-scaling tool, right? Or wrong?
Omg Chaim Kirby in the Python slack just told me to try using
git bisect
… it’s saving me hours if not days of work
This seems like the coolest thing in the world. Does anyone have any insight as to why it failed so hard? https://en.wikipedia.org/wiki/WinFS
I think I read somewhere that it was cut from the ill-managed Vista
yeah — I guess because Vista wasn’t coming along, adding a new FS was maybe too ambitious
I guess timing is everything. But the idea itself… Seems like it would redeem the desktop as a platform for general purpose productivity (as opposed to the web apps like G-Suite that are currently winning).
Didn't BeOS do this a zillion years ago?
https://en.wikipedia.org/wiki/Be_File_System - I always thought BeOS had a lot of good ideas
This is cool, thanks! Shows there’s really nothing new under the sun.
computing was done in 1970 I think
still catching up
It really seems like it! I’m a wee youngin’, so all this is news to me. But Clojure and the community are constantly pointing backwards when I ask where the cutting edge is. I mean, it’s 2019 and y’all have got me programming in Lisp on Emacs and watching lectures from when shoulder pads were cool.
I suppose you'd want to be able to also view the db as a file system too, for backwards compatibility with other stuff?
Yeah they seemed to exist in parallel.
Far-flung spitballing, mostly for reaction: People do Datomic+S3 for blob storage. Could you do the same thing locally and implement a whole OS? All non-blob data in userspace (kernel/graphics aside) would be represented as EAVT tuples. You can look up blobs in the file system as you need to. The OS ships with a default ontology/schema, but it can be extended just like Datomic schemas. The OS API for normal user applications boils down to mostly two APIs: query and transact. Thoughts?
isnt this basically the idea of a CoW filesystem
if you write to it, you actually assert a “new file”
only when there is no space left will a filesystem like Btrfs/ZFS start clearing unreferenced files
Well, immutability is an essential ingredient, but I’m more talking about how data is represented, both in memory and on the disk.
like, your email app and word processor, and all the rest have shared access to a datalog store.
right, so your file system is no longer a ‘bunch of blobs’
?! Maybe?! Time to start a reading list…
has some of these ideas in it at least
Windows has COM objects, but the problem is the same problem that the entire stack has - all the interfaces are ad-hoc, so you have no universal traction on them.
You have to write new code for each and every interaction.
Actually, a huge part of what made COM so awful was that there was a huge amount of standard interfaces you were supposed to understand and implement on your components. These interfaces were all complex and hard to work with.
There's certainly more important reasons why us old timers shudder when COM is mentioned, but the idea that all COM coding involved ad-hoc interfaces (any more than any API has ad-hoc interfaces, BTW) totally misses the point.
I suspect it would be possible to patch things up in a way that smoothed over remote connections, such that apps had no concern whether queries/transactions were resolved locally or against a remote DB somewhere.
a la Pathom.
Plenty of technology has attempted to make what you describe a reality and all have failed, for one important reason. The wire is not reliable or latency free. You can never treat it the same as local calls no matter what RPC mechanism you build around it. For this reason, it's better to not try and make remote calls seem to be the same as local calls.
you are always concerned with that difference
most of distributed computing is about the set of tradeoffs you can make in between those points
I’ve done little distributed computing, so perhaps I can’t understand. However, network drives seem to work really well, such that garden-variety apps can remain ignorant of their remoteness. Wouldn’t the same thing be possible for more structured data?
there is a famous list of assumptions about distributed computing that have turned out to be false many times https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing
the history of computing is littered with the corpses of local/remote abstractions
which is not to say they aren't useful
but you should have a skeptical eye towards magical claims :)
Rich has a talk, something about the system? maybe “Language of the System”, that talks a little bit about the trade-offs you make when you make things network transparent (like Erlang)
yes, language of the system
of which a key suggestion is to embrace queues as a tool for connections