#off-topic
2024-05-16
vonadz11:05:30

Anyone done any projects related to generating JavaScript code from user interfaces? We're building an interface for non-technical people that is supposed to produce JavaScript files (with valid code in them). The high-level overview: we have a bunch of JavaScript files in a specific format that generate string outputs. We want to build an interface for non-tech people to use, so that they can type in text, do logical checks, and add data points. I was looking at https://github.com/acornjs/acorn as a possible solution, but would love insight from anyone who has experience doing something similar. This seems like it would be a great project for Clojure(Script), but unfortunately we can't use it in this case.

jpmonettas12:05:46

I worked many years ago (like 10) on a Clojure project to define UIs in Photoshop files, with a predefined language you would use in group and layer names to add semantics to them; this "compiler" then generated Java (Android) and Objective-C (iOS) UI code from it. I got it to work OK-ish, but it never got any serious use.

vonadz13:05:43

Ah that's pretty interesting. Seems a lot more difficult than what we're trying to do though.

jpmonettas13:05:04

> Seems a lot more difficult than what we're trying to do though.
I'm pretty sure it is the other way around hehe. I'm not sure about the ASTs returned by acorn, I've never used it, but JS has many features (unless you constrain yourselves to only a subset of it in your JavaScript files). IIUC you then need to render a UI from those ASTs, allow users to tweak that tree, and then emit JS back from it? I'm not sure acorn allows that, so you'd need to write the emitter as well?

jpmonettas13:05:31

If you just need to generate JS from a UI, then maybe you can generate Clojure instead, which is much easier to build because of s-expressions, and then call the ClojureScript compiler to generate the JS for you?
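
A minimal sketch of that suggestion, assuming the ClojureScript compiler is on the JVM classpath; this is the well-known REPL-level demo of the analyzer/compiler API, not a production pipeline:

(require '[cljs.env :as env]
         '[cljs.analyzer.api :as ana]
         '[cljs.compiler.api :as comp])

;; Analyze a Clojure form and emit the equivalent JavaScript source.
;; env/ensure supplies a compiler environment if none is bound yet.
(env/ensure
  (comp/emit
    (ana/analyze (ana/empty-env)
                 '(defn add [a b] (+ a b)))))
;; => a string of JavaScript for the form above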

ericstewart13:05:49

Not certain it fits, but instead of direct ClojureScript, you might take a look at #squint if the goal is to produce JavaScript. (https://github.com/squint-cljs/squint)

vonadz14:05:42

@U0739PUFQ we're only using arrays, logical operators, variables, and string templates. I think acorn has its own tree-to-code export feature. I'd love to use Clojure(Script), but the dev working on the project has no experience with it, and that's a bit of a plunge for them to take. @U068R74HE thanks for the suggestion! We're trying to stay purely in the JS ecosystem, though. No one other than me has Clojure(Script) experience. I was just asking here because Clojure devs are usually on the more experienced side and I thought maybe someone had done this before.

šŸ‘ 2
Noah Bogart13:05:08

For those who use the version scheme MAJOR.MINOR.COMMITS, what is the purpose of tracking the commits? How is it better than a PATCH version?

dpsutton13:05:19

I think it's an unambiguous, deterministic differentiator in the face of no other good way to differentiate releases. Using an auto-incrementing number requires knowing how many releases came before it, so it's an input to your code from a build-and-release system. Assuming you just tag a release, the number of commits is discernible from the repo itself, so the code can know its version without getting feedback from the release system.

šŸ‘ 1
Alex Miller (Clojure team)14:05:04

The first two parts are semantic; the last ties you concretely to a commit

šŸ‘ 1
Ben Sless14:05:41

An annoying thing about GitLab and commit counts: GitLab only clones to a pretty shallow depth by default

1
lread15:05:10

It is all subjective, with pros and cons... I was a fan of the commit count, but now prefer the auto-incrementing release count because:
• it is predictable, and I can use it before a release, like for a meta :added or whatever
• it is less to type and easier for users to remember and visually compare (at least for my little libs with not so many releases!)
I did compare various strategies when thinking about this for rewrite-clj: https://cljdoc.org/d/rewrite-clj/rewrite-clj/1.1.47/doc/design/merging-rewrite-clj-and-rewrite-cljs#_library_version_scheme

šŸ‘ 3
seancorfield16:05:47

What I like about the commit count is that it provides some indication of how many changes were made between releases, especially when major/minor haven't changed. The downside is that it's definitely harder to make releases and update the documentation around a release.

Ben Sless14:05:02

Today I realized I don't like merge commits

😨 1
➕ 3
šŸ¤ 3
jpmonettas14:05:08

I have always been in the rebasing, linear-history camp on my projects

ā¤ļø 1
Ben Sless14:05:21

I always rebase, but by default popular forges insert merge commits on MR/PR merges. I'm beginning to think this is a very bad default.

Noah Bogart14:05:08

Yeah, merge commits are not good

Noah Bogart14:05:10

I discovered my dislike of them when I had an issue with a merge conflict, fixed it but accidentally introduced a new bug, and then couldn't find the source commit of the bug because it was "hiding" in a merge commit that neither GitHub nor the git CLI showed by default (merges were treated as empty)

vemv14:05:27

Merge commits give us a simple anchor for reverting sets of commits

oyakushev14:05:17

Merge commits are fine, but only if the history is linear and you use --ff-only merges.

✅ 1
☝️ 2
Ben Sless14:05:55

@U45T93RA6 simple or easy? šŸ˜‰

oyakushev14:05:02

I prefer having a merge commit for a PR/MR with several meaningful commits. Obviously, the merge commit itself must be empty.

➕ 1
☝️ 1
vemv14:05:55

It's simple; actually not so easy when it comes to reverting, but I like it as a simple, universally understood fact, i.e. this PR was merged at this point (which is also helpful for jumping from the CLI to the PR in the browser)

vemv14:05:16

anyway, what was today's discovery? Curious about that part

Ben Sless14:05:45

Just putting words to a feeling, having seen a branch which merged master into itself to catch up. (I don't believe you always need to catch up, either.) I guess I could live with empty merge commits.

oyakushev14:05:04

> Just putting words to a feeling, having seen a branch which merged master into itself to catch up.
Merge commits may be fine in the main branch, but never in feature branches. Squash and rebase are the only acceptable way there.

➕ 1
☝️ 1
vemv14:05:59

The sweet spot I've seen many teams converge to is: regularly rebase $feature_branch onto main, then merge $feature_branch into master

ā¤ļø 1
oyakushev14:05:48

May I present to you The Braid:

šŸ˜‚ 6
dpsutton14:05:57

a classic

11
slipset15:05:48

@U45T93RA6 you might be disgusted, but we tend to squash our commits into "revertable units" and then rebase or merge with fast forward or whatever. Gives us easy reverts and a linear history.

➕ 3
oyakushev15:05:14

Squash is a godsend if the developers don't prettify their local history before merging. But I personally dislike it when several meaningful commits from a single PR that do different things get squashed together.

➕ 1
☝️ 1
slipset15:05:32

Agreed. Commits to master should be meaningful units of work. No more, no less. The problem is that "meaningful unit of work" requires thinking from the devs…

šŸ‘ 3
vemv15:05:47

Yep, normally I don't mind if others prefer to squash, but forcing it on everyone seems to assume that everyone is lazy šŸ™ƒ

slipset15:05:08

The only thing we encourage (enforce :) is a master with a linear history of commits that are meaningful units of work. How the devs arrive at that, we don't care.

šŸ’Æ 1
mauricio.szabo17:05:24

Well, I actually disagree with most opinions here, but I know I lost this battle long ago :rolling_on_the_floor_laughing:. I am on the team that hates rebases with the strength of a thousand suns

Ben Sless17:05:55

Stand up for what you believe in!

oyakushev17:05:26

Would you say that you rebased your opinion and merged with the common voice?

šŸ˜‚ 1
mauricio.szabo17:05:20

Also, the "Guitar example" for me is just how git works. I really don't understand why it bothers people so much, but work happens in parallel and I feel it's amazing that git captures that, instead of telling a lie like "hey, this change was applied over this old version", which is actually not what happened (and usually breaks bisect too)

mauricio.szabo17:05:23

@U06PNK4HG actually, no; that's why my opinion and the common voice always end in conflict :rolling_on_the_floor_laughing:

oyakushev17:05:14

How can it break bisect and how do you bisect across parallel branches?

oyakushev17:05:23

Actually, I think I understand what you mean.

mauricio.szabo17:05:35

Finally - some workflows are impossible with rebase

oyakushev17:05:58

I do much more code reading than bisecting; that's why I'm OK with the tradeoff.

mauricio.szabo17:05:30

I also do more code reading, but I never actually review individual commits, and honestly, I still don't get when this started to be a big deal

mauricio.szabo17:05:18

I also like how git blame can tell me "hey, this code here was added with a weird commit message so it might not be as good, because that's probably what worked at the time", instead of a pretty but meaningless commit message like "added connection feature"

oyakushev17:05:16

I don't think I understand your last point

mauricio.szabo17:05:38

Git hygiene is an interesting concept, but I prefer the harsh, ugly, and weird truth of the world: that sometimes a commit is just "CI broke again, not our fault", so I know that this weird line that cleans the cache is there just because CIs are weird :rolling_on_the_floor_laughing:

mauricio.szabo17:05:06

Finally, my last point is how Pulsar (our Atom fork) uses git. Unfortunately, the old code we inherited was not in good shape, so we usually have long-lived branches. An interesting example is the newest Electron branch.

oyakushev17:05:55

You can have both: a pretty commit message that broke CI again. Git hygiene to me is the same as not including all typos, undos, and non-compiling code in your commits. The fact that it was once in this state and you changed it is not important to future you (and is distracting).

mauricio.szabo17:05:07

To test stuff, we usually check if it works on master and then merge it into the Electron branch; sometimes these things conflict with other ongoing features, so there are short-lived branches that are basically "newest Electron + tree-sitter enhancements + SQLite bump + fixes to recompilation of packages" so we can test things. Also, because the Electron branch is long-lived, we merge our master into it from time to time, and the only thing that conflicts is the lockfile (because older conflicts were already resolved)

oyakushev17:05:43

What you describe sounds like parallel implementations. I agree that rebases wouldn't work there, but it's a different use case for branches. Maybe it is the "originally intended" use case, but not the one we've appropriated branches for nowadays.

mauricio.szabo17:05:52

> not including all typos, undos, and non-compiling code in your commits
Yep, that's what I don't like. Maybe non-compiling code is the only one I agree with, but if I have a typo and fixed it, and then something broke, I prefer the first breaking commit to be the "fix typo" one. And the "undo" is "the path not taken"; I kinda like the possibility of going back in time and checking what was different

mauricio.szabo17:05:08

I still feel that we need a "better git". Maybe treat every "merge to default branch" as a "commit", and then all the intermediate stuff is hidden but you can find it if you want? Also, get rid of the default merge message and force users to always write a commit message for merges, and let us bring our own diff/patch algorithm so that Clojure doesn't conflict on the "Pringles lines" and such.

mauricio.szabo17:05:29

I can't believe that in 2024 we still use a tool that reverts changes with git checkout, changes branches with git checkout, creates branches with git checkout, deletes branches and tags with git push, and has weird names like "git blame" (I really don't like the name; "git explain" or "git last-changes" would be way better). Take "Pull Requests" and "Merge Requests": they both represent the same thing, and the name is meaningful in both cases. "Push request" could also work. We're discussing "merge" and "rebase", but they are both "merges" in a way...

oyakushev18:05:17

You suddenly jumped from fundamental issues (which may or may not need to be addressed; different use cases require different solutions) to minor things like command names. If you don't like the names, just use a porcelain that offers better ones ;)

dpsutton18:05:36

like git since version 2.23.0 šŸ™‚
> * Two new commands "git switch" and "git restore" are introduced to split "checking out a branch to work on advancing its history" and "checking out paths out of the index and/or a tree-ish to work on advancing the current history" out of the single "git checkout" command.

šŸ¤ 1
mauricio.szabo18:05:07

Actually, kinda. I meant that we should be researching something different, more "modern", that can accommodate use cases that were not predicted by the original git command line, and also fix some of the naming issues

oyakushev18:05:52

I don't know what checkout is, but I know what b b and k are šŸ˜„

oyakushev18:05:45

"Blame" is a wonderful name by the way. Its inherent non-git meaning is universally understood (even if it has ironic connotation) and at the same time "artificial" in the sense how @U066UJ2KE defines such names in Elements of Clojure. It is easy to google and everybody knows what are you talking about when you say "blame" in git context.

mauricio.szabo18:05:06

An example: while I do like the merge workflow... a "merge commit" is not exactly like a normal commit. Supposedly it just captures all the work on its parent commits, but you can't actually revert a merge commit, you can't cherry-pick a merge commit, you can't do a lot of things that you can with "normal" ones. Sure, you can create a "patch" and then revert or apply that "patch", but now you're "out" of git. Also, we do have git log --first-parent for the illusion of a linear history, but there's no option for it in blame or other commands

Daniel Gerson22:05:37

Squash fan here šŸ‘Œ You can (dare I say should) always attempt to deliver a feature as a series of smaller squashed commits (rather than just one fat one). Makes history simple, and that is a win in the long run. :face_in_clouds:

mauricio.szabo23:05:16

Again, it depends on the case. In Pulsar, that would not work because we would hit conflicts over and over with our eternally long-lived branches :laughcry:

😰 1
Daniel Gerson23:05:33

Sorry if I didn't get it, but what is the constraint that requires a separate Electron branch?

mauricio.szabo23:05:10

We still have bugs, but we need to test the newer Electron against the most popular plug-ins and the newer features. Electron (and Node) is a huge mess regarding backwards compatibility, and there are very weird cases, like Pulsar working correctly on every system except macOS on ARM, for example :facepalm:

šŸ‘ 1
Daniel Gerson23:05:17

And the reason you are holding onto the older Electron is...? Or does most of Pulsar not work on the new Electron and you just have some test suites to prevent regressions? (Sorry, never coded for Electron) Edit: I think I get it. "We still have bugs" means for the newer version.

jjttjj15:05:51

What are people's thoughts on ZFS vs Btrfs? I'm setting up a new development PC, using Arch Linux. I'm looking to try a fancier FS for no particular reason besides learning a new thing. I'm just going to have a single NVMe SSD, so I don't need RAID. Googling a bit, I'm learning that this might be another tech holy war. But I'm curious about Clojurians' general take on it.

jpmonettas15:05:31

FWIW, here is GPT-4o's take on this: https://chat.openai.com/share/7e1d1ba5-efa2-4aa7-87ae-975d69e70239 At the end:

> While ZFS offers robust features, the additional complexity and resource requirements make Btrfs a more practical choice for most developer laptops.

➕ 1
ericstewart15:05:32

Have not used Btrfs, but I have used ZFS pretty heavily over the past 3 years and it is very solid for a lot of use cases. I will definitely use it again, but admittedly I need to dive into Btrfs more to compare.

oyakushev15:05:03

I'm running BTRFS on my homelab. I don't use any of the fancy features like snapshotting, snapshot checkouts, etc. I use RAID0 (called SINGLE in btrfs) and RAID1, and LUKS2 encryption. This is my storage setup: https://i.redd.it/cx7tebnepr4b1.png

šŸ†’ 2
dharrigan15:05:21

I use ZFS; been using it for about 11 years

dharrigan15:05:46

I have 2x24TB ZFS raidz2 clusters

oyakushev15:05:51

From a completely subjective perspective, BTRFS is in a weird place. It's in the kernel but it seems that it doesn't receive enough love for ongoing development. RAID5/6 is completely broken. There are new darling FSes on the block. However, I've heard only good things from people who use ZFS.

dharrigan15:05:52

I've never had a failure.

dharrigan15:05:25

And when zfs does its regular monthly scrubs, I love it when it says it has self-repaired files that may have been bit-corrupted šŸ˜‰

dharrigan15:05:50

I do love me some zfs šŸ™‚

jjttjj15:05:10

Really useful info so far, thanks all!

dharrigan15:05:48

I'm actually going to put my home directory on ZFS, so that before I do something I think I may want to roll back, I'll just create a snapshot of my home, then do a rollback if required.

dharrigan15:05:06

A summer project šŸ™‚

jjttjj15:05:14

I think the Linux kernel part was the strongest point I'd seen in favor of btrfs so far, so it's good to know that's not perfect

dharrigan15:05:57

For my cluster, it runs Ubuntu (I will swap to FreeBSD in due course; I actually started with FreeBSD)

oyakushev15:05:13

BTRFS has some advantages over ZFS for multi-disk setups. ZFS requires all disks to be the same size; BTRFS is flexible in that regard. But this is irrelevant for your use case.

dharrigan15:05:17

for home, I run Arch, so will just roll with the archzfs repo

p-himik15:05:20

This paper from 2010 outlines data corruption on ZFS when there's memory corruption. So at least up to ~2010, ZFS wasn't infallible. But probably still better than most other FSs. https://research.cs.wisc.edu/adsl/Publications/zfs-corruption-fast10.pdf No clue whether anything has changed since 2010 in this regard. Use ECC. :)

p-himik15:05:41

And if anyone's up for a bit of a deep dive that doesn't require specialized knowledge: https://danluu.com/file-consistency/

thyth17:05:34

Maybe obscure, but if you use a lot of snapshots, btrfs performance degrades badly and quickly after a few hundred (~300). The btrfs docs do mention this if you go digging, but the suggested mitigation was an unhelpful "don't do that". I switched a snapshot-heavy project (10^4+ snapshots) to ZFS and have yet to see any similar degradation.

ericstewart17:05:08

I've also managed ZFS pools of up to 100 TB across Linux and FreeBSD with hundreds and hundreds of snapshots without problems. But that brings up something else to consider. If you only do an occasional manual snapshot you won't need anything else, but if you snapshot automatically, want to thin those out as they age, replicate to other systems, etc. (which are things you'd typically need in server use cases), the ZFS ecosystem has some great packages that people have built, like syncoid/sanoid and zrepl, which can help significantly if you don't want to script all that yourself. Not sure about such things in the btrfs world. Even in personal use cases I can see someone wanting automatic snapshotting (with automatic pruning) and potentially replication to somewhere else for backup/redundancy. In the latter case I have heard of people running encrypted ZFS on their local system, replicating offsite (just a simple ssh session required), and keeping the encryption key only local, so that ZFS manages all those offsite snapshots but access is protected. Certainly a lot of possibilities.

dharrigan20:05:40

IMHO, ZFS rocks šŸ™‚

šŸ’Æ 1
Jason Bullers23:05:02

Modern Java looking snazzy:

// JDK 23 preview: an implicit class; print/readln are static methods
// auto-imported from a new IO class (see below)
void main() {
    String name = readln("Please enter your name: ");
    print("Pleased to meet you, ");
    println(name);
}

Jason Bullers23:05:44

ā˜ļø:skin-tone-2: JDK 23 with preview features enabled

bibiki23:05:28

are those read and print functions static imports or are they available without importing them?

Jason Bullers23:05:17

Both. They're static methods from a new class that are implicitly statically imported when you use an implicit class like this

šŸ‘ 1
bibiki23:05:23

"Every implicitly declared class automatically imports these static methods, as if the declaration" from the link you sent

Jason Bullers23:05:49

So you can use them yourself in larger programs too

bibiki00:05:28

yeah, i see

Alex Miller (Clojure team)01:05:06

And I think you can run a single-file .java source now without compiling it

🄲 4
Jason Bullers01:05:43

Multiple, actually. It started with what you said, but now it'll also find related files for you

Jason Bullers01:05:53

The on-ramp is definitely smoother

Lennart Buit07:05:24

you can probably also use the var keyword for the name variable:

void main() {
    var name = readln("Please enter your name: ");
    print("Pleased to meet you, ");
    println(name);
}

henrik08:05:17

I read somewhere that the footprint of Java "hello world" programs was some sort of canary driving these changes

phill09:05:05

People who write about Java should be strongly-typed, so their typewriter should segfault if they put a canary in that position. Canaries are boolean (they die for lack of oxygen, or not). A more likely analogy is the truffle-pig.

Noah Bogart13:05:21

so when can we expect Clojure to consume .java files? šŸ‘€

souenzzo13:05:03

@UEENNMX0T I don't think so. Clojure has a special classloader and other weird things to make the classpath dynamic in the JVM. I think it is doable, but the Clojure team will only do it once JDK 23 is old

Jason Bullers13:05:01

TL;DR it's about stripping away as many unnecessary concepts as possible (static, classes, tooling) and allowing someone to get started very minimally, gradually introducing new concepts if and when necessary

hiredman17:05:59

all you need to directly consume .java files is a classloader that compiles them on demand; I am sure there is one on GitHub somewhere
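
A minimal sketch of that idea from Clojure, using the JDK's in-process compiler; illustrative only: a real on-demand classloader would do this inside findClass, and this assumes the working directory is on the classpath so the compiled class can be found.

(import '(javax.tools ToolProvider))

;; Compile a .java file at runtime with the in-process javac (requires
;; a JDK, not a JRE), then load the resulting class by name.
(defn compile-and-load [java-file class-name]
  (let [javac (ToolProvider/getSystemJavaCompiler)]
    (.run javac nil nil nil (into-array String [java-file]))
    (Class/forName class-name)))

;; (compile-and-load "Hello.java" "Hello")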

hiredman17:05:38

apparently there was a series of JEPs for that: https://openjdk.org/jeps/330