This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-05-04
Channels
- # announcements (1)
- # architecture (7)
- # beginners (44)
- # biff (11)
- # calva (15)
- # cider (5)
- # clerk (9)
- # clj-kondo (20)
- # clj-on-windows (19)
- # clj-yaml (2)
- # cljs-dev (39)
- # clojure (52)
- # clojure-czech (2)
- # clojure-dev (11)
- # clojure-europe (28)
- # clojure-hamburg (10)
- # clojure-hungary (3)
- # clojure-nl (1)
- # clojure-norway (59)
- # clojure-uk (5)
- # clojured (2)
- # clojurescript (33)
- # conjure (2)
- # datahike (1)
- # datomic (5)
- # defnpodcast (5)
- # emacs (18)
- # figwheel (2)
- # funcool (6)
- # graphql (1)
- # hyperfiddle (11)
- # jobs (3)
- # joyride (13)
- # malli (6)
- # music (4)
- # off-topic (45)
- # polylith (11)
- # practicalli (3)
- # rdf (3)
- # releases (1)
- # scittle (8)
- # shadow-cljs (13)
- # specter (2)
- # squint (8)
- # testing (6)
- # tools-deps (21)
- # xtdb (2)
One of Clojure's big selling points, at least to me, has always been the concise/expressive syntax and incentivizing play and experimentation at the repl (helping humans achieve their objectives). It's starting to look like in the near future, our objective will change to getting a machine to produce verifiable code based on certain requirements, and we'll need a concise language to communicate/verify requirements. Anyone have any thoughts on this, and Clojure's future?
And the thread that got it started: https://clojurians.slack.com/archives/C03RZGPG3/p1682271926943299
Thanks, was not aware of that!
Yes, I think Clojure being concise and high level makes it a great language to use with an AI assistant [https://clojurians.slack.com/archives/C03RZGPG3/p1680017409869149?thread_ts=1680011622.707439&cid=C03RZGPG3]. An LLM spitting out thousands of lines of boilerplate code in a procedural language will not be ideal for the AI or the human. Clojure is also data - which makes it great for AI tools to parse, output, and manipulate.
The context and output size may become less important over time, and people will probably want to transpile into the most performant option for the environment they're targeting. I think the important question is what language will emerge that best allows for a human to specify, visualize and validate requirements. The LLM will probably then write that into whatever language you please for execution.
I think that's a step away though, especially since non-trivial translation of programs between programming languages can introduce changes and bugs. I think initially the programmers and AI will be collaborating on the code, and the humans will want to be able to read and fix the output from the AI. When there are bugs in the code produced by the AI, humans will want to be able to read the code and debug it.
Definitely a step away, yes. A feature of this language I'm envisioning would probably be to minimize bugs that can occur in the output (little to no ambiguity). The target environments may very well then get swept away with new ones that are written via the same model. I agree that Clojure lends itself well to where we're at currently, if not for the lack of training data with respect to other more mainstream languages.
The way things are progressing though, the step away could come quicker than expected.
Interestingly, LLMs learn reusable skills. So an AI trained on other languages (like JavaScript/Python/Haskell/Scala etc.) can then be fine-tuned on Clojure and reuse elements of what it has previously learned. Lots of programming languages (e.g. BASIC) tried to be user friendly & unambiguous but have not succeeded. The description "A button that blinks and has a cool hover-over effect" is ambiguous. Making it unambiguous could take a lot of text. An example (strawman) workflow for modern development might look like this: CEO or Client -> [email with requirement] -> Product/Project manager -> [email] -> Tech lead -> [Github Issues] -> Junior Developers -> [Programming language] -> Computer. Each step is not just translating the previous item, but also enriching it with additional information, context and assumptions. The CEO won't want to spell out all possible aspects unambiguously in their original email, but if they don't then they still need someone to write it (the developer) - which is basically the same as coding, in which case AI hasn't helped.
I'm not necessarily advocating for a language that is more like plain English or saying that developers will quickly become irrelevant. There will still be a need to translate requirements into code. I just think the criteria by which that coding language will be judged will change significantly. It may well look like Clojure, but I think we need to examine it from another perspective.
Don't really have any ideas as yet in this regard, just my thoughts.
Now, we're targeting ourselves and other developers with our code. That will still be the case I suppose, but we'll have an additional AI target to think about.
Yeah - I guess what I am getting at is that an unambiguous description of a program is a programming language. Despite many efforts (e.g. NoCode and visual programming etc.), programming languages are still the best way to represent a program. I do agree that the language can change a bit to become more declarative and high level etc. It may be that all developers converge on just one new programming language that works best for AI. Hallucinations are basically a fundamental feature of current AI - and AI can certainly produce bugs even if it gets very good. So I think we will want to see and read the code the AI produces for a while.
The high-level description of a change to a program is intentionally lossy/ambiguous, so this can probably remain in pictures, verbal and written human language. So the AI should just learn to understand these inputs and produce code that developers can look at/fix, rather than adding in another step (the unambiguous requirement language for the AI to follow).
Perhaps testing will become even more central. You define and verify what you want to happen in tests, which could also include performance criteria. Quickly writing tests is quite a good use for LLMs at the moment, but flip that on its head and make testing central to the language.
The prompt would be to design a program that fulfils all the tests. What would it look like if tests included benchmark criteria and environmental context?
Composability will also be a big thing, I think. Building larger systems from smaller ones.
I've found LLMs are very good at writing glue code and automated infrastructure configurations.
Hey, I imagine a moment when the LLM can learn from its mistakes with a good set of tests and a way to execute the code, given a set of constraints such as running in a sandbox environment with restrictions on reaching outside APIs, which should be mocked.
So the LLM produces the test suite first, and after that the code to satisfy the suite, trying until it gets it working properly.
We have been doing this in some way already. I've heard of people who state a problem to ChatGPT and ask it to produce test cases, something similar to Gherkin notation; given that set of statements it produces the test suite, and then, after some adjustments, they ask it to produce the code. The human in this process functions as a kind of tutor.
@U0DHDNZR9 Some of your comments here remind me of the work I did back in the '80s with MALPAS -- https://en.wikipedia.org/wiki/MALPAS_Software_Static_Analysis_Toolset -- where other languages could be compiled down to MALPAS and then various things could be "proved" about the MALPAS code. My team produced the C-to-MALPAS transpiler -- which took three passes over the entire program source so that all globals could be lifted into the main program and passed throughout the entire call tree. Given how confidently wrong ChatGPT etc can be with their answers -- and how prone they are to make up libraries and functions out of whole cloth -- I think we're a long way from fully automated code generation, and that most of the benefit of these tools will be to help programmers reduce boilerplate in more verbose languages. That will still produce a lot of code so while the authoring burden might be substantially reduced, there will still be a big maintenance burden, possibly made worse by sub-optimal code generation. The arms race then will be for AI coding assistants that can refactor existing code (correctly!) and add new features across entire codebases... and we're a very long way from that right now. The "impressive" demos tend to focus on spot solutions rather than whole codebase operations, which is what a lot of our jobs tend to be...
That's sort of where my mind was leading: design by contract. The hallucination problem is less severe with increasing input size (if there is a strict set of building blocks passed in).
Guys, I'm feeling stuck. I need some music. What kind of music represents the essence of LISP/Clojure?
I've replied here (and invited you to that channel): https://clojurians.slack.com/archives/C0JLT0UKD/p1683199671277479
To me it's the intro music from the SICP lecture series https://www.youtube.com/watch?v=2Op3QLzMgSY
But @UK0810AQ2 that is incredible and I've never seen that before!
It should have a REPL, right? https://www.youtube.com/watch?v=G1m0aX9Lpts
This how I feel when writing Clojure (or writing about writing Clojure) https://youtu.be/eVM1nUmDHHc
In case folks didn't follow Martynas's link, there's a #C0JLT0UKD channel that could definitely benefit from more traffic and more suggestions!
https://www.youtube.com/watch?v=f7JAioASm8c&list=RDMM&index=1&pp=8AUB
https://arxiv.org/pdf/2304.15004.pdf
> Here, we present an alternative explanation for [LLMs'] emergent abilities: that for a particular task and model family, when analyzing fixed model outputs, one can choose a metric which leads to the inference of an emergent ability or another metric which does not. Thus, our alternative suggests that existing claims of emergent abilities are creations of the researcher’s analyses, not fundamental changes in model behavior on specific tasks with scale.
I've thought for a while that the recurring stories about LLMs' "emergent abilities" seem mostly like post-hoc rationalization of impressive-looking anecdotes - this paper provides specific evidence to support that line of thinking and counter the hype.
Is it me or is this an unfortunately ambiguous construction: "To create an empty keystore using the above `load` method, pass `null` as the `InputStream` argument."? I initially interpreted this as "actually pass `null`/`nil` to the constructor of the `InputStream`", which naturally throws an NPE. From https://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html
Seems unambiguous to me. The topic is the `load` method, so the construction of the `InputStream` is not under consideration.
Even if it were, I think the "pass `null` as the `InputStream` argument" part would have to be rewritten as "argument to `InputStream`" or "`InputStream`'s argument".
After reading "the `InputStream` argument" I immediately thought "`null` should be passed as the value of the argument with the type of `InputStream`". Pretty sure there are quite a few other places that also use similar language, because overall it sounds quite familiar.
yeah, it's probably my relative lack of familiarity with reading Javadocs. Once I realized my error, it seemed quite clear in hindsight. 😅
Hmm, although my last sentence might be a hallucination - can only find "It can be specified as the serverPrincipal argument" in the OpenJDK sources, which is different.
I think you're right about the distinction between "`InputStream` argument" and "argument to `InputStream`", though. I had the same thought as soon as I encountered the error.
I think part of the confusion comes from the fact that it's using the name of the class (`InputStream`) rather than the name of the argument https://docs.oracle.com/javase/8/docs/api/java/security/KeyStore.html#load-java.io.InputStream-char:A- (`stream`). So, it would seem more intuitive to me too if it wrote "as the stream argument", "as the first argument" or "instead of an InputStream object" or something.
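For what it's worth, the unambiguous reading can be shown in a few lines of Java: a minimal sketch (class name `EmptyKeyStoreDemo` is made up for illustration) using the standard `java.security.KeyStore` API, where the literal `null` is passed in the position of the `stream` parameter to initialize an empty keystore.

```java
import java.security.KeyStore;

public class EmptyKeyStoreDemo {
    public static void main(String[] args) throws Exception {
        // Obtain a KeyStore instance of the JVM's default type.
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        // Pass null as the InputStream argument (not a stream wrapping null!)
        // to initialize a new, empty keystore; the password may also be null.
        ks.load(null, null);
        System.out.println(ks.size()); // an empty keystore has 0 entries
    }
}
```

Calling `new InputStream(...)` with `null` (the misreading) isn't even possible here, since `InputStream` is abstract and `load` takes the stream itself, which is part of why the Javadoc phrasing trips people up.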
Not having followed your link, having zero context, and basing my interpretation purely on the exact text of the quoted instruction, I would read this as:
"Use `null` as an argument to `load` in the position which normally accepts an `InputStream`."
Since that seems to align perfectly with p-himik's contextualized interpretation, I would have to go with, "No, it is not ambiguous."