2023-05-24
https://arxiv.org/pdf/2305.10601.pdf is a paper from Princeton University and DeepMind on helping GPT-4 with its reasoning process by implementing a “Tree of Thoughts”: "Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, “Tree of Thoughts” (ToT), which generalizes over the popular “Chain of Thought” approach to prompting language models, and enables exploration over coherent units of text (“thoughts”) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices."
I'm very tempted to try to implement this kind of search algorithm using Joyride. It'd be cool to write a set of tests and then, as described in the paper, have GPT-4 propose several possible solutions, see how many tests each one passes, and do a depth-first search down the tree until it finds a solution that passes all the tests.
Awesome. Would be a super cool experiment! With the REPL it could get extra powerful. We could have ChatGPT provide some sample runs of a function (it often does this without being asked) in a predefined format, and Joyride could run the samples and compare the results with what ChatGPT guessed. It quite often understands the requirements correctly, but fails in its first attempts at implementing them. Feeding the actual results from the REPL back could help ChatGPT correct the implementation.
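A minimal Clojure sketch of the loop discussed above: generate candidate implementations, score each by how many tests pass, and do a depth-first search that refines the most promising branch first. The helpers `ask-for-candidates` and `count-passing` are hypothetical placeholders, not real Joyride or OpenAI APIs; they stand in for "prompt the model" and "run the candidate against the tests (e.g. via the REPL)".

```clojure
(defn ask-for-candidates
  "Hypothetical placeholder: ask the model for candidate implementations of
   `problem`, optionally refining a previous `candidate` given `feedback`
   (here, its pass count). Returns a seq of candidate source strings."
  [problem candidate feedback]
  [])

(defn count-passing
  "Hypothetical placeholder: evaluate `candidate` against `tests`,
   e.g. by sending it to the REPL, and return the number of passing tests."
  [candidate tests]
  0)

(defn solve
  "Depth-first search over candidate implementations.
   Returns the first candidate that passes every test, or nil."
  [problem tests max-depth]
  (letfn [(search [candidate depth]
            (let [passed (count-passing candidate tests)]
              (cond
                (= passed (count tests)) candidate   ; all tests pass: done
                (zero? depth) nil                    ; depth budget exhausted
                :else
                ;; Feed the actual results back to the model and explore
                ;; the highest-scoring refinements first.
                (->> (ask-for-candidates problem candidate passed)
                     (sort-by #(count-passing % tests) >)
                     (some #(search % (dec depth)))))))]
    (some #(search % max-depth)
          (ask-for-candidates problem nil nil))))
```

In this sketch the pass count plays the role of the paper's self-evaluation signal: it decides when to stop, orders which branches to explore first, and is the "actual results from the REPL" that get fed back so the model can correct its implementation.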
Where are the logs for Calva Jack-In saved? It’s suddenly started silently failing to do anything at all, and reverting to 2.0.358 or 2.0.357 doesn’t help. There’s nothing relevant in either of the Output pane subsets (Calva says, Calva Connection Log) or in the clojure-lsp log. I’ve deleted the REPL output file, but that doesn’t solve the problem either.
Yes, in that there is no jack-in terminal created. It doesn’t get that far.
Yes. That’s actually what triggered it initially, although I don’t know all the details; I had a pending Calva update and a huge REPL output that was making it unbearably slow.
Restarting again does nothing different.
I double-checked and it doesn’t matter whether I use the hotkey or the command palette to jack-in.
See if it works with a fresh install of VS Code Insiders. Your VS Code could have some bad state.
Yeah, it works fine that way.
How do I figure out where the bad state is?
Looks like that worked. Very strange; before I restarted, I was able to mess VSC Insiders up too by exporting the profile.
You exported something from regular VS Code and imported it into Insiders, and that messed Insiders up? That’s a clue to where the bad state lives, I guess.
Yeah, but after restart both are now working.