#calva
2023-05-24
James Pratt12:05:47

https://arxiv.org/pdf/2305.10601.pdf on helping GPT-4 with its reasoning process by implementing a "Tree of Thoughts", from DeepMind and Princeton University. "Language models are increasingly being deployed for general problem solving across a wide range of tasks, but are still confined to token-level, left-to-right decision-making processes during inference. This means they can fall short in tasks that require exploration, strategic lookahead, or where initial decisions play a pivotal role. To surmount these challenges, we introduce a new framework for language model inference, “Tree of Thoughts” (ToT), which generalizes over the popular “Chain of Thought” approach to prompting language models, and enables exploration over coherent units of text (“thoughts”) that serve as intermediate steps toward problem solving. ToT allows LMs to perform deliberate decision making by considering multiple different reasoning paths and self-evaluating choices to decide the next course of action, as well as looking ahead or backtracking when necessary to make global choices."

James Pratt12:05:03

I'm very tempted to try implementing this kind of search algorithm using Joyride. It'd be cool to write a set of tests and then, as described in the paper, have GPT-4 think about several possible solutions, see how many tests pass, and do a depth-first search down the tree until it finds a solution that passes all the tests.
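A minimal sketch of that search loop in Clojure. `propose-candidates` is a hypothetical stand-in for the GPT-4 call (here it just returns plain functions), and candidates are ranked by how many tests they pass before descending depth-first:

```clojure
;; Sketch only: `propose-candidates` is a hypothetical stand-in for a
;; GPT-4 call that returns candidate implementations for the current node.
(defn score
  "Number of passing tests for a candidate implementation.
  Each test is a pair of [args expected-result]."
  [tests candidate]
  (count (filter (fn [[args expected]]
                   (= expected (apply candidate args)))
                 tests)))

(defn solve
  "Depth-first search: try the most promising candidates first,
  stop as soon as one passes every test."
  [propose-candidates tests node depth]
  (when (pos? depth)
    (let [candidates (sort-by (partial score tests) > (propose-candidates node))]
      (some (fn [cand]
              (if (= (score tests cand) (count tests))
                cand                                     ; all tests pass: done
                (solve propose-candidates tests cand (dec depth))))
            candidates))))

;; Toy usage: the "model" proposes a wrong and a right implementation of +.
(def tests [[[1 2] 3] [[2 2] 4]])
(def propose (fn [_] [(fn [a b] (* a b)) (fn [a b] (+ a b))]))
((solve propose tests nil 2) 1 2)
;; => 3
```

In the real thing, `propose-candidates` would prompt GPT-4 with the node's partial solution plus the failing tests, and the scoring would come from actually running the test suite.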

pez13:05:25

Awesome. Would be a super cool experiment! With the REPL it could get extra powerful. We could have ChatGPT provide some sample runs of a function (it often does this unasked) in a predefined format, and #C03DPCLCV9N could run the samples and compare the actual results with what ChatGPT guessed. It quite often understands the requirements correctly but fails in its first attempts at implementing them. Feeding the actual REPL results back could help ChatGPT correct the implementation.
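That feedback loop could look something like this sketch, assuming a made-up sample format of `[form guessed-result]` pairs; each form is evaluated in the REPL and mismatches are collected for feeding back to the model:

```clojure
;; Sketch of the feedback loop: ChatGPT's guessed sample runs are assumed
;; to arrive as pairs of [form guessed-result]. We evaluate each form in
;; the REPL and collect mismatches to send back as correction prompts.
(defn check-samples
  "Evaluate each sample form and compare with ChatGPT's guess.
  Returns the mismatches."
  [samples]
  (for [[form guessed] samples
        :let [actual (eval form)]
        :when (not= guessed actual)]
    {:form form :guessed guessed :actual actual}))

;; Toy usage against a deliberately buggy generated implementation:
(defn my-max [a b] a)                        ; buggy candidate from the model
(check-samples '[[(my-max 1 2) 2]
                 [(my-max 3 1) 3]])
;; => ({:form (my-max 1 2), :guessed 2, :actual 1})
```

The mismatch maps double as a correction prompt: "you guessed 2 for `(my-max 1 2)`, but the REPL says 1".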

🙂 2
👏 2
Nathan Tuggy20:05:38

Where are the logs for Calva Jack-In saved? It’s suddenly started silently failing to do anything at all, and reverting to 2.0.358 or 2.0.357 doesn’t help. There’s nothing relevant in either of the Output pane subsets (Calva says, Calva Connection Log) or in the clojure-lsp log. I’ve deleted the REPL output file, but that doesn’t solve the problem either.

pez20:05:08

Do you see anything suspicious in the Jack-in terminal?

Nathan Tuggy20:05:25

Yes, in that there is no jack-in terminal created. It doesn’t get that far.

pez20:05:35

Interesting…

pez20:05:56

Tried restarting VS Code?

Nathan Tuggy20:05:07

Yes. That’s actually what triggered it initially, although I don’t know all the details; I had a pending Calva update and a huge REPL output that was making it unbearably slow.

Nathan Tuggy20:05:15

Restarting again does nothing different.

Nathan Tuggy20:05:01

I double-checked and it doesn’t matter whether I use the hotkey or the command palette to jack-in.

pez20:05:32

See if it works with a fresh install of VS Code Insiders. Your VS Code could have some bad state.

Nathan Tuggy20:05:44

Yeah, it works fine that way.

Nathan Tuggy20:05:16

How do I figure out where the bad state is?

pez20:05:33

Sometimes a restart of the computer clears it.

Nathan Tuggy20:05:30

Looks like that worked. Very strange; before I restarted, I was able to mess VSC Insiders up too by exporting the profile.

pez21:05:24

You exported something from regular VS Code and imported it to Insiders, and it messed Insiders up? That’s a clue to where the bad state lives, I guess.

Nathan Tuggy21:05:44

Yeah, but after restart both are now working.

pez21:05:20

Usually VS Code just behaves. This thing is very unfortunate.

Nathan Tuggy21:05:04

Thanks for the help, anyway.

🙏 2
skylize01:05:33

"Did you try turning it off and on again?" is only funny because it's so true to life.

😂 6