2023-10-28
I wonder if we could replace git with just a series of GPT-engineer prompts that makes the desired changes. Git repos could be a lot smaller then! In fact it would probably be quite easy to implement such a system. And it's self-documenting!
I wrote an entire script with nothing but a series of GPT-engineer prompts earlier today.
yes the rng seed would have to be recorded along with the prompt
actually a problem does occur to me. it would be very resource intensive to reverse a commit
because if you ask gpt-engineer to remove a feature it's not clear you would end up with the same thing as when you asked it to add the feature
so would have to re-compute the entire history just to rollback a commit
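The replay-to-rollback idea above can be sketched in a few lines. This is purely illustrative: `apply_prompt` is a hypothetical stand-in for a seeded, deterministic GPT-engineer call (here it just appends a comment line so the example runs), and the repo is just an ordered log of (prompt, seed) pairs.

```python
# Hypothetical sketch: a "repo" is an ordered log of (prompt, seed) pairs.
# Rolling back a commit means replaying every earlier prompt from scratch,
# which is why reverting is so expensive in this scheme.

def apply_prompt(source: str, prompt: str, seed: int) -> str:
    # Stand-in for a deterministic, seeded model call.
    return source + f"# {prompt} (seed={seed})\n"

def checkout(log, revision):
    """Rebuild the source at `revision` by replaying prompts 0..revision-1."""
    source = ""
    for prompt, seed in log[:revision]:
        source = apply_prompt(source, prompt, seed)
    return source

log = [
    ("add feature X", 42),
    ("rename var foo to bar", 7),
    ("add feature Y", 99),
]

# "Reverting" the last commit = recomputing the entire earlier history:
previous = checkout(log, len(log) - 1)
```

Note that there is no cheap inverse of `apply_prompt`, which is exactly the problem raised above: you can only get to an earlier state by replaying from the beginning.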
neural net compression is like 40 years old, as technology goes. of course the modern stuff is better but brutally resource-impractical
is the $25 million number legit?
I think simple "prompts" like rename this var would save some time in code reviews hehe. But you don't need an LLM for this
probably a neural compressor thing will have a smaller total addressable market, and you can scale that up or down as you will, so i guessed 10x less. this would be for a centralized firm, but a consumer-grade gpu can't deal with llm bullcrap, or can only deal marginally
your own personal gpu is still like 2 grand and 5 cents/hr usage for viciously worse models, and amortization is far worse of course
total git societal effort is junio hamano and his buddies writing some c occasionally, and everyone reading their manpages and books and articles and stuff
When you use prompts to do something, would it be true that the next time you use the same prompts you might get a different answer, as there might have been extra learning input? Doesn't that make it less deterministic than we have been used to?
Software stops working over time when it's not maintained due to libraries changing etc. This method might actually solve that problem, even as it creates a new one.
Agreed, but there is lots of software that is in the it works don't change it mode. Some of my earliest code was running at least 25 years after I first wrote it.
neural nets are only nondeterministic because of sensitive dependence on initial conditions. if you slam down initial conditions perfectly (like, "ok, crc32 matches on model, same seed" perfectly), it'll be deterministic
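The "pin down the initial conditions and you get determinism" point can be shown with an ordinary seeded RNG: two generators given the same seed produce identical streams. (This illustrates the seeding principle only, not an actual neural net run.)

```python
import random

# Same seed => same "initial conditions" => identical output streams.
a = random.Random(1234)
b = random.Random(1234)

stream_a = [a.random() for _ in range(5)]
stream_b = [b.random() for _ in range(5)]
print(stream_a == stream_b)  # True: fixed seeds make the draws reproducible
```

The same principle is why the commit format above would need to record the seed alongside the prompt.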
I was not commenting on whether the same model would give you the same results, but that these llms are constantly evolving, so over time we could see very different results.
> neural nets are only nondeterministic because of sensitive dependence on initial conditions

It's not entirely correct: it depends on the implementation. Even without explicit randomness, there can still be non-deterministic results due to, at the very least, the unpredictable order of multithreaded computations. Floating point operations are not associative.
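The non-associativity point is easy to demonstrate: summing the same three IEEE-754 doubles in a different order (as different thread schedules would) gives two different results.

```python
# Floating point addition is not associative: the grouping of operations
# changes the rounding, so (a + b) + c can differ from a + (b + c).
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c   # 0.6000000000000001
right = a + (b + c)  # 0.6

print(left == right)  # False
```

This is why a multithreaded reduction, where the summation order varies from run to run, can produce slightly different totals even with no randomness anywhere in the code.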