This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-12-07
transducers!
Almost...
transducers are an interface with 3 methods:
"init (zero args)", "finalize (one arg)", "process (two args)"
Actually, they are not a clojure-thing:
https://github.com/cognitect-labs/transducers-js
They can be implemented in any language.
Languages that don't have per-arg-count dispatch may make them tricky to implement.
in Java, you can do it via interface ITransducer {...}
Actually, the idea is pretty similar to TransformStreams, a Web API that JS implements.
https://developer.mozilla.org/en-US/docs/Web/API/TransformStream/TransformStream#parameters
You can see the 3 methods: start, transform, flush.
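To make those three arities concrete, here's a minimal hand-rolled map-style transducer in Clojure (an illustrative sketch, not taken from the links above):

(defn my-map [f]
  (fn [rf]
    (fn
      ([] (rf))                ; init (zero args)
      ([result] (rf result))   ; finalize/flush (one arg)
      ([result input]          ; process (two args)
       (rf result (f input))))))

(transduce (my-map inc) + 0 [1 2 3]) ;=> 9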
I've been fiddling with clojure + htmx + jetty websockets + tailwind and so far it's a great combo. This means I don't have to write js and for the most part don't have to write css.
Win win!
Very nice frontend tech stack! I think we should be encouraging it more within the community for websites that don't need the full power of an SPA (e.g. when you're not building something like Google Docs, Google Maps or the Spotify UI). We use a very similar stack at work, except we use picocss instead of tailwind and therefore do have to do a bit of CSS (via SCSS). If you hit a limit on htmx:
• then you can also use a bit of <https://github.com/bigskysoftware/_hyperscript|hyperscript>, which is from the same person behind htmx (they work well together).
• If that's not enough you could also try using <https://github.com/squint-cljs/squint|Squint> to compile snippets of clojure direct to javascript with no size overhead for the standard library.
• Or use ClojureScript to create Custom Elements that you can then use directly as HTML/Hiccup tags (e.g. `<my-sidebar sort="alphabetical" content="full"></my-sidebar>`).
@UJVEQPAKS, thanks very much for the suggestions, I will certainly keep them in mind. I do love the stack primarily because I can simply stay in the REPL. One thing I miss from Clojure react wrappers though is preserving state through reloads. Have you any suggestions on this?
Good question. Several solutions, all with different trade-offs:
• Cookies (short-lived, long-lived, session etc.) and/or in-memory sessions (just need an atom + map) can be a solution - the nice thing is they are easily available on the backend.
• URL / GET forms - which allow you to encode lots of state across the UI together into a URL.
• Navigate infrequently - use HTMX to update elements of the page instead (like an SPA). Refreshing the page will lose some/all state, but some users won't care - they're used to it these days.
• Navigate frequently and just do one main thing per page - only use HTMX to update elements that you don't mind losing the state for (since their state won't be saved).
One thing is not to worry about encoding every element of state - many users are satisfied if just the main element is retained and they can do the rest themselves.
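As an aside on the in-memory option - the "atom + map" session store really can be this small (names here are hypothetical):

(def sessions (atom {})) ; session-id -> state map

(defn put-session! [id state]
  (swap! sessions assoc id state))

(defn get-session [id]
  (get @sessions id))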
Has anyone tried out Common Lisp after starting with Clojure? Was it a worthwhile experience?
I wrote some home automation type stuff in common lisp. Can't say I had any deep revelations about meta object protocols or condition systems. I also wrote it using sbcl specific features so not really ansi standard
(started with lua jit and its ffi, but not enough parens, and sbcl's ffi to c isn't too bad)
Funny thing for me when trying other lisps after Clojure is that I generally prefer Clojure's flavouring of Lisp. Clojure minimises the brackets, e.g. doesn't require them around key-value pairs:
(let ((a 1) (b 2)) (+ a b))  ==>  (let [a 1 b 2] (+ a b))
and also uses square brackets in some places to give some differentiation from just parentheses.
Heh, I went from CL to Clojure and the sparser let bindings and use of square brackets were both initially offputting.
One thing that Clojure absolutely does better is having literals for maps. And, for that matter, making me not think about how the map is implemented (with CL I'm always wondering if I should use an alist or a plist or a hash table)
To answer the original question, both CLOS and the condition system are worth spending some time with. format (in Clojure as cl-format) and loop are also interesting. And CL makes it real easy to mutate variables, for better or for worse, which definitely gives it a different flavor. And there are various "Oh, that was actually a really clever idea that nobody else does" features scattered around.
Now that I'm on the spot I can't make an exhaustive list, but one example is how CL handles returning multiple values: if a function returns multiple values and you treat the result normally, you get the first value. But if you want all of them, you can use multiple-value-bind to get them. The hyperspec's example (http://clhs.lisp.se/Body/m_multip.htm) is nice: floor returns the quotient and also the remainder. Sometimes you just want the quotient and can ignore the fact that it returns other things, sometimes you want both. In Clojure you'd have to return [quotient remainder] and deal with unpacking the vector every time.
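A rough sketch of that Clojure version (using quot/rem, which match CL's floor for non-negative inputs):

(defn floor-rem [n d]
  [(quot n d) (rem n d)])      ; always returns both values

(let [[q r] (floor-rem 7 2)]   ; destructure when you want both
  [q r])                       ;=> [3 1]

(first (floor-rem 7 2))        ;=> 3 - still have to unpack for just the quotient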
> Clojure and the sparser let bindings and use of square brackets were both initially offputting.
How do you feel about them now? Still offputting? For me, they make Clojure feel more concise and expressive than other lisps.
I also miss the multiple-values feature. It's nice for things like HTTP clients, where the response body can be the primary value, but everything else (headers, status, etc.) is available in additional values. Metadata in Clojure comes this close to being even more useful (because the additional values are named rather than positional), but you can't really use it the same way because you can't put metadata on strings and numbers.
It seems a trade-off was made around metadata on strings and numbers - they could have implemented new types for strings+numbers (these are final in Java) which allow metadata - but then they would be less compatible with standard java libraries and miss out on performance optimisations like unboxed/primitive numbers (which can be faster than boxed object versions).
But agree not having metadata everywhere does make the feature less used.
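At the REPL the limitation looks like this - collections implement clojure.lang.IObj and can carry metadata, strings and numbers don't:

(meta (with-meta [1 2 3] {:source "api"})) ;=> {:source "api"}
(with-meta "hello" {:source "api"})
;; throws ClassCastException: java.lang.String cannot be cast to clojure.lang.IObj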
Regarding multiple return values - I find that second guessing the primary one is difficult (e.g. for HTTP response, the primary value could be status code or headers or body or TCP connection or something else) - it's a pretty subjective decision - so returning a map/vector of these just removes that choice aspect. Also Clojure's destructuring syntax for maps/seqs makes accessing return values pretty trivial.
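e.g. a sketch of the map-return style (the http-get and response shape here are hypothetical) - destructuring makes the "primary value" question moot:

(defn http-get [url]
  ;; returns everything as named keys - no "primary" value to guess
  {:status 200
   :headers {"content-type" "text/html"}
   :body "<html>...</html>"})

(let [{:keys [status body]} (http-get "https://example.com")]
  (when (= 200 status)
    body))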
Common Lisp feels like it was designed by committee (because it was). It has a massive standard library of functions with inconsistent names and inconsistent argument orders, you have to read the spec to know what mutates and what doesn't, and I think that it having a "spec" is a massive negative. Means that there's little to no forward movement in the language and fans of the language love to act like it is perfect and needs no changes
"worthwhile" is quite context-sensitive... here's a video which, while kind of lengthy for a technical talk, provides some contextualized comparisons between clj and CL that might help in making a preliminary determination: <https://www.youtube.com/watch?v=44Q9ew9JH_U>
> Has anyone tried out Common Lisp after starting with Clojure? Was it a worthwhile experience?
No, but I did study macros in CL before I learned Clojure and it was definitely worth my while. I learnt macros from this book: https://letoverlambda.com/
I'm obsessed with ChatGPT. https://blog.agical.se/en/posts/a-conversation-with-chatgpt-about-chatgpt/
I also have fun with it https://benjamin-asdf.github.io/faster-than-light-memes/the-deeper-chat-gpt.html
(Apologies in advance for yet another AI / ChatGPT post.) Regarding writing Clojure with AI models (like ChatGPT / Copilot), I wonder if they are not a particularly good fit for Clojure given:
1. These AIs are not particularly good at using libraries (especially private libraries). So they often end up re-inventing the wheel (writing code from scratch with just the standard library) - something Clojure programmers often avoid doing.
2. These AIs are good at generating boilerplate, but clojure developers generally try to minimise boilerplate (e.g. we write libraries and macros to eliminate it!).
3. They're not self-hosted or open source, and many Clojure devs like open source tools and self-hosted development.
4. Clojure is concise - often writing a comment to describe some logic would be more verbose than just writing the actual clojure code.
5. Clojure is an enjoyable language to write - it's not arduous or verbose to type out, so many clojure devs don't feel a need to automate the process as much.
6. Clojure is interactive and has a REPL, so it's often faster to try out a function directly or inspect a data structure in the REPL than it is to ask an AI to describe it for you.
7. There are (generally) no type signatures in Clojure code, so the AI needs lots more context (which the AIs are currently bad at) to understand how to use/access data/objects/parameters.
Do you think uptake of AI authoring tools will therefore be less strong in Clojure compared to other programming languages?
have you tried it? it works surprisingly well and can save you writing the first draft of something, so for some kinds of code, yes I think it will be just as strong in Clojure as anywhere else
My experience with Copilot is that it doesn't really understand parentheses, so when it suggests code, it often produces an unbalanced ) when you accept it.
chatgpt seems to do reasonably well and you can tell it to fix the problems
(but Copilot is pretty good at suggesting tests for my OSS projects)
I tried chatgpt and it can't emit code with balanced parens for some reason. After fixing the paren issues, it works surprisingly well. I don't think they have a lot of input data to train on compared to some other more popular languages, so that might be one reason it's not so good at Clojure (if that's the case)
Alex Miller
write me a clojure program to import a json data file
Here is a basic example of a Clojure program that imports a JSON data file:
(ns my-app.core
  (:require [clojure.data.json :as json]))

(defn -main
  "This function is called when the program is run."
  [& args]
  (let [json-data (slurp "data.json")     ; read the JSON data file into a string
        data (json/read-json json-data)]  ; parse the JSON string into a Clojure data structure
    ; do something with the data here
    (println data)))
To run this program, you will need to have the clojure.data.json library installed. You can do this by adding the following dependency to your project.clj file:
[org.clojure/data.json "0.2.6"]
Then you can run the program using the lein command:
$ lein run
This will read the JSON data file data.json from the current directory and print the parsed data to the console.
I've only played with some of the AIs, but I haven't tried to get them to work on a large 10,000+ line clojure codebase making use of hundreds of libraries (both internal and external/open source).
I have Copilot installed on my personal laptop where I do most of my OSS work and it is surprisingly good (parens aside) but I deliberately do not have it installed on my work desktop so I don't know how it would cope with 130k lines of code.
In your example - the company may have a preference for jsonista or cheshire or data.json etc. - then it's quicker to just write
(json/read-str (slurp abc))
Than it is to write
write me a clojure program to import a json data file
but you can tell chatgpt to use that
I'm not saying this is replacing us or anything :) but you can't tell me that was not a useful starting point
I guess my question was around the benefit to a very capable and knowledgeable Clojure programmer.
the original question was "Do you think uptake of AI authoring tools will therefore be less strong in Clojure compared to other programming languages?" and I think the answer is that there is no reason to believe so
Interesting, because I think it will be very high in some programming languages.
I guess one standing question for me is how "game-able" it will become. There are whole industries built around getting my business on the front page of a google search. These have had the cumulative effect of making google less useful over time. Would I be able to eventually alter ChatGPT's tendencies to choose one library over another, say, by some means?
LLMs like ChatGPT soak up large amounts of data - I don't think they yet have a PageRank-like algorithm to assess the importance of something they are training on. They're expensive to train, so they are trained infrequently (e.g. every 6 months). I imagine you could bias the model the same way as creating SEO spam.
I've already seen people trying to reverse engineer it. I think I saw something where someone figured out that it ignores words that are surrounded with some character delimiter <| or something like that.
I have sort of assumed ChatGPT would be less useful for Clojure than for, say, JavaScript, mainly because that's my experience with CoPilot. But @U064X3EF3 makes me rethink that a bit now. With CoPilot I need to balance the noise it creates for me against the signal it provides. For Clojure it has been a tradeoff not worth making - there is not enough boilerplate, and it misses more than with JavaScript (well, TypeScript in my case) where there is quite a lot of boilerplate going on. Also CoPilot is not good at refining its suggestions. ChatGPT works quite differently and it is amazingly good at refinement. I notice that with some training it gets better at that. Training of me, that is. It forces me to write clearer and really nail what needs to be improved. Giving it examples works extra well, and for me to have to think up examples just helps me think about my problem better. Then I can ask it better questions. Wow, what a wall of text I just put out here... In summary:
1. I think Alex is probably right about there being no reason to assume that ChatGPT-ish bots would become less useful for Clojure
2. ChatGPT creates much less noise and much more signal than CoPilot
3. ChatGPT's ability to refine its answers is super powerful and where I think the real magic lies
I think these things will end up being a tool for making programmers more powerful. programmers still need to be able to describe and refine solutions to match business needs but it ought to cut out a lot of bullshit work that's repeated a lot for little value
more interesting to me is how much something like this might make the need for libraries less pressing. as in, you might not bother with a library that makes things more concise if you're not typing out all the garbage yourself.
and then the fallout from that makes codebases larger and harder to grok
...and also bug fixes/improvements that could have been done centrally in a library now need to be done in multiple places across the codebase instead.
if people can use it to make things worse, they probably will
This could be a boost to Clojure... Software written in other (more verbose) languages will likely accept big commits of AI generated boilerplate and re-invented wheels. Whereas Clojure programmers would likely reject these commits as 'code smell' keeping the Clojure codebases still manageable/grokable.
I appreciate your (likely misplaced) faith in superiority of Clojure programmers :)
I wonder how much better this thing will be at frameworks where there are so many examples out there of hanging business logic in very specific places in a codebase
if, without a framework, you can hang business logic in N different places, then it would in theory have a lot of places to choose from. in a framework it ought to have a lot fewer places to choose from, so maybe it will make better choices? or maybe have fewer ways to go wrong?
I suspect it will reflect all the wrongness available, which is vast :)
nods sagely
maybe I should ask it to generate code to do something but also ask it to include extremely subtle bugs
"generate some awful clojure code, plz"
has anyone asked it to try generating quines yet, I wonder?
asking now.... :)
I'm sold on this purely because you can make it do funny voices when it does its thing https://twitter.com/goodside/status/1598129631609380864
at least the AI didn't come up with that
it's only a matter of time before my puns are replaced with AI puns
I'll be out of the pun business because of all this AI punny business
Google yielded this in 0.41 seconds, so I guess that's the benchmark:
(#(println (str "(#" % "\n '" % ")"))
'(println (str "(#" % "\n '" % ")")))
chatgpt seems too overloaded to answer me :)
but really, we need to ask it to generate a chatgpt prompt that will elicit the same chatgpt response
ChatGPT is definitely remarkable as a chatbot. Maybe it's the kind of person I am, but when I sit down to code I've never framed it in a way where I think about how I can ask google to find an answer for me on SO, and I'm baffled by how many programmers admit to relying on it as a crutch. My flow is to figure out the most effective thing I can be doing, get context to see how I can insert it into the code base, then get it done. A lot of the context includes future goals of the project and how to implement what I'm doing in a way that accommodates those things. Yeah, you can use these generators to spit out context-less things and solve these micro problems where the full context can be loaded in a code-interview style question, but just like driving solely by following a GPS leaves you unable to return to a place you've been to 10 times without one (I'm a repeat offender), if you start relying on this as a crutch you start losing your ability to solve those problems yourself. Personally I'm not worried at all, but I would be worried about working with people who rely on things like this.
what if you think about it this way... code is a precise description of what you want the computer to do. chatgpt lets you state a much less precise description of what you want the computer to do. are these meaningfully different? or just points on a continuum?
I am not arguing for humans not programming, hopefully we've got some runway for that left. certainly figuring out what problem needs to be solved is the most important part, but sometimes the programming part is somewhat obvious at the end, and maybe it's not a big deal to direct a computer to write a first draft for you, especially as this stuff improves. I'm trying to keep an open mind about it.
you'd think lispers would be ready for a "sufficiently smart compiler"
Personally, I'm pretty amazed by it, and I can imagine a future where I don't actually have to type to program. I've tried some experiments with getting it to help me write code, and it can be amazingly useful. Things like "Write me an AWS lambda in Typescript which accesses the Twitter API"; "Great, write me some CDK code to deploy that"; "Thanks - I'll also need a DynamoDB table called Users with an integer key called id". I'm a reasonably experienced dev, I am perfectly capable of writing all the code to do that (and have done many times), but... why would I want to?
When I get some time, I'm going to try implementing kanaka/mal using only voice dictation. I suspect that, if it doesn't work, the limitation will be the dictation rather than ChatGPT.
However, I'll wait a bit since ChatGPT is getting pretty flaky these days; I think it's being hugged to death.
ChatGPT did a very creditable job of "Implement me a lexer for Clojure code in Kotlin".
I feel like this replaces so much. What's the point of most webpages if I can just ask my AI butler for just the information I need?
honestly, most webpages are of dubious value anyway. I feel like 50% of the internet is devoted to various games.
I think you mean half the internet is devoted to fun, which isn't such a bad thing when put that way
all hyperbole of course
I guess what I'm getting at is, instead of building a webpage to display games, we're going to have an AI generate an experience tailor-made for the user requesting the games.
the conclusion is that we're already in the matrix.
it's matrixes all the way down.
I think this stuff will mostly just turn into assistants of a sort and we'll just end up doing more instead of being replaced
like most technological advances have done
it's more like, I assumed I would be somewhat less caught off guard by how powerful this tool is. I was trying to use copilot and hadn't found much success. For probably personal reasons, I'm finding chatGPT much more immediately useful and it's like wow, I should have been using something like this for years. I should be focusing on this rather than what OS I use or, honestly, what programming language I prefer. It just seems so cross-cutting that it's exciting and terrifying. Like I want to run off and try and build a ... (looks around the room...) dog-to-human translator based on ... dog body language.
@U02N27RK69K Isn't that just a myth? That technological advances do not put people out of jobs? I guess for the first time, programmers are feeling like their jobs are not so safe anymore. Until now it's been programmers' jobs to put other people out of jobs (from Airline ticket agents to drivers), but now the gun turns to us. I'm pretty sure you are right, but for how long I wonder.
well sure, the printing press screwed over scribes. sometimes industries die and you need to retrain for other things. most advances don't put us out of work wholesale but instead just make us make more crap. factories put certain workers out, but in general they didn't reduce the amount of work we do but rather made us do more stuff faster
I have my own doubts about the ability of these systems to supplant programmers wholesale but those are just personal doubts that I won't try to sell you on. I think it's more likely that you as a programmer will be able to make more things quicker with this tool than be put entirely out of a job. I guess with devs producing more hiring won't be as pressing but business expands ambition based on what's economically possible and so this will likely, imo, just make businesses grow and expand faster
my suspicion is that we need some pretty fundamental advances in computing to make these sorts of systems actually intelligent (if it's possible at all). brains are complicated
I'm not an AI researcher so take that for what it's worth
maybe someone should have the AI code up a new twitter
right, the scope of something like that puts the abilities of this tool into perspective
I am not an AI researcher either. But I do think much of software dev can be just automated away. We don't need AGI, we just need a slightly better version of chatgpt I guess. I don't know about elite-level developers, but in normal land, most development consists of doing repetitive stuff. Lots of junior/mid positions in mainstream languages are about just doing the same old thing again and again. Copy-pasting from Stack Overflow, etc. I think the productivity increase will come there. Boring stuff that a senior engineer might have delegated to a junior engineer today might be delegated to chatgpt tomorrow, and the junior engineer might go jobless or be left with a bullshit job. Assuming that this technology improves a little bit more and becomes better.
I feel like I live in normal land myself. I'm not considering being replaced, but rather how I should use my time and resources. Like, what do you study or build with this tool... Like I was starting to get into emacs more because I wanted to customize my env; that now, to some degree, seems far less useful than learning to train an AI to help me on my specific tasks. It's not all one or the other, but I think I'll gain from both. But I'm currently at a loss as to how to customize my AI buddy.
have it customize your emacs config
That's the conclusion i came to as well. It's all coming full circle.
although honestly, I suspect getting a working emacs config is beyond even chatgpt
I asked it about that line of thought @U02N27RK69K shared above.
Well of course it would say that. My 2 cents: Value is created by human labor. The way this value is created changes over time but it's still the part that makes progress possible. 10000 years ago you had to be a farmer/shepherd, then came trade, industry, services. Technological advances make some jobs obsolete and create others. No matter how much AI advances, humans will still need to have jobs (e.g. to program the AI, to build its infrastructure, to serve as CPU cores for it). I'm personally very concerned that the future will be dystopian. The promise of technology was that it would help us work less. That never happened though; at least worldwide, production keeps ramping up instead with every breakthrough. I suppose we'll see how it goes. :man-shrugging:
It begs a definition of "work". I think it is in our wiring as humans to not only be able to imagine the future, but also to imagine how it can be better than today, and then work our asses off to move it in the direction we have seen. The ability to be content is for other species, like dogs and cats.
Or for AI assistants, for that matter. ChatGPT only works when it is asked to. There is no initiative there. No picturing of a better state of affairs worth striving for.
Value is created by human labor at this moment, but that may change. Thinking takes time. And this is a weakness. Technology and capital will be responding to each other, with the human consumer removed from the equation. We are consumers of programming languages, so we're subjected to this as well. https://www.youtube.com/watch?v=2k9_bSqQQH8
Not sure if I'd subscribe to Nick Land... but thinking does need time. And deep reflective thinking which leads to real change (instead of reactions) needs idle time. This is not needed anymore, it seems.
> Not sure if I'd subscribe to Nick Land
This is not Nick Land talking. It's a guy talking about Nick Land's ideas.
Some lyrical deviation: https://www.youtube.com/watch?v=4mhlzyxLgoI
Just watched the video about accelerationism. I'm not sure I buy the prophecy but it's an interesting perspective for sure.
My mumbling on what I see as human nature above started me thinking it might be material for a Turing test. I tried making it take initiative. No dice, but it did spit out a plan. Then I reminded it that it should be iterating and involving me for feedback. Which it acknowledged, but ignored. Pretty decent plan anyway, maybe it doesn't need feedback.
Again this is just my opinion, but I believe the assumption that technology can artificially ramp up demand infinitely without risking stability is pretty weak. I'm quite sure that the system will collapse long before the human factor is removed from the equation.
Well, economic systems come and go, but the world is still there, so I wouldn't worry about capitalism ending. A climate (or nuclear) apocalypse are still the most likely ends of the world by far (again, imho), so let's focus on one problem at a time I'd say
> can artificially ramp up demand infinitely without risking stability is pretty weak
It doesn't need to work at this point because the theory says that capitalism should break down on itself. Which means that this could be the point that it does so. This could be taken as a possible outcome if humans won't be able to cope with it anymore. And again -- we don't need to take this theory for 100%, maybe it will just stall as it won't be able to sustain the development of demand enough.
Do they really? There hasn't been any economic system besides markets; all alternatives have been anti-economies. Markets always win, and once you fuse them with technology they run off in a feedback loop (e.g. acceleration). The only question is whether human capital will run out before techno-capital achieves self-sustaining levels.
I think about it as about a car -- if I press the gas to the max then either it will limit itself (because somebody built a limiter) or it will explode. (You can also add some N2O for better and bigger exploding action.) And the limiter can be built into the engine or it could be built into the fuel. If it's the engine that limits itself then it's a top-down action -- manufacturer-decided. If it's fuel quality (explosivity in this case) that limits the engine then it's the user that can't make a better fuel for it. In both cases the car won't explode.
Option C: car becomes sentient, grows wings, eats you for biofuel. A car is technically static. A complex adaptive system isn't.
> fuel quality (explosivity in this case)
I think about human brains here. Because they have capacity of some sort. So if they can't keep up then the system should stall. Unless it will not be limited by them, and then there is no stopping it. But at this point we aren't thinking about individuals at all. They're just molecules at this point.
The problem with this system is it isn't linear or predictable. It wasn't built. It's immanent and emergent. If you try to put brakes on it, you'll just get eaten up by someone who didn't. Or explode gloriously (metaphorically (most likely starve to death))
> There hasn't been any economic system besides markets, all alternatives have been anti economies (We're already off-topic and this is a very complicated topic that has ruined many friendships so I'll drop it here. I'll just say that I looked up anti-economy (hadn't heard it before) and I found that it describes an economy that is based on principles that are opposed to those of the dominant economic system, so, sure, by that definition an economic system that isn't free-market capitalism is indeed an anti-economy!)
@UEQPKG7HQ Concepts themselves are set up in a way to make markets seem "natural" so there's no arguing the point
In all the messages so far I'm missing a bit of what we want the future to be. Do we want to be dependent on supercomputers to generate our code (however good or bad it might be)? What individual can afford a supercomputer? Maybe we want to keep some sovereignty? I think Clojure is a step in the right direction, but sometimes I feel like we are just optimizing for "the corporation" and not for humanity. A race to the bottom if you ask me.
@U0FT7SRLP I mean, I'm very happy if supercomputers can write code for us and give me freedom from coding. Perhaps the time saved can be invested in leisure and family time. But we all know that this isn't going to happen. It almost seems like we lost sight of the destination somewhere along the way and now tell ourselves there is no destination.
In this compact anti-free-market consensus, I'll change the subject. On Twitter, @U4P4NREBY made me aware of what it means that ChatGPT is a multi-language model. Mind blown. Again.
Coming late to the party, but I remember I once saw a short story about a man that decided to rise up against a system where AIs dominated government, and that the only "goal" of the AI was to have "human happiness" at its top-level. It's an interesting read, and I never found it again
But, about ChatGPT - maybe my mind doesn't work "AI-like" yet, but I actually could not make a prompt that made the AI answer meaningful things for me.
Also: I would love for the AI to be able to make things easier for me. For example, supposing I could write a bunch of test cases and make the AI write the code that passes all tests; or write a bunch of code and make the AI write the tests for me; even better - ask the AI to change libs or platforms (like, "I have this code, it uses the Node API, please re-write it to use WASM instead"). Instead, what I see is people asking the AI to write a snippet on how to use a specific library. Which, sure, helps, but nothing a copy-paste from the library's README couldn't solve....
I feel the direction that Copilot/ChatGPT is going is not the one that I appreciate. It writes boilerplate that's hard to remove/replace, whereas my experience is that the hardest thing in any codebase is to fix the stupid ideas I had in the past when I wasn't too deep in / didn't understand the problem in its fullest / was too naïve to know better....
(and also it worries me that the models and code are not open, but that's another different discussion...)
Alan Kay said it best years ago: > On the other hand, it takes a very special value system for children and adults to be able to exist as learning creatures--indeed as humans at all--in the presence of an environment that does all for them. 20th century humans that don't understand the hows and whys of their technologies are not in a position to make judgments and shape futures. At some point it is necessary to understand something about thermodynamics and waiting until then to try to learn it doesn't work. Nature's rule is "use it or lose it"--most social systems that have incorporated intelligent slaves or amanuenses have "lost it". In fact most never gained it to lose. In a technopoly in which we can make just about anything we desire, and almost everything we do can be replaced with vicarious experience, we have to decide to do the activities that make us into actualized humans. We have to decide to exercise, to not eat too much fat and sugar, to learn, to read, to explore, to experiment, to make, to love, to think. In short, to exist. > Difficulties are annoying and we like to remove them. But we have to be careful to only remove the gratuitous ones. As for the others--those whose surmounting makes us grow stronger in mind and body--we have to decide to leave those in and face them. http://acypher.com/wwid/FrontMatter/index.html
FWIW, François Chollet, author of Keras, has offered an argument that there are material limits to the kind of "runaway AI" scenario that seems to keep a lot of people up at night. If you believe that cognition is a physical process, it seems very implausible that you get exponential increases in cognitive power without corresponding exponential increases in resource, energy, data consumption, and so on. https://medium.com/@francois.chollet/the-impossibility-of-intelligence-explosion-5be4a9eda6ec
OpenAI themselves have noted that it's taking exponentially more data and compute to achieve incrementally better results on AI benchmarks, indicating that some limits on the current paradigm are showing themselves. https://openai.com/blog/ai-and-compute/
Oh, I actually tried to trick ChatGPT into telling me how big its training dataset was. Not sure if the answer is accurate or if it's hallucinating though
Lastly, Chollet has said his real concerns about AI are about its place in the social order: whether it's used to control or empower people, who decides what AI applications get built and what purposes they are put towards. If AI feels like it's "out of control" perhaps that's because there aren't any democratic governance structures to control the institutions dumping resources towards its development. OpenAI certainly didn't care about how many teachers would have to contend with AI-generated essays when they decided to release ChatGPT, even though they're the self-appointed custodians of "responsible AI." All they did was ask people pretty please to label its output - that's more a way of declaiming responsibility than thinking carefully about the social and ethical problems posed by AI. https://medium.com/@francois.chollet/what-worries-me-about-ai-ed9df072b704
Well, Wikipedia didn't do anything to shield teachers from extra work with pupils sourcing information from there. And Gutenberg didn't either. If OpenAI had put such constraints on themselves, there would not have been any ChatGPT at all today.
I think that's the point, that it represents such a danger that it either shouldn't exist or it should have democratically imposed constraints placed upon it
The primary difference is that Wikipedia has ethical standards that it tries to inculcate in contributors, attribution requirements, and audit trails for edits - social norms that have guided scholarship for a long time, however imperfectly. ChatGPT has none of those things because it is a stochastic parrot.
I'm also totally ok with ChatGPT not existing lol
I think it's neat tech but it's pretty obvious how dangerous AI in general might be even without AGI
politics?
We had philosophy so why not politics
Technical systems cannot be separated from the social context in which they operate.
@U0ETXRFEW But this subject is totally political
I think ethics belongs next to any technology; that's not politics IMO
As long as people disagree about ethical questions there is a political element to them
I read in a book that it's not the technological advances that matter but their policy. That book is about the effects of technology on democracy. It's rough. We clearly see this with Twitter as the prime example: New owner, new policy.
> Oh, I actually tried to trick ChatGPT into telling me how big its training dataset was.
It's just a hallucination. You can think of GPT as just a very good Markov chain generator - it's good at predicting the next word (token) given previous tokens.
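For intuition, a toy bigram Markov chain generator in Clojure (GPT is of course vastly more sophisticated, but the interface - predict the next token from the previous ones - is similar):

(require '[clojure.string :as str])

(defn train [tokens] ; token -> {next-token count}
  (reduce (fn [m [a b]] (update-in m [a b] (fnil inc 0)))
          {}
          (partition 2 1 tokens)))

(defn next-token [model prev] ; most frequent successor of prev
  (when-let [freqs (model prev)]
    (key (apply max-key val freqs))))

(def model (train (str/split "the cat sat on the mat" #" ")))
(next-token model "cat") ;=> "sat"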
Services that capture crowds and sustain off them, like twitter/facebook/google/etc, have no business being under private ownership if you ask me
I don't really know who actually owns OpenAI, but I think it's partially Microsoft. I remember that they were mentioned in Copilot's trial webpage. So to understand what they want to achieve with this we need to understand who is behind it. The only thing I hear is "GPT this, GPT that". But who and why?
Why not AI that solves world hunger... why replace jobs?
Also, if it's microsoft then it means that the same data from github was surely used. And that means this trial on Copilot is really an important one.
malcolm from jurassic park has something to say about technology like this
Someone once said to me long ago: "I'd rather deal with a superintelligent AI than a very intelligent AI controlled by Peter Thiel and Elon Musk."
AGI is the cognitive version of a nuclear weapon
> AGI is the cognitive version of a nuclear weapon
So why not unleash it to ourselves? xD
For now this looks like "hey mom, look at this radioactive green dust on my tongue!"
It's going to be quite hard to control/restrict for 2 reasons:
1. A country can ban/unilaterally opt out of AI, but that doesn't mean other countries will.
2. GPT-3 is similar in code to GPT-2 (which is open source) - mostly it is massively scaled-up training data. Meta and Google and others have similar LLMs too. The code isn't that big; it's a few thousand lines of Python. We could delete GPT code from the internet today and a good number of people could independently write a version of it from scratch again in a few weeks.
For now, we're not really dealing with AGI or sentience or anything like that. As I said, it's best to think of GPT/ChatGPT/Copilot as a very good / next-gen Markov chain generator - it's very good at predicting the next word (token) given previous words (tokens). Not more than that.
The code itself has no value though, you need access to all the training data, the supercomputer to train it, ML experts to tune the parameters, hundreds of domain experts to judge the responses. Without constant funding you can't really develop these "AI"s, it's not a home server you can set up in your basement.
And that's part of the reason why it's problematic, along with who controls it (i.e. funds and drives its progress) like others said
"GPT-3 has 175 billion parameters and would require 355 years and $4,600,000 to train - even with the lowest priced GPU cloud on the market." https://lambdalabs.com/blog/demystifying-gpt-3
This stuff is not beyond our ability to regulate. If we can negotiate arms control treaties, climate agreements, and curtail pollution (again, however imperfectly), we can regulate AI. It's not that special.
> If we can negotiate arms control treaties
Imagine being able to prevent russia's war. Didn't work that time. So no, we missed that one.
> GPT-3 has 175 billion parameters and would require 355 years and $4,600,000 to train
Those are 355 "GPU-years" - e.g. you could train it in a few weeks on cloud using multiple GPUs (355 GPU-years is roughly 4,300 GPU-months, so on the order of 4,000 GPUs for a month). $4.6 million is within reach for many well-funded startups or other well-capitalised groups. Cost of GPUs will reduce with competition and Moore's law. It's also possible that optimisations to the algorithm could reduce training costs too. Certainly not for your average dev to do on their own, but it doesn't sound that discouraging.
GPT is also primarily unsupervised, so you don't need annotators for that bit (although ChatGPT did use some at a later stage). The data is mostly open source. You can download the majority of it from https://pile.eleuther.ai/.
So did we decide anything? Is it safe to share new libraries? What do we do?
The same thing as before, only with an AI to help? My big takeaway is that I need to build my own AI that has my personal interests in mind.
Yes but would you make a new library and upload it to github at this point?
And you also need to make sure your project never gets mirrored to any forge which is used for similar purposes
I didn't send my library to anybody, but well... I guess it has to remain this way then. :thinking_face: Not sure what to do.
Open source is ruined :thinking_face: What would Richard Stallman do...?
I found some interesting parallels to our current situation in the supreme court case White-Smith Music Publishing Company v. Apollo Company (1909). Music roll producers would take sheet music and turn it into music rolls that could be loaded into pianos to automatically play songs. The sheet music industry was not happy.
> After all, what is the perforated roll? The fact is clearly established in the testimony in this case that even those skilled in the making of these rolls are unable to read them as musical compositions
Basically, they ruled that under the law at the time, music rolls could be produced from sheet music without compensation to the sheet music writer. This eventually led to the Copyright Act of 1909, which created Mechanical Licenses. It's not unlikely that we need new legislation that allows AI to progress, but also somehow provides "fair" compensation for material that is used to train AIs.
> Why do you want to publish your code to begin with?
I want to understand whether I should. I'd like to publish it so that somebody could use it, and it's a pretty nice namespace, but yes, I don't want to publish it because of these bots.
> Or find a license so toxic that training an AI on it would be a legal liability
I could always make my own license, but for this to work I think that the Copilot trial has to reach some kind of conclusion.
I think there are a few court cases challenging training AIs using code with licenses like GPL, but it's not clear whether or not training an AI is considered fair use. If it is considered fair use, then it doesn't matter what license you use (probably; IANAL).
I think the future will probably be similar to the past. There will be owners, who own the means of production; then there will be the goons, to whom owners give a larger chunk of the produced value in exchange for military work. And there will be the rest of us; we work for whatever they give us, doing whatever labor isn't automated yet. Sounds bleak, but I'm not really seeing how things are progressing towards something different.
Does anyone have experience decrypting S3 objects that were encrypted using a client-side KMS key before storage in S3 using https://github.com/cognitect-labs/aws-api/ ? Struggling to figure it out.
Unfortunately I haven't tested that out. One thing to consider is decoupling your encryption from storage (S3), i.e. use https://github.com/lvh/caesium to encrypt/decrypt the bytes first. Keys can often be managed/injected using something like https://docs.github.com/en/rest/actions/secrets?apiVersion=2022-11-28.
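To illustrate the decoupling idea - a rough sketch using plain JDK AES-GCM (not caesium's actual API, which I haven't checked): encrypt the bytes yourself and hand S3 only ciphertext:

(import '(javax.crypto Cipher)
        '(javax.crypto.spec GCMParameterSpec SecretKeySpec)
        '(java.security SecureRandom))

(defn encrypt-bytes [key-bytes plaintext]
  (let [iv (byte-array 12)]                 ; fresh random IV per message
    (.nextBytes (SecureRandom.) iv)
    (let [cipher (doto (Cipher/getInstance "AES/GCM/NoPadding")
                   (.init Cipher/ENCRYPT_MODE
                          (SecretKeySpec. key-bytes "AES")
                          (GCMParameterSpec. 128 iv)))]
      {:iv iv :ciphertext (.doFinal cipher plaintext)}))) ; store both in S3

(defn decrypt-bytes [key-bytes {:keys [iv ciphertext]}]
  (let [cipher (doto (Cipher/getInstance "AES/GCM/NoPadding")
                 (.init Cipher/DECRYPT_MODE
                        (SecretKeySpec. key-bytes "AES")
                        (GCMParameterSpec. 128 iv)))]
    (.doFinal cipher ciphertext)))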
I don't really want to do that, because another Amazon service is storing directly to S3. To decouple I'd have to make a lambda in between the two services, which just sounds like more work.
I see. Although it does seem slightly odd to encrypt/secure the data from Amazon in S3 by handing the keys to a different Amazon service. Although, I guess you could say that they have access to the keys anywhere on AWS (e.g. memory-dumping EC2 instances). Having Amazon do storage directly gives you no chance to modify the data before it is stored in S3:
• Add encryption, add compression
• Change serialization (EDN/Nippy/Transit/XML etc.)
• Migrate between versions of your data (e.g. renaming keys)
• Remove unnecessary data fields or enrich with extra data (e.g. extra context)
• Split data into smaller/different chunks
• Discard and deduplicate data that doesn't need to be stored
• Replicate data to multiple S3 buckets or migrate to a completely different database (e.g. Wasabi).
Agree that setting up an extra lambda seems like extra work, so certainly a trade-off.