This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-03-23
Channels
- # announcements (2)
- # babashka (25)
- # beginners (33)
- # biff (13)
- # calva (13)
- # clerk (82)
- # clj-commons (3)
- # clj-kondo (8)
- # clj-on-windows (23)
- # cljdoc (6)
- # clojure (16)
- # clojure-belgium (1)
- # clojure-dev (58)
- # clojure-europe (53)
- # clojure-nl (1)
- # clojure-norway (15)
- # clojure-uk (2)
- # clojurescript (17)
- # core-async (5)
- # cursive (6)
- # datahike (1)
- # datomic (8)
- # emacs (25)
- # etaoin (21)
- # events (4)
- # graalvm (33)
- # honeysql (7)
- # hyperfiddle (1)
- # lsp (49)
- # luminus (4)
- # malli (18)
- # off-topic (63)
- # reagent (11)
- # releases (1)
- # shadow-cljs (200)
- # timbre (1)
- # tools-build (17)
Does anyone know of any software that allows you to create graphs? (Not functions, but the more general concept with nodes and edges.)
I'm looking for capabilities like infinite zoom and the ability to move the edges with the mouse.
For the web I usually use https://visjs.org (https://visjs.github.io/vis-network/examples). For native programs I've been experimenting with dearpygui's node editor (https://dearpygui.readthedocs.io/en/latest/documentation/node-editor.html), since it's the most expressive GUI library variant I've found where I can make a node editor. After some hours of experimenting I haven't found a way to "autolayout" nodes, but I'll be continuing where I left off in the coming days. I use dearpygui with Hy (https://hylang.org: https://github.com/hylang/hy, https://github.com/hylang/hyrule) so I have something which feels like Clojure.
If you just need to make some graphs, in the context of documentation for example, Obsidian (https://obsidian.md) makes graphs based on links between markdown documents - which might be handy sometimes. I guess that editing the graph there usually means editing links instead of manipulating the graph directly.
Gephi (http://gephi.org) may be worth a try; it's Java-based, and there's a library that lets you include its functionality in your own projects.
I have been thinking on this. Even trying to formulate a question seems difficult. Let's try: seeing this current generative AI revolution (particularly LLMs), how do you think Clojure and its community will land in the next couple of years? Or, even more generally, do you think less popular languages will continue to offer an "edge"? I really see now that "boilerplaters" are getting an incredible boost. I understand the question is very broad and fuzzy, but I feel there is already a lot of thought about the upcoming reshaping of SWE.
Maybe I’m being too pessimistic about the “revolution”, but let me bring you back to the age of EJB 2.0, CMP, and Hibernate. The thing was that for Hibernate and CMP to work, you needed to create POJOs (and some XML) that could basically be derived from the tables in your database. Lazy as we devs are, tools were devised to generate said POJOs and XML from those very tables. This saved you at least a day of tedious work (perhaps more if you also had to write the XML). Now contrast that saving with the lifetime of the project. Also, while your initial generation of those POJOs worked, you’d always end up tweaking the generated code, which then led to problems if you for some reason had to regenerate the code, and so on and so forth. Basically what I’m saying is that if writing the boilerplate is the most time-consuming part of your project, you’re doing it wrong. And if the interesting bits of code in your project can be generated by something, then you’re operating at the wrong end of the Wardley map.
I will share more later on; first I wanted to listen to "unbiased" thoughts. But I will reply that I don't think the Hibernate/POJO stuff is comparable. Although I was not much into Java at the time, I was starting my tech career around then, so I roughly remember seeing that, and the somewhat overly optimistic reactions from the community.
> Or, even more generally, do you think less popular languages will continue to offer an "edge"? I really see now that "boilerplaters" are getting an incredible boost.
Alternatively:
• Why bother with static types anymore when there's a magic thing that's omniscient about your codebase and can infer, in real time, stuff about your code in a way that sort of mirrors a type checker?
• Why does language choice matter at all anymore in this future world where there's a universal transpiler?
• Maybe all programming languages lose, to some extent, to human language interfaces? What if future APIs just expose a single endpoint where you pass it a string like "give me the email and address of the last 50 users who signed up in a flat edn map"?
These aren't really well thought out, but I guess the point is the tech is so new that it's hard to say how things will shape up. Though I am pretty optimistic overall, and particularly optimistic about your fears not coming true; I think it will be only a positive for people who like less popular langs.
> What if future APIs just expose a single endpoint where you pass it a string like "give me the email and address of the last 50 users who signed up in a flat edn map"?
I remember we discussed this a lot during my college time. The internet was already well established, and there were talks of autonomously communicating applications, ideas around generalized syntaxes for API descriptions that could be programmatically consumed, etc.
Like, if these things do actually become good enough to the point where they would significantly help boilerplate-heavy langs at the expense of concise ones, I think there are just going to be much more profound impacts on the field in general, and I don't think that when the dust settles "Java is more popular now and Clojure is less popular" will be a takeaway.
I saw my first ChatGPT-produced bug in production this week. I can see all sorts of cleanup jobs being needed based on the current set of tools out there.
@U017AGUF30R interesting... so was it your company's code or private? What did the process look like (who generated it, who reviewed it, who discovered the bug)?
I'm hoping to have my mortgage paid off in three years. Just give me three more years of a SWE salary; then I can take a big drop and should be fine. But I really just want 3 more years before I'm replaced by a robot. Do you think we have three more years?
It was a regex that was subtly wrong; it made it through review to prod and caused problems. It should have been tested more thoroughly, but part of the issue is that the guy who wrote it didn't fully understand it.
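Not the actual regex, but a hypothetical Clojure illustration of how subtly wrong one can be: an unescaped dot looks right in review but matches any character.
```clojure
;; Intended to match version strings like "1.2.3".
(def version-re #"\d+.\d+.\d+")       ;; "." matches ANY character here
(re-matches version-re "1.2.3")       ;; => "1.2.3"  looks fine
(re-matches version-re "1x2y3")       ;; => "1x2y3"  wrongly accepted

;; The correct pattern escapes the dots:
(def version-re-fixed #"\d+\.\d+\.\d+")
(re-matches version-re-fixed "1x2y3") ;; => nil
```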
I haven't personally used the code-related tools such as Copilot. I really like the ones that spit out text; it's really useful for generating stuff that is otherwise tedious! But I have watched and read from people who have used them. Even those who are excited about these code generation tools see a lot of caveats and issues with them. Apparently they are really only useful for actual boilerplate code, and otherwise they can even be dangerous or at least annoying. Another use is basically as a Stack Overflow competitor, with similar but slightly different issues around that.
Both of these things seem incredibly uninteresting to me. The speed at which I type code is never the bottleneck. What I want is not speed, it is accuracy and flow. I want to be able to easily and seamlessly move around code. Boilerplate is basically a solved problem since Lisp macros were invented (a tiny sketch below).
Also, judging from the demonstrations/examples: this is not how I write code at all! I don't invent a bunch of somewhat similar types and things that I then need to type out very fast. I first try a bunch of stuff and write throwaway code. Then I think about how to structure the data, often away from the keyboard. Then I write out examples of that data, manipulate it, and so on. These tools don't help me at all with this kind of thing. My most cynical interpretation of why people like them is that they simply don't understand how to write code that is DRY and expressive.
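That tiny sketch: a hypothetical macro that generates near-identical accessor functions from a list of field names (the names here are made up for illustration).
```clojure
(defmacro def-getters
  "For each field, defines (get-<field> m), i.e. (get m :<field>)."
  [& fields]
  `(do ~@(for [f fields]
           `(defn ~(symbol (str "get-" (name f))) [m#]
              (get m# ~(keyword f))))))

;; One line replaces three near-identical function definitions:
(def-getters name email address)
(get-email {:email "x@example.com"}) ;; => "x@example.com"
```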
@U013YN3T4DA I'm pretty sure most of SWEs have more than 3 years left.
@U01EFUL1A8M, I think the bigger point isn't about typing speed. I tried, assuming I had no programming knowledge at all, to create, for instance, an API that talked to a db. Just via prompts I had a working API, with unit tests, code to talk to the db, and the SQL written.
Now, I still needed to understand some coding to know about the project structure, making the solution, compiling it, etc. But I could teach a BA that in no time.
And I'm sure eventually the tools will be so good that they will create all the files, build scripts, etc. too.
> Boilerplate is basically a solved problem since Lisp macros were invented
I wish this were catching on more. I did Clojure professionally for 4 years, and am having a hard time finding companies that do it (at least ones that want to hire me). I've also been hearing, ever since I started, that Clojure companies struggle to hire and are moving more to JS/TypeScript.
and who cares if the code is repetitive, not DRY, not expressive, as long as GPT understands it.
Where I've found Copilot to be less useful / entirely useless is in giving suggestions in our massive enterprise solutions. I think devs mostly working in there are probably safe for at least a bit longer! Hell, some people can work in these systems for years and still not understand them.
I've worked in code bases before that if I pointed GPT at it, it would have an AI existential crisis
@U013YN3T4DA I would say the majority of our work is actually testing (in spirit, not in function). Even the coding part is often testing whether our assumptions are correct (thus we use the REPL to get there quickly).
A couple of posts above there is a cool story with AI + REPL. I am actually starting to wonder whether I should incorporate ChatGPT-4 into how I design systems: so that it can receive some common data structure, instructions on what to do with it (e.g., using this map, write a function that talks to S3, obtains objects, stores them in /tmp, and assocs the full path to the :full-path key in the input map), and info on what I would like it to return... and it could just try multiple paths using the REPL to get there.
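A minimal sketch of that kind of function, using Cognitect's aws-api; the :bucket/:key shape of the input map is just an assumption for illustration.
```clojure
;; deps: com.cognitect.aws/api, com.cognitect.aws/endpoints, com.cognitect.aws/s3
(require '[cognitect.aws.client.api :as aws]
         '[clojure.java.io :as io])

(def s3 (aws/client {:api :s3}))

(defn fetch-to-tmp
  "Downloads the S3 object named by :bucket and :key in input-map
  to /tmp and assocs the local path under :full-path."
  [{:keys [bucket key] :as input-map}]
  (let [dest (io/file "/tmp" (.getName (io/file key)))
        resp (aws/invoke s3 {:op :GetObject
                             :request {:Bucket bucket :Key key}})]
    (io/copy (:Body resp) dest) ;; :Body is an InputStream
    (assoc input-map :full-path (.getPath dest))))
```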
@U013YN3T4DA so basically it did scaffolding for you? We've had that since 2006-ish in modern web frameworks?
No, it wrote the API scaffold, it wrote the SQL for the db queries, it wrote the SQL to create the schemas, it wrote the code to call the db (using an ORM), and it wrote tests to test the controller methods.
At the end I had a fully working API I could deploy, scripts to create the db schema, and queries to get the data I wanted.
I think that's impressive for what is v3.5. A couple of years ago that would be unimaginable, considering I was just talking to it in plain English. Where is it going to be in 5 years' time?
The Clojure community seems to be moving away from needing that kind of stuff regardless. A lot of the people who do web dev stuff just generate all of these things via data-driven models and configs: routing, tests, database access... I'd rather design a data model that acts as the source of truth for these things and produce them from that model, than have an IDE that spits out a whole bunch of low-level code for me.
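For example, with reitit the routing really is just data; a minimal sketch (the routes themselves are hypothetical):
```clojure
(require '[reitit.ring :as ring])

;; The route table is plain data: it can be inspected, validated, or used
;; to generate tests and docs, instead of being scaffolded-out code.
(def app
  (ring/ring-handler
   (ring/router
    [["/users"     {:get (fn [_] {:status 200 :body "all users"})}]
     ["/users/:id" {:get (fn [{{:keys [id]} :path-params}]
                           {:status 200 :body (str "user " id)})}]])))

(app {:request-method :get :uri "/users/42"})
;; => {:status 200, :body "user 42"}
```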
We had one big paradigm shift which was the step from machine code to higher level languages. We can express things in a more precise and expressive manner. This isn't like that though. It is less expressive and less precise. It is basically a way to give up on programming instead of improving on it.
You might rather do that, but the people who pay your salary just want an endpoint. If they can get it cheaper by having the BA talk to an AI, better believe they will go for that. The AI won't moan that the code needs rewriting, either.
I have found the AI chats to be a minor change over what we had before (the ability to search and find answers). I find that it's actually a slight regression for most tasks I have to do, as it often produces incorrect answers presented in an overly confident way. But I assume it will improve and become slightly better in time. It certainly should reduce noise, but make no mistake, that will come at a cost: if you let another company steer your life, they will take it in a direction they want to go. Again, this is already happening, but as the tools become more powerful, so will the pull of their creators.
@U013YN3T4DA What greater purpose did the software project the AI created have? I have found the biggest issue isn't software, it's having something worth building. Sure, the AI helps, but I feel like at a certain point the questions we ask it are as hard to get correct as the answers it gives. If we don't have interesting questions, how can it produce interesting and useful answers? And how else can we learn to ask interesting questions if not through trial and error?
> You might rather do that, but the people who pay your salary just want an endpoint. If they can get it cheaper by having the BA talk to an AI, better believe they will go for that.
From what I've seen so far, I very much doubt that this is feasible at all?
Often people in the business don’t fully know what they want. The current set of chat bots are quite compliant and don’t really push back
@U01EFUL1A8M Why have a BA at all? Why not just have the AI talk directly to the person who has a question?
I'm just thinking about the stuff I'm working on right now. One is a bespoke web UI, fed from multiple sites, that cannot be expressed with standard methods. The other is a reliability issue that I need to fix via caching and temporal logic. How is generating boilerplate even helping me here? There is no boilerplate to begin with.
> Why have a BA at all? Why not just have the AI talk directly to the person who has a question?
Because my job is to reliably store, process and display information in unique ways.
There is no person who has a question. They are typically discovering what they want to ask by looking at the things we as programmers show them. Someone needs to do the thinking and design to make that happen.
Sure, I'm saying why not have the AI produce on-demand suggestions to help them seek their answers? I have felt for years that many web applications complicate front end and back end communication by introducing intermediate query languages. Why not have the front end client make its request directly to the database? There are reasons, but now people are getting excited about an AI auto-generating API and database code, which would require the AI to know the database and its purpose, and I imagine what information is in the database. It's confusing, and it starts to raise, for me, the question: why then not just ask the AI questions about the data directly? Is the reason related to security? Performance? etc...
For one, asking a SQL database is much less computationally intense than asking an AI. Maybe a lot of our historical software and hardware choices are now largely based on pre-existing limitations that are less valid. The same way Rich argued it doesn't make sense to have mutable databases and store things in slots, maybe it's now just as true that we are seeing a shift towards even more flexible ways of storing data, like neural nets.
If the user actually has questions: yes! These tools will provide much leverage. But much of the work I do is not about that. It's about making structured information discoverable with a nice presentation. The user just knows "the stuff about X is at this URL, let's see what they have".
But yes, what you are talking about here seems to be very much a hot use case for these tools.
As far as building applications "out of" LLMs (e.g. stuff like langchain) goes, I have my doubts about whether anyone actually wants to pay for the levels of energy consumption required for the continuous use of LLMs in that way (to say nothing of the environmental costs). I strongly suspect OpenAI and Microsoft are marketing GPT-4 at "loss leader" prices so they can nab early market advantage. However, they've cut people off from things like Codex (https://aisnakeoil.substack.com/p/openais-policies-hinder-reproducible/comments), which suggests to me that they're already trying to contain the "eye-watering" (according to Altman himself) costs somehow.
It seems like they're trying to make their models more energy-efficient, but none of the efficiency gains I've seen written about have obviated the fact that each successive benchmark improvement takes exponentially more compute than the one that came before it. At some point the check will come due.
Anyone who built a business on Twitter's API prior to last year will tell you that depending entirely on someone else's API and technology for your business can leave you uniquely vulnerable to changes in the TOS for that product. I don't know whether the "locally run" versions of LLMs will really solve this problem, whether due to reliability concerns or their own compute costs.
My more general thought on the topic of using non-deterministic AI systems as important components of software was already said long before LLMs came onto the scene, by Leslie Lamport (http://worrydream.com/refs/Lamport%20-%20The%20Future%20of%20Computing%20-%20Logic%20or%20Biology.pdf), creator of TLA (https://en.wikipedia.org/wiki/Temporal_logic_of_actions):
> The fundamental problem with approaching computer systems as biological systems is that it means giving up on the idea of actually understanding the systems we build. We can’t make our software dependable if we don’t understand it. And as our society becomes ever more dependent on computer software, that software must be dependable. ...
> When people who can’t think logically design large systems, those systems become incomprehensible. And we start thinking of them as biological systems. Since biological systems are too complex to understand, it seems perfectly natural that computer programs should be too complex to understand.
> We should not accept this... If we don’t, then the future of computing will belong to biology, not logic. We will continue having to use computer programs that we don’t understand, and trying to coax them to do what we want. Instead of a sensible world of computing, we will live in a world of homeopathy and faith healing.
> We should not accept this...
This is the hardest part. We are not a single mind. I might not find the trade-off worth it, but history has shown time and time again that it's willing to sacrifice tomorrow for today.
So even if an AI model ends up burning the world's energy so we can have AI-generated art, when we already had enough art that no person could see it all in their lifetime, it won't matter, because novelty trumps conservation right up until the moment the person requesting it starts to suffer.
I'll stop lamenting now and get back to work.
"it often produces incorrect answers presented in an overly confident way" -- this is my biggest concern with this tech, and certainly has been my experience so far. The improvements in the tech just seem to make it more plausible and sound more confident -- I haven't seen much evidence of an improvement in correctness yet. So far, almost every conversation with ChatGPT has led to very confidently delivered factually wrong information 😞 I don't have Copilot installed at work (because that requires a paid subscription), but I do have it installed on my OSS machine -- yes, I keep the two separate -- because GitHub (i.e., Microsoft) granted me free usage based on my OSS maintenance work. I find it slightly useful for suggesting tests, or at least for sketching out the structure even when it gets the details wrong (which it mostly does). It's also not very good with closing parentheses correctly in many situations (because it is, after all, suggesting text not code really). Microsoft just announced an updated GitHub Copilot version that will mix'n'match Codex-based code suggestions with full ChatGPT 4 explanation/discussion features... It will be interesting to how this will work: https://github.blog/2023-03-22-github-copilot-x-the-ai-powered-developer-experience/ I think this stuff may evolve to eat away at the low end of the developer food chain but not the mid-to-high end -- but then we've been promised that repeatedly over the decades. I've been a commercial developer for about forty years at this point and I've seen all sorts of "productivity improvements" promised that haven't delivered really anything dramatic: the systems we build have gotten much more complex and that's eaten up any of the improvements provided by automation -- and the programming community still keeps growing. I don't see AI evolving fast enough to get ahead of that growth -- of either system complexity or community growth -- but it might slow that growth down a little?
To contribute without really contributing: I've had lots of experiences with companies that would gladly replace all devs with ChatGPT. But that's because said devs wrote code quite like ChatGPT: wrote code until the compiler was satisfied, then made changes until it was correct, shipped to prod. When somebody complained about a lack of tests, they made tests that basically weren't testing anything (or were testing the wrong thing) and were done with it.
One particular company comes to mind: they basically had a process that involved some very complicated code, and the process literally didn't work; there was someone working full time to manually correct errors in the batch files that were generated by the system. That was tedious, prone to error, and actually completely illegal (it infringed a couple of laws, not a single one, and some of these were even international laws).
We were hired to fix the process. One of the directors didn't like this, and basically sabotaged the whole thing. For this company, ChatGPT would work flawlessly, because that's already how they are working. I personally worked at 3 other companies that had almost the same problems.
So, what I believe is that companies that actually want quality, that want people to understand trade-offs between solutions, etc., will not migrate to these AI tools. For other companies, sure; I mean, there are companies that could replace their developers with ChatGPT the way it works today and would probably not notice... but I also argue that those companies are not the ones I want to work in, so 🤷
Reminds me of some Java conferences I've been to, where I got talking to 9-to-5 devs who work on massive "enterprise" codebases and they deal with so much boilerplate code and rote work -- and their minds were blown by nearly every talk at the conference (even really basic talks). So, yeah, I'm sure there's a class of developers that could be replaced by ChatGPT, churning out thousands of lines of near-identical, barely-working code, day-in, day-out 😞
I'm using it as a tool here and there on small experiments, and I've been shocked at how much better it has been getting in the last couple of months. The first Copilot didn't do well with ClojureScript at all, causing me to ditch it quite quickly; it made me think "hmm, why would it say that" more than it actually helped. But after a couple of days with GPT-4 I have been amazed at how well it understood my instructions; it will often do the right thing on the first shot, with very bare instructions. When it used outdated APIs, just dumping in relevant documentation would often fix it without extensive "this and that is wrong" instructions. When it did something technically correct but ambiguous because of my instructions, it often "got" my follow-up comments.
Now I don't know how far this will scale, but Copilot is not that long ago, and GPT-4 is obviously -much- better. The context window of GPT-4 is also much larger, so you can dump in more context (docs/existing code). The speed of progress is terrifying, and I think a lot of people are actually underestimating what it can/will do. Energy efficiency seems to be something they can still optimize (see the recent 10x price reduction on GPT-3.5-turbo), and the approach people are taking is to have the LLM interface with other (cheaper) systems for structured/unstructured queries to enrich the context of the LLM.
Many AI experts seem to be more reserved than me about this tech, so I'm probably just too hyped (and too scared, honestly; not for my job, but for how powerful it seems). Do have a look for yourself, I'd say. ChatGPT Plus ($20/month) gives access to GPT-4.
I've been thinking about this a bit more... I think I see a use case for it that I didn't think about previously: frontend work. Don't get me wrong, I love making nice GUIs. I care about UX a lot, to the degree that any flaw in GUIs that I make actually haunts me. But a large part of frontend work is very tedious and repetitive in a bad way: you can't really abstract away a huge amount of it. Sure, you make re-usable components and utility libraries. But GUIs are at the edge of your program. They are the ultimate concretions. If you start parametrising all the things, you get worse performance and make your code less readable. They are more like data than code, if that makes sense. It's very hard to keep GUI code organised and there is usually a lot of it. Pushing pixels, as I call it, is very time-consuming work, but it needs to be done. So I feel like work like that is where these tools can help a lot.
There's usually a part that is "special" about any GUI, where these tools cannot help, though. I referred to this in an earlier comment. Those parts need my focus more than the others, so I'm thinking it might be worth reaching for these tools for the parts that are repetitive and tedious.
I did have a funny interaction with Co-pilot GPT a couple of weeks back. I had written a method, and then went to add a comment above it. I started it with
// This is
and it autocompleted.
// This is a dirty hack because there is no way to do blah
Where blah was quite accurate 😄
With Will Byrd and Matt Might, I am organizing a workshop entitled Declarative Programming in Biology and Medicine (DeclMed), affiliated with the International Conference on Functional Programming, which will happen in early Sept. 2023 in Seattle, Washington. The workshop's call for submissions (https://icfp23.sigplan.org/home/declmed-2023) is open. Would appreciate any help in passing along the link to folks who might be interested in applications of FP or declarative/logic programming in life sciences.
might want to cross-post in #C03RZRRMP