This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2022-12-11
Channels
- # adventofcode (52)
- # announcements (3)
- # aws (2)
- # babashka (36)
- # babashka-sci-dev (4)
- # beginners (69)
- # biff (45)
- # calva (9)
- # cider (3)
- # clara (8)
- # clj-kondo (24)
- # clojure (20)
- # clojure-dev (12)
- # clojure-europe (12)
- # clojurescript (2)
- # conjure (1)
- # emacs (17)
- # lsp (69)
- # malli (12)
- # off-topic (32)
- # polylith (2)
- # re-frame (4)
- # releases (2)
- # scittle (6)
- # shadow-cljs (21)
- # tools-deps (10)
- # vim (11)
- # xtdb (11)
Is OSS development even good for us? There are GPT and Copilot out there, and I can only imagine they are being trained on OSS. In the future, when they become so good that there are fewer jobs out there and those jobs pay less, why should we continue writing OSS? It's harming us; it's shifting the power away from us developers.
Freedom of information is a good thing. The problem you are describing is that not enough people have the information that would allow them to write and distribute software in a way that would resist centralizing forces. We can write OSS to address these problems.
It's an interesting topic. I've always considered OSS itself bad for developers from a financial point of view. Imagine if there was no Postgres, no Clojure, no React, no Rails, etc. Every company would need to hire even more developers to implement all these, or there'd be additional companies building and selling them, which would also be hiring. But from a convenience point of view, and in terms of what a developer can achieve on their own or with a small team, you'd get a lot less done without OSS. And that also means fewer companies would have the means to produce more complex software, since they wouldn't be able to hire the number of developers required. In that sense I see AI models as following the same trend, as long as the models are also OSS or reasonably cheap. It brings you added convenience and allows one developer or a small team to do even more on their own than they could before. Which also means there is less reason to need more developers. The counter-argument I've heard is that OSS allowed more product innovation, which created new markets and thus more jobs. So while it lets each dev be more productive, thus reducing how many devs you need to get something done, it also arguably created more companies that want to hire developers. I don't know if I buy that, but if you do, I think people might claim the same for AI-assisted coding.
My 2 cents: Well, I have written a tiny library and decided not to share it. If we get a nice sharing model, then I may do it.
Assuming that OSS is indeed "bad for us", the solution would be to have all companies get rid of all the OSS code they use and either hire developers instead or pay other companies/contractors to write the code for them. That would significantly drive costs up. Guess who would not benefit from it 🙂 The tools aren't the problem...
For me, open source development would be a nice thing because I could then do a presentation about my project and show that I can do something. That might open some opportunities for better jobs. But I've read in a book that things like social networks depend on naive oversharing. And that's what we actually have with source code, shared for the same reason: publicity. GitHub is a social network, there is no doubt about it. If a person doesn't feel secure, they won't overshare. And now, with ChatGPT, sharing code isn't so safe anymore, because who knows what it means to share at this point.
Good replies, I just wanted to share my thoughts. I don't like AI being trained on the 'commons', because crawling and computing all this data requires such high resources that it's by design something only a corporation can do, and no one on their own. But I love that so much OSS is out there that I can use for myself and at the company I work for.
Use a niche language no corpo would use, that way, publishing your code only burns their cpu cycles 🙂
People will always need support for the last 20% they can't figure out via GPT and they will need bugs solved. Without a deep understanding of the software, they will rely on the OSS devs for those projects.
@U04V15CAJ, I am unsure how long your argument will remain valid. I remembered this from several days ago: https://twitter.com/lauren_wilford/status/1600337819725365248
Time for some rollercoasters bois: https://survivingmomblog.com/wp-content/uploads/2021/03/THE-FIVE-STAGES-OF-THE-GRIEVING-PROCESS-1-1.png
Even before AI-generated code, we have been teaching people for the last 50+ years that computer software is a roll of the dice. If it doesn't work, just shrug, restart, and pray. If a project is over budget and under scope, just shrug and push the cost down the pipeline, etc. So why wouldn't people take the same approach to the cheap, low-quality software that tends to take over every other consumer market? Sure, you'll always have the "lovingly hand-crafted in Vim" boutique shops, and the "modding culture" shops modifying generated off-the-shelf software, but I'm wary we may see a large wave of companies that go down the AI-generated software route. And lots of the existing devs will end up as "QA" (not to diminish what a real good QA does now): refactoring/fixing/rewriting the generated code, but with a different salary and less influence to steer the actual product.
https://githubcopilotlitigation.com/ It is far from a legal certainty that OpenAI can continue to enclose the intellectual commons the way they have been. The "fair use" justification for using training data in a way that contradicts a software project's license has never been tested in a US court.
Even though I'm personally against data mining without licensing from the author of the data, and I hope this ends up as a win for the authors and not for the data miners, I do feel that in the long run it's inevitable. Partly because it's a global market: in the UK, for example, they've already made data mining without licensing or copyright legal, and you'd have China, Russia, and other countries as well. But also, even if they have to pay to license the code they mine, most companies working on such AI have the means to pay, or can use their own private code base for training. So either way, I think we are looking at a future where you have AI writing code, or at least assisting in the writing of code. My personal take is adapt or die. If you're a top developer and you can become even more productive by embracing AI, you should, to remain competitive within a market where fewer developers will be needed, meaning the better ones will get the jobs. And the truth is, the need for developers was only going to get smaller anyway: as more software became open source, as more low-code or no-code tools were developed, as higher-level languages were built, as cloud services became the norm, as AI was developed, it has always been the case that fewer developers were needed to get a similar job done. We haven't seen the impact because the overall demand for digitization kept growing; the software market grew so fast that we didn't notice fewer developers were required than before, since the overall demand for software developers kept up. Nowadays, we're probably closer to saturation, so we'll see more of the impact of better productivity. And like all other productivity improvements in the last 50 years, the benefits will go to shareholders, founders, or consumers, not to workers.
Open source offers more tools, which lets companies hire fewer devs. These tools get used in all the possible wrong ways (JavaScript, Babel, Webpack), which makes the companies need more developers to keep this from blowing up and killing their products
AI will be the same: companies will hire fewer developers because the AI will write the happy path flawlessly. Then somebody will ask for a Café 😍
and then things will blow up in interesting and exciting ways. Now the company needs to hire 3x more devs than it originally needed to find and fix all these edge cases
In the end, AI will be another tool that experienced developers will know how to use, and hype-driven developers will abuse to make everything worse. I just hope we don't get a build system that needs to be written by an AI, because it's too complex to be understood by humans..
> because it's too complex to be understood by humans
Too complex for one human, too easy for one AI
I imagine something like an Xcode project file, but with more AI. And you have to check it into the VCS.
> too easy for one AI
Does an AI actually "understand" what it's doing? It's basically a Chinese Room, but with code. I mean, look at the GPT stuff where you convince it that it's wrong, and it basically says "yes, I was wrong, 1+1 is not = 3, 1+1 is actually = 3". And you're like "well... you just said the same thing twice, trying to convince me these things are different..."
It's a longer way of saying "did you restart your router?" 😄
Hi, we're putting a Clojure (REST API) app live in the next while, and as we move towards that milestone I'm thinking about monitoring and logging. It's a mission-critical app; we're using GCP Kubernetes and cloud Postgres. I'm wondering if anyone has any advice on what logging/alerting services work best? We could just use the default GCP Cloud Logging impl (which we get out of the box), but I would be willing to trade some additional cost for reduced developer effort on our side. I've seen plenty of people using things like Datadog and New Relic for monitoring/alerting, and it would be good to hear about people's experiences of these tools. We are already creating Prometheus-format metrics on our server, so this would be a path of least resistance for us. We are a startup, so a pay-as-you-go model works well for our use case; if/when we are successful and these costs mount, we can always revisit that, which would be a happy problem to have. So, if anyone has a logging/monitoring/alerting setup that allows them to see the health of their system at a glance, quickly identify problematic trends in their apps or infrastructure, and alert team members when things go badly wrong, all without requiring an army of people to manage such a set-up, I'd love to hear from you. Thanks in advance.
sumo logic is nice for log spelunking, particularly when you log json
Thanks Cora, and is that the tool you use for all alerting/monitoring related activities, or do you rely on other tools in tandem?
I'm really happy with Datadog for logs and metrics; it is great value for money and it makes things really easy for our team. It is much easier to set up and keep running than dealing with the ELK stack, for example. The many integrations are really helpful. I'm hearing great things about Honeycomb and I want to research that next for distributed tracing
the alerting in sumo isn't all that great
but the log spelunking is nice
datadog is great for metrics, tho
At All Street we use two approaches (a rough sketch of the second is below):
- JournalD logs: we make these available via a read-only, authenticated internal Web UI.
  - We add our own custom searching and display based on our logs and use case.
- https://uptimerobot.com/ hits a secret test endpoint every 5 minutes that runs integration tests (databases and other services are responding sensibly).
  - This alerts the whole team on Slack if there is an issue.
Both have no recurring costs, no external data sharing, and no vendor lock-in.
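Not code from the thread, just a minimal sketch of what such a secret test endpoint might look like as a Ring handler; the namespace, check functions, and the use of next.jdbc for the database check are assumptions for illustration:
```
(ns example.health
  (:require [next.jdbc :as jdbc]))

;; Hypothetical lightweight integration checks; each returns true/false.
(defn db-ok? [datasource]
  (try
    (some? (jdbc/execute-one! datasource ["select 1"]))
    (catch Exception _ false)))

(defn service-ok? [ping-fn]
  (try (boolean (ping-fn)) (catch Exception _ false)))

;; Ring handler for a secret endpoint that an external monitor polls.
;; Returns 200 only when every check passes, so the monitor alerts on any failure.
(defn health-handler [{:keys [datasource other-service-ping]}]
  (fn [_request]
    (let [results {:db    (db-ok? datasource)
                   :other (service-ok? other-service-ping)}
          ok?     (every? true? (vals results))]
      {:status  (if ok? 200 503)
       :headers {"Content-Type" "application/edn"}
       :body    (pr-str results)})))
```
Pointing an external checker at a route backed by something like this gives you the "databases and other services are responding sensibly" behaviour without exposing any internals.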
As good as the third-party vendor log managers are, you often can't beat pulling the logs down locally and running Unix tools like grep, or slurping them into a Clojure REPL, or creating custom UI visualisations (e.g. we log out EDN data structures in particular cases for analysis). So we've found it better to do without a third-party vendor log manager for now.
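As a rough illustration of the "slurp them into a Clojure REPL" idea (the file name and the shape of the log entries are made up, not details from the thread):
```
(ns example.log-analysis
  (:require [clojure.edn :as edn]
            [clojure.java.io :as io]))

;; Assumes one EDN map per line, e.g. {:event :order/created :ms 42 :user-id 7}
(defn read-edn-log [path]
  (with-open [rdr (io/reader path)]
    (->> (line-seq rdr)
         (keep #(try (edn/read-string %) (catch Exception _ nil)))
         (into []))))   ; realise the seq before the reader closes

(comment
  ;; Pull the log file down locally first, then explore it at the REPL.
  (def events (read-edn-log "app.edn.log"))
  ;; e.g. the ten slowest events by :ms
  (take 10 (sort-by :ms > events)))
```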