#jobs-discuss
2023-05-17
Martynas Maciulevičius 13:05:52

Today I got a suggestion that I should create a talk so that companies would value me more. But I feel conflicted, given what ChatGPT and similar tools can already do and what they have the potential to do. If I present and open-source any of the code I worked on privately, ChatGPT-like models could use those algorithms against me and the whole programming community. So I don't know what I should do. I already moved all of my somewhat meaningful code from GitHub to GitLab, but there is no way to remove it from the models themselves. It's not that I have anything significant, but I feel discouraged that I wouldn't be needed anymore if I post my code online. I've used open-source tech a lot and for a long time, but if I publish anything of my own and can't protect the copyright (whatever that means, especially if these Codex-like models don't bother to respect it), then why would I bother publishing the sources in the first place? Then it doesn't make sense to publish any code. And without code there can't be presentations about what I do (if that's even remotely interesting to anyone). So I feel locked in between one side blindly saying "publicity is alright" and my inner thoughts saying "we have no idea what we're up against". Any thoughts about it?

p-himik 13:05:25

> I feel discouraged that I'd not be needed if I post any of my code online.
Your value is not in your code but in your knowledge and reasoning, apart from many other things. The text that you produce comes pretty much last.

👍 16
💯 8
vemv 13:05:01

> ChatGPT-like algorithms could use these algorithms against me and all of the programming community.
Probably, medium-term, all knowledge commonly available to ChatGPT will be within reach of every mainstream model, including community-driven ones, which will be free of charge, forkable, uncensored, etc. By which I mean, LLMs seem like a net win in terms of democratization of knowledge for mankind. There will be no single corp like OpenAI monopolizing this knowledge. I'm not tremendously afraid for my job security. I reach for ChatGPT like 0.1% of my billable time. It's just the next iteration of Google/StackOverflow/etc - simply a productivity booster for specific moments. I do believe though that there will be a "first mover advantage" for whoever builds something really game-changing with LLMs. The current crop of outcomes, tools, startups etc. isn't too impressive to me.

Martynas Maciulevičius 14:05:45

> seem a net win in terms of democratization of knowledge for mankind
A net win... but for whom? Maybe somebody would be able to reach "above the model", but for how long and at what cost? Humans have finite effort to do something. And if this is the way it goes, then it's only a matter of time until the model is able to keep up.
> seem a net win in terms of democratization of knowledge for mankind
If all you have is knowledge and it's democratized away, then you are left with something that everyone has. And this tool doesn't democratize patents, so only the patents retain value: you can't use this newly obtained knowledge because of them. And on top of that, now you know about these patents.

Thomas Moerman 14:05:13

(tangent) an equivalent of your question is discussed in the context of musicianship and musical identity in this podcast episode: https://www.youtube.com/watch?v=4bT4vsIqdRI

Martynas Maciulevičius 14:05:58

> From the link posted by Thomas Moerman:
> <...> really comes down to one thing... my autoloads. So I have such incredible autoloads for all of these setups. It is quite literally a couple of power switches and everything is at unity volume, everything has MIDI assigned to it, everything has a bespoke compressor/EQ starting place on it, all my effects and synths are all set up. So with a couple clicks of power switches and then launching an autoload for whatever that task is... I'm ready to write/record. It cuts all the fat out of the process of setting it up each time I go to do it. <...>
I didn't listen to all of it; I listened up to the point where he was talking about his setup. What I didn't hear was him saying that his autoloads improve each time he touches them, or that he shared these autoloads with other artists who were impressed by how productive he was. If the setup were actually shared between artists, then this single artist wouldn't have any advantage over the others. So IMO these autoloads aren't comparable with GPT. They are tools, but they can't be compared to something that is constantly retrained.

Thomas Moerman 14:05:25

There is a section on AI and what it entails for musicians (more toward the end, if I remember correctly).

Thomas Moerman 14:05:51

So the gist is that at some point, AIs will be able to emulate the style of any artist with enough published work, which basically commoditises their artistic qualities. Which is, if I understand correctly, the issue you see with open sourcing your work.

Martynas Maciulevičius 14:05:13

> Which is, if I understand correctly, the issue you see with open sourcing your work.
I think this is what I mean. I'm not sure anymore.

Martynas Maciulevičius 14:05:09

> basically commoditises their artistic qualities
There is this capture of artistic qualities, but there is also the possibility that it could create designs if it were given some kind of one-shot source. For instance, it could be given two systems and asked to merge them in some way. Maybe adjust the data models or something. Or it could work on the data models directly.

seancorfield 15:05:50

I think these language-based AIs will be like so much automation of the past: they'll (partially) replace low-skilled jobs but create more high-skilled jobs. I can see these tools writing a lot of our tests for us and a lot of the boilerplate in many contexts -- essentially we'll be able to use them as higher-level languages to let us express problem solutions more directly, without having to agonize over a lot of the low-level details. We saw this when we moved from machine code to assembly language up the chain to stuff like ALGOL and so on. As Clojure programmers, we're already working at a higher level than the vast majority of developers so we're actually in a better position than many other programmers to take advantage of this "leveling up".

Not everyone wants to "level up", of course, and we've seen that too, as lower-skilled jobs have been replaced by automation and some workers don't want to "start over" and learn new skills: they liked their jobs and they wanted to keep doing them... but those jobs are gone, replaced by other jobs.

As for open source, I think the basic question we need to ask is: why do we open source our software in the first place? Do we want other people to use it? (yes) Do we hope other people will contribute to it? (yes) Those things are still true. If you have something you believe is truly novel, patent it. If you have something you believe is truly commercial, sell it -- don't give it away. If you're doing open source for "reputation", that's still true: you can point to your "portfolio" and it still has value.

Will these AIs replace the use of some open source projects? At the low end, quite probably. If your open source project just reduces some boilerplate around the use of some other feature or library, maybe people using AI won't reach for your library because the AI can already generate that boilerplate. But "anyone" could have written that boilerplate and published it as a library so I would argue that doesn't contribute much to your "reputation" anyway.

My experience with these AIs so far is that they are very confident about the wrong information they give you. Sure, they can get some of the low-level stuff right (but so could "most" people) -- and of course you always hear of the unusual successes because those are what tend to "wow!" people but you don't hear so much of the failures or the hum-drum day-to-day grunt work they can do... because that's not as newsworthy.

👍 14
☝️ 2
dorab 17:05:11

+1 to what @U04V70XH6 said above. And, to address the earlier question of "Net win ... but for whom?", the big win is for society or humankind - all of us, collectively, not any particular individual directly. Somewhat related to "a rising tide lifts all boats" (though admittedly, some boats more than others).

Even with open source, though you can't really charge for the software itself, there are other revenue models possible: charging for support, advice, maintenance, bespoke modifications, timely response, etc. Similar revenue models will develop (or already exist) for AI models. Even in the current LLM Cambrian explosion, the open-source models and training data are iterating much faster than the closed-source models and are benefiting everyone.

In this realm of commoditization, one way to differentiate is any unique data you may have. If you have (say) some data that, because of the position you're in, only you have (or better still, only you can have), then you can charge for that. At the end of the day, you always have to figure out what your particular unique skills/knowledge/value are that you can charge for. Find a way that you can add value and charge for it. There always is some way to do that. Life is not a zero-sum game.

practicalli-johnny 17:05:05

To the original point, don't let AI or anyone else's work stop you from sharing; it's an effective way to learn, especially when others give you feedback. If an AI takes the work I do for http://Practical.li, hopefully it will mean better code for everyone and code I like working with 😄 I expect to have retired before AI has a significant impact on the highly skilled world of software engineering (in about 15 years' time). Till then, unless you are inventing something incredibly radical, it will just be lost in the stew of billions of other data points consumed by the current AI systems. If that isn't the case, then bring out the https://en.wikipedia.org/wiki/Sabot_%28shoe%29... time for some sabotage 😂

cddr 17:05:40

I don't think it's too much to worry about. I think a lot of this AI hype is there to sell hardware that was bought to satisfy demand for bitcoin mining. But that demand went away (or at least didn't grow enough to exhaust the increased supply of GPUs), so everyone that's long on GPUs is jumping on the LLM hype train.

👍 4
seancorfield 17:05:38

@U065JNAN8 You know, I thought it was just the cynic in me that was seeing former crypto folks pivoting to LLM stuff but hearing someone else say it makes me think I'm not just being cynical 🙂

practicalli-johnny 19:05:11

All that spare capacity from defunct crypto miners has to find someone to pay for its use, especially since China and other countries started shutting down crypto mining (and currencies folding too, of course). The hardware usage is very similar between crypto and AI (according to the people selling access to that hardware, anyway).

vemv 11:05:07

> If all you have is knowledge and it's democratized away then you are left with something that everyone has.
I'm not afraid of this scenario any more than I'm afraid of Wordpress or Magento developers (with all due respect to the talented individuals using those techs). i.e. if some low-hanging fruit becomes tractable for people who aren't thought of today as 'programmers', good for them. It seems a good time to double down on quality these days. I'm not just a programmer but one in a specialized language that is highly compensated. Sometimes it's easy to forget that :) With the EU's new Cyber Resilience Act, we might see a positive effect in which "just ship it" culture rarely cuts it anymore for all but the most trivial projects.

cddr 10:05:56

There’s more of us @U04V70XH6 https://twitter.com/baddestmamajama/status/1659365718599819268. I’d short AI but I’ve got no money left after paying the heating bill.

skylize 16:05:18

AI extracts "what everybody else is doing" from the data. I foresee a regression-to-the-mean problem. Yes, for cases where "good enough" is good enough, the AIs are already coming at us fast. The way for humans to stand out is by standing out, e.g. obsessively careful design or outside-the-box thinking. It's the people who essentially regurgitate code instead of creating it that will be hit hardest. That job seems to be going away. As such, I think
> It seems a good time to double-down on quality, these days. - @U45T93RA6
should be a pretty good strategy for a long time. Lots of mediocre code being datamined means you can expect lots of mediocre code being written by AI.

🙌 4