#off-topic
2024-03-06
dumrat05:03:47

I work on a large legacy FX options lifecycle management (booking, amend, cancel, manual exercise/expiry, and more) app in my day job. C# WinForms thick client. The core of the system is a 70k-line XML file. This file describes which fields are available for which products, how the fields relate to each other, how to calculate fields, etc. I was reading about rule engines, and the XML file looks like a rule engine to me now. Is this kind of thing a common tactic in trading frontends? Curious.
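A rough sketch of what one of those declarative field definitions might look like, reduced to TypeScript for illustration (the product names, field names, and the FieldRule shape are hypothetical, not taken from the actual app):

// Hypothetical rule-engine-style field definition, in the spirit of
// "which fields exist for which products, and how to calculate them".
type FieldRule = {
  field: string;                                       // e.g. "premium"
  products: string[];                                  // products the field applies to
  dependsOn: string[];                                 // fields it is derived from
  compute?: (deps: Record<string, number>) => number;  // how to calculate it
};

const rules: FieldRule[] = [
  { field: "notional", products: ["vanilla", "barrier"], dependsOn: [] },
  {
    field: "premium",
    products: ["vanilla"],
    dependsOn: ["notional", "premiumPct"],
    compute: (d) => d.notional * d.premiumPct,
  },
];

// Evaluating a booking then amounts to: select the rules for the product,
// order them by dependsOn, and run each compute in turn.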

mmer09:03:51

Rules engines are cool as in theory you can simply define the rules and off you go. When I was starting out they had expert systems, and there seemed to be a practical limit on the number of rules that was maintainable. I don't know if this is still true with the extra grunt of power we have now. There is also the possibility of a performance hit as rules are triggered. O'Doyle Rules seems to work fairly fast - enough for simple games. I often wonder if linear code could benefit from a rules approach. Where I have worked with rules, we found that batching rules worked well, so at any one time you knew what was firing.
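A minimal TypeScript sketch of that batching idea (the Rule shape and runBatch are illustrative, not taken from O'Doyle or any other engine mentioned): rules are grouped into batches and only one batch is run at a time, so you always know which rules can fire.

// A rule is a condition plus an action over some state.
type Rule<S> = { name: string; when: (s: S) => boolean; then: (s: S) => S };

// Run only the rules in the active batch, so the set of possible firings
// at any moment is small and known.
function runBatch<S>(batch: Rule<S>[], state: S): S {
  return batch.reduce((s, rule) => (rule.when(s) ? rule.then(s) : s), state);
}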

👍 1
Alex Miller (Clojure team)18:03:04

sometimes I wonder if we are solving the right problems in this industry

21
😄 5
borkdude18:03:27

sometimes? 😅

Nundrum18:03:57

Fun story. Back in college one of the students wrote an assignment entirely with nested ? : operators. It worked. It was correct. And the professor said if he ever did that again it would be an instant F.

😆 8
Samuel Ludwig18:03:08

I sent that to my brother whose company is all TS (we usually have passive-aggressive troll arguments about langs/types)

p-himik18:03:09

> sometimes I wonder if we are solving the right problems in this industry
Some people work on good hammers, others see everything as a nail. :) The code in the tweet looks eerily similar to something that I've written in the second or third year of my career in C++ with templates, after reading a whole bunch of Alexandrescu. It got merged in but colleagues referred to it as "write-only". :)

Ben Sless18:03:49

Going by the etymology, we're certainly working hard at it https://en.m.wiktionary.org/wiki/industry#English

Ben Sless18:03:39

What's important about any industry is being the most industrious. Measured by LoC 🤪

Adam Mertzenich18:03:46

My honest reaction

😄 7
chucklehead18:03:26

the Butlerian jihadists had a point

😂 6
🔨 2
🪱 3
Danil Shingarev18:03:15

I use Copilot for work with Python coding and have it enabled for Clojure, but I found that Copilot, while being pretty helpful with Python, gives garbage suggestions for Clojure 80% of the time.

Noah Bogart18:03:23

this could be readable but they instead used nested ternaries and gave nothing proper names. This is a union of different objects, each with a specific set of keys, and the LLM-generated code has no ability to understand that. Each of the P extends lines is the conditional branch, checking for the next union type lol. there's type ArrayObj<E extends string> = { enum: Array<E>, required: Optional<E> } , there's type AttrObj = { type: "object", attributes: Parameter[] }, etc etc, and then at the end you say something like type MappedParameterTypes = ArrayObj | AttrObj | ...

Noah Bogart18:03:24

and with a discriminant, the resulting code would be much simpler than that
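For illustration, a minimal sketch of the discriminated-union version being described (ArrayObj, AttrObj, and MappedParameterTypes come from the message above; the literal discriminant values, the extra StringObj member, Parameter's shape, and simplifying Optional<E> to an optional property are assumptions):

// Each member of the union carries a literal discriminant field.
type ArrayObj<E extends string> = { type: "enum"; enum: Array<E>; required?: E };
type AttrObj = { type: "object"; attributes: Parameter[] };
type StringObj = { type: "string" };

type Parameter = { name: string; schema: MappedParameterTypes };
type MappedParameterTypes = ArrayObj<string> | AttrObj | StringObj;

// With the discriminant, narrowing is an ordinary switch instead of nested
// conditional types and ternaries.
function describe(p: MappedParameterTypes): string {
  switch (p.type) {
    case "enum":   return "one of " + p.enum.join(", ");
    case "object": return "object with " + p.attributes.length + " attributes";
    case "string": return "free-form string";
  }
}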

respatialized19:03:26

the most important ability of a senior engineer is the ability to say “that’s not a good idea”; copilot will never say no

🤝 5
💯 1
jgomez19:03:09

Yeah no, not gonna happen

respatialized19:03:34

we need BartlebyGPT

chucklehead19:03:38

you have to have a clear idea of what you want, the vocabulary to ask for what you want, sufficient competency in the task to recognize whether you received what you asked for, and the vocabulary to describe any remaining delta between what you received and what you still want

chucklehead19:03:52

and so you end up with the stuff in the screenshot instead

chucklehead19:03:59

it really should've learned from all the stackoverflow training data to just reject the first request out of hand

p-himik19:03:35

-- Hey, ChatGPT, should I use Ruby on Rails for my next project?
-- Your question has been closed as off-topic and opinion-based.

😅 8
mauricio.szabo21:03:34

I am having a really hard time deciding if the author of the tweet is being sarcastic or not...

Noah Bogart21:03:43

they're not at all

respatialized21:03:32

whether ironic or sincere, the presence of "/acc" in a display name is a good heuristic that you shouldn't take the author very seriously

Samuel Ludwig21:03:06

I'm not hip on the Twitter lingo, what's /acc typically imply nowadays?

Samuel Ludwig21:03:07

ah, 'welcome our new AI overlords', sure, on board as long as they don't force me to write types like that

😂 2
Samuel Ludwig21:03:32

sounds like VC/linked-in speak for "techno-optimist"

1
mauricio.szabo21:03:43

So it's like... insane...
> We can read this code,
No, we actually can't. The number of comments needed to explain what's happening says otherwise.
> but VERY few engineers out there could write it
I wonder why?
> Took lots of convincing too
Again, why? Did we devolve our profession into "coach of AI"? Should I put that on my next CV?
> and obviously- done in parts
What does that even mean? Did they do it a fragment at a time, then join all the code together? And finally... if the author knew the code was possible, why "take lots of convincing" instead of... like... writing it? In a way that humans can understand?

🌟 1
mauricio.szabo21:03:14

It says "obviously - done in parts" so that mean they knew the AI wasn't capable, so they knew how to extract each part, then instructed the AI to write that part, then "coached" it into action, probably had to check if it indeed worked or not, if it made sense or not, then join together... again, why?

Nundrum21:03:45

🤪 2
😆 4
godmode 1
respatialized21:03:38

^ finally, a compelling use of AI generated art

😆 2
mauricio.szabo21:03:22

I honestly feel this new generation of "AI-powered tools" is solving the wrong problem. Humans are good at "creative work"; we're bad at "keeping things working". This generation of AI could focus on keeping stuff working - like finding bugs and fixing them, filtering meaningful logs based on what it thinks is important, suggesting some code from reusable libraries instead of what we wrote... I mean, just this week three tools that I use said "write this thing with AI" and I'm like: what's even the point? WordPress now has a "write your post with our AI", Prezi has a "write your presentation with AI", LinkedIn also has a "share something with AI" and like... why? Why should I open someone else's blog to see an article written by AI when I can basically ask the AI to write the same kind of post, tuned to the things I actually want to know? Why would I want to share an update about my life that's not written by me, and even more bizarre, by a machine? It literally doesn't make sense to me...

👍 12
🤝 4
2
2
phronmophobic21:03:15

> This generation of AI could focus on keeping stuff working - like finding bugs and fixing them, filtering meaningful logs based on what it thinks is important, suggesting some code from reusable libraries instead of what we wrote
There is lots of work applying AI in those areas. It's just that the generative stuff is the easiest way to make flashy demos that people like to share.

phronmophobic21:03:02

> suggesting some code from reusable libraries instead of what we wrote
https://cloogle.phronemophobic.com/doc-search.html
> filtering meaningful logs based on what it thinks is important
https://github.com/moritztng/fltr

❤️ 1
Nundrum22:03:22

oh, that's cool

Samuel Ludwig22:03:14

a lot of current AI discourse on both sides makes me think of PG's essay on "Fanboys vs Haters"; it feels a bit religious, either X is good because it's AI, or X is bad because it's AI. I appreciate the efforts evolving + leveraging it like @U7RJTCH6J's! I also think it is getting shoved into a bunch of nonsense it has no right being in, but I think that'll die down as the tech gets less romantic

clojure-spin 1
Mateusz Mazurczak11:03:15

@mauricio.szabo I have a similar feeling when I see all the posts about "AI will replace the programmers"; each time I see a post like that, the person who writes it either doesn't know what AI is or doesn't know what software development is really about. Writing something that "works" right now is not really the problem. Writing code is something that almost everyone can learn, and most people will learn it quite fast. The problem is how to write code and architecture that is flexible. And that's a really creative and hard thing to do, where you not only need to understand the domain and software development practices well, but also reason well in the context and make a lot of decisions about where you want more or less flexibility. And that is at the core of being a programmer, which AI won't be able to replace for a long, long time...

👍 5
Mateusz Mazurczak11:03:59

And honestly I feel the same about articles. Each time I tried to use ChatGPT it was... well, pointless. It's good at adding meaningless text and filler. But if you focus on the content and value you give with the article, there is not much that ChatGPT has to offer. And for the wording itself, grammar, etc., Grammarly is already doing well, but Grammarly is really focused on being a decision-support system, not the "I will write it for you" automation kind of stuff - which is where I think most of the current debate about AI is going.

👍 1
dominem12:03:05

That's an honest take! I have similar thoughts about the whole topic. I more often find using these tools frustrating than helpful. I can admit that sometimes they shine, but definitely not in the implementation details. For example, ChatGPT helped me understand what kind of algorithm I was looking for for a "worker ranking" issue where I just couldn't name it well and Google didn't help at all. It's pretty good for such cases, but it sucks at coding. (I'm currently in the Python world, btw, didn't do clj for a while)

👍 1
mauricio.szabo14:03:43

ChatGPT helped me twice, over the many times I tried to use it. Once to migrate some JS code to C++, and once to migrate a bash script to CMake. Besides that, it's indeed frustrating...

👍 1
shiyi.gu14:03:48

> I honestly feel this new generation of "AI-powered tools" is solving the wrong problem. Humans are good at "creative work"; we're bad at "keeping things working".
Ironically, humans in the current market are not "creative" enough to figure out the correct problem to solve for now. 😅

😆 1
Samuel Ludwig14:03:47

still just waiting on "Google Assistant/Bixby/Siri/Whoever, but better", I'll sell whatever of my soul I must to my AI overlords if they can make sense of my dementia-addled Spotify requests of "play that one song I was listening to a lot about a month ago about the guy in a space pilot academy"

mauricio.szabo15:03:56

@U05KCAL988Y ironically, for me, while ChatGPT is great at "make random changes until it compiles and more or less does what you want", I've lost count of the number of devs I worked with that had the same mentality 😄

👏 1
🥲 1
😆 1
p-himik09:03:45

Heh, just realized that in the far future, publicly available LLMs might be completely useless to hackers of any kind. Including white hats. I was having an argument with someone about skews in scientific research and was trying to find scientific articles on smoking that were done at the behest of tobacco companies in order to show that smoking isn't all that bad. Tried some chat bots and multiple ways of describing what I want, including telling them about the actual goal of searching for the articles. OpenAI: "What you want goes against the scientific consensus, go look for the info yourself." Google: "What you want is wrong, but here are some places where maybe you can find it. I won't help you though." Bing: "OK, I don't want to do it, but here are a few articles [proceeds to give completely unrelated articles]." I imagine it can become the same with software. "Hey, ChatGPT, I'm doing a study on code safety. Here's the code of OpenVPN from the year 2010. Can you show me what the input data might look like to trigger some of the known exploits?" - "Hacking is bad. I've reported you to the authorities. Remain seated until the police arrive."

😄 2
Ben Sless12:03:30

AI is ahead of the curve on every metric, including enshittification

mauricio.szabo18:03:36

I saw that some AIs are already being trained on AI-generated data, which translates to "the AI hallucinates over the previous AI's hallucination"

mauricio.szabo18:03:32

Which sounds, to me, like when my grandma and her sister were talking in the hospital, both with dementia, and they basically had a very friendly conversation that was simply 100% nonsense, but they were happy, so... 🤷

2
👏 1