#off-topic
2023-02-13
skylize05:02:12

Sitting in the middle of Interstate 85-S leaving Atlanta. Staring at police lights, with 4 lanes of cars parked for a mile in front of me and presumably the 3 miles behind me to the last exit. Over 4 hours so far, and they have not managed to open even the shoulder for through traffic.

🥴 1
skylize07:02:00

I saw two ambulances go in earlier, only one of which left with sirens and police escort. I could see someone inside who looked awake. Articles on local news websites described it only vaguely as a "multi-vehicle crash". They showed a clip from a traffic camera where all you can see is the police cars blocking the road and the front of the traffic jam. After almost 5 hours of a major freeway at a dead stop, 3 of 4 lanes started moving at once. Strangely, one middle lane was still blocked by one of those trucks carrying a lighted arrow sign and a small SUV with no obvious damage. Nothing else remained of whatever happened there; all the other emergency vehicles were gone too.

skylize07:02:56

Certainly a lot of people there who were not planning to have to drive home at 1:00 in the morning.

Daniel Craig16:02:07

Oh my gosh that's miserable

🙏 1
Daniel Craig16:02:32

Dallas traffic gets bad sometimes but I've never seen anything like that

skylize16:02:30

Me either. Usually it's 20-45 minutes before they get one lane open for everyone to squeeze through. Still makes for a miserable time, but certainly way better than 5 hours of literal parking lot.

skylize16:02:43

I expect it was probably also "bumper-to-bumper" for another 10-ish miles farther back. All the way to the I-285 Perimeter, where it would bleed into that freeway from both directions as people try to exit. I'm sure the side routes around it were awful too, cramming 100% of traffic meant for 4 lanes of crowded freeway through a couple 35mph roads with traffic lights.

skylize16:02:29

Morning news reports are still offering no new info on what happened. But I clipped this from a video of when the emergency vehicles first began to leave.

Gerome18:02:19

I find it weird that it's not possible for people to come up with some organized way to move everybody to the last exit. Why have maybe hundreds of people wait for hours? This could be done.

skylize19:02:36

Wouldn't that be nice. 😇 The people who would be expected to take charge of that are focused on the scene of the accident. Theoretically it could happen organically, but you have newly arrived traffic constantly pushing from behind, with no idea that the only real movement is sideways.

skylize19:02:12

If I was driving a car instead of an 18 wheel truck, I likely would have eventually tried to back up on the shoulder, or even get turned around and drive the "wrong way" on the shoulder. Whatever cop might be in the mood to ticket that is, like I already said, "focused on the scene of the accident". 🌝

Gerome19:02:00

Yeah, I get that the police are busy focusing on the accident. I’m just saying it’s a weird situation. Here we are with computers that can talk and beat us at chess, but if there’s an accident, we act like a herd of cattle being watched over by dogs. I don’t mean this as an insult. It’s just an observation, and it’s good but weird.

Daniel Craig22:02:16

Would getting all the people off the highway really help? Even if all the cars could turn around and drive multiple miles backwards to the last exit, wouldn't there still be congestion as the multiple miles of multiple lanes of traffic filled all the single-lane roads?

skylize22:02:54

You get that either way

Daniel Craig22:02:49

In Arlington TX they have a big multilane road (6 lanes I think) that turns one-way on Cowboys game day - it's really neat

Daniel Craig22:02:22

game day traffic is a good example of what happens when you prioritize traffic going in one direction (against the normal flow of traffic)

Daniel Craig22:02:55

in game day traffic, it becomes a nightmare to do anything other than "leave the stadium" because roads become one-way, lots of detours, etc

skylize22:02:27

The only reversing of freeways in Atlanta is the Express lanes on I-75 just north/south of the Perimeter. They are fully isolated from the main road, with their own private entrances and exits. Except for the ramps at the far ends, and occasional glimpses of "Wrong Way" warnings through the trees, you can't even see them from the freeway. A few 3-lane surface streets are scattered here and there with a reversible middle lane.

pez10:02:14

Since it's OT. 😃

👍 4
🙌 1
mauricio.szabo14:02:00

Folks: I have a notebook that has a weird problem: if the battery is below 40% and I put it to sleep, it basically drains the battery in about 4 hours. If I put it to sleep with the battery at 100%, after about 8 hours it has drained 15% or less. Has anyone had a similar problem? I'm on Linux (which may be part of the problem too 😢)

pavlosmelissinos14:02:06

I have no idea really but this is what I'd try:
1. If your GPU is NVIDIA, there might be some issue with that. If you have an integrated GPU, try using that instead for a while.
2. If you're using TPM, disable it and see if the issue persists. I believe it needs some tweaking to play nice with some computers.
Good luck!

Rupert (All Street)15:02:13

Linux laptops often have an issue entering deep sleep states. This is sometimes caused by the Wi-Fi chip refusing to sleep, which means that sleep will still drain the battery quite quickly. An alternative option is to use hibernate instead of sleep (you may need to enable this in a settings menu or config). Hibernate will save all RAM to disk and completely shut down the machine. Another option is to shut down instead of sleep: Linux OSes often have an option to reopen applications that were open at the time of shutdown, and IDEs and web browsers often have a similar feature to reopen all tabs/work from when you last closed them.

💡 2
Conor15:02:33

It sounds like it might not be entering an S3 or S4 sleep state, yeah. You can check which ones your laptop supports via cat /sys/power/state - this might be useful to help narrow things down https://01.org/blogs/rzhang/2015/best-practice-debug-linux-suspend/hibernate-issues
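A quick sketch of that check, assuming the standard Linux sysfs paths (`/sys/power/mem_sleep` only exists on kernels that let you choose the suspend variant):

```shell
# List the sleep states this kernel supports (e.g. "freeze mem disk").
# "mem" usually maps to S3 (suspend-to-RAM), "disk" to S4 (hibernate).
cat /sys/power/state 2>/dev/null || echo "no /sys/power/state here"

# On newer kernels, see which variant "mem" actually uses; the entry
# in brackets is active, e.g. "s2idle [deep]". s2idle typically drains
# far more battery overnight than deep (S3) sleep.
cat /sys/power/mem_sleep 2>/dev/null || echo "no mem_sleep file"
```

If `mem_sleep` shows `[s2idle]`, switching it to `deep` (where the hardware supports it) is often the fix for exactly this kind of drain.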

sheluchin23:02:46

I wonder how people go about keeping their REPL experiments organized. Has anyone written about a workflow? What I mean is I often start working on some function, but I'll either keep updating the same chunk of code until I get it right, or I'll have multiple iterations of the function that I keep in a comment block so I can refer back to my changes. Using the first approach, I feel like I'm losing information; with the second approach it often starts to look like a bunch of messy scribbles. Is there some mental model or something I could employ to keep it all more organized?

2
eggsyntax23:02:57

There’s a lot to be said for experimenting in a namespace in the editor and evaluating the code in place, but sometimes I like to experiment directly in the REPL and take your first approach, and for those times I use the tiny https://github.com/eggsyntax/reconstructorepl library to automatically grab the most recent versions of a function and all the code it depends on.

Rupert (All Street)00:02:58

Good question. I usually edit a function in place and just keep re-evaluating it in the REPL. I use emacs undo-tree-visualise, which gives you unlimited undo with no risk of losing the undo history, because you can navigate the undo history like a branching tree. If I realise that I need to check an earlier version of the function, I just undo, review the old version (or copy it), then roll forward again in the undo history. For much older versions I'll check git history or git blame instead.

👍 2
kennytilton01:02:48

I do not worry about losing information while hacking on a single function; each iteration gets "interned" in my understanding. In a rare case I will take a "backup" by duplicating a version before continuing to hack. Testing is done in comment block code shipped ad hoc to the repl. Only in very rare cases do I keep this code around for when future work might be required on the same code. Bonus trick: I have been known temporarily to put a function definition in a "do" block followed by the test code, so a recompile also re-runs the test.
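That last trick might look something like this (the function and its checks are made up for illustration):

```clojure
;; Recompiling this one form redefines the function AND re-runs the
;; checks, so the test travels with the code while you iterate.
(do
  (defn clamp
    "Restrict x to the range [lo, hi]."
    [lo hi x]
    (max lo (min hi x)))
  (assert (= 5 (clamp 0 10 5)))
  (assert (= 0 (clamp 0 10 -3)))
  (assert (= 10 (clamp 0 10 42))))
```

Once the function settles, you can pull the `defn` out to the top level and move the assertions into a `comment` block or a real test.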

👍 4
adi09:02:57

> I'll have multiple iterations of the function that I keep in a comment block so I can refer back to my changes
Most times I don't need this code version memory, for reasons similar to kennytilton's, but when I want it, there is git commit.

adi09:02:35

also helps to have good undo support (e.g. undo-tree etc.)

2
adi09:02:05

Design-wise I'm usually interested in finding correct function API signatures, boundaries, and contracts... any function's internal implementation is fungible and I don't mind forgetting a seemingly cleverer way to do the same thing. If it is important enough, my future brain (or someone else's) will find better ideas sooner or later.

Vishal Gautam15:02:56

I version them: v1, v2... and so forth. Yes, deleting/modifying old code will lead to information loss, and that's the reason why I keep extra copies. I can always go back and see how the versions evolved. Interestingly, this works even in software.

pavlosmelissinos16:02:27

> I version them. v1, v2... and so forth.
You mean you use something like git tags, or do you actually have my-function-v1, my-function-v2 and so on?

pavlosmelissinos16:02:36

> What I mean is I often start working on some function, but I'll either keep updating the same chunk of code until I get it right, or I'll have multiple iterations of the function that I keep in a comment block so I can refer back to my changes. Using the first approach, I feel like I'm losing information; with the second approach it often starts to look like a bunch of messy scribbles. Is there some mental model or something I could employ to keep it all more organized?
If you try to keep your functions small and simple (responsible for a single thing), that will never happen. If you work with long functions (20+ lines), you can always break them down.
> I'll either keep updating the same chunk of code until I get it right
> Using the first approach, I feel like I'm losing information
That's what I do and I've never lost important information. Besides, throwing away the parts you don't need is part of experimentation/exploration. As long as each iteration gives you a better understanding of the problem, I don't see how keeping the old versions around can help, but I'd love to hear an example. Personally speaking, I just can't have multiple implementations of the same logic around at the same time. I've done versioning without git/proper version control tools in the past, and I'd rather risk forever losing some momentary glimpse of genius in a previous implementation than have to relive that!

👍 2
Rupert (All Street)17:02:02

As mentioned, I usually edit code in-place and only keep the latest version of a function. Sometimes I will write a naive / low-performance version of a function first, then write a better/faster version. In that case I sometimes keep the original version as a useful benchmark / correctness comparison, and give it a distinct function name by adding an appropriate suffix to show how it differs (e.g. -naive, -unoptimised, -uncached, -eager, -single-threaded etc). But this is a bit of a rare edge case.
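A minimal sketch of that naming convention (function names hypothetical), with the naive version kept as a correctness oracle for the faster one:

```clojure
;; Naive version: kept around as a readable reference implementation.
(defn sum-squares-naive [xs]
  (reduce + (map #(* % %) xs)))

;; Faster version: a transducer avoids the intermediate lazy seq.
(defn sum-squares [xs]
  (transduce (map #(* % %)) + 0 xs))

;; The naive version doubles as a benchmark baseline and a spot check.
(assert (= (sum-squares-naive (range 100))
           (sum-squares (range 100))))
```

The suffix makes it obvious at the call site which one is the throwaway reference, so it can be safely deleted later.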

👍 2
eggsyntax17:02:53

@U0PUGPSFR > I do not worry about losing information while hacking on a single function; each iteration gets "interned" in my understanding. Interned where? Not in the namespace that I know of (but I might just be misunderstanding what you're saying?). Here's what I get if I check interns in the REPL:

user> (ns-interns *ns*)
{}

user> (defn f [x] x)
#'user/f

user> (ns-interns *ns*)
{f #'user/f}

user> (defn f [x] (inc x))
#'user/f

user> (ns-interns *ns*)
{f #'user/f}

Rupert (All Street)17:02:47

I think he means figuratively interned into his mind. 🙂

😁 2
✔️ 2
Rupert (All Street)17:02:27

Also worth noting that because we have an approach similar to mono-repos at work - we can safely create duplicates of functions and then delete them in future and know that no downstream code is impacted (because we recompile and re-run all downstream code/tests automatically).

Vishal Gautam18:02:01

Yeah, like keep a physical copy of each version and then finally export from one file. Like others mentioned above, initially your solution is not optimal, but over time you iterate on it. The problem with the usual approach is that we iterate over our existing work. This is especially true on the frontend, where churn is very high: removing/updating the code you previously wrote leads to information loss. And yes, you can track it over github, but it's not scalable. You have to go to a certain commit SHA to figure out what changes were made; you cannot even compare two versions. But if you keep a record of your old implementation, then you can always do such comparisons. For example, this is what I have been doing on the frontend. Interestingly, with this approach I get feature versioning and backward compatibility for free, and can easily compare two versions. Finally, since all of my code is versioned and read-only after writing an implementation, we can easily track who made changes in which files. And if something goes wrong, we can always fall back to an old version, without having to resort to git.

pavlosmelissinos18:02:27

> track over github
I mentioned git, not github
> You have to go to a certain commit SHA to figure out what changes were made
If you have proper git integration in your IDE, it's trivial to see how any piece of code has changed over time, not just ad-hoc 1-1 comparisons with chunks that you have predefined.
> You cannot even compare two versions
That's exactly what git diff does.
> without having to resort to git
git is a feature; it delivers what you're asking for and then some.

Vishal Gautam18:02:30

> I mentioned git, not github
I meant git, not github, sorry.
> If you have proper git integration in your IDE, it's trivial to see how any piece of code has changed over time, not just ad-hoc 1-1 comparisons with chunks that you have predefined
That's exactly my point. You have to resort to more tooling just to compare.
> That's exactly what git diff does.
The problem with git diff is that it just tells me what changes were made; I still have to run the app. Git is a tool, but it's not a silver bullet. It has lots of flaws.

pavlosmelissinos18:02:02

To each their own I suppose. I just never would have thought that I'd be having this conversation in 2023 🙂

Vishal Gautam18:02:31

I am not saying dont use git. Just be critical of what tools you are using

👍 2
teodorlu16:02:42

Lots of good suggestions have already been mentioned 🙂 For quick standalone experiments or reproducing others' problems, I use deletable code organized by date.

$ hometemp
$ mkcd my-experiment
$ pwd
/home/teodorlu/tmp/temp-2023-02-15/my-experiment
$ neil dep add SOME_LIB
# start a repl, do stuff, see if it behaves as I expect
for repl sessions, I like the (do (defn my-fn ,,,) (my-fn ,,,)) pattern.
(do
  (defn my-fn [& args])
  (my-fn 1 2 3))
As others mentioned above, it lets you quickly experiment with good function signatures! When it feels right, I move the defn to the top level and delete the example usage (or move it to a (comment ,,,)). And don't be afraid to delete your mess after you're done experimenting! I tend to delete what code I can before committing. Keeping a good example or two in a comment form is great, but at some point it's more distracting than useful. If you do need extensive examples, consider unit tests.

4
Benjamin10:02:58

I clone the sexp and end up with a list of progressively updated sexps that I delete only at the end, when I have a final form.