Ben Sless06:06:00

Thoughts, pros and cons for the following project layouts: src, test, resources, dev, java at top level vs. src/{main,test,dev}/LANG/

Martynas M06:06:06

For the second case: Your IDE could probably flatten the directories and display them as src/main/clojure and src/test/clojure

Ben Sless06:06:36

It probably would but it's not the main concern. Or is it? Is there any other difference or concern?

Martynas M06:06:30

If I understand correctly, your concern is that you would like to see things clearly while you develop code. With a nested tree you must manually expand the children, which takes effort, but it has fewer top-level nodes. This is what I saw in the examples you provided. The flattened hierarchy makes the sources immediately reachable at the top level.

Edit: Also, IMO the first example is incomplete, because to fully flatten it we would probably need something like: java, java_test, resources, resources_java, resources_java_test, dev. So we've implemented a flat hierarchy, and now we have to map all these names inside the build tool.
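The "map these names inside the build tool" point can be made concrete. A hypothetical deps.edn sketch for both layouts (the directory and alias names are assumptions for illustration, not from the thread):

```clojure
;; Nested layout: src/{main,test,dev}/LANG — one :paths entry per leaf directory.
{:paths ["src/main/clojure" "src/main/resources"]
 :aliases
 {:test {:extra-paths ["src/test/clojure" "src/test/resources"]}
  :dev  {:extra-paths ["src/dev/clojure"]}}}

;; Flat top-level layout: shorter entries, but each concern is its own root.
;; {:paths ["src" "resources"]
;;  :aliases {:test {:extra-paths ["test"]}
;;            :dev  {:extra-paths ["dev"]}}}
```

Either way the build tool needs an explicit entry per source root; the nested layout just groups them under src/ while the flat one multiplies top-level directories.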

Bart Kleijngeld11:06:12

Love it! It's trendy to iterate your crap


Not realistic - you should instead have little piles of different covered poop from pivoting so much

😆 1
Drew Verlee21:06:21

i agree, this story is way too cohesive. it implies, in fact, that i could get my shit together.

😂 3

Do you store session in redis or normal database (e.g., mysql) ?


It's somewhat short lived generally, so I personally go redis


In the main DB.


> In the main DB. Do you actively purge the data in the main DB? In Redis, data can expire automatically.


Expired sessions are deleted with a trigger upon a fresh login by the same user.


I see. You lazily delete it.


We are storing them with jdbc-ring-session, with the automatic cleanup enabled
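For reference, the wiring looks roughly like this; a sketch assuming the jdbc-store and start-cleaner names from jdbc-ring-session's README, with a hypothetical Postgres connection map:

```clojure
(ns myapp.session
  (:require [jdbc-ring-session.core :refer [jdbc-store]]
            [jdbc-ring-session.cleaner :refer [start-cleaner]]))

;; Hypothetical connection spec; adjust to your own database.
(def db {:dbtype "postgresql" :dbname "myapp" :user "myapp"})

;; Ring session store backed by a table in the main DB.
(def store (jdbc-store db))

;; Background cleaner that periodically deletes expired sessions,
;; so the table doesn't grow without bound (the "automatic cleanup" above).
(def cleaner (start-cleaner db))
```

The store then gets passed to the Ring session middleware as the :store option.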

Cora (she/her)16:06:36

I store it in the user's own database (their cookie store)


Stories like this make me pretty fearful of AWS and other cloud services. At my wife's company, due to a combination of silly mistakes, the cloud bill went up to 25k: someone bumped the specs on a database that was intended as a temporary solution, and it was left running for weeks.


IME the only way to avoid this is to use a tool like Terraform that lets you know you're operating on everything and a teardown is just a terraform destroy away. I also think that AWS's interface is absolute garbage (GCP is a little better) and makes it very hard to find things. Among the cloud providers I've used, the one friendliest to small projects and orgs is DigitalOcean, and I think their interface is much better organized too.


At least lets you work with a limited credit


Yeah, a per-project budget limit (as opposed to an alert) would be the least these companies can do, but unfortunately we really can't assume that AWS's incentives and interests align with our own.


autoscale your credit limit

😂 5

I think we (people who build software) are often very bad at understanding the limits of our systems, which is the attraction of autoscaling / "serverless" deployments: "no limits". But the consequence of no limits is a no-limit price tag


Unless they gave you an option to put your system to a grinding halt at a certain credit limit, which is ok for some applications like my pet projects

⬆️ 1

I tend to agree with the "boring stack" advocates in the thread; you can get extremely far with a single mid-range server, and you can get all the benefits of a "VPC" with none of the AWS headaches if you use Tailscale to put it off-limits to the public internet and manage access. It's what I'd do if I had to launch SaaS infrastructure next week.

☝️ 1
Cora (she/her)21:06:15

you can get pretty far using something like capistrano for deploys on boring stacks, too


Unfortunately the Big Data:tm: story isn't quite as simple yet; batch jobs do still sometimes require autoscaling to handle the load, but I'd much sooner reach for something simple like a Nomad cluster that I can impose specific limits on than write another line of YAML for K8s autoscaling groups ever again (to say nothing of the proprietary stuff like Spanner). On the other hand, Frank McSherry, so Better Code is still an option worth pursuing even for that.

🙂 1

I once opened an AWS account to experiment with a chat bot in a dev environment. Somehow along the way I must have pressed the wrong button and went with some paid option instead of the free tier I was intending to stick to. A short while later a bill for about $3K came in :shocked_face_with_exploding_head: I emailed AWS about it, and they actually just reversed the whole charge and said something like "don't do it again". They ain't all bad :man-shrugging:

Martynas M03:06:52

Imagine using a garbage collector in 2022...

Ben Sless06:06:26

This reads like a Kevlin Henney story, with "at scale" and everything. This isn't just your programmer error, or stack overflow error. It's a stack overflow at scale. While the whole situation is unfortunate, there's a comedic element to it: a classic bug, replicated at a huge scale. I once had a tiny bug in a boolean condition and ended up DDoSing plenty of our partners


It bears repeating: Jeff Bezos's interests are not your interests. It would be nice if we could consistently get reimbursed by AWS for our own mistakes or if they put harder limits in place, but please do not bet your business or personal finances on that good will being extended. We keep hearing these stories because AWS wants your money and will take as much of it as they can.

Martynas M12:06:05

From what I see, there is no penalty for AWS behaving poorly. What I also see is that if you take away all the bad parts of the system, you also take away some of the good parts. But yes, at the end of the day it's capitalism.

⬆️ 2

I'll probably be using Nix or Docker more on my next VPS. My current VPS is a little bit messy, with manually written init.d scripts


I'm sorry to tell you this, but these stories only manifest the cost of software mistakes and the endless trust that things will somehow turn out OK. We really don't focus enough on countermeasures for such mistakes, on testing our code enough, because... reasons. Everybody wants to develop fast and deploy immediately into prod, but it's a shortcut with significant risks. Cloud services just make this cost quite transparent, and invoices have a due date on them. Almost nothing to see here, just my 5 cents.


My wife's company had a 25k bill due to the alerts not working properly.


Her company can challenge the bill. But you also said that somebody made an unfortunate silly mistake bumping specs where it shouldn't have been done. That also doesn't sound quite OK.


Yes. You first need to become an expert in the cloud stuff before you can prevent silly mistakes like that, apparently.


Which seems plain wrong.


My colleagues once made a 250K EUR mistake by testing in production, and yes, some people had to get off their asses, pick up their phones and start actually working... that was fun 😄


They challenged the bill and got 17k back from Microsoft, luckily (just heard this yesterday)


For reference, 25k is about half a person's salary in that company


of course... these things can usually be dealt with in a decent manner.


Sure, but I wouldn't want to be on the paying side of things as a hobbyist deploying some experimental side projects, even if they are reasonable and sometimes return some of the money. Too much stress ;)


For hobby projects it's good to fit into free tiers or limited-budget offerings, of course. For businesses it's good to develop countermeasures and processes to avoid problems. Having good monitoring is quite a must, I have to say.


The absence of monitoring, testing and countermeasures is definitely a significant risk for small startups and smaller companies. It's also good to get decent treatment from a cloud provider. I personally would also like to see more 'Production minimalism' in real life.


There was monitoring, but it wasn't properly working.


Services like or supabase let you cap your budget. It's not impossible.


Yeah, reminds me of the GitLab backup fuckup back then... they had like 7 backups... none of which they were able to actually recover from.


Monitoring that doesn't work isn't very useful, and neither are backups from which one is unable to recover, I guess.

👍 1

I understand it's annoying as hell. But this is a reality check for all of us. If we are afraid to deal with these things, we would probably be better off doing different jobs.


On the other (funny) side: I once built a very simple application for GAE (Google App Engine). At the time I was working for a startup building solutions for the banking industry; this application was just an experiment, and after deployment it kept running for years. Meanwhile I left the job, and the company went bankrupt and ceased operations. But believe it or not, my application was still running years later. I had lost access, credentials and control, but it kept running until last year, when it was finally removed by cloud service staff. I was able to fit into the free tier.