#clojure-uk
2018-09-06
yogidevbear06:09:34

Morning 😄

thomas07:09:13

good moaning!

👋 4
yogidevbear07:09:14

One more week until ClojuTRE. I know this has been asked before, but who else is going this year?

thomas07:09:24

not going. unfortunately.

alexlynham07:09:16

not going either sadly 😞

practicalli-johnny09:09:35

110 days to the winter holidays.

Conor09:09:11

9.5 years to retirement 🤞

😂 4
practicalli-johnny09:09:55

On Saturday 15th September I'm starting a 100 days of clojure code challenge, which ends just before the winter holidays. The basic plan and log of my efforts is on https://github.com/jr0cket/100-days-of-clojure-code and in the #100-days-of-code channel. Any advice and support is most welcome.

🎉 20
👍 20
yogidevbear17:09:12

Sounds like an interesting thing to do @U05254DQM 👍

mccraigmccraig10:09:15

any other cljs+cordova users in here ?

maleghast11:09:33

Anyone got experience with connecting an app deployed in AWS US regions to an RDS instance deployed in EU regions?

maleghast11:09:45

(I realise that this may simply be impossible, but I am hoping not)

mccraigmccraig11:09:16

@maleghast i've had heroku apps (i think they were AWS US) connected to (AWS eire) RDS instances, so i think you should be ok

maleghast11:09:59

@mccraigmccraig - That is comforting / good to know...

maleghast11:09:31

I am hoping that I can give the security group of my EKS worker nodes to the RDS instance and it will all just be seamless, but I am not going to know until I try...
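
For anyone scripting that up, here's a rough sketch with Cognitect's aws-api — it assumes the worker nodes and the RDS instance are in the same VPC (cross-region security-group references don't work), a Postgres port, and placeholder group ids:

```clojure
;; deps: com.cognitect.aws/api, com.cognitect.aws/endpoints, com.cognitect.aws/ec2
(require '[cognitect.aws.client.api :as aws])

(def ec2 (aws/client {:api :ec2 :region "eu-west-1"}))

;; Allow the EKS worker-node security group into the RDS security group on 5432.
;; Both group ids below are hypothetical placeholders.
(aws/invoke ec2
  {:op :AuthorizeSecurityGroupIngress
   :request {:GroupId "sg-rds-instance"
             :IpPermissions [{:IpProtocol       "tcp"
                              :FromPort         5432
                              :ToPort           5432
                              :UserIdGroupPairs [{:GroupId "sg-eks-workers"}]}]}})
```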

mccraigmccraig11:09:02

some of the AWS networking stuff can be pretty awkward to get working... not because of lack of features, just because it's not always obvious how to configure it

maleghast11:09:28

Yeah, that's really my concern, that I won't be able to decode the magical incantation, not that it's impossible.

mccraigmccraig11:09:14

always keep some 🐔🐔🐔 on hand for any urgent sacrifice needs!

😂 4
maleghast11:09:47

wise words 😉

alexlynham11:09:02

@mccraigmccraig it's that old joke about every time you find out how to do something in AWS, it feels like an easter egg b/c the docs/ui are so sprawling

thomas11:09:28

I feel sorry for the chicken... it's not their fault that AWS has its problems.

Rachel Westmacott12:09:01

you may get poor latency though if your app is far from your db - apologies if that’s stating the obvious!

thomas12:09:46

dunno if people have already seen it... but Clojure 1.10.0-alpha7 has been released, along with a new version of spec.
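
If anyone wants to kick the tyres, a minimal deps.edn is enough — the matching spec.alpha comes in transitively:

```clojure
;; deps.edn for a scratch project
{:deps {org.clojure/clojure {:mvn/version "1.10.0-alpha7"}}}
```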

alexlynham12:09:25

@thomas the chicken knows what it did 🔪

maleghast14:09:33

@peterwestmacott - I've decided to put up a manually built K8s cluster in the same location as my RDS instance, basically inside the same vpc.

maleghast14:09:06

That way I don't have to worry about connectivity, but the trade-off is that it's going to take me more time.

firthh14:09:11

@maleghast have you deployed many services already to K8s?

maleghast14:09:27

Only locally on minikube in the past

maleghast14:09:42

My production experience of Docker is ECS and I have vowed never to use that again.

maleghast14:09:52

Why do you ask?

firthh14:09:28

Be prepared to hit “the network is reliable” fallacy more often than on ECS 😉

firthh14:09:08

Or at least that’s the experience I’ve had

maleghast14:09:51

I don't follow you, @firthh

mccraigmccraig14:09:55

did you see unreliable container networking in k8s @firthh?

maleghast14:09:16

My hatred of ECS is about the unwieldy nature of service creation, deployment etc.

firthh14:09:10

I’ve seen random network failures, some of it I think was down to how we were doing service to service communication and kube-dns

firthh14:09:37

But also where I am now, I’ve been less involved but there have been some random issues with network connectivity

firthh14:09:32

I think a lot of it comes down to DNS, which can be a little slower to update than pods are to shut down (so kube-dns returns the IP of a pod that's now dead)
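
One way to ride out that stale-DNS window is to retry the call instead of failing on the first dead IP. A minimal, hypothetical Clojure sketch (call-other-service is a placeholder):

```clojure
(defn with-retries
  "Call f, retrying up to n times on connection failures
   (e.g. kube-dns handing back the IP of a pod that has gone away)."
  [n f]
  (loop [attempt 1]
    (let [result (try
                   {:ok (f)}
                   (catch java.net.ConnectException e
                     (if (< attempt n) {:retry e} (throw e))))]
      (if (contains? result :ok)
        (:ok result)
        (do (Thread/sleep (* 200 attempt)) ; simple linear backoff
            (recur (inc attempt)))))))

;; (with-retries 3 #(call-other-service))
```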

maleghast14:09:22

Oh ok, well I will look out for that 🙂

alexlynham15:09:57

otoh I've found ECS to be pretty stable... but then we're using it for a non-critical internal service so...

mccraigmccraig15:09:48

in a similar vein, dc/os networking is rock solid - never had any issues with container-dns or inter-container load-balancing and networking

alexlynham15:09:25

do you find the overhead worth it?

alexlynham15:09:35

or is there not as much overhead as I would imagine?

mccraigmccraig15:09:09

the overhead of dc/os ?

alexlynham15:09:35

I mean everything's a sliding scale of pain upfront vs over time but still...

alexlynham15:09:45

sledgehammer and nuts etc

mccraigmccraig15:09:50

there's definitely a learning curve...

mccraigmccraig15:09:16

deploying a cluster is pretty straightforward with https://github.com/dcos/terraform-dcos though

mccraigmccraig15:09:10

no harder than deploying any other cluster with terraform or puppet or salt or whatever really

mccraigmccraig15:09:53

once EKS makes it over here i'll definitely be looking at switching to that though - since i could deploy to both AWS and google-cloud with k8s and i'm all in favour of paying someone else to look after complicated stuff

👍 4
alexlynham17:09:40

"over here"?

yogidevbear20:09:10

Am I the only person that finds git rebase ... doing unexpected things on occasion (like getting a little confused on subsequent rebases between the same branches down the line)?

Conor20:09:06

With great power etc. etc.

Conor20:09:11

Why do you have to rebase so much?

yogidevbear20:09:51

We have a code base that is shared across a couple of domains (the URI kind and the business kind) with fairly frequent/iterative releases, so rebase seems to be the appropriate solution for staying in line with the latest on master
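
For reference, the flow being described boils down to something like this (run from the feature branch; remotes and branch names are the usual suspects):

```
git fetch origin
git rebase origin/master        # replay the feature branch on top of the latest master
# resolve conflicts, git add them, then:
git rebase --continue
git push --force-with-lease     # safer than --force after rewriting history
```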

yogidevbear20:09:43

I completely agree with the concept of rebase, but I'm finding it borking every now and again. Most likely a PEBKAC, but the actual cause of the bork isn't obvious