#clojure-uk
2018-09-10
3Jane05:09:18

Re git merge vs git rebase round 2: I’m reading that book about applying forensic-analysis-like techniques to identify problematic areas

3Jane05:09:46

And the guy says, use git --before to ensure a reproducible state
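A minimal sketch of that advice (throwaway repo, hypothetical dates): resolve the last commit before a cutoff date with `git rev-list --before` and check it out, so repeated runs of an analysis see an identical state. The commit messages and dates here are illustrative only.

```shell
#!/bin/sh
# Build a tiny repo with two commits at controlled dates, then pin to the
# state as of a cutoff date.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo

echo one > f.txt && git add f.txt
GIT_COMMITTER_DATE="2018-01-01T12:00:00" git commit -q -m "old work" --date="2018-01-01T12:00:00"
old=$(git rev-parse HEAD)

echo two >> f.txt
GIT_COMMITTER_DATE="2018-09-01T12:00:00" git commit -qam "new work" --date="2018-09-01T12:00:00"

# Resolve the newest commit whose commit date is before the cutoff...
rev=$(git rev-list -1 --before="2018-06-01" HEAD)

# ...and check it out (detached HEAD) before running the analysis tools.
# Note: a rebase rewrites commits, so what this date resolves to can change
# afterwards, which is exactly the surprise being joked about above.
git checkout -q --detach "$rev"
```

This pins by commit date, so it is only reproducible as long as nobody rewrites the branch's history under you.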

3Jane05:09:13

And I nod to myself and think, boy, are you ever gonna be surprised when someone applies a rebase

3Jane05:09:50

(Except, of course, one does not rebase master. Right? Right???)

danm14:09:54

Depends. Do you mean doesn't rebase master onto something else (why would you do that?), or do you mean rebase your latest commits onto the head of master?

danm14:09:01

Because we do the latter all the time

3Jane15:09:49

rebase -i to fiddle with history

danm15:09:52

Aah right. No, we just tend to work on master rather than with branches, so our default pull config is to do a noninteractive rebase (which does of course mean you have to have everything in-progress either committed or stashed to pull)

danm15:09:30

But we're also a close working team so we don't do pull requests and such for features, just pull (which rebases your new commits which aren't upstream onto the new head of master), followed by push assuming the pull didn't have any merge conflicts
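The workflow described above can be sketched with throwaway repos and hypothetical names: two people commit straight to the default branch, and `pull.rebase true` makes a plain `git pull` rebase unpushed local commits onto the new upstream head instead of creating a merge commit.

```shell
#!/bin/sh
set -e
work=$(mktemp -d) && cd "$work"
git init -q --bare origin.git

# Alice clones, commits, pushes.
git clone -q origin.git alice
(cd alice \
  && git config user.email alice@example.com && git config user.name alice \
  && echo base > file && git add file && git commit -q -m "base" \
  && git push -q -u origin HEAD)

# Bob clones and pushes a commit of his own.
git clone -q origin.git bob
(cd bob \
  && git config user.email bob@example.com && git config user.name bob \
  && echo bob > bob.txt && git add bob.txt && git commit -q -m "bob's commit" \
  && git push -q origin HEAD)

# Alice now has a local commit that is behind upstream. With pull.rebase set,
# `git pull` replays her commit on top of Bob's instead of merging, so the
# history stays linear (everything in progress must be committed or stashed
# first, as noted above).
cd alice
echo alice > alice.txt && git add alice.txt && git commit -q -m "alice's commit"
git config pull.rebase true
git pull -q
git push -q   # publish, assuming the rebase had no conflicts
```

After the pull, Alice's log reads alice's commit, then bob's commit, then base, with no merge commit anywhere.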

mexisme05:09:25

always. on a Friday at 6pm before Xmas holidays

guy08:09:17

Morning folks!

maleghast09:09:02

Morning All 🙂

maleghast09:09:08

Lovely morning in London today 🙂

mccraigmccraig09:09:51

also lovely on the sarf coast

mccraigmccraig10:09:17

blackcap looking towards brighton

mccraigmccraig10:09:08

huh, i guess file uploads are kinda broken in free slack @seancorfield ? (i just tried to upload a photo)

mccraigmccraig10:09:47

shame there’s no “delete old content to make room for new” setting

danielneal10:09:46

I can still see it 😄

mccraigmccraig10:09:06

huh, slack told me that while the image was uploaded, the space was out of storage, which i took to mean that it wouldn't be displayed without $$$

bronsa10:09:30

@mccraigmccraig yeah slack does that but then the image still shows ¯\_(ツ)_/¯

seancorfield15:09:56

Admins can delete all public files (and we often delete files older than six months -- since no one can see those anyway) -- but we can't delete privately shared files, which is actually most of them here. From time to time, we ask everyone to delete their own old files, but a lot of people have become inactive over time so we can't reach them. It is frustrating.

danm15:09:14

I don't even know how to see if I have any private files tbh...

maleghast15:09:43

Anyone got an example of creating a Janinaa

seancorfield16:09:23

Finding your files in Slack

guy16:09:15

Is there a quick way to delete them all? 👀

seancorfield16:09:49

Only with a script that uses the API...
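A hedged sketch of such a script: `files.list` and `files.delete` are real Slack Web API methods, but the token and user id below are placeholders, pagination is ignored, and the JSON handling is deliberately crude (a real script would use a proper JSON parser).

```shell
#!/bin/sh
TOKEN="xoxp-your-token-here"   # placeholder user token (files:read, files:write)
USER_ID="U00000000"            # placeholder: your own Slack user id

# Pull file ids out of a files.list response. Rough heuristic: Slack file ids
# start with "F".
extract_file_ids() {
  tr '{,' '\n\n' | sed -n 's/.*"id" *: *"\(F[A-Z0-9]*\)".*/\1/p'
}

# List your files, then delete each one.
delete_my_files() {
  curl -s -H "Authorization: Bearer $TOKEN" \
    "https://slack.com/api/files.list?user=$USER_ID&count=100" |
    extract_file_ids |
    while read -r id; do
      curl -s -X POST -H "Authorization: Bearer $TOKEN" \
        -d "file=$id" https://slack.com/api/files.delete >/dev/null
    done
}
```

Calling `delete_my_files` repeatedly until `files.list` comes back empty stands in for proper pagination.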

guy16:09:11

im done at least 👍

danm16:09:44

A picture of my damaged front fork, and a picture of the doggo

danm16:09:45

Thrilling

maleghast16:09:32

Anyone out there got a few moments to spare to look at a Kubernetes Service definition that just does not seem to be working despite being exactly as per the tutorials and examples that I can find..?

maleghast16:09:02

I've got my Pod up and running from my Docker container, but now I can't expose it...

mccraigmccraig16:09:48

i don't know k8s @maleghast, but from my dc/os experience... first thing to check - is your app accessible on its private port on your pods ip address ? (assuming there is a public / load-balanced port which gets routed to the private port/ip of the pod(s))

mccraigmccraig16:09:31

if the private port is automatically allocated (it usually is on dc/os) then you'll have to use one of the k8s tools to look it up

maleghast16:09:11

Yeah, I don't know which node the Pod is on...

mccraigmccraig16:09:17

presumably something in the k8s console or cli will tell you?
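For the record, the CLI does tell you: a couple of standard `kubectl` queries cover the node/IP question (sketch only, assumes `kubectl` is pointed at the cluster; the pod name is a placeholder).

```shell
# Which node is each pod on, and on what pod IP? `-o wide` adds those columns.
kubectl get pods -o wide

# Full detail for one pod, including node, pod IP, and container ports
# ("foundation-abc123" is a placeholder for the real pod name):
kubectl describe pod foundation-abc123

# Quick reachability check that bypasses Services entirely: forward a local
# port straight to the pod's private port (3080 here is a placeholder).
kubectl port-forward pod/foundation-abc123 8080:3080
```

`port-forward` is handy for confirming the app answers on its private port before debugging the Service layer on top.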

mccraigmccraig16:09:22

or maybe the pod has its own vip ?

maleghast16:09:30

Well, the K8s Dashboard is pretty good, and I can see that the Pod is "right", if you see what I mean? I can see that the Container has started up correctly, and everything.

maleghast16:09:09

So the K8s "way" would then be to define a "service" that exposes the pod via an AWS ELB

maleghast16:09:53

And my service config is basically line for line like the docs, but with my pod info swapped in, and yet it doesn't start up... 😞

maleghast16:09:14

The Service gets created but no ELB

maleghast16:09:29

so no publicly available ingress to the app / container.

mccraigmccraig16:09:40

(guessing here, but) the service probably creates a load-balanced port which then distributes to the pod-private ports ?

mccraigmccraig16:09:53

while the ELB just fronts the service port ?

maleghast16:09:47

possibly, the docs seem to insist that creating a Service of type "LoadBalancer" will set up an ELB that routes traffic directly to the app that is running in the Pod

mccraigmccraig16:09:31

is this EKS you are looking at ?

maleghast16:09:50

I could not get EKS in eu-west-1, so I used Kops to build a K8s cluster in that region.

maleghast16:09:09

This means that I have been able to give my cluster nodes access to my RDS database

maleghast17:09:28

and I have visible logs that it's started up and connected to the DB, so I am pleased with that part of the experience 🙂

mccraigmccraig17:09:03

is the LoadBalancer definitely talking about an ELB ? (that seems quite a high level of AWS integration)

maleghast17:09:13

yeah, hold on...

maleghast17:09:37

I am doing the classic LoadBalancer approach, not the newer NLB approach

mccraigmccraig17:09:12

hmm - i suspect that doesn't create an AWS ELB, but instead creates an nginx-based load-balancer service on k8s

mccraigmccraig17:09:24

(but i may be wrong)

maleghast17:09:56

Well, either way, as far as I can tell I am following the instructions and creating the Service exactly as described. I should get back a public IP / DNS name, and that's what's not happening... 😞

maleghast17:09:32

(I mean I don't care if it's a _real_ ELB, I just care that I get a route in to the Container / App)

mccraigmccraig17:09:00

sure - so k8s console isn't showing a load-balancer service then ?

mccraigmccraig17:09:05

ah, my mistake - it looks like the spec/selector in that app-config will probably need to be adjusted to your app... did you do that ?

maleghast17:09:24

Yep, this is my service def:

apiVersion: v1
kind: Service
metadata:
  name: foundation
  namespace: default
  annotations: {}
  labels:
     app: foundation-beta
     env: pre-prod
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3080
  selector:
    app: foundation-beta
type: LoadBalancer

maleghast17:09:08

I don't get any errors, and a Service does get created, but the ELB / Ingress side of it does not work.

maleghast17:09:15

I don't know if it's in any way significant, but the service that does get created doesn't have the targetPort set correctly either.
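A likely culprit, assuming the pasted indentation above is faithful: `type: LoadBalancer` sits at the top level of the manifest rather than nested under `spec:`. Kubernetes then ignores it and falls back to the default `ClusterIP` type, which matches the symptom of a Service being created with no ELB. A corrected version of the same definition, applied via a heredoc:

```shell
# Reapply the Service with `type:` moved inside spec. All names and ports are
# taken from the config quoted above.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: foundation
  namespace: default
  labels:
    app: foundation-beta
    env: pre-prod
spec:
  type: LoadBalancer   # nested under spec, not at the top level
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 3080
  selector:
    app: foundation-beta
EOF
```

`kubectl get service foundation` should then show an external hostname once the cloud provider has provisioned the load balancer.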

mccraigmccraig18:09:45

you've gone beyond my generic knowledge now @maleghast - specific k8s knowledge seems required to help

maleghast18:09:06

*nods* No worries - thanks for taking the time anyway 🙂

maleghast18:09:29

I am going to look for K8s people - amazingly there is no #kubernetes channel on here...