#clojure-uk
2018-09-11
yogidevbear07:09:39

@maleghast you could maybe try #devops ? It looks quiet in there, but you never know...

alexlynham07:09:04

@maleghast you know you can dry run a container and service definition using the cli and just redirect the output to a file?

alexlynham07:09:11

can’t remember the flag

alexlynham07:09:44

sorry if you already knew that, but I find that’s the easiest way to make sure the pods (deployment) and service link up correctly
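
A sketch of the dry-run approach Alex describes, assuming kubectl 1.11-era syntax; the resource name, image, and file names are placeholders:

# generate the manifests without creating anything, then apply them once they look right
# (the flag was plain --dry-run around kubectl 1.11; newer releases spell it --dry-run=client)
kubectl run foundation --image=<your-image> --port=3080 --dry-run -o yaml > deployment.yaml
kubectl create service loadbalancer foundation --tcp=80:3080 --dry-run -o yaml > service.yaml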

maleghast07:09:43

Thanks @yogidevbear I will when I get to the office :-)

maleghast07:09:59

Also, thanks @alex.lynham I will try that too :-)

alexlynham07:09:14

I often have had trouble connecting the deployment and service (leaky abstractions innit) so I find having a canonical working one helps as a template :)

alexlynham07:09:46

(sorry, on a train :))

maleghast08:09:59

@firthh - Maybe this is the problem, I don't have one

maleghast08:09:18

I thought that I only needed an Ingress if I wanted a single LoadBalancer to route to many services

firthh08:09:34

I don’t think so

firthh08:09:58

I think a service will just do loadbalancing between pods in k8s but not expose anything outside the cluster

maleghast08:09:48

and seeing as I only have one Pod it's not bothering to spin up the LoadBalancer..?

firthh08:09:14

So the loadbalancing inside k8s is usually done via dns

firthh08:09:36

So other pods, when they’re looking up your service, will round-robin across all the IP addresses of pods with that name

firthh08:09:51

The service itself won’t create any pods

maleghast08:09:51

I see (I think, I mean it seems to make sense)

maleghast08:09:22

Yeah, I know that, I created a Pod first, and then tried to expose it to the outside world by creating a Service.

firthh08:09:08

So I’m pretty sure services aren’t exposed to the outside world. They’re just exposed to other things in the cluster
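
For context on the DNS point above, a quick sketch of what in-cluster discovery looks like from another pod, assuming a Service named foundation in the default namespace:

# the service name resolves via cluster DNS from inside any pod
curl http://foundation/                               # same-namespace shorthand
curl http://foundation.default.svc.cluster.local/     # fully-qualified form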

maleghast08:09:33

@firthh Here's my Pod File:

apiVersion: v1
kind: Pod
metadata:
  name: foundation-beta
  labels:
     app: foundation
     env: pre-prod
spec:
  containers:
  - name: foundation-beta
    image: 
    ports:
    - containerPort: 3080

maleghast08:09:04

That is definitely working, I can see that the container is started up properly in the logs and everything (K8s Dashboard UI)

maleghast08:09:51

@firthh - That Services are for exposing things to one another on-cluster makes a lot of sense now that you say it.

firthh08:09:23

That looks like I’d expect

firthh08:09:57

I don’t think I ever created pods directly, we used deployments which would manage the pods

maleghast08:09:19

Oh, ok, that's interesting.

firthh08:09:40

The template key in there should be the same as your pod definition
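
A sketch of the Deployment firthh describes, wrapping the earlier pod definition under spec.template; the replica count is illustrative and the image value is a placeholder:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foundation-beta
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foundation
  template:                    # same shape as the Pod definition above
    metadata:
      labels:
        app: foundation
        env: pre-prod
    spec:
      containers:
      - name: foundation-beta
        image: <your-image>
        ports:
        - containerPort: 3080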

maleghast08:09:57

*nods* So this is to deploy a set of pods that share the workload?

firthh08:09:30

And the service can then loadbalance over all of those pods

maleghast08:09:40

Then I would still need an "Ingress" to get a route in from the outside world..?

firthh08:09:13

I think you need one ingress configuration per service you want to expose

maleghast08:09:35

OK... So there's one more thing... I want the inbound traffic to be plain HTTP on port 80 - can the Ingress handle proxy-ing to my container port (3080)?

firthh08:09:09

You can, but I don’t remember at which layer that happens

firthh08:09:18

The service might be responsible for that

maleghast08:09:40

Ok, so I need Ingress -> Service -> Deployment ?

firthh08:09:39

👍

firthh08:09:18

There are a few moving parts
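
A sketch of the Ingress end of that chain, in the extensions/v1beta1 form current for k8s ~1.11; it sends plain HTTP on port 80 to the foundation Service, and the Service's targetPort (3080) does the final hop to the container, which is the port mapping firthh guessed the Service handles:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: foundation
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: foundation    # the Service then maps port 80 -> targetPort 3080
          servicePort: 80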

maleghast08:09:37

There certainly are, but so far I still like this better than dealing with ECS

firthh08:09:07

I think the abstractions all make sense, but there are just lots of them to understand

jasonbell08:09:30

I'm quite liking Fargate/ECR/ECS

jasonbell08:09:00

Especially for scheduled transforms and data pulls. Scheduler is rather handy for all that.

firthh08:09:28

I was listening to a podcast this morning about someone using fargate to handle scaling for bursty traffic and ECS/EKS for normal workloads. It was really interesting

firthh08:09:00

It sounded like Fargate was horrifically expensive though

jasonbell08:09:08

yeah it's about 3-4x more per hour than an instance. I'm making savings in other areas though, by a large margin

firthh08:09:41

Yeah. Is Fargate like serverless in that you have your app scale to zero?

jasonbell08:09:38

I like Fargate, once you get the roles hell dealt with 🙂

maleghast08:09:00

@jasonbell - I am suddenly starting to think that Fargate might be an option while I get my head around K8s...

jasonbell08:09:47

@maleghast Might be, really depends on the use case and I've not been reading much of the thread over the last 24 hours so I don't know all the deets.

jasonbell08:09:02

Totes Apols as they say on instagram

maleghast08:09:30

I just need to get our first Containerised app deployed, and I really hated ECS so I was trying to do K8s (via Kops) instead...

maleghast08:09:02

I am starting on a road that leads to full automation, but I also just need to get the damn Container deployed... 😞

maleghast08:09:12

I am wondering if I should kill the Kops-built cluster that I have created and just kick the Container (which is already on ECR) into use via Fargate, while I accept that I need to learn more about K8s

guy08:09:49

Hey folks, why do you use (:gen-class) for ns’s which have your main in them? Excuse my poorly worded question, I’ve not drunk my coffee yet

guy08:09:12

(for example do lein new app, and it generates a core.clj with a main method and gen-class at the top)

guy08:09:01

and morning all!

maleghast08:09:49

@guy - At the risk of being wrong / laughed at, I think that it's got something to do with creating Uberjars from the app.

maleghast08:09:02

(strange Java magic, in other words)

guy08:09:20

Yeah I thought it was for ahead-of-time compilation? So it speeds up either the running or creation of uberjars?

guy08:09:33

But I’ve only ever seen it in ns’s that have a main method

maleghast08:09:36

I think that it needs to be there to show the tools where the "root" of the app is when building the Uberjar, but I may well have completely misunderstood.

mccraigmccraig08:09:46

@guy you can create a java class which can be run directly with the java command (rather than as a script for the clojure runtime), and can also be the default jar class to run with java -jar in an uberjar
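
A minimal sketch of what that looks like; this is essentially the shape lein new app generates, with a hypothetical project name:

(ns my-app.core
  ;; emit a named Java class (my_app.core) for this namespace so it can be
  ;; the Main-Class of an uberjar and run with plain `java -jar`
  (:gen-class))

(defn -main
  "Entry point invoked as the generated class's static main method."
  [& args]
  (println "Hello, World!"))

The uberjar wiring lives in project.clj, where :main points at this namespace and the :uberjar profile AOT-compiles it.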

guy09:09:50

alright thanks a bunch!

maleghast08:09:54

(and I am hoping to learn the truth)

danielneal08:09:58

uberjars

😂 20
mccraigmccraig09:09:49

sadly, that is a poor representation of an uberjar @danieleneal

firthh09:09:35

Damn copy paste errors

😂 4
alexlynham09:09:09

@maleghast if you use deployments rather than raw pods as well you get all the autoscaling goodness 🙂

danielneal09:09:30

when maven goes down

😂 12
maleghast09:09:50

@alex.lynham - Yeah, I realised that based on a few people's comments and re-reading the docs with a new perspective.

maleghast10:09:32

@alex.lynham - I have my Container deployed as a Deployment now, but the Service I have is not being provisioned with the stuff in my .yaml file... It should be port 80, target port 3080 and it should be of type LoadBalancer, but what I get back when I run kubectl describe service foundation is info for a ClusterIP service on port 80 TargetPort 80... 😞

maleghast10:09:57

Makes no sense to me...

kind: Service
apiVersion: v1
metadata:
  name: foundation
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3080
  selector:
    app: foundation-beta
  type: LoadBalancer

maleghast10:09:24

The above is what I am using and is straight off the K8s documentation, just with different values for the fields (like name, port, targetPort)

maleghast10:09:59

but I get this when I run kubectl describe service foundation:

Name:              foundation
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=foundation-beta
Type:              ClusterIP
IP:                100.67.253.9
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         100.96.1.5:80,100.96.2.7:80
Session Affinity:  None
Events:            <none>

maleghast10:09:40

It's as if the type and targetPort I am specifying are just being completely ignored 😞

maleghast10:09:57

The only thing that I can think at this point is that the Cluster (created with Kops) does not have the relevant permissions to create ELBs..? Not sure how to fix that either 😞

firthh10:09:44

What happens if you update the port from 80 to 81, does that get ignored as well?

maleghast10:09:30

I'll give that a try...

maleghast10:09:17

as above, but both port and targetPort are now 81

firthh10:09:58

That’s really strange then. I thought there might have been a default port and the issue was elsewhere

maleghast10:09:42

I was wondering if there was something like that in play...

maleghast10:09:51

but there just seems to be something "wrong"

maleghast10:09:05

I am using the latest version of kubectl that I can get from AUR - there is a more recent version (I have 1.11.2 and the most recent version is 1.11.3), so I suppose that there may just be a bug in kubectl, but I feel reasonably sure that Google-ing this stuff would raise a LOT more results if that were the case.
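
A sketch of a couple of checks that might narrow this down, assuming the manifest is saved as service.yaml (the delete/recreate step is only sensible on a non-production service):

kubectl apply -f service.yaml             # confirm this is the file actually being applied
kubectl get service foundation -o yaml    # compare spec.type and spec.ports with the file on disk
# if a stale definition is stuck, recreating forces the new spec through:
kubectl delete service foundation && kubectl apply -f service.yaml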

Rachel Westmacott10:09:25

is anyone aware of a map “restructuring” macro? eg: (restructure foo bar) => {:foo foo, :bar bar}

maleghast10:09:24

@peterwestmacott - I am afraid not; I am still living in that place where the first rule of macro club is... 😉

👍 4
Rachel Westmacott10:09:00

it wouldn’t be a particularly hard one to write, or particularly useful - I just wondered

bronsa10:09:28

(defmacro restructure [& args] `(zipmap ~(mapv keyword args) ~(vec args)))

firthh10:09:40

What benefit is there having this as a macro over just a function?

bronsa10:09:21

you can't do this as a function

Rachel Westmacott13:09:27

any reason why mapv as opposed to map?

bronsa13:09:10

`(zipmap '~(map keyword args) ~(vec args))
would've worked just as well

bronsa13:09:22

but if you expand to lists instead of vectors you need to quote the expanded list :)

Rachel Westmacott13:09:19

do you know why that is?

bronsa14:09:21

[1 2] vs (1 2)
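
A quick sketch pulling the thread together: usage, plus the expansion that shows both why it has to be a macro (it needs the symbol names) and why the vector avoids the quoting issue:

(defmacro restructure [& args]
  `(zipmap ~(mapv keyword args) ~(vec args)))

(let [foo 1 bar 2]
  (restructure foo bar))
;; => {:foo 1, :bar 2}

(macroexpand-1 '(restructure foo bar))
;; => (clojure.core/zipmap [:foo :bar] [foo bar])
;; a vector literal like [:foo :bar] evaluates to itself, whereas an unquoted
;; list (:foo :bar) would be treated as a call, hence the '~ in the list version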

danielneal14:09:31

I do (def 1 (partial conj [1])) to get round this personally

😱 12
😂 24
dominicm18:09:42

ProxyJump is relatively new. I'm glad ProxyCommand is dead.

mccraigmccraig19:09:37

ProxyCommand looks a lot more complicated

dominicm20:09:14

It is. But it worked.

mccraigmccraig21:09:08

did it work with tunnels too ?
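
For reference, a sketch of the two styles being compared; hostnames are made up, and ProxyJump needs OpenSSH 7.3 or later:

# ~/.ssh/config
Host internal-box
    HostName 10.0.0.5
    # older style: run a second ssh as a stdio tunnel through the bastion
    # ProxyCommand ssh -W %h:%p bastion.example.com
    # newer style: a single directive that hops via the bastion
    ProxyJump bastion.example.com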

yogidevbear22:09:51

Is it just me or does this seem like a really poor choice for a comic caption on a Clojure article? https://porkostomus.gitlab.io/posts-output/2018-09-11-Just-Juxt-28/

guy08:09:22

I’ve always thought it’s tricky to have humour/jokes/comics and portray a professional image. Someone is always going to dislike your blog because of it and therefore complain, and the content of your blog becomes irrelevant. I don’t personally find the joke funny, as it’s about as funny as “your mum”-level jokes. But if it’s just some random person’s blog I don’t really care either. Is it someone “important”?

seancorfield23:09:47

I would consider that very inappropriate and unprofessional, yes.

seancorfield23:09:24

FYI, here's the original (since he doesn't credit the cartoonist): http://spikedmath.com/525.html

seancorfield23:09:14

(and quite a few of those are a bit NSFW, depending on your cultural norms)