This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2017-11-21
Channels
- # beginners (5)
- # boot (15)
- # capetown (1)
- # chestnut (2)
- # cljs-dev (9)
- # cljsjs (3)
- # cljsrn (1)
- # clojure (190)
- # clojure-brasil (2)
- # clojure-greece (14)
- # clojure-italy (3)
- # clojure-poland (8)
- # clojure-romania (1)
- # clojure-russia (2)
- # clojure-serbia (3)
- # clojure-spec (38)
- # clojure-uk (98)
- # clojure-ukraine (2)
- # clojurescript (65)
- # clojurex (1)
- # core-async (16)
- # cursive (16)
- # datomic (3)
- # defnpodcast (7)
- # emacs (11)
- # funcool (2)
- # hoplon (16)
- # jobs (1)
- # leiningen (4)
- # lumo (9)
- # off-topic (2)
- # om (1)
- # other-languages (1)
- # protorepl (1)
- # re-frame (50)
- # reagent (16)
- # reitit (32)
- # remote-jobs (1)
- # rum (1)
- # shadow-cljs (73)
- # spacemacs (36)
- # specter (21)
- # sql (6)
- # unrepl (107)
- # untangled (4)
@dominicm at least you were only swabbed. @paulspencerwilliams I stuck my finger somewhere I probably shouldn't have. I removed the cover of the bathroom extractor fan to try to sort an issue out and tried pressing on something (the light switch which the fan is connected to was off) and I got a shock. I'm pretty sure the guy who installed the fan took some shortcuts
@yogidevbear that sounds, uh, exciting?
Haha, it was definitely going against my better judgement
How are you doing today @otfrom?
All good here.
Does anyone know of a good primer for setting up a Linux / Docker / Clojure production environment?
Morning Thomas
morning @yogidevbear how are you today?
mornings!
@yogidevbear iirc there are docker images for leiningen and boot if that helps
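For reference, a minimal sketch of using the official `clojure` image on Docker Hub for that, mounting the project directory; the tag names are illustrative, check the registry for the current lein/boot variants:
```
# tags are illustrative; pick the lein/boot variant you need from Docker Hub
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app clojure:lein lein uberjar
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app clojure:boot boot --version
```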
Cool, thanks Peter. Doing well Thomas. How are things in the Netherlands? (You are in the Netherlands right?)
the darkness (in the morning mainly) is because we're basically on Berlin time here while we're a lot closer to GMT
About 30 minutes. 25-28 actual pedalling, up to about 35 when you factor in waiting at lights etc. Pretty much bang on 7 miles
but you have all those hills to consider @thomas :snow_capped_mountain:
of course!!! and steep climbs as well... and only up as well, both ways :thinking_face:
i have 500m to 700m of climbing on my regular ride, depending on which way i go
What distance is that over? 700m over 65km is easy, 700m over 35km would be hard (for me)
I think it's 700m over 0.3km
@carr0t 500m over 20k, 700m over 25k. up down and around the south downs
haha, using strava converted me to metric for cycling
I normally consider anything around/more than 1000 feet every 10 miles to be really hard. Most of the club rides end up about 1500-2000 ft in 35-40 miles
that's a lot of climbing in not a lot of distance. where was that ?
And here's me trying to run 10km
@mccraigmccraig Peak District. I'm in south Mancs, cycle club is based out of a small town a few miles south of me, so our rides normally go either south into Cheshire or East into the Peak District
And I've done a century once, which was about 6000-6500 ft of climbing, including the 'longest constant hill in the UK', which is steep in parts (17% IIRC), but mostly around 7% or so, for 5.5 miles / 9 km
@yogidevbear don't develop clojure in a docker container
Okay. How would you go about auto-scaling and load balancing, etc, for a large Clojure project?
@carr0t ah, my hometown - i'm from heald-green<cheadle<stockport
Good to know
@mccraigmccraig Oh cool. Yeah, I'm just down from Parrs Wood, right on the border of Mancs as you head over the mersey out to Cheadle. I head out through Heald Green to get to the club down in Alderley Edge
i'm just outside brighton now... very close to the south downs, but they aren't nearly as varied as the peaks
@yogidevbear also EC2 / ELB / auto-scaling here
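To make that concrete, the usual shape is an uberjar per release, an auto-scaling group whose instances pull and run it on boot, and an ELB in front doing health checks. A hypothetical EC2 user-data script (the bucket, paths and file names are made up):
```
#!/bin/bash
# hypothetical user-data for instances in the auto-scaling group:
# pull the current release (bucket/paths are made up) and run the uberjar;
# the ELB health-checks the app's HTTP port and balances across instances
aws s3 cp s3://example-releases/app-standalone.jar /opt/app/app.jar
nohup java -jar /opt/app/app.jar > /var/log/app.log 2>&1 &
```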
With everyone talking about deployment choices… IF you were going to deploy in Production on Docker, would anyone use ECS, or is the consensus to go with K8s..? Also, @yogidevbear, I agree with @dominicm wholeheartedly re development for Clojure - it's not worth the pain working in a container… However, it might be worth your time to use containers for DBs and other infra, like Elasticsearch, PostgreSQL etc. I do that and find it useful, but YMMV. Of course if you are deploying on Docker you are going to need to have a baked-in, first-class-citizen process to containerise your app past Dev… I would imagine I would do that by having a dev-stable step where I'd build an überjar and containerise that, to make sure it all works, but others may see this as an unnecessary complication, I dunno… It's what I would do
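A minimal sketch of that containerise-the-uberjar step, assuming `lein uberjar` output; the jar name and port are made up:
```
# hypothetical Dockerfile: bake the uberjar into a JRE base image
FROM openjdk:8-jre-alpine
COPY target/app-standalone.jar /app/app.jar
EXPOSE 3000
CMD ["java", "-jar", "/app/app.jar"]
```
Then `docker build -t my-app .` once the uberjar exists, and push the image to whatever registry the deployment target pulls from.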
> However, it might be worth your time to use containers for DBs and other infra, like Elasticsearch, PostgreSQL etc.
Why would you want to spend time taking care of DBs when you've got the likes of RDS for it?
I like having containers for DBs etc locally as it means i can version match without jumping through hoops - just grab the container with the appropriate version
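For example, pinning local DB containers to whatever versions production runs (the versions below are just examples):
```
# run pinned versions locally so they match production; versions are examples
docker run -d --name pg -p 5432:5432 postgres:9.6.5
docker run -d --name es -p 9200:9200 -e "discovery.type=single-node" \
  docker.elastic.co/elasticsearch/elasticsearch:5.6.3
```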
If you are deploying to the likes of Marathon/DCOS then you'll need the app containerised. I've done it a few times on various things.
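For context, once the image is in a registry, a Marathon/DC/OS deploy is essentially a Marathon app definition plus a CLI call; this assumes the DC/OS CLI is installed and authenticated, and the `app.json` contents (which would point at the pushed image) aren't shown here:
```
# hypothetical: app.json is a Marathon app definition for the containerised app
dcos marathon app add app.json
dcos marathon app list
```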
@maleghast i was happy with mesos+marathon for ages... just moved everything over to dc/os (which is the evolution of mesos+marathon) and that seems cool - i've got all my persistent stuff - kafka/cassandra/elastic running under dc/os now
(previously the persistent stuff was deployed separately, 'cos mesos resource allocation wasn't sufficiently capable. it is now)
We've used both k8s and ECS. ECS doesn't seem to have the service abstraction that K8S has so we're currently thinking of moving back to K8S
A benefit of ECS is you can manage it via the AWS console but when we were using k8s we were happy enough with the command line utils
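As an illustration of that service abstraction via the command line utils (the image name and ports are made up, and exact flags vary a bit by kubectl version):
```
# deployment + service abstraction in a few kubectl calls; names/ports are made up
kubectl create deployment my-app --image=example-registry/my-app:1.0.0
kubectl scale deployment my-app --replicas=3
kubectl expose deployment my-app --type=LoadBalancer --port=80 --target-port=3000
```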
@davesnowdon being familiar with K8S do you know why someone would want to run it on top of DC/OS ? mesosphere have been shouting about that capability recently but i don't understand why
@mccraigmccraig I don't have any experience with DC/OS. Maybe if you really want to use K8S, running it on top of DC/OS makes the host management easier, as K8S didn't provide any support for that back when we were using it. On the face of it, though, it seems like more hassle than just using DC/OS to run the workloads directly
This is pretty interesting, but it does mean that I need to investigate DC/OS so that I can understand if it's appropriate to my needs. I had already kinda decided that Mesos + Marathon was a lot of heavy (if very good) tech that I probably didn't need.
As such I was planning to deploy a K8s cluster with Kops, but perhaps I'll reconsider… Thanks
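For reference, the Kops route sketched there looks roughly like this; the state bucket, cluster name and zone are all made up:
```
# hypothetical kops bootstrap; state bucket and cluster name are made up
export KOPS_STATE_STORE=s3://example-kops-state
kops create cluster --name=k8s.example.com --zones=eu-west-1a --yes
kops validate cluster --name=k8s.example.com
```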
what is making you reconsider dc/os @maleghast?
yeah @davesnowdon, i don't know K8S but my thoughts were the same - it seems like unnecessary hassle. maybe all the noise is just for box ticking or marketing reasons
Opportunity to give a lightning talk at #clojurex - one of our speakers has dropped out, so if you would like to speak please get in touch (either here or john at jr0cket dot eu). Thanks
@mccraigmccraig - Based on reading the dc/os page on it, I thought it was so that you can migrate K8s infrastructure into a wider dc/os environment that handles targets other than Containers and Container Orchestration, so that you can leverage DC/OS to do real-machine and virtual-machine clustering as well as clustering to host Container Swarms. I may be wrong about that, but I think they are playing to capture market share.
uh @maleghast pretty much everything in dc/os runs in a container i think... either in a docker container or mesosphere's own container runtime - iirc you can run shell commands as a mesos job, but it will still get dynamically wrapped in a container for isolation
I haven't dug deep enough to know that yet, but that does explain why you would see K8s on top as utterly irrelevant.
yep - DC/OS and K8S look pretty much like the same thing
haha, but which one is which @dotemacs ?
I think that the difference is that K8s is ONLY for Containers, whereas (though it uses its own container to achieve the result, thanks for explaining that @mccraigmccraig) Mesos and therefore dc/os can be used to cluster other kinds of work and indeed 'real' machines. K8s can't manage metal alongside Containers, at least not as far as I know.
@maleghast the internet begs to differ https://blog.alexellis.io/kubernetes-in-10-minutes/
That's not what I am saying. Mesos can be used to manage _actual_ machines as well as fleets of containers. K8s is a technology for building a cluster which can host Containers, but it can't manage _actual_ machines. Of course it can be run on _actual_ machines, but you can't use Kubernetes to administer a physical cluster of machines. You need a physical cluster to run it, but you can only deploy containers into it. DC/OS and Mesos both talk about themselves as Cluster Management Technology for workloads beyond Containers.
I may be barking up the wrong tree AND failing to express myself clearly - these are both things I have 'form' for…
I was at least under the impression that Mesos / DC/OS could be used, for example, to manage a group of physical machines that are being used, natively, as a Hadoop Cluster. You can run a Hadoop Cluster on K8s, but all the nodes in the Hadoop Cluster will actually be Containers, 'cos K8s can only manage Containers. Has that changed / have I always been wrong about this distinction?
well, given a mesos-framework for hadoop and a group of machines running dc/os then you can run hadoop on dc/os - but the machines have to be running dc/os first, and all the hadoop processes will be running in containers determined by the mesos-framework
Ok, but not Docker Containers, and Mesos will run a wide and varied selection of workloads, yeah?
K8s can only deploy Docker Containers (I guess it might be able to run other Containers, I don't _know_ that it can't)
I think it's only a semantic difference really, but my thinking was that the DC/OS people might want to allow people who are already invested in K8s as their deployment target for Docker Containers to make use of DC/OS to manage their metal above and beyond K8s, and complect their K8s environment into a larger DC/OS environment that is also managing non-Docker workloads..?