This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-08-02
Channels
- # announcements (14)
- # beginners (133)
- # cider (27)
- # cljs-dev (7)
- # cljsjs (13)
- # clojure (105)
- # clojure-dev (58)
- # clojure-italy (1)
- # clojure-nl (17)
- # clojure-russia (33)
- # clojure-spec (5)
- # clojure-uk (154)
- # clojured (1)
- # clojurescript (35)
- # cloverage (4)
- # cursive (35)
- # datomic (58)
- # duct (8)
- # editors (9)
- # emacs (15)
- # events (1)
- # figwheel (47)
- # figwheel-main (132)
- # hyperfiddle (5)
- # immutant (29)
- # instaparse (21)
- # luminus (3)
- # off-topic (5)
- # onyx (5)
- # overtone (5)
- # pedestal (8)
- # re-frame (7)
- # reagent (6)
- # reitit (3)
- # schema (2)
- # shadow-cljs (178)
- # spacemacs (49)
- # specter (2)
- # sql (1)
- # tools-deps (110)
I'm having an issue getting CORS working; I've followed the guides and the service map has `:io.pedestal.http/allowed-origins {:creds true :allowed-origins (constantly true)}`, but I get a 404 on the browser's OPTIONS pre-flight request
@dadair I saw that with pedestal-lacinia. IIRC a failed pre-flight means the browser rejects the request before sending it along. In my case it was sending a query that didn't match the API spec. Maybe that helps you narrow down where the problem is?
Do I need to define a catch-all :options route? From looking at the Pedestal CORS code, I thought it handles all of this in interceptors that run before the router
Found the problem(s):
1. I was setting the ::http/allowed-origins key after applying the default-interceptors function (which relies on the presence of that key to add the appropriate interceptor).
2. I was providing my own :interceptors [] key, which prevented default-interceptors from defining the defaults to begin with (I thought it would add to the defaults, rather than prevent them).
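For reference, a minimal sketch of the working order of operations described above, assuming a hypothetical Jetty service on port 8080 with a placeholder hello handler. The key points are that ::http/allowed-origins sits in the service map before the default interceptor chain is built, and that no custom ::http/interceptors key is supplied:
```clojure
(ns example.server
  (:require [io.pedestal.http :as http]
            [io.pedestal.http.route :as route]))

(defn hello [_request]
  {:status 200 :body "hello"})

(def routes
  (route/expand-routes
   #{["/hello" :get hello :route-name :hello]}))

(def service-map
  {::http/routes routes
   ::http/type   :jetty
   ::http/port   8080
   ;; Present before the default interceptors are built, so the CORS
   ;; interceptor (which answers the OPTIONS pre-flight) gets added.
   ::http/allowed-origins {:creds true
                           :allowed-origins (constantly true)}})

(defn start []
  ;; create-server builds the default interceptor chain when no
  ;; ::http/interceptors key is supplied; overriding that key would
  ;; skip the CORS interceptor entirely.
  (-> service-map
      http/create-server
      http/start))
```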
Are there any unexpected consequences one should be aware of when running multiple pedestal-jetty servers (different ports) in a single JVM? Outside of general performance considerations.
We're trying to trim our proliferation of services by combining them into a single JVM, or at least fewer of them (we don't expect huge load). I have multiple servers running, and everything seems to be fine; I just wanted to make sure there wasn't going to be anything biting us in the butt later
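As a rough sketch of that setup (ports, routes, and handlers here are hypothetical), each service gets its own map and port, with ::http/join? false so that several Jetty servers can run side by side in one JVM:
```clojure
(ns example.multi
  (:require [io.pedestal.http :as http]
            [io.pedestal.http.route :as route]))

(defn handler-a [_request] {:status 200 :body "service A"})
(defn handler-b [_request] {:status 200 :body "service B"})

(defn service-map [port routes]
  {::http/routes (route/expand-routes routes)
   ::http/type   :jetty
   ::http/port   port
   ;; Don't block the calling thread, so multiple servers can be started.
   ::http/join?  false})

(defonce servers (atom []))

(defn start-all []
  (reset! servers
          (mapv (comp http/start http/create-server)
                [(service-map 8080 #{["/a" :get handler-a :route-name :a]})
                 (service-map 8081 #{["/b" :get handler-b :route-name :b]})])))

(defn stop-all []
  (run! http/stop @servers))
```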
You can have as many or as few ports open as you want, and webapp containers certainly host many services at once. But logically, services can have lots of singleton resources (connection pools, caches, etc.), distinct lifecycles (before this, after that), and dependencies. Obviously there's additional context, but my knee-jerk reaction would be to make them truly one service: one port, one lifecycle, etc., easing the transition pain with a load balancer or proxy if need be, or fitting into the WAR-container pattern. Two microservices on two ports in one JVM, again absent additional context, feels like a "doesn't end well" situation to me.
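To illustrate that suggestion, a sketch of the consolidated alternative, continuing the hypothetical example.multi namespace above: both route sets served from one service map, one port, one lifecycle.
```clojure
;; Continuing example.multi: one service, one port, one lifecycle.
(def combined-routes
  (route/expand-routes
   #{["/a" :get handler-a :route-name :a]
     ["/b" :get handler-b :route-name :b]}))

(def combined-service
  {::http/routes combined-routes
   ::http/type   :jetty
   ::http/port   8080
   ::http/join?  false})

(defonce combined-server (atom nil))

(defn start-combined []
  (reset! combined-server
          (-> combined-service http/create-server http/start)))
```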