#clojurescript
2019-12-29
Joe00:12:19

what's the best way to set context variables in ClojureScript? i.e., to have different values for dev, test, and prod

Joe00:12:45

I'm using profile-specific source paths, but Leiningen doesn't seem to be picking up on them.

Joe00:12:26

e.g.,

:profiles
  {:dev
   {:cljsbuild
    {:builds
     [{:id "dev"
       :source-paths ["src" "env/dev"]}]}}
   :prod
   {:cljsbuild
    {:builds
     [{:id "prod"
       :source-paths ["src" "env/prod"]}]}}}

Joe00:12:21

It only seems to pick up :source-paths defined in the top-level project map.
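
A common way to get per-environment values with this kind of setup is to define the same namespace once per environment directory, so whichever :source-paths the active build includes determines the values. A minimal sketch (the myapp.config namespace and its contents are made up for illustration):

;; env/dev/myapp/config.cljs
(ns myapp.config)

(def debug? true)
(def api-url "http://localhost:3000")

;; env/prod/myapp/config.cljs
(ns myapp.config)

(def debug? false)
(def api-url "https://api.example.com")

Code elsewhere just requires myapp.config; building with something like lein with-profile dev cljsbuild once dev or lein with-profile prod cljsbuild once prod should then pick the matching variant, assuming the profile-specific :cljsbuild maps are actually being merged into the build.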

Prometheus17:12:05

Anybody have any experience with sente?

(sente/make-channel-socket! "/chsk" ; Note the same path as before
                                  {:type :auto
                                   :protocol :http
                                   :host "127.0.0.1"
                                   :port "8080"})]
Doesn't connect to the server, which has these handlers:
(GET  "/chsk" req (ring-ajax-get-or-ws-handshake req))
  (POST "/chsk" req (ring-ajax-post                req))
I checked chsk, which gives this output:
{:chsk
 {:ws-chsk-opts
  {:client-id "b557ccac-5149-461d-9683-4babd262cdfa",
   :chs
   {:internal #object[cljs.core.async.impl.channels.ManyToManyChannel],
    :state #object[cljs.core.async.impl.channels.ManyToManyChannel],
    :<server #object[cljs.core.async.impl.channels.ManyToManyChannel]},
   :params nil,
   :packer #object[taoensso.sente.EdnPacker],
   :ws-kalive-ms 20000,
   :url "",
   :backoff-ms-fn #object[taoensso$encore$exp_backoff]},
  :ajax-chsk-opts
  {:client-id "b557ccac-5149-461d-9683-4babd262cdfa",
   :chs
   {:internal #object[cljs.core.async.impl.channels.ManyToManyChannel],
    :state #object[cljs.core.async.impl.channels.ManyToManyChannel],
    :<server #object[cljs.core.async.impl.channels.ManyToManyChannel]},
   :params nil,
   :packer #object[taoensso.sente.EdnPacker],
   :ws-kalive-ms 20000,
   :url "",
   :ajax-opts nil,
   :backoff-ms-fn #object[taoensso$encore$exp_backoff]},
  :state_
  #object[cljs.core.Atom {:val {:type :auto, :open? false, :ever-opened? false, :csrf-token {:type :auto, :protocol :http, :host "127.0.0.1", :port "8080"}}}],
  :impl_
  #object[cljs.core.Atom {:val #taoensso.sente.ChWebSocket{:client-id "b557ccac-5149-461d-9683-4babd262cdfa", :chs {:internal #object[cljs.core.async.impl.channels.ManyToManyChannel], :state #object[cljs.core.async.impl.channels.ManyToManyChannel], :<server #object[cljs.core.async.impl.channels.ManyToManyChannel]}, :params nil, :packer #object[taoensso.sente.EdnPacker], :url "", :ws-kalive-ms 20000, :state_ #object[cljs.core.Atom {:val {:type :auto, :open? false, :ever-opened? false, :csrf-token {:type :auto, :protocol :http, :host "127.0.0.1", :port "8080"}}}], :instance-handle_ #object[cljs.core.Atom {:val "ad35f753-30cd-46dc-97d0-7df7e9bf43f8"}], :retry-count_ #object[cljs.core.Atom {:val 0}], :ever-opened?_ #object[cljs.core.Atom {:val false}], :backoff-ms-fn #object[taoensso$encore$exp_backoff], :cbs-waiting_ #object[cljs.core.Atom {:val {}}], :socket_ #object[cljs.core.Atom {:val #object[WebSocket [object WebSocket]]}], :udt-last-comms_ #object[cljs.core.Atom {:val nil}]}}]},
 :ch-recv #object[cljs.core.async.impl.channels.ManyToManyChannel],
 :send-fn #object[G__11281],
 :state
 #object[cljs.core.Atom {:val {:type :auto, :open? false, :ever-opened? false, :csrf-token {:type :auto, :protocol :http, :host "127.0.0.1", :port "8080"}}}]}
@connected-uids gives the following output
{:ws #{}, :ajax #{}, :any #{}}
I literally followed the example in the docs. Also, I'm running http-kit on the backend.
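
One thing that stands out in the state output above: the entire options map shows up under :csrf-token. That is what happens when the options map is passed in the position where newer sente versions expect an explicit CSRF token as the second argument, so the client may simply be called with the wrong arity. A rough sketch of the client call under that assumption (the token lookup and host value are placeholders, not sente API requirements):

;; assumes (:require [taoensso.sente :as sente]) in the ns form
(let [?csrf-token js/window.csrfToken ; hypothetical: however your page exposes the token
      {:keys [chsk ch-recv send-fn state]}
      (sente/make-channel-socket-client! "/chsk" ?csrf-token
                                         {:type :auto
                                          ;; no separate :port option here; fold it into :host
                                          :host "127.0.0.1:8080"})]
  (def chsk       chsk)
  (def ch-chsk    ch-recv)
  (def chsk-send! send-fn)
  (def chsk-state state))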

p-himik21:12:37

Where did you get the :port option from?

p-himik21:12:51

Also, please provide the server side code.

Prometheus21:12:22

I figured it out; the only remaining issue is that uids are not registered.

👍 4
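
For reference, which uids end up in :connected-uids is decided by the server-side :user-id-fn; as far as I recall, sente's default looks for a :uid in the Ring session, so without one the sets stay empty. A rough sketch of the server-side setup with the http-kit adapter, reusing sente's per-client id as the uid (adapt as needed):

;; assumes [taoensso.sente :as sente] and
;; [taoensso.sente.server-adapters.http-kit :refer [get-sch-adapter]] are required
(let [{:keys [ch-recv send-fn connected-uids
              ajax-post-fn ajax-get-or-ws-handshake-fn]}
      (sente/make-channel-socket-server!
        (get-sch-adapter)
        ;; every connecting client gets registered under its :client-id
        {:user-id-fn (fn [ring-req] (:client-id ring-req))})]
  (def ring-ajax-post                ajax-post-fn)
  (def ring-ajax-get-or-ws-handshake ajax-get-or-ws-handshake-fn)
  (def ch-chsk                       ch-recv)
  (def chsk-send!                    send-fn)
  (def connected-uids                connected-uids))
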
Drew Verlee18:12:15

I have a seemingly preposterous question. Is there anything that prevents the server from sending the client/browser the instructions to update the DOM directly? As in, you could send those, along with the data needed to update the client, and potentially get the client to the paint stage faster. This would mean the server would need to duplicate the client's environment.

smashedtoatoms18:12:41

That's kinda what Elixir/Phoenix does with LiveView I believe.

Drew Verlee18:12:26

Interesting. I'll have to take a look.

lilactown21:12:46

Why not send HTML?

lilactown21:12:56

I'm not sure exactly what you are saying

p-himik21:12:34

I believe the question asks about having something like React with virtual DOM on the backend and actual DOM on the frontend.

Drew Verlee21:12:06

Yes. It's my understanding that there are multiple entry points / ways to update the actual DOM. I'm just exploring the idea of what it would mean to have the server do more work. Maybe sending the client multiple forms of the update and having the fastest version win.

Drew Verlee21:12:10

Multiple is an exaggeration; I can think of two common ways: the browser parses HTML, or it runs a programming language like JavaScript that updates the DOM directly.

p-himik21:12:54

I feel like there's huge potential for some messed-up state. I have no idea how LiveView manages to do that in a robust way (if it does). The browser side is synchronous: if you press a button, you will not be able to interact with the page until the event is processed. No problems here whatsoever. But if you bring the backend in, it suddenly becomes asynchronous. You press a button, the event is handled in the background by the backend, and in the meantime you can do anything you want with the UI. How do you then process the incoming update from the backend? Of course, maybe it's something simple that I just don't see right now, I don't know. In my applications, about half of the work happens purely on the frontend; I don't even need to query anything. In this scenario, involving the backend seems unnecessary at best.

p-himik21:12:20

If you just want to speed up the initial page loading, it's called SSR, or Server-Side Rendering. Multiple frameworks support it.
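
For the initial-load case, a minimal SSR sketch with Rum, which can render the same component to an HTML string on the JVM and then hydrate it in the browser (the myapp.* namespaces and the page component are made up):

;; Server (Clojure): render the shared component to a string
(ns myapp.server
  (:require [rum.core :as rum]
            [myapp.views :refer [page]]))

(defn index [_req]
  {:status  200
   :headers {"Content-Type" "text/html"}
   :body    (str "<div id=\"app\">" (rum/render-html (page)) "</div>")})

;; Client (ClojureScript): attach event handlers to the server-rendered markup
(ns myapp.client
  (:require [rum.core :as rum]
            [myapp.views :refer [page]]))

(rum/hydrate (page) (js/document.getElementById "app"))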

p-himik21:12:53

(OK, I need to go sleep - the amount of full word typos is through the roof)

Drew Verlee21:12:16

You're right, the premise involves more shared state. I feel this is the direction things are headed, so I'm trying to brainstorm it. The idea is more about poking at the space than a concrete implementation. Go get some sleep 😴 😄

smashedtoatoms21:12:25

The only state in LiveView is server side. The client is just a representation of the server side state.

smashedtoatoms21:12:41

The issue is that if your network is slow, your UI will be slow.

smashedtoatoms21:12:11

I'd argue that is better than having state in the client that doesn't align with reality, but it depends on what you're trying to do.

p-himik21:12:54

So, are you brainstorming something that's not only not a problem right now (meaning, the UI update performance when the logic is on the frontend), but where the premise itself (more shared state) is not yet here? :) Don't get me wrong, poking around is nice, it helps us learn. But I definitely wouldn't do real brainstorming over this.

smashedtoatoms21:12:41

I disagree. I think it's a great thing to be looking into. I think clients having their own state is a nightmare if your network can accommodate the back and forth of having it in one place. Again, it depends on what you're doing, but having the client and server maintain separate state that has to be kept in sync, which is how most things work today, is not ideal in my opinion.

smashedtoatoms21:12:32

That being said, I'm primarily a backend dev who hates dealing with front end state sync problems, so I suspect my opinion is far from objective.

p-himik21:12:03

> I think clients having their own state is a nightmare
I have a totally different experience. :) It's quite nice. Also, it depends on what you mean by state.
> if your network can accommodate the back and forth of having it in one place
Yeah, but that's a pretty huge "if". I'd say that's the minority of situations.

smashedtoatoms21:12:57

It's working pretty well for the Phoenix LiveView folks from what I understand.

smashedtoatoms21:12:54

Yes, but if you have a backend querying a DB and feeding that to a front-end framework, you have three states now.

smashedtoatoms21:12:17

Hehe, most definitely.

smashedtoatoms21:12:39

I might not totally understand what is being talked about here. I'll just shut up.

p-himik22:12:20

> It's working pretty well for the Phoenix LiveView folks from what I understand.
Well, their list of use cases is quite limited right now. I'm willing to give them the benefit of the doubt, definitely. But it still reminds me way too much of CORBA.

lilactown23:12:33

Stuff like LiveView imposes certain limitations:
• how many concurrent websocket or polling connections your server(s) can handle = the number of concurrent users you can handle
• how your site / application behaves with spotty internet; dropped packets = dropped updates / need to retry
• how your site behaves while offline; it needs to fall back to just client-side state or limit functionality
It sounds nice in theory for those of us who work primarily on server apps, since we can control the environment better on the server and do not need to learn the ins and outs of client-side environments / tool chains. However, I don't think it actually promotes a good UX if your application needs to be used by many concurrent users or in low- or no-connectivity scenarios. A good UI will typically need some client-side state even with this, in order to pre-emptively render based on user interactions without waiting for server responses. So the tradeoffs are not just choosing "all server-side" or "all client-side", but how moving more (but not all) of the state server-side might make things better for your app.

lilactown23:12:41

I would hypothesize that in general, computing the changes to the DOM based on a change in state is quicker and cheaper to do on the device than making a network call to compute it on the server. The React team has done experiments with just moving the VDOM computation to a worker thread, and the cost of serializing things was high enough that it made it slower overall

lilactown23:12:58

Now there is a lot of work being done on streaming updates, where the first render is a bare page and the next DOM updates are streamed as e.g. database queries resolve. But this only applies to the initial render; once each portion of the page is hydrated, it functions as a purely client-side app. This is different from LiveView, which maintains the state on the server for the entire duration of a user's session.

didibus23:12:18

From a user perspective, you'd want the app to be able to handle disconnects, high latencies, etc. So building a syncing mechanism that works, while painful for a dev, is awesome for the user

didibus23:12:25

That said, eventual consistency can be a terrible user experience. Posting a message only to have it unposted 10 seconds later, or seeing a reply, replying to it, and then having the post you're replying to disappear.

didibus23:12:34

That's really bad from a user perspective as well

lilactown23:12:32

Yes, it depends on the semantics of the action and feature

lilactown23:12:39

The goal is to be optimistic in the updates where you don't care so much about the timing or success of the thing (e.g. a "like" on a post), and to be able to show a meaningful status for operations that the user does care about.

lilactown23:12:58

E.g. when writing a message, posting it and having it show up in some pending state is better than being unable to post at all.
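
To make the pending-state idea concrete, a minimal client-side sketch (post! and send-to-server! are made-up names; any async transport would do):

;; Optimistic update: show the message immediately as :pending,
;; then flip it to :sent or :failed when the server responds.
(defonce messages (atom {})) ; id -> {:text ... :status ...}

(defn post! [id text send-to-server!]
  (swap! messages assoc id {:text text :status :pending})
  (send-to-server!
    {:id id :text text}
    (fn [ok?]
      (swap! messages assoc-in [id :status] (if ok? :sent :failed)))))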