#clojurescript
2020-02-26
Old account11:02:57

how would you pass any data from server to ClojureScript instance? I want to pass some data from hosting server to the frontend. I use Leiningen and Figwheel in the frontend.

zilvinasu11:02:26

during the build time or when application is live ?

Old account11:02:47

either would do

restenb11:02:20

are you talking about application data? like, use HTTP requests or websockets?

Old account11:02:12

In this case it would be the current git commit hash, so it would not change during runtime

zilvinasu11:02:04

anyway I guess you can access the ENV variables during the build time: 1. set the hash to an env var, 2. add the data from the env to your ClojureScript app, i.e. exporting the version to the window object or smth

👍 4
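That build-time approach can be sketched with a macro: ClojureScript macros run on the JVM at compile time, so a macro can read the build machine's environment and bake the value into the compiled JS. The `GIT_COMMIT` env var and namespace names here are hypothetical:

```clojure
;; src/app/version.clj -- macros run on the JVM at ClojureScript
;; compile time, so System/getenv here reads the *build* environment.
(ns app.version)

(defmacro build-commit
  "Expands, at compile time, to the value of the (hypothetical)
  GIT_COMMIT env var, or \"dev\" when it is unset."
  []
  (or (System/getenv "GIT_COMMIT") "dev"))

;; src/app/core.cljs -- the expanded string is a compile-time constant.
(ns app.core
  (:require-macros [app.version :refer [build-commit]]))

(set! (.-app_version js/window) (build-commit))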
wasser16:02:07

We used https://github.com/trptcolin/versioneer/ with good success. Just make a url and handler that calls this and returns the result
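A rough sketch of that handler, assuming versioneer exposes `versioneer.core/get-version` (which looks the version up from pom.properties / system properties); the group/artifact ids are placeholders for your project's coordinates:

```clojure
(ns app.version-endpoint
  (:require [versioneer.core :as version]))

;; Ring handler: a GET to this route returns the artifact's version string.
(defn version-handler [_request]
  {:status  200
   :headers {"Content-Type" "text/plain"}
   :body    (version/get-version "my-group" "my-artifact")})
```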

wasser16:02:27

We also use lein-git-inject to create the version number in the first place

Old account16:02:37

I can get the version number easily, but how do I pass it to the frontend app in the browser...

zilvinasu03:02:14

@UU0EZB7M3 if you can get the version easily, you can resolve it from the env and execute CLJS code to persist it in the window object, i.e. under window.app__version_ or something. better yet, what is the actual reason why you need it in the first place?

mavbozo03:02:07

we use this in our deployment:

mavbozo03:02:17

1. get git hash of current commit

mavbozo03:02:33

2. store the hash in a file somewhere in resources dir

mavbozo03:02:44

3. build the uberjar

mavbozo03:02:35

4. our web handler reads the hash from that file in resources and puts the hash in a meta tag in the html

mavbozo03:02:01

<head>

  <meta name="client/commit" content="0544868d01a2143d3ab58802a185be313a00c8a5" />

mavbozo03:02:40

5. the clojurescript just read the hash from that meta tag

👍 4
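Step 5 might look something like this on the ClojureScript side (plain DOM interop; the function name is illustrative):

```clojure
(defn commit-hash
  "Reads the commit hash the server embedded in the page's <meta> tag."
  []
  (some-> (.querySelector js/document "meta[name='client/commit']")
          (.getAttribute "content")))

;; Given the meta tag above, (commit-hash) returns
;; "0544868d01a2143d3ab58802a185be313a00c8a5".
```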
restenb13:02:47

so what do people use for headless testing of clojurescript these days?

rickmoynihan13:02:38

Just ran into this inconsistency between Clojure and ClojureScript… Clojure's reader treats this as a valid keyword: : ClojureScript doesn't, and fails to read with a :reader-error. granted, it's a degenerate case.

michael.heuberger20:02:55

morning folks, JS interop question again here: is there a reason why some prefer (goog.object/get obj "key") over the shorthand form (.-key obj)? … or is there no discussion anymore and is there a standard now?

lilactown20:02:13

@michael.heuberger there is slightly different behavior depending on a number of factors. essentially, (.-someKey obj) has fallen somewhat out of favor because the closure compiler might rename it during advanced optimizations, turning obj.someKey into o.Fz. In some circumstances this is OK, because the compiler will change all of the places that refer to someKey to Fz. But if you’re interacting with external JS that isn’t passed through optimizations (e.g. any npm libs or other external JS), then you need to tell the compiler that obj.someKey isn’t safe to rename. That’s what externs are for.

Externs are tedious to write and maintain, so a lot of people don’t. There’s also externs inference, which works pretty well, but sometimes you run into issues. So the safe choice when interacting with external JS objects is to use goog.object, and interop forms like . / .- when interacting with code that can be optimized. That’s the advice the CLJS maintainers are giving.

In practice, many people still use (.-someKey obj) on external JS objects and rely on externs or inference, but it’s more likely bugs will occur.

👍 4
michael.heuberger20:02:26

that’s a good summary, thanks. this should be public somewhere.

thheller20:02:20

the recommendation is to use .-someKey for code and goog.object for data

☝️ 4
👍 4
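That rule of thumb (goog.object for data, property access for code) can be illustrated in a few lines; the names here are made up:

```clojure
(ns app.interop
  (:require [goog.object :as gobj]))

;; Data: a plain JS object from outside the build (JSON, an npm lib).
;; String-keyed access is never renamed by :advanced optimizations.
(def payload #js {"userId" 42})
(gobj/get payload "userId") ;; => 42

;; Code: a type compiled through Closure. Property access is safe here
;; because the compiler renames the definition and all use sites together.
(deftype Point [x y])
(.-x (Point. 1 2)) ;; => 1
```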
lilactown20:02:02

thheller with a more succinct and more correct version than what I wrote 🙂

michael.heuberger20:02:36

thanks thheller. may i ask where this recommendation is from? written somewhere?

thheller20:02:05

from the closure-compiler wiki somewhere just translated to CLJS 😛

thheller20:02:22

I think there was something more detailed somewhere but can't find it

thheller21:02:48

FWIW the motivation for sticking with .-someKey and not using goog.object for everything is that IF it ever becomes practical to use :advanced for everything, then goog.object would break stuff because it doesn't follow the rules mentioned above (mixing string/property access)

thheller21:02:49

I know 😉

michael.heuberger21:02:59

does nolen have an opinion on this?

thheller21:02:40

yeah he properly expressed the recommendation I made first 🙂

dnolen21:02:44

what's been stated above is more or less my opinion

dnolen21:02:20

unless you're writing against mature browser APIs, coding w/ deftype , or you wrote the JS yourself by hand to be fed into Closure - you should avoid .-

dnolen21:02:28

the more subtle idea here is distinguishing between data interop and api interop

👍 4
dnolen21:02:54

for data interop goog.object is nearly always the right answer

dnolen21:02:36

for api interop, i.e. method calls, externs inference is the way to go
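For the api-interop side, externs inference can be nudged with a ^js type hint (supported by the ClojureScript compiler when :infer-externs is enabled); the React-ish shape below is just an example:

```clojure
;; With :infer-externs true in the compiler options, tagging a parameter
;; ^js tells the compiler the value is external JS, so property/method
;; names reached through it are emitted as externs instead of being
;; renamed under :advanced.
(defn element-title [^js react-element]
  (.. react-element -props -title))
```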

michael.heuberger21:02:47

thanks so much. all clear here.

lilactown22:02:42

Has anyone used pathom outside of fulcro?

λustin f(n)22:02:18

I have been. So far, just as a second API to query against.

lilactown22:02:08

how much time do you spend on pathom stuff like writing and maintaining resolvers?

Aleed02:02:03

i feel like the hard part would be figuring out a way to handle caching the data. i believe @U7YNGKDHA is still using Pathom after migrating away from Fulcro, so maybe he can offer some advice

Jacob O'Bryant02:02:58

I haven't used pathom outside of fulcro actually

lilactown02:02:45

We have a bunch of microservices at work. They're already using EDN and are specced. I want something that is pathom or datalog like to handle fetching and caching

marcelo.piva03:02:19

We have an experiment using pathom to do service-to-service queries. @U066U8JQJ is working on a new planner that will enable connecting foreign parsers to a parser, allowing us to have a central service with all attribute indexes that can find the path to the data we are asking for.

marcelo.piva03:02:09

And the new version of pathom datomic will be able to expose datomic entities for free

wilkerlucio10:02:13

yes, we are starting to play with this new planner. still getting confidence in the new impl, testing it on a few things in prod, also developing new tools. I wanna get those more trustworthy before doing the public release, but if you are feeling brave, the 2.3.0-alpha-* versions have it. no docs yet, it's all around reader3 and connect.foreign

lilactown15:02:41

I don’t really understand what a planner is or much else about pathom yet. I’m mainly trying to evaluate use of pathom with our Reagent/React front-end. our front-end has no real standard way of querying our whole system. requests are ad-hoc and caching is done per-feature.

lilactown15:02:27

how much is this new planner thing going to change? what does it enable? should I hold off from building a pathom service to serve our UI needs until it’s out?

wilkerlucio16:02:11

@U4YGF4NGM this new stuff is pluggable and easy to change. when you build a pathom parser you define which readers are going to be used; currently reader2 and parallel-reader are the main ones, and this new planner stuff is coming as reader3. that's all the user has to change: for users of the old readers, everything should work the same, but internally they work in a very different way. what's new about this recent planner is the ability to project the dependencies for a group of attributes at the same time. this allows for more complex and robust integrations between dynamic sources (most resolvers are static, in the sense that their input/output are fixed; dynamic resolvers allow more complex requests to resolvers, like having a single resolver for a whole Datomic API, or SQL, or even other pathom APIs, which is the new foreign support thing).

wilkerlucio16:02:50

so, if you are trying to integrate multiple parsers, the new planner is what will enable that, if you are just trying to write some parser on the client to make EQL as a standard for the app, then the current things would do it
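For that "just write a parser on the client" case, a minimal pathom 2 setup outside fulcro looks roughly like this (API names as documented for pathom 2.x; the resolver and its data are made up):

```clojure
(ns app.parser
  (:require [com.wsscode.pathom.core :as p]
            [com.wsscode.pathom.connect :as pc]))

;; One static resolver: takes no input, always provides :app/version.
(pc/defresolver app-version [_env _input]
  {::pc/output [:app/version]}
  {:app/version "0.1.0"})

;; A reader2-based parser, no fulcro anywhere.
(def parser
  (p/parser
    {::p/env     {::p/reader [p/map-reader pc/reader2 pc/open-ident-reader]}
     ::p/plugins [(pc/connect-plugin {::pc/register [app-version]})
                  p/error-handler-plugin]}))

;; (parser {} [:app/version]) should return {:app/version "0.1.0"}
```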

lilactown16:02:35

yes, it sounds like I don’t need to worry about this change

lilactown16:02:48

@U066U8JQJ how do you think about using pathom on the front end outside of fulcro? would you think it wise?

wilkerlucio16:02:08

@U4YGF4NGM I see pathom as a generic EQL-fulfilling engine; Fulcro is just very nicely integrated with it, but I don't see any blockers to using it with other things. I believe that would look a lot like how Apollo works in JS land. Fulcro is a fuller buy-in to this process (where all the components that need data have their query, very fine grained, automatic normalization, etc...). Apollo doesn't go that far; from what I see they just use some hooks and some central points for data fetching (chosen a bit arbitrarily), so at some points you would define a query to get some part of the sub-tree data

lilactown17:02:45

Thanks, that helps

lilactown17:02:10

Does pathom have any notion of "streaming" or subscribable queries?

wilkerlucio18:02:33

no, nothing at this stage, you would have to build something around it

myguidingstar11:03:56

@U066U8JQJ do you have a code example of a simple dynamic resolver (where input and output are not defined yet)?

wilkerlucio13:03:02

I'm still pending on creating docs for that. if you are feeling adventurous you can check the source for the datomic thing, which is the closest to a simple example I have so far: https://github.com/wilkerlucio/pathom-datomic/blob/master/src/com/wsscode/pathom/connect/datomic.clj