#clojure
2017-10-24
h0bbit03:10:18

Hello All, has anyone faced/solved the problem of serializing Clojure classes using java.io.Serializable? What’s the idiomatic way of using serialVersionUID? (if there is one)

Mudge04:10:59

Don't laugh at me when I ask this: how can I get the name of an anonymous function when it has a name?

Mudge04:10:37

For example: (fn cool [] (println "hi"))

Mudge04:10:53

how can I get the name "cool" from the function?

reedho05:10:30

(.toString (fn cool [] (println "hi")))
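A minimal sketch of the trick reedho is hinting at: a named anonymous fn's name is munged into its JVM class name, and `clojure.lang.Compiler/demunge` can turn it back into something readable. This relies on Clojure implementation details, so treat it as a REPL trick, not an API.

```clojure
;; Sketch: recover the name from the fn's (munged) class name.
;; Compiler/demunge is an implementation detail, not a supported API.
(defn fn-name [f]
  (clojure.lang.Compiler/demunge (.getName (class f))))

(fn-name (fn cool [] (println "hi")))
;; returns something like "user/eval138/cool--139" - the "cool" part survives
```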

bfabry06:10:36

@h0bbit what do you mean by clojure classes? records and deftypes? to be honest, the best answer is use something other than Serializable. Which is also the answer even if you're using pure java
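One common alternative to `java.io.Serializable`, sketched here with a made-up `Point` record: records print readably, so for many cases you can round-trip them as EDN-style strings instead of fighting `serialVersionUID`.

```clojure
;; Hedged sketch: round-trip a record as a string instead of using
;; java.io.Serializable. Point is a hypothetical example record.
(defrecord Point [x y])

(def s (pr-str (->Point 1 2)))  ;; e.g. "#user.Point{:x 1, :y 2}"
(read-string s)                 ;; reads back as an equal Point record
```

(For untrusted input you'd want `clojure.edn/read-string` with explicit `:readers` rather than `clojure.core/read-string`.)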

bronsa08:10:49

@nick319 what do you need that for

abdullahibra09:10:00

i have played with core.async a bit

abdullahibra09:10:36

this is code which passes lists of numbers to each process linearly; each process tries to remove the multiples of one number and passes the result on to the next process

abdullahibra09:10:01

the code works fine up to, for example, 10000 odds

abdullahibra09:10:22

when i tried to increase it to, for example, 100000, it broke

abdullahibra09:10:36

can anybody help with this?

nha09:10:35

thanks @gfredericks and @tanzoniteblack test.check’s shrink-loop looks like what I want 🙂

Arno Rossouw10:10:28

On databases: is it a bad idea to enforce a row-level lock on a row to avoid duplicate data?

tatut10:10:03

what do you mean? that’s the job of the database, a unique constraint will handle it
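A hedged sketch of what "let the database handle it" looks like from Clojure; `db-spec` and the `users`/`email` schema are hypothetical:

```clojure
;; Sketch: let postgres enforce uniqueness with a unique index.
;; db-spec, table, and column names are made up for illustration.
(require '[clojure.java.jdbc :as jdbc])

(jdbc/execute! db-spec
  ["CREATE UNIQUE INDEX IF NOT EXISTS users_email_idx ON users (email)"])
;; a concurrent duplicate insert now fails with an SQLException
;; instead of silently creating a second row
```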

Arno Rossouw10:10:25

Yes, i also thought that from a database admin point of view, but the php programmer wants to enforce it in code

Empperi10:10:20

do NOT do that on code level

Arno Rossouw10:10:58

And some people wonder why i get annoyed at php devs making silly statements. hehe

Empperi10:10:38

just ask your PHP devs how well that would work when you have, like, two processes (which happens pretty much instantly with PHP) doing that stuff concurrently?

Arno Rossouw10:10:12

We have 200 processes running concurrently

Empperi10:10:18

I’ll answer: it won’t work

Empperi10:10:29

since if it is in memory, then it works only for that single process

Empperi10:10:55

if it is on disk then one needs to take care of concurrency issues on I/O, meaning implementing transactions or something similar

Empperi10:10:09

and boom, you’ve done a database all by yourself, congratulations

Arno Rossouw10:10:24

postgres is acid compliant, so i dont know why someone would consider it, to be honest

Empperi10:10:54

most likely the person suggesting that hates databases in general, wants to keep everything in his own hands, and refuses to learn new stuff

Empperi10:10:02

always a fine combination

Arno Rossouw10:10:17

“refuses to learn new stuff” sounds like it; he hates anything that's not mysql and php

Empperi10:10:36

see, almost a full match 🙂

Empperi10:10:00

and postgresql is actually much better than mysql when it comes to ensuring database integrity

Arno Rossouw10:10:36

Yes, i've learned that the hard way, hehe, been administering mysql for 5 years

Arno Rossouw10:10:47

So i replied to the dude's suggestion with a link to postgresql unique constraint indexes

Arno Rossouw10:10:26

@niklas.collin so what could happen if someone were to enfoce row level locks, lead to issues with integrity?

Arno Rossouw10:10:37

s/enfoce/enforce/g

Empperi10:10:46

you mean if you do it on code level, in this case with PHP?

Arno Rossouw10:10:57

Just out of curiosity

Empperi10:10:58

oh god, so many things

Empperi10:10:20

I take it you have apache or nginx there spawning your PHP stuff, right?

Arno Rossouw10:10:37

no we use django and 5 web load balancers with nginx

Arno Rossouw10:10:48

he is not contributing to django codebase at the moment

Empperi10:10:05

right, doesn’t change things. Something somewhere is anyway spawning a PHP process per request

Empperi10:10:57

now, imagine two concurrent requests coming in, both of which try to do that row locking. If this row locking logic is kept in memory these two processes won’t see the locking defined by the other process since they are separate processes

Empperi10:10:26

if you’d have something sensible like Clojure then you could do that relatively safely - as long as you’d have only one server serving all the traffic

Empperi10:10:48

when you add another server then it’s the same problem all over again: different processes, different memory spaces

Arno Rossouw10:10:05

So non-deterministic output?

Empperi10:10:17

it depends completely on what is being row locked and why

Empperi10:10:31

if row locking is done for writes then most likely this will fuck up your database

Empperi10:10:38

stuff that’s supposed to be unique is not

Arno Rossouw10:10:47

fuck up how, load average?

Empperi10:10:13

process A writes “foo”, process B tries to write “foo”, manages to do it since it doesn’t see the row locking stuff
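A hedged sketch of pushing that exact race into the database, where it can actually be won; `db-spec` and the `things`/`name` schema are hypothetical:

```clojure
;; Sketch: make the write itself race-safe in postgres.
;; db-spec, table, and column names are made up for illustration.
(require '[clojure.java.jdbc :as jdbc])

(jdbc/execute! db-spec
  ["INSERT INTO things (name) VALUES (?) ON CONFLICT (name) DO NOTHING" "foo"])
;; whichever of process A/B loses the race simply inserts nothing,
;; no application-level locking required
```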

Arno Rossouw10:10:20

Sorry i'm asking so many questions, but this guy will interrogate and second-guess what i know. So i need ammunition

Empperi10:10:06

in addition you’ll end up writing a lot of code which you could handle with a single statement in your schema definitions for postgres

Empperi10:10:02

code which won’t work unless you create a mechanism to handle concurrency which is ACID compliant

Empperi10:10:36

that guy should write nginx too while he’s at it

Arno Rossouw10:10:10

Ok. thanks, I will just say no it wont work. Don't have energy to argue with know-it-alls

Empperi10:10:58

no, you can say that he can get it to work if he writes fully fledged ACID-compliant concurrency handling

Empperi10:10:09

good luck with that

Empperi10:10:13

especially with PHP 😄

Arno Rossouw10:10:29

roflol, but Phalcon is a c implementation and is fast as hell

Empperi10:10:47

it’s not about speed, it’s about what you have at language level

Empperi10:10:55

PHP doesn’t have concurrency primitives

Arno Rossouw10:10:54

I know, you can write bad code in any language, most people write bad code in php, hehe

Arno Rossouw10:10:36

Is there a way to avoid a know-it-all's outlandish ideas?

Empperi10:10:55

yes, get experience on multiple things, realize you know jack shit and get humble

Arno Rossouw10:10:32

I know, but i dont want to have to defend my position and knowledge all the time, it's energy-draining

Arno Rossouw10:10:47

I know my limitations, so i wont talk about stuff i dont know

Empperi10:10:48

been there done that. Changed jobs.

Empperi10:10:22

or to be more exact: client, not job

Empperi10:10:27

when I was a consultant

Arno Rossouw10:10:53

I really like this job, don't want to work for a corporate again

Empperi10:10:31

then deal with the ugly parts of it and be happy 🙂

Arno Rossouw10:10:45

Ok, good point

Arno Rossouw10:10:07

I think i'd be happy if i can get paid writing clojure code all day

Arno Rossouw10:10:46

but south africa is behind 😞

Empperi10:10:08

I’m actually writing scala these days, not exactly happy about that

Arno Rossouw10:10:29

Why, whats wrong with scala?

Empperi10:10:53

well, my main issue with it is the implicit and what it brings

Arno Rossouw10:10:14

Well, dont know any scala, so i can't say i understand 🙂

Empperi10:10:26

and also the ideology that “writing code is a bad thing, minimize the amount of code you need to write even if it makes your code unreadable”

Empperi10:10:48

which actually is part of why implicit exists in the language

Empperi10:10:03

but that “writability over readability” is more of a design choice than a feature of the language

Empperi10:10:36

and still most stuff written in scala takes more code than equivalent clojure 😛

Arno Rossouw10:10:23

oh, can you migrate to clojure?

Empperi10:10:09

company is using clojure, team isn’t

Empperi10:10:14

trying to educate them 🙂

Empperi10:10:16

so we’ll see

Empperi10:10:31

the absurdity of it is that we are actually using datomic but not clojure…

danm10:10:15

Heh, we have the opposite opinion in this team

danm10:10:23

Make it readable, even if it makes it a bit longer

Empperi10:10:37

yeah, I agree with that. But Scala peeps do not apparently

danm10:10:48

Makes it easier to onboard new team members, and easier to understand and bugfix when you come back to code you've not touched in months 😉

Arno Rossouw10:10:04

anyone tried kubernetes?

Empperi11:10:16

we use it at work

Arno Rossouw11:10:15

very expensive?

Empperi11:10:24

in what regard?

Arno Rossouw11:10:38

lets say i need 100 instances?

Empperi11:10:50

kubernetes is opensource, no licensing fees

Empperi11:10:00

so, it’s up to the price of your servers

Arno Rossouw11:10:13

So, how many instances can you have on decent server?

Empperi11:10:29

there is no clear answer to that since it totally depends on your containers

Empperi11:10:03

create distroless containers which contain just a hello world written in C: then the answer would be “millions” I think

Arno Rossouw11:10:16

ah, like alpine

Empperi11:10:22

no, distroless

Empperi11:10:28

alpine linux is a distro

Arno Rossouw11:10:55

Very interesting

Empperi11:10:57

I think “millions” is a bit overboard, but “thousands” easily

Empperi11:10:41

but create an ubuntu container with a weblogic behemoth inside and you can bring your server to its knees pretty easily with just one container

Arno Rossouw11:10:58

I still need to learn kubernetes 🙂. Have to plan a project where a basic html page can handle 100k concurrent sessions

chrisblom11:10:48

i would not recommend http-kit

Empperi11:10:09

yeah, well most of the stuff there is about other stuff than http-kit

Empperi11:10:38

I guess you’d suggest aleph instead?

Arno Rossouw11:10:45

Ok, just concerned about the socket limit on servers and bottlenecks on databases then

Empperi11:10:07

if it’s serving just html page you do not need a database

Empperi11:10:16

of course if you add databases into the mix it gets way more complicated

Empperi11:10:29

then kubernetes isn’t such a bad idea

Arno Rossouw11:10:35

Hmm, think they need databases, since it does api calls

Empperi11:10:45

ok, so it’s not just a basic html page then 🙂

Arno Rossouw11:10:18

yeah, sorry, got ahead of myself

Arno Rossouw11:10:57

a side-effect of multi-tasking, hehe

Empperi11:10:40

but anyway, Kubernetes is a container orchestration platform like Docker Swarm or Apache Mesos (although that is actually more than just container platform)

Empperi11:10:59

so basically all it costs you is the servers where Kubernetes itself is running

Empperi11:10:23

rest is just your normal containers on top of VMs stuff, it’s just being managed by Kubernetes

hlolli11:10:45

I have yet another interop question. I'm pretty sure the method .addExtensions exists on my imported org.openqa.selenium.chrome.ChromeOptions class:

(user/all-methods (new ChromeOptions))
(.addArguments .addEncodedExtensions .addExtensions .amendHashCode .asMap .getExperimentalOption .lambda$asMap$0 .setBinary .setExperimentalOption .setHeadless .setPageLoadStrategy .setUnhandledPromptBehaviour .toCapabilities)

but when using it, I get a strange error:

(doto (new ChromeOptions)
        (.addExtensions (java.io.File. "chrome/my-extension.xpi")))
;; => CompilerException java.lang.IllegalArgumentException: No matching method found: addExtensions for class org.openqa.selenium.chrome.ChromeOptions

https://seleniumhq.github.io/selenium/docs/api/java/org/openqa/selenium/chrome/ChromeOptions.html

qqq11:10:46

what feature are you using that makes aws lambda / google compute functions inadequate ?

qqq11:10:36

@niklas.collin: yes, I recently moved from EC2 + Elastic Beanstalk to AWS Lambda + CloudFront + API Gateway + DynamoDB ... and am loving serverless.

Empperi11:10:00

well, we are talking about a huge microservice ecosystem here, hundreds of engineering teams

Empperi11:10:07

on top of kubernetes

Empperi11:10:41

we are doing a whole bunch of stuff where AWS lambda wouldn’t be enough, especially performance-wise

Empperi11:10:00

and long lived stuff which isn’t possible with that etc

qqq11:10:13

I agree, lambda does not sound like the right use case for your project; good luck!

Empperi11:10:32

“project” 🙂

Empperi11:10:41

and FYI, it’s Zalando.

Empperi11:10:01

and all the other stuff under the hood which isn’t the fashion store

Empperi11:10:57

and lambda could definitely be used in some parts of the system

Empperi11:10:31

as it is now it’s not being used due to unpredictable costs at this scale

qqq11:10:08

yeah; if you are running a huge shopping site, you should probably go the Amazon route and build Zalando Web Services

Empperi11:10:27

well, currently everything is on top of AWS

Empperi11:10:43

but I can see the company slowly building abstraction layer between AWS and Zalando stuff

Empperi11:10:52

like the Kubernetes change from EC2 instances

hlolli12:10:46

(doto (new ChromeOptions)
       (.addExtensions (list (java.io.File. "chrome/my-extension.xpi"))))
to answer my own question, a list was needed around the extensions. Unhelpful error message and strange documentation.

slipset13:10:40

So, given a honey-sql statement as

slipset13:10:21

["UPDATE my_table SET foo = ? where id = ?", "bar", 1]

slipset13:10:51

is there a function in java.jdbc or honeysql which gives me

slipset13:10:17

"UPDATE my_table SET foo = 'bar' where id = 1"

slipset13:10:38

eg, a function from prepared-statement to string.
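No such function ships with java.jdbc or honeysql as far as I know, but for review purposes a naive interpolation is easy to sketch. This is NOT safe SQL escaping, only a debugging aid; `debug-sql` is a hypothetical helper name.

```clojure
;; Naive sketch for eyeballing statements only - not injection-safe.
(require '[clojure.string :as str])

(defn debug-sql [[sql & params]]
  (reduce (fn [s p]
            (str/replace-first
              s #"\?" (if (string? p) (str "'" p "'") (str p))))
          sql params))

(debug-sql ["UPDATE my_table SET foo = ? where id = ?" "bar" 1])
;; => "UPDATE my_table SET foo = 'bar' where id = 1"
```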

Arno Rossouw13:10:42

@niklas.collin how much of a learning curve does kubernetes have?

Empperi13:10:13

well it has some, but it depends on which side you are looking at: user of kubernetes as a dev or ops guy keeping it up

Empperi13:10:19

no experience on the latter

pesterhazy13:10:33

@slipset probably not, given that most db drivers don't actually do any escaping in prepared statements - they pass the values directly to the db

slipset13:10:19

@pesterhazy yeah, I figured as much after a while…

abdullahibra14:10:49

nobody can help me

admay14:10:39

What are you looking for help with @abdullahibra?

abdullahibra14:10:08

hello guys, reposting from earlier:
i have played with core.async a bit
https://gist.github.com/aibrahim/85af843ef94c6cda544e596f4ba8b50a
this is code which passes lists of numbers to each process linearly; each process tries to remove the multiples of one number and passes the result on to the next process
the code works fine up to, for example, 10000 odds
when i tried to increase it to, for example, 100000, it broke
can anybody help with this?

admay14:10:00

How does it break?

abdullahibra14:10:12

@admay (def odds (rest (filter odd? (range 1000000))))

admay14:10:51

@abdullahibra, excuse me, I should be more clear. What error message do you get? Or do you not get any error message at all? 🙂

admay14:10:43

This doesn’t look like the entirety of the error output, was this everything that printed out?

abdullahibra14:10:48

how can i dump the full error trace to a file?
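One hedged option at the REPL: `*e` holds the most recent uncaught exception, and `clojure.stacktrace` can print it wherever `*out*` points. The "error.log" filename is arbitrary.

```clojure
;; Sketch: write the last REPL exception's stack trace to a file.
;; *e is only bound at the REPL; error.log is an arbitrary filename.
(require '[clojure.java.io :as io]
         '[clojure.stacktrace :as st])

(with-open [w (io/writer "error.log")]
  (binding [*out* w]
    (st/print-stack-trace *e)))
```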

qqq15:10:27

what is the builtin for "pprint but out to string rather than stdout" ?

ghadi15:10:00

(with-out-str .... (pprint...))
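Concretely:

```clojure
;; with-out-str rebinds *out* and returns whatever was printed as a string
(require '[clojure.pprint :refer [pprint]])

(with-out-str (pprint {:a 1 :b [1 2 3]}))
;; => "{:a 1, :b [1 2 3]}\n"
```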

seancorfield16:10:48

@slipset That's an oft-requested feature for clojure.java.jdbc but as @pesterhazy notes, there's no easy way to get that from a JDBC driver under the hood, unfortunately. I guess my question would be: Why do you need/want this?

slipset17:10:02

@seancorfield Yeah, I saw some discussion about it on the mailing list. My use case was the following. Given a csv file, I wanted to generate update statements. I wanted to have the actual statements so I could 1) have it reviewed by a teammate, and 2) add it to a migration script.

slipset17:10:58

I ended up just running the honeysql statements through jdbc.

slipset17:10:54

I realize that there are other ways of achieving this, but I had a nail and honeysql looked like a hammer at the time.

seancorfield17:10:48

HoneySQL is a good way to compose query fragments, and producing vectors of ["sql statement" param1 param2 .. paramN] is a reasonable representation for review/debugging...?

slipset17:10:21

True, and it’s what I ended up using for review. Doesn’t fit so great in a migration script though.

fantomofdoom17:10:45

How can i read batch messages from channel?

potetm18:10:58

@fantomofdoom xform w/ partition?
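A minimal sketch of the xform-with-partition idea, using `partition-all` as a transducer on the channel (count-based batching; the buffer size of 10 is arbitrary):

```clojure
;; Count-based batching: a partition-all transducer on the channel
(require '[clojure.core.async :as a])

(let [c (a/chan 10 (partition-all 3))]
  (doseq [i (range 7)] (a/put! c i))
  (a/close! c)                       ;; close flushes the partial batch
  (a/<!! (a/into [] c)))
;; => [[0 1 2] [3 4 5] [6]]
```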

noisesmith18:10:47

another option is a debouncer that pushes all values read after a timeout (time based batching rather than count based)

fantomofdoom18:10:50

@potetm ok, but how can i set a timer so it doesn't wait for the channel to fill up enough to give me a full partition?

noisesmith18:10:18

@fantomofdoom that’s where you want a debounce loop and not a partition

noisesmith18:10:44

I bet there’s a good library with a debounce in it out there, but there’s definitely gists showing how to do one in core.async

noisesmith18:10:19

this isn’t exactly a standard debounce (since you want to collect all the messages over some timespan, and send all together), but the logic is very similar

noisesmith18:10:06

yeah- the modification to that would be to attach the new-val to the last-val, and make sure last-val starts as an empty coll

noisesmith18:10:17

should be easy enough to make and unit test though
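A hedged sketch of the batching debounce noisesmith describes, using `alts!` against a fresh timeout per message; `batch-ch` and the parameter names are made up for illustration:

```clojure
;; Sketch: collect values from `in`, emit the accumulated batch on `out`
;; after `ms` of quiet (or on close). batch-ch is a hypothetical helper.
(require '[clojure.core.async :as a])

(defn batch-ch [in ms]
  (let [out (a/chan)]
    (a/go-loop [batch []]
      (let [t      (a/timeout ms)
            [v ch] (a/alts! [in t])]
        (cond
          ;; quiet period elapsed: flush whatever accumulated
          (= ch t) (do (when (seq batch) (a/>! out batch))
                       (recur []))
          ;; input closed: flush and propagate the close
          (nil? v) (do (when (seq batch) (a/>! out batch))
                       (a/close! out))
          ;; new value: keep accumulating
          :else    (recur (conj batch v)))))
    out))
```

Note this is a true debounce: a steady stream faster than `ms` keeps growing the batch, so you may want a max-size cutoff in practice.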

beoliver18:10:38

is it possible, given an async/chan that returns maps, say {:foo 2 :bar xs} where xs is a vector, to flatten/transform the channel into one that yields the elements of xs? (async/map< #(get % :bar []) ch) gets me the vectors, but ideally I would like to flatten the whole channel (I know that map< is deprecated...)

beoliver18:10:34

so if xs was a vec of ints (<!! ch) would yield an int

bfabry18:10:03

@beoliver a transducer should do that

bfabry18:10:44

(async/chan buf-size (mapcat :bar))

beoliver18:10:16

if I am given the chan (i.e. I don't control its creation), should I create a pipe?

tanzoniteblack18:10:16

^^^ Example of this being used

(let [c (async/chan 10 (mapcat :bar))]
                          (async/put! c {:foo 2 :bar [1 2 3]})
                          (async/close! c)
                          (async/<!! (async/into [] c)))
returns:
[1 2 3]

tanzoniteblack18:10:01

@beoliver yes, you can pipe from the channel you have with the maps onto a channel with that transducer, or just use async/pipeline to accomplish the same thing without the transducer directly on the channel

bfabry18:10:39

yeah pipe seems like the simplest solution there

bfabry18:10:48

or pipeline

beoliver18:10:42

is there a way to "percolate" the closing of the channel? can a/close! be made to close the channel that I have created a pipeline from?

hiredman18:10:12

it tends to go in the other direction, most things in the channel library have an option to close the destination channel if the source channel is closed

hiredman18:10:54

(the docstring for pipe describes exactly this)

beoliver18:10:05

hmm... I guess I will run some tests - just don't want dangling channels

bfabry18:10:46

pipe or pipeline will both propagate the close on the source channel. closing the destination channel doesn't make much sense as what do you do with the undelivered messages

beoliver18:10:20

flush the channel?

beoliver18:10:50

(loop [] ...)

beoliver18:10:55

to the ether

bfabry18:10:02

seems bad man

bfabry18:10:34

the docs for pipe says it'll just stop consuming anyway

beoliver18:10:16

so does this look reasonable? s/scroll-chan is out of my control; it just returns a channel.

(defn get-hits [response]
  (get-in response [:body :hits :hits] []))

(defn search-channel
  ([cli index type query-map] (search-channel cli index type query-map {}))
  ([cli index type query-map params]
   (let [ch (a/chan)]
     (a/thread
       (->> (s/scroll-chan cli {:url [index type "_search"]
                                :query-string (select-keys params [:search_type :request_cache])
                                :body (merge {:query query-map}
                                             (dissoc params :search_type :request_cache))
                                :exception-handler exception-handler})
            (a/pipeline 1 ch (mapcat get-hits))))
     ch)))

beoliver18:10:42

sorry for the extra crud

bfabry18:10:49

the a/thread seems strange there

beoliver18:10:17

as far as I am aware go blocks are a bit iffy...

beoliver18:10:29

thread pool of 8

beoliver18:10:39

and not great with "long running" requests

bfabry18:10:19

I don't think you need either thread or go. the function scroll-chan returns a channel so is presumably asynchronous/fast. a/pipeline starts its own asynchronous operations so returns immediately. so you're creating a new thread that does 2 tiny operations then ends

beoliver18:10:05

shit... good point.

bfabry18:10:10

I do kinda think pipe would've made more sense than pipeline here.

(async/pipe
  (s/scroll-chan ...)
  (async/chan 1 (mapcat get-hits)))

beoliver18:10:38

yeah - makes more sense

bfabry18:10:56

pipe returns the to-chan btw. though it's not documented

mpenet19:10:47

scroll-chan has an optional ch arg that is the output chan. You can pass a ch with an xform, for instance

mpenet19:10:38

I think all of spandex's async functions allow passing custom chans

mpenet19:10:25

And yes, scroll-chan is safe in go blocks; it returns immediately, under the hood it's all async io

yury.solovyov19:10:01

is it ok to use let instead of do for an else block?

bfabry20:10:57

yes, let is a single expression, and it contains an implicit do block

arnaud_bos20:10:50

Hi all, I'm preparing a Clojure talk at a local meetup (yay!) and re-watching this talk https://www.infoq.com/presentations/Are-We-There-Yet-Rich-Hickey in order to explain Clojure. At 52:34 Rich says "actors definitely do not" support "Point-in-time value perception", contrary to agents. I do not understand this statement, can someone try to explain it to me?

hiredman20:10:30

the way to get a value from an actor is by sending it a message and then it will send you message back. agents can be derefed

arnaud_bos20:10:30

so it's the fact that "value perception" is deferred in the case of actors?

talios20:10:38

hola - does the latest leiningen force release/deployments to https? coworker complaining that it seems to be ( our nexus isn’t on https currently ).

hiredman20:10:38

you can't just read the state of an actor right now, you have to effectively put your request in the actor's queue, and the actor will get to it when it gets to it

slipset20:10:46

@arnaud_bos Unless you're specifically wanting to bring up agents, eg doing a talk about the Clojure STM, I'd leave them out of the talk.

hiredman20:10:52

the point-in-time is the key part there

danielcompton20:10:56

Yep that sounds familiar @talios

arnaud_bos20:10:52

@hiredman got it, thanks. @slipset I just didn't want to follow through the talk and not understand this 🙂 I'll keep it simple; this will be an introductory talk

slipset20:10:36

A talk very much worth watching though.

slipset20:10:14

I think a more interesting thing, if you want to contrast actors with the clojure eco-system, is to contrast them to core.async.

arnaud_bos20:10:18

Ah! Haven't seen that one yet, it's definitely on my list.

arnaud_bos20:10:23

Sure, I know for a fact that I'll have a few actors fans in the audience so...

talios20:10:48

@danielcompton kinda arse when it's a 100% internal repo tho

danielcompton21:10:06

Well there is a workaround, in the FAQ. What's the alternative if you want to help people avoid talking to HTTP repos in the general case?

talios21:10:03

I suppose you could run a local HTTPS proxy..

arnaud_bos20:10:29

I remember reading this one a few months back and not understanding every bit of it. The follow-up discussion in the comments with "Jeff Rose" is really interesting.

tbaldridge21:10:48

@arnaud_bos http://Clojure.org has a rather good overview on actors and "why not"

tbaldridge21:10:18

And btw, http://dalnefre.com is a fantastic site, some really mind-blowing stuff on there. Not stuff I'd use at work everyday, but cool from a geek perspective

arnaud_bos21:10:50

I've read that page on http://clojure.org a few times already and must admit I haven't fully made my mind up about this... 😅

arnaud_bos21:10:55

I just don't use those concepts enough to get a good picture that is based on experience and not on theory and acquiescence bias

nroos21:10:19

hi! I’m having problems with ring/compojure since upgrading leiningen. If I create a new app with lein new compojure testapp it creates the app like it should. But if I try to run it with lein ring server I get this error: “java.lang.RuntimeException: No reader function for tag object”

nroos21:10:24

any idea how to fix this?

mdrago102611:10:54

did you end up resolving this?

mdrago102611:10:07

I ran into this issue the other day and I think I had to downgrade back to 2.6.* to fix it

nroos04:10:22

Never got a fix unfortunately 😞

mdrago102613:10:30

even after downgrading leiningen?

nroos13:10:02

haven’t tried that yet

mdrago102613:10:36

i recommend trying it… lein downgrade 2.6.1 i think works

mdrago102613:10:43

that solved the lein ring issue for me at least

nroos13:10:01

Yeah, problem is I have lein installed through brew

nroos13:10:07

and have no idea how to downgrade then 😄

nroos14:10:53

yeah, downgrading fixed the issue

nroos14:10:55

thanks 🙂

mdrago102617:10:29

great news 🙂

cjsauer22:10:24

Is there a built-in "`split-around`" function? Sort of the opposite of split-at...I'm trying to say "keep the closest X values around Y in the given seq"

yedi22:10:26

hey all, i ran into an issue with this for loop

(for [gender     performer-genders
      name       performer-names
      instrument instrument-names
      mood       moods]
  (do stuff))

yedi22:10:03

basically, if any of those lists are empty, then the loop won't iterate over anything, which is expected behavior

yedi22:10:51

the looping behavior i'm looking for is: if one of the lists is empty, instead of not looping over anything, do the iteration over the other 3 lists, just ignoring the empty list (binding its value in each iteration to nil or something)

yedi22:10:07

does anyone know of a good way to accomplish that

yedi23:10:09

i guess i could just check to see if those lists are empty, and if so just bind the variables to [nil]; seems like that should work
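A minimal sketch of that [nil] fallback idea; `or-nil` is a hypothetical helper:

```clojure
;; Substitute [nil] for an empty list so `for` still iterates
(defn or-nil [xs] (if (seq xs) xs [nil]))

(for [gender (or-nil [])
      name   (or-nil ["ann" "bob"])]
  [gender name])
;; => ([nil "ann"] [nil "bob"])
```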

noisesmith23:10:08

for isn't a loop, it's a list comprehension

noisesmith23:10:13

if you want a loop, use loop

noisesmith23:10:26

in particular where you say (do stuff) - if your intention is to do anything with a side effect rather than generating data, for is the wrong construct

yedi23:10:08

nah that's just aggregating data within a reduce function

yedi23:10:28

i guess loop would be better though since it'd have more explicit semantics than for