2020-02-16
fwiw I ported something like #lang functionality to clojure. kind of lost interest because I didn't feel there was much demand for it https://github.com/typedclojure/core.typed.lang.jvm
but I'm personally interested in preserving clojure's evaluation model as-is as much as possible, it certainly has its advantages. very simple.
it breaks down when you need to write macro-generating macros. but that's the only limit I've come across in practice
it seems people here use redex a lot to prototype languages, i have not used it yet
I'm a big fan of clojure. I'm not clamoring for anything in particular from racket. I want the best parts of Typed Racket and turnstile, but tailored precisely to clojure
and basically everything I've learnt has been from contrasting it with racket/typed racket. so I find racket useful and interesting from that perspective
that might be relevant: https://popl20.sigplan.org/program/program-POPL-2020
i mean "Dependent Type Systems as Macros" on this page, somehow it is not part of the URL
yes, i am meeting william several times a week in reading groups, we are going through his thesis at the moment
so I got to hear about some progress and their frustrations in publishing a previous version of that paper
"Since dependently typed languages may evaluate expressions while type checking, checking dependent types with macros requires new macrology design patterns and abstractions for interleaving expansion, type checking, and evaluation"
but it's basically the same thing I imagine. Just some extra details of clojure thrown in
turnstile+'s "evaluation" is probably just dependent type checking, and for me it's literally calling eval on a clojure form
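a rough sketch of the "literally calling eval" part (the helper name and the `vector-of` annotation are made up for illustration): when a type-level expression needs a value during checking, it really is just a plain clojure eval

```clojure
;; sketch only: a type annotation that is itself an expression gets its
;; value by calling eval at check time (resolve-type-ann is a made-up name)
(defn resolve-type-ann
  "Evaluate a type-annotation form to the value it denotes, e.g. a size
  argument inside a hypothetical dependent type like (vector-of Int (+ 1 2))."
  [ann-form]
  (eval ann-form))

(comment
  (resolve-type-ann '(+ 1 2)) ;; => 3, usable as a compile-time dimension
  )
```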
that makes sense. what would be necessary to make that extensible? i.e. being able to strengthen clojure semantics by adding annotations or additional derivations and being able to plug into your pipeline?
i am building bayesian inference schemes that propagate measures through your code. there can be facts you know about these numeric entities, or that you can add to the description of algorithms, e.g. about convergence or resource usage, that could be automatically available to the type system.
in a more commercial setting i imagine having world knowledge about the system and the clients referred to in the types, which might be provided through a database
that is what we have been discussing with our cloud service provider client. in that setting you want to ensure static guarantees over your distributed system and would like to type check your infrastructure
if the type information were in something like a datalog database then this would be way easier
if you can then write down rules with queries on the environment instead of using the clumsy PL notation, you could build much more refined type judgements as extensions and maybe make dependent types more accessible
i think most type system people work with functional languages, but are amazingly primitive when they deal with the description of the logic on the type system level. they mostly use ad hoc encodings to describe the rules
let's say you assign a memory limit to a client inside of your cloud system and track it in your client-related database. you also track the memory limits of all the docker containers you can spawn on the system with kubernetes. now you do runtime checks regularly to ensure that clients do not violate their limit. nonetheless you write nontrivial code that does scheduling and cluster allocations, and it would be nice if you could prove that it does not violate the memory limits provided to it in the form of the database over clients and the database over cloud containers
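to make that concrete, here's a minimal sketch in datascript (attribute names and numbers are made up); the limits and allocations live in the db, and a checking rule just queries the totals it needs

```clojure
(require '[datascript.core :as d])

;; hypothetical schema: clients with limits, containers with allocations
(def schema
  {:client/name      {:db/unique :db.unique/identity}
   :container/client {:db/valueType :db.type/ref}})

(def conn (d/create-conn schema))

(d/transact! conn [{:client/name "acme" :client/limit 4096}])
(d/transact! conn [{:container/id "c1"
                    :container/client [:client/name "acme"]
                    :container/mem 2048}
                   {:container/id "c2"
                    :container/client [:client/name "acme"]
                    :container/mem 1024}])

;; total memory allocated per client next to its limit; a checking rule
;; would then require (<= total limit) before admitting a new allocation
(d/q '[:find ?name ?limit (sum ?mem)
       :with ?k
       :where
       [?c :client/name ?name]
       [?c :client/limit ?limit]
       [?k :container/client ?c]
       [?k :container/mem ?mem]]
     @conn)
;; each row: client name, memory limit, total allocated memory
```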
it is similar to the dependently typed example of checking shapes of arrays that are passed in at compile time and ensuring you do not access them out of bounds
you can first collect and extract as much information as might be helpful in the database for the type environment and other things, and then write rules that make use of all this information
that means if you can extend your knowledge during execution by collecting information about the environment, proving certain things in a local scope becomes much easier
but i imagine having a kind of type checking be partially done at runtime, somewhat like gradual typing
where runtime checks are introduced, but they could also then run a refined type inference to make sure that everything is adding up
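a hedged sketch of what inserting such a runtime check could look like, where `statically-int?` is a stand-in stub for a real checker

```clojure
;; stub: in this toy only integer literals count as statically "proved"
(defn statically-int? [form] (integer? form))

(defmacro ensure-int
  "If FORM can be proved to be an Int at expansion time, leave it alone;
  otherwise insert a runtime check, in the spirit of gradual typing."
  [form]
  (if (statically-int? form)
    form
    `(let [v# ~form]
       (assert (integer? v#) (str "expected Int, got " (pr-str v#)))
       v#)))

(comment
  (ensure-int 41)      ;; proved statically, expands to just 41
  (ensure-int (rand))  ;; not provable, the inserted assert fires at runtime
  )
```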
and I guess that's the big distinction between my work and turnstile+? their evaluation is literally runtime?
i think so too, but i have to check more precisely what they do. we are not at turnstile yet in the reading group, i have just heard about the paper a few weeks ago
yea you can emit custom code from a typing rule. but that's not a good idea because (unlike turnstile) core.typed isn't always in charge of expansion
which is why I dabbled in #lang's for clojure, to hook core.typed into load/require
well, if you used a typed #lang in clojure that always enforced type checking on load, it would be a good idea
to rephrase on a high level, i think an extensible static analysis stack allows joining the type inference with a general proof system over a rich set of domains, if the basis for the logical inference rules is a shared logic programming language like datalog
datalog is also used for reachability analysis and there are some attempts to use it for type systems, but in clojure land we also have our domain data in a powerful datalog database
I'm not clear what exactly you want to "plug into my pipeline". the compilation of rules to datalog?
i think the rules themselves could contain datalog, it would actually make more complicated rules more explicit
at least i have found that the notation changes between papers and gets confusing when the rules do tricky things
yea so instead of the normal type environment which is a sequence of mappings from var=>type [x : Int, y : Bool], you use propositional logic to represent the current environment. atomic propositions are statements of the form "x is of type t". and you can && and || them any way you like
and the rule for looking up in the type environment does constraint solving (model checking IIRC?) to determine the type of x
and then if you type check (if (integer? x) e1 e2)
say the env is "x is Int" || "y is Bool": you can update the then branch with the prop "x is type Int", which reduces the env to "x is type Int", and the else branch with !"x is type Int" (negation), which reduces the env to "y is type Bool"
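a toy sketch of that propositional environment (the representation is illustrative only, not core.typed's)

```clojure
;; start from "x is Int || y is Bool"
(def env '(or (is x Int) (is y Bool)))

(defn assume
  "Strengthen the environment with a new proposition."
  [env prop]
  (list 'and env prop))

;; checking (if (integer? x) e1 e2):
(def then-env (assume env '(is x Int)))        ; equivalent to (is x Int)
(def else-env (assume env '(not (is x Int))))  ; equivalent to (and (is y Bool) (not (is x Int)))

;; looking up the type of a variable then amounts to asking a constraint
;; solver whether the current environment entails (is x T) for some T
```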
check out "Logical Types for Untypable Languages" by Tobin-Hochstadt & Felleisen for an intro
yes, that is very related, although Datalog itself provides a programming language to describe simple inference rules over relations, extending pure propositional logic. it is still first order though, with some restrictions on mixing recursion with negation to stay terminating.
it would be like a base language for inference that can be extended by more rules, either in datalog or by ones doing more fancy logical constraint resolution
that would be useful for implementing occurrence typing. I tried using minikanren to do it once https://github.com/frenchy64/miniKanren-occurrence-type-inferencer
right. minikanren would be like prolog. the problem is that it does not necessarily terminate, so you could easily have an undecidable type system then
oh no that's it https://github.com/frenchy64/miniKanren-occurrence-type-inferencer/blob/master/inferencer.rkt
the very pragmatic benefit i see from using a datalog database, is that we know how to populate it from everywhere with information
don't know datalog. I like the direction of somewhere in between hand-written and minikanren
(or more precisely, datalog is compositional over multiple data sources, datahike for instance can join data from everywhere, locally, a cluster db or one running for another party on AWS)
i think you still need to be able to write custom inference logic, because type systems very much sit at the boundary of making logic programming tractable enough to handle programs
regarding that it would be challenging: i am mostly thinking of storing the environment in an in-memory datalog database (could be datascript) instead of a hash-map, and just starting with the pull syntax to get items out, practically the same as for a hash-map. but once the environment is stored that way, one would have the freedom to a) use datalog queries and rules selectively when a more complicated invariant is supposed to hold in the type environment and, independently, b) expose immutable snapshots of this environment to the runtime code in case it declares its interest (intertwining compile time and runtime more effectively).
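a minimal sketch of that idea in datascript (attribute names are made up): the type environment lives in an in-memory db, pull replaces map lookup, and the same data is open to datalog queries or can be handed to runtime code as an immutable snapshot

```clojure
(require '[datascript.core :as d])

(def env-conn
  (d/create-conn {:binding/name {:db/unique :db.unique/identity}}))

;; extending the environment = transacting datoms
(d/transact! env-conn [{:binding/name 'x :binding/type 'Int}
                       {:binding/name 'y :binding/type 'Bool}])

;; hash-map-style lookup via pull
(:binding/type (d/pull @env-conn '[:binding/type] [:binding/name 'x]))
;; => Int

;; ...but the same environment now also answers datalog queries for richer
;; invariants, and the snapshot @env-conn can be exposed to runtime code
(d/q '[:find ?n ?t
       :where [?e :binding/name ?n]
              [?e :binding/type ?t]]
     @env-conn)
;; => #{[x Int] [y Bool]}
```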