
If I want to try a message-passing language should I go with Erlang or Elixir? Or get the basics in Erlang then move to Elixir? I'm leaning towards Elixir because of the friendlier syntax.


It’s a 1-1 mapping, so you can go both ways. Elixir might be a bit easier to start with.

Josef Richter19:07:43

there's no strong reason to start with erlang, unless you have an existing erlang codebase. someone described the erlang syntax as neanderthal. but once you get into elixir a little, you'll sometimes look at erlang docs for some things, because you can use everything in elixir too. and knowing elixir will help you quickly transform the erlang syntax into elixir in your head. after all, syntax is usually no big deal.

👍 3

@allaboutthatmace1789 Elixir is easier because Erlang has some weird legacy behaviors. Alternatively if you're trying to learn the pattern, the process model is faithfully reproduced in Akka which you can use with Clojure.


Ooh, interesting, I'd heard of Akka but didn't know this was in the same space - thanks!


I am learning ARM architecture and I think I must be reinventing / rediscovering something that's widely known: you can break CPU instructions down into a hierarchy, where each level is safer/faster than the ones above it, and can call the ones below it without becoming significantly slower or less safe:
0. using registers and the stack - the bare minimum for computation; fast, safe
1. reading memory - an order of magnitude slower; safe on its own
2. writing memory - a little slower; makes 1 unsafe transitively
3. exceptions / interrupts - requires a total flush of state, crosses security boundaries, etc. etc., does arbitrary things with devices attached to the machine


what is "safety" in this context?


the opposite of being error-prone


each level has a range of potential failures that are a superset of the one below it


If you are dealing with a single core/thread, then I don't see writing as any less safe than reading, personally. Dealing with multi-core cache coherency protocols in hardware, their performance characteristics, and their effects on writing parallel software is complex.


Not clear to me whether writes are a little slower than reads. Both reads & writes are similarly fast if they hit in the appropriate L1/L2/etc. cache, and similarly slow if they miss.


right - that's all true, but, for example, an interrupt handler can write memory even if only a single thread is in play


writes are slower to do correctly though


(as in, a write can invalidate a cache, or lead to a race condition)


exceptions/interrupts are more complex to deal with in terms of what promises a CPU implementation makes about which state is preserved. Certainly some CPUs try to make interrupt handling 'lighter weight' than other CPU architectures, but it isn't fast on any of them I have heard of


leading to a race condition can be fast 🙂


as in, fast to complete in hardware. Whether it is correct or not is up to the application.


what I meant is that the things that prevent races tend to make things slower


very good points, thanks


Sure, I probably misunderstood your points. Thanks for clarification. I can't think of any objections to your clarifications.


I was thinking in terms of being able to have good rules of thumb for which code requires which kind of care, and how I could structure a tool that generates or manipulates instruction sequences (a compiler targeting ARM specifically)


and a lot of this is brainstorm level stuff


It seems odd to me to think of exceptions / interrupts "calling things below it" such as reading memory and writing memory. At least, many CPUs do read/write memory as part of performing the exception / interrupt handling, but some parts of it are done without executing other instructions at all.


Sure, no worries brainstorming.


At least, interrupt handling is at least partly done 'purely in hardware', to set up the context for starting to execute programmer-provided instructions that they want to be executed when that interrupt occurs.


I'm far from an assembly expert, but I'm not sure instructions fit neatly into those 4 categories.
• not sure combining speed and safety makes sense, or that instructions can be divided into 4 sets where each set is a superset of the one below
• it seems like categories 0-2 include instructions, while category 3 includes something else (handlers?)
• I thought speeds for reads and writes could vary dramatically.


above/below is overloaded and I should find a different metaphor - I'm thinking in terms of being able to define restricted sets of capability, where things on the outside can use everything inside them


I've never looked into any detail at hardware-level support for things like hypervisors or VMs. I have seen logical diagrams of enclosing rings of trust/security when implementing those, but don't know any more about it than that. I've looked in great detail at relatively simple CPU hardware architectures and their instruction sets, but no top-of-the-line server stuff.


I think that the speed / safety aspects might be coincident rather than causal, but at each step you have things that take more instruction cycles, and have bigger / more problematic ways they can go wrong


yeah - even rings are overloaded here, I need another thing to differentiate what I'm talking about


I would definitely put interrupts / exceptions in the more complex parts of knowing exactly what CPUs are doing, and in how complex they can be to implement correctly.


I think the problem is that the dimensionality is really high, such that it's hard to determine which instructions are above or below other instructions (no total ordering with respect to safety or slowness). some dimensions that I can think of:
• where data is located (which cache)
• how interrupts are handled
• branch prediction
• what errors can be produced
• slow path / fast path


right - but I think I can make 4 groups, even if a few things are kind of "boundary" items


I remember the first time, early 1990s, when I saw a book on Intel x86 architecture in the library, and happened across an appendix which showed which bugs were fixed in which mask revisions. Even though I had written software for 10 years before seeing that, it somehow never occurred to me before that, that CPUs could have bugs. It made me wonder "CPUs can have bugs?" 5 years later when I had done some hardware design myself, I asked "How do CPUs have so few bugs?"


let's say you write a program using only layer 0 instructions, what guarantees would that imply?


it implies that you can make transforms in terms of ordering / consolidation that would be unsafe to do if you were touching memory


(there are instructions to require flush / visibility that help mitigate, but it's also useful to use those as little as possible)


I'm not sure reorderability follows those categories. aren't floating point operations an example of category 0 instructions that aren't associative, i.e. reordering them changes the result?


If performance didn't matter, then all of this could be so much simpler 🙂


ie. if you're processing an array of floats, I think you get a different result if you process floats from largest to smallest than if you process them smallest to largest


sure - but that follows well-known mathematical rules, and on an architectural level, nothing touching memory really does


Among the category 0, you can reorder operations that are data-independent. But no, not things that are data-dependent.


right - of course, they touch registers and stack, but you can use symbolic reasoning based on which registers are used, and reorder stack usage or change register usage without changing meaning


you don't have that kind of capability if you touch memory


you do for a single thread


(unless you also start caring about visibility and alignment and flushing and etc.)


unless I am misunderstanding again. For a single thread accessing memory, CPU caches become not simple, but much simpler.


right, that's true


I guess I was taking threads as a premise


"thread" here meaning an independent CPU core / hyperthread


At least, I think that the symbolic reasoning you can do about reordering of operations is the same for stuff that only touches registers, and the stuff that touches main memory.


I guess what I was dreaming of was a weaker but still useful version of the transparency clojure offers - of course nothing at this level is immutable, but there seems to be a subset of instructions that have drastically less complexity (if multiple threads are a given)


CPU registers are thread locals 🙂


Nobody else can touch or see them, but you only get a handful of them.


and maybe a hunch about segregating and batching data access / io the same way I would in a clojure program - where it's safer and more performant to do a bulk load, or to tell the CPU "I will be reading this but I won't write it back and won't need to see changes" (ARM has instructions for that btw)


which is a lot like the way I'd safely use an atom


this is all learning and fresh for me, as I mentioned


The ARM instructions you mention are memory page protection attributes, or something else?


one moment, looking it up...


I was thinking of the prefetch - you can do a different cheaper prefetch if you know you will only be reading a single value, not checking for updates, and not writing back to it


but this might be a half baked thing I imagined as I slurped in docs, and not actually useful


Prefetch should be "only" a performance optimization. It should not be necessary to use it for any program, I am pretty sure.


I put "only" in quotes because performance is important, of course, but compilers emitting such instructions without help from a developer is probably somewhat unusual.


it seems like an intermediate level language could have rules / declarations, a sort of type system of operations (like haskell monads), letting each category freely use the things below it, but encouraging their separation

Mikael Andersson02:07:27

I believe effect (type) systems are used to model side effects in both verification and simulation research, but I haven't seen anything trying to categorise effects as a way to (at least) guide implementation for humans. Momentarily disregarding highly complex CPUs to look at simple MCUs, it might be possible. A largeish issue, even in this restricted case, but also an interesting one to form categories from, is that flags are essentially registers to which reads might write.

Mikael Andersson02:07:20

If one packs a few ops together into functional blocks, you can get better composability at the cost of some performance. But on MCUs without deep pipelines the cost is less severe. I've come to understand that it's fairly common to do bringup of new silicon using Forth, because you can get a decent amount of functionality with quite little code and few opcodes, hence somewhat isolating yourself from the complex instruction-to-instruction dependencies.

Mikael Andersson03:07:01

Interesting thought re hierarchies of operations in any case, got me thinking!


the model breaks down when dealing with malicious code (in ways I know, and I'm sure in many more ways I don't know), but seems like it would help "consenting adult" code be performant and correct


I think I'm rediscovering something that should be very basic, so links to books or papers would be awesome


(reading TAOCP currently)


Is there a tutorial for "publish a clojar on every tag using GitHub Actions"?

delaguardo16:07:25 This is not a tutorial, but an action that I've been using for some time

👍 3

At what point would you guys say a graph db would be worth it? After your queries require 5 edge traversals (which would probably correspond to 5 joins in a relational db)? Imagine a social app like twitter and its possible queries. It seems like it would work better, but at the same time I'm not so sure. Would sticking with postgres and building a graph with RedisGraph for the queries be better?


I worked on a social analytics app, and we actually found that with our edge count a graph db just couldn't keep up - we could have used one if we had fewer edges, but we ended up needing a document db plus in-process graph code (with caching of partial analysis back into said document db)


the document db was mongo, but could have as easily been postgres with the right table definitions and consistency rules


honestly, clojure has good data manipulation functions (many inspired by sql), and good immutable data semantics, so you don't get as much from a graph db as you would in most languages - I suspect with a few functions clojure is more capable with graph data than most graph dbs are

👍 9

thx for the answer, what was the graph db you chose?


I didn't do the spikes with graph dbs, but I know someone tried neo4j.


we ended up using a document store and our own in application graph code


> Crux is ultimately a store of versioned EDN documents. The fields within these documents are automatically indexed as Entity-Attribute-Value triples to support efficient graph queries.