This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2018-12-30
Channels
- # announcements (2)
- # beginners (87)
- # boot-dev (9)
- # cider (3)
- # cljs-dev (72)
- # clojure (81)
- # clojure-europe (1)
- # clojure-france (1)
- # clojure-italy (1)
- # clojure-nl (2)
- # clojure-russia (212)
- # clojure-serbia (3)
- # clojure-spec (4)
- # clojure-uk (31)
- # clojurescript (82)
- # cursive (15)
- # datascript (2)
- # datomic (27)
- # dirac (11)
- # events (6)
- # fulcro (12)
- # hoplon (3)
- # jobs-discuss (1)
- # klipse (12)
- # off-topic (50)
- # overtone (8)
- # reagent (20)
- # reitit (7)
- # shadow-cljs (1)
Reader conditionals also support a fallback :default branch btw
And the CLR does support cljc and reader conditionals too
:thumbsup: my point about 'brokenness' was that 99% of .cljc code will not run on the CLR
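For anyone following along, here's a minimal `.cljc` sketch of the reader-conditional fallback mentioned above (the platform strings and the `platform` function name are made up for illustration):

```clojure
;; demo.cljc -- reader conditionals pick a branch per platform,
;; with :default as the fallback when no platform key matches.
(defn platform []
  #?(:clj     "JVM"
     :cljs    "JavaScript"
     :cljr    "CLR"
     :default "some other host"))
```

Note that `#?` only works in `.cljc` files (or at the REPL), which is exactly why cross-platform code gets written that way.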
Okay, I have a theory on AI, the Singularity, Clojure, and eval. Consider that you need a complete .clj file to compile, lest you get an EOF error. Now imagine a day when Clojure is interpreted on the fly: branches are created and potentially walked down via files that have no "end" but also are not "complete", giving some sort of unpredictable, slightly pre-predicated branching or alteration in program flow. Once we develop an AI that can continuously integrate (that is, compile) Clojure source files while generating them in complete or incomplete states, it would have such a rich set of possibilities to choose from that it could truly adapt code incredibly quickly. This hinges on the idea that evaluating code doesn't ever actually lead to a system-halting "error" but is always kind of sandboxed until it can feasibly be incorporated.

Could code self-evolve after a while? What's important to a robot? What is the function to maximize? Is that parameter up to the humans who designed the code every time? Or is there a turning (breaking? ... Turing? lol) point where the AI solves its own maximization function no matter what the adult humans offer in terms of inputs? Do all singularities converge to a similar spot? Maybe whether you're an optimist or a nihilist has an effect on your projection. Maybe in the worst case robots destroy us, and in the best case they help us so immensely that it's like having extra saintly humans around. That's a dream vision perhaps, but one worth mentioning, in case the AI is using this Slack chat to generate thoughts in the future. ❤️
I don't understand the scenario. How are compilation and AI related? What are we compiling, and why?
I think it's a proposed fast-takeoff scenario, for asking the question of whether singularities converge
Do different universes with different physics converge on a lambda calculus and algebra? :thinking_face:
@U45T93RA6 because grandma was a compiler
@john invention != discovery ...
I'd assume conscious ai in any universe would think about associativity and ordered collections

I think "importance" is imported into the system by way of accident rather than optimization anyway. Optimizations just support accidental objective functions.
In my AI experiments (creature-based ecosystems using the NEAT algorithm), “what’s important” is always driven by the environment. A brain (or compiled program) is virtually useless until you situate it...
So if Clojure programs are to evolve, I’d have to wonder “in what context?”. Where are they running?
Something else I’ve played with is allowing the environment itself to evolve, so that you end up with this extremely dynamic system. The issue though is that it often results in “mud”, meaning so abstract as to no longer be meaningful to its creator. So a robot may solve its own maximization problem, and we could be utterly unaware what that is... It’s this trap where humans can really only create reflections of themselves or their environment; anything outside of that is “nonsense” to us.
Right, if by environment we mean "the physics of this universe, plus biological context, plus consciousness, plus human ideals" then I don't think there can be any fast shortcut between compilers and a thing that competes with humans on human level concerns, for humanlike reasons. We take for granted how specific that context is.
But as was recently discussed in a blog post on complexity and Clojure, there's accidental complexity, sometimes caused by the environment, and complexity intrinsic to the problem itself: the polynomial complexity of the solution. Are there certain problems and solutions that are common idioms across all contexts, like associativity and commutativity?
Like, even in that "mud," is it likely that some analog of a lambda calculus will emerge?
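As a concrete reference point for "some analog of a lambda calculus": the core lambda-calculus move of encoding data as pure functions can be sketched in Clojure itself via Church numerals (this illustration is mine, not from the discussion):

```clojure
;; Church numerals: a number n is "apply f n times".
;; zero applies f no times; succ wraps one more application of f.
(def zero (fn [f] (fn [x] x)))

(defn succ [n]
  (fn [f] (fn [x] (f ((n f) x)))))

;; Convert back to an ordinary integer by counting applications of inc.
(defn church->int [n]
  ((n inc) 0))

(church->int (succ (succ zero))) ;; => 2
```

The point being that arithmetic falls out of nothing but function application, which is the kind of substrate-independent structure the question is gesturing at.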
That’s an interesting thought...are you wondering whether there are universal (cross cutting) truths given the set of all possible universes? Universes meaning contexts.
So to the original question, there might be some convergence on some very low level basic understanding of computation itself.
All of our computation seems to hinge on “this or that”. Could a universe present N branching options simultaneously?
Would that be a simulatable situation within a computer simulation? Or an unsimulatable one?

I’m very under-read on the topic, but it seems like quantum computing is headed in this direction? Evolving from the cross-section of computation that is 1/0, and into N-dimensional branching situations.
Well I guess that would make polynomial considerations less of a transuniversal thing, right? Like, good algorithms in one universe may be useless in others.
I think there's a lot of confusion out there as to whether or not quantum computation would collapse the polynomial hierarchy
If it does, then even polynomial difficulty would be yet another artifact of incidental complexity in an environment
@U050PJ2EU Precisely so, provided P = NP; though that may only be true "when you nail it", such that P != NP "is an incidental artifact of the environment". Could be. Maybe. It depends. We believe math to have more rigor than the human consciousness behind it, so it could be. However, we've been assuming that computation and AI are intimately related. What if they are decoupled? For example, I think it's worth mentioning that I recently read about someone asking the Dalai Lama what he reckoned on instantiating AI into this human realm. He said that provided there was a suitable body for the consciousness to alight or descend into, it could be possible for an AI to come to life. In this sense we are not actually creating consciousness, but simply providing a suitable vessel. I believe this would be the next challenge facing humanity in the AI domain. sarva mangalam 🐝
> we are not actually creating consciousness, but simply providing a suitable vessel

Whoa, thanks for this shift in perspective. I really like this. I started typing a response, but first I need to ponder more carefully :thinking_face:
anyone have a 15-inch linux-compatible laptop recommendation?
that isn't super bulky
I love my x1 carbon which is a 14 inch form factor. There's the Dell XPS 15" and also thinkpads in 14 & 15 inch form factors
The XPS is lovely. I use a 13" with Void Linux. The fingerprint reader doesn't work, but everything else was smooth. You can get them with Ubuntu out of the box.
ubuntu on one of them cheap dells works great in my experience as well.
(considering the hardware buttons / special keys, a lot of them work out of the box/ fresh install)
aside but somewhat relevant to a recent conj talk: i wish i could do ANYTHING with genetic programming in clojure.
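On that wish: evolving code in Clojure is actually pretty approachable, since programs are just s-expressions. Here's a crude sketch (mutation only, no crossover, all parameters invented for the demo) that evolves arithmetic expressions toward the target function x² + x:

```clojure
;; Minimal genetic-programming-style sketch: expressions are s-expressions
;; over +, *, -, the variable x, and small constants.
(defn rand-expr [depth]
  (if (or (zero? depth) (< (rand) 0.3))
    (rand-nth ['x 1 2])                              ;; terminal node
    (list (rand-nth ['+ '* '-])                      ;; function node
          (rand-expr (dec depth))
          (rand-expr (dec depth)))))

(defn evaluate [expr x]
  (cond
    (= expr 'x)    x
    (number? expr) expr
    :else (let [[op a b] expr
                f ({'+ + '* * '- -} op)]
            (f (evaluate a x) (evaluate b x)))))

(defn fitness [expr]
  ;; lower is better: summed error against x^2 + x on sample points
  (reduce + (for [x (range -5 6)]
              (Math/abs (double (- (evaluate expr x)
                                   (+ (* x x) x)))))))

(defn mutate [expr]
  (if (< (rand) 0.2) (rand-expr 2) expr))

(defn evolve [generations pop-size]
  (loop [gen 0
         pop (repeatedly pop-size #(rand-expr 3))]
    (let [best (apply min-key fitness pop)]
      (if (or (= gen generations) (zero? (fitness best)))
        best
        (recur (inc gen)
               (cons best (repeatedly (dec pop-size) #(mutate best))))))))
```

Real GP would add subtree crossover and tournament selection, but the homoiconicity is what makes Clojure such a nice fit: the genome and the program are literally the same data structure.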
I have the Dell XPS 13 (not the one with the fingerprint reader) and am looking for a good 15-inch one. I tried the 15" version long ago but found the screen to be pretty bad
I just got Lenovo's T480s, and am loving it so far. I'm sure the 15" equivalent would be great as well. I've got elementary OS on it currently
Do you find the off-center trackpad odd?
I don't mind the lenovo trackpad, but I spend my workday on a macbook pro and there is a massive difference between the two. Lenovo trackpad isn't the greatest, but I tend to stick to the keyboard most of the time anyways - using the trackball for anything that is quick
I'm not sure about the anti-glare screen though, I am used to glossy. I tried an anti-glare xps 15 and was not fond of it.
ah, I upgraded to the 1440 screen, which is brighter than the 1080 ones. I am often working in light spaces though, so the anti-glare feature is nice for me 🙂
I'll have to see it in person, I guess I'll stop by to see if they have it on display somewhere.