This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2019-04-25
Channels
- # announcements (3)
- # aws (6)
- # beginners (143)
- # boot (14)
- # calva (2)
- # cider (1)
- # clara (1)
- # clj-kondo (1)
- # cljdoc (4)
- # cljs-dev (50)
- # cljsrn (5)
- # clojure (61)
- # clojure-chicago (1)
- # clojure-europe (4)
- # clojure-italy (5)
- # clojure-nl (5)
- # clojure-spec (32)
- # clojure-uk (11)
- # clojurescript (166)
- # clojureverse-ops (2)
- # clr (3)
- # core-typed (1)
- # cursive (8)
- # datomic (21)
- # defnpodcast (1)
- # emacs (1)
- # figwheel (1)
- # figwheel-main (1)
- # fulcro (7)
- # graphql (7)
- # jobs (8)
- # leiningen (4)
- # luminus (3)
- # lumo (17)
- # mount (3)
- # nrepl (4)
- # off-topic (113)
- # pedestal (1)
- # re-frame (15)
- # reagent (2)
- # reitit (2)
- # shadow-cljs (75)
- # spacemacs (3)
- # sql (12)
- # tools-deps (44)
- # uncomplicate (2)
- # xtdb (15)
Thoughts on aspect oriented programming? Seems like it’s macros but heavier.
AOP tends to involve too much magic/surprises for my liking, though I've only really come across it in Spring
Yeah, that’s what Wikipedia said.
But there are a few things that really are cross-cutting. Tracing, authentication, maybe a few others.
I don’t see how cross cutting concerns can be handled in a way that’s universal without being obscure.
(Or just function composition)
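The function-composition alternative mentioned above can be sketched without any AOP framework: a cross-cutting concern like tracing becomes an ordinary wrapper layered onto plain functions. This is an illustrative sketch in Python (the names `traced` and `add` are invented for the example), not a claim about any particular AOP library:

```python
import functools

def traced(fn):
    """Wrap fn so every call logs its arguments and result (a cross-cutting concern)."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        print(f"TRACE {fn.__name__} called with {args} {kwargs}")
        result = fn(*args, **kwargs)
        print(f"TRACE {fn.__name__} returned {result}")
        return result
    return wrapper

@traced
def add(a, b):
    return a + b

total = add(2, 3)  # tracing happens without touching add's body
```

The same shape works for authentication checks or timing: the concern lives in one wrapper instead of being scattered through every function body.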
Another factoid about locality and perception that I find interesting: it takes about 1.3 seconds for light to get from the moon to earth, so, pending any QM discoveries, we'll never have human-perceptibly "local" access to state on the moon. The earth, though, is small enough that light in a vacuum can round trip to the antipode in roughly 130 ms, which is around the threshold where a delay starts to feel non-instant to a human. So it's like the size of the earth and human perception are conveniently related such that, given the speed of light, state at any location on earth can be made to seem close to local.
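Checking the arithmetic with rough public figures (mean Earth-Moon distance about 384,400 km, antipodal surface distance about 20,015 km):

```python
C = 299_792_458            # speed of light in vacuum, m/s
MOON_DISTANCE = 384_400e3  # mean Earth-Moon distance, m
ANTIPODE = 20_015e3        # half of Earth's circumference, m

moon_one_way = MOON_DISTANCE / C     # about 1.28 s, matching the 1.3 s figure
earth_round_trip = 2 * ANTIPODE / C  # about 0.134 s for a surface round trip
```

So the moon figure checks out, and an earthly round trip is on the order of a tenth of a second, near the edge of perceived instantaneity rather than well under it.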
The truism still rings true. It really is a small world 🗺️
"human perception" is a bit vague, though - each sensory system is tuned a bit differently in its sensitivity to time. I'm thinking specifically of sound - 15 ms isn't a lot of time, but it is noticeable for musicians. It's small enough that if two sounds play 15 ms apart from each other, it might not sound like two discrete events, but it's enough that you can feel that something is off.
they used to say there was no perceivable differences in a movie's frames per second over ~24, back when they were shot at... 24fps! 🙂
now we have games that run at >144 fps, we know it's totally possible to perceive it
yeah... Perhaps if the diff between 60 and 144 Hz is noticeable then my numbers are a bit off. Good point about audio too. But, strangely, neuroscientists claim that kHz-range audio frequencies become downsampled by our auditory systems and, by the time they reach our consciousness, the experiential bits are moving through the brain no faster than the alleged 200 Hz limit of the brain.
But the cochlea does not "sample" the waves digitally. It's an analogue system, a coiled structure where different parts resonate at different frequencies.
presumably though they're converted into spike trains in the auditory cortex somewhere right?
I guess... but the frequency of that doesn't correlate with the frequency of the actual waves, it has already been converted from the time to the frequency domain. Just like eyes perceive color in the terahertz range.
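The time-to-frequency conversion being described can be illustrated with a discrete Fourier transform. The cochlea does this mechanically and continuously, so numpy's FFT is only a digital analogy, but it shows how a fast oscillation in time becomes a single "which frequency" location:

```python
import numpy as np

rate = 44_100                          # samples per second
t = np.arange(rate) / rate             # one second of time
signal = np.sin(2 * np.pi * 440 * t)   # a 440 Hz tone

# Transform from the time domain to the frequency domain.
spectrum = np.abs(np.fft.rfft(signal))
peak_hz = int(np.argmax(spectrum))     # bin width is 1 Hz for a 1 s window
```

The downstream representation carries "energy at 440 Hz", not anything oscillating at 440 Hz itself, which is the point being made about spike rates not tracking the raw wave.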
right... the problem is that... well... from what I understand, many neuroscientists believe that the "seat of consciousness" sort of sits in the frontal lobes of the brain, in fronto-striatal loops. But, for that to be true, you have to explain how the information gets from the visual cortex, auditory cortex and somatosensory cortex into those frontal, affective areas of the brain.
To explain that, they tend to say that the stream of our consciousness is actually a very thin byte stream, only a few bytes per second... I can't remember the estimate... maybe a few hundred bytes or a few kilobytes per second
Reason being, you have to explain how all that information from all those different features get integrated at only 200 hz
So, the argument is that our actual consciousness is like a pinhole that quickly saccades across our perceptions
IDK, it seems to me that I am conscious of only a very small subset of the stuff I should be perceiving
But our TV is like 1080p, right? What do you subjectively feel like the resolution of your visual system is? Mine feels like... I don't know, 16K? 32K? Allegedly, only a pinhole of that gets through to consciousness.
My brain reconstructs a 16K image from what it knows the other comments and my surroundings to be
at 200 Hz, it's hard to integrate such a rich visual experience, and funnel it down to a few K, all while catching a baseball
Where is that image reconstructed? and how do those 16K get to the frontal lobes all at once?
The brain is believed to operate on a few different frequency ranges, the highest being around 200 Hz. That means the information communication medium of the brain, spike trains, is believed to move through the brain at those frequencies.
Less believed theories claim that quantum effects are being leveraged by our consciousness, allowing long-distance communication between neurons, much faster than the spike trains we're usually measuring.
I tend to want to believe the former, since I fancy the idea of seeing a human-like AI sooner rather than later. But this paradox about integrating such massive information at such slow rates makes me wonder if the latter is true after all.
Most AI folks you hear will pooh-pooh the quantum theories of consciousness, because they desperately want to believe that consciousness is predicated on classical mechanics.
And they say that MRIs would likely mess with our consciousness if quantum activity was involved
a given neuron has a tree of branches that respond to different spike trains differently... So there's a lot of parallelism, harmonization and dissonance between those spike trains.. So it's sorta analog too
Yeah, lots of communication takes place over brain waves that are in the 5, 10, and 15 Hz ranges. Super low.
They synapse electrically and are much faster, but people haven't yet implicated them as some fundamental component of consciousness AFAIAA
So you have this communication that happens between the auditory cortex and the front lobes at max 200Hz... does that limit also hold in the intra-lobe signals as well?
Some believe this circuit is the seat of consciousness, so to speak https://en.wikipedia.org/wiki/Frontostriatal_circuit
Even ancient trilobites from the Cambrian era are thought to have some homolog of that circuit.
So IIUC, for experiences from the sensory lobes to be experienced by our seat of consciousness (and for a person to give positive self-report of an experience) then it has to integrate with that circuit. Most information integration takes place in the thalamus
Neuroscientists have been able to show that when parts of the visual cortex, perceiving certain aspects of a scene, do not integrate with those fronto-striatal circuits, then people have no self-report of the experience at all.
So anyway, you'd have to explain how the rich diversity of all those rich sensory data sources are all integrated into those frontal loops so quickly.
And it may be possible, but I just don't think I've heard a reasonable theory on how yet.
It's just reduced from the rich 20Hz-20kHz to a frequency analysis of some stimulated cells in the ear
Where you have the THz range that gets reduced to a "few" red, blue and green cones being stimulated or not
Maybe the other layers above, reaching to consciousness, can be explained by similar abstractions
Yeah, there's lots of layers though... The visual cortex has 4 layers, some of which have sub layers. And the scenes and object semantics are built up there. So lots of data is being compressed into signals that summarize objects and whatnot
Which pass some information laterally across the cortices via waves, but mostly through the thalamus, where lots of integration happens. The hippocampus, a pair of small structures alongside the thalamus, has "place cells" that actually maintain a grid of our location within a subjective radius of roughly one mile.
All of that information has to bounce around, a lot, and integrate, get sent over to the frontal lobes to see what they think, check with the cerebellum to remember how to do the basic motor action, and then commit the decision back to the motor cortex, all while catching a baseball, all under 200 Hz
Let's put it this way: the human brain is thought to compute at around 10 petaflops. That's 10 quadrillion (10,000,000,000,000,000) floating point operations per second. However, the "bus speed" of the brain is only 200 Hz. How do you ship the results of 10 quadrillion flops around when your bus is only 200 Hz? Easy, right? You just have lots of parallel buses. But the problem is that we know information must flow into funnels in the brain, so that different streams can flow into different circuits. So what kind of bus funnels can possibly integrate 10 quadrillion flops of computation at 200 Hz? Not saying it's not possible, but I'm not aware of any hard-science theories on how that is supposed to work, given Amdahl's law https://en.wikipedia.org/wiki/Amdahl%27s_law
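For reference, Amdahl's law says that with parallel fraction p of a workload spread over n units, the maximum speedup is 1 / ((1 - p) + p / n). A quick sketch of why a serial "funnel" dominates, purely as arithmetic and not as a model of neural architecture:

```python
def amdahl_speedup(p, n):
    """Max speedup with parallel fraction p across n units (Amdahl's law)."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with a billion parallel units, a 1% serial funnel caps speedup near 100x.
cap = amdahl_speedup(0.99, 10**9)
```

That is the intuition being invoked: no matter how many parallel spike trains there are, any step where everything must be integrated serially sets a hard ceiling.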
Does the brain even do flops, though? As programmers we tend to fall too much into the zeitgeist of trying to interpret everything as a computer, sometimes, I think.
eh, it's just an analogous estimate.. It's not like the brain is working with floating points.
When you strike the string on a guitar, how many petaflops does it process to turn that into soundwaves?
When they say 10 petaflops, they're using some model of a neuron as an analog of an actual circuit. So if you had a guitar with a quadrillion strings, then you could use those harmonies to probably compute some big logic
Maybe this is derailing the conversation a bit, but if we were in 1619 we would be talking about how many levers a neuron is equivalent to
fair point, but when neuroscientists are making claims about how much the brain can compute, they're trying to guess at how many different states a given neuron can switch on, like a circuit. They may not be right, but they're doing their best to approximate some computational power of a given neuron.
If it so happens that neurons can also tap into their DNA to store and retrieve information in real time, as some believe, then that could significantly increase that number of flops.
there's more going on in hearing than is implied by just "downsampling"
like feature and pattern detection at different levels
in his copious free time, Rich has been working for years on a system that actually models human hearing
is it written in clojure?
I believe it is more of a mathematical model, but you'd have to ask him
it's a fascinating subject if you ever get a chance to ask him about it
what is it with Lisp and cognition… I did my final thesis at a “music cognition institute” where I met Common Lisp.
@john interestingly it becomes harder to see the difference between 120 and, say, 200 Hz refresh rates, unlike 24 vs 60 vs 120, so 200 Hz in the brain sounds reasonable. Or there is noticeable lag in our retina/optic nerve; also human reaction to a visual event is about 80-250 ms
@nxtk yeah, I'm suspicious of the 200hz limit though cause the 44khz audio feature translation thing just seems super magical to me.
What is the "44khz audio feature translation thing?"
Afaik, human consciousness mostly exists below a 200 Hz range. But human perception of audio extends up to about 20 kHz, which is why audio is sampled at 44,100 Hz. So presumably auditory features from the high range are being translated into features that fit in those 200 Hz range spike trains.
maybe there are pre-processing parts which operate at higher frequencies, something analogous to FPGAs
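That pre-processing idea can be sketched as framing: 44,100 samples per second chunked into 200 feature frames per second leaves about 220 samples per frame, and each frame gets summarized into whatever a 200 Hz channel can carry. This is purely illustrative (a crude energy feature), not a model of the actual auditory pathway:

```python
import numpy as np

sample_rate = 44_100  # audio samples per second
frame_rate = 200      # hypothetical downstream "spike train" frame rate
frame_len = sample_rate // frame_rate  # 220 samples per frame

t = np.arange(sample_rate) / sample_rate
audio = np.sin(2 * np.pi * 1000 * t)   # one second of a 1 kHz tone

# Chop into frames and summarize each with a single energy value:
n_frames = len(audio) // frame_len
frames = audio[: n_frames * frame_len].reshape(n_frames, frame_len)
features = (frames ** 2).mean(axis=1)  # one number per 5 ms frame
```

The high-frequency detail survives only as per-frame summaries, which is roughly the "translated into 200 Hz features" picture.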
Somehow that thing, the cochlea and/or whatever else, is also able to signal to us that a particular instrument in a recording of a band is off beat with the rest. And in our consciousness it all feels subjectively like our whole brain is keeping up with the full bandwidth of the auditory source, all the way up to 20 kHz