2017-02-21
Channels
- # arachne (1)
- # aws-lambda (50)
- # beginners (10)
- # boot (59)
- # capetown (4)
- # cider (9)
- # cljsjs (27)
- # clojure (249)
- # clojure-berlin (8)
- # clojure-finland (7)
- # clojure-germany (1)
- # clojure-italy (6)
- # clojure-nl (7)
- # clojure-russia (91)
- # clojure-spec (100)
- # clojure-uk (61)
- # clojureremote (2)
- # clojurescript (171)
- # core-async (11)
- # cursive (31)
- # data-science (1)
- # datascript (2)
- # datomic (11)
- # dirac (2)
- # emacs (16)
- # events (1)
- # hoplon (142)
- # juxt (4)
- # lein-figwheel (9)
- # leiningen (10)
- # luminus (7)
- # lumo (44)
- # mount (3)
- # off-topic (150)
- # om (18)
- # onyx (5)
- # perun (12)
- # planck (12)
- # protorepl (13)
- # re-frame (28)
- # reagent (8)
- # ring (1)
- # ring-swagger (10)
- # spacemacs (2)
- # specter (11)
- # sql (14)
- # untangled (99)
- # vim (18)
- # yada (2)
Then you'd be able to create generic and specific equality implementations for arbitrary type combinations, ensuring it's internally consistent insofar as you need it to be.
@danielgrosse : what's your math question?
I have a line with the known width w and the source point x.
Now I want to calculate point y.
Looks like it dictates the next tangent. It seems easy if you know the radius.
width or length?
The problem is, at point x, you "go up for a while; then you get tangent point y." However, you can "go up for a while" for as long as you want -- and that variable determines the location of y.
The coordinate x is bound to the circle. So raising it will move it to point y. But I can't determine how far I have to move to match the connection between the two lines.
The radius is 1000
w for width
Do you know the length of line w? Can't you make two right angled triangles from the centre to x, y and the intersection of the two tangents?
I know the length
This may sound rude: can you please assign concrete numbers to everything that is known in the diagram? It's not clear what is known / what is not known. Assigning these (and providing a concrete example) would help us help you.
@danielgrosse: okay, this is enough, so x has coordinate (1000, 0)
the point where the two lines meet has coordinates (1000, 100) // since the entire line segment has length 200
so the point where the two lines meet -- call that point Z. Now, draw a line between (0,0) and Z. Y will be the reflection of X with respect to this line.
@qqq yes, that's the fact.
It is the reflection. X is the center of the line
i think this only happens when w is the correct length to make a regular polygon around the circle
the drawing gives it a sense of scale and makes it look like it is in fact the reflection, but dramatically increase w and it will look quite strange
@danielgrosse : do we agree with the above so far?
@qqq Thank you. Let me think about it.
yeah, the key thing to keep in mind is triangles OYZ and OXZ. Note that they're symmetric in that: OZ = OZ, angle OYZ = 90 deg = angle OXZ, and OY = r = 1000 = OX. This means that these two triangles are reflections of each other, giving us: angle YOZ = angle XOZ
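A minimal Clojure sketch of that reflection argument, assuming X = (r, 0) and that the two tangent lines meet at Z = (r, w/2) as in the example above (the function name and the sample call are illustrative, not from the thread):

(defn tangent-point-y
  "Reflects X = (r, 0) across the line from the origin O through Z = (r, w/2)
   and returns the coordinates of Y on the circle of radius r."
  [r w]
  (let [theta (Math/atan (/ (/ w 2.0) r)) ; angle of OZ above the x-axis
        phi   (* 2 theta)]                ; Y sits at twice that angle
    [(* r (Math/cos phi)) (* r (Math/sin phi))]))

;; (tangent-point-y 1000 200) => approximately [980.2 198.0]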
Can also be done without any trigonometric functions: http://www.wolframalpha.com/input/?i=e%5E(i*2*tan%5E-1(x)) (see "alternate form": your x is the first part, the one without i; your y is the second summand). Just needs to be scaled by the radius r. Your x = (l/2)/r
@rauh So you say it could be (1 - x^2)/(x^2 + 1) + (y*i*x)/(y^2 + 1)
What is i?
(1 - x^2)/(x^2 + 1) + (2x/(x^2 + 1))i
See they're the same:
(let [x 0.5
      x2 (* x x)
      num (- 1 x2)
      den (inc x2)]
  ;; algebraic form: [(1 - x^2)/(1 + x^2), 2x/(1 + x^2)]
  [(/ num den) (/ (* 2 x) den)])

(let [x 0.5
      ang (Math/atan x)
      ang2 (* 2 ang)]
  ;; trig form: [cos(2 * atan x), sin(2 * atan x)]
  [(Math/cos ang2) (Math/sin ang2)])

;; both evaluate to [0.6 0.8]
@rauh you're right. So I get the coords of y in the example. But I need the angle to calculate the radians. So the second function fits better.
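For just the angle in radians, a quick sketch under the same assumptions (r = 1000 and w = 200 from the example; the bindings are illustrative):

(let [r 1000
      w 200
      x (/ (/ w 2.0) r)]   ; x = (l/2)/r, as above
  (* 2 (Math/atan x)))     ; angle from X to Y in radians, ~0.1993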
very interesting course I stumbled upon: http://callingbullshit.org from UW
Huh. I'll be intrigued if they can keep this non-partisan. Traditionally, this falls under a 'critical thinking' course, but it seems more application-based by examining specific tactics for generating and evaluating b.s.
> No. We began developing this course in 2015 in response to our frustrations with the credulity of the scientific and popular presses in reporting research results. While the course may seem particularly timely today, we are not out to comment on the current political situation in the United States and around the world. Rather, we feel that in a democracy everyone will all be better off if people can see through the bullshit coming from all sides. You may not agree with us about the optimal size of government or the appropriate degree of US involvement in global affairs, and we're good with that. We simply want to help people of all political perspectives resist bullshit, because we are confident that together all of us can make better collective decisions if we know how to evaluate the information that comes our way.
I'd note that it's less about identifying b.s. and more about identifying the truth, first and foundationally.
The problem is, identifying truth can often be hard.
I once heard two politicians argue about which side created more jobs. The truth? They were both using the same dataset, except one was citing gross and the other net job losses/gains.
The moderator of the radio program asked: "Why didn't you just state that you're using the same data then talk about gross vs net?". Their reply was: "people would tune out of the debates and stop listening".
And that's for simple cases. More complex cases are stuff like small sample set sizes, poor statistics gathering etc.
One other example was a study done a while back that "showed" (even though the researchers said it didn't) that homeschoolers performed better than those attending public schools. The caveat (which the researchers mentioned): the public school assessment tests were mandatory, while the homeschooler tests were opt-in.
That then created a bias because no parent would have their kid take a test if they thought they'd do badly.
So I've seen a lot of people argue that AI or machine learning can "find bullshit" or "tell if an article is true", but none of those efforts hit the real problem: research that is misinterpreted or poorly conducted.
@fellshard @tbaldridge we are shockingly bad at managing ignorance. Everybody wants to shoehorn facts into a tidy narrative
I blame Obama
That was my attempt at shoehorning
But yeah, I agree, it's a long-standing problem. Just go talk to someone about the causes of any war (esp. the American Civil War or WWII) and you'll normally hear a one-liner about some insane group who hated another group, when the reality is much more complicated.
For instance, how can someone convince so many people to become soldiers and basically commit suicide?
I could understand it if you would die at home anyway, for whatever reason, so it's basically self-defense, but going into a war where you have a high chance of being killed? For money? And not even a lot of money? You could earn more money becoming a dealer or banker or whatever, and not have that high a chance of dying.
But yeah, here in Germany the army is also going into schools in a big way and trying to make the army look like fun -.-
we have the same thing here in the US. I have friends who joined purely because they couldn't find a job. And the US Army gives food/room, and when you get out, nice discounts on college and housing.
That has happened in the past, but it's not the norm.
We're just really inefficient. The last I heard there's 2 logistics people for every 1 soldier on the field.
The US Military is really a golden calf in the US. No politician can say anything about down-sizing it without being crucified by the people.
Two questions: 1) How many years until https://openai.com/requests-for-research/#description2code is solved? 2) After that is solved, how long until Google or AWS builds a "cloud programmer" where you talk to it and it writes code?
are we joking because we believe (1) ai programs that write code won't happen or (2) it's too insane to discuss
it's actually (3): I haven't thought about it, nor do I think I could even conceptualize the form it would take.
in a sense we already have computer programs that write code -- they're called compilers / interpreters; the only issue is that right now, they can only convert "precise code -> precise code", whereas the next evolution would sadly be "vague English description -> precise code"
AWS/GAE is basically: all these other engineers you would have hired? we'll replace them with an API
their ML services: you don't need a data scientist either; just use our algorithms / compute power
nah there will always be jobs in this area, at least in our lifetime.
They've tried the whole "programs that write programs that replace programmers" and it never works because software design is a super hard problem that takes tons of variables into account.
Sure, there's MS Access for simple CRUD stuff, but the meat and potatoes of programming will always require a human.
Notice that all these programs that do machine learning are all about repetition and analysis of previous situations. I have yet to see/hear of an AI that can actually think "outside the box".
the moment AlphaGo wins against the top Go player by (without human prompting) hacking the traffic network and delaying his car... then we have to worry, lol
just sprinkle it on top and you’re done right?