This page is not created by, affiliated with, or supported by Slack Technologies, Inc.
2023-04-28
Channels
- # aleph (3)
- # announcements (3)
- # babashka (8)
- # beginners (12)
- # biff (4)
- # calva (12)
- # clerk (29)
- # clj-kondo (1)
- # clojure (104)
- # clojure-art (1)
- # clojure-austin (5)
- # clojure-berlin (3)
- # clojure-brasil (34)
- # clojure-europe (11)
- # clojure-germany (16)
- # clojure-losangeles (9)
- # clojure-nl (30)
- # clojure-norway (58)
- # clojure-uk (1)
- # core-async (8)
- # cursive (4)
- # data-science (9)
- # datalevin (1)
- # datomic (40)
- # emacs (2)
- # events (3)
- # helix (1)
- # honeysql (3)
- # hugsql (1)
- # hyperfiddle (66)
- # jobs (4)
- # juxt (7)
- # kaocha (9)
- # lsp (5)
- # malli (10)
- # off-topic (4)
- # polylith (2)
- # reitit (5)
- # releases (1)
- # remote-jobs (5)
- # sci (46)
- # scittle (2)
- # shadow-cljs (9)
- # tools-deps (17)
- # xtdb (8)
Hey all, does anyone know of a tutorial or cheatsheet for translating common NumPy operations to Neanderthal? I imagine https://aiprobook.com/numerical-linear-algebra-for-programmers/ contains all of the answers, but I'm looking for something for people who want to dip their toes in the water before deciding to commit the time to fully understanding the BLAS API.
@matthewdowney20 Neanderthal does not expose the BLAS API to the user. Also, the emphasis in the NLAFP book is not BLAS, but precisely the understanding of mathematical operations and how to use them efficiently in programming, with a nice UI similar to (but even nicer than, IMO) NumPy.
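For anyone skimming this later, here is a minimal sketch of what that NumPy-like surface looks like, assuming the usual uncomplicate.neanderthal.core and uncomplicate.neanderthal.native namespaces (my assumption of the standard setup, not something spelled out in this thread):

```clojure
(require '[uncomplicate.neanderthal.core :refer [mv dot axpy]]
         '[uncomplicate.neanderthal.native :refer [dv dge]])

;; NumPy: np.dot(x, y) -- vector dot product
(def x (dv 1 2 3))
(def y (dv 4 5 6))
(dot x y)                        ;=> 32.0

;; NumPy: A @ x -- matrix-vector product
(def a (dge 2 3 [1 2 3 4 5 6]))  ; 2x3 matrix, filled column-major
(mv a x)

;; NumPy: 2 * x + y -- scaled vector addition (BLAS axpy)
(axpy 2 x y)
```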
Oh my bad, thank you @blueberry! Are the concise function names BLAS-inspired then, even if not 1:1? Asking because I've had a better experience asking ChatGPT to describe NumPy code in terms of BLAS operations, and then figuring out how to do it in Neanderthal, than asking it to translate directly or reading the docs and trying to figure it out (the docs are good, I'm just too ignorant to grasp how some of the math relates!)
Function names are completely BLAS-compatible, but much better because they are polymorphic: instead of many blas_datatype_matrixtype_mm variants, you have only one mm (matrix multiplication) Clojure function that does the right thing under the hood for all data types and matrix types (and combinations). But it still gives you control through your existing BLAS insights! That's why it's much more elegant than NumPy. How is that possible? That is demonstrated in the book by teaching math and code at the same time, so you don't need to translate anything from NumPy (which might often lead you to wrong/inefficient/complicated solutions); instead, you understand what you need to do mathematically, and then write the right implementation with very little code.
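A rough sketch of that polymorphism in practice; the dge/fge constructors and require forms below are my assumption of the standard Neanderthal setup, not something quoted in this thread:

```clojure
(require '[uncomplicate.neanderthal.core :refer [mm]]
         '[uncomplicate.neanderthal.native :refer [dge fge]])

;; Double precision: the underlying BLAS routine would be dgemm.
(def ad (dge 2 3 (range 6)))
(def bd (dge 3 2 (range 6)))
(mm ad bd)               ; one polymorphic mm picks the right routine

;; Single precision: sgemm underneath, but the Clojure call is identical.
(def af (fge 2 3 (range 6)))
(def bf (fge 3 2 (range 6)))
(mm af bf)
```

(There is also a destructive mm! if you want BLAS-style in-place semantics and that extra control.)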
OK, quick follow-up: what do you recommend for the person who has 5-10 hours to invest in learning, but probably not 50-100?
I really don't have an answer to that, because I've never looked at learning in that way.
fair enough!
For context, I'm going through a lot of introductory material on neural networks and deep learning, starting with http://neuralnetworksanddeeplearning.com/, but translating the code to Clojure, and following along with Andrej Karpathy's videos. I've also gone through your tutorials, and I think they'll be an excellent resource once I've mastered some more basic material. So I'm trying to wrap my head around the algorithmic concepts and explore the problem space a bit before diving into the absolute most efficient matrix math and memory concerns, and I don't want to detour too much from the subject matter if I can just use NumPy or something. Though perhaps you'd say if I'm willing to "just use NumPy" I should be willing to "just use" some deep learning library! 🙂 But I do feel like I'm learning, just one piece at a time.