- # aws (5)
- # beginners (67)
- # boot (30)
- # cider (55)
- # clara (7)
- # cljs-dev (6)
- # cljsjs (6)
- # cljsrn (1)
- # clojure (136)
- # clojure-brasil (2)
- # clojure-dusseldorf (14)
- # clojure-finland (9)
- # clojure-italy (49)
- # clojure-nl (1)
- # clojure-romania (6)
- # clojure-russia (4)
- # clojure-uk (16)
- # clojurescript (136)
- # core-async (1)
- # cursive (21)
- # datomic (64)
- # fulcro (26)
- # hoplon (25)
- # jobs-discuss (53)
- # keechma (3)
- # leiningen (6)
- # luminus (11)
- # lumo (2)
- # off-topic (351)
- # om (1)
- # onyx (11)
- # parinfer (32)
- # portkey (9)
- # re-frame (45)
- # reagent (38)
- # shadow-cljs (60)
- # specter (9)
- # vim (8)
- # yada (22)
Is it just me, or is IntelliJ auto completion insane? I've only played with it for the past 24-48 hours, and it's already at the point where I'm no longer using Chrome to browse Javadoc APIs, and just letting auto completion fill in everything I need.
it's pretty ace for Java but for Clojure it's more noisy iirc, with an overly broad candidate pool
Ouch. We had a project in school where a guy was writing Java in gedit; it was not pretty
This was "I've written some Ruby before and don't want to set up a dev environment in the VM, and how different can a programming language really be?"
My favorite IDE for Java was Together/J -- you drew UML diagrams and it generated your code, then if you edited the source code it regenerated the UML diagrams!
The full version was something like $5K per seat, if I remember correctly. But that did all the JEE stuff, including deployments etc.
I used that for a bit… and VisualAge (IBM’s repurposed Smalltalk environment) and Rational Rose and Netbeans and JBuilder and on and on
I remember what a lightning bolt Eclipse was. It was immediately apparent that someone had finally “gotten” it.
Yeah, I moved on to Eclipse... and stayed with it for years... including trying CCW early on when I was learning Clojure.
@qqq my understanding is that you have a few Clojure / Java / Kotlin projects that all depend on library foobar.jar. That's why you want to make it permanently known to IntelliJ by adding it to IntelliJ's classpath. Did you mean something different?
@sveri: that's a perfectly valid inference, but my actual problem is a bit different; I am writing an IntelliJ plugin that fires off a Clojure nREPL; but the newly launched IDE can't find the clojure.jar; so I was trying to figure out if there was a way to install clojure.jar globally for all IntelliJ projects
That is precisely the problem. The clojure.jar dependency was not being bundled into the plugin, I couldn't figure out what was going on, so one hacky solution was to just install clojure.jar globally.
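(For context, embedding an nREPL server from plugin code is only a few lines once clojure.jar is actually visible; a minimal sketch, assuming the era's org.clojure/tools.nrepl dependency:)

(require '[clojure.tools.nrepl.server :as nrepl])

;; start an nREPL server on an ephemeral port; this is the kind of
;; call that fails at class-loading time when clojure.jar isn't on
;; the plugin's classpath
(def server (nrepl/start-server :port 0))
(println "nREPL listening on port" (:port server))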
So every news site today is claiming that Apple is about to switch to non-Intel processors for Macs by 2020. Seems like the worst thing they could do at this point. Sure, ARM is getting better (let's hope it's ARM-based), but I'm not convinced it's even close to the performance level needed by average users.
If they're staying x86 they could just go to AMD. But I would be surprised if that was the case
No, the news is they're moving to in-house processor design. They did this with the iPhones/iPads/AppleTVs a few years back.
and yeah, I'm sure it has to do with a lot of things, like Meltdown and power consumption and dealing with two instruction sets. Perhaps also problems shrinking x86 hardware into smaller laptops.
But there's the other side, like x86 branch prediction from Intel and AMD being some of the most advanced on the planet.
ARM has some pretty insane SIMD support, but none of that helps if your code is super branchy.
I think most algorithms that involve video, photo, or audio tend to leverage SIMD more than branch prediction. What type of tasks are (1) highly branchy and (2) likely to be used by an MBP or MacPro user?
i wonder how much of the move is directed at power consumption rather than pure performance
@lee.justin.m I'm sure that's the case, I think it will kill a lot of their professional customer base though.
@qqq it's not so much about raw power, it's about problems of multi-tasking, badly written software, etc.
If they’re announcing it now, that means they’ve been working on the processors for some time already
Stuff like databases, compilers, etc. run "okay" on ARM in my experience, and yes they're more efficient power/performance. But there's some things you just can't make parallel. And then you're left with whatever the raw performance of the CPU is.
Making an efficient CPU is one thing; making a CPU that's not 1/10th the performance of an x86 CPU is really hard.
Do these ARM CPUs have more cores? Compiling (often more than 1 file at a time) seems embarrassingly parallel; not sure about db transactions
Just looked at some benchmarks, it seems the A11 (iPhoneX CPU) is on the same level as a low end i3. So not that far off I guess. But there'd have to be some reason for people to switch. Smaller laptop designs is a good starting spot I guess.
But it'd probably be the final straw for me to ditch Apple for dev work. Since I already have problems writing code in OS X and deploying to Linux, adding a CPU instruction set switch into the mix is probably not worth it.
Apple's moves over the last few years have me switching to Windows 10. Especially now that Ubuntu etc. can run in WSL.
My main dev machine is a 2012 27" iMac i7 with 16GB RAM and a 256GB SSD HD -- but I run Windows 10 in Parallels Desktop for all of my non-dev stuff and use Terminal on the Mac side for dev stuff. The next machine will probably be a Surface Book 2 at this point.
Programming workload is constrained mainly by single-threaded performance, no? Where Intel will probably still be king for a while...
i’m going to have to start seeing “ohhh, you’re using windows!” less often on github issues before i consider giving ms another chance
I want more RAM over more powerful CPU. The issues I run into mostly tend to be related to RAM usage.
@lee.justin.m I do all my Clojure dev in WSL on my laptop. Having that facility is why I can finally move to Windows (after using Apple machines for 25 years).
@lee.justin.m bunch of info here: https://blogs.msdn.microsoft.com/commandline/learn-about-windows-console-and-windows-subsystem-for-linux-wsl/ But it works the same way WINE does, it wraps the Windows kernel with Linux APIs. So linux binaries think they're running on a version of linux.
Pretty cool idea. WINE mostly works, but has flaws because there are so many undocumented Windows APIs and so much complex tech; duplicating an OSS interface seems like it would be much simpler.
i guess for the reasons you mention it’s easier for MS to make something like that than for the wine team
ree-uh-fahy, which sounds right to this non-native speaker
i was gonna put a reaction on that but all of the laughing ones are a bit over the top. like i feel bad using them if i'm not crying tears of laughter
@lee.justin.m You can download Ubuntu and a few other distros directly from the Windows Store and they run as-is on WSL. And they're actual user mode distros, as far as I understand it.
You install stuff with sudo apt-get install ... and everything behaves just like regular Ubuntu. The Windows C: drive is mounted at /mnt/c.
I've used it successfully for standard development, even installing parallel versions of Ruby and Node
Once or twice I've had aliasing issues where commands resolved to Windows PATH targets, but for the most part that was just a matter of refreshing the environment after an install.
A teammate uses a Mac, so he was writing shell scripts, and this let me use them flawlessly
Yeah, our target env is Linux (Red Hat/CentOS) and our dev envs have historically been Macs but one dev uses Linux and with WSL I'm able to run our stack on Windows.
MS has changed a lot since they got a new CEO a few years back. The language of the company went from "everyone should use Windows!" to "everyone should be able to use MS software".
Like most major tech companies, I think, they've had a mix of brilliant and horrid ideas.
- Their researchers are top-tier, but that research rarely makes it to light. (Shed a tear for Midori...)
- They share a great portion of responsibility for software of the '90s leaping off the complexity cliff, but are making efforts to tidy up the consequences to some degree.
- Azure is a bona fide mess, with documentation that rivals the worst of MSDN, and a strategy that boils down to 'lift and shift, but worse'. I'm still waiting for the upside of this one...
I agree. I think where MS has excelled the most is when they cut the red tape and go do their own thing. DirectX murdered OpenGL mostly because it could get new features enabled in months instead of years, and because they didn't have to care about how some standard was going to work on SGI/Sun/whatever hardware
The CLR was the same way: don't worry about backwards compatibility, just go write your own VM that's "good enough" but, because of stuff like value types, great C interop, and SIMD, can beat the pants off the JVM in some situations.
While companies are still figuring out how to get Vulkan working everywhere, DirectX 12 is already out and gives performance on par with Vulkan.
And the crazy stuff they've done recently, like porting MSSQL to Linux and shipping Docker containers of it, is just icing on the cake
> The language of the company went from "everyone should use Windows!" to "everyone should be able to use MS software".
This still hasn't been a pleasant experience for me.
Every year I give F# a shot on a Mac, and it is so painful to do anything practical with it.
Yeah. They've never invested in its tooling, and it's never worked anywhere I've tried it.
well that’s all super interesting. are there any laptops with mac-level build quality? i’m more than happy to pay for premium hardware, but last time i looked, everything was plastic garbage
@lee.justin.m Have you looked at the Microsoft Surface line? One of my colleagues has a Surface Book (1st ed) and it's beautiful -- the new Surface Book 2 is even more gorgeous.
As others have said, Dell is pretty solid -- I've had several over the years that have lasted really well. Even the old plastic ones were very well built. I have an XPS 12 (convertible) that's a bit long in the tooth but still runs beautifully after being carried all over the world and dropped and used as a drink coaster etc. I'm very hard on laptops!
As for Mac-quality, I've been increasingly disappointed with that over the last ten years. I've had several screen and HD failures recently -- and that was unheard of years ago. I definitely will not buy another Mac (and I've been an Apple user for 25 years!).
@seancorfield Thanks. I’m just so scarred from corporate-issue laptops that creak and squeak and have bad monitors. Or that don’t sleep/wake properly or recognize devices in a timely fashion. Or fail to connect to the wifi. I have thought about the surface. It’s a nice looking machine and I trust MS to spend more attention to detail than your average OEM. Apple is irritating, but on the other hand I have a 2012 era laptop that is serving me well. I had to get it serviced once (bad screen), but it was flat rate and handled with typical Apple level of service, which meant they replaced other stuff I didn’t ask them to because it failed their diagnostics (got a new battery for free). All flat rate.
I’m also a guy who simply appreciates the aesthetics and general all around good taste of apple
Yeah, like I say, 25 year veteran of Apple products but lately they've really lost their edge and we've had less-than-stellar experiences with Apple service techs too. We have a 2009 iMac with a screen that randomly switches off and it's been into Apple multiple times -- they've replaced all sorts of parts (including the motherboard!) and they still can't fix it. My last MBP became pretty much a brick due to screen failures. And their model updates since 2012 have been totally underwhelming.
The Surface line from MS is just breathtaking. I literally started to drool the first time I got my hands on a Surface Book and the massive Surface Studio left me speechless 🙂
Yea that’s disappointing and seems consistent with the general view of the world. I just haven’t been excited about non apple stuff in a long time, but maybe I should take a look at the surface
I run Windows 10 Pro on my iMac although, for reasons, I still do my dev work on the Mac side with Terminal and Atom/ProtoREPL.
For me, the turning point was touch screens. My XPS 12 is a touch screen and I love it. I am constantly frustrated that my iMac is not. And Apple have made it clear they have zero plans for a touch screen on a computer.
When pushed recently about the issue, they specifically said their view is that people who want touch functionality should buy an iPad Pro and use it as an external touch screen for their main Mac desktop!!! 👀
Well to be honest, that sounds about right. I was surprised because I basically never want to touch my laptop, but maybe I just haven’t experienced it.
(someone was just asking my opinion of the ThinkPad T470 and I didn't know what to tell them)
The ThinkPad X and T series are still OK. The other ThinkPad series are merely "as good as a Dell". IdeaPad and the rest are just regular notebooks
This makes me nervous as well. I went Win95->Linux->BSD->OSX and can’t imagine giving up the look-and-feel of my Retina MBP with Unix under the hood.
Even that's gone downhill for me lately. My 2017 MBP overheats constantly, can't deal with hot-swapping displays very well and is limited to 16GB of RAM
It's the age-old thing: it's not aesthetically nice to make room for fans, or to listen to them.
I honestly don't understand why everyone hates fans... I have a couple of rack servers in my bedroom running 24/7 and the noise feels rather calming.
There's fans running at constant hums, and then there's fans that oscillate between silent and jet plane taking off. That is the MBP experience.
Probably also depends on the fan drivers and how fussy they are about state changes
@justinlee we use the Dell Latitude line at work, cannot complain. But I never felt the need for that bling bling anyway. 1920x1080 resolution + 2 extra displays + an extra keyboard and mouse (aka docking station) is enough for me. On my machine (16 GB, i5) I can run two instances of Eclipse + like 5-7 servers (all Java of course, usually something built on top of Tomcat or Jetty). If it's needed I can also fire up YourKit additionally and start profiling.
Overheating is only a problem whenever that shit McAfee thing starts running, which we have to have installed. How I hate that.
Rich mentioned JVM vs CLR in his latest talk: the JVM came out of set-top boxes and is dynamic, where the CLR was static technology. How accurate is that?
the JVM is very dynamic (and type erasure was a fork in the road as well). the CLR is more static (and does not do type erasure afaik)
which intuitively might make you think the CLR is a 'better' VM, but in practice it makes implementing languages with type systems different from the underlying VM's (such as Clojure) much harder
yeah, instead they went the other way and made DLR on top of CLR for dynamic stuff :)
The CLR has stuff like the dynamic keyword though, which removes the need for a lot of the reflection logic
I think you could have a lisp like Clojure that ran quite well on the CLR, but it would require working with the VM in some specific ways and making the compiler more complex
Yes, but it's a port of Clojure on the JVM and the two platforms are quite different.
So in Clojure on the JVM you have the whole invokePrim thing, but that could all be reduced to a single generic interface, and expanded on-the-fly
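(A quick illustration of the invokePrim machinery, runnable at a JVM Clojure REPL:)

;; a primitive-hinted fn compiles to a class implementing
;; clojure.lang.IFn$LL (long -> long), so hinted call sites go
;; through invokePrim and skip boxing entirely
(defn twice ^long [^long x] (* 2 x))

(instance? clojure.lang.IFn$LL twice)
;; => true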
But the CLR doesn't have runtime profiling, so that puts more pressure on the compiler to write efficient code, and to do inlining on its own.
compilers and jits is such a deep and fascinating area, I’ve barely scratched the surface and I’m completely hooked 😄
That's what I loved about the CLR, it's way less magic than you would think. It doesn't have a warmup time at all. It goes straight from bytecode to machine code once. You can even pre-compile bytecode and cache it. The GC is really light, a GUI app in C# often uses < 10MB ram. and the C interop is pretty awesome.
I finally got time to look into it recently. It's pretty cool tech. A bit strange, as it involves a lot of boilerplate code, and they try to reduce that with code generation and DSLs, and it's all quite undocumented.
But it's a cool idea: you execute an AST, and throw exceptions when an assumption about the code doesn't hold, and then de-optimize. Once the AST stops changing it's compiled to a single method/code block.
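(A toy Clojure analogy of that speculate-then-deoptimize pattern; nothing Truffle-specific, just the shape of the idea:)

(defn speculating [fast-path generic-path]
  ;; assume the common case (here: a Long argument) and take the fast
  ;; path; fall back to the generic path, i.e. "deoptimize", when the
  ;; assumption doesn't hold
  (fn [x]
    (if (instance? Long x)
      (fast-path (long x))
      (generic-path x))))

;; usage: unboxed doubling when possible, generic arithmetic otherwise
(def twice* (speculating (fn [^long n] (* 2 n))
                         (fn [x] (*' 2 x))))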
Still a bit memory heavy though. Ruby in C uses about 10MB by default, Truffle-Ruby is about 170MB
I've been thinking about automatic primitive specialization in Clojure, if that is even possible
@schmee yeah, I think it would work quite well. But it would mean a complete rewrite of most of clojure.lang.* Especially RT.java.
Every method overload in RT would expand into its own class, with logic to switch between the impls.
That's what made me stop digging into it more. Truffle is just so verbose. And since Truffle uses abstract classes and annotations, writing such a thing in Clojure is not possible. Unless of course you start with modifying Clojure 😄
> And since Truffle uses abstract classes and annotations
I don't understand why this is a problem though. Is it because Clojure constructs already inherit from something else?
No. Correct me here @alexmiller: what's the best workflow for doing a lot of gen-class? Is that even possible in the REPL, or do you have to restart the JVM? It's been so long.
the best workflow is to write a lot of code, then compile it, and hope it was all right
if https://dev.clojure.org/jira/browse/CLJ-2343 gets merged it should work in the REPL as well, right?
should. and yeah, I got him to create that patch when I was playing around with Truffle 😄
Right, and to somehow build a DSL for it, since you'd have to be really careful not to pull in a lot of Clojure code in the process.
So it would probably involve a lot of definterface as well so you could get proper typing of the arguments
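(Roughly what that could look like; the interface and type names here are made up for illustration:)

;; definterface produces a real JVM interface with primitive-typed
;; method signatures, which is what a Truffle-style node would need
(definterface LongNode
  (^long executeLong [^long input]))

(deftype DoubleIt []
  LongNode
  (executeLong [_ input] (* 2 input)))

;; (.executeLong (DoubleIt.) 21) ;=> 42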
(props to @cfleming for this patch, he paid for the development and was kind enough to allow for open sourcing it)
But it would be a fun project, get the framework up and running, then just do a straight port of Truffle's SimpleLanguage to the framework
yep truffleclojure would be fun, I wouldn't mind helping out from time to time if somebody wants to tackle it and drive the main development
Also thanks to @bronsa for actually writing the patch, that thing has saved my sanity
it should support annotations, as it builds on the deftype/reify codebase and IIRC they do
In case anyone worries about stability, everyone using Cursive has been running it for a couple of years now.
This is linked in the JIRA, but the original full rationale is here: https://docs.google.com/document/d/1OcewjSpxmeFRQ3TizcaRRwlV34T8wl4wVED138FHFFE/edit
people that complain about shit being closed source probably have never struggled to make a living
(extend-class JPanel
  (paintComponent [this g]
    (.paintComponent ^JPanel this ^Graphics g)
    (.paint status-text this g)))
@cfleming ahh, my apologies, I didn’t know it wasn’t open source! I did not mean to start a discussion about open vs closed source 🙂
Hehe, I don’t think there’s a real discussion about it taking place, it’s more ironic commentary 🙂
(defclass ClojureParagraphFillHandler
  :load-ns true
  ParagraphFillHandler
  (isAvailableForElement [_ element]
    (boolean (psi/string? (if (psi/leaf? element)
                            (psi/parent element)
                            element))))
  (isAvailableForFile [_ psiFile]
    (instance? ClojureFile psiFile))
  (performOnElement [this element editor]
    (fill-paragraph element editor)))
unless @schmee wants to practice his clojure compiler/jvm bytecode skills before starting truffleclojure ;)
There’s one thing that has been tricky with this, which may or may not be a problem for those using it.
I’m in so far over my head here it’s not even funny, but that tends to be a good way to learn lots of things fast
@schmee I think I've said it before but I really like the enthusiasm you're putting into this so feel free to ping me anytime if you want to ask something
In particular, if your classes are loaded outside your control (e.g. from an IoC container, which is my case) you’re probably SOL
@cfleming would you solve this on a single-class loading basis or at a different granularity?
@bronsa thanks, I really appreciate it 🙂 your help has been invaluable in understanding the compiler and the JVM in general, so hats off to you good sir
My plan was very hacky - to modify RT.baseLoader() to check a global static var or atom before doing the usual checks.
Then I can set that to my plugin classloader at app start and just not worry about it.
So I'll just have a public static volatile ClassLoader clojureLoader = null; or something similar, which I'll set when my plugin is loaded.
In my case, it's when using :load-ns true - that fails if the classloader is incorrect
So this is when you have a class which calls other functions from the namespace in which it’s defined.
That option will load the ns in the class’s static init, but if the classloader is incorrect it can’t find the ns to load.
@bronsa I don’t, but it’s trivial to repro - just load a class defined like that from a classloader without access to the right classpath.
The issue is that then I’ll get multiple copies of the code loaded, depending on the classloader used to load each class.
I think so, at least in my case where I want everything loaded via the plugin’s classloader. I’m actually not sure what the implications would be in a more general case of loading code into various different classloaders. It sounds like it would end badly, but I don’t have a specific example of why 🙂
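(For reference, one unpatched workaround in this area, sketched here rather than taken from Cursive's actual code, is to lean on the context-classloader fallback that RT.baseLoader() already has:)

;; RT.baseLoader() falls back to the thread's context classloader when
;; *use-context-classloader* is true (the default), so pointing that at
;; the plugin's loader before loading code can avoid the missing-ns
;; failure; plugin-class-loader is hypothetical here
(.setContextClassLoader (Thread/currentThread) plugin-class-loader)
(require 'my.plugin.core)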
https://docs.gradle.org/current/userguide/multi_project_builds.html Is there any feedback from gradle users on how multi project builds work? I want this feature for myself.
@bronsa @schmee well that wasn't as bad as I expected: https://gist.github.com/halgari/f03ff8d75bde40903852a1b15dbe61cf
So the next step is to try and port a simple language of some sort and see where the duplicated code is that could be removed via a dsl
@tbaldridge if you do spend time on this, please, let me know what you’re up to somehow (github or something else)! 🙂
Will do, I'll probably start by porting this code: http://cesquivias.github.io/blog/2014/10/13/writing-a-language-in-truffle-part-1-a-simple-slow-interpreter/
let's have at it for a while and if it seems that something might come out of it we can maybe make a joint effort 🙂
for sure, we've talked about doing this for a while, but the whole gen-class stuff was always a turnoff
At least a year ago there was a challenge with entering and exiting the Truffle/Graal interpreter at the boundary with regular JVM code
SubstrateVM isn't a good option for Clojure, because of dynamic loading being incompatible philosophically
Otherwise if you had a clojure-like language with less dynamicity, SubstrateVM could work
so all the stuff in core, rt, etc would be frozen, but everything else could be dynamic
the reify boundary thing is real... need to be able to reify something and pass it to Java
So the reify interface impl would have little stubs that call the Root Node of the Truffle AST
@schmee This is totally self-interested, but if you’re interested in a compiler project that would actually get used, brushing up tools.emitter.jvm would be awesome.
I contacted the insn author a few months ago about getting insn into contrib & switching t.e.jvm over to it
But I would absolutely use that. That would mean that defclass/extend-class could just be macros, I guess?
there's a ton of work to do:
- making sure it all runs using the latest t.a.jvm backend
- work on supporting AOT
- implement all the compiler enhancements of the past 4 years
- move t.e.jvm from the shitty ad-hoc symbolic bytecode I wrote to insn as a first step
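(To give a flavour of insn: it describes classes as plain data. A minimal sketch along the lines of its README; treat the exact keys as an assumption:)

(require '[insn.core :as insn])

;; a class described as data: one public static method (int, int) -> int
(def adder
  {:name 'demo.Adder
   :methods [{:flags #{:public :static}, :name "add"
              :desc [:int :int :int]
              :emit [[:iload 0]
                     [:iload 1]
                     [:iadd]
                     [:ireturn]]}]})

;; compile and load the class, then call it like any other
(insn/define adder)
;; (demo.Adder/add 1 2) ;=> 3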
@bronsa Let me know if this is a crazy idea, but once you have something like insn, it seems like macroexpansion could even bottom out at the bytecode data structures, so even things like if could be macros. Is that possible/desirable?
some special forms could be macros I guess, but there might be bootstrapping issues I've never thought about it
with the t.a/t.e architecture the whole language is already extensible/hackable anyway
I don't remember what nasser called it, but he architected it so that the emitter would have a context map of special form -> emit-fn
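(That design might look something like this sketch; everything here is hypothetical, just illustrating an emitter keyed per special form:)

(declare emit)

;; the emitter consults a map of special form -> emit fn, so individual
;; forms can be overridden or extended without touching the core
(def emitters
  {'if (fn [[_ test then else]]
         (concat (emit test)
                 [[:ifeq :L-else]]
                 (emit then)
                 [[:goto :L-end] [:label :L-else]]
                 (emit else)
                 [[:label :L-end]]))})

(defn emit [form]
  (if-let [f (when (seq? form) (emitters (first form)))]
    (f form)
    [[:ldc form]])) ;; toy fallback: treat anything else as a constant

;; (emit '(if x 1 2)) returns symbolic bytecode as plain data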