#uncomplicate
2017-10-17
jcf15:10:59

Hello all. Anyone come across this issue loading libcublas?

Stack trace from the attempt to load the library as a resource:
java.lang.UnsatisfiedLinkError: /tmp/libJCublas2-0.8.0-linux-x86_64.so: libcublas.so.8.0: cannot open shared object file: No such file or directory
	at java.lang.ClassLoader$NativeLibrary.load(Native Method)
	at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
	at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
	at java.lang.Runtime.load0(Runtime.java:809)
	at java.lang.System.load(System.java:1086)
	at jcuda.LibUtils.loadLibraryResource(LibUtils.java:260)
	at jcuda.LibUtils.loadLibrary(LibUtils.java:158)
	at jcuda.jcublas.JCublas2.initialize(JCublas2.java:81)
	at jcuda.jcublas.JCublas2.<clinit>(JCublas2.java:66)
	at uncomplicate.neanderthal.internal.device.cublas$eval51203.invokeStatic(cublas.clj:764)

jcf15:10:22

I've got cuda installed to /opt/cuda, and can see libcublas.so in my LD path.

jcf15:10:52

Oh, wait. I have version 9 and this is looking for version 8!

jcf15:10:05

ldconfig -p | rg libcublas
	libcublas.so.9.0 (libc6,x86-64) => /opt/cuda/lib64/libcublas.so.9.0
	libcublas.so (libc6,x86-64) => /opt/cuda/lib64/libcublas.so

blueberry17:10:49

@jcf You are right. It currently requires CUDA 8 (downgrade helps if you've already updated to 9).

blueberry17:10:08

It'll soon be upgraded to CUDA 9.

whilo19:10:33

@blueberry I don't know if you have seen it, but I am discussing autograd and neanderthal with the cortex guys: https://groups.google.com/forum/#!topic/clojure-cortex/ba4eVXT8DMM

whilo19:10:20

I would be interested in whether you think that a generalized buffer-management DSL like the one Cortex builds has advantages over Neanderthal's direct API mapping. One can do ahead-of-time optimizations and compilation with a data description (AST) of tensor operations. I don't see this as competing with Neanderthal, because it provides the low-level primitives. But I might be missing some subtleties.

whilo20:10:16

And I am curious about your opinion on buffer management in the special-purpose low-level matrix formats that are actually used by most higher-level frameworks.

whilo20:10:48

I think the plurality of deep learning frameworks is obfuscating the fact that they all use these low-level APIs.

whilo20:10:07

BLAS in the form of MKL, OpenCL or CUDA, and additional ones like cuDNN.

blueberry20:10:38

One interesting thing with that is that it is (was for me) impossible to find any comparison with any other implementation, either performance-wise or in ease of use. I am interested in Cortex and follow what's happening from time to time. I never saw any comparison of Cortex with any of the leading libs (TF, Caffe, Pytorch etc.), or even with dl4j, that says "in this and this example, Cortex achieved this speed compared to library X", or "we showed that Cortex requires less configuration", or whatever. There are some blog posts, but these are more like "you can build this example model with Cortex" and nothing more. So, the only thing interesting to me now is that it is a Clojure-centric library. That is awesome, but only if I can leverage it to build custom, or not-strictly-NN, things with it. If it is aimed at users who need a packaged ready-to-use solution where you provide some configuration map, and the lib figures out some of the built-in stuff, and that's it, then why would I use it instead of TF? I would like to know, but until now I haven't found a compelling answer. Of course, there's nothing wrong with that; ThinkTopic earns money using Cortex, and they are happy with it...

blueberry20:10:39

The thing with that is that they use something akin to NDArray. This is basically an N-dimensional dense cube.

blueberry20:10:03

This is fine if you only need it for "standard" NNs. But I am not sure what kind of "generalized buffer management" that refers to. They reshape a hypercube from one dimension to the other, but the structure is always dense, without any other automatic property. How do you specify an (optimized) symmetric 2-D array there? What is the performance of linear algebra operations?

whilo20:10:04

Yes, I see this problem as well.

blueberry20:10:18

In your particular case, since you need to experiment with a novel NN method, how can you (re)use Cortex there? I have no idea.

whilo20:10:44

A question I have is whether it is possible to transform (reshape) these arrays through the low-level APIs, and how OpenCL access to the buffers works in that context. Is it possible, for example, to directly turn a triangular matrix into an equivalent dense one without unnecessary copying?

blueberry20:10:24

In Neanderthal, of course, provided that you want a dense triangular matrix (TR):

blueberry20:10:40

(def a (fge 5 5 (range 1 26)))
(view-tr a)

blueberry20:10:13

that is, for tr->ge

blueberry20:10:33

(def a (ftr 5 5 (range 1 26)))
(view-ge a)
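
A minimal runnable version of the snippets above, assuming the view functions live in uncomplicate.neanderthal.core and the native factories in uncomplicate.neanderthal.native (as in recent Neanderthal versions):

(require '[uncomplicate.neanderthal.core :refer [view-ge view-tr]]
         '[uncomplicate.neanderthal.native :refer [fge]])

(def a (fge 5 5 (range 1 26)))  ;; 5x5 general dense (GE) matrix
(view-tr a)                     ;; triangular (TR) view over the same buffer, no copying
(view-ge (view-tr a))           ;; and back to a GE view, still no copying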

blueberry20:10:29

The question with the NDarray approach is: can it describe anything other than a general dense matrix (in the case of dim = 2)?

blueberry20:10:54

As for your own opencl or cuda kernels: You get the raw buffer. Accessing the structure inside the kernel is completely up to you.

blueberry20:10:26

Of course, with either of the libraries, you can take the raw buffer structure and do whatever you please with its contents...

whilo21:10:15

Yes. But the question is whether I am doing stupid memory copying or accesses from the low-level perspective. I can describe all kinds of high-level tensor operations, like reshaping for example. But if they result in copying memory or inefficient access to the elements in tensor operations, then that is a clear problem that cannot be abstracted away.

whilo21:10:44

I understand Neanderthal as focusing on avoiding exactly this problem of all the generalizing tensor APIs, e.g. an ndarray lib.

whilo21:10:19

I still think these operations should be supported, but it has to be possible to opt out. This is only possible if the higher-level abstractions are built out of the lower ones. That is why I think a stack on Neanderthal might be a much better toolbox for optimization pipelines than defining some high-level API which leaves the rest of the mapping to low-level primitives to external, opaque AOT compilation pipelines.

whilo21:10:05

I understand that these pipelines represent significant engineering effort.

whilo21:10:34

Yet my current experience tells me that this is not really that relevant for deep learning in large scale environments.

whilo21:10:50

I have problems with the Python runtime as a deployment and data processing environment (slow, hacky and a lot of accumulated debt + lack of enterprise support). But not with pytorch.

whilo21:10:38

This is a very important insight, I think. Up until pytorch I bought the common wisdom that TensorFlow, Theano or Mxnet are necessary as a high-level API interface to a middleware doing all the stuff for you.

whilo21:10:36

Pytorch only provides autograd and the necessary low-level ops to execute tensor operations efficiently (i.e. without unnecessary copying), yet it proves competitive for almost all the people I currently talk to, who train models basically all day.

whilo21:10:17

The argument that industry needs larger-scale deployments involves the data-processing pipeline, deployment and parallelization.

whilo21:10:44

The first two are a problem of Python, but the latter has been successfully tackled with pytorch.

blueberry21:10:31

I agree, and this is roughly the idea that I pursue with neanderthal: provide different layers that you can use directly by hand, or/and build higher-level magic on top of.

blueberry21:10:19

And each layer adds as little complexity as possible, while being as automatic as desired (but not more)

blueberry21:10:32

And I am yet to find a clean presentation of a simple tensor operation in these libraries. More often than not, the examples revolve exclusively around computation graphs. I'd like to see a nice description of how I can create some tensors and do tensor contraction without much fuss...

whilo21:10:53

What do you mean with tensor contraction?

blueberry21:10:27

a tensor equivalent of matrix multiplication https://en.wikipedia.org/wiki/Tensor_contraction
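
For concreteness, the textbook case: contracting a 3-index tensor A with a matrix B over their shared index j gives

C_{ikl} = \sum_j A_{ijk} B_{jl}

with ordinary matrix multiplication, C_{ik} = \sum_j A_{ij} B_{jk}, as the two-index special case.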

whilo21:10:31

Something needed for deep-learning is efficient broadcasting, so (mini-)batches of data can be quickly sent through matrix multiplication. I am not sure how to do this best from a low-level perspective.
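
One common low-level answer (a sketch, not a claim about what any particular framework does): lay the minibatch out as the rows of one matrix, so the whole batch goes through a single GEMM. In Neanderthal terms, with hypothetical sizes (batch 128, 784 inputs, 300 outputs):

(require '[uncomplicate.neanderthal.core :refer [mm]]
         '[uncomplicate.neanderthal.native :refer [fge]])

(def x (fge 128 784))  ;; one row per example in the minibatch
(def w (fge 784 300))  ;; weights of a dense layer
(def y (mm x w))       ;; the whole batch in one 128x300 matrix multiply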

blueberry21:10:26

That's the thing. All talk is about tensor this tensor that, but underneath the surface, everyone is working with graphs and matrices.

blueberry21:10:16

I do not claim (or even say) that they should do this with tensor contractions.

blueberry21:10:36

They probably do this quite optimally, especially with cuDNN.

blueberry21:10:54

I'm just not sure that it has to do that much with tensors proper

blueberry21:10:24

Or "tensor" there is just a convenient way to describe 4-dimensional layers of matrices, because images happen to be conveniently described by m x n x 3 cubes, stacked into 4-dim cube in memory.

blueberry21:10:56

Which I am sure on some level is equivalent to tensors, but I am not sure helps in a general way. What if you need 6-dim tensor, for whatever reason. Would any of those libraries help you? (A genuine question. I don't know the answer.)

whilo21:10:24

Me neither.

whilo21:10:42

I know that they have to use the low-level BLAS primitives for optimal performance.

blueberry21:10:55

I don't even claim that 6-dim tensor is a particularly useful thing...

whilo21:10:55

I don't think they reimplement the actual matrix multiplications.

whilo21:10:08

Well, it probably can be.

blueberry21:10:24

BLAS is vector/matrix all the way down. Nothing to do with tensors or ND-arrays

whilo21:10:35

This kind of stacking can be helpful, e.g. to represent embeddings of matrices, e.g. for binary relations.

whilo21:10:04

So tensor contraction is best implemented on top of these primitives, not alongside them.

blueberry21:10:15

But I think that cuDNN is not BLAS, but a bunch of specific 4-dim ND operations optimized for images

whilo21:10:27

Yes, I think so, too.

whilo21:10:49

My OpenCL read-up on convolutions basically concluded that they are far superior on their own hardware.

whilo21:10:56

I mean cuDNN.

whilo21:10:05

This is the reality.

blueberry21:10:23

The trouble is that general n-dim tensor contraction suffers from the curse of dimensionality - it's O(n^d)
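
To put a number on it: even just storing a dense tensor with n entries per mode and d modes takes n^d values, so n = 100 and d = 6 already means 100^6 = 10^12 elements before any contraction work is done.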

whilo21:10:29

So you have to use these kind of supplied native bindings which represent your interface to the hardware.

blueberry21:10:58

That's correct

whilo21:10:29

Ok, I am not sure about whether actual contraction is what people do in minibatch optimization.

blueberry21:10:01

For any kind of vectorized operation you have to use hardware-optimized primitives and that's it. We can daydream about the niceties of clojure, but one look at the benchmarks shows otherwise...

blueberry21:10:07

Maybe it is a contraction (or not), but the point is that it is not any kind of general contraction operation, nor does anyone use it in a general way.

blueberry21:10:01

It is a specialized thing for a specialized purpose (NNs optimized for images and similar signals).

blueberry21:10:51

Where Clojure comes handy is composing those well-defined lower level operations dynamically.

blueberry21:10:09

And interactively!

whilo21:10:32

I agree. One of the objections is valid though: whether Clojure can actually be attractive to researchers.

whilo21:10:44

Julia is a very strong contender, I would say.

whilo21:10:36

It still blows anything else out of the water when it comes to data processing in my experience, but this is not so important for researchers.

whilo21:10:58

My current pytorch script has one nested loop (3-levels) and a bit of model descriptions, written down without too much higher-level abstractions, because it basically runs from top to bottom like a shell script with global state.

blueberry21:10:08

I think Clojure fills a nice spot between being good enough for experimenting, while being easy enough to integrate in production.

whilo21:10:33

I agree, but we also have to convince researchers, because they will fill in the gaps in the tooling over time.

whilo21:10:46

They might not be good at the core engineering, but a community is important.

blueberry21:10:12

Sure, Julia might be great for algorithmic tinkering (but I don't see it as much different or better than Clojure there for my needs), but then -> how do you use it in production?

whilo21:10:41

Point taken, I agree absolutely. But for researchers this point is very unimportant.

whilo21:10:47

At least superficially.

whilo21:10:09

If the production environment provides them with better tools to organize their experiments, then it is important.

blueberry21:10:23

That's fine. But what can I do about that? Probably nothing much...

whilo21:10:40

I still think that plotting in Clojure, for example, is not as easy as matplotlib or ggplot in R.

whilo21:10:53

I use plotly, which is ok, but it requires a web view.

blueberry21:10:01

Yeah. Currently a huge empty spot.

blueberry21:10:43

I mean, there are a million options for basic plotting on the JVM.

whilo21:10:48

No, you can't. I am not proposing to implement anything yet. I just think that we need to give them an excuse to do Clojure. Some will like it, but it will need to have a plausible horizon for research as well.

blueberry21:10:58

But nothing automatic like ggplot or matplotlib

whilo21:10:10

Yes, but I have none so far that is good enough for high quality paper plots.

whilo21:10:17

Maybe plotly is.

whilo21:10:33

I haven't tried hard enough yet, it is fairly well done and a long-term project with commercial backing.

whilo21:10:53

ClojureScript is also an asset, I would say.

whilo21:10:13

But this is not obvious to people wanting to do optimization experiments.

whilo21:10:33

They expect something matlab like to start with.

whilo21:10:29

Incanter tried it for R users, but I think it was way too large a chunk of work to swallow at once, and it has not yielded composable libraries.

blueberry21:10:23

I think ClojureScript is great for building a "presentation server" for these plots (possibly using plotly). Provide a good generic interface from the Clojure REPL and that's it. I think that's the way @hswick's library works, and I think it is a good approach.

whilo21:10:46

I think at least it would be important that there is a set of libraries that play well together in general. Friction is a show stopper in my experience. numpy for example standardized the basic linear algebra in Python so that it became reasonable to use for numerical optimization.

blueberry21:10:42

I still think that trying to provide an X-like experience to win over the users of X is something that will not work.

whilo21:10:06

But doing the things right that X did, might be crucial.

blueberry21:10:13

Because people who prefer the X experience will use X

whilo21:10:28

I think composability is a key experience in Clojure.

whilo21:10:45

This cannot be done if X is copied.

blueberry21:10:21

Maybe there is a better Clojure way. Provide the best Clojure experience, so people who prefer Clojure (us) can do things they need, not some imaginary users that we are trying to convert. That's at least how I look at it.

blueberry21:10:52

I create this for me and people who find this approach useful. Not for someone else who might prefer something else.

whilo21:10:13

Well, I am not imaginary 😂

blueberry21:10:35

You do not need to be converted 🙂

whilo21:10:07

But I think the people in my environment are not stupid, they have reasons for their choices and they are flexible. If you can show something better, they prefer it.

whilo21:10:42

A friend of mine, actually a pretty smart mathematician and Bayesian machine learning guy, has already read SICP and is curious about Clojure in general.

blueberry21:10:44

Then show them something better! I agree that's the best strategy.

whilo21:10:58

But I couldn't recommend doing the kind of things he does in Clojure.

blueberry22:10:10

However, in such cases I always remember this (alleged) quote of Henry Ford: "If I'd asked people what they need, they'd say a better horse cart."

whilo22:10:17

The production kind of arguments against the researcher attitude also do not necessarily help, I think. Clojure might be better at deployment, but this often sounds as if researchers are not real programmers.

whilo22:10:03

They might not be that good at systems engineering, but these kinds of real-world arguments are really not the way to win people over.

whilo22:10:18

Providing low-level libraries and compositions on top of them is.

blueberry22:10:44

I understand, and agree, but as I've said, I don't see how I could do anything about that.

whilo22:10:04

Sure, I just needed someone experienced to reason with 🙂

whilo22:10:41

Oh, it is late 😂

blueberry22:10:42

Thankfully, my main goal is not to win over new users for Clojure, but to create tools that I need and like. If some other people find them useful too, that's great, but I won't beat my head over it too much.

whilo22:10:00

For the one 3d tensor network I used, I basically need to first do matrix multiplication along one axis and then along the other. So one has to be able to shift this view on the axis, I think, without doing stupid things.

whilo22:10:31

I agree, this is a good approach.

whilo22:10:04

In the longer run it is helpful if a few people work together, though, I think. So sharing some common problems to solve seems important to me.

blueberry22:10:18

I agree completely

whilo22:10:26

If I pursue the autograd stuff in Clojure, which is strictly necessary for anything more I will do, I need to get these low-level memory ops right.

whilo22:10:52

You do not plan to wrap cuDNN, do you?

blueberry22:10:27

However, not a priority, since I do not need it for any Bayesian stuff, which is my main interest.

blueberry22:10:55

So it might be some time before I do it.

blueberry22:10:26

So, I do plan to connect to cudnn

blueberry22:10:57

And also provide a fast CPU-oriented implementation for tensors

blueberry22:10:48

But, realistically, that won't happen in 2017

chrjs22:10:26

Hey both, Don't mean to butt in, but thought I'd say that I'm following this conversation with interest and nodding often. I long for tape based autograd in Clojure! I'm glad for the work that people are doing on Cortex, but while it's essentially NN abstractions only, it's not that interesting to me personally. To my shame, I haven't made time to give Neanderthal a proper try yet, though I'm soon rewriting a system where I intend to and I expect the speed gains to be substantial. That thread you pointed out @whilo was very interesting, though I'll have to re-read in the morning. I'll definitely take a look at your clj-autograd library for interest too.

blueberry22:10:07

@chrjs Welcome, Chris 🙂

chrjs22:10:57

Also, the progress of Neanderthal recently has been staggering (even if I am not using it, I've seen the change logs). Thanks for your output Dragan.

whilo22:10:29

@chrjs Hi 🙂

chrjs22:10:50

👋

whilo22:10:36

I don't know exactly how to take apart all the parts of the thread. I wanted to reply this evening, but there are many arguments somehow interleaved and it is challenging to separate the reasonable ideas from the false assumptions.

whilo22:10:30

@blueberry Is it possible to take a 3d tensor, first apply matrix multiplications in parallel along one axis, then rotate (transpose) the result and multiply matrices along another axis?

whilo22:10:38

How problematic is the transpose?

blueberry22:10:55

In neanderthal?

blueberry22:10:09

Neanderthal currently does not support tensors.

whilo22:10:42

I know, but I mean with the primitives that BLAS (neanderthal) provides, is this doable efficiently?

whilo22:10:57

I guess that this changes row major to column major mode at least.

blueberry22:10:05

You'd have to "simulate" it with a large matrix and it's submatrices, and I think it is possible, but you'd have to investigate it in detail.

blueberry22:10:22

If it is possible to do, it will be efficient.

whilo22:10:10

The tensorflow post basically just emphasizes Python deployment problems, which are non-problems on the JVM.

whilo22:10:42

A jar-file you built today against native interfaces will work in 10 years if you can still get a compatible low-level library.

blueberry22:10:06

Transpose in neanderthal is O(1), so if you just change col/row without actually reordering elements in memory, it would not require any copying.

whilo22:10:14

The latter is independent of the framework. A framework might be able to abstract it away, but so can Neanderthal, or a compute-graph description on top of it.

blueberry22:10:54

If you actually have to physically rearrange elements, you'd use (trans!), which is fast but not O(1)

whilo22:10:41

I guess if you multiply once, transposing beforehand is slower.

whilo22:10:08

Or is it efficient for both row and column-major modes?

blueberry22:10:38

It transparently supports both major modes, even mixing them, without penalty or rearrangements.

blueberry22:10:09

Which you can test by simply creating a column matrix and a row matrix and multiplying them transparently.
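
A minimal sketch along those lines, assuming the core trans and mm functions and the native fge factory: trans is an O(1) view flip, so the multiplication mixes a row-major view with a column-major matrix without any copying.

(require '[uncomplicate.neanderthal.core :refer [mm trans]]
         '[uncomplicate.neanderthal.native :refer [fge]])

(def a (fge 3 2 (range 6)))   ;; column-major 3x2
(def b (fge 3 4 (range 12)))  ;; column-major 3x4
(mm (trans a) b)              ;; (trans a) is a 2x3 row-major view, no copying; result is 2x4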

whilo22:10:10

Python is crazy with respect to serialization. You have to pickle everything, which is subject to the exact versioning of things in your current runtime.

whilo22:10:36

I see, ok.

chrjs22:10:10

Having been there, Python deployment can really be a pain, especially as it compares to Clojure.

chrjs22:10:08

Iโ€™m no great fan of the JVM in general, but not having to set up virtual envs everywhere and ensure consistency is a genuine win, as far as Iโ€™m concerned.

chrjs22:10:51

I agree that Hackernoon article complects python deployment problems with library problems.

chrjs22:10:07

> If it is possible to do, it will be efficient.
That's a very powerful feature in itself.

whilo22:10:32

So I could do a matrix A, tensor B, matrix C multiply, with A: k x l, B: l x m x n, C: n x o, by first taking the m x n block l times and doing matrix multiplies with C, and then taking the resulting l x m x o block and multiplying it o times with A to get a k x m x o result. The crucial thing is that the intermediary result needs to be "transposed".

whilo22:10:46

I know that I could unroll the tensor, but this will yield a block diagonal matrix, which is very sparse.

blueberry22:10:44

not directly (yet), since there is no such thing as tensor B in neanderthal. You'd have to decide how you simulate that tensor B. I suppose as a big l x n matrix that you take m submatrices from (without copying). The submatrix function allows stride, so I think you'd also be able to take the right kind of submatrices, but I'd have to work out the details to see whether this simulates that tensor B.

whilo22:10:22

Will this big matrix take l x n amount of memory?

whilo22:10:59

Well I am confused, it has to be larger than l x n, at least l x m x n

blueberry22:10:15

If it is dense - yes. But you also have other matrix types in neanderthal. Whether they can simulate that tensor B is something that you have to work out and see.

whilo22:10:18

You probably mean l x no

blueberry22:10:40

Yes, l x m x n. My typo.

blueberry22:10:18

You'd decide whether it's l x (m n) or (l m) x n
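
A sketch of the (l m) x n layout with hypothetical small dimensions, using submatrix to view the i-th m x n slice of B without copying (assuming the usual core and native namespaces):

(require '[uncomplicate.neanderthal.core :refer [mm submatrix]]
         '[uncomplicate.neanderthal.native :refer [fge]])

(def l 4) (def m 3) (def n 2) (def o 5)        ;; hypothetical sizes
(def big-b (fge (* l m) n (range (* l m n))))  ;; B (l x m x n) stored as an (l m) x n matrix
(defn b-slice [i]                              ;; i-th m x n slice of B: a view, no copying
  (submatrix big-b (* i m) 0 m n))
(def c (fge n o (range (* n o))))              ;; the n x o matrix C
(mm (b-slice 0) c)                             ;; m x o product for the first slice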

whilo22:10:21

@chrjs what have you done in python?

whilo22:10:07

@blueberry right, that should work out for one side fine at least.

blueberry22:10:10

But other (sparser) combinations may be possible

whilo22:10:34

Sparse multiply with dense blocks is probably not as efficient as the dense blocks themselves, right?

blueberry22:10:43

But it might also work for the other side, since submatrices can be sparse!

whilo22:10:44

Sorry to ask you these n00b questions.

blueberry22:10:36

The sparser the matrix, the fewer cache hits, of course. But it might not be that much slower. The best way is to test with the dimensions you have.

blueberry22:10:36

For matrices that fit into cache, the hit might be negligible...

chrjs22:10:52

(@whilo, I was writing machine learning systems in Python for a startup for a couple of years)

whilo22:10:58

Ok, I have to digest that a bit.

whilo22:10:02

@chrjs cool, what kind of systems?

chrjs22:10:52

I work for a startup that predicts box office returns for films. We have now moved to Clojure and a simulation-based methodology, but my personal interests still lie in (mostly Bayesian/generative models) ML.

chrjs22:10:41

Lots of things changed with the move to Clojure. I do miss the general data ecosystem from Python, but not much else.

chrjs22:10:29

Clojure is a much better platform for writing software systems in general I think.

whilo22:10:38

You mean the scientific computing ecosystem?

whilo22:10:22

Absolutely, Clojure is really difficult to beat atm. I tried other things a few times over the last few years, but they are all seriously a step backward.

whilo22:10:38

I mean Julia for example.

chrjs22:10:39

I know, it's ruined me for other languages.

whilo23:10:00

It is cool to run high-perf tight loops written in optimization code, but not much else.

whilo23:10:13

Hehe, yes, that is my impression as well.

whilo23:10:25

My bar for their frustrations is very low.

whilo23:10:26

I have not always made friends with this attitude, though 😂

chrjs23:10:48

I am going to sleep, but I will definitely be around. It seems like the Clojure ML/scientific computing in general scene is approaching a critical mass.

chrjs23:10:02

Soon we will maybe even be able to call it a community.

whilo23:10:46

I think it is important to get a few key concepts useable enough for practical purposes and simple enough so they stay composable, then it could actually be interesting.

chrjs23:10:10

That is the dream!

whilo23:10:32

Just solving some top-level problem is way too much effort and does not yield reusability.

chrjs23:10:28

To my mind, I'd rather have many composable libraries that each do one thing.

whilo23:10:55

The biggest thing missing for autograd, besides the perf optimizations to not copy memory, is convolutions for me. So if cuDNN were available, I should be able to hack some conv2d layer together.

whilo23:10:08

Yes, me, too.

whilo23:10:53

But for optimization the interfaces need to be efficient and performance needs to be considered upfront. Every bit that you lose will be painful for many potential users in the long run.

chrjs23:10:50

So for instance, autograd would not have to be part of a general tensor (or just vectors and matrices, sticking closer to the hardware) library. But I agree, the performance trade-offs of composability need to be considered carefully.

whilo23:10:00

That is the problem: you can establish something that is usable for 50% of people, but that will never work out for 90% of your audience. Usually the latter part is the one that would also contribute and make your library more attractive to more users, because they use it so heavily.

chrjs23:10:11

I mean, if we really want to attract people from ML to the Clojure ecosystem, we need to win a Kaggle competition with Neanderthal. That should do it.

chrjs23:10:41

But I think there are already some people who use Clojure but then reach for Python to do scientific computing.

chrjs23:10:03

That is the real goal. Provide good tools. Those who find them useful will use them.

whilo23:10:33

Yes, keep the people with Clojure who currently go elsewhere because they must.

blueberry23:10:51

Just to point out that Neanderthal is only a vector/matrix/linear algebra library. We might win a Kaggle competition (or fail miserably) with an ML library built on top of Neanderthal 😉 I agree that showing actual results is the right way to get attention, not general talk about how Clojure is a super nice language.

chrjs23:10:59

I know that, but since we are in the uncomplicate channel, I thought I should mention the library 😉

whilo23:10:00

Agreed. But even easier is to keep people who now have to move elsewhere, but would like to stay.

whilo23:10:29

The question is who are they and why are they leaving.

whilo23:10:01

deep learning is obviously one field

chrjs23:10:23

I have some thoughts on that, but I gotta sleep. Talk to y'all soon.

whilo23:10:29

I think they could build the deployment stuff themselves actually.

whilo23:10:46

I also should go to bed.

blueberry23:10:00

Good night 🙂

whilo23:10:53

Good night 🙂