Eric Ihli 03:10:44

I'm running inf-clojure with (inf-clojure "boot repl"). I get an error.

Examples from : [clojuredocs or cdoc]
              (user/clojuredocs name-here)
              (user/clojuredocs "ns-here" "name-here")
boot.user=> java.lang.ClassNotFoundException: planck.repl
If I then start typing in the REPL and pause, it seems to try to autocomplete or show docs, but the minibuffer errors with a similar ClassNotFoundException: planck.repl. (inf-clojure--detect-repl-type (inf-clojure-proc)) returns planck. This is in an empty directory, with no project files or anything else. I'm not familiar enough to know whether there's a home-directory dotfile that might cause something to look for Planck, but that's a thought I just had and am looking into now. Running boot repl from the terminal never shows me that error. Does anyone have any ideas on how to figure out what's going on?

Eric Ihli 03:10:07

Ohhhh interesting.

boot.user=> (find-ns 'planck.repl)

Eric Ihli 03:10:34

Debugger entered--returning value: nil
  inf-clojure--some-response-p(#<process inf-clojure> "(find-ns 'planck.repl)")
* #f(compiled-function (proc) "Identifies the current REPL type for PROC." #<bytecode 0xe936ed>)(#<process inf-clojure>)
It's returning nil there.

Eric Ihli 04:10:10

This is very confusing to me. Running the function returns planck but running each conditional individually returns nil for each.

(defun inf-clojure--detect-repl-type (proc)
  "Identifies the current REPL type for PROC."
  (when (not inf-clojure--repl-type-lock)
    (let ((inf-clojure--repl-type-lock t))
      (cond
       ((inf-clojure--some-response-p proc inf-clojure--lumo-repl-form) 'lumo)
       ((inf-clojure--some-response-p proc inf-clojure--planck-repl-form) 'planck)
       ((inf-clojure--some-response-p proc inf-clojure--joker-repl-form) 'joker)
       (t 'clojure)))))
(inf-clojure--detect-repl-type (inf-clojure-proc)) -> planck
(inf-clojure--some-response-p (inf-clojure-proc) inf-clojure--lumo-repl-form) -> nil
(inf-clojure--some-response-p (inf-clojure-proc) inf-clojure--planck-repl-form) -> nil
(inf-clojure--some-response-p (inf-clojure-proc) inf-clojure--joker-repl-form) -> nil


You have parens in those individual lines that are not there in the code you pasted above


(inf-clojure--some-response-p proc inf-clojure--lumo-repl-form) vs (inf-clojure--some-response-p (inf-clojure-proc) inf-clojure--lumo-repl-form)


Is that just a copy'n'paste error @ericihli?


Oh, never mind, you're substituting the parameter. I misread it with the long, long variable names šŸ˜ž


I suspect those calls depend on the value of inf-clojure--repl-type-lock then? That looks like the difference (I am not familiar with Emacs' internals)


Hmm, no, not according to the source code.


It looks like you might need to explicitly set the REPL type:
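For the record, inf-clojure lets you pin the REPL type from your config instead of relying on auto-detection. A minimal sketch, assuming the inf-clojure 2.x API where inf-clojure-repl-type is a buffer-local variable consulted before detection runs (check your version's docs; the variable name may differ):

```elisp
;; Sketch: force the REPL type to plain Clojure so auto-detection
;; (which was misfiring as 'planck above) never runs.
;; Assumes inf-clojure 2.x and its buffer-local `inf-clojure-repl-type'.
(add-hook 'inf-clojure-minor-mode-hook
          (lambda () (setq-local inf-clojure-repl-type 'clojure)))
```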

Eric Ihli 05:10:07

Thank you! This has been a multi-hour headache for me. I'm still curious why it was auto-detecting planck, but this fixes the problem.


Looks like inf-clojure has come a long way since I last tried it! Socket REPL support and all sorts of fun stuff.


Hello everyone


If I'm going to create REST APIs to process files, what is the best way to save these files in a DB?


I think a lot of people store files on disk and just keep metadata and a path in the database. Have you chosen a database yet?


Nope. Maybe I'll use a PostgreSQL jsonb field.


Actually, I'm going to handle CSV files.


Ah, so you're going to process them into data? And then not keep the files?


Won't saving to the file system be less performant?


At work (online dating), we process a lot of member photos and we store those on a NAS and keep just the metadata in the database.


What I'm thinking is to create an intermediate format as a Clojure data structure.


Putting large files into a database is usually not a good idea.


So your choices are: store metadata in the DB and the file on disk; convert the file completely to data and just store that in the DB (if you need to recreate the file, you can do it from the data); or store the whole file as a large blob in the database.


That latter choice is not common.
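A minimal sketch of the first option (metadata in the DB, file on disk). The files table, the storage directory, and the shape of the upload map are all hypothetical, and next.jdbc is just one choice of DB library:

```clojure
(ns example.storage
  (:require [clojure.java.io :as io]
            [next.jdbc.sql :as sql]))

;; Hypothetical: copy an uploaded file into a storage directory,
;; then record only its metadata and path in a `files` table.
(defn save-upload! [db {:keys [tempfile filename size]}]
  (let [target (io/file "storage" filename)]
    (io/make-parents target)
    (io/copy tempfile target)
    (sql/insert! db :files {:name filename
                            :size size
                            :path (str target)})))
```

Serving a download later is then just streaming the file at the stored :path.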


What if the intermediate files were EDN?


Would that be a good choice for saving to the fs?


That sounds reasonable yes. Easy to load / process.


How is your performance when saving the files to the fs?


If there's data in the file that you need to query, it will be better to extract that data and keep that in the DB (as well as keeping EDN on disk).
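The EDN-on-disk part is just spit/slurp plus clojure.edn; a tiny sketch, with a hypothetical file path:

```clojure
(ns example.edn-files
  (:require [clojure.edn :as edn]))

;; Persist parsed CSV rows as EDN, and read them back for processing.
(defn save-rows! [path rows]
  (spit path (pr-str rows)))

(defn load-rows [path]
  (edn/read-string (slurp path)))

;; (save-rows! "data/report.edn" [{:id 1 :total 42.0}])
;; (load-rows "data/report.edn") ;=> [{:id 1 :total 42.0}]
```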


Your choice will depend on what parts of the system need to do / do quickly. What sort of things will you do with the data from the files, after uploading / processing them?


I'm going to upload files, do filtering, find diffs between files based on some predicate functions, do charting, and then export the results as a CSV file again.


But maybe I can save the files so they don't have to be uploaded again.


Can you define large?


In MB, for example.


Perhaps I should have just asked "What size are your files?" šŸ™‚


Less than 10MB.


All sizes under 10MB.


OK. Well, likely still too big to want to store them as BLOBs in the DB (I wouldn't but I suspect some folks might?). But small in terms of uploads/downloads (depending on where your users are and what bandwidth they have).


It depends on the DB; Mongo, for example, has GridFS.


Another consideration: how long would you need to keep the data / files around? What would be the lifecycle of that? If you keep files around, you need to be able to clean them up as well as cleaning up the DB data.


If you have 10MB files, you might still get them into single documents (with a 16MB maximum) šŸ™‚ But it's interesting to note that MongoDB chunks the files (at 255KB) when it stores them alongside the document datastore.


Yeah, the 16MB limit bit me a few times, but GridFS works OK. I worked on one project where GridFS was used as file storage and Postgres for the rest of the data.


@seancorfield I will keep the files till the users delete them.


Maybe cloud storage is an option too?


You can always build some sort of abstraction for file storage: store the files on disk and later choose S3 or another cloud provider. It's always some sort of key-value store, so in SQL you store the key and metadata (owner of the file, whether it's deleted, ...).


(sh "/bin/sh" "-c" "ag text" :dir "/home/user/")
(sh "ag" "text" :dir "/home/user/")
(sh "ag" "text")
I'm stuck. Other commands like ls work fine, but I get {:exit 1, :out "", :err ""} every time I try to use ag. I'm on Ubuntu. Thanks.


Never mind. I found a workaround. It works if I provide a path to ag as in: (sh "ag" "text" "/home/user/")
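A likely explanation, assuming ag's usual TTY detection: when stdin is not a terminal (as it is under clojure.java.shell/sh), ag searches stdin rather than the working directory, finds nothing, and exits 1 with empty output. An explicit path forces it to search files:

```clojure
(ns example.shell
  (:require [clojure.java.shell :refer [sh]]))

;; Explicit path: ag searches files even though stdin isn't a TTY.
(sh "ag" "text" "/home/user/")

;; Equivalent, using :dir plus a relative path.
(sh "ag" "text" "." :dir "/home/user/")
```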