2023-02-18
# asami
Hello! On my dev machine I am getting this error now:
> ExceptionInfo: File has shrunk: ./asami_base_dir/my-erp/idx.bin
> expected-size: 16777280
> file-size: 8388640
Is there anything I can do to fix it? How could this have happened? Thank you!
You have a transaction that says the file is 16777280 bytes long. But the file is shorter.
The only thing that can be done is to rewind to whatever transaction has a size of 8388640
There’s no way to get more recent data, because that data appeared after that part of the file (transactions always append to files)
I could try an option to force loading corrupted files like this. It would step back through the transaction file until it covered the size of the data file, and then truncate the transaction file to that point
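Sketching that recovery idea in Clojure, under assumptions that are not from this thread: a transaction file of fixed-size records, each beginning with the data file's length as a big-endian long. The record size and all names here are hypothetical, not Asami's actual format:

```clojure
(require '[clojure.java.io :as io])

(def tx-record-size 56) ; assumed size of one transaction record, in bytes

(defn recorded-length
  "Read the data-file length stored in record n, assumed to be a
  big-endian long at the start of the record."
  [^java.io.RandomAccessFile tx n]
  (.seek tx (long (* n tx-record-size)))
  (.readLong tx))

(defn rewind-tx-file!
  "Step back through the transaction file until a record's recorded
  data-file size fits within the actual data file, then truncate the
  transaction file to just after that record."
  [tx-path data-path]
  (let [data-len (.length (io/file data-path))]
    (with-open [tx (java.io.RandomAccessFile. (io/file tx-path) "rw")]
      (loop [n (dec (quot (.length tx) tx-record-size))]
        (cond
          (neg? n) (.setLength tx 0) ; no record fits: empty transaction file
          (<= (recorded-length tx n) data-len)
          (.setLength tx (* (inc n) tx-record-size)) ; keep records 0..n
          :else (recur (dec n)))))))
```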
The reason why I don't know how it happened is that the order of operations is:
• write data to the data files
• force the data files to disk
• append the transaction to the transaction file (which contains the lengths of the data files)
• force the transaction file to disk
In case "forcing" is unfamiliar, it's an operation in which the operating system is told to flush all buffers to disk. Any buffers that have been modified but not yet written out are written to disk. The operation blocks until the file has been fully written.
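For concreteness, here is a minimal sketch of that ordering in Clojure. The FileChannel and .force calls are real JDK API, but everything else, including the single-long transaction record and the function names, is a stand-in rather than Asami's actual code:

```clojure
(import '[java.nio ByteBuffer]
        '[java.nio.channels FileChannel]
        '[java.nio.file OpenOption Paths StandardOpenOption])

(defn open-append
  "Open a file for appending, creating it if needed."
  ^FileChannel [path]
  (FileChannel/open
    (Paths/get path (make-array String 0))
    (into-array OpenOption [StandardOpenOption/CREATE
                            StandardOpenOption/WRITE
                            StandardOpenOption/APPEND])))

(defn commit!
  "Write data, force it to disk, then append a transaction record
  holding the new data-file length, and force that too."
  [^FileChannel data-ch ^FileChannel tx-ch ^ByteBuffer data]
  (.write data-ch data)     ; 1. write data to the data file
  (.force data-ch true)     ; 2. force the data file to disk
  (let [len (.size data-ch) ;    the length the transaction will record
        rec (doto (ByteBuffer/allocate 8)
              (.putLong len)
              (.flip))]
    (.write tx-ch rec)      ; 3. append the transaction record
    (.force tx-ch true)))   ; 4. force the transaction file to disk
```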
If forcing were off, then transactions could be MUCH faster, because the operating system would be allowed to write the buffers out to disk after the transaction returned. Reads would all work as expected, because any buffers that have not yet been written out are still resident in memory, so reading returns the data that was written. The problem with not forcing is that if the program or OS crashes or loses power, then it is possible for the data writes to appear to be finished, and for the transaction to be written out before all the data is on the disk. After restarting, the transaction will expect data to be present that was never written.
But forcing slows things down. It's possible, as an "optimization", that force operations return early. That would normally be OK, but if something were killed and the force operation never completed, the transaction would be invalid
I have no idea what could have happened. But it is interesting to note that the expected size is exactly double the file size. Fortunately this is a local dev env where I can simply drop the data and re-create it from scratch easily (minus recent, not-so-important changes). But I am afraid I might be doing something wrong somewhere that makes this possible.
I wondered if what you described is due to the OS or the HDD caching writes and not getting flushed... but this indicates a logic error. When the file needs to be expanded, it is doubled in size. This looks like an interaction with that.
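A toy illustration of how a grow-by-doubling policy lines up with the numbers above; the function and policy are illustrative, not Asami's actual growth code:

```clojure
(defn grow-size
  "Double `size` until it can hold `needed` bytes (illustrative only)."
  [size needed]
  (if (>= size needed)
    size
    (recur (* 2 size) needed)))

;; One doubling of the actual file size is exactly the expected size
;; from the error, consistent with a transaction that recorded the
;; grown length before the grown file ever reached the disk:
(grow-size 8388640 8388641) ; => 16777280
```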
I have been working on this app for weeks and this is the first (or possibly second) time this happened. The next time it does, I will try to reconstruct what might have caused it from logs…