Jakub Holý (HolyJak) 18:02:40

Hello! On my dev machine I am getting this error now:
> ExceptionInfo: File has shrunk: ./asami_base_dir/my-erp/idx.bin
> expected-size: 16777280
> file-size: 8388640
Is there anything I can do to fix it? How could this have happened? Thank you!


Woah. I don’t know how that can happen, but I know WHAT has happened


You have a transaction that says the file is 16777280 bytes long. But the file is shorter.


The only thing that can be done is to rewind to whatever transaction has a size of 8388640


There’s no way to get more recent data, because that data appeared after that part of the file (transactions always append to files)


I could try an option to force loading corrupted files like this. It would step back through the transaction file until it covered the size of the data file, and then truncate the transaction file to that point
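The rewind idea described here could be sketched roughly as follows. This is a hypothetical helper, not Asami's actual code: it assumes each transaction record is a fixed-size long holding the data file's length, which is a simplification of the real record format.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TxRewind {
    // Assumed record format: one long per transaction, holding the
    // data-file length that the transaction expects.
    static final int RECORD_SIZE = Long.BYTES;

    /**
     * Step backwards through the transaction file until a record is found
     * whose expected data-file length fits within the actual data file,
     * then truncate the transaction file just after that record.
     * Returns the recovered data-file length (0 if nothing fits).
     */
    public static long rewind(Path txFile, long actualDataSize) throws IOException {
        try (FileChannel tc = FileChannel.open(txFile,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            ByteBuffer buf = ByteBuffer.allocate(RECORD_SIZE);
            long pos = tc.size() - RECORD_SIZE;
            while (pos >= 0) {
                buf.clear();
                tc.read(buf, pos);
                buf.flip();
                long recordedSize = buf.getLong();
                if (recordedSize <= actualDataSize) {
                    // Keep this record; drop every later (unsatisfiable) one.
                    tc.truncate(pos + RECORD_SIZE);
                    return recordedSize;
                }
                pos -= RECORD_SIZE;
            }
            tc.truncate(0); // no transaction is covered by the data file
            return 0;
        }
    }
}
```

Data written after the surviving transaction is lost, as noted below: truncation can only step back to a state the data file actually covers.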


The reason why I don't know how it happened is that the order of operations is:
• write the data to the data files
• force the data files to disk
• append the transaction to the transaction file (which contains the lengths of the data files)
• force the transaction file to disk
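The ordering above can be sketched with Java NIO. Asami itself is written in Clojure, so this is only an illustration of the protocol; the file names and the one-long-per-transaction record format are assumptions.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class TxOrdering {
    // Commit protocol sketch: data is durable on disk *before* the
    // transaction record that references it is ever written.
    public static void commit(Path dataFile, Path txFile, byte[] data) throws IOException {
        long newLength;
        // 1. Write the data, 2. force the data file to disk.
        try (FileChannel dc = FileChannel.open(dataFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
            dc.write(ByteBuffer.wrap(data));
            dc.force(true); // blocks until the bytes are on disk
            newLength = dc.size();
        }
        // 3. Append a transaction record holding the data file's length,
        // 4. force the transaction file to disk.
        try (FileChannel tc = FileChannel.open(txFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND)) {
            ByteBuffer record = ByteBuffer.allocate(Long.BYTES).putLong(newLength);
            record.flip();
            tc.write(record);
            tc.force(true);
        }
    }
}
```

Because the transaction record is appended only after the data force returns, a crash at any point should leave either no record at all or a record whose data is already fully on disk.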


In case "forcing" is unfamiliar, it's an operation in which the operating system is told to flush all buffers to disk. Any buffers that have been modified but not yet written out will be written to disk. The operation blocks until the file has been fully written.


If forcing were off, then transactions could be MUCH faster, because the operating system would be allowed to write the buffers out to disk after the transaction returned. Reads would all work as expected, because any buffers that have not yet been scheduled for writing are still resident in memory, so reading returns the data that was written. The problem with not forcing is that if the program or OS crashes or loses power, then it is possible for the data writes to appear to be finished, and the transaction gets written out before all the data is on the disk. After restarting, the transaction will expect data to be present that was never written.


But forcing slows things down. It's possible, as an "optimization", that force operations return early. That would normally be OK, but if the process were killed and the force operation never completed, then the transaction would be invalid.


That never USED to happen, but maybe it can now?


I've been looking at the code stack, and I'm not seeing any way it could be wrong?

Jakub Holý (HolyJak) 13:02:03

I have no idea what could have happened. But it is interesting to note that the expected size is exactly double the file size. Fortunately this is a local dev env where I can simply drop the data and re-create it from scratch easily (minus recent, not so important changes). But I am afraid I might be doing something wrong somewhere that makes this possible.


I wondered if what you described was due to the OS or the HDD caching writes that never got flushed… but this indicates a logic error. When the file needs to be expanded, it is doubled in size. This looks like an interaction with that.
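The doubling relationship is easy to check against the numbers in the error: the expected size is exactly one grow-by-doubling step beyond the actual file size. A trivial sketch (the growth function here is only an illustration of the strategy, not Asami's code):

```java
public class GrowthCheck {
    // Grow-by-doubling: each expansion multiplies the file size by 2.
    public static long grow(long size) {
        return size * 2;
    }

    public static void main(String[] args) {
        long fileSize = 8_388_640L;  // file-size from the error message
        long expected = 16_777_280L; // expected-size from the error message
        // The expected size is exactly one doubling beyond the actual size,
        // consistent with a transaction recorded after a doubling that
        // never made it (or did not stay) on disk.
        System.out.println(grow(fileSize) == expected); // prints true
    }
}
```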


I will need something repeatable to work with though 😞

Jakub Holý (HolyJak) 16:02:16

I have been working on this app for weeks and this is the first (or possibly second) time this happened. The next time it does, I will try to reconstruct what might have caused it from logs…


So the transaction should never appear until the data file is fully written