I am eager to get the Fog Mud storage system on-line at home so I can abstract away local storage and make dockerizing file storage applications easier. A major hurdle I need to clear is the backup use case. I really dislike losing data. My current backup system has been found wanting anyway, since systems like Bacula are complex to set up. I really like simple, ready-to-go software. Or maybe I just want to make the complexity mess myself; I'm not sure which.

In the current design the already overloaded metadata service tracks the materialized current state of the application. To facilitate reliable backup I want to expand the service to also track the events which have occurred over time, which would drive the backup system. Ideally the backup system would be aware of nothing beyond the cipher text of each object and the last event it has seen.
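As a rough sketch of that boundary, the backup system's entire view might be as narrow as the two shapes below. The names are my own guesses for illustration, not a settled schema.

```typescript
// Hypothetical view from the backup system's side: it receives only
// cipher text plus an opaque ordering token, never plaintext or
// application-level metadata.
interface BackupEvent {
  eventId: string;    // opaque, orderable identifier issued by the metadata service
  ciphertext: Buffer; // encrypted object contents; never decrypted here
}

// Between runs the backup system persists nothing but the last event it saw.
interface BackupCursor {
  lastEventId: string;
}
```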

A while ago I began expanding the metadata service to use a local LevelDB instance to store the materialized state. This needs to be expanded into an event source. Since LevelDB is really a key-value store, I was thinking I would use a format similar to:

| Key | Value |
| --- | ----- |
| `/v0/term` | Incremented every time the system comes on-line, starting at 0. |
| `/v0/{term}/{event}` | The event payload; each `{event}` is a monotonically increasing number within its term. |
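To make the layout concrete, here is a minimal sketch of building those keys with the `level` package. The zero-padding width and helper names are my assumptions, not part of the design itself.

```typescript
import { Level } from 'level';

// LevelDB sorts keys lexicographically (bytewise), so numeric key
// components are zero-padded; otherwise '/v0/10/...' would sort
// before '/v0/9/...'.
const pad = (n: number): string => n.toString().padStart(12, '0');

const eventKey = (term: number, event: number): string =>
  `/v0/${pad(term)}/${pad(event)}`;

const db = new Level<string, string>('./metadata', { valueEncoding: 'utf8' });

// Record the 42nd event of term 3.
await db.put(eventKey(3, 42), JSON.stringify({ type: 'object-stored' }));
```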

This design avoids two issues. If the system used a single monotonically increasing event ID, it would be hard to determine where to resume numbering after a restart without scanning for the highest existing ID, so startup times would suffer greatly. With this approach startup time is minimal: we just increment the term. And by using numeric event identifiers within each term, the keys sort naturally in the database and the next identifier is always easy to guess.
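A minimal startup sketch of that reasoning, assuming `level` v8, where a missing key rejects with a `LEVEL_NOT_FOUND` error code; the function names and in-memory counter are my own:

```typescript
import { Level } from 'level';

const db = new Level<string, string>('./metadata', { valueEncoding: 'utf8' });
const pad = (n: number): string => n.toString().padStart(12, '0');

// On boot: read the last term and increment it. Startup cost is one get
// and one put, no matter how many events the database holds.
async function openTerm(): Promise<number> {
  let last = -1; // first boot leaves this at -1, so the first term is 0
  try {
    last = Number(await db.get('/v0/term'));
  } catch (err: any) {
    if (err.code !== 'LEVEL_NOT_FOUND') throw err;
  }
  const term = last + 1;
  await db.put('/v0/term', String(term));
  return term;
}

// Within a term the event number is a plain in-memory counter, so
// appending an event never requires a database read.
let nextEvent = 0;
async function appendEvent(term: number, payload: unknown): Promise<void> {
  await db.put(`/v0/${pad(term)}/${pad(nextEvent++)}`, JSON.stringify(payload));
}

// Usage: open a term at boot, then append events as they occur.
const term = await openTerm();
await appendEvent(term, { type: 'object-stored', object: 'example' });
```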