Preamble
My SVN hosting is provided by a third party that limits my usage to 300 MB. As the project is only of moderate size, I never imagined this would be a problem. Once the app went to production, I realised that I needed some kind of backup mechanism, so I wrote a shell script that exports the database and zips up the user-uploaded files. I then commit both the .sql and .zip files to SVN (the SVN server does not run on the same machine as the app).
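For concreteness, here is a minimal sketch of such a backup script, assuming a MySQL database; the database name (myapp), the upload directory (/var/uploads), and the working-copy path (/backup/wc) are all hypothetical:

```sh
#!/bin/sh
# Nightly backup: dump the database, zip the uploads, commit both to SVN.
# All names and paths below are placeholders for illustration.
set -e

WC=/backup/wc                                # SVN working copy for backups
STAMP=$(date +%Y-%m-%d)

# Export the database to a plain-text .sql file.
mysqldump --user=backup --password="$DB_PASS" myapp > "$WC/myapp.sql"

# Zip up the user-uploaded files (recreate the archive each run).
rm -f "$WC/uploads.zip"
zip -r -q "$WC/uploads.zip" /var/uploads

# Add the files on the first run (--force skips already-versioned paths),
# then commit.
svn add --force "$WC/myapp.sql" "$WC/uploads.zip"
svn commit "$WC" -m "Automated backup $STAMP"
```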
Then today I noticed that I've exceeded the quota. I quickly realised that this must be because of the backups.
Finally...The Question
Because the .zip is a binary file, I guess the whole file is effectively added to the repository each time it's committed? (As far as I know, SVN does store binary deltas, but a compressed archive changes almost entirely whenever its contents change, so each commit costs close to the full file size.) The .zip file is currently about 60 MB, so given that the backup script runs daily, I can only store about five days' worth of backups within the 300 MB quota.
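That guess is easy to confirm locally; assuming a sample directory dir, re-zipping after a one-line change produces an archive that differs almost everywhere, so delta storage saves little:

```sh
# Archive a directory, change one file slightly, archive again.
zip -r -q a.zip dir
echo "one extra line" >> dir/somefile.txt
zip -r -q b.zip dir

# Count the bytes that differ between the two archives; for compressed
# data this is typically a large fraction of the file.
cmp -l a.zip b.zip | wc -l
```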
Anyway, I plan to improve the situation by:
Changing the backup script so that it compares the size of the new .zip with the latest .zip in SVN and only commits if they differ; the two will be the same size if no users have uploaded files since the last time the backup ran (see the sketch after this list).
Removing all the old versions of the .zip file from SVN to free up some quota. However, I don't know if this is actually possible: my understanding is that even if I delete the file from my working copy and commit the deletion, the file will only be removed from the HEAD of the repository, and all the previously committed versions will still be there (using up my precious quota).
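For the first point, here is a sketch of the size check, again with hypothetical paths; after an svn update the working copy holds the last committed archive, so its size can be compared with the freshly built one:

```sh
#!/bin/sh
# Build the new archive in /tmp and commit it only if its size differs
# from the last committed one. Paths are placeholders.
set -e

WC=/backup/wc
NEW=/tmp/uploads.zip

svn update -q "$WC"                      # compare against the latest commit
rm -f "$NEW"
zip -r -q "$NEW" /var/uploads

OLD_SIZE=$(wc -c < "$WC/uploads.zip")
NEW_SIZE=$(wc -c < "$NEW")

if [ "$NEW_SIZE" -ne "$OLD_SIZE" ]; then
    mv "$NEW" "$WC/uploads.zip"
    svn commit "$WC/uploads.zip" -m "Uploads changed; new backup"
else
    rm -f "$NEW"                         # nothing changed; skip the commit
fi
```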
Is there some way that I can permanently remove all the old versions of the .zip and start afresh with the more efficient backup strategy described above?
The Answer
You should do the following things to fix the situation:
300 MB is not much if you keep binary data in the repository; if it held source only, it could be sufficient.
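As for purging the old versions: Subversion deliberately never forgets, so history can only be removed by rebuilding the repository without the offending path. That requires admin access to the server, so with third-party hosting you would have to ask the provider to do it (or download a dump and start a new repository). The standard procedure, assuming the repository lives at /srv/repo and the archive was committed at backups/uploads.zip, looks roughly like this:

```sh
# Dump the full history, strip every revision of the zip out of it,
# and load the result into a fresh repository.
svnadmin dump /srv/repo > repo.dump
svndumpfilter exclude backups/uploads.zip < repo.dump > filtered.dump
svnadmin create /srv/repo-new
svnadmin load /srv/repo-new < filtered.dump
```

svndumpfilter works on path prefixes, so excluding the file removes all of its committed revisions at once.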