We are using MS SQL Server and our application has heavy data exchange every day, roughly 20K new rows per day, and the database log file keeps growing along with it. We delete the data after processing to keep the database size under control, but the log file keeps increasing anyway.
We are manually executing the following script to shrink the log file; also, only the DB owner can run this process.
Is there a permanent fix to automatically control the log file size? Say we want it to allocate 4 GB at most, and it should automatically clear out the older log records.
USE db1;
GO
ALTER DATABASE db1
SET RECOVERY SIMPLE;
GO
-- first parameter is the logical log file name, second is the target size in MB
DBCC SHRINKFILE (db1_log, 999);
ALTER DATABASE db1
SET RECOVERY FULL;
GO
Regards, Raj
I see this:
In the FULL recovery model, a full database backup does not truncate the transaction log. Additionally, the server will not under any circumstances reuse or remove log records that have not been captured by a transaction log backup. This means the log file will just keep growing and growing.
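If you want to verify that this is what's happening, you can check the log_reuse_wait_desc column in sys.databases; a value of LOG_BACKUP means the server is waiting for a transaction log backup before it can reuse log space. A minimal check (db1 is the database name from your question):

-- Show the recovery model and the reason the log cannot currently be reused.
-- LOG_BACKUP here confirms the log is waiting on a transaction log backup.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'db1';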
Yes. To fix this problem, you need to take separate transaction log backups... and take them often. This will allow the log file to stay at a reasonable size, probably much smaller than your 4 GB.
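The simplest form of a log backup looks something like the sketch below; the disk path is just a placeholder, and in practice you would schedule this through SQL Server Agent or your existing backup tooling rather than running it by hand.

-- Back up (and thereby allow truncation of) the transaction log for db1.
-- The file path is an example only; point it at your real backup location.
BACKUP LOG db1
TO DISK = N'D:\SQLBackups\db1_log.trn';

Run on a schedule (every 15 to 60 minutes is a common starting point, depending on how much data loss you can tolerate), this keeps the log from accumulating unbacked-up records.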
Be careful. The first time you take a backup after making the change, you can get a rather large backup file that might take a while to finish and put significant load on the database. But after that first run, things will quiet down if you've chosen a reasonable schedule.
Additionally, once you complete your first transaction log backup on the new schedule, you can finally shrink your transaction log file. There is no point in shrinking the file before then: it's still using all that space, and even if it weren't, it would just grow large again. Once you've done this initial shrink, you really shouldn't do it again. You want your maintenance schedule set up so you never need to manually shrink the file; you're only shrinking it this one time to get back to a reasonable state after the earlier mistake.
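If you want a concrete sketch of that one-time cleanup, something like the following would do it; the logical file name db1_log comes from your script, 4096 MB matches the 4 GB ceiling you mentioned, and the growth increment is just an assumption to adjust for your environment.

USE db1;
GO
-- One-time shrink, run only after the first transaction log backup has completed.
-- 4096 MB matches the 4 GB ceiling mentioned in the question.
DBCC SHRINKFILE (db1_log, 4096);
GO
-- Optional: a fixed autogrowth step instead of percentage growth,
-- so the file grows in predictable increments if it ever needs to.
ALTER DATABASE db1
MODIFY FILE (NAME = db1_log, FILEGROWTH = 512MB);
GO

Note there is no need to flip the recovery model to SIMPLE and back as in your original script; with regular log backups in place, the log stays manageable while the database remains in FULL recovery.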