*Let me start with a disclaimer: this is not the right solution for most people.*
So I ran into a situation where an OLTP database was, most of the time, using only about 20 MB of log space, yet the log file was around 1 GB on roughly 1 GB of data. (I'll have to dig up how I found that number and put it in another post.) In these days of terabyte drives it would be tempting to say "who cares," but this was on a hosted solution where they were trying to keep costs down, and there were over a hundred databases in a similar situation.

It turned out the growth happened when a database was updated as part of an application upgrade. Essentially, you start with 100 customer databases on application version 1 and slowly upgrade a couple a night over several months until all 100 are on version 2 (using two application servers on different versions). I came up with something like the following script, which I put in a SQL Agent job to run nightly. So now when someone runs an upgrade and the log file for a database balloons, it gets shrunk back down to 40 MB that night.
DECLARE @logShrinkQuery AS NVARCHAR(2048);

DECLARE LOG_SHRINK_CURSOR CURSOR FOR
-- Build a shrink command for every log file currently larger than 40 MB.
-- f.size is in 8 KB pages, so size * 8.0 / 1024 converts it to MB.
-- QUOTENAME protects against database names with spaces or odd characters.
SELECT 'USE ' + QUOTENAME(d.name) + ';
DBCC SHRINKFILE(''' + f.name + ''', 40);'
FROM sys.master_files f
JOIN sys.databases d ON d.database_id = f.database_id
WHERE f.type_desc = 'LOG'
AND f.size * 8.0 / 1024 > 40;

OPEN LOG_SHRINK_CURSOR;
FETCH NEXT FROM LOG_SHRINK_CURSOR INTO @logShrinkQuery;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_executesql @logShrinkQuery;
    FETCH NEXT FROM LOG_SHRINK_CURSOR INTO @logShrinkQuery;
END;
CLOSE LOG_SHRINK_CURSOR;
DEALLOCATE LOG_SHRINK_CURSOR;
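
If you want to see how much log space your databases are actually using before picking a shrink target, one common way (not necessarily how I arrived at my 20 MB figure, which I'll cover in that other post) is DBCC SQLPERF, or on SQL Server 2012 and later the log space DMV:

-- Report log file size and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- Or, per database (run in the database you care about), SQL Server 2012+:
SELECT total_log_size_in_bytes / 1024 / 1024 AS log_size_mb,
       used_log_space_in_bytes / 1024 / 1024 AS used_log_mb
FROM sys.dm_db_log_space_usage;

Running something like this over a few days of normal activity gives you a defensible target size instead of a guess.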