What caused the database [.mdf file] to grow suddenly
-
27-12-2020
Question
I am trying to find what caused the .mdf file of a database to bloat from 20 GB to 100 GB.
So far I have checked for autogrow events to pin down the time, but I could not find any using the standard reports or even the default trace files.
We don't have third-party monitoring tools to confirm whether a maintenance job (such as an index rebuild) or some other process was responsible.
How can I find what caused the growth of this mdf?
Solution
This will find all of the autogrowth events recorded in the default trace files still in the active rollover sequence, from an earlier answer of mine:
DECLARE @path NVARCHAR(260);

-- Locate the current default trace file; CHAR(92) is the backslash.
SELECT @path = REVERSE(SUBSTRING(REVERSE([path]),
           CHARINDEX(CHAR(92), REVERSE([path])), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;

SELECT
    DatabaseName,
    HostName,
    ApplicationName,
    [FileName],
    SPID,
    Duration,
    StartTime,
    EndTime,
    FileType = CASE EventClass WHEN 92 THEN 'Data' ELSE 'Log' END
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE EventClass IN (92, 93) -- Data File Auto Grow, Log File Auto Grow
-- AND DatabaseName = N'AdventureWorks'
ORDER BY StartTime DESC;
Now, once you've identified a suspect autogrowth event, you can see information such as the application name and host name that caused it. You might also capture other activity by the same SPID, but you can't rely on that. Look for anything that started or ended within an arbitrary window - this looks at 5 minutes before and 5 minutes after, and hard-codes the SPID observed above:
SELECT *
FROM sys.fn_trace_gettable(@path, DEFAULT)
WHERE StartTime >= DATEADD(MINUTE, -5, '2018-03-19 11:41:16.970')
  AND EndTime   <  DATEADD(MINUTE,  5, '2018-03-19 11:41:16.970')
  -- AND TextData IS NOT NULL
  AND SPID = 63;
If you are rolling through 20 trace files per day, something is not configured correctly, or you are performing way too much of something that is filling those files with noise. IMHO.
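Because the default trace rolls over quickly, one option is to capture future autogrowth events yourself. A minimal sketch of an Extended Events session using the database_file_size_change event, available in recent versions of SQL Server (the session name and target file name are placeholders, not from the original answer):

```sql
-- 'TrackAutogrowth' and the .xel filename are hypothetical; adjust for your environment.
CREATE EVENT SESSION [TrackAutogrowth] ON SERVER
ADD EVENT sqlserver.database_file_size_change
(
    ACTION (sqlserver.client_app_name,
            sqlserver.client_hostname,
            sqlserver.sql_text)
)
ADD TARGET package0.event_file
(
    SET filename = N'TrackAutogrowth.xel', max_file_size = 50
)
WITH (STARTUP_STATE = ON);

ALTER EVENT SESSION [TrackAutogrowth] ON SERVER STATE = START;
```

Unlike the default trace, this session survives rollover on your own terms and records which application and host issued the statement that triggered each growth.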
Other tips
Before asking why, you should be asking what:
First, run a query like this to determine used/free space in each data/log file for the current database:
SELECT dbname       = DB_NAME(),
       filetype     = type_desc,
       logical_name = name,
       TotalMB   = CONVERT(decimal(12,1), size / 128.0),
       UsedMB    = CONVERT(decimal(12,1), FILEPROPERTY(name, 'SpaceUsed') / 128.0),
       FreeMB    = CONVERT(decimal(12,1), (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0),
       MaxSizeMB = CASE WHEN max_size = -1 THEN NULL
                        ELSE CONVERT(decimal(18,1), max_size / 128.0) END,
       GrowthRate = CASE WHEN is_percent_growth = 1 THEN CONVERT(varchar(12), growth) + '%'
                         WHEN growth = 0 THEN 'FIXED'
                         ELSE CONVERT(varchar(12), growth / 128) + 'MB' END,
       physical_name
FROM sys.database_files WITH (NOLOCK)
ORDER BY type, file_id;
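If the GrowthRate column above shows percent-based growth, a large file will grow in ever-larger jumps (10% of 100 GB is 10 GB at a time). A hedged sketch of switching a file to a fixed increment - the database and logical file names here are placeholders:

```sql
-- 'YourDatabase' and 'YourDataFile' are placeholder names; pick an
-- increment appropriate for your workload and storage.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'YourDataFile', FILEGROWTH = 256MB);
```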
Is the 100GB file actually full? Or is it mostly empty space?
If it is empty, you can do some planning to recover some of the empty space, with all the appropriate caveats about shrinking.
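If, after weighing those caveats (index fragmentation, the cost of regrowing later), you still want to reclaim the space, shrinking is done per file. A minimal sketch, where the logical file name and the 40960 MB (40 GB) target are assumed values, not from the original answer:

```sql
USE [YourDatabase];  -- placeholder database name
-- Shrink the data file toward a 40960 MB target, leaving some headroom.
DBCC SHRINKFILE (N'YourDataFile', 40960);
```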
If it is full, use a query like this to find out what tables are using all the space:
SELECT s.name AS SchemaName,
       t.name AS TableName,
       MAX(p.rows) AS [RowCount],
       CONVERT(decimal(12,1), SUM(a.total_pages) / 128.0) AS TotalSpaceMB,
       CONVERT(decimal(12,1), SUM(a.used_pages) / 128.0)  AS UsedSpaceMB
FROM sys.tables t
INNER JOIN sys.indexes i          ON t.object_id = i.object_id
INNER JOIN sys.partitions p       ON i.object_id = p.object_id AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a ON p.partition_id = a.container_id
LEFT OUTER JOIN sys.schemas s     ON t.schema_id = s.schema_id
WHERE t.name NOT LIKE 'dt%'
  AND t.is_ms_shipped = 0
  AND i.object_id > 255
GROUP BY t.name, s.name
ORDER BY TotalSpaceMB DESC;
If you know what table has grown, that should lead you back to whatever process might have contributed to it.
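One place to start connecting a grown table back to a process is when it was last written to. A hedged sketch using sys.dm_db_index_usage_stats - note these counters reset on instance restart, so treat the results as indicative only:

```sql
-- Most recent write per table in the current database,
-- covering only activity since the last instance restart.
SELECT OBJECT_NAME(us.object_id) AS TableName,
       MAX(us.last_user_update)  AS LastWrite
FROM sys.dm_db_index_usage_stats AS us
WHERE us.database_id = DB_ID()
  AND us.last_user_update IS NOT NULL
GROUP BY us.object_id
ORDER BY LastWrite DESC;
```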