Measuring databases is important for knowing how much disk space a SQL Server instance requires today, as well as how much its database files are likely to grow. It takes more than one run of a script to understand all of your disk provisioning needs, but the script in this post gives a quick look at the database sizes on a SQL Server instance.
There are many ways to get this information, but personally I have always preferred to run a script like this to get database sizes in SQL Server. As a DBA you’ll spend a lot of time querying sys.databases, so it’s worth being familiar with it too.
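That kind of check can be as simple as a straight query against sys.databases. This isn’t the sizing script itself, just a minimal example of the sort of quick look I mean:
-- Quick look at the databases on an instance via sys.databases
SELECT [name], state_desc, recovery_model_desc, compatibility_level
FROM sys.databases
ORDER BY [name];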
Script to Show Database Sizes in SQL Server
The script below will return all database sizes in SQL Server –
I often bring up the Shrink File window in SSMS to view the database and log file sizes from there, as shown in the screenshot above. If you are only checking one database, this is a quick way to do it.
For more information on MS SQL database sizes, have a look at the Measuring Databases tag.
Tracking database file sizes and free space within data and log files is an important part of monitoring SQL Server. This information, along with other metrics such as growth events, can help you to better predict and plan for future disk space provisioning needs.
The following script provides a quick way to view database file size information. While it may not be the most efficient method, it has worked well for me in the past.
-- Get sizes of all database and log files, store into a temp table and select the output
DECLARE @command NVARCHAR(MAX)
DROP TABLE IF EXISTS #MSSQL_Database_Sizes
CREATE TABLE #MSSQL_Database_Sizes ([database_name] NVARCHAR(128) NULL, [file_name] NVARCHAR(128) NULL, current_size_mb INT NULL, freespace_mb INT NULL, collection_date DATE NULL)

-- sp_MSforeachdb replaces ? with each database name
SELECT @command =
'USE [?]
INSERT INTO #MSSQL_Database_Sizes ([database_name], [file_name], current_size_mb, freespace_mb, [collection_date])
SELECT
DB_NAME() AS [database_name],
[name] AS [file_name],
[size]/128.0 AS [current_size_mb],
[size]/128.0 - CAST(FILEPROPERTY([name], ''SpaceUsed'') AS INT)/128.0 AS [freespace_mb],
CAST(GETDATE() AS DATE) AS [collection_date]
FROM sys.database_files'
EXEC sp_MSforeachdb @command

SELECT *
FROM #MSSQL_Database_Sizes
ORDER BY current_size_mb DESC
Here is the query output:
The script above shows the user databases on my test SQL Server instance, along with the amount of free space within each database. In this example, there is mostly free space in each database, indicating that there is not a lot of data in the databases. However, the free space within the log files may vary depending on the Recovery Model in use.
If I import 6GB of data into the ‘fatanta’ database, the data file on disk may only grow by 1GB, depending on how much free space already exists in the file and on the Autogrowth increment configured for the database. Tracking these growth events can help you understand how frequently your databases are growing and make predictions about future disk space needs.
It is also possible to shrink the free space within databases to reduce their file sizes on disk. However, this is not recommended in production environments: shrinking fragments your indexes, and your database may need that extra space again in the future, maybe on Sunday.
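If you do decide to reclaim the space anyway (on a test box, say), a minimal sketch using DBCC SHRINKFILE is below; the logical file name and the target size in MB are placeholders.
-- Shrink a data file down to a 10240MB (10GB) target size; logical file name is a placeholder
DBCC SHRINKFILE (N'fatanta_Data', 10240);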
The Disk Usage by Top Tables Report in SQL Server is a quick way to get the sizes of all tables within a database. It lists the tables from largest to smallest by total size on disk.
If you notice that a database is growing larger in size, then you will want to know what data is causing the increase. Knowing the largest table(s) on your SQL systems helps you make better decisions overall. For example, with this knowledge you might want to review your indexes or compress some of them to save space.
There are many ways to check the size of a table in MS SQL Server. This post shows how to open the SQL Server Disk Usage by Top Tables Report, which I find to be one of the more efficient ways to check table sizes in SQL.
I have another post, if of interest, that shows a variety of ways to check table sizes in MSSQL. Below is a demo of the Disk Usage by Top Tables Report in MSSQL.
Disk Usage by Top Tables Report
To open this report:
1. Open SQL Server Management Studio (SSMS).
2. Expand Databases.
3. Right-click the database and select Reports > Standard Reports > Disk Usage by Top Tables.
The report shows that this database has 2 tables close to 700MB in size, both containing the same number of rows but there are some differences in the indexes.
For more random tips for checking disk space in MSSQL, have a look at my Measuring Databases tag.
When a database file gets full and needs more space, a growth event happens. The size of each growth can be set within the database options, or with an ALTER DATABASE statement as shown below.
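As a rough sketch of the T-SQL route, the database and logical file names below are placeholders; adjust the increment to suit your own files.
-- Set a fixed 256MB growth increment on a data file (database and logical file names are placeholders)
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Data', FILEGROWTH = 256MB);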
When considering growth event sizes for log (.ldf) files, you may want to have a look at this post by Paul Randal to avoid VLF fragmentation. We don’t need to worry about such things as much with the data (.mdf) files. There’s no perfect number to set it to, just try not to make it too small or too big… no… wait. Let me explain.
If I say to keep growth in proportion to your database size, you might think that percent growth would be the way to go. On larger databases, though, waiting on gigabyte-sized growth events can cause performance problems (Instant File Initialisation might mitigate that for data files), and percent growth can also lead to unexpected disk space alerts, particularly if it is configured on all of the TempDB files.
For the reasons above, some folks avoid using percent growth completely, setting growth to around the 250MB mark if the database isn’t tiny. If it’s a business-critical database, you could measure organic growth with the help of a monitoring tool, then grow the database out during maintenance periods.
The screenshot below shows how we set a new database file size, and a T-SQL equivalent follows a little further down.
You could then track free space within each database on the server, with the help of your chosen monitoring solution. If you don’t have one of those, a simple Power BI report does the trick!
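If you’d rather pre-grow the file with T-SQL during a maintenance period instead of using the GUI shown above, here is a minimal sketch; the names and target size are placeholders, and SIZE must be larger than the file’s current size.
-- Pre-grow a data file to 50GB ahead of expected growth (database, file name and size are placeholders)
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Data', SIZE = 50GB);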
Setting maximum database file sizes in SQL Server is something you’d ideally never need to do, but it’s a good last-resort option if you need it.
The reason I say this is that you never want a production database to stop accepting updates or inserts because of a size cap. However, if something is causing unpredictable disk space issues, you should really try to stop it from happening again at the source… Sometimes that’s not so easy though.
For example, imagine random user queries are consuming massive amounts of TempDB space and throwing disk space monitoring alerts, and you’re actively killing their queries to stop the disk from becoming full. That SQL Server instance could host many other live databases, and let’s also say there are more than 10 users running ad hoc queries throughout the day.
Those users causing the unexpected growth will need to understand why this is happening before you can sleep easy. How long will it take to get everyone up to speed? You’ll need to show them what happens when they try to throw a 100GB table into a temporary table, or how to measure the size of the table they’re working with!
If they’re remote users, things could be all the more difficult to manage. What information will you try to send them in that “Sorry I killed your SPID” email?
There are many variables, so it’s definitely one of those “it depends” scenarios in the land of SQL Server. My point here is that you might have a legitimate reason to set a cap on a database, but in most cases it should be considered a short-term measure.
Below is a simple guide on how to set a size cap on a database in SQL Server.
Setting Max Database Size
Right-click the database and select Properties.
Click into the Files tab on the left-hand menu and click the button as shown below.
Don’t go setting TempDB files to 100MB like I’m doing above. This is test stuff.
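The same cap can be set with T-SQL if you prefer; a minimal sketch with placeholder database and logical file names is below, and setting MAXSIZE back to UNLIMITED removes the cap again.
-- Cap a data file at 500GB (database and logical file names are placeholders)
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Data', MAXSIZE = 500GB);

-- Remove the cap later if needed
ALTER DATABASE [MyDatabase]
MODIFY FILE (NAME = N'MyDatabase_Data', MAXSIZE = UNLIMITED);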
It’s a solid best practice to have TempDB files stored on a separate drive, away from your C: drive. This is because TempDB can grow very large, very quickly (within minutes), depending on workload of course, and this applies to both the TempDB data (.mdf/.ndf) and log (.ldf) files.
This post is a quick tutorial on how to move all TempDB files from one location to another in SQL Server. A planned maintenance window is required on production servers for this one.
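The move itself is a couple of ALTER DATABASE statements followed by a service restart. A minimal sketch is below, assuming the default logical file names (check sys.master_files for yours) and a placeholder T:\TempDB path.
-- Point the TempDB files at their new location (paths are placeholders; logical names below are the defaults)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = N'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = N'T:\TempDB\templog.ldf');
-- The new paths only take effect after the SQL Server service is restarted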
There are many ways to check the size of a table in SQL Server. The way you will do it will likely depend on what your task at hand is.
For example, a quick look at the Disk Usage by Top Tables report on a database you’ve never seen before would instantly show you the large tables in a database. But if you want this information saved somewhere else, e.g. to a table in SQL, it’s an easy task for us to run a TSQL script and output to Excel or CSV.
To open this report:
1. Open SQL Server Management Studio (SSMS).
2. Expand Databases.
3. Right-click the database and select Reports > Standard Reports > Disk Usage by Top Tables.
The report shows that this database has 2 tables close to 700MB in size, both containing the same number of rows but there are some differences in the indexes.
Another way to check the sizes of tables in SQL Server Management Studio (SSMS) is to open the Table Properties within the Object Explorer.
To do this, expand the database and tables. You may need to add a filter to find your table.
Now, right-click the table and select Properties as shown in the screenshot below.
Open the Storage tab on the left-hand sidebar. It may take a second for the window to load, but this will show the index space, data space usage and the table row count.
Lastly, we can run a query on SQL Server system tables to get the table sizes.
-- Get Table Size SQL Server Script
SELECT
    t.[name] AS TableName,
    s.[name] AS SchemaName,
    p.[rows] AS RowCounts,
    CAST(ROUND((SUM(a.total_pages) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS TotalSpaceMB,
    CAST(ROUND((SUM(a.used_pages) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS UsedSpaceMB,
    CAST(ROUND(((SUM(a.total_pages) - SUM(a.used_pages)) * 8) / 1024.00, 2) AS NUMERIC(36, 2)) AS UnusedSpaceMB
FROM sys.tables t
INNER JOIN sys.indexes i
    ON t.[object_id] = i.[object_id]
INNER JOIN sys.partitions p
    ON i.[object_id] = p.[object_id]
    AND i.index_id = p.index_id
INNER JOIN sys.allocation_units a
    ON p.[partition_id] = a.container_id
LEFT OUTER JOIN sys.schemas s
    ON t.[schema_id] = s.[schema_id]
WHERE t.[name] NOT LIKE 'dt%'
    AND t.is_ms_shipped = 0
    AND i.[object_id] > 255
GROUP BY t.[name], s.[name], p.[rows]
ORDER BY t.[name];
That’s us done for now. For more SQL Tips from a random MS SQL DBA in the field, feel free to check out my SQL DBA Blog page which includes a list of my latest posts.
This post contains a script to get database and log file growth events on a SQL Server instance. I often find a need to use this script to check what databases have been growing on unmonitored SQL Servers. You know, those SQL Servers that you only need to touch when something breaks. It’s not your fault there’s no room for proactivity, don’t worry.
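The version I reach for reads the auto-growth events out of the default trace. A sketch is below, assuming the default trace is still enabled (it is by default); it only covers whatever history the trace files still hold.
-- Get database and log file growth events from the default trace
DECLARE @tracefile NVARCHAR(260);

SELECT @tracefile = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT
    te.[name] AS EventName,                 -- Data File Auto Grow / Log File Auto Grow
    t.DatabaseName,
    t.[FileName],
    t.StartTime,
    t.Duration / 1000 AS Duration_ms,
    (t.IntegerData * 8) / 1024 AS GrowthMB  -- IntegerData = number of 8KB pages grown
FROM sys.fn_trace_gettable(@tracefile, DEFAULT) AS t
INNER JOIN sys.trace_events AS te
    ON t.EventClass = te.trace_event_id
WHERE te.[name] IN ('Data File Auto Grow', 'Log File Auto Grow')
ORDER BY t.StartTime DESC;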
When there are a lot of growth events in the query result, I tend to copy it out into Excel and filter it. You might want to save it to a table and sum the total growth per database.
If you want to know more about database growth events, see my other post here, where I try to explain what it all is with some other useful links.
This blog post is to share a script that you can run in SQL Server (versions 2008 R2 and above) which shows the available disk space for all local volumes on a SQL Server host. I’m also including a quick note on how to get this information with remote PowerShell.
Running out of disk space is one of the most common issues on unmonitored database servers. If you do not monitor the disk space usage of a SQL Server, things will seemingly come to a halt at random one day: the drives are full and SQL Server will not accept any more writes until you free up or add space.
Everything performance related in SQL Server depends on your unique workloads, queries, application, platform and environment. Database files need room to grow, and database log files need to hold as much as they need to (depending on the Recovery Model). Whether it’s a slow, ever-increasing table that bites you, or a user query that bloats the log files, when an unmonitored SQL Server is having issues, checking the available disk space on the host is one of the first steps you should take when you connect and begin diagnosing.
The script below uses sys.dm_os_volume_stats and will return the total and available space for each volume attached to the SQL Server host.
-- get available disk space sql server
SELECT DISTINCT
vs.volume_mount_point,
vs.file_system_type,
vs.logical_volume_name,
CONVERT(DECIMAL(18,2),vs.total_bytes/1073741824.0) AS [Total Size (GB)],
CONVERT(DECIMAL(18,2), vs.available_bytes/1073741824.0) AS [Available Size (GB)],
CONVERT(DECIMAL(18,2), vs.available_bytes * 1. / vs.total_bytes * 100.) AS [Space Free %]
FROM sys.master_files AS f WITH (NOLOCK)
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.[file_id]) AS vs
ORDER BY vs.volume_mount_point OPTION (RECOMPILE);
Now get this info sent to your email via Database Mail, if the availability of the SQL Server means anything to you. If it’s business-critical that inserts into an online database never stop, then ideally you’d already have a decent 3rd party monitoring tool in place.
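As a rough sketch of the Database Mail route, assuming a mail profile already exists; the profile name, recipient and 50GB threshold here are placeholders.
-- Email volumes that are low on free space via Database Mail (profile name and recipient are placeholders)
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA_Mail',
    @recipients   = N'dba-team@example.com',
    @subject      = N'SQL Server volume free space check',
    @query        = N'SELECT DISTINCT vs.volume_mount_point,
        CONVERT(DECIMAL(18,2), vs.available_bytes / 1073741824.0) AS [Available Size (GB)]
        FROM sys.master_files AS f
        CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.[file_id]) AS vs
        WHERE vs.available_bytes / 1073741824.0 < 50';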
Get-Volume is the PowerShell command to show local volume information for your server. If you need this from a remote server, use Enter-PSSession to connect to the target server first (this requires Local Admin permissions).