Databases typically grow over time. One occasion where they may buck that trend is when obsolete data is permanently removed. In Postgres, Dropping or Truncating a table does clear its data file on disk (see $PGDATA/base/YourDbOID/ ). In SQL Server, however, the data file keeps its size on disk. The same is true of running a regular VACUUM in Postgres after bloat has been introduced by Updates and Deletes: internally the space is marked as reusable, but it remains reserved by the database.
Dead tuples are created by Update and Delete operations in Postgres. Running VACUUM or VACUUM ANALYZE marks these rows as reusable for future inserts and updates. A VACUUM FULL goes further by physically rewriting the entire table without the dead tuples, thereby returning the free space to the OS. Note that the old bloated file is only removed once the new ‘cleansed’ file has been written, so you need some room for manoeuvre, unless you use a workaround such as this by Anvesh.
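As a sketch of this in practice, you can check how many dead tuples each table is carrying via the pg_stat_user_tables view, then vacuum a specific table (the table name below is a hypothetical placeholder):

```sql
-- Tables with the most dead tuples awaiting cleanup
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

-- Mark dead tuples as reusable and refresh planner statistics
-- (runs without an exclusive lock, unlike VACUUM FULL)
VACUUM ANALYZE your_table;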
SQL Server expands the file ‘container’ on demand if Auto Grow is enabled, or when a manual Grow operation is performed. The space actually used inside the file may be only a fraction of its size on disk.
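To see how much of each file ‘container’ is actually in use, a sketch run against the current database (FILEPROPERTY returns used 8 KB pages, and 128 pages make 1 MB):

```sql
-- Allocated size versus space actually used, per file of the current database
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;
```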
Both the Postgres and SQL Server documentation make it clear that automating regular shrink operations is a bad idea. The operation is resource intensive, causes exclusive table locking and leads to unnecessary fragmentation. If a database file is repeatedly shrunk only to grow back to the same size again, it’s simply a waste of time and resources.
In SQL Server, Microsoft recommend database option AUTO_SHRINK be kept switched off. You should size your database files correctly from the outset, avoiding reliance on Auto Grow where possible. Having free space within the data and log files is normal. When using the Simple recovery model, the log file is managed and truncated by the database engine once a Checkpoint has occurred. In the Full and Bulk Logged recovery models, your backup strategy needs to manage the Log file size.
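To check and enforce that recommendation, a short sketch (YourDb is a placeholder database name):

```sql
-- Which databases currently have auto shrink enabled?
SELECT name, is_auto_shrink_on
FROM sys.databases;

-- Switch it off for a given database
ALTER DATABASE YourDb SET AUTO_SHRINK OFF;
```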
Postgres doesn’t store its objects in container-like database files but rather splits them into many smaller files, housed under the database OID directory. There is no concept of pre-sizing the Postgres data files, but the transaction log (WAL) is configurable.
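You can locate a table’s underlying file within that OID directory, and see its total on-disk footprint, with built-in functions (the table name is a placeholder):

```sql
-- Path of the table's main data file, relative to $PGDATA
SELECT pg_relation_filepath('your_table');

-- Total on-disk size of the table including indexes and TOAST data
SELECT pg_size_pretty(pg_total_relation_size('your_table'));
```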
It’s rare, but you may want to perform a manual shrink operation, for example to temporarily overcome low disk space or, in SQL Server, after dropping or truncating a large obsolete table and/or index. If your server is busy, limit the operation to a single database file (SQL Server) or a single table (Postgres) at a time; this will help reduce blocking. Below are examples of how you can reclaim space for the OS in both PostgreSQL and SQL Server:
In SQL Server (the database and file names here are hypothetical placeholders):

```sql
-- Shrink the whole database, leaving 10% free space in its files
DBCC SHRINKDATABASE (YourDb, 10);

-- Or shrink a single file to a target size in MB
-- (a smaller unit of work, so less blocking on a busy server)
DBCC SHRINKFILE (YourDb_Data, 2048);
```
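And in Postgres, a sketch using a placeholder table name; remember that VACUUM FULL takes an ACCESS EXCLUSIVE lock on the table for the duration:

```sql
-- Rewrite just this table without its dead tuples, returning space to the OS
VACUUM FULL VERBOSE your_big_table;

-- Or, given enough free disk space, rewrite every table in the current database
VACUUM FULL;
```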