Most outages do not happen suddenly. A disk fills up over days or weeks, a few megabytes at a time. Log files accumulate, database tables grow, application caches expand. Then one morning your application throws errors, cron jobs fail silently, and you spend an hour tracing the problem back to a 100% full filesystem. A disk full warning on Linux is the system trying to tell you something before that happens, but most teams only notice it after the fact.
Linux gives you several practical ways to detect disk saturation early. The tools are already installed. The real challenge is building the habit of looking at the right metrics before problems escalate.
This guide covers how disk full warnings surface on Linux, what to check first, and how to set up a repeatable process to identify disk issues early so you catch them during routine review, not during a midnight incident.
Why Disk Full Events Are More Disruptive Than They Look
When a filesystem hits 100% usage on Linux, the effects spread far beyond "no more writes." Applications crash or hang, since any process writing to disk will fail or block. Log rotation breaks because logrotate cannot create new files, so old logs never get cleared. Cron jobs fail silently when scheduled tasks need to write output or temporary files. MySQL and PostgreSQL stop accepting writes, because database servers treat a full disk as a critical error. System processes stall when the OS needs space for swap, temp files, or kernel operations.
The frustrating part is that a full disk often looks like something else at first. You see connection timeouts, application errors, or failed jobs before you think to check disk space. That delay is exactly what early detection prevents.
How to Check for a Disk Full Warning on Linux
The first command most people run is df -h, which gives a human-readable overview of all mounted filesystems. Look at the Use% column. Anything approaching 85-90% on a production filesystem deserves attention. At 95% you should be actively investigating. At 100% you are already in recovery mode.
df -h
For a more focused view of which directories are consuming the most space, use du. This shows the top space consumers under /var, where logs, databases, and application data typically live.
du -sh /var/* | sort -rh | head -20
Checking Inode Usage
A filesystem can appear to have space available but still refuse writes because it has run out of inodes. Inodes are filesystem metadata entries, one per file. If you have millions of small files from mail queues, session stores, or poorly managed caches, you can exhaust inodes before you exhaust disk space. Check inode usage with df -i. A high percentage in the IUse% column is just as serious as high disk usage and causes identical failure symptoms.
df -i
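When df -i shows a high IUse%, the next question is which directory holds all those files. A minimal sketch of one way to rank inode consumers, counting files per subdirectory (the helper name inode_hogs and the /var starting point are illustrative, not a standard tool):

```shell
# Sketch: rank subdirectories by file count to locate inode consumers.
# Point it at whichever filesystem df -i flags as nearly full.
inode_hogs() {
  for dir in "$1"/*/; do
    [ -d "$dir" ] || continue
    # Count entries without crossing filesystem boundaries.
    printf '%s\t%s\n' "$(find "$dir" -xdev 2>/dev/null | wc -l)" "$dir"
  done | sort -rn | head -10
}

inode_hogs /var
```

Directories full of tiny session or cache files will float to the top even when du shows them using almost no space.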
Using findmnt to Map Your Filesystems
If you run a server with multiple mount points for /var, /tmp, /home, or /data, the findmnt command gives you a clean picture of the whole layout, including size, used space, and available space per mount point. This is particularly useful when you suspect disk saturation but standard df output is harder to parse across many partitions.
findmnt -D
Setting Up Early Detection Without Extra Tools
You do not need a full monitoring stack to prevent disk full outages on your server. A simple cron job that checks disk usage and emails you when thresholds are crossed is often enough for small environments. Schedule it to run every few hours. On a partition with normal growth, that typically gives you a warning window of 24 to 48 hours before things become critical.
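A minimal sketch of such a check, assuming mail(1) is configured on the host; the 85% threshold and the helper name check_usage are illustrative choices:

```shell
# Sketch: report filesystems at or above a usage threshold.
# check_usage reads `df -P` output on stdin; the threshold is its argument.
check_usage() {
  awk -v limit="$1" 'NR > 1 {
    sub(/%/, "", $5)                      # strip the % from the Use% column
    if ($5 + 0 >= limit) print $6 " is at " $5 "%"
  }'
}

# From cron, pipe any warnings to mail(1) or your alerting tool, e.g.:
#   df -P | check_usage 85 | mail -s "disk warning" you@example.com
df -P | check_usage 85
```

Using df -P (POSIX output format) keeps the column positions stable even when device names are long enough to wrap the default output.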
Watching Growth Trends, Not Just Current State
Checking current usage is reactive. What is more useful is understanding how quickly a partition is filling. A filesystem at 70% growing 2% per week is fine. One that grew 15% in the last 48 hours needs immediate investigation. To track this, log df output to a file on a daily or hourly cron schedule. Over time you build a record you can look back at when something goes wrong, or before it goes wrong.
df -h >> /var/log/disk_usage_history.log
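One limitation of appending raw df output is that entries are hard to tell apart later. A small sketch that stamps each snapshot with a timestamp (the log_df helper is illustrative; the demo writes under /tmp, while a cron job would point at a path like the one above):

```shell
# Sketch: append a timestamped df snapshot so growth can be read from the log.
log_df() {
  {
    printf '=== %s ===\n' "$(date -u +%FT%TZ)"  # UTC timestamp header
    df -P                                        # stable, parseable columns
  } >> "$1"
}

# Demo path; in cron, point this at something like /var/log/disk_usage_history.log
log_df /tmp/disk_usage_history.log
tail -5 /tmp/disk_usage_history.log
```

With timestamps in place, comparing the last two snapshots for a given mount point is a quick grep, and "15% in 48 hours" becomes something you can actually measure.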
What to Do When a Partition Is Getting Full
When you identify disk issues early, at around 80-85%, you have time to act without pressure. Here is a practical response sequence:
- Find the largest consumers with du -sh /* | sort -rh | head -20
- Check log directories, since /var/log is a common culprit on verbose applications or systems where log rotation is misconfigured
- Look for stale temporary files in /tmp and /var/tmp, which can accumulate from failed processes or incomplete uploads
- Check database binary logs, since MySQL binary logs can grow significantly if not purged automatically
- Identify old backups or archive files, since backup scripts sometimes write locally before uploading and can leave large files behind
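Several of the steps above come down to finding large or stale files. A hedged sketch of both searches (the find_large helper and the 500M and 7-day thresholds are illustrative; tune them to the host):

```shell
# Sketch: locate large files on one filesystem without crossing mounts.
find_large() {
  # $1 = starting path, $2 = size threshold (e.g. 500M)
  find "$1" -xdev -type f -size +"$2" 2>/dev/null
}

# Large files under /var, where logs, databases, and backups usually live:
find_large /var 500M | head -10

# Temp files untouched for more than 7 days, a common leftover from
# failed processes or incomplete uploads:
find /tmp /var/tmp -xdev -type f -mtime +7 2>/dev/null | head -10
```

The -xdev flag keeps find on a single filesystem, so a search rooted at /var will not wander into other mounts and misattribute their usage.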
In most cases, one of these categories is the source. Finding it at 80% means a clean resolution. Finding it at 99% means the same work under pressure, with services potentially already failing.
A Real-World Example: The Log Directory That Kept Growing
A team runs a web application with verbose logging enabled. Logs rotate weekly. For months, disk usage sits at 40%. Then a new feature ships that generates more log output. Nobody notices the growth rate has doubled. Three weeks later, the /var partition hits 98% at 2am, MySQL cannot write its binary logs, and replication breaks.
The fix takes five minutes: clear old logs, compress the rest, fix rotation frequency. But the incident takes four hours to diagnose because nobody connected the MySQL error to disk space immediately. This is the classic profile of a disk saturation problem: slow growth, invisible until a threshold, then sudden cascading failure. Detecting the trend, not just the current state, is what lets you intervene in week two instead of week four.
Wrapping Up
A disk full warning on Linux is a signal, not just an error. Handled early, it is a five-minute fix. Handled late, it is an incident. The tools to detect disk issues before they become outages are already on your system: df, du, a cron job, and a log file. What matters is building the habit of checking trends, not just snapshots.
If you are looking for a structured way to track storage trends alongside other infrastructure metrics over time, you can learn how disk saturation fits into a broader approach with Infrastructure Health Reporting.