Linux Disk Management: LVM, RAID, and Filesystem Administration
Disk management is one of the most consequential tasks in Linux administration. Get it right at setup time and you will rarely think about it again. Get it wrong and you face a painful migration under pressure when you run out of space, lose a disk, or need to resize a partition on a live production system.
This guide covers the full disk management stack: partitioning, filesystem selection, LVM for flexible volume management, RAID for redundancy, SMART monitoring for early failure detection, and LUKS for disk encryption.
Partitioning with fdisk and parted
fdisk (MBR and GPT)
# List all disks and partitions
fdisk -l
lsblk -f
# Partition a disk interactively
fdisk /dev/sdb
# Common commands inside fdisk:
# g - Create new GPT partition table
# n - New partition
# t - Change partition type
# p - Print partition table
# w - Write changes and exit
# q - Quit without saving
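fdisk's interactive prompts do not script well. For unattended provisioning, sfdisk (same util-linux package) reads a declarative layout instead, applied with `sfdisk /dev/sdb < layout.sfdisk`. A sketch, assuming util-linux 2.36+ for the named `uefi`/`linux` type aliases:

```text
# layout.sfdisk: 512 MiB EFI system partition, remainder for Linux
label: gpt
size=512MiB, type=uefi
type=linux
```

Omitting `size=` on the last line tells sfdisk to use all remaining space.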
parted (Scriptable, GPT-native)
# Create a GPT partition table and a single partition
parted /dev/sdb -- mklabel gpt
parted /dev/sdb -- mkpart primary ext4 1MiB 100%
# For multiple partitions
parted /dev/sdb -- mklabel gpt
parted /dev/sdb -- mkpart boot fat32 1MiB 512MiB
parted /dev/sdb -- set 1 esp on
parted /dev/sdb -- mkpart root ext4 512MiB 100%
# Show partition table
parted /dev/sdb -- print
Filesystems Compared
| Filesystem | Max Size | Shrinkable | Best For | Key Feature |
|---|---|---|---|---|
| ext4 | 1 EiB | Yes | General purpose, boot partitions | Mature, well-tested, universal support |
| XFS | 8 EiB | No | Large files, databases, high throughput | Excellent parallel I/O, low fragmentation |
| Btrfs | 16 EiB | Yes | Snapshots, Docker, flexible storage | COW snapshots, checksums, compression |
| ZFS | 256 ZiB | No | Enterprise storage, NAS, critical data | Pooled storage, RAIDZ, data integrity |
# Create filesystems
mkfs.ext4 /dev/sdb1
mkfs.xfs /dev/sdb1
mkfs.btrfs /dev/sdb1
# With options
mkfs.ext4 -L data -m 1 /dev/sdb1 # Label, 1% reserved (default 5%)
mkfs.xfs -L data /dev/sdb1
mkfs.btrfs -L data -m single /dev/sdb1 # Single metadata copy
Btrfs: Subvolumes and Snapshots
# Create subvolumes (like lightweight partitions)
btrfs subvolume create /mnt/@
btrfs subvolume create /mnt/@home
btrfs subvolume create /mnt/@var
btrfs subvolume create /mnt/@snapshots
# Mount with subvolume
mount -o subvol=@,compress=zstd /dev/sdb1 /mnt
# Create a snapshot
btrfs subvolume snapshot / /.snapshots/$(date +%Y%m%d)
# Create a read-only snapshot
btrfs subvolume snapshot -r / /.snapshots/$(date +%Y%m%d)-readonly
# List subvolumes
btrfs subvolume list /
# Show filesystem usage
btrfs filesystem usage /
# Enable compression retroactively (caution: defragmenting unshares
# extents with existing snapshots, which can balloon disk usage)
btrfs filesystem defragment -r -czstd /
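Dated snapshots like the ones above accumulate, so most setups prune old ones on a timer. A minimal sketch of the selection step — `prune_list` is a hypothetical helper, and `head -n -N` assumes GNU coreutils:

```shell
# prune_list: read date-sortable snapshot names on stdin and print all
# but the newest N (everything printed is a candidate for deletion)
prune_list() {
  sort | head -n -"$1"
}

# Delete all but the 7 newest snapshots under /.snapshots:
# ls /.snapshots | prune_list 7 | xargs -r -I{} btrfs subvolume delete /.snapshots/{}
```

Because the snapshot names sort chronologically, plain `sort` is enough to order them.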
LVM: Logical Volume Manager
LVM adds a layer of abstraction between physical disks and filesystems. It lets you resize volumes, create snapshots, and span multiple disks without worrying about physical partition boundaries.
LVM Concepts
- PV (Physical Volume): A disk or partition marked for LVM use
- VG (Volume Group): A pool of storage from one or more PVs
- LV (Logical Volume): A virtual partition carved from a VG
# Create Physical Volumes
pvcreate /dev/sdb1
pvcreate /dev/sdc1
# Create a Volume Group
vgcreate vg_data /dev/sdb1 /dev/sdc1
# Create Logical Volumes
lvcreate -n lv_app -L 50G vg_data
lvcreate -n lv_db -L 100G vg_data
lvcreate -n lv_docker -l 100%FREE vg_data # Use remaining space
# Format and mount
mkfs.ext4 /dev/vg_data/lv_app
mkfs.xfs /dev/vg_data/lv_db
mkfs.ext4 /dev/vg_data/lv_docker
mount /dev/vg_data/lv_app /opt/app
mount /dev/vg_data/lv_db /var/lib/postgresql
mount /dev/vg_data/lv_docker /var/lib/docker
Resizing LVM Volumes
# Extend a Logical Volume (while mounted!)
lvextend -L +20G /dev/vg_data/lv_app
resize2fs /dev/vg_data/lv_app # ext4
xfs_growfs /dev/vg_data/lv_db # XFS
# Or in one command
lvextend -r -L +20G /dev/vg_data/lv_app # -r auto-resizes filesystem
# Shrink a volume (ext4 only; XFS cannot shrink). The volume must be
# unmounted, and the order matters: shrink the filesystem first, then the LV
umount /opt/app
e2fsck -f /dev/vg_data/lv_app
resize2fs /dev/vg_data/lv_app 40G
lvreduce -L 40G /dev/vg_data/lv_app
mount /dev/vg_data/lv_app /opt/app
# Add a new disk to the Volume Group
pvcreate /dev/sdd1
vgextend vg_data /dev/sdd1
# Check status
pvs # Physical volumes
vgs # Volume groups
lvs # Logical volumes
pvdisplay
vgdisplay
lvdisplay
LVM Snapshots
# Create a snapshot (useful for backups)
lvcreate -s -n lv_app_snap -L 10G /dev/vg_data/lv_app
# Mount the snapshot read-only
mount -o ro /dev/vg_data/lv_app_snap /mnt/snapshot
# Backup from the snapshot
tar czf /backups/app_$(date +%Y%m%d).tar.gz -C /mnt/snapshot .
# Remove the snapshot when done
umount /mnt/snapshot
lvremove /dev/vg_data/lv_app_snap
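The four steps above are worth wrapping in a function so the snapshot is removed even when the backup fails partway. A sketch — `snapshot_backup` is a hypothetical helper, and setting `RUN=echo` turns it into a dry run:

```shell
# snapshot_backup VG LV MOUNTPOINT ARCHIVE
# Create a temporary LVM snapshot, tar it up, and always clean up after.
# Set RUN=echo to print the commands instead of executing them.
snapshot_backup() {
  vg=$1 lv=$2 mnt=$3 out=$4
  ${RUN:-} lvcreate -s -n "${lv}_snap" -L 10G "/dev/$vg/$lv" &&
    ${RUN:-} mount -o ro "/dev/$vg/${lv}_snap" "$mnt" &&
    ${RUN:-} tar czf "$out" -C "$mnt" .
  rc=$?
  # Cleanup runs regardless of whether the backup succeeded
  ${RUN:-} umount "$mnt"
  ${RUN:-} lvremove -fy "/dev/$vg/${lv}_snap"
  return $rc
}

# Dry run:
# RUN=echo snapshot_backup vg_data lv_app /mnt/snapshot /backups/app.tar.gz
```

Capturing `$?` before the cleanup steps preserves the real exit status for whatever calls the function (cron, a backup orchestrator, etc.).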
RAID with mdadm
| RAID Level | Min Disks | Redundancy | Usable Capacity | Use Case |
|---|---|---|---|---|
| RAID 0 | 2 | None | 100% | Performance only (scratch space) |
| RAID 1 | 2 | 1 disk failure | 50% | Boot drives, critical data (small) |
| RAID 5 | 3 | 1 disk failure | (N-1)/N | General storage with redundancy |
| RAID 6 | 4 | 2 disk failures | (N-2)/N | Large arrays, high reliability |
| RAID 10 | 4 | 1 per mirror | 50% | Databases, high I/O workloads |
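The usable-capacity column reduces to simple arithmetic. A quick sketch for equal-size disks — `raid_usable` is a hypothetical helper, and the RAID 10 case assumes two-way mirrors:

```shell
# raid_usable LEVEL NUM_DISKS DISK_SIZE_GB -> usable capacity in GB
raid_usable() {
  level=$1 n=$2 size=$3
  case "$level" in
    0)  echo $(( n * size )) ;;          # striping, no redundancy
    1)  echo "$size" ;;                  # full mirror
    5)  echo $(( (n - 1) * size )) ;;    # one disk of parity
    6)  echo $(( (n - 2) * size )) ;;    # two disks of parity
    10) echo $(( n / 2 * size )) ;;      # striped two-way mirrors
    *)  echo "unsupported level: $level" >&2; return 1 ;;
  esac
}

raid_usable 5 4 2000   # four 2 TB disks in RAID 5 -> 6000
```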
# Create a RAID 1 array
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Create a RAID 5 array
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
/dev/sdb1 /dev/sdc1 /dev/sdd1
# Create a RAID 10 array
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
/dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
# Save RAID configuration so the array assembles at boot
# (the path is /etc/mdadm/mdadm.conf on Debian/Ubuntu)
mdadm --detail --scan >> /etc/mdadm.conf
# Check RAID status
cat /proc/mdstat
mdadm --detail /dev/md0
# Add a hot spare
mdadm --add /dev/md0 /dev/sdf1
# Replace a failed disk
mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm --manage /dev/md0 --add /dev/sdg1
# Monitor array health
mdadm --monitor --scan --daemonise /dev/md0
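For scripted checks, the member map in /proc/mdstat (e.g. `[UU]`) is easy to parse: an underscore marks a failed or missing disk. A sketch — `degraded_arrays` is a hypothetical helper:

```shell
# degraded_arrays: read /proc/mdstat on stdin and print the name of each
# array whose member map (e.g. [U_]) contains an underscore
degraded_arrays() {
  awk '/^md/ { name = $1 } /\[[U_]*_[U_]*\]/ { print name }'
}

# Usage: degraded_arrays < /proc/mdstat
```

An empty result means every array has all members present; anything printed deserves immediate attention.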
SMART Monitoring
# Install smartmontools
apt install smartmontools # Debian/Ubuntu
pacman -S smartmontools # Arch
# Check if SMART is supported
smartctl -i /dev/sda
# Enable SMART
smartctl -s on /dev/sda
# Quick health check
smartctl -H /dev/sda
# Full SMART data
smartctl -a /dev/sda
# Key attributes to watch:
# Reallocated_Sector_Ct - Bad sectors remapped (rising = failing)
# Current_Pending_Sector - Sectors awaiting reallocation
# Offline_Uncorrectable - Sectors that cannot be fixed
# UDMA_CRC_Error_Count - Cable/connection problems
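Those attributes can be pulled out of `smartctl -A` output mechanically. A sketch — `check_smart_attrs` is a hypothetical helper that flags any watched attribute with a non-zero raw value:

```shell
# check_smart_attrs: read `smartctl -A /dev/sdX` output on stdin and print
# each watched attribute whose RAW_VALUE (last column) is non-zero
check_smart_attrs() {
  awk '$2 ~ /^(Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error_Count)$/ && $NF + 0 > 0 { print $2 "=" $NF }'
}

# Usage: smartctl -A /dev/sda | check_smart_attrs
```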
# Run a short self-test
smartctl -t short /dev/sda
# Run a long self-test (takes hours)
smartctl -t long /dev/sda
# Enable automatic monitoring daemon
systemctl enable smartd
systemctl start smartd
# Configure alerts in /etc/smartd.conf
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m [email protected] -M exec /usr/local/bin/smart-alert.sh
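The `-M exec` script is whatever you want it to be; smartd passes the details in environment variables such as SMARTD_DEVICE, SMARTD_FAILTYPE, and SMARTD_MESSAGE. A minimal sketch of what /usr/local/bin/smart-alert.sh might contain (shown as a function for clarity):

```shell
# Body for the -M exec hook: format the failure details and send them to
# the system log (swap `logger` for your mail/pager command of choice)
smart_alert() {
  msg="SMART alert on ${SMARTD_DEVICE:-unknown}: ${SMARTD_MESSAGE:-no details}"
  echo "$msg"
  logger -t smartd-alert "$msg" 2>/dev/null || true
}
```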
Disk Encryption with LUKS
# Encrypt a partition
cryptsetup luksFormat /dev/sdb1
# Open the encrypted volume
cryptsetup open /dev/sdb1 encrypted_data
# The decrypted device is now at /dev/mapper/encrypted_data
mkfs.ext4 /dev/mapper/encrypted_data
mount /dev/mapper/encrypted_data /mnt/secure
# Close when done
umount /mnt/secure
cryptsetup close encrypted_data
# Auto-mount at boot via /etc/crypttab
# /etc/crypttab:
encrypted_data UUID=xxx-xxx-xxx none luks
# /etc/fstab:
/dev/mapper/encrypted_data /mnt/secure ext4 defaults 0 2
# Add a backup key
cryptsetup luksAddKey /dev/sdb1
# Show LUKS info
cryptsetup luksDump /dev/sdb1
fstab Configuration
# /etc/fstab format:
#
# Use UUIDs (stable across device name changes)
# Find UUIDs: blkid
UUID=abc-123 / ext4 defaults,noatime 0 1
UUID=def-456 /boot vfat defaults 0 2
UUID=ghi-789 /home ext4 defaults,noatime,nosuid 0 2
# LVM volumes
/dev/vg_data/lv_app /opt/app ext4 defaults,noatime 0 2
/dev/vg_data/lv_docker /var/lib/docker ext4 defaults,noatime 0 2
# tmpfs
tmpfs /tmp tmpfs defaults,noatime,nosuid,nodev,size=4G 0 0
# NFS mount
server:/share /mnt/nfs nfs4 defaults,_netdev 0 0
# Validate fstab without rebooting
findmnt --verify   # checks fstab syntax and targets without mounting anything
mount -a           # mounts every fstab entry not already mounted
Proper disk management is especially important for Docker hosts, where /var/lib/docker can grow rapidly with image layers, volumes, and container logs. Using LVM or Btrfs for the Docker data directory gives you the flexibility to expand storage without downtime. Platforms like usulnet help monitor disk usage across your Docker hosts, alerting you before volumes fill up.
The disk management rule: Use LVM unless you have a specific reason not to. The flexibility of online resizing, snapshots, and multi-disk spanning is worth the small overhead. Combine it with RAID for redundancy and SMART monitoring for early warning, and your storage will be resilient and manageable.