The primary program used to back up PostgreSQL databases is pg_dump. It extracts databases into either script files or archive files. The script files are in a plain text format, containing the SQL commands needed to reconstruct the database in the state it was in at the instant it was saved. That last point is important; the use of MVCC (Multi-Version Concurrency Control) means that the backup runs as a single transaction that "sees" the state of the database as of the moment the backup begins, reflecting all of the transactions that had been COMMITted by that point. As a result, pg_dump can easily take "hot" backups, that is, backups performed while the system is still operational. Doing backups this way does not block other processes using the database, and other processes do not block the backup.
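As a quick sketch (the database and file names here are purely hypothetical, and default connection settings are assumed), a hot backup to a plain SQL script and the corresponding restore look something like this:

    # Dump the database "mydb" to a plain SQL script (a hot backup; no downtime)
    pg_dump mydb > mydb.sql

    # Recreate a database and replay the script into it with psql
    createdb mydb_restored
    psql mydb_restored < mydb.sql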
These backups can be readily manipulated as text using all of the scripting tools traditionally associated with Unix.
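For instance (the table name and the exact patterns are assumptions; the precise layout of a dump varies with the PostgreSQL version), ordinary text tools can slice pieces out of a script dump:

    # List the tables whose data is included in the dump
    grep '^COPY ' mydb.sql

    # Pull out just the definition of one (hypothetical) table
    sed -n '/^CREATE TABLE .*customers /,/^);/p' mydb.sql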
Making this even more useful, there are also options to dump a specific table (-t) and to dump just the database schema (-s), omitting the data.
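A couple of brief sketches of those options, again with invented names:

    # Dump only the schema definitions, with no data
    pg_dump -s mydb > mydb-schema.sql

    # Dump a single table, definition and data
    pg_dump -t customers mydb > customers.sql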
pg_dump also supports binary "archive" formats, based on tar and gzip, that add the capability to select via "random access" which objects are to be restored, using the pg_restore command.
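A rough sketch of how that selective restore works, using the compressed "custom" archive format (the object and database names are, again, hypothetical):

    # Dump in the compressed "custom" archive format
    pg_dump -Fc mydb > mydb.dump

    # List the archive's table of contents
    pg_restore -l mydb.dump

    # Restore just one table from the archive into an existing database
    pg_restore -t customers -d mydb_restored mydb.dump

An edited copy of that table of contents can also be passed back via -L to pick and choose exactly which objects get restored, and in what order.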
In my own experience, the script-based formats have been entirely satisfactory for my purposes, and generally preferable to the "archive" formats. Having the data in text form is vastly more useful in that I can use the whole range of "Unix text tools" to manipulate the data.
There is a third method of backing things up, namely to take a copy of the files and/or filesystem(s) on which the database resides. That would potentially provide faster recovery than any of the pg_dump-based methods, as it simply involves copying files around, whereas pg_restore or psql must recreate tables and indices from scratch, which may involve significant work. But there are three caveats:
The only "safe" way to do this is by performing a "cold" backup, that is, by shutting the database down and performing backups when it is not running.
If you perform filesystem-based backups while the system is "hot", it is virtually guaranteed that you will corrupt the database, and do so in ways that will not be noticeable until much later.
The exception to that is if the database files are all stored on a filesystem that supports some "atomic" form of replication, where you can, at an instant in time, disconnect a replica and be certain that there was no window during which updates could have gone into the WAL without the corresponding updates to the database files having taken place. The sorts of cases where this can "work" include the following:
If you are storing data on RAID-based hardware, and can pull out a disk assembly containing the full database while the system is running, and be certain that it contains both data files and WAL, then that assembly can represent a viable "backup".
On certain versions of UNIX, there are software equivalents to the above scheme. For instance, HP/Compaq Tru64 UNIX has a filesystem called AdvFS that supports a sort of "software RAID" that allows splitting off "clones" of filesystems.
That's actually a little better than the "Hardware RAID" approach, as the "clone" won't have any fsck problems...
Most "high grade" NAS (Network Attached Storage) devices support similar atomic "filesystem cloning" functionality.
In any of these cases, you're conspicuously spending thousands of dollars on the "backup solution", and so can reasonably have pretty high expectations...
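For completeness, here is a minimal sketch of the "cold" filesystem-level backup mentioned above; the data directory path is an assumption that will vary from installation to installation:

    # Stop the server so that no files change during the copy
    pg_ctl -D /var/lib/pgsql/data stop

    # Archive the entire data directory (data files and WAL together)
    tar czf pgdata-backup.tar.gz -C /var/lib/pgsql data

    # Start the server back up
    pg_ctl -D /var/lib/pgsql/data start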