This is the fifth installment of a series of posts to document “How I recovered my Linux systems…” See the first post and second post for the preliminaries, the third post for diagnostics and disassembly, the fourth post for reassembly and hardware checkout…
Disk partitioning is one of those things that most of us take for granted — it’s usually done for us by whoever built our PC and installed the operating system for the first time. It’s a foundational concept that applies equally to Linux, Mac, Windows and other PC operating system environments, and yet, once “somebody else” does it for a particular PC, the owner/user accepts it and almost never monkeys with it any further. Why? Changing a disk’s partitioning often (but not always) involves completely re-formatting an existing partition, which effectively deletes all the data files on that partition — you’d better have great, verified backups at hand (for example, local backup directories on another at-home system, or a remote-in-the-cloud backup service like CrashPlan.com) when you undertake any disk re-partitioning exercise.
As a result, these considerations make disk partitioning seem dangerous and difficult to the uninitiated — fortunately, it’s really not. All it takes is intention, care and planning; generally, you’re not going to “delete stuff by mistake,” since you’ve got to take several deliberate steps, ultimately clicking an “Apply these changes” button, before anything destructive happens to your disk drive. Using a partition editor for “just looking” is harmless — you’d practically have to act with malice aforethought to do any real damage.
But a brand-new disk drive comes out of the box with no partitioning information whatsoever — just one big unformatted expanse of storage. I’ve got to “partition the drive”… which requires a bit of “design thinking”: effectively, planning how I want this 2TB disk divided up into functional “logical disk drives” (the partitions) for use by my soon-to-be-renewed Linux installation.
Windows installations usually just partition a drive into one or two “logical drives”: one for “C:\” and maybe one for a “D:\” where “recovery savesets” of the operating system can be stashed for eventual undos (i.e., “Recover Windows to a previous known-good state…”). Sometimes, additional partitions are created on a Windows PC for dual-booting, where you can install Linux right alongside Windows on the same disk, and then boot one or the other for use.
I’m not dual-booting… I’m re-installing Ubuntu/Linux to use the whole 2TB enchilada. So, how best to partition and divvy things up? Here’s where a bunch of searching (for “howto disk partition”) on the ‘Net turns up a lot of partial information, half-truth hints, and even some mythology — much of it suggests what “could be done,” but there’s not much that’s definitive about a specific situation… especially my specific case. Partitioning is so fundamental that there are lots of “right answers,” each one situation-dependent; there’s no single “only right way to do it,” especially for Linux.
The bare minimum partitioning that Linux can get away with is two partitions: a “file system” partition (where the operating system and all data files go), and a Linux “swap space” partition (strongly recommended, if not strictly required, on every Linux installation). This is a basic all-in-one approach. A bit more consideration recommends a more nuanced, multi-partition approach: in addition to the swap partition, a (very small) boot partition; a (comparatively small) “main” partition for the operating system installation itself (/ or root); and additional partitions for /var, a “variable” directory tree where things like system log files, temp files, etc., go; /usr, where installed application software goes; and /home, where user directory trees are stored. Per the “Disk partitioning” Wikipedia article (and corroborated by other sources), “[...A multi-partitioning] scheme has several advantages:
- “If one file system [partition] gets corrupted, the data outside that file system/partition may stay intact, minimizing data loss.
- “Specific file systems can be mounted with different parameters, e.g. read-only, or with the execution of setuid files disabled.
- “A runaway program that uses up all available space on a non-system file system does not fill up critical file systems.”
To this, I’d add that multi-partitioning, especially for the /home directory tree, makes future system maintenance, like operating system upgrades and file recoveries, much easier, since users’ data directory trees are not “tangled up with” — that is, not sharing a file system partition with — the operating system’s files. It makes data backups, and certainly data recovery, much easier. It also makes operating system reinstallation — a start-from-scratch or “bare metal” install — a whole lot easier.
A modern partition editor like gparted, together with the installer’s custom-partitioning step, gives you the option of not re-formatting individual partitions — like the separate one containing your /home directory — during operating system installations or upgrades. This, as much as or more than the other reasons given, makes multi-partitioning the right way to go for a Linux (re)installation; again, the specific partitioning details will differ from system to system, but the general layout principles make sense.
In particular, what a separate /home partition means is that, the very next time you have to fully re-install your operating system from scratch (bare metal, as opposed to just an “upgrade”), you can catch the installation at the “how to partition your drives” step, go to the “Custom settings” option, and tell the installer not to format the /home partition by unchecking a box — this preserves the data on that partition (your entire /home directory tree), saving you the time, headache and energy of restoring all that stuff from local or cloud backups. Hint: You want to do it this way whenever possible.
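On a running multi-partition system, the split shows up in /etc/fstab, one mount entry per partition. Here’s a hypothetical fragment for a layout along these lines; the device names and options are illustrative placeholders, not taken from any actual system:

```
# Hypothetical /etc/fstab fragment: each major directory tree on its own partition
/dev/sda2  /      ext4  errors=remount-ro  0  1
/dev/sda4  /var   ext4  defaults           0  2
/dev/sda6  /usr   ext4  defaults           0  2
/dev/sda5  /home  ext4  defaults           0  2
/dev/sda3  none   swap  sw                 0  0
```

A bare-metal re-install can then safely re-format every one of those partitions except /home’s.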
I had been planning to re-partition PC-A even before the failed disk drive forced my hand, mostly because the Ubuntu upgrade recommendations from 10.04 (installed on my failed drive) to 12.04 LTS had advised a full “bare metal” re-installation, rather than just an “upgrade in place.” That original intention included creating a new, separate partition for /home. Now, with the new 2TB drive installed and spinning, it’s time to implement this scheme.
As I did my partitioning research, I (re)discovered that gparted supports the creation of two distinct types of partition tables (on-disk structures):
- The legacy MBR partition table, or “msdos,” which limits the number of “primary partitions” to four (4), allows additional “logical partitions” to be created within one “extended” partition, and maxes out at 2TB of addressable disk capacity; or…
- The newer GUID partition table, or “gpt,” which both supports the much larger terabyte-plus disk drives and allows a far larger number of partitions (128 by default), rendering logical partitions unnecessary.
Just a bit more online research assured me that gpt works with GRUB (the GNU GRand Unified Bootloader) — provided, on a BIOS system like mine, that you give GRUB a tiny “bios_grub” partition to live in — and with any/all contemporary Linux file system types (e.g., ext3 and ext4, btrfs, reiserfs, etc.). Clearly, gpt would be perfect for my new partitioning needs.
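If you’re ever unsure which table type an existing disk already carries, parted will tell you. A quick sketch, assuming parted is installed and using /dev/sda as a placeholder device name:

```shell
# Report the partition-table type ("msdos", "gpt", ...) of a disk.
# DISK is a placeholder -- check lsblk for your actual device name.
DISK="${DISK:-/dev/sda}"
if command -v parted >/dev/null && [ -b "$DISK" ]; then
    sudo parted "$DISK" print | awk -F': ' '/Partition Table/ {print $2}'
else
    echo "parted not available or $DISK not present"
fi
```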
Finally… it’s time to reformat (partition) my new disk drive and to (re)install the Ubuntu upgrade. But here’s where I hit my next snag: Last April, when Canonical (the company which packages and distributes the various Ubuntu-family distros) had released version 12.04 LTS (code-named “Precise Pangolin”, Long-Term Support), I had managed to download the distro version for 32-bit processors (the obvious and frequent choice), and had burned it to a CDROM disk — but I had not yet gotten around to downloading/burning any other variants, specifically the Ubuntu Studio 64-bit v12.04 which I wanted for this PC-A.
But I know that the 12.04 distro CDROM that I had burned was a “live bootable” media, and that it could be used not only for “Ubuntu demonstration and evaluation,” but also for a variety of system initialization, installation and recovery tasks — specifically, I could use it to run the gparted tool to partition my new drive before installation. Furthermore, even though PC-A is a 64-bit AMD processor architecture, I could go ahead and perform an intermediate (transitory) installation of the 32-bit v12.04 on it, then use that to download my ultimate target, the 64-bit Ubuntu Studio distro, and then re-install that right back over the intermediate install.
A bit round-about, to be sure, but in practice, this didn’t take nearly as long to do as it takes to write about it. Here’s the basic outline:
1) Insert Ubuntu 12.04 live-bootable distro CDROM into drive and power-up/boot it. Once up, I’m auto-logged in as root (privileged super-user)… find gparted (from menu) and run it. Without getting into all the messy details — it’s an “intuitive GUI” after all, right? — here’s how I ended up partitioning the new drive:
Disclosure: I did partition and re-partition the drive a few times as I experimented and fine-tuned the individual partition sizes… Since my 2TB drive was brand-new and virgin, I could safely do this without fear of “losing anything,” since there was nothing on it to lose! Elapsed time to partition, including time to goof around and experiment: about 40-to-50 minutes.
How did I arrive at these partition sizes? Well, to be honest, mostly by SWAG (scientific-wild-assed-guessing): /dev/sda1 (“unknown”, flags bios_grub) is actually the “boot partition”, where GRUB stashes its core boot code (on a gpt disk, this little bios_grub partition takes over the job of the old MBR boot area) — 5MB for this is no doubt gross overkill, but hey, on a 2TB drive, it’s insignificant. /dev/sda2 (/) is the “Linux root directory filesystem,” and is where nearly all of the operating system is installed; ~32GB is again perhaps extravagant, but seemed like a nice round number. /dev/sda3 (swap) is, per the traditional rule of thumb, supposed to be “at least twice the size of your physical RAM,” but larger doesn’t hurt; ~10GB is again extravagant, but only a tiny bite out of 2TB. /dev/sda4 (/var) is where “variable” operational files — log files, spool files, email spools, dumps and temporary files — go; general Linux advice is to “give yourself plenty of space” for this storage, and ~40GB seems like “plenty.”
Lastly, /dev/sda5 (/home) and /dev/sda6 (/usr) get to pretty much split what’s left over, at 800GB and 1TB respectively. This gives me lots of room for my user and archival files (/home), as well as for installed applications (many of which go into /usr/…) and for backups: home-brewed compressed tar-files, rsync’d directory trees, and allocations for local-backups with CrashPlan.
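For the record, the same layout can be scripted with parted instead of clicked through in gparted’s GUI. This is just a sketch of the scheme above: the device name is a placeholder, and the leading `echo` makes it a harmless dry run that only prints the command. Anyone borrowing it should double-check every boundary before removing the echo.

```shell
# Dry-run sketch: the 2TB layout described above, as one parted invocation.
# /dev/sda is a placeholder device; the 'echo sudo' prefix prints the
# command instead of running it -- remove 'echo' to actually partition.
DISK="/dev/sda"
CMD="parted --script $DISK -- \
mklabel gpt \
mkpart grub 1MiB 6MiB set 1 bios_grub on \
mkpart root ext4 6MiB 32GiB \
mkpart swap linux-swap 32GiB 42GiB \
mkpart var ext4 42GiB 82GiB \
mkpart home ext4 82GiB 882GiB \
mkpart usr ext4 882GiB 100%"
echo sudo "$CMD"
```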
2) Once the partitioning was done, I just went ahead and installed the 32-bit version of 12.04 (it runs fine on a 64-bit system, thanks to the backward-compatible design that hardware engineers built into this processor architecture). Elapsed time to do this intermediate installation: ~35 minutes.
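Incidentally, you can confirm from a running 32-bit system that the underlying CPU is 64-bit-capable: the “lm” (long mode) flag in /proc/cpuinfo is the tell. A quick sketch:

```shell
# A CPU that lists the "lm" (long mode) flag in /proc/cpuinfo can run a
# 64-bit kernel, even if the currently installed OS is 32-bit.
if grep -qw lm /proc/cpuinfo; then
    echo "CPU is 64-bit capable"
else
    echo "32-bit-only CPU"
fi
```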
3) With the intermediate Ubuntu rebooted, up-and-running and connected again to the Internet, I could now go back to the Ubuntu Studio website to download the 64-bit, Studio-specific 12.04 distro. Once that’s burned to DVD+R (this distro’s too large for a CDROM), I’m ready to (re)install for the final time… Elapsed for this step: about 70 minutes.
4) Finally… finally! I insert the newly-burned Studio DVD+R distro, boot it, and (re)install 64-bit Ubuntu Studio 12.04… which goes right over the top of the intermediate 12.04 installation, and right into the new partitioning scheme on my neat new 2TB disk drive. Mission accomplished. Elapsed time: another ~30 minutes. Hooray.
The Ubuntu Studio distro team decided several months ago to join the anti-Unity crowd, and switched their default GUI (user interface) environment from GNOME to Xfce. I could switch things back to GNOME, or go KDE, or even Unity (which I’m using a-okay on my laptop, even as I write this series of posts). But I’ll give Xfce a thorough try-out… I’m kind of a GUI junkie, and I like to get at least semi-experienced with each of them.
And, for the record, I can now admire my new physical partitions, mounted as Linux drives.
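One quick way to get that view from a terminal is to parse /proc/mounts, which exists on any Linux system; a sketch:

```shell
# List each ext4 partition and its mount point by reading /proc/mounts.
awk '$3 == "ext4" {printf "%s mounted on %s\n", $1, $2}' /proc/mounts
```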
So, I’m done… right? Well, pretty much. I’ve got three remaining important tasks:
- A bunch of application software to re-install, things that put my PC-A back into “fighting shape” again, with all my favorite tools, apps and configurations.
- Restoration of nearly 20GB of my own personal data to my /home/lmr/… directory tree. Since this all lived on the old 180GB drive that failed, my next-best and most reliable backup resource is my backups-in-the-cloud on CrashPlan.com.
- Restoration of more than 60GB of personal archives, including our growing collection of digital photos, our precious archive of “Walking A Walk” (and other) radio shows, and (of course) a bunch of MP3 music files. And although this too is safely cached away on CrashPlan, something else quicker occurred to me…
It took several days to re-install the essentials of my old PC-A applications… but there was nothing which was urgent or difficult in doing this. Just install the “can’t-live-without” apps by reviewing my System Logbook (remember the Logbook?), and then do the less-than-essential ones if- and as-needed. Devil’s in the details, as they say…
Restoring my several gigabytes of personal data from the CrashPlan cloud was a surprisingly pleasant experience: Once that application was reinstalled and re-registered (which also turned out to be easy, because LastPass.com stores all of my essential login and account credentials for CrashPlan and more…), it really was mostly a matter of clicking a “Restore this directory…” button, after paying attention to some destination “where should these files go?…” settings. And then waiting several hours (actually, overnight) while it all downloaded — but, and this is neat and impressive, it all came back flawlessly. Wow. I’ll be writing more about CrashPlan later… it’s worth knowing about, and it’s got a lot to recommend it. And I do…!
But, the prospect of waiting for another 60GB of archival data to download over the ‘Net was rather daunting — is there a better way? With a bit more thought, I remembered that all of these archives were safely stored on that other 500GB disk drive that I removed from the system… the one that didn’t fail, but which I chose not to reinstall into PC-A. Could I hook it up without physically reinstalling it in-box, just to grab (copy) those archives? One more trip to the Micro Center electronics store scored me this: an inexpensive SATA-to-USB external disk drive enclosure, less than $30, which will let me connect that disk drive to PC-A (or to any other PC via USB).
Physically installing the 500GB drive in it was really simple: the enclosure kit provided the rails and the screws, as well as the enclosure, electronics and power cord. There was no software to install… Just connected it to a USB port, Ubuntu auto-mounted it (remember, it was and still is a Linux file system drive), and I could then find and copy all those 60GB+ of archive directories and files in one big drag-&-drop operation. Now I’m done…
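For anyone who prefers the terminal to drag-and-drop, the same bulk copy is a few lines of shell. The mount point below is a placeholder; Ubuntu auto-mounts removable drives under /media, so check lsblk (or the file manager’s address bar) for the real path.

```shell
# Sketch: copy the archive trees off the auto-mounted USB drive.
# SRC and DEST are hypothetical paths -- substitute your own.
SRC="${SRC:-/media/old-500gb/home/lmr/archives}"
DEST="${DEST:-$HOME/archives}"
if [ -d "$SRC" ]; then
    mkdir -p "$DEST"
    cp -a "$SRC/." "$DEST/"    # -a preserves modes, times, and symlinks
else
    echo "Source $SRC is not mounted"
fi
```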
So, with all that, my PC-A is now restored to health, with a huge new disk drive, a bright shiny new version of Ubuntu, and all my data back.
But am I really done?… Perhaps with PC-A, at least for this series of posts. But wait! Don’t forget that my dear wife was really grumpy with me because her system, PC-B, has an ailing disk drive too… and I chose to fix PC-A first! At least, all her stuff is backed up on CrashPlan too!…
Next post: Recovering my wife’s PC-B — and it takes on a whole different twist!