How Do I Handle Really Big Disks?

I’ll begin by saying, “differently.”  At this time, VMware does not handle disks larger than 2TB: you can’t create a VM with such a disk, nor can you add one to an existing VM.  There is, however, a workaround, and I want to give a shout-out to Ulli Hankeln (see blog roll) for his help.  In an earlier post, I demonstrated how we could build a VM from a split dd image.  To overcome the 2TB limitation, we’re going to use that approach, but with a slight difference in our vmdk file.

This is a work in progress.  There may be a better way, and I’m looking into alternatives and other ways to handle 2TB+ disks.  I am also going to avoid a discussion on GPT (GUID Partition Table) disks and UEFI (Unified Extensible Firmware Interface).  You can find plenty of discussions on those topics, and I suggest that you look them up if, for example, your system can’t recognize a 2TB+ disk.  I’m going to describe the procedure now, and try to address some anticipated questions later.

First, let’s go back and look at the vmdk file that we created for use with a typical split dd image.  Note the parameter createType=”monolithicFlat” in the first section.  My friend, Ulli, has a great reference on vmdk file parameters at http://sanbarrow.com/vmdk-basics.html#what.  The createType relates to the type of disk, and monolithicFlat generally describes a single, whole disk.  However, we “cheated” a little bit in the previous, split dd image scenario, as it really didn’t matter that we presented a split image as a whole disk.  Each extent in the graphic describes one disk (image) segment, and its size is expressed in sectors (512 bytes per sector here).  (I omitted listing every segment for the sake of brevity.)

Split vmdk
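
To make that concrete, here’s a minimal sketch of what such a descriptor looks like.  The segment names, sector counts, CID values, and geometry below are made-up placeholders, not the actual values from my earlier post:

    # Disk DescriptorFile
    version=1
    CID=fffffffe
    parentCID=ffffffff
    createType="monolithicFlat"

    # Extent description: one RW line per image segment,
    # sized in 512-byte sectors (these counts are hypothetical)
    RW 4192256 FLAT "image.001" 0
    RW 4192256 FLAT "image.002" 0
    # ...one line per remaining segment; the last is usually shorter

    # The Disk Data Base
    ddb.virtualHWVersion = "4"
    ddb.geometry.cylinders = "522"
    ddb.geometry.heads = "255"
    ddb.geometry.sectors = "63"
    ddb.adapterType = "ide"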

Before we create a vmdk file for our 2TB+ disk, we’ll create a segmented dd image of the medium.  You must keep each segment under 2TB.  I used a 3TB disk, and here’s a screenshot of my segments:

Split image file
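
If you happen to be working from a Linux box rather than a Windows imaging tool, something along these lines would produce comparable raw segments with GNU dd and split (the device name and output prefix are assumptions, not my actual setup):

    # read the 3TB drive and split it into 125 GiB raw segments
    dd if=/dev/sdb bs=4M conv=noerror,sync status=progress \
      | split --bytes=125G --numeric-suffixes=1 --suffix-length=3 - bigdisk.
    # yields bigdisk.001, bigdisk.002, ... each 134,217,728,000 bytes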

You may note that I compressed my segments with NTFS compression.  Don’t use NTFS Sparse compression (another handy feature offered by X-Ways Forensics), as the resulting vmdk won’t work, and sparse storage really helps only if the disk has many zero-bytes anyway.  Note that each of my segments is 125GB.  We use Size, not Size on disk, because compression shrinks only the on-disk footprint; the extent must reflect the logical size.  Let’s do the math.  134,217,728,000 / 512 = 262,144,000 sectors per segment.  So, we have the value for each extent in our vmdk file, except for the last segment (do the math).
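
For anyone who wants to check that arithmetic at the command line, the conversion is just the byte count divided by the sector size:

    # 125 GiB segment -> sectors per extent (512-byte sectors)
    $ echo $((134217728000 / 512))
    262144000
    # repeat with the last segment's exact byte count
    # (its Size, not Size on disk) to get the final extent value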


BigDisk vmdk file

You’ll note the createType right away (it’s highlighted in the screenshot): twoGbMaxExtentFlat.  This was the type that I was able to use for large disks, and it also works for our smaller split disks.  The disk that I used was not a system disk, so we won’t create a VM from our vmdk; instead, we’ll add it to an existing VM.  And by the way, Dana McNeil’s VMDK Creator makes vmdk creation a snap!
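
Here’s a minimal sketch of such a descriptor for a 3TB image in 125GB segments.  I’m assuming a drive of 5,860,533,168 total sectors (a common 3TB geometry) and made-up segment names; plug in your own totals, and remember that the last extent is simply whatever is left over:

    # Disk DescriptorFile
    version=1
    CID=fffffffe
    parentCID=ffffffff
    createType="twoGbMaxExtentFlat"

    # Extent description: 262,144,000 sectors per 125GB segment
    RW 262144000 FLAT "bigdisk.001" 0
    RW 262144000 FLAT "bigdisk.002" 0
    # ...segments .003 through .022 follow the same pattern
    # last extent = 5,860,533,168 - (22 x 262,144,000) = 93,365,168 sectors
    RW 93365168 FLAT "bigdisk.023" 0

    # The Disk Data Base
    ddb.virtualHWVersion = "8"
    ddb.geometry.cylinders = "364801"
    ddb.geometry.heads = "255"
    ddb.geometry.sectors = "63"
    ddb.adapterType = "lsilogic"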

The disk that I added was not a system disk, nor was it associated with my VM’s system.  All of the 2TB+ drives that I’ve seen thus far were used to store stuff, typically videos and graphics.  Although I compressed my image segments substantially, I can’t hope to do so with the average, seasoned disk that I’ll find in the field.  I could achieve even better compression with an E01 image, given the proper compression method.  However, using an E01 requires that we mount the image as a physical disk.  That’s where the problem lies, for the moment.  I have yet to overcome the challenge of creating a VM or virtual disk from a physical disk that’s >2TB.  I’m studying that problem now, and I’ll post again if I find a solution.  Whether we even image such large drives may be questionable.  Perhaps we’ll do our exams on the original disk through a write blocker.  Thanks for tuning in, folks!
