ZFS ashift values for Samsung 990 PRO NVMe SSDs

TL;DR – ashift=12 performed noticeably better than ashift=13.

I recently installed Proxmox on a new build and couldn’t find any information about the best ashift value for my new NVMe SSDs. Since I was setting up a brand-new server anyway, I had a chance to do some quick testing. Here’s what I found.

Hardware setup

  • ASUS Pro WS W680-ACE IPMI (without the IPMI card installed)
  • 64GB of DDR5-4800 ECC memory
  • 2x 2TB Samsung 990 PRO NVMe SSDs

The SSDs are installed in the motherboard’s M.2 slots, one in the slot directly attached to the CPU and the other in a slot connected through the W680 chipset.

Software setup

For both tests I did a clean install of Proxmox VE 8.1.3 with both SSDs in a RAID1 (mirror) configuration. All zpool/vdev settings were left at their defaults (compression=lz4, checksum=on, copies=1, recordsize=128K, etc.) except for the ashift value under test. After the installation, I ran the standard apt-get updates to get current, and then installed fio for the testing.

Test setup

The tests I ran are from Jim Salter’s Ars Technica article, which gives good detail on how to use fio to test disk performance. I ran each test 4 times, back-to-back. Here’s the script I used to run the tests:

echo "Test 1 - Single 4KiB random write process"

fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

echo "Test 2 - 16 parallel 64KiB random write processes"

fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1

echo "Test 3 - Single 1MiB random write process"

fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
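
For repeatability, the three invocations can be wrapped in a small harness that runs each one four times back-to-back. This is a sketch, not the exact script I used (the fio flags match the tests above; the RUN_FIO_TESTS guard is just there so the script doesn’t kick off an hour of disk I/O by accident):

```shell
#!/bin/sh
# Sketch: run each fio test four times back-to-back.
# Set RUN_FIO_TESTS=1 to actually execute the (long-running) tests.

run_test() {
    desc="$1"; shift
    echo "=== $desc ==="
    for i in 1 2 3 4; do
        echo "--- iteration $i ---"
        fio "$@"
        rm -f random-write.*   # remove fio's data files between iterations
    done
}

if [ "${RUN_FIO_TESTS:-0}" = "1" ]; then
    run_test "Test 1 - Single 4KiB random write process" \
        --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k \
        --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
    run_test "Test 2 - 16 parallel 64KiB random write processes" \
        --name=random-write --ioengine=posixaio --rw=randwrite --bs=64k \
        --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1
    run_test "Test 3 - Single 1MiB random write process" \
        --name=random-write --ioengine=posixaio --rw=randwrite --bs=1m \
        --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
fi
```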


ashift=12 was faster (on average) than ashift=13 in every test. For Test 1, it was 10.3% (15MB/s) faster. For Test 2, it was a whopping 22.3% (499MB/s) faster. And for Test 3, it was 8.5% (117MB/s) faster. Those are pretty big differences – I was surprised they weren’t lost in the noise. That made it an easy call to set my drives to ashift=12, which also happens to be the common wisdom for drives today.

Test | ashift | Iteration 1 | Iteration 2 | Iteration 3 | Iteration 4 | Average
All results in MB/s
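
For background: ashift is the base-2 logarithm of the smallest block ZFS will write, so ashift=12 means 4096-byte blocks and ashift=13 means 8192-byte blocks, and it’s fixed per vdev at creation time – you can’t change it without rebuilding the pool. A small sketch of the relationship (the zpool command in the comment is illustrative; the Proxmox installer sets ashift for you):

```shell
# ashift = log2(block size): 2^12 = 4096 bytes, 2^13 = 8192 bytes.
# It is set per vdev at creation time, e.g.:
#   zpool create -o ashift=12 rpool mirror /dev/nvme0n1 /dev/nvme1n1
# and cannot be changed afterwards without recreating the vdev.

sector_to_ashift() {
    size="$1"
    ashift=0
    while [ "$size" -gt 1 ]; do
        size=$((size / 2))
        ashift=$((ashift + 1))
    done
    echo "$ashift"
}

echo "512B sectors -> ashift=$(sector_to_ashift 512)"
echo "4KiB sectors -> ashift=$(sector_to_ashift 4096)"
echo "8KiB sectors -> ashift=$(sector_to_ashift 8192)"
```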

Controlling Case Fans Based on Hard Drive Temperature

I’ve always been annoyed that the fans in my server spin all the time (wasting power, making noise) when they’re not needed. The only reason they’re there is to cool my hard drives – which aren’t in use half the time.

When switching to unRAID, I wanted to fix this. There is a plugin (Dynamix System AutoFan) that is supposed to do this, but it didn’t work for me. Reading through several forum posts, I found a shell script that many people use to control their case fans based on their hard drive temperatures. But it wasn’t straightforward to get it operating, so I thought I’d post the steps I followed to get it running.

These instructions are specific to unRAID v6, but may be useful for other Linux installations.

Discover your fan sensor kernel modules

By default, it’s likely that you don’t have the kernel modules installed to give the system access to your fan speed sensors. First we need to set that up.

  1. Install the NerdPack plugin. It’s the easiest way to install Perl, which is needed to discover your sensors.
  2. SSH into your unRAID machine as root.
  3. From the prompt, run sensors-detect. Answer “Yes” (by pressing Enter) to every question except the last prompt – you do not need to automatically generate the config file.

    The output will be pretty long and will end with something that looks like the following. Copy this block of text – you’ll need it in the next step.

    #----cut here----
    # Chip drivers
    modprobe coretemp
    modprobe nct6775
    /usr/bin/sensors -s
    #----cut here----

Load the kernel modules at each boot

The modprobe commands load the specific kernel modules for your hardware, enabling your computer to detect, read, and control these fans. But we want to make sure they run each time you boot your machine – to do that in unRAID, we have to modify the ‘go’ file that runs each time your machine starts.

  1. Open up your unRAID ‘go’ file in an editor
    nano /boot/config/go
  2. Take the lines that start with modprobe from the sensors-detect output and add them to your ‘go’ file. Do not add the ‘/usr/bin/sensors -s’ line.
  3. Save and quit nano (Ctrl-X, Yes)
  4. Reboot your unRAID server. (Or, alternatively, you can run the modprobe commands from the command line to load the kernel modules without rebooting).
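
After step 2, the top of my go file looked roughly like this (the module names came from my sensors-detect output; yours will likely differ):

```shell
#!/bin/bash
# /boot/config/go -- runs at every unRAID boot

# Chip drivers found by sensors-detect (machine-specific)
modprobe coretemp
modprobe nct6775

# ...the rest of the stock go file (starting emhttp, etc.) follows...
```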

After you reboot (or run the modprobe commands), you can run the sensors command from a command prompt which should show you the status of all the temperature sensors your machine knows about.

Figure out which controls what

Next up, you’ll need to figure out which device controls what fan, as well as the minimum power that needs to be applied for a given fan to keep spinning.

Fans are controlled in Linux by assigning a value between 0 and 255 to the Pulse Width Modulated (PWM) fan header (0 is off, 255 is max power).

The pwmconfig tool cycles through all of your fan headers, varying the power output to find out which fans are controlled by each header.

To run it, SSH into your machine and run:

    pwmconfig

When prompted, answer “yes” to generating a detailed correlation.

When prompted to set up the configuration file, say “no”.

The output will be long. What you’re looking for are the following things.

Testing pwm control hwmon2/pwm1 ...
hwmon2/fan1_input ... speed was 1931 now 0
It appears that fan hwmon2/fan1_input
is controlled by pwm hwmon2/pwm1
Would you like to generate a detailed correlation (y)? y
PWM 255 FAN 1925
PWM 240 FAN 1859
PWM 225 FAN 1748
PWM 210 FAN 1648
PWM 195 FAN 1535
PWM 180 FAN 1415
PWM 165 FAN 1306
PWM 150 FAN 1184
PWM 135 FAN 1063
PWM 120 FAN 946
PWM 105 FAN 826
PWM 90 FAN 698
PWM 75 FAN 0
Fan Stopped at PWM = 75
  • The path to the fan controls. Above, the fan I want is controlled by hwmon2/pwm1.
  • The path to the fan monitor. Above, I can see that hwmon2/pwm1 is correlated to hwmon2/fan1_input.
  • The setting (PWM) at which the fan stopped spinning. In this case, it was 75.

Verify the correlation

You likely have two sets of fans (at least) that show up in the output. One of them is probably your CPU fan, which I’d recommend leaving under the control of your BIOS (it’s safer that way).

In order to make sure that you pick the right fan, I recommend doing a quick check. From the command line, you can assign a value to a fan to directly control the speed.

Adjust the hwmon paths below to match the fan you want to control:

root@tower:~# echo 0 > /sys/class/hwmon/hwmon2/pwm1
root@tower:~# cat /sys/class/hwmon/hwmon2/fan1_input

You should also physically check that this shut off the fan you expect it to (i.e. your case fan stopped spinning, not your CPU fan).

Find the minimum fan start speed

Fans have momentum – because pwmconfig works down from the highest speed to the lowest, it found the lowest speed at which the fan would keep spinning once already moving. To find the starting speed, you have to work upwards from 0.

In my case, the “75” stopping PWM was not enough to get the fan to start from 0:

root@tower:~# echo 0 > /sys/class/hwmon/hwmon2/pwm1
root@tower:~# cat /sys/class/hwmon/hwmon2/fan1_input
root@tower:~# echo 75 > /sys/class/hwmon/hwmon2/pwm1
root@tower:~# cat /sys/class/hwmon/hwmon2/fan1_input
root@tower:~# echo 85 > /sys/class/hwmon/hwmon2/pwm1
root@tower:~# cat /sys/class/hwmon/hwmon2/fan1_input
root@tower:~# echo 95 > /sys/class/hwmon/hwmon2/pwm1
root@tower:~# cat /sys/class/hwmon/hwmon2/fan1_input
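
If you’d rather not type the echo/cat pairs by hand, the search can be automated: ramp the PWM up from zero in small steps until the fan reports motion. A sketch (the hwmon paths match my system – adjust for yours; SPIN_WAIT is the settle time in seconds):

```shell
#!/bin/sh
# Sketch: find the minimum PWM value that starts a stopped fan.
# Paths are from my system (hwmon2/pwm1); adjust for yours.
PWM=/sys/class/hwmon/hwmon2/pwm1
RPM=/sys/class/hwmon/hwmon2/fan1_input

find_start_pwm() {
    pwm_file="$1"; rpm_file="$2"
    echo 0 > "$pwm_file"            # stop the fan completely first
    sleep "${SPIN_WAIT:-5}"
    step=0
    while [ "$step" -le 255 ]; do
        echo "$step" > "$pwm_file"
        sleep "${SPIN_WAIT:-3}"     # give the fan a moment to spin up
        rpm=$(cat "$rpm_file")
        if [ "$rpm" -gt 0 ]; then
            echo "Fan starts at PWM=$step ($rpm RPM)"
            return 0
        fi
        step=$((step + 5))
    done
    echo "Fan never started" >&2
    return 1
}

# Only touch the hardware if the control file is actually writable.
if [ -w "$PWM" ]; then
    find_start_pwm "$PWM" "$RPM"
fi
```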

Install a fan monitoring script

Now that we know how the fans are controlled, and the minimum speed to start them, we can plug that into the fan script along with some other inputs.

Get & edit the script

I started with this version of the script and modified it to add a few features. You can download my version or the original. I’ll assume you downloaded mine.

  1. Make a directory to hold your scripts (if you haven’t done so already). From the command line:
    mkdir /boot/config/scripts
  2. Download unraid_array_fan.sh
  3. Move it to /boot/config/scripts
  4. Edit unraid_array_fan.sh with your specific settings discovered above.
    nano /boot/config/scripts/unraid_array_fan.sh
    1. Set NUM_OF_DRIVES to match your number of hard disks. Be sure to edit or comment out the specific device identifiers you want to key off of (i.e. sdb, sdc, etc.). I’d recommend excluding any SSDs.
    2. Set the FAN_LOW_PWM to the ‘Fan stopped at PWM’ reported by pwmconfig
    3. Set FAN_START_PWM to the speed discovered through manual testing.
    4. Set the ARRAY_FAN to match the path of the device you want to control.

To test out the script, you can run it from the command line directly:

    /boot/config/scripts/unraid_array_fan.sh

Set the script to run at boot

The last thing that needs to happen is to make sure the script runs at boot, and runs periodically to adjust the speed.

To do that, we’ll add the script to crontab. unRAID is a little strange in that you have to do this via some go-file hackery plus an additional script.

  1. Download mycrontab.txt and move it to /boot/config/scripts/ – it contains the following cron entry, which runs the script every 5 minutes and puts the output into the unRAID system log.
    # fan control - every 5 min
    */5 * * * * /boot/config/scripts/unraid_array_fan.sh 2>&1 | /usr/bin/logger -t fan_control
  2. Open your go file in an editor
    nano /boot/config/go
  3. And add the following lines at the bottom
    # setup crontab
    crontab -l > /tmp/file
    echo '#' >> /tmp/file
    echo '# Start of Custom crontab entries' >> /tmp/file
    cat /boot/config/scripts/mycrontab.txt >> /tmp/file
    echo '# End of Custom crontab entries' >> /tmp/file
    crontab /tmp/file
    rm -f /tmp/file
  4. Save (Ctrl-X, Yes) and then Reboot your machine

You can check to see if the script is running automatically by looking in your unRAID log file (and observing your fan speeds).

Final notes

You may want to adjust the FAN_OFF_TEMP and FAN_HIGH_TEMP to tune when the fans come on and off for your particular application.

Finally, I have to thank all of the various folks who created the original script, who’ve posted on the unRAID forums, the unRAID wiki, etc. I pieced this together from a lot of good information from the folks that have done this before me.

If there’s a better way to do this, I’d love to hear it. And I welcome contributions to the script on GitHub, continuing the work of those that have come before me.


Reduce (Shrink) raw image (.img) size of a Windows Virtual Machine

If you have Windows running in a virtual machine (VM), you may find yourself wanting to reduce the overall disk-footprint of the VM’s virtual disk on your host.

For example: because I had created my Windows VM by copying a raw disk in its entirety, I had a 120GB raw image (.img) file for a Windows installation that was only using about 20GB of actual space.

It took a bit of research to piece together all of the steps required to reduce my 120GB image down to a reasonable size. Here are the steps I took.

Assumptions

  • You’re using KVM as your hypervisor. There may be some tips in here that are useful for other VMs, but I’m assuming you’re using KVM.
  • Your .img file is in a raw image file format. There may be some tips in here if you’re trying to shrink a qcow2 image file, but you should probably not follow this verbatim.
  • This process is risky – be careful and don’t blame me if things go amiss. And make backups before you start.
  • You are okay deleting the “recovery partition” on your Windows VM image. This process will most certainly delete the recovery partition (if it exists). For most people, this isn’t a big deal… if you need to recover Windows, there are other ways to do it and the partition was just wasting space.

Reduce the size of your Windows partition

This turns out to be more involved than you’d think. That’s mainly because several steps are needed to get Windows to pack all of your data into one end of the disk partition so its size can be reduced.

These steps are useful if you’ve ever encountered the “You cannot shrink a volume beyond the point where any unmovable files are located” error when trying to shrink a partition in Windows.

These steps are taken from within your running Windows VM.

Prepare your partition to be shrunk

This is a four step process:

  1. Disable hibernation
  2. Disable the pagefile
  3. Disable system protection
  4. Defrag your ‘hard disk’

For steps 1 through 3, please see this excellent post on “How to shrink a disk volume beyond the point where any unmovable files are located”.

For step 4 (Defrag your ‘hard disk’), this turned out to be a tricky step with a very simple answer. Most folks will point you to the Windows UI to “Optimize” your disk. However, if Windows thinks your C: drive is an SSD it (correctly) won’t do a defrag on it.

To get around that, simply run defrag from an elevated / admin command prompt!

defrag C: /U /V

Consider running defrag twice – once as above, and once again with the /X option to consolidate free space.

Shrink your Windows partition

  1. Launch the Windows Disk Management console (Protip: you can get to it by right-clicking in the lower left-hand corner of your screen and selecting “Disk Management”)
  2. Right click on the main partition of your C drive.
  3. Select Shrink Volume
  4. After some thinking, you should be able to set the size of the disk as small as you’d like. Make your selection and click “Shrink”
  5. Make note of the final size of all the partitions on the disk – you’ll need that number when you go to reduce the .img size in the next step. It’s safer to round up than down!

Shutdown your Windows VM

You’ll want to shut down Windows now. You don’t want it running when you’re cutting down the .img file size.

Shrink the image (.img) file

This is where the magic happens… and it’s not that complicated.

  1. Locate your .img file. In my case, I’m running unRAID, so my VM images are all in /mnt/user/domains by default.
  2. Make a copy of your VM’s .img file. You’ll want it in case this screws up.
    cp ./vdisk.img ./backup_vdisk.img
  3. Shrink the image to the size you want it to be. Remember, this has to be bigger than the total size of the Windows partitions on the disk. So, if there’s a 1GB partition and a 30GB partition, shrinking the image down to 32GB leaves a safe margin. (Note that newer versions of qemu-img require the --shrink flag when reducing a size.)
    qemu-img resize --shrink vdisk.img 32G
  4. Restart your VM and make sure it launches correctly.
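
Before booting, you can sanity-check the result with qemu-img info, which shows the format, the new virtual size, and the actual on-disk usage (for sparse raw files the two can differ a lot). A sketch, demonstrated on a throwaway image so nothing real is at risk:

```shell
# Inspect your image after the resize (shows format, virtual size,
# and actual disk usage):
#   qemu-img info vdisk.img

# Safe demonstration on a throwaway image:
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f raw demo.img 40M
    qemu-img resize --shrink -f raw demo.img 32M  # --shrink is required when reducing size
    qemu-img info demo.img                        # virtual size should now read 32 MiB
    rm -f demo.img
fi
```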

Finishing Up

Enable Paging

Because you’re running Windows in a VM, it’s safe and probably preferred to leave hibernation disabled. You’re likely fine leaving system protection disabled as well.

But you should not skip re-enabling paging – follow the instructions here to turn paging back on: “How to shrink a disk volume beyond the point where any unmovable files are located”.

Expand the size of your Windows partition

If you rounded up when shrinking your .img file, you probably have some extra unallocated space at the end of your Windows partition. To make use of it:

  1. Launch the Disk Management console
  2. Right click on your Windows partition
  3. Select Extend Volume and follow the dialog to increase the space of the partition

Consider converting from raw .img to qcow2

There are pros and cons to each format; some of the pros of qcow2 are the ability to snapshot the VM and sparse allocation (which can further save space). However, raw can be faster. I’m sticking with raw for now.
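
If you do decide to convert, qemu-img handles it in one step. A sketch (filenames are examples; remember to point your VM’s configuration at the new file afterwards):

```shell
SRC=vdisk.img       # example names -- substitute your own paths
DST=vdisk.qcow2

if [ -f "$SRC" ]; then
    # -p shows progress; the qcow2 output only allocates space for data
    # actually present, which is where the additional savings come from.
    qemu-img convert -p -f raw -O qcow2 "$SRC" "$DST"
    qemu-img info "$DST"    # verify before switching the VM over
else
    echo "edit SRC/DST to point at your image first" >&2
fi
```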