

HTPC / Backup Home Server Solution using Linux


 

CONCLUSION

The conclusion from the ZFS test is quite clear.  No matter what we tried, read and write speeds were only a very small fraction of what could be achieved with XFS.  The main culprit, from what I can see, is the ZFS FUSE daemon process, which is single threaded and has to handle all workload to and from the FS.  This is evident in the %CPU reported for the jobs and is clearly the limiting factor with ZFS FUSE, unfortunately.  So at this time, I cannot recommend ZFS on Linux for this sort of setup.  Perhaps other tuning numbers would work, but we haven't been able to come up with a better combination.
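
If you want to confirm this sort of bottleneck on your own box, a quick way is to watch per-thread CPU usage of the FUSE daemon while a benchmark runs.  This is only a sketch using pidstat from the sysstat package, and it assumes the daemon shows up under the name zfs-fuse (adjust the name for your distribution):

# pidstat -t -p $(pgrep -d, zfs-fuse) 5

If one thread sits pinned near 100% CPU while the member disks stay far from saturated, the daemon rather than the spindles is the limiting factor.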
 

PROBLEMS and RESOLUTIONS

I suppose one could say that the ideal setup is one without hiccups.  In practice, setting something like this up tends to be more involved.  Then again, perhaps it's better to encounter issues early to weed out problems and properly test drive a setup before the same problems happen in production.  Here I've documented the errors and issues encountered while creating the array.  These likely won't represent all the errors one could encounter, however.

#1

PROBLEM:

You have just started to build an array but realized you didn't set --bitmap=internal on it.  So at this point I wanted to stop the array and recreate it with the --bitmap=internal option (rather than adding the bitmap separately) shortly after starting the build, but got:

# mdadm --grow /dev/md0 --bitmap=internal
mdadm: Cannot add bitmap while array is resyncing or reshaping etc.
mdadm: failed to set internal bitmap.

#

NOTE: For this error, the goal was to destroy the test array and recreate it with an internal bitmap.  For a live storage array, this solution would not be suitable; however, components of it would be appropriate for stopping an array and reassembling it.  This also simulates a mistake a would-be operator could make.  See the AVAILABILITY TEST # 7 & 8 above as well.
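
For reference, the two clean ways around this particular error would have been to either let the initial resync finish and then add the bitmap, or to pass the option at creation time.  This is only a rough sketch; the member list /dev/sd[abcdfg] simply mirrors the six disks used in this build:

# cat /proc/mdstat

(wait until the resync/recovery line is gone, then:)

# mdadm --grow /dev/md0 --bitmap=internal

or, when first creating the array:

# mdadm --create /dev/md0 --level=6 --raid-devices=6 --bitmap=internal /dev/sd[abcdfg]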

 

SOLUTION 1:

(Solution 1 steps below assume we're destroying the array.  You have been warned.  Skip to SOLUTION 2 for a graceful way.)

So I tried to stop the array and start fresh.  But:

# mdadm --stop /dev/md0
mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?
#
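
Before going down the destructive path, it's worth checking what is still holding md0 open.  A quick sketch of two ways to look at the stack sitting on top of the array (output omitted here; expect to see a device-mapper node for the LVM volume):

# ls -l /sys/block/md0/holders/
# lsblk /dev/md0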

(WARNING: This will destroy data, and this step could itself cause a problem.)  Then I tried to fail the array members:

mdadm /dev/md0 --fail /dev/sda --remove /dev/sda
mdadm /dev/md0 --fail /dev/sdb --remove /dev/sdb
mdadm /dev/md0 --fail /dev/sdc --remove /dev/sdc
mdadm /dev/md0 --fail /dev/sdd --remove /dev/sdd
mdadm /dev/md0 --fail /dev/sdf --remove /dev/sdf
mdadm /dev/md0 --fail /dev/sdg --remove /dev/sdg

But that resulted in the same stop message.  So I tried:

for d in a b c d f g; do echo 1 > /sys/block/sd${d}/device/delete; done

to spin them down but still:

# mdadm --stop /dev/md0
mdadm: Cannot get exclusive access to /dev/md0:Perhaps a running process, mounted filesystem or active volume group?
#

Then I realized that LVM is still running, with the LVs and VGs still sitting on /dev/md0:

# lsof|grep -i md0
md0_raid6  1095      root  cwd       DIR              253,0      4096          2 /
md0_raid6  1095      root  rtd       DIR              253,0      4096          2 /
md0_raid6  1095      root  txt   unknown                                         /proc/1095/exe
# ps -ef|grep -i 1095
root      1095     2  2 Apr11 ?        02:02:17 [md0_raid6]
root     23611 14803  0 17:15 pts/0    00:00:00 grep -i 1095
#

My storage VG is still online:

# lvm vgs
  VG          #PV #LV #SN Attr   VSize   VFree 
  MBPCStorage   1   1   0 wz--n-   3.64t 931.70g
#

So then I try to remove the LV first (which is not a good thing, by the way, as we'll see once all the disks have been failed in the RAID array):

# lvm lvremove /dev/MBPCStorage/MBPCBackup
  /dev/md0: read failed after 0 of 4096 at 0: Input/output error
  /dev/MBPCStorage/MBPCBackup: read failed after 0 of 4096 at 2000003465216: Input/output error
  /dev/MBPCStorage/MBPCBackup: read failed after 0 of 4096 at 2000003522560: Input/output error
  /dev/MBPCStorage/MBPCBackup: read failed after 0 of 4096 at 0: Input/output error
  /dev/MBPCStorage/MBPCBackup: read failed after 0 of 4096 at 4096: Input/output error
  /dev/md0: read failed after 0 of 4096 at 4000814661632: Input/output error
  /dev/md0: read failed after 0 of 4096 at 4000814718976: Input/output error
  /dev/md0: read failed after 0 of 4096 at 4096: Input/output error
  Volume group "MBPCStorage" not found
  Skipping volume group MBPCStorage
#

But no luck.  It turns out I need to use dmsetup, which is a low-level logical volume manipulation tool.  (The reason for the Input/Output errors above is that I failed ALL the RAID6 array disks before deactivating the MBPCStorage VG.)  A typical time one would use this is when the RAID6 experiences a 3+ disk failure and is effectively destroyed before the VG can be deactivated:

# dmsetup info -c /dev/MBPCStorage/MBPCBackup
Name                   Maj Min Stat Open Targ Event  UUID                                                               
MBPCStorage-MBPCBackup 253   6 L--w    0    1      0 LVM-FqixjaGjMc8xYcudSRD43Y6wMpLXXR92JBHkxjnwZ31axUUoIpkWWr6qh9Boal58
# dmsetup remove MBPCStorage-MBPCBackup
# dmsetup info -c /dev/MBPCStorage/MBPCBackup

Device /dev/MBPCStorage/MBPCBackup not found
Command failed
#
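
As a quick sanity check, the holders directory from earlier should now come back empty once the device-mapper node is gone:

# ls /sys/block/md0/holders/
#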

And this time I can stop the array:

# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
#

 

SOLUTION 2 (Stops the array gracefully):

See the availability test # 7 and 8 above for the graceful procedure steps.
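
For completeness, the graceful order of operations looks roughly like this.  This is only a sketch; the mount point /mnt/MBPCBackup is a placeholder, and the exact steps with the real device names are in tests # 7 and 8:

# umount /mnt/MBPCBackup
# vgchange -an MBPCStorage
# mdadm --stop /dev/md0

And to bring everything back up:

# mdadm --assemble /dev/md0 /dev/sd[abcdfg]
# vgchange -ay MBPCStorage
# mount /dev/MBPCStorage/MBPCBackup /mnt/MBPCBackup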

Hope you found this useful.  Feel free to leave a reply and let us know your favorite storage solution, DIY or not.

Cheers!
TK


11 Responses to “HTPC / Backup Home Server Solution using Linux”

  1. Hi,

    Great post. You don’t need to specify the parameters when creating the XFS file system, see http://xfs.org/index.php/XFS_FAQ#Q:_I_want_to_tune_my_XFS_filesystems_for_.3Csomething.3E and http://www.spinics.net/lists/raid/msg38074.html . Of course, YMMV.

    Did you run those benchmarks while the array was resyncing?

  2. Hey Mathias,

    Thanks for posting. Just added the testing numbers so feel free to have a look and judge yourself.

    > logbsize and delaylog
    I ran another test with logbsize=128k (couldn’t find anything for delaylog in my mkfs.xfs man page so I’m not sure if that’ll do anything). Little to no difference in this case on first glance. Watch out for the results at some point for a closer look.

    One consideration here is that eventually I would grow the LVM and XFS to fill up to 4TB. I'll be doing this soon. Potentially in the future, I may try to grow this array as well to something well over 8TB (yet to see how to do that). I'm not sure if XFS would auto-adjust to optimal values for those capacities in such cases, and the link didn't touch on that topic.

    All in all, I can still run tests on this thing, recreating the FS if I need to, so feel free to suggest numbers you'd be interested to see. I might leave this topic open for a week or two to see if I can think of anything else or if I'm missing anything. For my setup, having anything > 125MB/s is a bonus as the network is only 1Gb/s, which tops out at about that rate in theory.

    Cheers!
    TK

  3. […] could be done safely enough like this guy did and with RAID6 as well with SSD type R/W’s no less. Your size would be limited to the size of the […]

  4. Thank you for posting this blog.  I was getting desperate.  I could not figure out why I could not stop the RAID1 device, even from Ubuntu Rescue Remix.  The LVM group was being assembled from the failed raid.  I removed the volume group and was finally able to gain exclusive access to the array to stop it, put in the new disk and rebuild the array.
     
    Nice job.
    Best,
    Dave.

  5. […] we'll use for this is the APCUPSD daemon available in RPM format. We've set one up for our HTPCB server for a home redundancy / backup solution to protect against power surges and bridge the […]

  6. […] every time while transferring my files.  At the point, I not only lost connectivity with the HTPC+B but also my web access most of the time.  Here are the culprits and here's how we went […]

  7. […] removed the cable and the adapter and only used a 2 foot cable to my HTPC+B system I've just configured.  Voila!  Problem solved.  Ultimately, it's […]

  8. […] them from system to system to avoid choppy video / sound and also to accommodate the needs of our HTPC+B solution through file […]

  9. […] Linux Networking: Persistent naming rules based on MAC for eth0 and wlan0 Linux: HTPC / Home Backup: MDADM, RAID6, LVM, XFS, CIFS and NFS […]

  10. […] at this point and 4:15 minutes have passed).  While this was going on, we are referencing our HTPC page for […]

  11. […] HTPC, Backup & Storage […]
