NETWORK ATTACHED STORAGE (NAS) V2
|
mrbob
|
Oct 25 2013, 06:55 AM
|
|
QUOTE(mintgadget @ Oct 23 2013, 05:33 PM) It's interesting that Synology has SHR (Synology Hybrid RAID), unlike QNAP, which from my understanding is only planning to implement this feature. The cool part about SHR is that you can use whatever size HDDs you have and create a RAID 5 or RAID 6, with the largest disk size being used for parity. See http://forum.synology.com/wiki/index.php/W..._Hybrid_RAID%3F for a better understanding. You can upgrade your disks when needed; it might be a long process, but it is possible. Not sure about QNAP's photo management, but Synology has instant upload, which is somewhat similar to iCloud photo backup: once you take a shot on your phone it auto-uploads to the NAS. For a DSLR, look into an Eye-Fi card and run the Eye-Fi server on the Synology; it will auto-backup each shot as it is taken. These are all seamless once set up. If your objective is purely photo backup, I suggest 3 or 4 TB on RAID 1; at least when you upgrade your camera and have bigger raw files it should still be sufficient.

Hmmm... Useful for people who don't populate their NAS in one go.
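For anyone curious how the mixed-size maths works out, here is a rough Python sketch of the usual SHR-1 (one-disk redundancy) capacity estimate. The layering logic is just my approximation of how the public calculators behave, not Synology's actual code, and it ignores the small system partitions the DSM reserves.

CODE
def shr1_usable(disks):
    """Rough usable capacity (same units as input) for SHR with
    one-disk redundancy and mixed drive sizes.

    Idea: slice the drives into horizontal layers at each distinct
    size; every layer spanning at least two drives keeps
    (drives_in_layer - 1) * layer_thickness, like a per-layer RAID.
    """
    sizes = sorted(disks)
    usable, prev = 0.0, 0.0
    for i, size in enumerate(sizes):
        thickness = size - prev
        drives_in_layer = len(sizes) - i   # drives at least this big
        if thickness > 0 and drives_in_layer >= 2:
            usable += (drives_in_layer - 1) * thickness
        prev = size
    return usable

# Example: 1 TB + 2 TB + 3 TB + 4 TB drives
print(shr1_usable([1, 2, 3, 4]))   # -> 6.0, vs 3.0 on a classic RAID 5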
|
|
|
|
|
|
ozak
|
Oct 25 2013, 08:43 AM
|
|
QUOTE(mrbob @ Oct 23 2013, 07:37 AM) Your office NAS seems to have an interesting problem. Have you tried renaming the problem files, or moving them to a folder lower down the tree, before deleting them? It could be due to a long filename that the OS is unable to handle. Are either of your NAS boxes aftermarket brands such as Synology, QNAP etc., or own-build NAS? If we can share this info, we may be able to safeguard our data better, either by avoiding problem NAS models or through better data management practices.

The NAS seems unable to delete large files, or a large quantity of files, in one go. It crashes the HDD and I need to reboot the NAS. The other issue is that the file gets deleted, but the remaining empty folder cannot be deleted and throws an error message. I'm not sure whether it is an HDD problem or the NAS firmware. To solve the problem, I had to get a new HDD and then upgrade the firmware. Both my home and office are using Synology. The data is pretty safe, whether at the office or at home, even if the NAS is totally gone.
|
|
|
|
|
|
ozak
|
Oct 25 2013, 08:46 AM
|
|
QUOTE(vivre @ Oct 24 2013, 09:49 PM) Just wondering whether any of you guys have a redundant NAS setup to cater for NAS failure. A NAS protects us from hard disk failure, but once the NAS itself is kaput, that is the unsolvable problem.

Yes. My NAS setup is 100% safe, from hardware to software.
|
|
|
|
|
|
mrbob
|
Oct 25 2013, 09:38 AM
|
|
QUOTE(Moogle Stiltzkin @ Oct 25 2013, 05:06 AM) Well.... people keep saying NAS by itself is not a backup. Some recommend getting a pair :/
I think the "people" are generally data center admins and know the reality of hardware failure and handle them on a daily basis. Data backup to them are for archival purpose where the original data are meant to be kept and restored at a later date. Hence the usage of tape backups as the archival process takes away the usual hardware failures out of the equation. A dual NAS is just a hardware redundancy solution akin to a hot/warm/cold standby servers we see in DCs. ZFS does have some very nice build-in features that ensure data integrity. Microsoft is taking a page out of the ZFS playbook with the introduction of ReFS in Windows Server 2012 Storage Spaces. It's still too early to see how ReFS is impacting the real world as it is still undergoing heavy development. It will be some time before we see a truly workable NTFS replacement.
|
|
|
|
|
|
mrbob
|
Oct 25 2013, 09:46 AM
|
|
QUOTE(ozak @ Oct 25 2013, 08:43 AM) The NAS seems unable to delete large files, or a large quantity of files, in one go. It crashes the HDD and I need to reboot the NAS. The other issue is that the file gets deleted, but the remaining empty folder cannot be deleted and throws an error message. I'm not sure whether it is an HDD problem or the NAS firmware. To solve the problem, I had to get a new HDD and then upgrade the firmware. Both my home and office are using Synology. The data is pretty safe, whether at the office or at home, even if the NAS is totally gone.

That was a good game plan there. My previous NTFS-based box used to be unable to handle Unicode filenames; I had to rename the files and remove the Unicode characters before I could access or delete them.
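Not saying it is the same issue you have, but if anyone wants to hunt down that kind of troublemaker file, a quick Python sweep over a share can flag very long paths and non-ASCII names before you try to delete them. The 255-character threshold and the share path here are only assumptions for illustration; adjust them to your own NAS.

CODE
import os

MAX_PATH = 255  # assumed limit; tune for your NAS filesystem

def find_problem_files(root):
    """Walk a share and report paths that are suspiciously long
    or contain non-ASCII (Unicode) characters."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames + dirnames:
            full = os.path.join(dirpath, name)
            too_long = len(full) > MAX_PATH
            non_ascii = any(ord(ch) > 127 for ch in name)
            if too_long or non_ascii:
                reason = "long path" if too_long else "non-ASCII name"
                print(f"{reason}: {full}")

find_problem_files(r"\\mynas\share")  # hypothetical share path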
|
|
|
|
|
|
ozak
|
Oct 25 2013, 12:33 PM
|
|
QUOTE(mrbob @ Oct 25 2013, 09:46 AM) That was a good game plan there. My previous NTFS-based box used to be unable to handle Unicode filenames; I had to rename the files and remove the Unicode characters before I could access or delete them.

Doesn't seem like any Unicode names. They look normal to me.
|
|
|
|
|
|
mintgadget
|
Oct 25 2013, 01:29 PM
|
|
Try performing a data scrub first. You may be able to delete the files after that, but bear in mind it might take some time.
|
|
|
|
|
|
mrbob
|
Oct 25 2013, 04:10 PM
|
|
QUOTE(ozak @ Oct 25 2013, 12:33 PM) Doesn't seem like any Unicode names. They look normal to me.

I'm just sharing what I went through. It's not necessarily the same problem you are facing.
|
|
|
|
|
|
mrbob
|
Oct 26 2013, 11:36 AM
|
|
QUOTE(Moogle Stiltzkin @ Oct 25 2013, 07:26 PM) But I thought BTRFS (Linux) and ReFS (Microsoft) are no match for ZFS in regards to end-to-end checksums? http://rudd-o.com/linux-and-free-software/...tter-than-btrfs

Hmmm, I hope you're not being overly sensitive, because I've just reread my last post and didn't see any insinuation there that ReFS is better than ZFS. Anyway, there is a growing initiative to make ZFS more accessible to the general public in the likes of NAS4Free, FreeNAS etc.
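To illustrate what "end-to-end checksumming" actually buys you (and what a scrub does), here is a toy Python sketch: every block gets a checksum stored separately from the data, every read re-verifies it, and a scrub is simply that verification run over the whole pool. Real ZFS/Btrfs/ReFS do this inside the filesystem with block trees and can repair from redundancy; this is only the concept, with names I made up.

CODE
import hashlib

class ToyPool:
    """Toy model of checksummed storage: data and checksums kept apart,
    so silent corruption of a block is caught on read or on scrub."""
    def __init__(self):
        self.blocks = {}     # block id -> bytes
        self.checksums = {}  # block id -> sha256 hex digest

    def write(self, block_id, data: bytes):
        self.blocks[block_id] = data
        self.checksums[block_id] = hashlib.sha256(data).hexdigest()

    def read(self, block_id) -> bytes:
        data = self.blocks[block_id]
        if hashlib.sha256(data).hexdigest() != self.checksums[block_id]:
            raise IOError(f"checksum mismatch on block {block_id}")
        return data

    def scrub(self):
        """Verify every block; in a real pool this is where a redundant
        copy or parity would be used to repair the bad block."""
        return [b for b in self.blocks
                if hashlib.sha256(self.blocks[b]).hexdigest() != self.checksums[b]]

pool = ToyPool()
pool.write("b1", b"family photos")
pool.blocks["b1"] = b"family ph0tos"   # simulate silent bit rot
print(pool.scrub())                    # -> ['b1']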
|
|
|
|
|
|
war3boy
|
Oct 27 2013, 07:15 AM
|
|
Hi guys,
Has anyone explored XPEnology (an imitation of Synology's DSM)?
|
|
|
|
|
|
ozak
|
Oct 27 2013, 08:36 AM
|
|
QUOTE(war3boy @ Oct 27 2013, 07:15 AM) Hi guys, has anyone explored XPEnology (an imitation of Synology's DSM)?

What is that? A Synology copycat?
|
|
|
|
|
|
ruffstuff
|
Oct 27 2013, 02:36 PM
|
|
Need fellow LYN members to comment and give suggestions via the Google Docs link below. Bottom line: I managed to keep the cost way below an 8-bay NAS, with comparable looks and features. https://docs.google.com/spreadsheet/ccc?key...drive_web#gid=1
|
|
|
|
|
|
mrbob
|
Nov 1 2013, 04:02 PM
|
|
QUOTE(ruffstuff @ Oct 27 2013, 02:36 PM) Need fellow LYN members to comment and give suggestions via the Google Docs link below. Bottom line: I managed to keep the cost way below an 8-bay NAS, with comparable looks and features. https://docs.google.com/spreadsheet/ccc?key...drive_web#gid=1

I think it really depends on what you want to do with your NAS. Now that you have figured out the hardware side of the storage solution, you also need to consider the software and workflow; the NAS features will depend largely on the OS and software you decide to run on the box.
|
|
|
|
|
|
ruffstuff
|
Nov 1 2013, 06:13 PM
|
|
QUOTE(mrbob @ Nov 1 2013, 04:02 PM) I think it really depends on what you want to do with your NAS. Now that you have figured out the hardware side of the storage solution, you also need to consider the software and workflow; the NAS features will depend largely on the OS and software you decide to run on the box.

I decided to go the non-hardware-RAID route. I'll probably use Windows Server 2012 R2 with a Storage Spaces pool. The FreeNAS interface is kind of messy, although ZFS is more robust, and I'd need to get more memory for ZFS too.
|
|
|
|
|
|
mrbob
|
Nov 2 2013, 11:33 AM
|
|
QUOTE(ruffstuff @ Nov 1 2013, 06:13 PM) I decided to go the non-hardware-RAID route. I'll probably use Windows Server 2012 R2 with a Storage Spaces pool. The FreeNAS interface is kind of messy, although ZFS is more robust, and I'd need to get more memory for ZFS too.

If it's an MS solution, then there's no worry about HW compatibility, and you will have the advantage of testing out ReFS. It's supposedly an improvement on NTFS with built-in redundancies. Just note that Windows is more resource-hungry than other OSes executing the same tasks, plus you will need to defrag the HDDs every now and then. If you are proficient in Linux, you can do quite a fair bit with this HW spec; if ZFS is not your cup of tea, why not try Ubuntu/Mint with ext3/4? It runs more efficiently than Windows.

For storage engineers, the considerations are capacity, performance, durability, reliability, power consumption and cost. You will have to find a balance between capacity and performance - the bigger the HDD, the slower the speed - and, generally speaking, it will cost more to build redundancy and durability into the solution.

A HW RAID card can help improve RAID performance; however, do get a UPS to protect against the write hole problem. In the worst case, you can lose the entire RAID group if there is a catastrophic power failure while the server happens to be writing to the HDDs. A UPS won't cost much and is a sure protection against this. Needless to say, if the RAID card fails, you will need to replace it with the same model to access your data. If you don't want to mess around too much with RAID and just want minimal problems with maximum performance while still offering some reliability, then just stick with RAID 1.
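A quick illustration of the write hole I mentioned, in Python: RAID 5 parity is just the XOR of the data strips, and if power dies after a data strip is updated but before the parity strip is, a later disk failure gets "reconstructed" into garbage. This is only a toy example of the mechanism, not how any particular controller lays out its stripes; a UPS or a battery-backed RAID card is what closes that window.

CODE
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Three-disk RAID 5 stripe: two data strips + one parity strip.
d0, d1 = b"AAAA", b"BBBB"
parity = xor(d0, d1)

# Power is lost mid-update: d0 was rewritten, parity was not.
d0 = b"CCCC"                      # new data hit the disk
# parity still reflects the old data (the "write hole")

# Later, the disk holding d1 dies; we rebuild it from d0 and parity.
rebuilt_d1 = xor(d0, parity)
print(rebuilt_d1 == b"BBBB")      # -> False: silently wrong data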
|
|
|
|
|
|
ruffstuff
|
Nov 2 2013, 01:15 PM
|
|
QUOTE(mrbob @ Nov 2 2013, 11:33 AM) If it's an MS solution, then there's no worry about HW compatibility, and you will have the advantage of testing out ReFS. It's supposedly an improvement on NTFS with built-in redundancies. Just note that Windows is more resource-hungry than other OSes executing the same tasks, plus you will need to defrag the HDDs every now and then. If you are proficient in Linux, you can do quite a fair bit with this HW spec; if ZFS is not your cup of tea, why not try Ubuntu/Mint with ext3/4? It runs more efficiently than Windows. For storage engineers, the considerations are capacity, performance, durability, reliability, power consumption and cost. You will have to find a balance between capacity and performance - the bigger the HDD, the slower the speed - and, generally speaking, it will cost more to build redundancy and durability into the solution. A HW RAID card can help improve RAID performance; however, do get a UPS to protect against the write hole problem. In the worst case, you can lose the entire RAID group if there is a catastrophic power failure while the server happens to be writing to the HDDs. A UPS won't cost much and is a sure protection against this. Needless to say, if the RAID card fails, you will need to replace it with the same model to access your data. If you don't want to mess around too much with RAID and just want minimal problems with maximum performance while still offering some reliability, then just stick with RAID 1.

My data is not critical, but I do require some parity and redundancy. HW RAID would cost double; the card alone is going to cost a bomb, not to mention the question of which dies first, the RAID array or the HDD. I have no problem with Linux, and would probably use LVM if I go the Linux route, but since I have a free copy of Windows Server 2012 R2, I might experiment with it and MS's fancy new FS first. Talking about resource-hungry OSes, I'm quite disappointed that ZFS requires 1 GB of RAM per TB to perform at its best. I'm still checking whether Windows Server requires that much resource, as well as Linux LVM.
|
|
|
|
|
|
CocoMonGo
|
Nov 4 2013, 08:54 AM
|
|
QUOTE(ruffstuff @ Nov 2 2013, 01:15 PM) My data is not critical, but I do require some parity and redundancy. HW RAID would cost double; the card alone is going to cost a bomb, not to mention the question of which dies first, the RAID array or the HDD. I have no problem with Linux, and would probably use LVM if I go the Linux route, but since I have a free copy of Windows Server 2012 R2, I might experiment with it and MS's fancy new FS first. Talking about resource-hungry OSes, I'm quite disappointed that ZFS requires 1 GB of RAM per TB to perform at its best. I'm still checking whether Windows Server requires that much resource, as well as Linux LVM.

Are you planning to use the server for anything else other than pure storage? Because IMO, if it's just storage, your CPU is over-specced. I checked: the G2020 is a 55 W processor, and with all the other components your power usage is probably hitting 75-100 W. Not very cheap to run if you are leaving it on 24/7. You do not need 1 GB of RAM per TB of HDD for ZFS; that is only required if you are running de-dup. I also strongly recommend that, if possible, you try to get ECC-compliant RAM and a motherboard to match. BTW, what is the flash drive for? If you are thinking of installing your OS there, my suggestion is to forget about it and get a HDD instead. You can get the 2.5" ones for cheap.
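On the RAM point: the sizing that scares people is really the dedup table, which ZFS wants to keep in memory. A commonly quoted ballpark (this is an assumption, not an official figure) is roughly 320 bytes of in-core dedup-table entry per block, so the hit depends on record size as much as on pool size. A quick Python back-of-envelope:

CODE
def dedup_ram_gb(pool_tb, recordsize_kb=128, bytes_per_entry=320):
    """Rough RAM needed to hold the ZFS dedup table in memory.
    320 bytes/entry and 128 KiB records are assumptions, not gospel."""
    blocks = pool_tb * 1024**4 / (recordsize_kb * 1024)
    return blocks * bytes_per_entry / 1024**3

print(round(dedup_ram_gb(8), 1))   # -> 20.0 GB for an 8 TB pool of 128 KiB blocks

Without dedup, the ARC simply caches with whatever RAM you give it, so a modest amount is fine for a home box.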
|
|
|
|
|
|
ruffstuff
|
Nov 4 2013, 09:35 AM
|
|
QUOTE(CocoMonGo @ Nov 4 2013, 08:54 AM) Are you planning to use the server for anything else other than pure storage? Because IMO, if it's just storage, your CPU is over-specced. I checked: the G2020 is a 55 W processor, and with all the other components your power usage is probably hitting 75-100 W. Not very cheap to run if you are leaving it on 24/7. You do not need 1 GB of RAM per TB of HDD for ZFS; that is only required if you are running de-dup. I also strongly recommend that, if possible, you try to get ECC-compliant RAM and a motherboard to match. BTW, what is the flash drive for? If you are thinking of installing your OS there, my suggestion is to forget about it and get a HDD instead. You can get the 2.5" ones for cheap.

The flash drive is only if I run FreeNAS; for any other OS I will be using an SSD. 55 W is the maximum TDP; I don't think it will run at full load all the time. Going for ECC and a true server motherboard is going to cost more and is hard to source, especially in ITX, and I'd probably need to find a hardware RAID card for that too and go for mini-SAS connectivity. Consumer-grade ITX boards have a maximum of 6 SATA ports; I can expand that with an HBA controller I already have, for up to 4 more ports. One server-grade ITX board with 12 SATA ports is from ASRock, but the CPU is BGA type, soldered down, and it is hard to source as well. I'm thinking of the flexibility of the box in the future: having more room to expand not only the storage but also its purpose, since running a few virtualized servers might be possible.
|
|
|
|
|
|
mrbob
|
Nov 4 2013, 11:13 AM
|
|
Once you start hitting 8 HDDs, your box is not going to be running under 100 W anymore (I've put some rough running-cost sums at the end of this post). Don't worry too much about the ZFS 1 GB of RAM per TB recommendation; popping in a single 8 GB 1600 MHz stick should be more than enough for your experimentation.
Yep, I'm also having a tough time sourcing a server-grade ITX mobo; hard to find even on Newegg. Sigh... HW RAID cards are not that expensive if you know what you want and where to look. I'm just going to try a few more leads in Malaysia and Singapore before deciding whether to order my HW from the US. Do note also that the clearance between the mobo and the drive cage in the U-NAS case is < 40 mm, so the bundled Intel CPU cooler will not fit in there.
FYI, the ASRock C2750D4I/C2550D4I mobo with the built-in Intel Avoton processor that you mentioned is currently the most anticipated mobo in the NAS community, but there is no release date yet. You may have a better chance finding the ASRock E3C226D2I/E3C224D2I instead.
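For the running-cost sums: here is a rough Python estimate of what leaving a box like this on 24/7 works out to. The per-component wattages and the tariff are assumptions picked purely for illustration; plug in your own numbers.

CODE
def annual_cost_rm(watts, tariff_rm_per_kwh=0.40):
    """24/7 running cost per year at an assumed electricity tariff."""
    kwh_per_year = watts * 24 * 365 / 1000
    return kwh_per_year * tariff_rm_per_kwh

# Assumed draw: board + G2020 at idle ~30 W, plus 8 HDDs at ~6 W each.
watts = 30 + 8 * 6
print(f"{watts} W -> RM {annual_cost_rm(watts):.0f} per year")   # 78 W -> RM 273 per year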
|
|
|
|
|