Restoring a volume from an older snapshot will delete all snapshots newer than that snapshot.
Tuesday, 20 November 2012
Tuesday, 6 November 2012
Data ONTAP upgrade from 8.0.1 to 8.1.1
Here I will share the step-by-step procedure for a non-disruptive Data ONTAP upgrade to 8.1.1.
1) Log in to the NOW site, go to My AutoSupport, open Upgrade Advisor, and select the Data ONTAP version you are planning to upgrade to.
2) You will have the option to export the steps as either an Excel sheet or a PDF.
3) Follow the Upgrade Advisor plan.
4) Upgrade Advisor will flag caution points and errors, if any exist in your setup.
5) The first step is to clear those and then start the upgrade.
6) You must ensure that CPU utilization does not exceed 50% before beginning an NDU.
7) If you are running SnapDrive software on Windows hosts connected to the filer, check that the SnapDrive version is supported with the Data ONTAP version you are upgrading to.
8) Before upgrading Data ONTAP, monitor CPU and disk utilization for 30 seconds by entering the following command at the console of each storage controller:
sysstat -c 10 -x 3
8a) Make sure that multipathing is configured properly on all the hosts.
9) Download perfstat and run it on a client as follows: perfstat -f filername -t 4 -i 5 > perfstatname.out
10) Download the Data ONTAP 8.1.1 system image (811_q_image.tgz) from the NetApp support site.
11) Make sure all disks are upgraded to the latest firmware at least 24 hours before the Data ONTAP upgrade.
12) Contact NetApp Support and check /etc/messages for any obvious errors, e.g. disk errors, firmware errors, etc.
13) Back up the filer's etc\hosts and etc\rc files from Windows to a temporary directory.
14) Copy the system image file (811_q_image.tgz) to the /etc/software directory on each node, e.g. from a Windows box as an Administrator.
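As an optional alternative (not part of the Upgrade Advisor output; the web server URL here is hypothetical), the image can also be pulled straight onto each controller with software get, and the contents of /etc/software confirmed with software list:
controller1> software get http://webserver/811_q_image.tgz
controller1> software list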
15) Before starting the upgrade, send an AutoSupport message: options autosupport.doit "starting_NDU 8.1.1"
16) Install the new image: software update 811_q_image.tgz -r (the -r option prevents an automatic reboot).
If you are performing a Data ONTAP NDU (or backout), you must perform this step on both nodes before performing the takeover and giveback steps.
17) Check to see if the boot device has been properly updated:
controller1> version -b
The primary kernel should be 8.1.1.
18) Terminate CIFS on the node to be taken over (controller2 in this case):
controller2> cifs terminate
19) controller1> cf takeover
20) Wait 8 minutes before proceeding to the next step.
Doing so ensures the following conditions:
- The node that has taken over is serving data to the clients.
- Applications on the clients have recovered from the pause in I/O that occurs during
takeover.
- Load on the storage system has returned to a stable point.
- Multipathing (if deployed) has stabilized.
After controller2 reboots and displays "waiting for giveback", give back the data service:
21) controller1> cf giveback
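Optionally (my own habit, not part of the documented NDU steps), confirm that the pair has returned to normal before moving on to the next node:
controller1> cf status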
22) Terminate CIFS on the node to be taken over (controller1):
23) controller1> cifs terminate
24) From the newly upgraded node controller2, take over the data service from
controller1:
controller2> cf takeover -n
25) Halt, and then restart the first node:
controller1> halt
LOADER> bye (entered at the boot environment prompt that appears after the halt; on older platforms this prompt is CFE>)
26) controller2> cf giveback
27) If giveback is not initiated, complete the following step:
28) Enter the cf giveback command with the -f option:
controller2> cf giveback -f
29) controller1> version (check that the version is now Data ONTAP 8.1.1)
30) controller1> options autosupport.doit "finishing_NDU 8.1.1"
Tuesday, 23 October 2012
How to predict the SnapMirror transfer size
Many of us want to know how much data will be transferred when we initiate a SnapMirror update. I have put together a real-time example here; hope it helps.
vol1 is the volume used for the transfer (I have kept the same name at both source and destination for easy understanding).
sourcefiler> snap list vol1
Volume vol1
working...
%/used %/total date name
---------- ---------- ------------ --------
2% ( 2%) 1% ( 1%) Oct 23 04:15 destinationfiler(0123456789)_vol1.745 (snapmirror)
5% ( 4%) 2% ( 1%) Oct 22 23:01 hourly.0
9% ( 4%) 3% ( 1%) Oct 21 23:01 hourly.1
12% ( 4%) 4% ( 1%) Oct 20 23:01 hourly.2
18% ( 7%) 6% ( 2%) Oct 19 23:04 hourly.3
21% ( 5%) 7% ( 1%) Oct 18 23:01 hourly.4
25% ( 6%) 9% ( 2%) Oct 17 23:02 hourly.5
28% ( 5%) 11% ( 1%) Oct 16 23:01 hourly.6
Above, we run snap list for the volume on the source (it lists the base snapshot for the source).
destinationfiler*> snap list vol1
Volume vol1
working...
%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Oct 23 04:15 destinationfiler(0123456789)_vol1.745
4% ( 4%) 1% ( 1%) Oct 22 23:01 hourly.0
8% ( 4%) 2% ( 1%) Oct 22 04:15 destinationfiler(0123456789)_vol1.744
8% ( 1%) 2% ( 0%) Oct 21 23:01 hourly.1
11% ( 4%) 3% ( 1%) Oct 20 23:01 hourly.2
17% ( 7%) 6% ( 2%) Oct 19 23:04 hourly.3
20% ( 5%) 7% ( 1%) Oct 18 23:01 hourly.4
25% ( 6%) 9% ( 2%) Oct 17 23:02 hourly.5
27% ( 5%) 10% ( 1%) Oct 16 23:01 hourly.6
Here we check which snapshot was last used for the SnapMirror transfer.
sourcefiler*> snapmirror destinations -s
Path Snapshot Destination
vol1 destinationfiler(0123456789)_vol1.745 destinationfiler:vol1
(This command shows which snapshot was used for the SnapMirror transfer; in this case it is .745.)
I am starting a SnapMirror update now:
destinationfiler*> snapmirror update vol1
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log. (We started a SnapMirror update for vol1.)
sourcefiler*> snap list vol1 (once the SnapMirror update is initiated, we see that a new snapshot is created, .746 in our case)
Volume vol1
working...
%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Oct 23 16:46 destinationfiler(0123456789)_vol1.746 (busy,snapmirror)
2% ( 2%) 1% ( 1%) Oct 23 04:15 destinationfiler(0123456789)_vol1.745 (busy,snapmirror)
5% ( 4%) 2% ( 1%) Oct 22 23:01 hourly.0
9% ( 4%) 3% ( 1%) Oct 21 23:01 hourly.1
12% ( 4%) 4% ( 1%) Oct 20 23:01 hourly.2
18% ( 7%) 6% ( 2%) Oct 19 23:04 hourly.3
21% ( 5%) 7% ( 1%) Oct 18 23:01 hourly.4
25% ( 6%) 9% ( 2%) Oct 17 23:02 hourly.5
28% ( 5%) 11% ( 1%) Oct 16 23:01 hourly.6
destinationfiler*> snap list vol1 (on the destination side we see that .744 has been deleted and the relationship currently references .745 only)
Volume vol1
working...
%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Oct 23 04:15 destinationfiler(0123456789)_vol1.745
4% ( 4%) 1% ( 1%) Oct 22 23:01 hourly.0
4% ( 1%) 1% ( 0%) Oct 21 23:01 hourly.1
7% ( 4%) 2% ( 1%) Oct 20 23:01 hourly.2
14% ( 7%) 4% ( 2%) Oct 19 23:04 hourly.3
17% ( 5%) 6% ( 1%) Oct 18 23:01 hourly.4
22% ( 6%) 8% ( 2%) Oct 17 23:02 hourly.5
25% ( 5%) 9% ( 1%) Oct 16 23:01 hourly.6
(We run snap delta to check the difference between the snapshots currently in use, .745 and .746 in our case, which shows up as 25268240 KB below.)
sourcefiler*> snap delta -V vol1 destinationfiler(0123456789)_vol1.745 destinationfiler(0123456789)_vol1.746
Volume vol1
working...
From Snapshot To KB changed Time Rate (KB/hour)
--------------- -------------------- ----------- ------------ ---------------
destinationfiler(0123456789)_vol1.745 destinationfiler(0123456789)_vol1.746 25268240 0d 12:31 2016440.503
This 25268240 KB is the amount of data that will be transferred during this SnapMirror update.
destinationfiler*> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
sourcefiler-my_vif-103:vol1 destinationfiler:vol1 Snapmirrored 00:46:56 Idle
sourcefiler-my_vif-103:vol1 destinationfiler:vol1 Snapmirrored 12:34:40 Transferring (9505 MB done)
We can verify after the SnapMirror update is done by using the snapmirror status -l command on the destination.
destinationfiler*> snapmirror status -l vol1
Snapmirror is on.
Source: sourcefiler-my_vif-13:vol1
Destination: destinationfiler:vol1
Status: Idle
Progress: -
State: Snapmirrored
Lag: 00:08:53
Mirror Timestamp: Tue Oct 23 16:46:55 IST 2012
Base Snapshot: destinationfiler(0123456789)_vol1.746
Current Transfer Type: -
Current Transfer Error: -
Contents: Replica
Last Transfer Type: Update
Last Transfer Size: 25268248 KB (this is the value we got with the snap delta command described earlier; there is an 8 KB difference from the number we got before, and I am not sure why; maybe someone can point it out)
Last Transfer Duration: 00:07:49
Last Transfer From: sourcefiler-my_vif-13:vol1
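As a side note (not part of the walkthrough above), snap delta can also be run with only the volume name; it then reports the changed data between each pair of successive snapshots, which helps spot which snapshot is responsible for a large transfer:
sourcefiler> snap delta vol1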
Your comments welcome
Volume Options
Volume options:
When we create a FlexVol volume, the space guarantee can be set to one of three values: volume, file, or none. Let's discuss each of them in detail.
Volume—A guarantee of “volume” ensures that the amount of space required by the FlexVol
volume is always available from its aggregate. This is the default setting for FlexVol volumes.
With the space guarantee set to “volume” the space is subtracted, or reserved, from the
aggregate’s available space at volume creation time. The space is reserved from the aggregate
regardless of whether it is actually used for data storage or not.
The example here shows the creation of a 20GB volume. The df commands showing the space usage of the aggregate before and after the vol create command display how 20GB is removed from the aggregate as soon as the volume is created, even though no data has actually been written to the volume. In simple terms, this is the setting for space-reserved (thick-provisioned) volumes.
filer1> df -A -g aggr0
Aggregate total used avail capacity
aggr0 85GB 0GB 85GB 0%
aggr0/.snapshot 4GB 0GB 4GB 0%
filer1> vol create flex0 aggr0 20g
Creation of volume 'flex0' with size 20g on hosting aggregate 'aggr0' has
completed.
filer1> df -g /vol/flex0
Filesystem total used avail capacity Mounted on
/vol/flex0/ 16GB 0GB 16GB 0% /vol/flex0/
/vol/flex0/.snapshot 4GB 0GB 4GB 0% /vol/flex0/.snapshot
filer1> df -A -g aggr0
Aggregate total used avail capacity
aggr0 85GB 20GB 65GB 23%
aggr0/.snapshot 4GB 0GB 4GB 0%
Since the space has already been reserved from the aggregate, write operations to the volume
will not cause more space from the aggregate to be used.
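As a quick follow-up sketch (flex0 is the volume from the example above; the behavior described in the comments is what I would expect, so treat it as an assumption), the guarantee can be viewed and changed after creation:
filer1> vol status -v flex0 (the options list includes guarantee=volume)
filer1> vol options flex0 guarantee none (switches the volume to thin provisioning)
filer1> vol options flex0 guarantee volume (switches it back to full reservation)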
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
None—A FlexVol volume with a guarantee of “none” reserves no space from the aggregate
during volume creation. Space is first taken from the aggregate when data is actually written to
the volume. The example here shows how, in contrast to the example above with the volume
guarantee, the volume creation does not reduce used space in the aggregate. Even LUN
creation, which by default has space reservation enabled, does not reserve space out of the
aggregate. Write operations to space-reserved LUNs in a volume with guarantee=none will fail if
the containing aggregate does not have enough available space. LUN reservation assures that the LUN has space in the volume, but guarantee=none doesn't assure that the volume has space in the aggregate. In simple terms, this is the setting for thin-provisioned volumes.
filer1> df -A -g aggr0
Aggregate total used avail capacity
aggr0 85GB 0GB 85GB 0%
aggr0/.snapshot 4GB 0GB 4GB 0%
filer1>
filer1> vol create noneflex -s none aggr0 20g
Creation of volume 'noneflex' with size 20g on hosting aggregate
'aggr0' has completed.
filer1>
filer1> df -g /vol/noneflex
Filesystem total used avail capacity Mounted on
/vol/noneflex/ 16GB 0GB 16GB 0% /vol/noneflex/
/vol/noneflex/.snapshot 4GB 0GB 4GB 0% /vol/noneflex/.snapshot
filer1>
filer1> df -A -g aggr0
Aggregate total used avail capacity
aggr0 85GB 0GB 85GB 0%
aggr0/.snapshot 4GB 0GB 4GB 0%
filer1> lun create -s 10g -t windows /vol/noneflex/foo
Mon Oct 22 18:17:28 IST [array1: lun.vdisk.spaceReservationNotHonored:notice]:
Space reservations in noneflex are not being honored, either because the volume
space guarantee is set to 'none' or the guarantee is currently disabled due to
lack of space in the aggregate.
lun create: created a LUN of size: 10.0g
filer1>
filer1> df -g /vol/noneflex
Filesystem total used avail capacity Mounted on
/vol/noneflex/ 16GB 10GB 6GB 0% /vol/noneflex/
/vol/noneflex/.snapshot 4GB 0GB 4GB 0% /vol/noneflex/.snapshot
filer1>
filer1> df -A -g aggr0
Aggregate total used avail capacity
aggr0 85GB 0GB 85GB 0%
aggr0/.snapshot 4GB 0GB 4GB 0%
--------------------------------------------------------------------------------------------------------------------------------------------
File—With guarantee=file the aggregate assures that space is always available for overwrites to space-reserved LUNs (that is, the LUN is made thick). Fractional reserve is set to 100% and is not adjustable with this type of guarantee. The “file” guarantee is basically the same as the “none” guarantee, with the exception that space reservations for LUNs and space-reserved files are honored. The example below looks the same as the previous example with guarantee=none, except that here the LUN creation takes space from the aggregate because it is a space-reserved object. Since the space reservation is honored, the “lun create” command also doesn’t issue the warning shown in the previous example. This is the setting for thick-provisioned LUNs.
filer1> df -A -g aggr0
Aggregate total used avail capacity
aggr0 85GB 0GB 85GB 0%
aggr0/.snapshot 4GB 0GB 4GB 0%
filer1>
filer1> vol create noneflex -s file aggr0 20g
Creation of volume 'noneflex' with size 20g on hosting aggregate
'aggr0' has completed.
filer1>
filer1> df -g /vol/noneflex
Filesystem total used avail capacity Mounted on
/vol/noneflex/ 16GB 0GB 16GB 0% /vol/noneflex/
/vol/noneflex/.snapshot 4GB 0GB 4GB 0% /vol/noneflex/.snapshot
filer1>
filer1> df -A -g aggr0
Aggregate total used avail capacity
aggr0 85GB 0GB 85GB 0%
aggr0/.snapshot 4GB 0GB 4GB 0%
filer1>
filer1> lun create -s 10g -t windows /vol/noneflex/foo
lun create: created a LUN of size: 10.0g
filer1>
filer1> df -g /vol/noneflex
Filesystem total used avail capacity Mounted on
/vol/noneflex/ 16GB 10GB 6GB 0% /vol/noneflex/
/vol/noneflex/.snapshot 4GB 0GB 4GB 0% /vol/noneflex/.snapshot
filer1>
filer1> df -A -g aggr0
Aggregate total used avail capacity
aggr0 85GB 10GB 75GB 12%
aggr0/.snapshot 4GB 0GB 4GB 0%
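To see how the different guarantees actually consume aggregate space, aggr show_space can be handy (a sketch only; the exact columns vary by Data ONTAP release):
filer1> aggr show_space -g aggr0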
Monday, 22 October 2012
Mapping an iSCSI LUN to a host in NetApp
Install the iSCSI initiator on the host.
Add the target by clicking the Add button in the initiator.
Once the host is able to log in to the NetApp, an iSCSI session message is logged on the NetApp console.
filer*> igroup add igroup_iscsi iqn.1991-05.com.microsoft:servername.net
Add the IQN of the host initiator to the igroup.
Then map the LUN to the igroup:
filer*> lun map /vol/ISCSI_Volume/q_ISCSI_Volume/q_ISCSI_Volume.lun igroup_iscsi 1
Once mapped, rescan for disks in Computer Management on the host.
Igroup creation and mapping to a LUN
qtree create /vol/vol1/qtree
igroup create -f -t vmware host1 50:01:43:90:00:c4:ae:b6
lun create -s 500g -t vmware -o noreserve /vol/vol1/qtree/lun1.lun ( thin provisioned Lun)
lun create -s 500g -t vmware /vol/vol1/qtree/lun1.lun ( thick provisioned)
lun map /vol/vol1/qtree/lun1.lun host1 1 (where host1 is the igroup for the host and 1 is the LUN ID; the LUN ID must be unique within the igroup)
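To verify the igroup contents and the LUN mapping afterwards (host1 is the igroup from the template above):
filer*> igroup show host1
filer*> lun show -m (lists each LUN together with the igroup and LUN ID it is mapped to)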
Saturday, 20 October 2012
CIFS share creation on NetApp
Create a volume (assuming we have created a volume named vol1).
Import the volume into the vFiler unit:
vfiler add vfiler1 /vol/vol1
(vfiler1 is the vFiler unit where we are hosting the CIFS share)
Qtree creation:
qtree create <complete qtree path>
qtree create /vol/vol1/qtree1
Change the security style of the qtree:
qtree security <complete qtree path> ntfs
qtree security /vol/vol1/qtree1 ntfs
cifs shares -add <cifsname> <complete qtree path>
cifs shares -add cifs1 /vol/vol1/qtree1 (where cifs1 is the CIFS share name)
Deleting the Everyone / Full Control access (by default, a newly created CIFS share grants Everyone full control):
cifs access -delete cifs1 everyone (this removes the Everyone access entry from the share)
Adding user groups to the CIFS share:
cifs access <cifsname> domain\groupname full
cifs access cifs1 domain1\group1 full
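To confirm the result (a quick check, run through the vFiler context since the share lives in vfiler1):
filer> vfiler run vfiler1 cifs shares cifs1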
Volume creation template
vol create <vol name> -s none <aggr name> 2t (creates a volume of 2 TB)
snap reserve <vol name> 0 (no space is reserved for snapshots)
vol autosize <vol name> -m 4044g -i 50g on (volume auto-grows up to about 4 TB in increments of 50g)
vol options <vol name> fractional_reserve 0 (fractional reserve is set to 0 as against the default of 100)
sis off /vol/<vol name> (no dedupe)
vol options <vol name> nosnap on
snap sched <vol name> 0 0 0 (no snapshots are scheduled)
_________________________________________________________
sis config -s sun-sat@1 /vol/<vol name> (dedupe scheduled at 1 am daily)
snap sched <vol name> 0 0 7@20 (snapshots scheduled daily at 20:00; 7 snapshots are retained at any point in time)
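As a worked example of the first template (the volume name, aggregate name, and sizes are hypothetical):
filer> vol create vol_data1 -s none aggr1 2t
filer> snap reserve vol_data1 0
filer> vol autosize vol_data1 -m 4044g -i 50g on
filer> vol options vol_data1 fractional_reserve 0
filer> sis off /vol/vol_data1
filer> vol options vol_data1 nosnap on
filer> snap sched vol_data1 0 0 0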
Qtree SnapMirror versus Volume SnapMirror
Qtree SnapMirror (QSM) compared with volume SnapMirror (VSM)
QSM: Unaffected by disk size or disk checksum differences between the source and destination irrespective of type of volumes used (traditional or flexible)
VSM:Unaffected by disk size or disk checksum differences between the source and destination if flexible volumes are used
Affected by disk size or disk checksum differences between the source and destination if traditional volumes are used
QSM:Destination volume must have free space available equal to approximately 105% of the data being replicated
VSM:Destination volume must be equal or larger than the source volume
QSM:Sensitive to the number of files in a qtree due to the nature of the qtree replication process. The initial phase of scanning the inode file may be longer with larger number of files
VSM:Not sensitive to the number of files in a volume
QSM:Qtree SnapMirror destinations can be placed on the root volume of the destination storage system
VSM:The root volume cannot be used as a destination for volume SnapMirror
QSM: Replicates only one Snapshot copy of the source volume where the qtree resides (the copy created by the SnapMirror software at the time of the transfer) to the destination qtree. Therefore, qtree SnapMirror allows independent Snapshot copies on the source and destination
VSM:Replicates all Snapshot copies on the source volume to the destination volume. Similarly, if a Snapshot copy is deleted on the source system, volume SnapMirror deletes the Snapshot copy at the next update. Therefore volume SnapMirror is typically recommended for disaster recovery scenarios, because the same data exists on both source and destination. Note that the volume SnapMirror destination always keeps an extra SnapMirror Snapshot copy
QSM:A qtree SnapMirror destination volume might contain replicated qtrees from multiple source volumes on one or more systems and might also contain qtrees or non-qtree data not managed by SnapMirror software
VSM:A volume SnapMirror destination volume is always a replica of a single source volume
QSM:Multiple relationships would have to be created to replicate all qtrees in a given volume by using qtree-based replication
VSM:Volume-based replication can take care of this in one relationship (as long as the one volume contains all relevant qtrees)
QSM:For low-bandwidth wide area networks, qtree SnapMirror can be initialized using the LREP tool
VSM:Volume SnapMirror can be initialized using a tape device (SnapMirror to Tape) by using the snapmirror store and snapmirror retrieve commands.
QSM:Qtree SnapMirror can only occur in a single hop. Cascading of mirrors (replicating from a qtree SnapMirror destination to another qtree SnapMirror source) is not supported
VSM: Cascading of mirrors is supported for volume SnapMirror
QSM: Qtree SnapMirror updates are not affected by backup operations. This allows a strategy called continuous backup, in which traditional backup windows are eliminated and tape library investments are fully used.
VSM:Volume SnapMirror updates can occur concurrently with a dump operation of the destination volume to tape by using the dump command or NDMP-based backup tools. However, if the volume SnapMirror update involves a deletion of the Snapshot copy that the dump operation is currently writing to tape, the SnapMirror update will be delayed until the dump operation is complete
QSM:The latest Snapshot copy is used by qtree SnapMirror for future updates if the –s flag is not used
VSM:Volume SnapMirror can use any common Snapshot copy for future updates
QSM: Qtrees in source deduplicated volumes that are replicated with qtree SnapMirror are full size at the destination. Even though the source volume is deduplicated, qtree SnapMirror will expand the data and send the entire data to the destination
VSM: Source deduplicated volumes that are replicated with volume SnapMirror remain deduplicated at the destination. Deduplication savings also extend to bandwidth savings because volume SnapMirror only transfers unique blocks
QSM: Source and destination volumes can be independently deduplicated
VSM: The destination volume is read-only and therefore cannot be independently deduplicated. If deduplication savings are desired on the destination volume, then the source volume must be deduplicated
QSM: The files in the file system gain new identity (inode numbers etc.) in the destination system. Therefore, file handles cannot be migrated to the destination system
VSM:The files in the file system have the same identity on both source and destination system
QSM: LUN clones can be created on the destination volume, but not in the destination qtree
VSM:LUN clones cannot be created on the destination volume because the volume is read-only. However, LUN clones can be created on a FlexClone volume because the FlexClone volume is writable
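To make the distinction concrete (the filer, volume, and qtree names below are hypothetical), the two relationship types are initialized on the destination with different path styles:
dstfiler> snapmirror initialize -S srcfiler:/vol/vol1/qtree1 dstfiler:/vol/vol_dst/qtree1 (qtree SnapMirror: qtree paths on both ends)
dstfiler> snapmirror initialize -S srcfiler:vol1 dstfiler:vol_dst (volume SnapMirror: volume names; the destination volume must be restricted first)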
Difference between SnapVault and qtree-based SnapMirror
The following are some of the key differences between SnapVault and the qtree-based
SnapMirror feature.
SnapMirror uses the same software and licensing on the source appliance and the destination
server.
SnapVault software has SnapVault primary systems and SnapVault secondary systems, which
provide different functionality. The SnapVault primaries are the sources for data that is to be backed up.
The SnapVault secondary is the destination for these backups.
Note: As of Data ONTAP 7.2.1, SnapVault primary and SnapVault secondary can be installed on
different heads of the same cluster. Data ONTAP 7.3 supports installing both the primary and
secondary on a standalone system.
SnapVault destinations are typically read-only. Unlike SnapMirror destinations, they cannot be made
into read-write copies of the data. This means that backup copies of data stored on the SnapVault
server can be trusted to be true, unmodified versions of the original data.
Note: A SnapVault destination can be made into read-write with the SnapMirror and SnapVault bundle.
SnapMirror transfers can be scheduled every few minutes; SnapVault transfers can be scheduled at
most once per hour.
Multiple qtrees within the same source volume consume one Snapshot copy each (on the source system) when qtree-based SnapMirror software is used, but consume only one Snapshot copy total when SnapVault software is used.
The SnapMirror software deletes SnapMirror Snapshot copies when they are no longer needed for replication purposes. SnapVault software retains or deletes the copies according to a specified schedule.
SnapMirror relationships can be reversed, allowing the source to be resynchronized with changes made
at the destination. SnapVault provides the ability to transfer data from the secondary to the primary only
for restore purposes. The direction of replication cannot be reversed.
SnapMirror can be used to replicate data only between NetApp storage systems running Data ONTAP.
SnapVault can be used to back up both NetApp and open systems primary storage, although the
secondary storage system must be a FAS system or a NearStore system.
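As a small illustration of the roles (the filer names and paths below are hypothetical), a SnapVault relationship is started from the secondary, and its state can then be checked with snapvault status:
secondary> snapvault start -S primary:/vol/vol1/qtree1 /vol/sv_vol/qtree1
secondary> snapvault status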
How to set up SnapMirror on NetApp
1. You must license SnapMirror on both filers (this is mandatory).
2. Enable SnapMirror on both filers.
pri> options snapmirror.enable on
dr> options snapmirror.enable on
3. Turn on the SnapMirror log:
pri> options snapmirror.log.enable on
dr> options snapmirror.log.enable on
4. Allow the destination filer access to the source filer. This is done by adding the destination filer's hostname (or IP address) to /etc/snapmirror.allow on the source filer, as in the sketch below.
pri> wrfile /etc/snapmirror.allow
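If you prefer appending a single entry rather than retyping the whole file (wrfile with no options overwrites it), wrfile -a can be used and the result checked with rdfile; dr here is the destination filer from this example:
pri> wrfile -a /etc/snapmirror.allow dr
pri> rdfile /etc/snapmirror.allow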
5. To prepare a destination volume for replication, first create the volume and then restrict it (on the destination filer).
6. Now initialize a volume-based replication. This is performed on the destination filer.
ctrldr> snapmirror initialize -S ctrlpri:vol1 ctrldr:volDR
Monitor with snapmirror status on the destination.
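For completeness, a minimal end-to-end sketch of steps 5 and 6 on the destination (the aggregate name and size are hypothetical; for volume SnapMirror the destination volume must be at least as large as the source):
ctrldr> vol create volDR aggr1 100g
ctrldr> vol restrict volDR
ctrldr> snapmirror initialize -S ctrlpri:vol1 ctrldr:volDR
ctrldr> snapmirror status volDR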