Thursday, 16 May 2013

Creating FTP on Netapp


To make a volume accessible over FTP, follow the steps outlined below.
The default FTP options on the filer are as follows:

Default:
filer1> options ftp
ftpd.3way.enable off
ftpd.anonymous.enable on
ftpd.anonymous.home_dir /vol/vol0
ftpd.anonymous.name anonymous
ftpd.auth_style mixed
ftpd.bypass_traverse_checking off
ftpd.dir.override
ftpd.dir.restriction off
ftpd.enable off
ftpd.explicit.allow_secure_data_conn on
ftpd.explicit.enable off
ftpd.idle_timeout 900s (value might be overwritten in takeover)
ftpd.implicit.enable off
ftpd.ipv6.enable off
ftpd.locking none
ftpd.log.enable on
ftpd.log.filesize 512k
ftpd.log.nfiles 6
ftpd.max_connections 500 (value might be overwritten in takeover)
ftpd.max_connections_threshold 0% (value might be overwritten in takeover)
ftpd.tcp_window_size 28960

Set the following options to expose the volume (here the qtree /vol/vol1/qtree) as the FTP share:
filer1> options ftp
ftpd.3way.enable on
ftpd.anonymous.enable on
ftpd.anonymous.home_dir /vol/vol1/qtree
ftpd.anonymous.name anonymous
ftpd.auth_style mixed
ftpd.bypass_traverse_checking off
ftpd.dir.override /vol/vol1/qtree
ftpd.dir.restriction on
ftpd.enable on
ftpd.explicit.allow_secure_data_conn on
ftpd.explicit.enable off
ftpd.implicit.enable off
ftpd.ipv6.enable off
ftpd.locking none
ftpd.log.enable on
ftpd.log.filesize 512k
ftpd.log.nfiles 6
ftpd.tcp_window_size 28960
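
Each of these values is changed individually with the options command. A minimal sketch of the changes made above (the path /vol/vol1/qtree is just an example; substitute your own volume or qtree):

filer1> options ftpd.enable on
filer1> options ftpd.anonymous.home_dir /vol/vol1/qtree
filer1> options ftpd.dir.override /vol/vol1/qtree
filer1> options ftpd.dir.restriction on
filer1> options ftpd.3way.enable on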



Thursday, 4 April 2013

Moving a volume across aggregates (Data Motion)

Many of us end up in a situation where one aggregate is nearly empty while another is full or heavily used. To balance them, NetApp provides an online option called vol move, which moves a volume from one aggregate to another on the fly, without downtime.

A few conditions for moving:

vol move can be performed only between aggregates on the same controller.
This feature is available only from ONTAP 8.0 onwards.
The volume should not be serving any CIFS or NFS data; only FC (SAN) data is supported.
Make sure the volume is unexported using exportfs -u before starting any vol move, as every volume is added to the exports file by default.

Syntax:

vol move start <vol name> <destination aggregate>

Progress of the vol move can be monitored with snapmirror status.

Example of vol move in action: 


filer1*> exportfs -u /vol/vol_003

filer1*> vol move start vol_003 aggr1

filer1*> Creation of volume 'ndm_dstvol_1368700289' with size 21990232552  on containing aggregate
'aggr1' has completed.
Volume 'ndm_dstvol_1368700289' is now restricted.
Thu May 16 16:01:45 IST [filer1:vol.move.transferStart:info]: Baseline transfer from volume vol_003 to ndm_dstvol_1368700289 started.
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.


filer1*> snapmirror status
Snapmirror is on.
Source                          Destination                          State          Lag        Status
127.0.0.1:vol_003       filer1:ndm_dstvol_1368700289  Uninitialized  -          Transferring  (19 GB done)
filer1:vol_001   filer2:vol_001        Source         12:19:43   Idle
filer1:vol_002   filer2:vol_002        Source         11:49:35   Idle
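
Besides snapmirror status, the move itself can be monitored (and, if required, paused, resumed or aborted) with the vol move subcommands. A minimal sketch, assuming the same volume as above (exact output varies by release):

filer1*> vol move status vol_003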

Tuesday, 2 April 2013

Adjusting the TCP window size for a SnapMirror relationship

Recently I learnt how to adjust the TCP window size for a SnapMirror relationship, and I thought it would be worth sharing here.

Many of us complain that we never fully utilize the bandwidth allocated for SnapMirror. We can use that bandwidth effectively provided we set the TCP window size correctly in the snapmirror.conf file.

Let me explain with an example: 

The following are the prerequisites for adjusting the window size:


  • Ascertain the round-trip time between the source and the destination for the SnapMirror relationship (this can be obtained from the network team, or measured with a ping from the source filer to the destination filer). In this example it is 80 ms.
  • Determine the bandwidth available for the SnapMirror relationship (400 Mbps in this example).
  • The default TCP window size for a SnapMirror relationship is 1,994,752 bytes.
  • Adjustment of the TCP window size is applicable only for asynchronous SnapMirror relationships.
  • For qtree SnapMirror relationships, TCP window sizes higher than the default value are not supported.

Formula: window size = (round-trip time) × (available bandwidth)

In our example that is 80 ms × 400 Mbps.

Note that Mbps (megabits per second) and MBps (megabytes per second) are completely different units; rather than risking a mistake, you can use an online unit calculator such as the one at http://www.numion.com/calculators/units.html.

Convert 400 Mbps to bits per second: 400,000,000 bps.

Convert 80 ms to seconds by dividing by 1000: 0.08 s.

Now the formula gives: window size = (((0.08 s) × (400,000,000 bps)) / 8) bytes = 4,000,000 bytes.

We divide by 8 to convert bits to bytes.

So the TCP window size for this SnapMirror relationship should be set to 4,000,000 bytes.

It can be added to the /etc/snapmirror.conf file on the destination filer as below:

sourcefiler:src_vol destinationfiler:dst_vol wsize=4000000 * * * *
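
The snapmirror.conf file lives in the root volume of the destination filer and is usually edited from an NFS or CIFS client. The entry can be double-checked from the console; a minimal sketch, assuming the volume names above:

destinationfiler> rdfile /etc/snapmirror.conf
sourcefiler:src_vol destinationfiler:dst_vol wsize=4000000 * * * *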

To apply a TCP window size to all relationships on the filer rather than per relationship, the window size has to be set through the SnapMirror options instead of individual snapmirror.conf entries.

Thursday, 28 March 2013

How to identify a failed hard drive in NetApp


Filer> aggr status -f

Broken disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
failed          2a.33   2a    2   1   FC:B   0  FCAL 15000 418000/856064000  420156/860480768


Filer> priv set advanced
Warning: These advanced commands are potentially dangerous; use
         them only when directed to do so by NetApp
         personnel.
Filer*> blink_on 2a.33 (here I make the LED of the failed hard drive blink, so I have a visual indication of the failed drive)
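
Once the drive has been physically located and replaced, the LED can be turned off again and the privilege level returned to normal. A minimal sketch (blink_off is the advanced-mode counterpart of blink_on; if it is not available on your release, check the man pages for the equivalent):

Filer*> blink_off 2a.33
Filer*> priv set admin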

How to create LUN clones


Recently I had a question from one of my friends on how to restore from a snapshot that had been created.

Here I am explaining it with the lun clone command, where I clone a LUN from the required snapshot. In this example, /vol/vol1/q_vol1_004/q_vol1_004.lun is the LUN they wanted restored, using the hourly.4 snapshot.

Filer1*> snap list vol1
Volume vol1
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  1% ( 1%)    0% ( 0%)  Jul 08 04:30  filer1_vol1.274 (snapmirror)
  1% ( 0%)    0% ( 0%)  Jul 07 23:02  hourly.0
  3% ( 2%)    1% ( 1%)  Jul 06 23:02  hourly.1
  5% ( 1%)    1% ( 0%)  Jul 05 23:02  hourly.2
  7% ( 3%)    2% ( 1%)  Jul 04 23:02  hourly.3
  9% ( 2%)    2% ( 0%)  Jul 03 23:02  hourly.4

The command above lists the snapshots that are available for restore.

Filer1*> qtree status vol1
Volume   Tree     Style Oplocks  Status
-------- -------- ----- -------- ---------
vol1          unix  enabled  normal
vol1 q_vol1_001 unix  enabled  normal
vol1 q_vol1_002 unix  enabled  normal
vol1 q_vol1_003 unix  enabled  normal
vol1 q_vol1_004 unix  enabled  normal

In the above command we see the list of qtrees created for that volume


filer1*> qtree create /vol/vol1/q_vol1_005  (I am creating a new qtree on the same volume so I can place the LUN clone, taken from a particular snapshot, in this qtree)

filer1*> lun clone create /vol/vol1/q_vol1_005/q_vol1_005.lun -o noreserve -b /vol/vol1/q_vol1_004/q_vol1_004.lun hourly.4 (here I am creating a new LUN backed by the hourly.4 snapshot)

filer1*> lun show (the new clone shows up, not yet mapped to any host)
        /vol/vol1/q_vol1_005/q_vol1_005.lun  500.1g (536952700920)  (r/w, online)

filer1*> lun map /vol/vol1/q_vol1_005/q_vol1_005.lun host1 22 (here I am mapping it to the host1 igroup with LUN ID 22)

where 22 is the LUN ID
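
Note that hourly.4 stays busy (and cannot be deleted) for as long as the clone is backed by it. If the clone needs to become independent of the snapshot, it can be split off; a minimal sketch using the clone path from above:

filer1*> lun clone split start /vol/vol1/q_vol1_005/q_vol1_005.lun
filer1*> lun clone split status /vol/vol1/q_vol1_005/q_vol1_005.lun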

Friday, 8 March 2013

32 to 64 bit aggregates



64-bit aggregates were introduced in ONTAP 8.0; expanding an existing 32-bit aggregate to 64-bit in place is available only from ONTAP 8.1.
Please find below a few FAQs on the same:

WILL AN UPDATE TO DATA ONTAP 8.1 TRIGGER THE EXPANSION PROCESS?

No. The expansion process can only be triggered with the addition of disks if the size of the aggregate
exceeds 16TB.

CAN I SHRINK AN AGGREGATE?

No. The size of an aggregate cannot be decreased.

CAN I EXPAND MY 32-BIT ROOT AGGREGATE TO 64-BIT?

If there is a strong requirement to expand your root aggregate beyond 16TB, you can add disks and
trigger the 64-bit expansion on the root aggregate.

CAN I CONVERT MY 64-BIT AGGREGATE TO 32-BIT?

No. This is not supported irrespective of the size of the aggregate.

CAN I EXPAND MY 32-BIT AGGREGATE TO 64-BIT WITHOUT THE ADDITION OF DISKS?

No. The administrator will have to add disks to trigger the 64-bit expansion.

ARE BLOCKS IN SNAPSHOT COPIES ALSO CONVERTED TO THE 64-BIT FORMAT?

No. Snapshot copies are read-only and are not updated by the expansion process. The expansion
process updates indirect blocks in the active file system.

DOES THE EXPANSION PROCESS RESTART FROM THE BEGINNING IF INTERRUPTED?

No. The expansion process maintains checkpoints. If the process is interrupted, it resumes from the latest
checkpoint.
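
A rough sketch of how the expansion is triggered and monitored in 7-Mode. Here aggr1 and the disk count of 5 are placeholders, and the exact option syntax can vary by release, so check the aggr man page on your system first:

filer1> aggr add aggr1 -64bit-upgrade check 5   (preview whether adding the disks would trigger the 64-bit expansion and whether there is enough free space)
filer1> aggr add aggr1 -64bit-upgrade normal 5  (add the disks and start the expansion)
filer1> aggr 64bit-upgrade status aggr1         (monitor the progress of the expansion)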

Tuesday, 20 November 2012

Do you know

Restoring a volume to an older snapshot will delete all the snapshots newer than the one you restore to.
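
For example, a volume-level SnapRestore (this requires the SnapRestore license) would look like the sketch below; once it completes, every snapshot on vol1 newer than hourly.4 is gone:

filer1> snap restore -t vol -s hourly.4 vol1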

Tuesday, 6 November 2012

Do you Know

Check out my all-new "Do you know" series on NetApp (added as a new tab on the home page).

Ontap upgrade from 8.0.1 to 8.1.1

Here I will be sharing, step by step, how to do an ONTAP upgrade to 8.1.1.

1) Log in to the NOW site, go to My AutoSupport, click Upgrade Advisor and select the ONTAP version you are planning to upgrade to.
2) You will have the option to export the steps as either an Excel sheet or a PDF.
3) Just follow the Upgrade Advisor.
4) The Upgrade Advisor will list caution points and errors, in case there are any in your setup.
5) The first step is to clear those and then start the upgrade.
6) Ensure that CPU utilization does not exceed 50% before beginning an NDU upgrade.
7) If you are running SnapDrive software on Windows hosts connected to the filer, check that the version is supported with the ONTAP release you are upgrading to.
8)Before upgrading Data ONTAP, monitor CPU and disk utilization for 30 seconds by
entering the following command at the console of each storage controller:
sysstat -c 10 -x 3
8a) Make sure that multipathing is configured properly on all the hosts.
9) Download perfstat and run it on a client as follows: perfstat -f filername -t 4 -i 5 > perfstatname.out
10) Download the system files for 8.1.1 (811_q_image.tgz) from the NetApp support site.
11) Make sure you upgrade all the disks to the latest firmware at least 24 hours before the ONTAP upgrade.
12) Contact NetApp Support and check /etc/messages for any obvious errors; e.g. disk
errors, firmware errors, etc
13) Back up the /etc/hosts and /etc/rc files from the filer's root volume to a temporary directory on a Windows box.
14) Copy the system image file (811_q_image.tgz) to the /etc/software directory on the node, from a Windows box as an Administrator.
15) Before starting the upgrade, send an ASUP: options autosupport.doit "starting_NDU 8.1.1"
16) software update 811_q_image.tgz -r
If you are performing a Data ONTAP NDU (or backout), you must perform this step on
both nodes before performing the takeover and giveback steps.
17) Check to see if the boot device has been properly updated:
controller1> version -b
The primary kernel should be 8.1.1.
18)Terminate CIFS on the node to be taken over (controller2 in this case):
controller2> cifs terminate
19) controller1> cf takeover
20) Wait 8 minutes before proceeding to the next step.
Doing so ensures the following conditions:
- The node that has taken over is serving data to the clients.
- Applications on the clients have recovered from the pause in I/O that occurs during
takeover.
- Load on the storage system has returned to a stable point.
- Multipathing (if deployed) has stabilized.
After controller2 reboots and displays "waiting for giveback", give back the data service:

21) controller1> cf giveback
22) Terminate CIFS on the node to be taken over (controller1):
23) controller1> cifs terminate
24) From the newly upgraded node controller2, take over the data service from
controller1:
controller2> cf takeover -n
25) Halt, and then restart the first node:
controller1> halt
bye (entered at the boot environment prompt once the node has halted)
26) controller2> cf giveback
27) If giveback is not initiated, enter the cf giveback command with the -f option:
cf giveback -f
28) controller1> version (check that the version has been updated to ONTAP 8.1.1)
29) controller1> options autosupport.doit "finishing_NDU 8.1.1"
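
It is also worth confirming that the HA pair is healthy before you start and again after the final giveback; a quick sketch (the exact output wording varies by release):

controller1> cf status
Cluster enabled, controller2 is up.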

Tuesday, 23 October 2012

How to predict the snapmirror transfer size




Many of us want to know how much data will be transferred when we initiate a SnapMirror update. I have put together a real-time example here; hope it helps.

vol1 is the volume we are transferring (I have kept the same name at both destination and source for easy understanding).

sourcefiler> snap list vol1
Volume vol1
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  2% ( 2%)    1% ( 1%)  Oct 23 04:15  destinationfiler(0123456789)_vol1.745 (snapmirror)
  5% ( 4%)    2% ( 1%)  Oct 22 23:01  hourly.0
  9% ( 4%)    3% ( 1%)  Oct 21 23:01  hourly.1
 12% ( 4%)    4% ( 1%)  Oct 20 23:01  hourly.2
 18% ( 7%)    6% ( 2%)  Oct 19 23:04  hourly.3
 21% ( 5%)    7% ( 1%)  Oct 18 23:01  hourly.4
 25% ( 6%)    9% ( 2%)  Oct 17 23:02  hourly.5
 28% ( 5%)   11% ( 1%)  Oct 16 23:01  hourly.6

Above, we check the current snap list for the volume at the source (it lists the base snapshot for the source).

destinationfiler*> snap list vol1
Volume vol1
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Oct 23 04:15  destinationfiler(0123456789)_vol1.745
  4% ( 4%)    1% ( 1%)  Oct 22 23:01  hourly.0
  8% ( 4%)    2% ( 1%)  Oct 22 04:15  destinationfiler(0123456789)_vol1.744
  8% ( 1%)    2% ( 0%)  Oct 21 23:01  hourly.1
 11% ( 4%)    3% ( 1%)  Oct 20 23:01  hourly.2
 17% ( 7%)    6% ( 2%)  Oct 19 23:04  hourly.3
 20% ( 5%)    7% ( 1%)  Oct 18 23:01  hourly.4
 25% ( 6%)    9% ( 2%)  Oct 17 23:02  hourly.5
 27% ( 5%)   10% ( 1%)  Oct 16 23:01  hourly.6


Here we check which snapshot was last used for the SnapMirror.

sourcefiler*> snapmirror destinations -s
Path       Snapshot                       Destination

vol1 destinationfiler(0123456789)_vol1.745 destinationfiler:vol1 (this command tells us which snapshot was used for the SnapMirror; in this case it is .745)

I am starting a SnapMirror update now:

destinationfiler*> snapmirror update vol1
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log. ( we started a snapmirror update for vol1)


sourcefiler*> snap list vol1   (once the SnapMirror update is initiated, we see that a new snapshot has been created, .746 in our case)
Volume vol1
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Oct 23 16:46  destinationfiler(0123456789)_vol1.746 (busy,snapmirror)
  2% ( 2%)    1% ( 1%)  Oct 23 04:15  destinationfiler(0123456789)_vol1.745 (busy,snapmirror)
  5% ( 4%)    2% ( 1%)  Oct 22 23:01  hourly.0
  9% ( 4%)    3% ( 1%)  Oct 21 23:01  hourly.1
 12% ( 4%)    4% ( 1%)  Oct 20 23:01  hourly.2
 18% ( 7%)    6% ( 2%)  Oct 19 23:04  hourly.3
 21% ( 5%)    7% ( 1%)  Oct 18 23:01  hourly.4
 25% ( 6%)    9% ( 2%)  Oct 17 23:02  hourly.5
 28% ( 5%)   11% ( 1%)  Oct 16 23:01  hourly.6

destinationfiler*> snap list vol1   (on the destination side we see that .744 has been deleted and it currently references only .745)
Volume vol1
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Oct 23 04:15  destinationfiler(0123456789)_vol1.745
  4% ( 4%)    1% ( 1%)  Oct 22 23:01  hourly.0
  4% ( 1%)    1% ( 0%)  Oct 21 23:01  hourly.1
  7% ( 4%)    2% ( 1%)  Oct 20 23:01  hourly.2
 14% ( 7%)    4% ( 2%)  Oct 19 23:04  hourly.3
 17% ( 5%)    6% ( 1%)  Oct 18 23:01  hourly.4
 22% ( 6%)    8% ( 2%)  Oct 17 23:02  hourly.5
 25% ( 5%)    9% ( 1%)  Oct 16 23:01  hourly.6


We run snap delta to check the difference between the snapshots currently in use, .745 and .746 in our case, which shows 25268240 KB as below:

sourcefiler*> snap delta -V vol1 destinationfiler(0123456789)_vol1.745 destinationfiler(0123456789)_vol1.746 

Volume vol1
working...

From Snapshot   To                   KB changed  Time         Rate (KB/hour)
--------------- -------------------- ----------- ------------ ---------------
destinationfiler(0123456789)_vol1.745 destinationfiler(0123456789)_vol1.746 25268240       0d 12:31  2016440.503

This 25268240 KB is the amount of data that will be transferred during this SnapMirror update.
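
As a side note, snap delta can also be run with just the volume name, which prints the changed blocks between every pair of adjacent snapshots on the volume; a quick sketch (output trimmed):

sourcefiler*> snap delta vol1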

destinationfiler*> snapmirror status
Snapmirror is on.
Source                                         Destination                        State          Lag        Status
sourcefiler-my_vif-103:vol1   destinationfiler:vol1      Snapmirrored   00:46:56   Idle
sourcefiler-my_vif-103:vol1   destinationfiler:vol1      Snapmirrored   12:34:40   Transferring  (9505 MB done)

We can verify this after the SnapMirror update is done by using the snapmirror status -l command at the destination.

destinationfiler*> snapmirror status -l vol1
Snapmirror is on.

Source:                 sourcefiler-my_vif-13:vol1
Destination:            destinationfiler:vol1
Status:                 Idle
Progress:               -
State:                  Snapmirrored
Lag:                    00:08:53
Mirror Timestamp:       Tue Oct 23 16:46:55 IST 2012
Base Snapshot:          destinationfiler(0123456789)_vol1.746
Current Transfer Type:  -
Current Transfer Error: -
Contents:               Replica
Last Transfer Type:     Update
Last Transfer Size:     25268248 KB (this is the value we got using the snap delta command as described earlier; there is an 8 KB difference from the earlier number and I am not sure why, maybe someone can point it out)
Last Transfer Duration: 00:07:49
Last Transfer From:     sourcefiler-my_vif-13:vol1

Your comments welcome