Thursday, 16 May 2013

Creating an FTP share on NetApp


To serve a volume over FTP, follow the steps outlined below.
The default FTP options on a filer look like this:

Default:
filer1> options ftp
ftpd.3way.enable off
ftpd.anonymous.enable on
ftpd.anonymous.home_dir /vol/vol0
ftpd.anonymous.name anonymous
ftpd.auth_style mixed
ftpd.bypass_traverse_checking off
ftpd.dir.override
ftpd.dir.restriction off
ftpd.enable off
ftpd.explicit.allow_secure_data_conn on
ftpd.explicit.enable off
ftpd.idle_timeout 900s (value might be overwritten in takeover)
ftpd.implicit.enable off
ftpd.ipv6.enable off
ftpd.locking none
ftpd.log.enable on
ftpd.log.filesize 512k
ftpd.log.nfiles 6
ftpd.max_connections 500 (value might be overwritten in takeover)
ftpd.max_connections_threshold 0% (value might be overwritten in takeover)
ftpd.tcp_window_size 28960

Options to change so that the volume is served over FTP (the listing below shows the end state):
filer1> options ftp
ftpd.3way.enable on
ftpd.anonymous.enable on
ftpd.anonymous.home_dir /vol/vol1/qtree
ftpd.anonymous.name anonymous
ftpd.auth_style mixed
ftpd.bypass_traverse_checking off
ftpd.dir.override /vol/vol1/qtree
ftpd.dir.restriction on
ftpd.enable on
ftpd.explicit.allow_secure_data_conn on
ftpd.explicit.enable off
ftpd.implicit.enable off
ftpd.ipv6.enable off
ftpd.locking none
ftpd.log.enable on
ftpd.log.filesize 512k
ftpd.log.nfiles 6
ftpd.tcp_window_size 28960
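
The changed values above can be applied one at a time with the standard options command. A minimal sketch using the same qtree path as the listing (substitute your own volume/qtree path):

filer1> options ftpd.enable on
filer1> options ftpd.anonymous.home_dir /vol/vol1/qtree
filer1> options ftpd.dir.override /vol/vol1/qtree
filer1> options ftpd.dir.restriction on
filer1> options ftpd.3way.enable on

Re-running options ftp afterwards should show the values as in the listing above.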



Thursday, 4 April 2013

Moving a volume across aggregates (DataMotion)

Many of us end up in a situation where one aggregate is almost empty while another is full or heavily used. To rebalance, NetApp provides an online option called vol move, which moves a volume from one aggregate to another on the fly, without downtime.

A few conditions for the move:

  • vol move can be performed only between aggregates on the same controller.
  • The feature is available only from Data ONTAP 8.0 onwards.
  • The volume must not be serving CIFS or NFS; only volumes serving FC/iSCSI LUNs are supported.
  • Make sure the volume is unexported with exportfs -u before the vol move, because every volume is added to /etc/exports by default (see the quick check after this list).
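
As mentioned in the last point, it is worth confirming the volume really is unexported before starting. The path below matches the example further down:

filer1> exportfs -u /vol/vol_003
filer1> exportfs
(the in-memory export list should no longer contain /vol/vol_003)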

Syntax:

vol move start <vol_name> <destination_aggregate>

Progress of the vol move can be monitored with snapmirror status.

Example of vol move in action: 


filer1*> exportfs -u /vol/vol_003

filer1*> vol move start vol_003 aggr1

filer1*> Creation of volume 'ndm_dstvol_1368700289' with size 21990232552  on containing aggregate
'aggr1' has completed.
Volume 'ndm_dstvol_1368700289' is now restricted.
Thu May 16 16:01:45 IST [filer1:vol.move.transferStart:info]: Baseline transfer from volume vol_003 to ndm_dstvol_1368700289 started.
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.


filer1*> snapmirror status
Snapmirror is on.
Source                          Destination                          State          Lag        Status
127.0.0.1:vol_003       filer1:ndm_dstvol_1368700289  Uninitialized  -          Transferring  (19 GB done)
filer1:vol_001   filer2:vol_001        Source         12:19:43   Idle
filer1:vol_002   filer2:vol_002        Source         11:49:35   Idle

Tuesday, 2 April 2013

Adjusting the TCP window size for a SnapMirror relationship

Recently I learnt how to adjust the TCP window size for a SnapMirror relationship, and thought it would be worth sharing here.

Most of us complain that we don't fully utilize the bandwidth allocated to SnapMirror. We can use that bandwidth much more effectively if we set the TCP window size correctly in the snapmirror.conf file.

Let me explain with an example: 

The following are the prerequisites for adjusting the window size:


  • Ascertain the round-trip time between the source and the destination for the SnapMirror relationship. This can come from the network team, or it can be measured with ping from the source filer to the destination filer (see the sketch after this list). In this example: 80 ms.
  • Determine the bandwidth available for the SnapMirror relationship. In this example: 400 Mbps.
  • The default TCP window size for a SnapMirror relationship is 1,994,752 bytes.
  • Adjustment of the TCP window size is applicable only to asynchronous SnapMirror relationships.
  • For qtree SnapMirror relationships, TCP window sizes higher than the default value are not supported.
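
If the network team cannot give you an RTT figure, a rough value can be measured from the source filer itself. A minimal sketch; the destination filer name is a placeholder and the exact ping flags and statistics format differ slightly between releases:

filer1> ping -s destinationfiler
(let it run for a few packets, then interrupt it; the statistics show round-trip min/avg/max in ms)

Use the average round-trip time, here roughly 80 ms, in the formula below.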

Formula: window size = (round-trip time) × (available bandwidth)

In our example that is 80 ms × 400 Mbps.

Note that Mbps (megabits per second) and MBps (megabytes per second) are completely different units; to avoid mistakes you can use an online converter such as the one here: http://www.numion.com/calculators/units.html

Convert 400 Mbps to bits per second: 400,000,000 bps.

Convert 80 ms to seconds by dividing by 1,000: 0.08 s.

Now the calculation is: window size = ((0.08 s × 400,000,000 bps) / 8) bytes = 4,000,000 bytes.

We divide by 8 to convert bits into bytes.

So set the TCP window size for this SnapMirror relationship to 4,000,000 bytes.

It can be added to /etc/snapmirror.conf as below:

sourcefiler:src_vol destinationfiler:dst_vol wsize=4000000 * * * *
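
The entry can be appended from the filer console itself (or by editing /etc/snapmirror.conf over NFS/CIFS from an admin host). A minimal sketch using the placeholder filer and volume names above; be careful that wrfile without -a would overwrite the whole file:

filer1> wrfile -a /etc/snapmirror.conf sourcefiler:src_vol destinationfiler:dst_vol wsize=4000000 * * * *
filer1> rdfile /etc/snapmirror.conf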

To set the TCP window size globally for all SnapMirror relationships, it has to be set through the SnapMirror options instead of per relationship in snapmirror.conf.
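
You can list the SnapMirror-related options to see what your release offers; on the releases I have used, the global setting shows up as snapmirror.window_size (with the 1,994,752-byte default mentioned above), but verify the option name on your version before relying on it:

filer1> options snapmirror
(lists all snapmirror.* options and their current values)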

Thursday, 28 March 2013

How to identify a failed hard drive in NetApp


Filer> aggr status -f

Broken disks

RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------       ------  ------------- ---- ---- ---- ----- --------------    --------------
failed          2a.33   2a    2   1   FC:B   0  FCAL 15000 418000/856064000  420156/860480768


Filer> priv set advanced
Warning: These advanced commands are potentially dangerous; use
         them only when directed to do so by NetApp
         personnel.
Filer*> blink_on 2a.33   (this makes the failed drive's LED blink, giving a visual indication of which drive has failed)
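
Once the drive has been located or replaced, the LED can be turned off again and the privilege level dropped back to normal. A short sketch with the same disk ID; on the releases I have used, blink_off is the counterpart of blink_on:

Filer*> blink_off 2a.33
Filer*> priv set admin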

How to create LUN clones


Recently I had a question from one of my friends about how to restore data from a snapshot that had been created.

Here I am explaining it with the lun clone command, where I clone a LUN from the required snapshot. In our example, /vol/vol1/q_vol1_004/q_vol1_004.lun is the LUN they wanted restored from the hourly.4 snapshot.

Filer1*> snap list vol1
Volume vol1
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  1% ( 1%)    0% ( 0%)  Jul 08 04:30  filer1_vol1.274 (snapmirror)
  1% ( 0%)    0% ( 0%)  Jul 07 23:02  hourly.0
  3% ( 2%)    1% ( 1%)  Jul 06 23:02  hourly.1
  5% ( 1%)    1% ( 0%)  Jul 05 23:02  hourly.2
  7% ( 3%)    2% ( 1%)  Jul 04 23:02  hourly.3
  9% ( 2%)    2% ( 0%)  Jul 03 23:02  hourly.4

The command above lists the snapshots that are available for restore.

Filer1*> qtree status vol1
Volume   Tree     Style Oplocks  Status
-------- -------- ----- -------- ---------
vol1          unix  enabled  normal
vol1 q_vol1_001 unix  enabled  normal
vol1 q_vol1_002 unix  enabled  normal
vol1 q_vol1_003 unix  enabled  normal
vol1 q_vol1_004 unix  enabled  normal

The command above lists the qtrees created in that volume.


filer1*> qtree create /vol/vol1/q_vol1_005   (creating a new qtree on the same volume, so the LUN clone taken from a particular snapshot has somewhere to live)

filer1*> lun clone create /vol/vol1/q_vol1_005/q_vol1_005.lun -o noreserve -b /vol/vol1/q_vol1_004/q_vol1_004.lun hourly.4   (creating a new LUN backed by the hourly.4 snapshot)

filer1*> lun show unmapped
        /vol/vol1/q_vol1_005/q_vol1_005.lun  500.1g (536952700920)  (r/w, online)

filer1*> lun map /vol/vol1/q_vol1_005/q_vol1_005.lun host1 22   (mapping the clone to the igroup host1 with LUN ID 22)

where 22 is the LUN ID
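
One thing to keep in mind: the clone is backed by the hourly.4 snapshot, so that snapshot stays locked for as long as the clone depends on it. If the clone is going to live on, it can be split into an independent LUN; a short sketch using the same path (the split consumes real space in the volume):

filer1*> lun clone split start /vol/vol1/q_vol1_005/q_vol1_005.lun
filer1*> lun clone split status /vol/vol1/q_vol1_005/q_vol1_005.lun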

Friday, 8 March 2013

32-bit to 64-bit aggregates



In-place expansion of 32-bit aggregates to 64-bit is available only from Data ONTAP 8.1 onwards.
Please find below a few FAQs on the subject:

WILL AN UPDATE TO DATA ONTAP 8.1 TRIGGER THE EXPANSION PROCESS?

No. The expansion process can only be triggered by adding disks, and only if the resulting size of the aggregate exceeds 16TB.

CAN I SHRINK AN AGGREGATE?

No. The size of an aggregate cannot be decreased.

CAN I EXPAND MY 32-BIT ROOT AGGREGATE TO 64-BIT?

If there is a strong requirement to expand your root aggregate beyond 16TB, you can add disks and
trigger the 64-bit expansion on the root aggregate.

CAN I CONVERT MY 64-BIT AGGREGATE TO 32-BIT?

No. This is not supported irrespective of the size of the aggregate.

CAN I EXPAND MY 32-BIT AGGREGATE TO 64-BIT WITHOUT THE ADDITION OF DISKS?

No. The administrator will have to add disks to trigger the 64-bit expansion.
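
On Data ONTAP 8.1 7-Mode the expansion is requested at disk-add time. A hedged sketch; aggr_32 and the disk count are placeholders, and the check mode is worth running first to preview the result without changing anything:

filer1> aggr add aggr_32 -64bit-upgrade check 10
filer1> aggr add aggr_32 -64bit-upgrade normal 10
(adds 10 disks and starts the 64-bit expansion, provided the new aggregate size crosses 16TB)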

ARE BLOCKS IN SNAPSHOT COPIES ALSO CONVERTED TO THE 64-BIT FORMAT?

No. Snapshot copies are read-only and are not updated by the expansion process. The expansion
process updates indirect blocks in the active file system.

DOES THE EXPANSION PROCESS RESTART FROM THE BEGINNING IF INTERRUPTED?

No. The expansion process maintains checkpoints. If the process is interrupted, it resumes from the latest
checkpoint.
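
Those checkpoints can be watched while the expansion runs; a hedged sketch, as the exact fields in the output vary by release (aggr_32 is again a placeholder):

filer1> aggr 64bit-upgrade status aggr_32
(shows whether the expansion is still running and how far it has progressed)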