Using PowerShell to prevent thinly provisioned NetApp LUNs from going offline – Part I
Posted: November 28, 2014 | Filed under: NetApp, PowerShell / PowerCLI, Storage

Happy day after Thanksgiving, dear readers! I hope everyone is as satisfyingly full of food, friends, and family as I am. As an early gift to myself, I'm writing a PowerShell script that uses NetApp's PowerShell Toolkit. The script will help me quickly determine current volume and LUN settings so I can see which LUNs are at risk of going offline due to out-of-space conditions. In scripting tradition, I couldn't find anything online that did exactly what I wanted, so I rolled my own!
Here's what the output looks like. The column names are abbreviated because I expect to add several more columns. The abbreviations are: Volume Thin Provisioning, Fractional Reserve, Snapshots, Volume AutoGrow, and Snapshot AutoDelete.
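As a rough idea of the kind of query the script runs, here is a minimal sketch using the NetApp DataONTAP PowerShell Toolkit against a 7-mode controller. The controller name is hypothetical, and the exact cmdlet output properties can vary by Toolkit version, so treat this as a starting point rather than the finished script:

Import-Module DataONTAP
Connect-NaController filer01          # hypothetical controller name

Get-NaVol | Select-Object Name,
    @{N='VTP'; E={$_.SpaceReserve -eq 'none'}},   # thin provisioned?
    @{N='FR';  E={(Get-NaVolOption $_.Name |
                   Where-Object {$_.Name -eq 'fractional_reserve'}).Value}},
    @{N='VAG'; E={(Get-NaVolAutosize $_.Name).IsEnabled}} |
    Format-Table -AutoSize

Each calculated property maps to one of the abbreviated columns above; Snapshot and AutoDelete settings would be pulled the same way from their respective cmdlets.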
NetApp Initiator Group Best Practices for VMFS LUNs
Posted: November 3, 2014 | Filed under: NetApp, Storage

I'm often asked by my clients about the best way to configure NetApp igroups when connecting to VMware VMFS LUNs, especially after I've deployed a new system for them and am training them on its use. I appreciate the question because it means someone is actually thinking through why something is configured the way it is rather than just throwing something together.
The Problem
Here's what I see a lot of out in the field: single igroups created with multiple initiators from multiple hosts. Functionally, this configuration works; each host will be able to see each LUN, all things being equal. The problem arises when you want to either (1) remove a host from the igroup or (2) stop presenting a LUN to a subset of the hosts.
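The usual remedy is one igroup per host, so hosts can be added or removed independently. In 7-mode syntax, that might look like the following (the igroup names, LUN path, and WWPNs are made up for illustration):

filer> igroup create -f -t vmware esx01_igrp 50:01:43:80:11:22:33:44 50:01:43:80:11:22:33:45
filer> igroup create -f -t vmware esx02_igrp 50:01:43:80:55:66:77:88 50:01:43:80:55:66:77:89
filer> lun map /vol/vmfs_vol/vmfs_lun0 esx01_igrp 0
filer> lun map /vol/vmfs_vol/vmfs_lun0 esx02_igrp 0

Mapping the same LUN to each host's igroup with the same LUN ID gives every host the same view, while unmapping a single igroup cleanly removes just that host.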
Upgrading NetApp Data ONTAP with HFS
Posted: November 18, 2013 | Filed under: NetApp, Storage

I wanted to take a quick moment to document the awesomeness that is a quick and easy upgrade of Data ONTAP 7-mode with HFS. HFS is a lightweight web server that runs as a single executable and lets you quickly and easily transfer your Data ONTAP images from a Windows machine to the FreeBSD-based NetApp operating system. I can't take credit for finding this gem of the storage admin's toolkit; that goes to Mike Mills (@MikeasaService), who found it while we were implementing NetApp systems in a war zone. Thanks, Mike! Of course, if you're a Mac man (or gal, but that doesn't roll off the tongue as nicely) or a Linux dude, you can easily mount the /etc/software directory using NFS, in which case you don't need a web server at all. But I digress... on to the steps!
Download the Data ONTAP image from the NetApp Support site (support.netapp.com), following the prompts and making sure to download the correct version, in this case, the 7-mode image.
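Once HFS is serving the image from the Windows box, the pull from the controller side might look like this (the IP address and image filename here are hypothetical, so substitute your own):

filer> software get http://192.168.1.50/812P4_q_image.tgz
filer> software update 812P4_q_image.tgz -r    # -r suppresses the automatic reboot
filer> version -b                              # verify the staged boot image

The software get command fetches the file into /etc/software over HTTP, and software update stages it for the next boot.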
NetApp FAS2240-2 with DS4246 Expansion Disk Shelf Design
Posted: November 17, 2013 | Filed under: NetApp, Storage

I recently had the opportunity to design and implement NetApp's entry-level storage solution for a client, and I'd like to take this chance to share my approach to the design decisions. One reason for posting this is to help others who may be contemplating similar designs. I know there are a lot of talented and experienced engineers out there who may come across this, and I encourage you to comment on this design. I look forward to learning from your experiences, and at the same time I hope mine can help others. I should note that the hardware purchase was outside the scope of this design, as the decision had already been made and the hardware ordered and shipped. Also, common sense says that I've changed hostnames and IP addresses to protect the innocent.
The hardware specifications include:

| Feature | FAS2240-2 |
| --- | --- |
| Controller form factor | Single-enclosure HA; 2 controllers in a 2U chassis |
| Memory | 6 GB per controller |
| CPU | Dual-core Intel Xeon C3528 @ 1.73 GHz, HT enabled |
| Onboard I/O: 6 Gb SAS | 2 |
| Onboard I/O: 1 GbE | 4 |
| Mezzanine I/O: 10 GbE | 2 |
SnapManager for SQL Sizing Case Study
Posted: July 10, 2013 | Filed under: NetApp, SQL Server

This SnapManager for SQL case study was conducted for a real-world client. Anything in this study that could identify the client has been removed to protect their business. I wanted to take this opportunity to document the procedures and reasoning behind sizing such an environment.
This particular implementation involved a three-node SQL Server 2012 AlwaysOn Availability Group running on physical Windows Server 2008 R2 servers. The databases are new and haven't been populated with data yet, so the sizing had to take these "known unknowns" into account. SnapManager for SQL 6.0 and SnapDrive for Windows 6.5 were used. The NetApp system is a FAS3220 HA pair running Data ONTAP 8.1.2 7-mode.
Typical best practices were followed, such as using volume autogrow and letting SnapManager take care of Snapshot deletions. I don't address thin provisioning, deduplication, or space reservations in this document beyond saying that Fractional Reserve is kept at its default of 0% and SnapReserve is changed to 0%. I suggested the LUNs and volumes be thinly provisioned because the client has a trained, dedicated NetApp administrator on staff with the tools and alerts necessary to manage aggregate capacity properly. The storage deployment is a new, mid-size deployment, and capacity is already at a premium, so I advised thin provisioning now, then monitoring and growing or shrinking volumes and LUNs as actual growth is observed, so as not to waste space. Deduplication was used on the database volumes and CIFS shares, but not on the transaction logs, SnapInfo, TempDB, or system databases.
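In 7-mode, those space settings boil down to a few commands per volume. The volume name and autogrow limits below are hypothetical placeholders for illustration:

filer> vol options sql_db_vol fractional_reserve 0   # already the default here
filer> snap reserve sql_db_vol 0                     # SnapReserve to 0%
filer> vol autosize sql_db_vol -m 1t -i 50g on       # grow in 50 GB steps, cap at 1 TB

The autogrow maximum and increment should come out of the sizing exercise itself; the values shown are only an example of the syntax.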
A logical diagram of the SQL Server replication scheme is shown below. There is one OLTP database that is relatively large compared to the many smaller databases that comprise the Data Warehouse (DW). Each primary replica will be at Site A, hosted under normal conditions on a separate node. These nodes will then replicate synchronously to the other node within the same site. Asynchronous replication will occur across sites to a third node at Site B.
NetApp built-in packet capture
Posted: June 24, 2013 | Filed under: NetApp, Storage

I first had to do this at the direction of NetApp tech support. Ever since, I've found myself searching my email for it so I could use it again and again. I finally took the hint and decided to post it here for my own reference, but maybe you can use it as well. Oh, and copy it to Evernote, too.
The way I use this, as you might expect, is to start the capture, perform the operation that's failing, and then stop the capture. So as not to capture too much traffic, and therefore have to wade through all of it, I try to perform those steps fairly quickly. Then again, if you know a few useful features of Wireshark, you can get around a large capture file pretty easily. So here you are.
filer> pktt start all -d /etc/crash
<perform the operation that fails here>
filer> pktt dump all
filer> pktt stop all
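pktt writes one trace file per interface into the directory given with -d (here /etc/crash). A quick way to grab one for Wireshark is over the hidden /etc CIFS share; the interface name and timestamp in this filename are hypothetical, so list the directory first to see what was actually written:

C:\> dir \\filer\etc$\crash
C:\> copy \\filer\etc$\crash\e0a_20130624_120000.trc .

The .trc files open directly in Wireshark, where you can filter down to the traffic you care about.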
New to NetApp? Here are the default Snapshot settings explained for ONTAP 8.1
Posted: June 18, 2013 | Filed under: NetApp, Storage

Snapshots are enabled by default when a volume is created. They follow the schedule shown by the CLI command snap sched: 0 2 6@8,12,16,20
and in the System Manager GUI. Note that the default Snapshot Reserve is 5% and that the Enable scheduled Snapshots checkbox is checked by default.
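Reading the default schedule field by field, using vol0 as the example volume:

filer> snap sched vol0
Volume vol0: 0 2 6@8,12,16,20

The first number is the count of weekly Snapshots to keep (0 here), the second is the count of nightly Snapshots (keep the 2 most recent), and the third is the count of hourly Snapshots (keep 6), with the @8,12,16,20 suffix listing the hours at which the hourly Snapshots are taken: 08:00, 12:00, 16:00, and 20:00.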