Demystifying Datastore Options in VMware vSphere 7

Hey there! If you're venturing into the world of VMware vSphere for the first time, one key concept you'll need to wrap your head around is datastores. Datastores are essentially storage locations that hold your virtual machine files and serve as the foundation of your vSphere environment.

I know this storage stuff can be confusing for VMware beginners, so I wanted to provide a comprehensive guide to shed some light on the various datastore types and protocols supported in the latest vSphere 7 release. Consider this your cheat sheet to vSphere storage!

I'll give you an overview of each major datastore option – VMFS, NFS, vSAN, and vVols – and dig into the technical details, pros and cons, and use cases for each one. My goal is to equip you with the knowledge to evaluate and select the storage solutions that best fit your environment and requirements as a vSphere administrator. Let's get started!

An Introduction to Datastores

First, let's quickly cover the role of a datastore in vSphere. A datastore is essentially an abstraction layer that hides the complexity of the underlying physical storage from virtual machines. This allows vSphere to leverage advanced features like high availability, vMotion, snapshots, and more, irrespective of the storage hardware used.

Datastores hold the following types of files:

  • Virtual machine files – This includes VMDK files for virtual disks, VMX configuration files, and other related files for each VM.

  • ISO images – ISO image files can be uploaded to a datastore for mounting to virtual CD/DVD drives.

  • Other files – This includes VIB files for installing ESXi patches and drivers, along with other miscellaneous files.
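To make these concepts concrete, here's a minimal sketch using pyVmomi (VMware's Python SDK for the vSphere API) that connects to vCenter and lists each datastore with its type and capacity. The vCenter hostname and credentials are placeholders for your own environment:

```python
# Minimal pyVmomi sketch: list datastores with type and capacity.
# Assumes the 'pyvmomi' package is installed; the vCenter details
# below are placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; verify certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# A container view walks the inventory for all Datastore objects
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    s = ds.summary
    print(f"{s.name}: type={s.type}, "
          f"capacity={s.capacity / 2**30:.0f} GiB, free={s.freeSpace / 2**30:.0f} GiB")
view.DestroyView()
Disconnect(si)
```

The `summary.type` field comes back as VMFS, NFS, vsan, or VVOL – the same datastore flavors we'll walk through below.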

Now let's explore the specific datastore options available in vSphere 7:

Virtual Machine File System (VMFS)

VMFS is the standard production datastore type used with vSphere. It's a high-performance, cluster-aware filesystem optimized specifically for virtual machines.

VMFS is built directly on top of block storage devices accessed over Fibre Channel, iSCSI, or FCoE. It cannot be created on NAS devices. Some key benefits of VMFS include:

  • Scalability – You can easily grow VMFS capacity by spanning LUNs across multiple storage devices. VMFS also scales performance as you expand resources.

  • Shared access – VMFS leverages a distributed locking mechanism that allows all hosts in a cluster to concurrently access the datastore. This enables VMware features like vMotion, HA, DRS and more.

  • Resiliency – VMFS uses robust metadata with redundancy and file locking to prevent corruption. This allows for quick automated recovery from failures.

  • Snapshots – VMFS natively supports vSphere snapshots for creating point-in-time copies of virtual machines (see the short sketch after this list).

  • Fast cloning – Snapshot deltas use a copy-on-write approach, and clone operations can be offloaded to VAAI-capable arrays, making VM cloning fast.

  • Storage vMotion – Easily move VMs between arrays with no downtime using Storage vMotion.

  • Raw device access – VMFS supports raw device mapping (RDM) which provides direct SCSI device access to virtual machines. Useful for clustering apps.
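Here's what taking a snapshot looks like in code – a hedged pyVmomi sketch reusing the `si` connection from the earlier listing; the VM name is a placeholder:

```python
# Hedged sketch: create a vSphere snapshot of one VM via pyVmomi.
# Assumes an existing connection `si` (see the earlier listing);
# "app-server-01" is a placeholder VM name.
from pyVim.task import WaitForTask
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "app-server-01")
view.DestroyView()

# memory=False skips capturing guest RAM; quiesce=True asks VMware Tools
# to flush in-guest I/O for a more consistent point-in-time copy
WaitForTask(vm.CreateSnapshot_Task(name="pre-patch",
                                   description="Before monthly patching",
                                   memory=False, quiesce=True))
```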

If you're building a robust enterprise-class infrastructure, VMFS is likely your best bet. It's the most mature, battle-tested datastore solution with the widest ecosystem compatibility.

But there are a few downsides to note:

  • Complexity – VMFS relies on Fibre Channel or iSCSI SANs, which require in-depth storage knowledge to configure and manage.

  • Cost – Enterprise SANs can get quite pricey. The hardware investment is significant.

  • Scalability limits – VMFS has technical limits on maximum volume size (64TB) and the number of hosts per datastore (64). You eventually hit scaling ceilings.

Now let's talk technical nitty-gritty! Key considerations when architecting VMFS:

Block size – Modern VMFS versions (VMFS-5 and VMFS-6) use a fixed 1MB block size with smaller sub-blocks for space efficiency, so unlike older VMFS-3 volumes there is no block size to choose at creation time.

Partitioning – Use VMware best practices to partition storage LUNs for VMFS volumes. Align to array stripe sizes.

Spanning – Span VMFS across multiple LUNs on different RAID groups to improve performance and availability.

Space utilization – Use thin provisioning and enable space reclamation to optimize capacity utilization. Monitor usage regularly.

Availability – Implement proper RAID levels based on performance and uptime requirements. Separate critical and non-critical VMs.

Performance – Distribute load across disks, ensure adequate IOPS, and minimize contention. Continuously tune and optimize access.

Compatibility – VMFS has strict compatibility requirements for HBAs, SANs, and all infrastructure components. Verify everything against the VMware HCL!
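Several of these properties are easy to verify programmatically. A hedged pyVmomi sketch (reusing the `si` connection from earlier) that reports the filesystem version and block size of each VMFS datastore:

```python
# Hedged sketch: inspect VMFS version and block size via pyVmomi,
# reusing the `si` connection from the first listing.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if isinstance(ds.info, vim.host.VmfsDatastoreInfo):
        vmfs = ds.info.vmfs
        print(f"{ds.name}: VMFS {vmfs.version}, block size {vmfs.blockSizeMb} MB, "
              f"SSD-backed={vmfs.ssd}, extents={len(vmfs.extent)}")
view.DestroyView()
```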

With so many factors to balance, designing VMFS requires expert-level storage architecture skills. But the payoff is high-performance, resilient shared storage for your mission-critical applications.

Now let's look at an alternative option that simplifies management…

Network File System (NFS)

NFS allows remote hosts to access storage over TCP/IP using the standard Network File System protocol. This provides a handy network-attached storage (NAS) option for vSphere datastores.

With NFS:

  • The ESXi host mounts an NFS share exported from a NAS filer as an NFS datastore

  • VM files are stored on the NAS filer's own filesystem rather than VMFS

  • The underlying physical storage can be anything supported by the NAS filer (SATA, SSD, etc.)
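Mounting an NFS export is refreshingly simple compared to SAN provisioning. Here's a hedged pyVmomi sketch – the NAS address, export path, and datastore name are placeholders, and `host` is assumed to be a vim.HostSystem you've already retrieved:

```python
# Hedged sketch: mount an NFS export as a datastore on one ESXi host.
# Assumes `host` is a vim.HostSystem object; the server address,
# export path, and datastore name are placeholders.
from pyVmomi import vim

spec = vim.host.NasVolume.Specification(
    remoteHost="nas01.example.com",   # placeholder NAS filer
    remotePath="/exports/vsphere01",  # placeholder export path
    localPath="nfs-datastore-01",     # datastore name shown in vSphere
    accessMode="readWrite",
    type="NFS")                       # or "NFS41" for NFS v4.1
ds = host.configManager.datastoreSystem.CreateNasDatastore(spec)
print(f"Mounted {ds.name}")
```

Repeat the mount on every host in the cluster (with the same localPath) so features like vMotion and HA can see the datastore everywhere.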

Some key advantages of using NFS datastores:

Simplicity – NFS is much easier to configure since it doesn't require LUNs, RAID groups, fabric zoning, etc. Just specify a NAS server and export path!

Cost-effective – NAS filers are generally cheaper than full-fledged Fibre Channel SANs. You can start small and scale out easily.

Share files – NFS enables the same storage to be accessed by both ESXi and physical servers. It's not siloed like VMFS.

Familiar protocol – If your team already knows NFS, it avoids having to learn the intricacies of VMFS and SAN storage.

Of course, NFS has some downsides to consider as well:

  • No raw device access – RDMs and other block-level features are not supported, since ESXi accesses NFS storage at the file level rather than over a SCSI protocol.

  • Performance challenges – Consumer-grade NAS devices struggle with the demanding random I/O of virtualization workloads.

  • Scalability limits – Maximum volume size depends on the NAS array, and vSphere caps the number of NFS datastores per host (256 in vSphere 7). You will hit limits eventually.

  • Fewer native features – Certain capabilities, such as VAAI hardware offloads, require vendor-supplied NAS plugins or are simply unavailable.

  • Reduced resilience – NFS lacks the robust distributed on-disk metadata and locking of VMFS; data integrity depends largely on the NAS filer's own protections.

Let's look at some best practices when using NFS:

  • Select enterprise-grade NAS filers designed for virtualization. Avoid low-end models.

  • Ensure compatibility with your ESXi version. Older NAS may have issues.

  • Follow NAS vendor performance guidelines for networking, export settings, stripe sizes, etc.

  • Isolate NAS storage traffic on dedicated VLANs or 10GbE networks. Avoid competition with VM traffic.

  • Keep individual datastores to a manageable size – maximum volume sizes vary by NAS array. Create multiple datastore mounts if needed.

  • Consider commercial NFS plugins for added management features like snapshots and cloning.

NFS expands your storage options and offers a simpler NAS alternative if Fibre Channel SANs are overkill for your needs. While performance is a concern, with diligent design NFS can work successfully for many vSphere use cases.

Now let's move on to a radically different approach – hyperconverged vSAN…

VMware vSAN

vSAN is a disruptive vSphere storage technology that pools the raw local storage in ESXi hosts to create shared datastores. This hyperconverged architecture brings together compute and storage in one cohesive system.

Unlike external SANs, all resources are provisioned from the internal disks and SSDs inside your vSphere cluster servers. Some key characteristics of vSAN:

  • A distributed layer embedded in the ESXi hypervisor

  • Aggregates capacity from local HDDs or flash devices, with a flash-based caching tier

  • Protects data with RAID-1 mirroring or RAID-5/6 erasure coding, applied per VM through storage policies

  • Stripes data across the cluster for high performance and redundancy

  • Management done through vSphere rather than a separate interface
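Because vSAN is configured at the cluster level, you can check its status from the same API we've been using. A hedged sketch, reusing the `si` connection from earlier:

```python
# Hedged sketch: report which clusters have vSAN enabled, reusing `si`.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)
for cluster in view.view:
    vsan_cfg = cluster.configurationEx.vsanConfigInfo
    enabled = bool(vsan_cfg and vsan_cfg.enabled)
    print(f"{cluster.name}: vSAN enabled = {enabled}")
view.DestroyView()
```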

The big appeal of vSAN is how it radically simplifies storage management:

  • No LUNs, RAID groups, fabric zoning or complex SAN provisioning

  • Adding capacity is as easy as inserting more drives into servers

  • Everything configured and managed through familiar vSphere tools

  • Seamless integration with vSphere, DRS, HA and other features

Besides the simplicity, vSAN brings compelling capabilities:

  • Hyperconverged architecture tightens the coupling between storage and compute

  • Local data access provides high performance and low latency

  • Start small and scale linearly by adding nodes, drives and cache

  • Leverage commodity server hardware – no need for expensive SANs

  • Inline deduplication and compression improve capacity efficiency

  • Distributed RAID and mirrored datastores eliminate single points of failure

Of course, nothing's perfect! vSAN has some limitations:

  • Hardware restrictions – servers and components must be on the vSAN HCL; certified vSAN Ready Nodes are the simplest path

  • Rebuilds after a disk or host failure can be slow, since data must resynchronize over the network from surviving hosts

  • Storage network needs low latency and adequate bandwidth

  • Capacity utilization must be tracked to avoid imbalances across hosts

  • Component health needs proactive monitoring to head off failures

  • Deduplication and compression savings vary by dataset and can impact performance

Let's also discuss vSAN deployment best practices:

Server configuration – Use certified vSAN Ready Nodes. Carefully size cache drives and capacity tiers based on performance and capacity needs.

Disk groups – Balance disk groups across hosts evenly. Don't oversubscribe drives. Allow room to add drives later.

Networking – the vSAN network needs at least 10GbE connectivity on a dedicated, low-latency storage VLAN or subnet.

Policies – Define storage policies aligned to performance, protection, and host failure domains. Test various FTT (failures to tolerate) options.

Maintenance – Monitor vSAN health regularly. Keep firmware, drivers, HCL components updated. Plan capacity proactively.

Backups – Use native vSAN snapshots or third party tools. Test restores periodically to ensure recoverability.
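For quick day-to-day checks, each host also exposes its vSAN cluster membership and health through the standard API. A hedged sketch, assuming `host` is a vim.HostSystem in a vSAN-enabled cluster (deeper health analytics live in the separate vSAN Management SDK):

```python
# Hedged sketch: query one host's vSAN cluster status.
# Assumes `host` is a vim.HostSystem in a vSAN-enabled cluster.
status = host.configManager.vsanSystem.QueryHostStatus()
print(f"{host.name}: node state = {status.nodeState.state}, "
      f"health = {status.health}")
```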

For greenfield deployments wanting a simplified hyperconverged approach, vSAN delivers storage natively in vSphere with compelling benefits. Just be aware it takes diligent planning and maintenance to tune properly long term.

Up next we'll look at a new storage architecture known as vVols. This introduces radical changes to traditional vSphere storage…

VMware Virtual Volumes (vVols)

Virtual volumes, also known as vVols, shift vSphere storage management from 'datastores' to 'virtual machines'. This next-generation architecture provides unique advantages.

Some background:

  • vVols eliminate the LUN-centric datastore abstraction and allow VM objects to map directly to array storage

  • The storage array gets full visibility into VM operations and status

  • Virtual disks become first-class objects managed by the storage platform

  • All operations utilize vSphere APIs and plugins to coordinate directly with the backend SAN

This unlocks new capabilities not possible with VMFS and NFS:

Atomicity – a vVol and its metadata are maintained as a single object, enabling transactional operations that VMFS can't provide.

Array-based management – All the enterprise array's capabilities now apply directly at the VM level.

Protocol endpoints – I/O gateways that provide direct connectivity between hosts and SANs.

Granular control – Full visibility and control of virtual disks down to individual VMs.

Policy-based management – Rules dictate provisioning and placement based on VM needs rather than datastore limitations.

Hardware independence – Abstracts the storage hardware so migration doesn‘t require identical arrays.

Storage efficiency – Leverage array-level features like deduplication, compression, and thin provisioning.
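If your array already surfaces vVol containers, they show up in the inventory as datastores of type VVOL. A hedged sketch to spot them, again reusing `si`:

```python
# Hedged sketch: list any vVol datastores in the inventory, reusing `si`.
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    if ds.summary.type == "VVOL":
        print(f"vVol datastore: {ds.name}, "
              f"free {ds.summary.freeSpace / 2**40:.1f} TiB")
view.DestroyView()
```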

vVols clearly disrupt traditional storage architectures. But new technology brings new challenges:

  • Support requires storage arrays with vVols plug-ins and full interoperability with vSphere APIs. Many legacy platforms won't qualify.

  • Orchestration gets very complex with storage provisioning and data services managed directly on arrays.

  • Network and storage dependencies tighten. Any instability or performance issue can cause major disruptions.

  • Hypervisor hosts lose local control. The array becomes the source of truth for storage functions.

  • Migration from VMFS is possible but challenging. You likely need a transition period managing both.

If starting fresh with supported arrays, vVols brings compelling advantages. But for existing infrastructure you need strong motivations to undertake the operational transformation.

Let's summarize some vVols deployment tips:

  • Verify end-to-end support with storage arrays, network, servers, and vSphere versions.

  • Work closely with the storage vendor on initial configuration and integration with vCenter.

  • Start with a pilot to validate functionality, compatibility, and performance.

  • Define storage capabilities, policies, protocol endpoints and datastore constructs upfront.

  • Monitor the new environment proactively and understand troubleshooting workflows.

  • Create a transition plan to integrate with existing VMFS volumes during migration.

Adopting disruptive vVols technology requires meticulous planning and alignment between platforms. But the outcome can accelerate storage management in software-defined data centers.

Key Selection Criteria

Given the wealth of options available, how do you determine the best storage solution for your vSphere environment? Here are key criteria to consider:

Performance – Will standard SANs or NAS suffice? Or do you need extreme IOPS from all-flash arrays?

Scalability – How much capacity and growth do you anticipate long term? What are the limits for each platform?

Availability – Are you looking for data protection with automated failover and replication?

Complexity – Do you want centralized shared storage or hyperconverged simplicity?

Features – Which capabilities like deduplication, compression, or cloning are essential?

Costs – What is your budget? Leverage existing hardware investments or start new?

Workload fit – Will you run demanding tier-1 applications or more modest workloads?

Staff skills – Does your team have storage experts or prefer simple management?
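Before deciding, it helps to know what you already have. Here's a hedged sketch that totals capacity by datastore type across the whole inventory, reusing the `si` connection from the first listing:

```python
# Hedged sketch: aggregate capacity and free space per datastore type.
from collections import defaultdict
from pyVmomi import vim

content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
totals = defaultdict(lambda: [0, 0])  # type -> [capacity, free]
for ds in view.view:
    totals[ds.summary.type][0] += ds.summary.capacity
    totals[ds.summary.type][1] += ds.summary.freeSpace
view.DestroyView()

for dtype, (cap, free) in sorted(totals.items()):
    print(f"{dtype}: {cap / 2**40:.1f} TiB total, {free / 2**40:.1f} TiB free")
```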

Think through requirements and determine the best fit based on your constraints. Mix and match options as needed – use VMFS for production VMs and NFS for user files or dev systems. Test solutions and get hands-on experience before fully committing. This takes diligent planning but right-sizing your storage pays dividends down the road.

Wrapping Up

Well there you have it – a comprehensive hands-on guide to selecting datastores for VMware vSphere 7 environments. We covered the landscape of options from traditional VMFS and NFS to next-gen hyperconverged with vSAN and VM-centric storage using vVols.

There are no absolute right or wrong answers here – make sure to match storage solutions to your specific technical requirements, budgets, and administrator skills. With the right datastore design you'll gain high performance, availability, and scalability for your critical virtualized workloads.

Hopefully this provided useful insights as you embark on your vSphere journey! Feel free to reach out if you have any other questions. Now go design some storage!

Written by Alexis Kestler

A female web designer and programmer, now a 36-year-old IT professional with over 15 years of experience living in NorCal. I enjoy keeping my feet wet in the world of technology through reading, working, and researching topics that pique my interest.