I Work For Dell

Whilst I work for Dell, the opinions expressed in this blog are those of the author.
Dell provides IT products, solutions and services to all industry sectors, including end-user organisations and cloud / service providers.

Showing posts with label VDI.

Tuesday, 20 May 2014

How Many Is Too Many VDI Sessions?

I was reading through some reference architectures this morning and noticed a (non-Dell) offering billed as a large-scale reference architecture for 7,000 VDI user sessions - in reality, a fairly typical medium-sized deployment.

Dig deeper and we find the testing was performed on 7,000 running sessions, but with only 80% of those sessions actually running any application activity - so that's 5,600 active sessions. User density per server (in this example) therefore drops from an apparent 145 sessions to a more realistic 116. Each VDI session was allocated 1 vCPU and 2GB RAM, and each server was deployed with 256GB RAM. At a density of 116 sessions per server, that's 232GB RAM - add the requirement for the vSphere hypervisor and the server is at its memory limit. At the full 145 sessions per server implied by 7,000 users, the server specification was not sufficient to support the allocation per user (145 x 2GB = 290GB, plus an allowance for vSphere). Reported memory utilisation in the report was fine, but there is a real risk of memory over-commitment.
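The memory arithmetic above can be sketched as a quick sanity check. This is a minimal sketch: the session counts and allocations come from the report discussed above, but the hypervisor overhead figure is my own illustrative assumption.

```python
# Back-of-envelope memory check for the reference architecture above.
sessions_per_server = 145     # density implied by the full 7,000 sessions
ram_per_session_gb = 2        # RAM allocated per VDI session (as reported)
server_ram_gb = 256           # RAM installed per server (as reported)
hypervisor_overhead_gb = 8    # allowance for vSphere - illustrative assumption

required_gb = sessions_per_server * ram_per_session_gb + hypervisor_overhead_gb
print(f"Required: {required_gb} GB, installed: {server_ram_gb} GB")
print("Over-committed" if required_gb > server_ram_gb else "Fits")
```

Run with 116 sessions per server instead and the requirement drops to 240GB, which fits - which is exactly why the 80% activity assumption matters so much.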

The tests were conducted using the industry-standard LoginVSI activity simulation tool. With 7,000 users the CPUs were maxed out and the VSIMax score was reached, which indicates user sessions suffering performance degradation - something to be avoided.

Storage consumption is reported to be very low at around 3.5TB, which is an impressive level of efficiency delivered through the use of linked clone images, data reduction techniques and thin provisioning. However, the system deployed in the testing comprised just under 60TB of storage. This would appear to mean the storage footprint could be much smaller than that deployed, but it's not clear whether the full storage volume would still be required to deliver the required level of IOPS. VDI sessions were all non-persistent, which typically gives greater storage efficiency than would be seen with persistent sessions.
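Putting those storage numbers in perspective - the figures are from the report, but the per-user division and utilisation percentage are my own arithmetic:

```python
# Rough storage efficiency figures from the reference architecture above.
used_tb = 3.5        # consumed capacity as reported
deployed_tb = 60     # capacity deployed in the test system (just under 60TB)
users = 7000

per_user_gb = used_tb * 1024 / users   # footprint per non-persistent session
utilisation = used_tb / deployed_tb    # fraction of deployed capacity in use
print(f"{per_user_gb:.2f} GB/user, {utilisation:.0%} of deployed capacity used")
```

Around half a gigabyte per session and roughly 6% of deployed capacity in use - which is why the key question is whether the remaining spindles are there for capacity or for IOPS.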

So here are some points to think about when looking at reference architectures for VDI:

  • How many sessions are deployed?
  • How many sessions are concurrently active?
  • How much CPU is being consumed during peak concurrent activity (85% is a reasonable maximum)?
  • How much memory is being consumed during peak concurrent activity?
  • Is the server memory configuration high enough to avoid memory contention / over-commitment for the resources allocated to the VDI sessions?
  • How large are the VDI sessions - vCPU and memory allocations?  Do these match your actual requirements?  If not, how will this affect density per server when they are matched to your requirements?
  • How many IOPS are available when the system is under stress?
  • Are industry standard tools being used to generate load and measure performance?
  • If LoginVSI is in use, consider the point at which the VSI Index curve starts to climb steeply - this is when session performance is starting to degrade and is often well before reaching VSIMax.  If VSIMax is reached, performance is likely to be well beyond acceptability for users.
  • Are sessions persistent or non-persistent?  Does this match your users' need?
  • Is the volume of storage there to provide capacity, or is it there to provide IOPS?  If the storage volume is matched to utilisation, will the available IOPS suffer?
  • What IOPS have been assumed per user?  Will this reflect realistic IOPS in use?
  • Check IOPS proportions.  Typically 75% / 25% or 80% / 20% write / read ratios are seen as reasonable for VDI sessions.
  • Has user experience been measured and reported (either subjectively or via a tool such as Stratusphere UX or similar)?
  • In a typical hypervisor environment - what will happen when a host is lost within the cluster design?  Will there be headroom on the surviving servers to handle the re-distributed workload?
  • What density will be achieved once you have applied your disaster recovery standards?
Explore the white papers and reference architectures very carefully, as each one takes a different approach to deployment and reporting.  They are very useful papers and give strong indications of what you can expect to be able to deploy.  When you compare vendor papers, make sure it truly is an apples-for-apples comparison: apply plenty of "what if?" analysis and ensure you understand the differences between the papers and the actual deployments that will work for you in the real world.

Thursday, 13 March 2014

Struggling With Large Scale VDI?

Through customer conversations and ad-hoc surveys at events, I find that most organisations that have embarked on VDI projects tend to deliver to somewhere between 10% and 20% of their user base.  This is usually where the business case is easily justified - typical examples would be offshore developers, or senior executives who would like to use their tablets for business.

Those who have not embarked on VDI are often deterred from doing so by the cost per user or the implications for complexity.  Those who have delivered to more than 20% of their user base often find that they are seeing poor performance, or much higher costs per user than they were expecting - often needing to throw more and more storage at the environment to get somewhere near the performance users expect.

Many of the costs are related to the costs of purchasing and operating the storage environment that underpins VDI sessions - either the cost of capacity or the cost of providing enough storage performance to support the required user experience.

As a result, I've been evaluating a number of options for removing these performance and cost bottlenecks.  Using the Dell lab facilities, and in partnership with large customer organisations, we've been stress testing these options both technically and from a business case perspective.

Our conclusions lead us to a different way of thinking about VDI, and to a solution that will allow organisations to scale out to tens of thousands of users while giving those users high-performing VDI sessions.  You can learn about it through our webcast, which will be recorded for future viewing, but if you want to hear about this first - in an interactive session where you can ask questions - please register for the event on 27 March 2014 at the link below.  This will also give you access to the extensive white paper documenting the test results from our labs:

REGISTER NOW