LATEST ARTICLES

Update: 02/06/2020. Has the new Citrix HDX optimisation for Microsoft Teams ended the frustration of users working in virtual environments?

Since we first published this post in February, the coronavirus pandemic has resulted in a huge uptake of Microsoft Teams. So, we decided to review its performance in light of its increased popularity to see if the improvements have lived up to expectations. 

Overall, we can confidently say that its performance is noticeably better than the pre-optimised version running on Citrix, with respect to both sound and vision. 

However, feedback from customers has revealed a couple of issues that should be addressed. 

Firstly, only the main speaker appears in the window. As a result, if you have more than two people in a meeting, only the person speaking gets the floor, unlike the fat client version, where you can see everyone on the call at all times. 

Secondly, background effects aren’t yet available. So you don’t have the option of a blurred background (very useful when you are in lockdown and calling from an improvised office) or of pretending that you are on one of the Halo maps (yes, geeks really do love that!). 

All in all, it is an improvement, and we fully expect further enhancements in the near future. Watch this space! 

If you have any queries about the new Citrix HDX optimisation, or indeed any issues with Citrix virtual environments or Microsoft 365, don’t hesitate to contact Peter Grayson on 0161 537 4980 or email peter.grayson@quadris.co.uk 

Below is the original post by Jack Firth from 19th February 2020.

Will the new Citrix HDX optimisation for Microsoft Teams finally end the frustration of users working in virtual environments?

Any organisation seeking to improve employee productivity and collaboration will acknowledge the increasing role that Citrix is playing in the modern workplace, providing secure remote access to all employees while also cutting IT costs. 

Furthermore, as more organisations migrate to Office 365, they gain the ability to take advantage of Microsoft Teams, the intelligent communication solution bundled into the suite. 

As a result, combining these two leading technologies not only makes your employees more productive; IT teams also benefit from centralised management. With information such as data and chat logs staying in a cloud environment instead of being stored on native devices, you ensure better control over sensitive information. 

But while users of the desktop versions of Microsoft Teams have long enjoyed its full functionality, anyone using Citrix Virtual Apps and Desktops who has attempted to make a video call via Microsoft Teams will have experienced the frustration that comes with latency, pixelation and poor call quality. 

With more and more organisations implementing Office 365, many have long been asking when Citrix would end the frustration and optimise Microsoft Teams for virtual environments. 

What Citrix HDX optimisation for Microsoft Teams could mean for your organisation. 

The answer lies with the rollout of Citrix HDX optimisation: an industry first that promises to optimise the delivery of Microsoft Teams (version 1.2.00.31357 or later) for virtual environments. 

According to Citrix, all users will now get a fully native, fully featured Microsoft Teams experience within Citrix Virtual Apps and Desktops; with a single point of authentication that also improves reliability and ease of use. 

You can see how Citrix promises to deliver a full native experience of Microsoft Teams within a Win 10 virtual desktop on Azure in the following video:

The key to this huge improvement in functionality lies in the fact that the Citrix Workspace app has a built-in multi-platform HDX Media Engine that ensures optimised device and media handling, with audio, video, and screen sharing offloaded to the user’s device. (You can find the full specification and installation guidelines here.)

What this basically means is that with the new HDX optimisation, instead of Microsoft Teams running predominantly in the Citrix environment on the external server cluster, the media traffic is offloaded to the device on your desk, such as a Thin Client, in a similar way to Browser Content Redirection (BCR). 

Coming to a virtual environment near you soon. 

Citrix has announced that HDX-optimised Microsoft Teams will be available in a matter of weeks. Their engineering teams are currently putting the final touches on the optimisation, but you should expect it with the next Citrix Virtual Apps and Desktops release (you will need to move onto that VDA once released, as well as a future release of the Microsoft Teams client). 

Only time will tell how well it meets the expectations of the tens of thousands of users working in virtual environments, but if the demonstration video is anything to go by, we will see a huge uptake in the use of calls and videoconferencing with Microsoft Teams. 

As a consequence, it may just signal the end of the line for Skype for Business, which itself only recently received a Citrix HDX RealTime Optimization Pack (RTOP) that delivered a native-like experience for Skype for Business in virtual environments. 

At Quadris we will be reviewing the functionality of the new HDX-optimised Microsoft Teams and reporting back on whether or not it lives up to expectations. 

Stop Press! Nutanix coronavirus cost-cutting exercise hits thousands of UK and European staff.

Nutanix has asked thousands of non-US staff to take two weeks’ voluntary unpaid leave as part of a series of cost-cutting actions aimed at minimising the fallout from the coronavirus epidemic.

The once lauded hyperconverged infrastructure vendor hit the headlines as it emerged it will ‘furlough’ more than 1,400 US staff, around a quarter of its workforce. Those affected will undergo two week-long unpaid furloughs over the next six months.

But the NASDAQ-listed vendor also confirmed it has asked staff outside the US to take a total of two weeks of voluntary unpaid leave during the same time period; a move which will affect its UK & European operations.

This latest news has added to the woes of Nutanix following the crash of their share price in late February, with CEO Dheeraj Pandey attributing underwhelming Q2 results to the “murky” environment caused by the pandemic.

If you think your data is safe on the public cloud, think again.

Lured by its promise of increased efficiency, scalability and agility, more and more organisations are adopting public cloud services.

Yet many security professionals are voicing their concerns loudly and clearly; citing security issues such as data loss, data privacy, compliance, accidental exposure of credentials, and data sovereignty.

In fact, according to a recent survey (conducted by Synopsys and covering the 400,000-member Cybersecurity Insiders information security community), a staggering 93% of cyber security professionals stated that they are “moderately to highly concerned” about public cloud security. (To download the full report click here.)

While this figure is truly astonishing, it should come as no surprise when you consider that nearly 30% of cyber security professionals admitted they had experienced a public cloud-related incident in the last year.

With this in mind, in order to ensure your organisation’s all-important data is as safe as possible, below is a list of some of the key considerations you should pay special attention to before rushing into adopting public cloud services.

  1. Ultimately, the security of your data is your responsibility.

First and foremost, you must recognise that public cloud security is a shared responsibility model: you take responsibility for the security of your data and workloads in the cloud, while the Cloud Service Provider (CSP) takes responsibility for the security of the underlying cloud infrastructure.

It’s true that CSPs such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) offer increasingly robust security to protect their evolving cloud platforms, and have to meet very high standards such as those set out by the Cloud Security Alliance (CSA).

But while this may unburden your organisation from proving compliance, ultimately any fallout and fines that result from data loss or compromise, even if it is the fault of your CSP, will fall squarely on your shoulders.

  2. Data Sovereignty and Compliance.

As increasing numbers of organisations conduct business globally, there is a growing requirement to adhere to strict regulatory and compliance requirements that mandate where your data can be held, such as the European Union’s General Data Protection Regulation (GDPR).

Yet many CSPs store, back up and replicate data in multiple data centres, the physical location of which could well breach regulatory or legal compliance. As a result, a CSP must be able to demonstrate that it has data centres that comply with any data sovereignty regulations and is therefore able to geo-fence your workloads.

It can be difficult, if not impossible, to verify that your data exists only at allowed locations. As a result, you need to ensure that your CSP is being transparent about where their servers are being hosted and equally importantly that they adhere strictly to any pre-agreed Service Level Agreements (SLAs).

Furthermore, you need to be in a position to fully enforce any compliance requirements through continuous monitoring and alerting, using the relevant policy-based templates, so that you are ready in the event of an audit.

  3. Make no mistake, public cloud vulnerabilities are growing by the day.

The steadily increasing popularity of the public cloud has been mirrored by increasing numbers of cloud security incidents.

The consequences of such an incident can be catastrophic. One well documented example was the theft of over 100 million records from Capital One by a former Amazon Web Services (AWS) employee who exploited a well-known cloud computing vulnerability: a misconfigured web application firewall.

This puts into sharp focus the importance of paying close attention to security in the context of the public cloud. It is also a reminder that, even with the best defences in the world, no system is completely secure – especially when you factor in the human element.

  4. Reduce risk through the use of encryption and role-based access control.

In the annual Cost of a Data Breach Report, conducted by the Ponemon Institute and sponsored by IBM Security, the extensive use of encryption was highlighted as the number one factor in preventing and mitigating the impact of a data breach.

Any CSP worth their salt should be able to offer you the very highest level of protection against tampering, such as a FIPS 140-2 certified hardware security module (HSM). This will enable you to access encryption functionality while ensuring that no one else (including CSP administrators) has access to your encryption keys at any time.

Now add to this role-based access control and you greatly reduce the risk of breaches and data leakages and ensure greater compliance through the careful management of who has access to sensitive information.
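The role-based access control described above can be sketched in a few lines: permissions attach to roles, users hold roles, and every access is checked against that mapping. The role names, users and permission strings below are invented purely for illustration.

```python
# Minimal role-based access control sketch; roles, users and permission
# strings are illustrative assumptions, not any particular CSP's API.
ROLE_PERMS = {
    "analyst": {"read:reports"},
    "key-admin": {"read:reports", "read:keys", "rotate:keys"},
}

USER_ROLES = {
    "alice": {"analyst"},
    "bob": {"key-admin"},
}

def allowed(user: str, permission: str) -> bool:
    """A user holds a permission if any of their roles grants it."""
    return any(
        permission in ROLE_PERMS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )

assert allowed("bob", "rotate:keys")       # key-admins may rotate keys
assert not allowed("alice", "read:keys")   # analysts never see key material
```

Because permissions are granted to roles rather than individuals, adding or removing a member of staff is a one-line change to `USER_ROLES`, which is what makes the careful management of access to sensitive information tractable.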

The downside to encryption is that it relies on those users with access remembering to enable the encryption and manage the keys properly. This can add considerably to the overall cost, and as a result negates many of the savings normally associated with migrating to the cloud.

  5. Pay special attention to the entire lifecycle of your data.

In order to manage the flow of data efficiently throughout its lifecycle, you should first categorise your data into four main groups: public, internal, sensitive and restricted. Defining the different data types will help you to establish set guidelines as to their criticality and value to your organisation, and to determine whether you should adopt public cloud, private cloud or on-premise services.
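As a sketch of how the four-tier model might drive placement decisions, a classification can be mapped directly to a target environment. The particular tier-to-placement mapping below is an illustrative assumption; every organisation will draw these lines differently.

```python
# Hypothetical mapping from the four data classifications to a placement
# decision; the targets chosen here are illustrative, not prescriptive.
PLACEMENT = {
    "public":     "public cloud",
    "internal":   "public cloud",
    "sensitive":  "private cloud",
    "restricted": "on-premise",
}

def placement_for(classification: str) -> str:
    """Return the placement policy for a data classification tier."""
    if classification not in PLACEMENT:
        raise ValueError(f"unknown classification: {classification!r}")
    return PLACEMENT[classification]

assert placement_for("restricted") == "on-premise"
assert placement_for("internal") == "public cloud"
```

Rejecting unknown tiers outright, rather than defaulting to the public cloud, is the safer design choice: unclassified data should never silently end up in the least controlled environment.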

With public cloud adoption in mind, special attention should be paid to the destruction of data at the end of its lifecycle, especially when there are mandatory regulations or compliance issues.

With an on-premise IT environment there are several options open to an organisation: the physical destruction of media and hardware, degaussing, overwriting, and cryptoshredding. With the public cloud, most of these options are simply not feasible, because the CSP owns the hardware, making physical destruction almost impossible. 

That leaves cryptoshredding as the only viable and realistic option for data disposal in the public cloud. And as mentioned previously, this requires that your data be encrypted in the first instance and carries with it the burdens of human error and increased costs.
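The principle behind cryptoshredding can be illustrated with a toy stream cipher: encrypt each object with its own key, and "destroy" the data by destroying the key, since the ciphertext left behind on the CSP's disks is then unrecoverable. This is a conceptual sketch using only the standard library and is emphatically not production-grade cryptography.

```python
# Conceptual cryptoshredding demo (stdlib only; illustrative, NOT production crypto).
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream by hashing key || counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR the data with the keystream derived from the key.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR stream ciphers are their own inverse

key = secrets.token_bytes(32)            # per-object data-encryption key
ciphertext = encrypt(key, b"sensitive customer record")

# Normal access: the key holder can still recover the plaintext.
assert decrypt(key, ciphertext) == b"sensitive customer record"

# Cryptoshred: destroy the key. The ciphertext may persist on the CSP's
# hardware indefinitely, but without the key it is computationally unrecoverable.
del key
```

This is also why the caveats above matter: cryptoshredding only works if the data was encrypted from the outset and the keys were managed correctly throughout their lifetime.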

  6. Choose your CSP wisely.

If you do decide to make the leap and migrate your data to the cloud, first and foremost choose a CSP that offers the very highest levels of protection and expertise. In addition, pay special attention to reducing risk; covering areas such as encryption, access control, monitoring, visibility, data sovereignty and all associated compliance and regulatory requirements.

Furthermore, any cloud platform needs to be very closely integrated with any on-premise virtualised environment. This way you will be able to run workloads in the cloud that deliver maximum uptime availability at the virtual machine level, while also taking advantage of configurations such as stretched clusters in order to reduce risk and increase the availability of critical applications.

Summary: migrating to the public cloud could cost you a fortune and leave you vulnerable.

Caveat emptor!

As workloads continue to move to the cloud, organisations of all sizes and sectors are recognising the complications of protecting their data.

The reality is that there is no one-size-fits-all solution. When considering migration or integration into the public cloud, first and foremost you have to consider how it will affect the IT systems and infrastructure within your particular organisation.

Regulatory compliance, the sensitivity of the data you are holding, geographical location: these are all factors that will determine whether or not the public cloud is a suitable solution. Even within a single organisation, there may well be data that can be migrated to the cloud, while data that requires added security and control would be better placed in a private cloud or on-premise data centre.

But even with highly specialised teams working tirelessly to provide a wide variety of options to secure and provide access to the public cloud, the security of the end result is still dependent on the customisation and configuration by the organisation itself.

At the end of the day, the single most quoted reason why many organisations have considered migrating to the cloud is the promise of lower costs.

But when you consider all of the above, the security, the regulatory compliance issues, the data lifecycle and the cost of securing your data, then it doesn’t seem quite so profitable after all.

To discuss your IT requirements and the different options available to you, contact Peter Grayson on 0161 537 4980 or email peter.grayson@quadris.co.uk

Why over half of UK organisations think the public cloud is over-hyped and overpriced.

No one is doubting the lure of the cloud. 

But a recent report by Capita has revealed that the majority of UK organisations are becoming increasingly disillusioned by their decision to move to the cloud. The research, which covered 200 decision-makers across the IT sector, also found that ‘unforeseen factors’ had resulted in their organisation’s cloud migration falling behind schedule. 

The main factor behind the push to adopt the cloud was undoubtedly the desire to reduce the cost of storing data on premise. But the reality is that, even though many organisations have been planning the move since as long ago as 2015, the promised savings simply haven’t materialised. 

According to the survey, less than half of the proposed workloads and applications have successfully migrated, with just 1 out of every 20 respondents stating that they had not encountered any challenges on the road to cloud migration. 

Security issues and lack of internal skills. 

The key obstacles that have resulted in such slow progress and disillusionment were quoted as being security issues and the lack of internal skills. 

In addition, many organisations took a ‘lift and shift’ approach, simply gathering up everything they were storing on-premise and shifting it over to the public cloud. The problem with this approach is that, in the vast majority of instances, you need to re-architect applications in order to optimise them for the cloud. 

As the challenges continue to mount up, so has the cost. 

Nearly 60% of organisations admit that moving to the cloud has been far more expensive than anticipated. 

The increasing cost of moving to the public cloud isn’t confined to the UK. Across the world organisations invested $107 billion (£83 billion) on cloud computing last year, an increase of 37% on the previous year and this amount is predicted to spiral over the next 5 years. 

Research by Gartner predicts that over the coming year 80% of organisations will exceed their cloud infrastructure budgets due to their inability to manage cost optimisation. 

Yet infrastructure isn’t the only growing cost when moving to the cloud. The total spend on cloud services themselves is set to hit $500 billion (£388.4 billion) by 2023. 

These escalating costs of moving to the public cloud are clearly coming as quite a shock, not least because cutting costs was one of the prime drivers behind the move to the cloud in the first instance. 

The way forward. 

If you are considering adopting public cloud services, then it’s worth taking the time to validate your strategy before committing your organisation to what could become a very costly and ultimately frustrating exercise. 

For those organisations that are already well down that path, the main aim right now should be controlling escalating costs; and at the heart of this should be better planning.  

As a result, you need to understand the characteristics of your different workloads and then focus first on migrating those with characteristics that map well to the benefits of the cloud. These are likely to be applications that have burstable resource demands and/or are well architected for public cloud services. 

For workloads with relatively stable resource demands that may not be well architected for the public cloud, you are likely to get better value and control by keeping them on a hosted private cloud platform. Equally importantly, migration shouldn’t be regarded as a one-off event: the cloud is dynamic, so you need to monitor its performance continually.  

Only 33% of organisations state that their costs have decreased and only 16% are extremely satisfied.  

Many IT decision-makers still firmly believe that the public cloud’s benefits will eventually outweigh its drawbacks, and that it is the way forward. Yet again this belief is undermined by statistics showing that only 33% of organisations state that their costs have decreased since migrating to the cloud, and only 16% are extremely satisfied with the move. 

So, it’s fair to say that very few organisations have seen the benefits, let alone the transformational potential of their investment. No wonder that the majority of IT leaders have been left frustrated and underwhelmed by the promises made by the purveyors of cloud technology. CSPs have been quick to jump to its defence, claiming that expectations have been misplaced and the actual purpose of the move is to enable innovation. 

Little consolation for those IT leaders who have taken on the responsibility for migrating to the cloud and have been left to explain to their colleagues and Directors why it has failed to deliver. 

To discuss your IT requirements and the different options available to you, contact Peter Grayson on 0161 537 4980 or email peter.grayson@quadris.co.uk

It’s time to tear up the Disaster Recovery Plan rule book.

Threats such as viruses, ransomware, and natural disasters that can cause significant downtime are growing by the day.  

More and more organisations are waking up to the fact that IT uptime is synonymous with business uptime; any outages not only hit the bottom line, they can also have a potentially catastrophic effect on your brand. 

As a result, Disaster Recovery has become one of the key issues faced by every organisation that relies upon its IT systems to function efficiently and seamlessly. It is, to all intents and purposes, an insurance policy that you can cash in when disaster strikes.  

And just like every insurance policy it has to be backed up by a comprehensive guide to the procedures and processes that come into place when the unthinkable happens. 

This Disaster Recovery Plan should be one of the cornerstones of your organisation’s overall Business Continuity Plan, with the sole purpose of recovering and protecting your business IT infrastructure in the event of a disaster. 

All well and good, but creating and maintaining a truly effective and comprehensive Disaster Recovery Plan is a complicated, time-consuming, and ultimately thankless task. 

The true cost of a Disaster Recovery Plan. 

A comprehensive Disaster Recovery Plan has become one of the necessary evils that every organisation must undertake, or else face the consequences when disaster strikes. 

Most likely some poor individual has been charged with the responsibility for orchestrating your organisation’s plan. And while the buck might well stop with that individual, it’s your organisation that will pay the price if it isn’t up to scratch. 

The cost of a Disaster Recovery Plan can’t simply be measured by how much it might save when disaster strikes; you have to factor in the ongoing costs of maintaining and continually updating your plan to ensure it is always fit for purpose. 

So aside from the associated risk cost, you also need to factor in the costs of administering your Disaster Recovery Plan as an integral part of your overarching Business Continuity Plan (that’s assuming you have one. If not, we suggest you keep your fingers very tightly crossed.)

The true cost of creating and administering a comprehensive Disaster Recovery Plan takes into account a wide range of factors. 

To start with it requires detailed documentation about your network, every element in your system, who the vendor is for each element, and exactly how the system is designed to failover.  

But it doesn’t end there, as all this documentation must be updated continuously and reviewed quarterly if you are to do it properly (and let’s be honest, most organisations don’t).

Then there is the Disaster Recovery Rehearsal, which needs to be undertaken at least annually. In order to minimise disruption, it usually takes place over a weekend, which means you may have to pay people overtime. Even then, DR rehearsals often fail. The hard truth is that a rehearsal is not really a true test, yet it remains a big administrative headache.

Finally, back to the poor individual who has to take responsibility for all the above. They may well know how the whole procedure works and be incredibly adept at filling in all the necessary documentation. But what if that person leaves your organisation or for some reason is no longer able to undertake this role? Then you are faced with a very large pair of bureaucratic shoes to fill.

When it comes down to it, while you can provide assurances that you have a Disaster Recovery solution in place, there’s always that nagging worry that even with the best intentions certain elements just aren’t going to work. As a result, when faced with the real deal, it’s ultimately going to take a lot of work to get your system up and running properly. 

Well now thanks to Quadris, there is an ingenious solution that offers Disaster Avoidance instead of Disaster Recovery, and as a direct result eliminates the vast majority of the time, work, and stress associated with all the above. 

Quadris’ ingenious Disaster Avoidance solution to the rescue. 

The fundamental problem with most Disaster Recovery solutions, regardless of how good they claim to be, is that they focus on recovery. So even if they operate at maximum efficiency, they still involve an element of disruption and loss of data. 

With Quadris’ Disaster Avoidance solution, the emphasis is on avoidance as opposed to recovery.

As it is completely automated it requires very little documentation. As a direct result, any Business Continuity/Disaster Recovery test can be undertaken in a lunchtime without any downtime. 

To add to its appeal, user training is reduced to the absolute minimum, so even a junior member of staff can handle it. 

Furthermore, with Quadris’ Disaster Avoidance solution, your overarching Business Continuity Plan is massively simplified. There is no need to assign Recovery Point Objectives (RPOs) or Recovery Time Objectives (RTOs) to your different workloads, as any failover is automatic and immediate. 

The real beauty of this solution is that while you will be immediately notified if an event occurs, there will be no disruption to your service whatsoever. 

Sound good so far? Well there’s even more good news. 

Our Disaster Avoidance solution is typically cheaper than the cost of a Disaster Recovery solution. 

Saving time and money on the cost of creating and administering a Disaster Recovery plan is actually the icing on the cake. The cake itself is the sheer ingenuity of our Disaster Avoidance solution which can deliver huge cost savings over typical Disaster Recovery solutions. 

It’s cheaper for the simple reason that you haven’t got a whole load of expensive equipment sitting there redundant, just waiting for a disaster to happen. (For a more in-depth explanation behind one of the key elements of this future-focused solution, click here, or to read a case study of it in action click here.) 

Your new plan starts here. 

To find out more about our Disaster Avoidance solution and how it can benefit and protect your organisation, simply contact Peter Grayson on 0161 537 4980 or email peter.grayson@quadris.co.uk 

How an entire workforce was empowered to work securely from home in under 4 days.

When the Coronavirus crisis hit and the government requested that all non-essential staff should work from home, like many organisations across the nation our client was left with a logistical mountain to climb. 

Under normal circumstances they don’t employ any remote workers whatsoever; but these are extraordinary times, and as such they call for extraordinary measures. 

To add to an already demanding situation, our client operates within the financial services sector and as a result must comply with extremely stringent regulations, so any solution had to offer the very highest level of security. 

Stepping up to the challenge. 

Fortunately, Quadris had already deployed the Stratodesk NoTouch endpoint management solution for use on their internal terminals. 

The beauty of this cutting-edge solution is that it allows you to transform any PC, Thin Client, Laptop, or Raspberry Pi device into a safe, secure, and centrally managed endpoint. 

All well and good, but the sudden increase in demand for remote working had led directly to a dire shortage of laptops as thousands of organisations sought to equip their employees with the ability to work from home. 

So, we immediately went on the hunt for a large quantity of laptops, and our efforts were soon rewarded when we were able to point our client in the direction of 250 refurbished machines that would fit the bill perfectly. 

Usually it would be a huge task to image every laptop, load up the Citrix workspace client, anti-virus software and apply various other lock down policies to make sure the devices were secure, together with everything else that goes with getting a laptop ready and capable of undertaking its allotted task. 

Instead, our client took possession of the laptops and re-imaged every single machine with Stratodesk NoTouch. 

All that was required was to plug a network cable into each laptop, and in just a few minutes they were ready to go. In a single stroke, they were made completely secure, lightning fast and simplicity itself to use. 

It took our client just under 4 days to empower a 250-strong workforce to operate securely from home. 

In just a few days, every one of the machines was re-imaged and allocated to a member of staff. 

When they turn on their ‘new’ laptop, all they see is the company’s branded Citrix login page. They simply enter their login details, password and MFA token, and because there is no bloated operating system they are up and running in less than 20 seconds. 

This is because to all intents and purposes all the laptops have become Thin Clients. The beauty of this setup is that there is nothing on the laptop except for the Citrix workspace client, so there’s no way to access any sensitive company data or do anything other than their assigned workload.

It also means that if for any reason one of these laptops is stolen or lost, there is no danger whatsoever of our client’s sensitive data falling into the wrong hands. To anyone other than the allotted user, they are completely useless. 

Furthermore, as there is no ‘full fat’ operating system or additional applications, there are no updates to contend with. And to add to the convenience, Quadris is able to support every user remotely – no matter where they are located.

Looking to enable your employees to work remotely? 

To find out how we can help empower your workforce to work securely from home, office or anywhere in between, please contact Peter Grayson on 0161 537 4980 or email peter.grayson@quadris.co.uk

One of the world’s foremost hospitals relies on Prevensys to monitor the health of its life-critical IT infrastructure.

“Prevensys not only presents me with a detailed overview of the day-to-day health of our IT infrastructure, it also enables me to drill down in order to quickly identify and resolve issues at the click of my mouse.”

 ~ Kim Nielsen, IT Group Manager, Dept. of Oncology, Medical Physics, Aarhus University Hospital, Denmark.

If you were handed responsibility for the biggest VxRail deployment, not just in your organisation’s history but in the entire history of your country, not surprisingly you would want to keep a very close eye on its performance.

So, when Kim Nielsen was charged with overseeing the new, life-critical IT operations for the Dept. of Oncology, Medical Physics at Aarhus University Hospital, he needed to ensure that he could monitor its health as efficiently and effectively as the hospital monitors the vital signs of its patients.

No mean task when you are managing a massive core compute and server infrastructure, supported by a Citrix virtual desktop infrastructure running industry leading oncology software, across a team of no less than 300 clinicians.

Enter Prevensys: A single pane of glass monitoring solution that delivers the complete picture.

Prevensys empowers Kim Nielsen to monitor the entire system from top to bottom: the stretched cluster VxRail solution, Citrix desktop virtualization, the network switches, the backup solution, and every Windows machine employed across the operation.

In fact, Prevensys is deemed so crucial to the smooth running of the operation that he has even installed a 42-inch monitor on the wall of his office, so that from the comfort of his office chair he can oversee the health of the entire estate with ease and perfect clarity.

The view from the very top.

Prevensys is Quadris’ proprietary monitoring solution and the product of 3 years of development. The result is a leading-edge solution that provides organisations with the ability to monitor the health and performance of their IT infrastructure with greater ease and clarity than ever before.

From the main dashboard, it provides Kim Nielsen with a complete overview of all the elements that comprise the hospital's IT operation.

To start with, the main dashboard provides him with a detailed overview of the VxRail stretched cluster. It not only shows the overall raw capacity and raw free space; it also displays the actual usable capacity and the free usable capacity plotted over time.

And because Prevensys was designed to be time-orientated, it also monitors and logs the overall health of the system over time, allowing him to review the warnings raised during any specified period.

To add to its capabilities, Prevensys displays the disk usage of each individual virtual machine, so he can see exactly which ones have consumed the available space. Furthermore, it reports the ping response times between sites, packet loss, network bandwidth usage, latency, IOPS, CPU usage, memory usage, and a great deal more.
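To make the "time-orientated" idea concrete, here is a minimal Python sketch of how a monitor might log time-stamped readings and let you review warnings over any chosen period. This is purely illustrative: the metric names, thresholds, and `MetricLog` class are our own hypothetical inventions, not Prevensys code or defaults.

```python
import time
from collections import deque

# Illustrative warning thresholds (hypothetical values, not Prevensys defaults)
THRESHOLDS = {"cpu_pct": 90.0, "latency_ms": 20.0, "packet_loss_pct": 1.0}

class MetricLog:
    """Keeps a time-stamped history of readings so warnings can be
    reviewed over any specified period, mirroring the time-orientated
    monitoring described above."""

    def __init__(self, maxlen=10_000):
        self.history = deque(maxlen=maxlen)  # bounded history of readings

    def record(self, name, value, ts=None):
        """Log one reading and return True if it breaches its threshold."""
        ts = ts if ts is not None else time.time()
        warning = value > THRESHOLDS.get(name, float("inf"))
        self.history.append((ts, name, value, warning))
        return warning

    def warnings_between(self, start, end):
        """All warning entries whose timestamp falls in [start, end]."""
        return [e for e in self.history if start <= e[0] <= end and e[3]]
```

In practice a poller would call `record` for each metric (CPU, latency, IOPS and so on) on a schedule, and a dashboard would query `warnings_between` for whatever window the operator selects.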

All in all, a comprehensive and high-level dashboard that presents a truly holistic overview of the entire estate.

Drilling down to get to the heart of an issue with the diagnostics dashboard.

In addition to keeping a close eye on the system’s overall vital signs, with just a few clicks of a mouse Prevensys enables Kim Nielsen to quickly and easily drill down into individual elements for more detailed diagnostics.

If for any reason a machine isn't working at full capacity, is causing issues, or is in a warning or error state, he simply switches to the VM performance view. Prevensys automatically presents him with a detailed set of information covering: CPU usage, disk latency, network traffic, how long the machine has been operating, its heartbeat, energy consumption, active memory, read IOPS, network data transmitted, and how much vSAN disk capacity is being used.

This ability to view the functionality of any machine in clear, precise, time-orientated detail is one of the key reasons why Prevensys was chosen for the task.

Diagnostics can also end user frustration by reducing mean time to resolution by 95%

With 300 clinicians on the system, being able to respond quickly to user issues such as slow performance is of paramount importance, which is where the Citrix XenApp diagnostics dashboard plays a crucial role.

All Kim Nielsen requires is the user’s name and he can immediately see every machine they have been logged onto, together with the dates and times of every interaction.

This provides him with a complete picture of network performance as it relates to the user in question, including historical data and performance trends over time, while external login simulators record both login times and actual uptime.
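The lookup described above — username in, full session history out — can be sketched in a few lines. Note that the session records, machine names, and the `sessions_for` helper below are hypothetical illustrations of the idea, not the actual Prevensys data model.

```python
from datetime import datetime

# Hypothetical session log entries: (username, machine, login, logout)
SESSIONS = [
    ("knielsen", "CTX-APP-01", datetime(2020, 2, 10, 8, 0),  datetime(2020, 2, 10, 12, 0)),
    ("knielsen", "CTX-APP-02", datetime(2020, 2, 11, 9, 15), datetime(2020, 2, 11, 17, 30)),
    ("jsmith",   "CTX-APP-01", datetime(2020, 2, 10, 8, 5),  datetime(2020, 2, 10, 16, 0)),
]

def sessions_for(user):
    """Return every machine the user has logged onto, with the dates
    and times of each session, most recent first."""
    rows = [s for s in SESSIONS if s[0] == user]
    return sorted(rows, key=lambda s: s[2], reverse=True)
```

Given only a username, this returns the complete machine-by-machine history — the same starting point the diagnostics dashboard gives an administrator before drilling into performance data.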

As a result, Prevensys leaves no stone unturned in delivering all the information he needs to resolve user issues in the quickest possible time.

Prevensys: the next generation of IT monitoring systems.

Prevensys is more than just a monitoring system, it provides IT managers such as Kim Nielsen with clarity and peace of mind.

The attention to detail and impressive capabilities offered by Prevensys are unprecedented in its field. Other monitoring systems are cumbersome, requiring you to navigate through multiple screens to finally locate the issue in question; the modern look and feel of Prevensys, combined with its intuitive dashboards and time-orientated functionality, puts the competition in the shade.

By delivering 24/7 monitoring and real-time alerting to service ticket requests, combined with trend performance over time, Prevensys will ensure that your system performs at the very peak of health, both today and in the years to come.

The Dept. of Oncology, Medical Physics at Aarhus University Hospital is just one of many healthcare teams that rely on Prevensys on a daily basis.

But to really experience this state-of-the-art monitoring system for yourself and see how it can help maintain the health of your IT system, contact Peter Grayson on 0161 537 4980 or peter.grayson@quadris.co.uk

Why recover from an IT disaster when you can avoid it?

Which would you prefer, spending a great deal of time, energy, and money recovering from an unexpected IT disaster, or avoiding it in the first place?   

Unless you are a masochist, surely it’s the latter. So why then do organisations spend a large proportion of their annual IT budget on Disaster Recovery when they could be investing in Disaster Avoidance?  

After all, for the vast majority of organisations IT uptime is synonymous with business uptime; so any outages not only hit the bottom line, they can also have a potentially catastrophic effect on your brand.

Even though the difference in data loss and the resultant damage can be immense, the reason Disaster Avoidance solutions aren't the norm is that the associated price tag has kept them beyond the reach of most organisations.

Until now.  

How our Disaster Avoidance solution delivers peace of mind at less cost than typical Disaster Recovery solutions 

Quadris has unveiled an ingenious new automatic Disaster Avoidance solution that not only covers compute and storage, but also the network, external connections, virtual desktops and more.

The benefits of this solution are clear for all to see: 

  • Recovery Point Objective (RPO) of zero  
  • Recovery Time Objective (RTO) < 5 minutes  
  • Zero intervention required to activate the system  
  • Automatic Reprotection and Recovery  

We are able to offer all the above at a fraction of the cost by reducing the amount of expensive enterprise-level hardware required. This is because you no longer have one side of the solution sitting there almost redundant, its sole purpose being to wait for a disaster to happen. (For a more in-depth explanation of one of the key elements of this future-focused solution, click here.)

So not only does it offer far greater protection and insurance against unforeseen disasters, it’s actually cheaper to install. 

Now if that hasn’t got the attention of your CIO, CEO and CFO, then there’s little that will.  

How our Disaster Avoidance solution is delivering peace of mind and saving a client more than £300,000. 

We recently installed a tailor-made Disaster Avoidance solution at ForViva, a leading Housing Association, by employing a stretched cluster configuration, combined with an ingenious GSLB software defined network solution. (To read the full case study, click here.)   

To summarise, it has provided ForViva with automated Active/Active DR capabilities, complete with zero RTO/RPO recovery times, supported by fully redundant point-to-point 10Gbps links. Because both data centres are live, there is no longer a dedicated Disaster Recovery site: workloads are now spread evenly and resiliently across both data centres and are free to move between them as and when required.

Should an entire data centre go down for any reason, workloads simply restart in the second data centre automatically and within seconds, while all external network services seamlessly failover using the software defined network design.   
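The seamless failover described above rests on a simple GSLB principle: answer each query with a healthy site, so that when one data centre fails its health checks, traffic shifts to the other automatically. The sketch below illustrates that decision logic only; the site names and addresses are hypothetical, and ForViva's actual software-defined network design is deliberately not published here.

```python
# Hypothetical site table for a two-data-centre active/active estate.
SITES = [
    {"name": "dc1", "address": "10.0.1.10", "healthy": True},
    {"name": "dc2", "address": "10.0.2.10", "healthy": True},
]

def resolve(sites):
    """GSLB-style resolution: return the address of the first healthy
    data centre. If one site fails its health check, queries are
    answered with the surviving site instead."""
    for site in sites:
        if site["healthy"]:
            return site["address"]
    raise RuntimeError("no healthy data centre available")
```

A real GSLB additionally load-balances between healthy sites and probes health continuously, but the failover behaviour reduces to this check: mark a site unhealthy and every new lookup lands on the other data centre.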

In addition to upgrading the entire IT infrastructure, over the next 5 years this future-focused solution is set to deliver a staggering £300,000 saving over the alternatives. And not just in Total Cost of Ownership (TCO), but in actual money.  

If you add the TCO savings from administration, DR testing, documentation and so on, not to mention all the complexities that come with updating the infrastructure, the eventual savings will probably be doubled.

Now add the potentially crippling cost of recovering from a disaster, and you’ll see why our solution is priceless. 

Disaster Recovery or Disaster Avoidance. The choice is yours.  

As far as we know, no-one else knows how to implement this setup in the way that the Quadris team has identified. 

And while we would love the opportunity to share this ingenious solution with you, we don’t want to give away the secret on these pages as our competitors might be watching. But we are more than happy to discuss it with you personally, simply contact Peter Grayson on 0161 537 4980 or peter.grayson@quadris.co.uk 

Quadris is proud to support the UK’s proton beam therapy projects.

Our clients operate two of the largest and most advanced cancer centres in Europe, treating tens of thousands of patients every year. 

Recently the hospitals took another giant leap forward when they became home to the NHS’ very first high-energy proton beam cancer treatment centres. 

By strategically locating the two UK centres within world-renowned academic hubs, it means that research and trials can be carried out alongside treating patients; accelerating both progress and clinical development, while potentially opening up proton beam therapy to more patients across the UK. 

What is proton beam therapy? 

Proton beam therapy (PBT) is a technologically advanced form of radiotherapy that uses a high energy beam of protons to treat more complex and difficult-to-treat cancers, with potentially better outcomes together with a lower risk of long-term side effects. 

How Quadris is supporting this life-saving new treatment. 

The high-energy proton beam equipment located in the dedicated proton centres is served by HCI clusters, and Quadris has been tasked with full enterprise infrastructure support.

Quadris’ role will include firmware updates, security patching and backup maintenance, together with supplying any replacement parts that may be required.

Furthermore, the hospitals have enlisted the help of Quadris’ proprietary monitoring system Prevensys, a leading-edge solution that delivers the ability to monitor the health and performance of IT infrastructure with greater ease and clarity than ever before.

As a direct result, Quadris now provides round-the-clock reactive support to ensure that both these groundbreaking and truly remarkable feats of engineering are kept running at the very peak of health 24/7. It is yet another example of the growing number of healthcare organisations that rely on Prevensys on a daily basis.

To discover more about how this next generation monitoring system can help maintain the health of your IT system, contact Peter Grayson on 0161 537 4980 or email peter.grayson@quadris.co.uk 

Helping the UK’s largest NHS and social care trust cope with growing numbers of cancer patients.

Our client is a 900-bed ultra-modern university teaching hospital tasked with delivering the very highest quality treatment and care to the many thousands of people who live in its vicinity.

As part of the largest integrated health and social care Trust in the UK, in addition to providing local acute services the hospital also covers a number of key regional specialties, including an extensive range of cancer treatments.  

As cancer rates steadily increased, so did the demand for our client’s services. As a result, the hospital decided to access the knowledge and expertise of the global oncology community by investing in a state-of-the-art Oncology System.

The challenge. 

This strategic investment demanded an underlying IT infrastructure that would guarantee the performance and security required for the oncology team to fully realise its goals. 

It wasn’t sufficient to engage an IT integrator that would simply supply and deploy the infrastructure. This crucial project required outstanding technical ability, together with ongoing service and monitoring to ensure the system was operating at maximum efficiency – every moment of every day.

So, with the highly sensitive nature of the system in mind, our client enlisted Quadris with its proven track record in configuring, deploying and supporting critical IT infrastructure. 

The solution. 

This potentially life-changing project meant that Quadris’ solution focused on optimum system performance, together with steadfast fault tolerance and rapid disaster recovery. 

1. Performance. 

To ensure performance is optimised at all times, the core infrastructure deployed by Quadris centres around an All-Flash stretched cluster VxRail solution.

Furthermore, by employing a combination of All-Flash and NVMe write cache drives, Quadris’ solution delivers unsurpassed storage performance; resulting in more responsive apps, reduced login time for users and backup jobs completed in record time.  

The All-Flash architecture of the solution means application responsiveness across the board is lightning fast, with super low latency. Database queries and reports typically show a 10-fold performance increase when compared to hybrid-based solutions. 

2. Fault tolerance and disaster recovery. 

Quadris designed the solution with the VxRail nodes divided between two separate data centres in a stretched cluster configuration, but managed as a single cluster. This means that in the event of a site failure, all VM workloads are automatically transferred to the other site and immediately powered up; an ingenious setup that results in VM recovery times of less than a minute.

Furthermore, many of the VM workloads are specifically designed to work in an active-active configuration. By having the machines split evenly between the sites, should a problem occur the systems will continue operating without having to rely on the recovery of VMs. 

3. High performing virtual desktops. 

Quadris implemented a resilient and highly responsive desktop experience using Citrix Virtual Desktops to provide all clinicians access to the oncology software suite; ensuring rapid login times and seamless access to patient treatment data and imagery. 

The implementation. 

In order to mitigate any problems during the actual deployment, Quadris employs a unique and thorough implementation methodology whereby the entire system is tested before being operationalised at its eventual site. This involves a comprehensive ‘pre-deployment’ in which the entire system was initially delivered to Quadris’ own premises, where it was unboxed, racked, powered up, configured and comprehensively tested.

Phase #1.  

After passing Quadris’ own stringent assessment, the system was transported to site where Quadris’ own engineers set about installing the system across the 2 sites. 

Phase #2.

While the system was in the process of being installed, the hospital made the decision to scale up the service to meet the needs of an additional 50 users. Quadris took the request in its stride and proved the scalability of the VxRail solution by simply ordering the two extra nodes required to expand the cluster, with the upgraded system up and running in less than a day.

Once completed, expert technicians took over the system and seamlessly carried out full acceptance testing and the installation of the system software. 

Ongoing support.

This wasn’t the end of Quadris’ active involvement, as the system comes with a five-star maintenance service agreement that provides ongoing support to the system for its entire lifecycle.  

At the heart of this support is Quadris’ proprietary Prevensys monitoring solution that delivers 24/7 monitoring and real-time alerting to service ticket requests.  

To complete the picture, Quadris continuously monitors trend performance over time; ensuring the system is always performing to the very best of its abilities. 

The key project benefits at a glance. 

  • Delivers unparalleled storage performance ensuring that apps are far more responsive and backups are completed in record time 
  • Historical reports and searches are completed just as quickly as new data 
  • In the event of a site failure, all VM workloads are automatically moved to the second site and immediately powered up, with recovery times of less than 1 minute 
  • The entire system tested at Quadris’ own premises before being operationalised at the eventual site
  • Full support through a comprehensive maintenance service agreement supported by Quadris’ proprietary Prevensys Monitoring solution 
  • 24/7 alerting continuously monitors trend performance to ensure optimal performance at all times 

To find out more about our future-focused IT solutions, contact Peter Grayson on 0161 537 4980 or peter.grayson@quadris.co.uk