New Microsoft Teams features – coming soon to a screen near you.

The phenomenal success of the Zoom app during the lockdown caught everyone by surprise.

The truth is that Zoom’s rise was years in the making. It’s just that when vast swathes of the world’s workforce were sent home, people needed a videoconferencing platform that was ready to do the job.

Zoom proceeded to deliver the ease of access and functionality that made it an instant success, much to the embarrassment of established providers such as WebEx, GoToMeeting, Google and Microsoft.

Of course, it’s a little unfair to lump Microsoft Teams in with dedicated videoconferencing apps such as Zoom, as it offers far more than just the ability to make video calls. Teams is an enterprise level communication and collaboration platform that combines workplace messaging, file storage and application integration.

Even so, with Zoom reporting 300 million daily meeting participants, Microsoft knew it had to respond and update Teams to improve its functionality and overall appeal. As a result, the technology giant has spent the lockdown months researching how its customers use its tools, while also working with experts in virtual reality, AI and productivity to further its understanding of the future of work.

The new features that will help you to get even more out of Microsoft Teams.

At the end of the day, it’s all about enabling people to collaborate, stay connected, and discover new ways to be productive – no matter where they are working.

The new features have been designed to make virtual interactions far more natural, engaging and human. They should help users feel more connected and included, while reducing meeting fatigue and saving time.

The new features are quite extensive, so we have highlighted the most important and relevant ones in this post. For a full rundown direct from Microsoft, click here.

1. Together mode.

A new meeting experience that uses AI segmentation technology to digitally position people into a single shared background, so that participants feel like they are all sitting in the same room.

The idea behind this addition is to help you to focus on people’s faces and body language in order to make it easier to pick up on the non-verbal cues that are so essential to human interaction.

Aimed at meetings where many people may be speaking, Together mode makes it easier to see exactly who is talking, and uses AI to make participants appear to ‘look toward’ the active speaker. The auditorium view should be available to everyone by August, with different room types, such as a coffee shop, following shortly after.

2. Dynamic view. 

Dynamic view is designed to give you greater control over how you view participants and share content during meetings. It uses AI to deliver a range of new features, such as the ability to display shared content and specific individuals side by side, as well as personalising the view to suit your preferences.

Building on previous enhancements, it includes a large gallery view where you can see up to 49 people simultaneously, together with virtual breakout rooms, which enable organisers to divide participants into smaller groups when required. Dynamic view will be rolled out in August.

3. Video filters.

The use of filters in social media apps has become increasingly popular, so Microsoft has introduced them into Teams.

Now you can look your best by using the filters to adjust lighting levels and soften the camera’s focus, customising your appearance to suit.

4. Live reactions.

It can be difficult to gauge audience reactions, especially when there are a large number of people and you don’t want to interrupt the flow of the meeting. With Live reactions, participants will be able to use emojis to make their feelings known.

In addition, Live reactions is a shared feature with PowerPoint Live Presentations, enabling audience members to give instant feedback to the presenter. Microsoft is also bringing PowerPoint Live Presentations to Teams in the near future, further enhancing audience engagement right from Teams itself.

5. Chat bubbles.

At the moment, users have to manually open a chat window to view messages. But soon, any messages sent during a Teams meeting will appear on the screens of all participants, creating a more central, interactive, and inclusive conversation.

6. Speaker attribution for live captions and transcripts.

While Teams already provides live captions so that you can follow what is being said in a meeting, captions will soon include speaker attribution, so participants will know exactly who is speaking.

Live captions with speaker attribution

Later this year, we will also see live transcripts, providing another way to follow what has been said, together with who said it. Furthermore, after a meeting the transcript file is automatically saved in a tab as part of the meeting.

Live transcription with speaker attribution.

It’s worth noting that remarks made by participants joining from a conference room device will be attributed to the room rather than to the individuals in the room.

7. Reflect messaging extension. 

With employee well-being in mind, the Reflect messaging extension gives managers and organisers a simple way to check on how participants are feeling, either generally or about specific work-related topics.

In a few weeks, you will be able to install the extension from GitHub and make it available to colleagues via the message extension menu. The extension provides suggested check-in questions, or you can add your own custom questions for participants to respond to, creating a poll-like experience that you can either share or keep anonymous.

8. Interactive meetings for 1,000 participants and view-only for up to 20,000!

Teams meetings are growing to support up to 1,000 participants, where attendees can chat, talk, and use their video cameras for real-time interaction and collaboration.

And should you wish to bring even more people together for a presentation, Teams can now support a view-only experience for as many as 20,000 participants!

Want to know how to make the most of Microsoft Teams and the latest features?

To find out more about Microsoft Teams and how it can enable your workforce to communicate and collaborate more effectively, whether in the office or from home, call 0161 537 4980 or email

Remotely training ForViva’s staff on how to collaborate remotely with Microsoft Teams.

A forward-thinking social organisation that challenges inequalities and delivers positive, lasting change, ForViva invests in projects that prevent homelessness, improve health & wellbeing, and create job opportunities.

The entire ForViva Group is united behind one shared vision: Improved lives. From the boardroom to the community centre its values of passion, openness, respect, and trust shine through.

High ideals indeed, and to be applauded. But when the coronavirus pandemic forced the lockdown, it presented a situation that could have seriously impeded ForViva’s ability to maintain its collective approach.

Addressing the challenge of staying connected and collaborating effectively while working remotely.

To ensure that all key staff could collaborate and continue to serve the many communities that rely on ForViva and its subsidiaries, ForHousing and Liberty, the Group was looking to roll out Microsoft Teams to dozens of roles.

All well and good, and under normal circumstances we would have been happy to hold a group training session, but with social distancing in place it was agreed that the training had to take place remotely.

The irony of remotely training people on how to work remotely wasn’t lost on anyone involved. It also presented an extra layer of difficulty, as we had to organise no fewer than 50 one-to-one sessions. But undeterred, and with the assistance of ForViva’s management, we scheduled six or seven individual training sessions to take place every day over a period of a few weeks.

To facilitate these sessions, we first called in two of our senior technical engineers to undertake the initial implementation: configuring ForViva’s networks so that Teams was installed on a new VDI image and the Citrix platform was fully optimised.

We also had to migrate everyone’s individual profiles to FSLogix profiles using Group Policy, enabling profile optimisation in VDI environments. Finally, because we were moving ForViva onto a new desktop, we also upgraded their Microsoft Office applications to Microsoft 365 (formerly Office 365).

With the setup complete, we then methodically called each person in turn via the newly installed Teams and the training commenced.

The training in action.

While the logistics of conducting multiple one-to-one meetings were quite challenging, this approach also allowed us to tailor each session to the abilities of the individual, and was a great way of ensuring that every single user was set up correctly on all available devices.

Generally, we started by talking the user through the installation process on the devices they wished to use, before connecting them for the training session about Teams – via Teams! 

This was followed by a brief introduction to the many features that Teams has to offer, then an explanation of how it can help to make working lives easier, especially under the current circumstances with so many people working remotely.

We then covered the basics, such as how to install, access and navigate around the app. From here we dived into its many features, explaining how to chat with colleagues, use the instant messaging function, make an audio or video call, share screens, and transfer files. In addition, we made clear the difference between person-to-person communication and group communication via the different internal teams.

To add to the flexibility and functionality of this increasingly popular collaboration tool, we also explained how to download and access the mobile app. And to complete the picture, we demonstrated how Teams integrates with Microsoft 365 and provides direct access to the included Microsoft Office online apps (Word, Excel, and PowerPoint, to name just a few).

“Quadris’ training helped our staff understand how to make the most of Microsoft Teams and smooth our transition to remote working.”

Mark Sullivan, Group ICT Director at ForViva

Our support didn’t end with the training.

Even after the training sessions were completed, we held subsequent catch-up sessions whenever required and helped to troubleshoot any problems that people encountered.

With the basics well and truly under their belt, the intuitive design of Teams meant that all the newly enrolled users could quickly pick up its many different facets, with support from Quadris available should anyone require further assistance.

Want to know how to make the most of Microsoft Teams?

To find out more about Microsoft Teams and how it can enable your workforce to communicate and collaborate more effectively, whether in the office or from home, call 0161 537 4980 or email

VeloCloud SD-WAN: the quick and effective cure for communication jitters.

Since the lockdown was ordered, the task of empowering millions of people to work from home has been fraught with difficulties and littered with obstacles. 

The rush to equip employees with portable devices caused massive shortages and huge hikes in prices. But even those organisations who were lucky enough to be able to get their hands on enough hardware to keep their business rolling have still faced a mountain of problems. 

One of the most frequently encountered issues has been collaborating and communicating with colleagues who are often strewn across the country and beyond. The drive to implement VoIP and videoconferencing has resulted in countless stories of choppy or jumbled audio, blurred and pixelated video, missed words, calls dropped mid-conversation, and more. 

The cause of all the above can often be attributed directly to poor internet access and the resultant network latency/delay, packet loss and inconsistency, and network congestion due to bandwidth overuse. And unfortunately, the speed and reliability of your internet connection is often dependent on where you live – making your ability to communicate and collaborate with colleagues subject to a postcode lottery. 
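To make the ‘jitter’ complaint concrete: jitter is the variation in packet transit time, and RFC 3550 (the RTP standard) defines a common way to estimate it with exponential smoothing. The sketch below is our own illustration, with invented function names and sample timestamps, and is not part of any vendor tooling:

```python
def interarrival_jitter(send_times, recv_times):
    """Smoothed interarrival jitter, RFC 3550 style (times in ms)."""
    jitter = 0.0
    prev_transit = None
    for sent, received in zip(send_times, recv_times):
        transit = received - sent
        if prev_transit is not None:
            # Exponential smoothing with gain 1/16, as per RFC 3550
            jitter += (abs(transit - prev_transit) - jitter) / 16.0
        prev_transit = transit
    return jitter

# A perfectly steady link shows zero jitter...
print(interarrival_jitter([0, 20, 40], [50, 70, 90]))  # 0.0
# ...while wobbling transit times push the score up
print(interarrival_jitter([0, 20, 40, 60], [50, 75, 90, 115]))
```

A constant transit time scores zero; the more it wobbles, the higher the score, and real-time audio and video degrade noticeably once jitter climbs into the tens of milliseconds.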

But thanks to VeloCloud SD-WAN, there is now a solution that puts an end to the frustration, quickly and effectively. 

The difference that VeloCloud SD-WAN makes with just 2% packet loss is clear to see.

How VeloCloud SD-WAN smooths the path to clear communications. 

Unlike a traditional WAN, VeloCloud SD-WAN has its roots in Software Defined Networking (SDN), with the underlying principle of abstracting the network hardware and transport characteristics from the applications that use the network. 

It has been designed to be a ‘transport-independent’ product that is easy to implement and that permits the use of any type of physical connection, from multi-protocol label switching (MPLS), to cable, to broadband cellular network technology such as 4G. 

As a result, no matter whether users are located at head office, a branch outlet, or working from home they can all experience seamless communications and connectivity simply by adding a VeloCloud SD-WAN Edge appliance. 

What’s more, this ingenious device can be deployed in just a matter of minutes. You simply plug it in, authenticate and users are up and running without the need for any IT involvement whatsoever. 

Once installed it will immediately optimise network traffic and iron out any jitters, even under situations where packet loss would normally make communicating impossible. 

See how easy it is to deploy a VeloCloud SD-WAN Edge appliance in the short video above. 

Internet connection really poor? Just add a 4G dongle. 

Because VeloCloud SD-WAN allows the use of any type of connection, you can add as many connections for resiliency and load balancing as you like. 

For example, if a user is located in an area where the internet connection is notoriously unreliable, you can simply add a 4G dongle into the VeloCloud SD-WAN Edge appliance alongside the standard internet connection and it will automatically deliver optimum load balancing.  

During general use it will use both connections, sending a portion of the data down one route and a portion down the other. And should one connection fail, traffic will automatically fail over to the other. 
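As a rough sketch of that behaviour, here is a toy per-packet round-robin scheduler with failover. It is purely illustrative: VeloCloud’s actual per-packet steering also weighs measured loss, latency and jitter on each link, and all the names below are our own:

```python
import itertools

def route_packets(packets, links):
    """Spread packets across healthy links, round-robin; skip dead ones."""
    healthy = [name for name, link in links.items() if link['up']]
    if not healthy:
        raise RuntimeError("no usable links")
    cycle = itertools.cycle(healthy)
    return [(packet, next(cycle)) for packet in packets]

links = {'broadband': {'up': True}, '4g_dongle': {'up': True}}
# Both links healthy: traffic alternates between them
print(route_packets(['p1', 'p2', 'p3', 'p4'], links))

links['broadband']['up'] = False
# Broadband down: everything fails over to the 4G dongle
print(route_packets(['p5', 'p6'], links))
```

The key point is that the sender, not the user, decides per packet which path to use, so a link failure is absorbed without the call dropping.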

And the beauty of it is that whatever connectivity you plug into the back, be it a 4G dongle or anything else, it still retains the VPN between the user and the data centre or office. 

Far more than just improved video and audio calls for home workers. 

Of course, there’s more to VeloCloud SD-WAN than just improving the quality of audio or video calls: it also simplifies the implementation of complex remote networks and delivers significant savings in operational costs. 

For example, when provisioning connectivity over a traditional WAN, the lead time can be several months or even up to a year. But with VeloCloud SD-WAN, instead of waiting for MPLS connectivity you can employ standard circuits, which have significantly shorter lead times. 

Once these are in place, you simply introduce VeloCloud SD-WAN Edge appliances at your head office, branch offices, data centres and home users. From here, this smart solution automatically establishes a Virtual Private Network (VPN) between all the different sites and users; in other words, it creates a mesh that delivers fully optimised connectivity across your organisation. 

Add resiliency to existing MPLS connectivity. 

Another benefit comes where you have already invested in MPLS connectivity but want to add some resiliency. Just take your existing traditional connectivity, plug it into the back of a VeloCloud SD-WAN Edge, and then add and connect a cheaper secondary option. With both connections in place you automatically have load balancing between the two, and should the main connection go down, traffic automatically fails over to the other. 

Optimise your SaaS connectivity. 

Another great feature for the enterprise is that VeloCloud has provisioned gateways in all the major data centres across the world, which operate like VeloCloud SD-WAN Edge appliances at the far end. 

By optimising traffic at both ends, SaaS applications run far better than over traditional connections. When you connect to a SaaS application, the entire traffic flow is optimised: packet delivery is protected and the data compressed, so you get a lot more out of the link. 
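VeloCloud’s optimisation techniques are proprietary, but the compression part of the claim is easy to illustrate: chatty application traffic tends to be highly repetitive, so compressing it effectively increases the usable capacity of the link. A minimal sketch using Python’s standard zlib library (the sample payload is invented):

```python
import zlib

# Repetitive request/response traffic compresses extremely well,
# so the same link can carry far more application data.
payload = b"GET /api/v1/orders HTTP/1.1\r\nHost: example.com\r\n" * 100
compressed = zlib.compress(payload)
print(len(payload), len(compressed))  # the compressed copy is far smaller
```

Real traffic is less repetitive than this toy payload, and encrypted streams compress poorly, which is one reason SD-WAN appliances optimise at both ends of the tunnel rather than relying on compression alone.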

Find out how VeloCloud SD-WAN can help keep your entire workforce connected. 

VeloCloud SD-WAN is the industry-leading WAN edge services platform for both branch and home users, delivering simple, reliable, secure, and optimised access to traditional and cloud applications.  

Which is why at Quadris we are actively promoting VeloCloud SD-WAN to many of our clients who rely on efficient, secure, and flexible connectivity for their mission-critical IT services and applications. 

To find out more about the many benefits of VeloCloud SD-WAN, contact Peter Grayson on 0161 537 4980 or email 

Warning! Retrieving your data from the Cloud could send costs rocketing.

One of the key drivers behind the decision to migrate to the cloud has been the promise of lower costs. But as many organisations are fast discovering, not only are these cost savings failing to materialise, in some instances costs are actually soaring. 

A case in point is NASA’s recent decision to choose AWS (Amazon Web Services) to handle its Earthdata Cloud, the data repository for the Earth Science Data and Information System (ESDIS) that collates all the information collected from its various missions.

Previously, NASA was storing all its data on-prem across 12 DAACs (Distributed Active Archive Centres). But with no fewer than 15 missions planned over the next few years, each expected to produce 100 terabytes of information every single day, NASA was faced with the prospect of its data growing from 30 petabytes to over 250 petabytes by 2025. 
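A quick back-of-the-envelope check shows how those figures stack up (assuming, purely for illustration, that all 15 missions eventually run at full rate simultaneously):

```python
# Rough sanity check of the quoted figures
missions = 15
tb_per_mission_per_day = 100
daily_ingest_pb = missions * tb_per_mission_per_day / 1_000  # 1 PB = 1,000 TB

growth_pb = 250 - 30  # from 30 PB today to 250+ PB by 2025
days_at_full_rate = growth_pb / daily_ingest_pb
print(daily_ingest_pb, days_at_full_rate)  # 1.5 PB/day; ~147 days
```

In other words, at full rate the missions would generate that 220 PB of growth in under five months, so with launches spread over several years the 250 PB projection looks entirely plausible.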

So, after lengthy and supposedly exhaustive consultations, last year NASA chose AWS to handle its repository for the data collected from all future missions.

Houston, we have a problem. 

Unfortunately, someone at the Agency forgot to take into account the associated costs (a.k.a. the egress charges) of retrieving the data they feed into AWS. 

These egress charges are the costs incurred when transferring data from the Cloud to another area, which in the case of NASA could simply be a local workstation for one of their engineers or scientists. The vast majority of AWS subscriptions will charge these fees over and above the agreed monthly Cloud subscription, so the more data you retrieve, the bigger the bill. 
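To see how quickly that adds up, here is an illustrative calculation. The flat $0.09/GB rate is an assumption for the sake of the example; real AWS egress pricing is tiered and varies by region and destination:

```python
def egress_cost_usd(gb_transferred, price_per_gb=0.09):
    """Rough egress bill at an assumed flat per-GB rate."""
    return gb_transferred * price_per_gb

# Pulling just 1 PB (~1,000,000 GB) back out of the cloud:
print(f"${egress_cost_usd(1_000_000):,.0f}")  # roughly $90,000
```

Scale that to a 250 PB archive being downloaded by scientists worldwide, every month, and the scale of NASA’s problem becomes obvious.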

At the moment, when users download data from a DAAC, there are no additional costs above the need to maintain the existing infrastructure. But when users download from the Earthdata Cloud, NASA is charged every single time, while still having to maintain the 12 DAACs. 

To add to an already messy situation, ESDIS hasn’t yet determined which data sets will migrate to the Earthdata Cloud, nor has it developed cost models based on operational experience and metrics for usage and egress. Consequently, current cost projections may be far lower than what will be necessary to cover future expenses and Cloud adoption may well become increasingly expensive and difficult to manage.

Should NASA limit the amount of data in order to control costs, the result could be that valuable scientific data becomes less available to users, thereby negating one of the key reasons for migrating to the Cloud in the first place. 

Either way, NASA is facing a bill of astronomical proportions. 

Enter the Inspector General. 

These revelations have caused huge consternation throughout the Agency, and as a result the office of the Inspector General of NASA undertook a full audit of the project. 

It concluded that the people in charge of the Earth Observing System Data and Information System (EOSDIS), which makes available the information from ESDIS, had simply failed to consider the additional costs of the eye-watering egress charges. 

The report also highlighted the fact that the rather embarrassingly named Evolution, Enhancement, and Efficiency (E&E) panel that was chosen to review the DAACs, didn’t even attempt to identify potential cost savings. 

To add to an altogether shambolic situation, the panel also failed to adhere to the National Institute of Standards and Technology’s (NIST’s) data integrity standards, displaying a complete lack of independence as half of the panel members also worked on ESDIS. 

The report concluded that once 2 key projects are up and running and providing sufficient data, a comprehensive and truly independent analysis should be conducted in order to determine the long-term financial implications of supporting Cloud migration, while also maintaining the existing DAAC footprint. 

Heads stuck in the Cloud? 

Putting the blame to one side, many questions remain. 

Some are asking if NASA should now make a comparative evaluation of the cost of upgrading its DAACs to meet the 250+ petabytes storage requirements versus the migration to AWS with the egress charges fully factored in. 

It may well be too late, as NASA is already committed to AWS. But with the very long term in mind, it’s worth noting that many believe that while the cloud is great for bursty, on-demand workloads, for constant loads on-prem is better, safer and usually works out cheaper. 

Down to earth advice and support are available at Quadris. 

Migrating to the Cloud may appear straightforward, but as you can see even organisations such as NASA can make errors of judgement when confronted with new technologies and operating practices. 

For an in-depth discussion about the pros and cons of cloud migration versus on-prem, and the different options available to you, contact Peter Grayson on 0161 537 4980 or email 

Why over half of UK organisations think the public cloud is over-hyped and overpriced.

No one is doubting the lure of the cloud. 

But a recent report by Capita has revealed that the majority of UK organisations are becoming increasingly disillusioned by their decision to move to the cloud. The research, which covered 200 decision-makers across the IT sector, also found that ‘unforeseen factors’ had left their organisations’ cloud migrations behind schedule. 

The main factor behind the push to adopt the cloud was undoubtedly to reduce the cost of storing data on-premise. But the reality is that even though many organisations have been planning the move since as long ago as 2015, the promised savings simply haven’t materialised. 

According to the survey, less than half of the proposed workloads and applications have successfully migrated, with just 1 out of every 20 respondents stating that they had not encountered any challenges on the road to cloud migration. 

Security issues and lack of internal skills. 

The key obstacles that have resulted in such slow progress and disillusionment were quoted as being security issues and the lack of internal skills. 

In addition, many organisations took a ‘lift and shift’ approach, simply gathering up everything they were storing on-premise and shifting it over to the public cloud. The problem with this approach is that, in the vast majority of instances, applications need to be re-architected in order to optimise them for the cloud. 

As the challenges continue to mount up, so has the cost. 

Nearly 60% of organisations admit that moving to the cloud has been far more expensive than anticipated. 

The increasing cost of moving to the public cloud isn’t confined to the UK. Across the world, organisations invested $107 billion (£83 billion) in cloud computing last year, an increase of 37% on the previous year, and this amount is predicted to spiral over the next 5 years. 

Research by Gartner predicts that over the coming year 80% of organisations will exceed their cloud infrastructure budgets due to their inability to manage cost optimisation. 

Yet infrastructure isn’t the only growing cost when moving to the cloud. The total spend on cloud services themselves is set to hit $500 billion (£388.4 billion) by 2023. 

These escalating costs are clearly coming as quite a shock, not least because cutting costs was one of the prime drivers behind moving to the cloud in the first place. 

The way forward. 

If you are considering adopting public cloud services, then it’s worth taking the time to validate your strategy before committing your organisation to what could become a very costly and ultimately frustrating exercise. 

For organisations already well down that path, the main aim right now should be controlling escalating costs, and at the heart of this should be better planning.  

To do this, you need to understand the characteristics of your different workloads, then focus first on migrating those that map well to the benefits of the cloud. These are likely to be applications that have burstable resource demands and/or are well architected for public cloud services. 
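As a sketch of that prioritisation, the toy scoring function below ranks workloads by how bursty and how cloud-ready they are. The additive scoring, field names and example apps are illustrative assumptions, not a formal assessment methodology:

```python
def migration_priority(workloads):
    """Rank workloads for migration: bursty, cloud-ready ones first.

    Each workload carries 'burstiness' and 'cloud_ready' scores in [0, 1].
    """
    return sorted(workloads,
                  key=lambda w: w['burstiness'] + w['cloud_ready'],
                  reverse=True)

apps = [
    {'name': 'payroll batch run', 'burstiness': 0.2, 'cloud_ready': 0.3},
    {'name': 'seasonal web shop', 'burstiness': 0.9, 'cloud_ready': 0.8},
]
print([w['name'] for w in migration_priority(apps)])
# ['seasonal web shop', 'payroll batch run']
```

In practice you would score workloads against many more dimensions (data gravity, compliance, licensing), but even a simple ranking like this stops the ‘lift and shift everything’ mistake described above.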

For workloads with relatively stable resource demands that may not be well architected for the public cloud, you are likely to get better value and control by keeping them on a hosted private cloud platform. Equally importantly, migration shouldn’t be regarded as a one-off event: the cloud is dynamic, so you need to continually monitor performance.  

Only 33% of organisations state that their costs have decreased and only 16% are extremely satisfied.  

Many IT decision-makers still firmly believe that the public cloud’s benefits will eventually outweigh its drawbacks, and that it is the way forward. Yet this belief is undermined by statistics showing that only 33% of organisations state that their costs have decreased since migrating to the cloud, and only 16% are extremely satisfied with the move. 

So, it’s fair to say that very few organisations have seen the benefits, let alone the transformational potential, of their investment. No wonder the majority of IT leaders have been left frustrated and underwhelmed by the promises made by the purveyors of cloud technology. Cloud service providers (CSPs) have been quick to jump to its defence, claiming that expectations were misplaced and that the real purpose of the move is to enable innovation. 

Little consolation for those IT leaders who have taken on the responsibility for migrating to the cloud and have been left to explain to their colleagues and Directors why it has failed to deliver. 

To discuss your IT requirements and the different options available to you, contact Peter Grayson on 0161 537 4980 or email

Update: 02/06/2020. Has the new Citrix HDX optimisation for Microsoft Teams ended the frustration of users working in virtual environments?

Since we first published this post in February, the coronavirus pandemic has resulted in a huge uptake of Microsoft Teams. So, we decided to review its performance in light of its increased popularity to see if the improvements have lived up to expectations. 

Overall, we can confidently say that its performance is noticeably better than the pre-optimised version running on Citrix, with respect to both sound and vision. 

However, feedback from customers has revealed a couple of issues that should be addressed. 

Firstly, only the main speaker appears in the window. As a result, if you have more than two people in a meeting, only the person speaking gets the floor, unlike the fat-client version, where you can see everyone on the call at all times. 

Secondly, background effects aren’t yet available. So you don’t have the option of a blurred background (very useful when you are in lockdown and calling from an improvised office) or of pretending that you are on one of the Halo maps (yes, geeks really do love that!). 

All in all, it is an improvement, and we fully expect further enhancements in the near future. Watch this space! 

If you have any queries about the new Citrix HDX optimisation, or indeed any issues with Citrix virtual environments or Microsoft 365, don’t hesitate to contact Peter Grayson on 0161 537 4980 or email 

Below is the original post by Jack Firth from 19th February 2020.

Will the new Citrix HDX optimisation for Microsoft Teams finally end the frustration of users working in virtual environments?

Any organisation seeking to improve employee productivity and collaboration will acknowledge the increasing role that Citrix is playing in the modern workplace; providing secure remote access to all employees while also cutting IT costs. 

Furthermore, for the growing number of organisations migrating to Office 365, one of its many benefits is the ability to take advantage of the intelligent communication solution presented by Microsoft Teams, which is bundled into Office 365. 

By combining these two leading technologies, not only can you make your employees more productive, your IT team also benefits from centralised management. With data and chat logs staying in a cloud environment instead of on native devices, you retain better control over sensitive information. 

But while users of the desktop versions of Microsoft Teams have long enjoyed its full functionality, anyone using Citrix Virtual Apps and Desktops who has attempted to make a video call via Microsoft Teams will have experienced the frustration that comes with latency, pixelation and poor call quality. 

With more and more organisations implementing Office 365, many have long been asking when Citrix would end the frustration and optimise Microsoft Teams for virtual environments. 

What Citrix HDX optimisation for Microsoft Teams could mean for your organisation. 

The answer lies with the roll out of Citrix HDX optimisation: an industry first that promises to optimise the delivery of Microsoft Teams for virtual environments (a minimum Teams version is required). 

According to Citrix, all users will now get a fully native, fully featured Microsoft Teams experience within Citrix Virtual Apps and Desktops; with a single point of authentication that also improves reliability and ease of use. 

You can see how Citrix promises to deliver a full native experience of Microsoft Teams within a Win 10 virtual desktop on Azure in the following video:

The key to this huge improvement in functionality lies in the fact that the Citrix Workspace app has a built-in multi-platform HDX Media Engine that ensures optimised device and media handling, with audio, video, and screen sharing offloaded to the user’s device. (You can find the full specification and installation guidelines here.)

In practice this means that with the new HDX optimisation, instead of Microsoft Teams running predominantly in the Citrix environment and on the external server cluster, all the media traffic is offloaded to the device on your desk, such as a thin client, in a similar way to Browser Content Redirection (BCR). 

Coming to a virtual environment near you soon. 

Citrix has announced that HDX-optimised Microsoft Teams will be available in a matter of weeks. Its engineering teams are currently putting the final touches to the optimisation, but you should expect it with the next Citrix Virtual Apps and Desktops release (you will need to move onto that VDA once released, as well as a future release of the Microsoft Teams client). 

Only time will tell how well it will meet the expectations of the tens of thousands of users working in virtual environments, but if the demonstration video is anything to go by, we will see a huge uptake in calls and videoconferencing with Microsoft Teams. 

As a consequence, it may just signal the end of the line for Skype for Business, which itself only recently received a Citrix HDX Realtime Optimization Pack (RTOP) that delivered a native-like experience for Skype for Business in virtual environments. 

At Quadris we will be reviewing the functionality of the new HDX-optimized Microsoft Teams and reporting back on whether or not it lives up to expectations. 

Stop Press! Nutanix coronavirus cost-cutting exercise hits thousands of UK and European staff.

Nutanix has asked thousands of non-US staff to take two weeks’ voluntary unpaid leave as part of a series of cost-cutting actions aimed at minimising the fallout from the coronavirus epidemic.

The once lauded hyperconverged infrastructure vendor hit the headlines as it emerged it will ‘furlough’ more than 1,400 US staff, which is around a quarter of its workforce. Those affected will undergo two separate week-long unpaid furloughs over the next six months.

But the NASDAQ-listed vendor also confirmed it has asked staff outside the US to take a total of two weeks of voluntary unpaid leave during the same time period; a move which will affect its UK & European operations.

This latest news has added to the woes of Nutanix following the crash of their share price in late February, with CEO Dheeraj Pandey attributing underwhelming Q2 results to the “murky” environment caused by the pandemic.

If you think your data is safe on the public cloud, think again.

With its promise of increased efficiency, scalability and agility, more and more organisations are adopting public cloud services.

Yet many security professionals are voicing their concerns loudly and clearly; citing security issues such as data loss, data privacy, compliance, accidental exposure of credentials, and data sovereignty.

In fact, according to a recent survey (conducted by Synopsys among the 400,000-strong Cybersecurity Insiders information security community), a staggering 93% of cyber security professionals stated that they are “moderately to highly concerned” about public cloud security. (To download the full report click here.)

While this figure is truly astonishing, it should come as no surprise when you consider the fact that nearly 30% of cyber security professionals admitted that they had experienced a public cloud-related incident in the last year.

With this in mind, in order to ensure your organisation’s all-important data is as safe as possible, below is a list of some of the key considerations you should pay special attention to before rushing into adopting public cloud services.

  1. Ultimately, the security of your data is your responsibility.

First and foremost, you must recognise that security in the public cloud is a shared responsibility model. The Cloud Service Provider (CSP) takes responsibility for the security of its cloud infrastructure, while you remain responsible for securing the data and workloads you place in it, including data in transit to and from the cloud.

It’s true that CSPs such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) are offering increasingly robust security to protect their evolving cloud platforms, and have to meet very high standards such as those set out by the Cloud Security Alliance (CSA).

But while this may unburden your organisation from proving compliance, ultimately any fallout and fines that result from data loss or compromise, even if it is the fault of your CSP, will fall squarely on your shoulders.

  2. Data Sovereignty and Compliance.

As increasing numbers of organisations conduct business globally, there is a growing requirement to adhere to strict regulatory and compliance requirements that mandate where your data can be held, such as the European Union’s General Data Protection Regulation (GDPR).

Yet many CSPs store, backup and replicate data in multiple data centres, the physical location of which could well breach regulatory or legal compliance. As a result, a CSP must be able to demonstrate that it has data centres that comply with any data sovereignty regulations, and that it is therefore able to geo-fence your workloads.

It can be difficult, if not impossible, to verify that your data exists only at allowed locations. As a result, you need to ensure that your CSP is being transparent about where their servers are hosted and, equally importantly, that they adhere strictly to any pre-agreed Service Level Agreements (SLAs).

Furthermore, you need to be in a position to enforce any compliance requirements fully, through continuous monitoring and alerting against the relevant policy-based templates, so that you are ready in the event of an audit.
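To make the monitoring idea concrete, here is a minimal sketch of a data sovereignty check in Python. It is illustrative only: in practice the region of each bucket or storage account would come from your CSP's API (for example, S3's GetBucketLocation call), and the bucket names, regions and allowed set below are made-up sample data.

```python
# Illustrative data sovereignty check: flag storage locations that fall
# outside an agreed set of regions. Real inventories would be pulled
# from the CSP's API; the data below is hypothetical sample data.

ALLOWED_REGIONS = {"eu-west-1", "eu-west-2", "eu-central-1"}

def non_compliant(buckets, allowed=ALLOWED_REGIONS):
    """Return the buckets whose region is not in the allowed set."""
    return {name: region for name, region in buckets.items()
            if region not in allowed}

if __name__ == "__main__":
    inventory = {
        "customer-records": "eu-west-2",   # London: fine under GDPR
        "nightly-backups":  "us-east-1",   # replicated outside the EU
    }
    for name, region in non_compliant(inventory).items():
        print(f"WARNING: {name} is stored in {region}")
```

A check like this, run continuously and wired into alerting, is the kind of policy-based template evidence an auditor would expect to see.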

  3. Make no mistake, public cloud vulnerabilities are growing by the day.

The steadily increasing popularity of the public cloud has been mirrored by increasing numbers of cloud security incidents.

The consequences of such an incident can be catastrophic. One well documented example was the theft of over 100 million records from Capital One by a former Amazon Web Services (AWS) employee who exploited a well-known cloud computing vulnerability.

This puts into sharp focus the importance of paying close attention to security in the context of the public cloud, while also reminding us that, even with the best defences in the world, no system is completely secure, especially when you factor in the human element.

  4. Reduce risk through the use of encryption and role-based access control.

In the annual Cost of a Data Breach Report, conducted by the Ponemon Institute and sponsored by IBM Security, the extensive use of encryption was highlighted as the number one factor in preventing and mitigating the impact of a data breach.

Any CSP worth their salt should be able to offer you the very highest level of protection against tampering, such as a FIPS 140-2 certified hardware security module (HSM). This will enable you to access its functionality while ensuring that no one else (including CSP administrators) has access to your encryption keys at any time.

Now add to this role-based access control and you greatly reduce the risk of breaches and data leakages and ensure greater compliance through the careful management of who has access to sensitive information.

The downside to encryption is that it relies on those users with access to remember to enable the encryption and manage the keys properly. This can add considerably to the overall cost, and as a result negates many of the savings normally associated with migrating to the cloud.
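At its core, role-based access control is just a mapping from roles to the data they may touch. The sketch below shows the idea in a few lines of Python; the role names and the four data classifications are illustrative (they match the classification scheme discussed later in this post), and a real deployment would use your CSP's IAM service rather than application code like this.

```python
# Minimal role-based access control sketch (illustrative only).
# Each role maps to the set of data classifications it may read.

ROLE_PERMISSIONS = {
    "analyst":  {"public", "internal"},
    "engineer": {"public", "internal", "sensitive"},
    "dpo":      {"public", "internal", "sensitive", "restricted"},
}

def can_read(role, classification):
    """True if the given role may read data of the given classification."""
    return classification in ROLE_PERMISSIONS.get(role, set())
```

The key property is the default deny: an unknown role, or a classification missing from a role's set, yields no access, which is exactly the behaviour you want when managing who can reach sensitive information.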

  5. Pay special attention to the entire lifecycle of your data.

In order to ensure the efficient management of the flow of data throughout its lifecycle, you should first categorise your data into four main groups: public, internal, sensitive and restricted. Defining the different data types will help you to establish set guidelines as to their criticality and value to your organisation, and determine whether you should adopt public cloud, private cloud or on-premise services.

With public cloud adoption in mind, special attention should be paid to the destruction of data at the end of its lifecycle, especially when there are mandatory regulations or compliance issues.

With the on-premise IT environment there are several options open to an organisation: the physical destruction of media and hardware, degaussing, overwriting, and cryptoshredding. With the public cloud, most of these options are simply not feasible, because the CSP owns the hardware, making physical destruction almost impossible. 

That leaves cryptoshredding as the only viable and realistic option for data disposal in the public cloud. And as mentioned previously, this requires that your data be encrypted in the first instance and carries with it the burdens of human error and increased costs.
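The principle behind cryptoshredding can be shown in a short toy example: encrypt the data under a key, and once the key is destroyed the stored ciphertext is permanently unreadable, wherever copies of it remain. The construction below is deliberately simplified and is NOT real cryptography; in practice you would use a vetted library and an HSM-backed key.

```python
# Toy illustration of cryptoshredding (not production cryptography).
# Destroying the key renders the remaining ciphertext useless.
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Encrypt/decrypt by XOR with the derived keystream (symmetric)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Encrypt a record, then 'shred' it by discarding the key.
key = secrets.token_bytes(32)
ciphertext = xor_crypt(b"customer PII", key)
assert xor_crypt(ciphertext, key) == b"customer PII"  # readable while key exists
key = None  # cryptoshredding: with the key gone, the data is unrecoverable
```

Note that the whole scheme stands or falls on key management: if a copy of the key survives anywhere, the data has not been shredded at all, which is why the human-error and cost burdens mentioned above matter so much.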

  6. Choose your CSP wisely.

If you do decide to make the leap and migrate your data to the cloud, first and foremost choose a CSP that offers the very highest levels of protection and expertise. In addition, pay special attention to reducing risk; covering areas such as encryption, access control, monitoring, visibility, data sovereignty and all associated compliance and regulatory requirements.

Furthermore, any cloud platform needs to be very closely integrated with any on-premise virtualised environment. This way you will be able to run workloads in the cloud that deliver maximum uptime availability at the virtual machine level, while also taking advantage of configurations such as stretched clusters in order to reduce risk and increase the availability of critical applications.

Summary: migrating to the public cloud could cost you a fortune and leave you vulnerable.

Caveat emptor!

As workloads continue to move to the cloud, organisations of all sizes and sectors are recognising the complications of protecting their data.

The reality is that there is no one-size-fits-all solution. When considering migration or integration into the public cloud, first and foremost you have to consider how it will affect the IT systems and infrastructure within your particular organisation.

Regulatory compliance, the sensitivity of the data you are holding, geographical location, these are all factors that will determine whether or not the public cloud is a suitable solution. Even within an organisation itself, there may well be data that can be migrated to the cloud, while data that requires added security and control would be better placed in a private cloud or on-premise data centre.

But even with highly specialised teams working tirelessly to provide a wide variety of options to secure and provide access to the public cloud, the security of the end result is still dependent on the customisation and configuration by the organisation itself.

At the end of the day, the single most quoted reason why many organisations have considered migrating to the cloud is the promise of lower costs.

But when you consider all of the above, the security, the regulatory compliance issues, the data lifecycle and the cost of securing your data, then it doesn’t seem quite so profitable after all.

To discuss your IT requirements and the different options available to you, contact Peter Grayson on 0161 537 4980 or email

It’s time to tear up the Disaster Recovery Plan rule book.

Threats such as viruses, ransomware, and natural disasters that can cause significant downtime are growing by the day.  

More and more organisations are waking up to the fact that IT uptime is synonymous with business uptime; any outages not only hit the bottom line they can also have a potentially catastrophic effect on your brand. 

As a result, Disaster Recovery has become one of the key issues faced by every organisation that relies upon its IT systems to function efficiently and seamlessly. It is, to all intents and purposes, an insurance policy that you can cash in when disaster strikes.  

And just like every insurance policy it has to be backed up by a comprehensive guide to the procedures and processes that come into place when the unthinkable happens. 

This Disaster Recovery Plan should be one of the cornerstones of your organisation’s overall Business Continuity Plan, with the sole purpose of recovering and protecting your business IT infrastructure in the event of a disaster. 

All well and good but creating and maintaining a truly effective and comprehensive Disaster Recovery Plan is a complicated, time-consuming, and ultimately thankless task. 

The true cost of a Disaster Recovery Plan. 

A comprehensive Disaster Recovery Plan has become one of the necessary evils that every organisation must undertake, or else face the consequences when disaster strikes. 

Most likely some poor individual has been charged with the responsibility for orchestrating your organisation’s plan. And while the buck might well stop with that individual, it’s your organisation that will pay the price if it isn’t up to scratch. 

The cost of a Disaster Recovery Plan can’t simply be measured by how much it might save when disaster strikes; you also have to factor in the ongoing costs of maintaining and continually updating your plan to ensure it is always fit for purpose. 

So aside from the associated risk cost, you also need to factor in the costs of administering your Disaster Recovery Plan as an integral part of your overarching Business Continuity Plan (that’s assuming you have one. If not, we suggest you keep your fingers very tightly crossed.)

The true cost of creating and administering a comprehensive Disaster Recovery Plan takes into account a wide range of factors. 

To start with it requires detailed documentation about your network, every element in your system, who the vendor is for each element, and exactly how the system is designed to failover.  

But it doesn’t end there, as all this documentation must be updated continuously and reviewed quarterly if you are to do it properly (and let’s be honest, most organisations don’t).

Then there is the Disaster Recovery Rehearsal, which needs to be undertaken at least annually. In order to minimise any disruption, it usually takes place over a weekend, which means you may have to pay people overtime. Even then, DR rehearsals often fail. The hard truth is that a rehearsal is not really a true test, yet it remains a big administrative headache.

Finally, back to the poor individual who has to take responsibility for all the above. They may well know how the whole procedure works and are incredibly adept at filling in all the necessary documentation. But what if that person leaves your organisation or for some reason is no longer able to undertake this role? Then you are faced with a very large pair of bureaucratic shoes to fill.

When it comes down to it, while you can provide assurances that you have a Disaster Recovery solution in place, there’s always that nagging worry that even with the best intentions certain elements just aren’t going to work. As a result, when faced with the real deal, it’s ultimately going to take a lot of work to get your system up and running properly. 

Well now thanks to Quadris, there is an ingenious solution that offers Disaster Avoidance instead of Disaster Recovery, and as a direct result eliminates the vast majority of the time, work, and stress associated with all the above. 

Quadris’ ingenious Disaster Avoidance solution to the rescue. 

The fundamental problem with most Disaster Recovery solutions, regardless of how good they claim to be, is that they focus on recovery. So even if they operate at maximum efficiency, they still involve an element of disruption and loss of data. 

With Quadris’ Disaster Avoidance solution, the emphasis is on avoidance as opposed to recovery.

As it is completely automated it requires very little documentation. As a direct result, any Business Continuity/Disaster Recovery test can be undertaken in a lunchtime without any downtime. 

To add to its appeal, user training is reduced to the absolute minimum, so even a junior member of staff can handle it. 

Furthermore, with Quadris’ Disaster Avoidance solution, your overarching Business Continuity Plan is massively simplified. There is no need to set Recovery Point Objectives (RPOs) or Recovery Time Objectives (RTOs) for your different workloads, as any failover is automatic and immediate. 

The real beauty of this solution is that while you will be immediately notified if an event occurs, there will be no disruption to your service whatsoever. 

Sound good so far? Well there’s even more good news. 

Our Disaster Avoidance solution is typically cheaper than the cost of a Disaster Recovery solution. 

Saving time and money on the cost of creating and administering a Disaster Recovery plan is actually the icing on the cake. The cake itself is the sheer ingenuity of our Disaster Avoidance solution which can deliver huge cost savings over typical Disaster Recovery solutions. 

It’s cheaper for the simple reason that you haven’t got a whole load of expensive equipment sitting there redundant, just waiting for a disaster to happen. (For a more in-depth explanation behind one of the key elements of this future-focused solution, click here, or to read a case study of it in action click here.) 

Your new plan starts here. 

To find out more about our Disaster Avoidance solution and how it can benefit and protect your organisation, simply contact Peter Grayson on 0161 537 4980 or email 

How an entire workforce was empowered to work securely from home in under 4 days.

When the Coronavirus crisis hit and the government requested that all non-essential staff should work from home, like many organisations across the nation, our client was left with a logistical mountain to climb. 

Under normal circumstances they don’t employ any remote workers whatsoever; but these are extraordinary times, and as such they call for extraordinary measures. 

To add to an already demanding situation, our client operates within the financial services sector and as a result must comply with extremely stringent regulations, so any solution had to offer the very highest level of security. 

Stepping up to the challenge. 

Fortunately, Quadris had already deployed the Stratodesk NoTouch endpoint management solution for use on their internal terminals. 

The beauty of this cutting-edge solution is that it allows you to transform any PC, Thin Client, Laptop, or Raspberry Pi device into a safe, secure, and centrally managed endpoint. 

All well and good, but the sudden increase in demand for remote working had led directly to a dire shortage of laptops as thousands of organisations sought to equip their employees with the ability to work from home. 

So, we immediately went on the hunt for a large quantity of laptops, and our efforts were soon rewarded when we were able to point our client in the direction of 250 refurbished machines that would fit the bill perfectly. 

Usually it would be a huge task to image every laptop: load up the Citrix Workspace client and anti-virus software, apply various lockdown policies to make sure the devices were secure, and deal with everything else that goes into getting a laptop ready and capable of undertaking its allotted task. 

Instead, our client took possession of the laptops and re-imaged every single machine with Stratodesk NoTouch. 

All that was required was to plug a network cable into each laptop, and in just a few minutes they were ready to go. In a single stroke, they were made completely secure, lightning fast and simplicity itself to use. 

It took our client just under 4 days to empower a 250-strong workforce to operate securely from home. 

In just a few days, every one of the machines was re-imaged and allocated to a member of staff. 

When they turn on their ‘new’ laptop, all they see is the company’s branded Citrix login page. They simply enter their login details, password and MFA token, and because there is no bloated operating system they are up and running in less than 20 seconds. 

This is because to all intents and purposes all the laptops have become Thin Clients. The beauty of this setup is that there is nothing on the laptop except for the Citrix workspace client, so there’s no way to access any sensitive company data or do anything other than their assigned workload.

It also means that if for any reason one of these laptops is stolen or lost, there is no danger whatsoever of our client’s sensitive data falling into the wrong hands. To anyone other than the allotted user, they are completely useless. 

Furthermore, as there is no ‘full fat’ operating system or additional applications, there are no updates to contend with. And to add to the solution’s appeal, Quadris is able to support every user remotely – no matter where they are located.

Looking to enable your employees to work remotely? 

To find out how we can help empower your workforce to work securely from home, office or anywhere in between, please contact Peter Grayson on 0161 537 4980 or email