The challenges of faster security in the cloud

Working with a number of different customers, I experience the good, the bad and the ugly when it comes to cloud security and how it is managed… or not.

One of the most effective approaches I find is to perform a security assessment of the environment and target the ‘quick wins’ to rapidly improve security and limit the chance of reputational damage, courtesy of someone walking straight into your organisation’s sensitive data.


Typically the concern with this approach is that you are inadvertently applying change, and where there is change you often get resistance from people. This can be because people are being found out in particular areas, such as having over-privileged access, or because they cry out for a Waterfall approach to the change simply because that is what they are used to. If that does not slow the change enough, then to support the archaic Waterfall delivery they may demand a fully fledged support model from day one.

I often find it a challenge to balance that mindset against the view that “Waterfall is dead” and that, if we want to deliver anything within the next few weeks or months, an agile approach is the way now. I continue to be amazed at how institutionalised people can be at times, which is massively frustrating given what I am trying to do and where the world is going. Do they not see it is about ‘on demand’, about ‘now, now, now’? That flexibility is a necessity when competing with other organisations or trying to offer better value to customers!

Nevertheless, just as with the concept of cloud itself, with its faster provisioning of services and shared responsibility, it is always an educational challenge to bridge the mindset of the masses and explain why cloud makes sense.

Then there is the whole concept of teams HAVING to work closer together when they consume cloud services. With the adoption of cloud it becomes very evident that teams will step on each other’s toes when they begin to use those services, because the services often bridge the responsibilities of multiple teams. I could probably write a blog purely on that.

My point is, there is a clear pattern in how we move forward in IT today. We have adapted the project approach from Waterfall to Agile. We have identified the blockers to these faster deliveries and tried to resolve them by having teams work together: Developers & Operations became DevOps, closely followed by Security, who were last to the mix, which has now become SecDevOps or DevSecOps! Ultimately we are heading in the correct direction, which is great.

The biggest challenge I see when I consult and advise at various organisations, whether in the public or private sector and regardless of industry (finance, media, healthcare etc.), is a combination of people (teams), their politics, skills gaps and process. To be fair, each sector has its own nuances, but generally I find people and skill set to be the biggest hurdle.


My takeaway would be to ensure you have a small team of individuals who are ‘cloud ready’ and who will focus on how your organisation will adopt cloud services. Identify what services you wish to consume for day one. Once the technical configurations have been addressed, you then figure out the touch points with business teams, such as the onboarding process. Choose a representative “champion” from each business function that is a stakeholder of the service, so they can relay to their team how it will engage with the cloud service.

Remember, there is no end stage, only continual development and improvement, when it comes to cloud. This means the life cycle is never finished; we just build out what we currently have to improve process and functionality. Simple, right!?

The important thing is to get the message right and have it circulated throughout the organisation. Good messages are:

  • We are now operating with a cloud-first approach.
  • We must work closer together.
  • Ask yourself how we can make this work, rather than facing the change with fear.
  • Getting involved is the best way forward.
  • What do we need to do to support this?

All positive messages right!  Good luck!






GDPR Readiness – In The Cloud


 “While the overwhelming majority of IT security professionals are aware of GDPR, just under half of them are preparing for its arrival, according to a snap survey of 170 cyber security staff by Imperva.”

Many of you will have had that initial conversation about your company having to adhere to the new GDPR regulations around protecting personal data, identity tracking and so on.


GDPR stands for “General Data Protection Regulation” and will be enforced from 25 May 2018. It was put together by the European Commission to reinforce data protection and give people more control over how their personal data is used. Companies like Facebook and Google swap access to people’s data for use of their services, so it is now essential to have methods to protect our data and offer visibility of where our data is located. The current legislation was enacted before the internet and cloud technology created new ways of exploiting data, and the GDPR seeks to address that.


That said, GDPR should also offer companies a clearer and simpler legal environment in which to operate, making data protection law identical throughout the single market (the EU estimates this will save businesses a collective €2.3 billion a year).

By strengthening data protection legislation and introducing tougher enforcement measures, the EU hopes to improve trust in the emerging digital economy.

While your business tries to understand the legal implications of the new legislation, it is up to IT to put robust technological solutions in place to meet the GDPR obligations.

 “nearly a third said they are not preparing for the incoming legislation, and 28% said they were ignorant of any preparations their company might be doing.” – Imperva


There are two main reasons why you should be sweating the deadline for GDPR compliance.

  1. The potentially high cost of achieving GDPR compliance.
  2. The potential loss of customer data, as the fines for non-compliance can be as high as €20 million or 4% of your company’s annual global turnover, whichever is greater. Enforcement will be extremely expensive; that is why the fines are enormous: to pay the salaries of those monitoring, investigating and enforcing the regulations.
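To make the stakes concrete, here is a minimal sketch of the upper-tier fine cap as set out in Article 83(5) of the regulation (the greater of €20 million or 4% of worldwide annual turnover); the function name is my own.

```python
def max_gdpr_fine(annual_turnover_eur: float) -> float:
    """Upper-tier GDPR fine cap: the greater of EUR 20 million or
    4% of worldwide annual turnover (Article 83(5))."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# A company turning over EUR 2bn faces a cap of EUR 80M, not EUR 20M.
print(max_gdpr_fine(2_000_000_000))  # 80000000.0
```

For smaller companies the €20 million floor dominates, which is why the fine is frightening at any scale.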


When working with Cloud Service Providers (CSPs) such as Microsoft Azure, AWS, Google and IBM, it is important to note that some responsibilities are owned by you as the customer and some by the CSP. It is important to know what, in the realms of GDPR, you are responsible for and what is covered by your CSP.

There are four steps to the process of preparation:

  1. Discover – Find out what personal data you have and where it’s located.
  2. Manage – Control how personal data is used and accessed.
  3. Protect – Enforce security controls to prevent, detect and respond to vulnerabilities and data breaches.
  4. Report – Keep required documentation, manage data requests and provide breach notifications.
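The “Discover” step above lends itself to simple tooling. Below is a minimal sketch in Python; the regex patterns and categories are illustrative only, and nowhere near an exhaustive definition of personal data under the GDPR.

```python
import re

# Illustrative patterns for the "Discover" step: flag likely personal
# data (email addresses, UK-style phone numbers) in free text.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+44\s?|0)\d{10}\b"),
}

def discover_personal_data(text: str) -> dict:
    """Return every match per category, so you know what you hold and where."""
    return {name: rx.findall(text) for name, rx in PATTERNS.items()}

sample = "Contact jane.doe@example.com or call 07700900123."
print(discover_personal_data(sample))
```

In practice you would run something like this (or a commercial discovery tool) across file shares, mailboxes and databases, then feed the results into the Manage and Protect steps.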

Whilst I work with customers to help educate and direct them to become “GDPR ready”, it’s important to note that, with a combination of processes and tooling, you can become GDPR ready within your own organisation.

Get started now by incorporating strategies around Data Security, Access Management and Data Protection.

Deadline 25 May 2018

Cloud based security adoption – Web Application Firewalls


So you are planning to move infrastructure services to the cloud, or you are already operating in the cloud and require extra security provisioning around your web application (HTTP/HTTPS) traffic. You have come to the conclusion that you want a Web Application Firewall (WAF). This is a good start: some customers think they are protected with only a firewall, which they are, but not to the level of a dedicated layer 7 Web Application Firewall. Then there are the customers who simply don’t know the difference between the two. Let me explain briefly:

Cloud based Firewall

Put simply, a cloud based firewall (aka a firewall virtual appliance) protects a group of computers on your network against unauthorised traffic by using a set of policies: traffic that passes through the firewall must adhere to these rules, or the packets are blocked. A firewall operates at layer 4, at the packet level.
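As a rough illustration of that layer 4 model, here is a toy first-match packet filter in Python; the rule shape (source CIDR plus destination port) is a simplification of what real firewall policies support.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    action: str   # "allow" or "deny"
    source: str   # CIDR the packet's source address must fall inside
    port: int     # destination port the rule applies to

def filter_packet(rules, src_ip: str, dst_port: int) -> str:
    """First-match evaluation, deny by default: only traffic a
    policy explicitly permits gets through."""
    for r in rules:
        if ip_address(src_ip) in ip_network(r.source) and dst_port == r.port:
            return r.action
    return "deny"

policy = [Rule("allow", "10.0.0.0/24", 443), Rule("deny", "0.0.0.0/0", 22)]
print(filter_packet(policy, "10.0.0.7", 443))    # allow
print(filter_packet(policy, "203.0.113.9", 443)) # deny (no matching rule)
```

Note everything here is addresses, protocols and ports; the firewall never looks inside the HTTP payload, which is exactly the gap a WAF fills.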

Web Application Firewall (WAF)

A WAF, on the other hand, is another type of virtual appliance that operates at layer 7 and should be deployed in front of your web applications / web sites, where it inspects and monitors your HTTP and HTTPS traffic to those backend servers. All access to your web applications passes through your WAF, where the traffic is inspected to determine whether it should be passed through or blocked according to a core rule set, typically the OWASP Core Rule Set (CRS) 3.0 or 2.2.9. The CRS is made up of a set of detection rules that protect against the top 10 common threats, such as SQL injection, Remote Code Execution and Cross-Site Scripting.
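To illustrate the idea of rule-based inspection (and only the idea: these regexes are toy examples, not the actual OWASP CRS rules), a layer 7 check can be sketched like this:

```python
import re

# Illustrative detection rules for two of the top-10 threat classes
# mentioned above; real CRS rules are far more sophisticated.
RULES = {
    "sql_injection": re.compile(r"('|--|\bunion\b.*\bselect\b)", re.I),
    "xss": re.compile(r"<\s*script", re.I),
}

def inspect_request(query_string: str) -> list:
    """Return the names of any rules the request trips; an empty
    list means the WAF would pass the traffic to the backend."""
    return [name for name, rx in RULES.items() if rx.search(query_string)]

print(inspect_request("id=1 UNION SELECT password FROM users"))  # ['sql_injection']
print(inspect_request("page=home"))                              # []
```

The point of the sketch is that the WAF reads the request content itself, which a layer 4 firewall never does.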

For a product to be considered a serious WAF, adherence to the OWASP rule sets should be considered essential.

Consider this when evaluating your vendors.  There’s plenty more to consider which is further expanded on here: Comparing Cloud Web Application Firewalls 

Choosing a WAF

Now we know what a WAF is, it’s time to think about what is important when deciding on one.

There are three things that differentiate a cloud based WAF from a traditional on-prem appliance: scalability, extensibility and availability.

  • Scalability
    • Being a cloud based virtual appliance means WAFs can scale according to business demand far better than the traditional on-prem equivalent. This is common amongst most cloud service providers. From an enterprise perspective this scale comes into play when bandwidth increases, unlike on-prem devices, which would require a device replacement to cater for the increased traffic. Traditionally, if you were under a DDoS attack and the throughput from the attacker was so great that it maxed out your WAF, your backend web servers would still end up being attacked, because the WAF would be crippled by the overload; this is exactly what you don’t want. It is far easier to manage this with a cloud based WAF, as scale is far easier to implement (behind a load balancer).


  • Availability
    • Cloud based virtual appliances such as a WAF mean vendors can offer SLAs such as >99.99% availability, because the underlying infrastructure consists of fully redundant power, network services, and backup and DR strategies, particularly as the underlying services are hosted by big players such as Microsoft and Amazon.


  • Extensibility
    • One of the fundamental benefits of hosting services in the cloud is the luxury of being able to provision your virtual appliances anywhere you have a protected communication path. Traditionally, deploying a physical on-prem WAF device would incur upfront capital expense and require both room in the datacentre and out-of-band management access, with the cost increased further for HA, of course.
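A toy model ties the scalability and availability points together: instances can be added on demand rather than replaced, and unhealthy ones drop out of rotation. The class and instance names here are purely illustrative.

```python
class WafPool:
    """Toy model of a scaled-out WAF tier behind a load balancer:
    requests rotate across healthy instances, instances can be added
    on demand, and failed ones drop out of rotation."""

    def __init__(self, instances):
        self.healthy = list(instances)
        self._next = 0

    def route(self) -> str:
        # Round-robin across whatever is currently healthy.
        target = self.healthy[self._next % len(self.healthy)]
        self._next += 1
        return target

    def scale_out(self, instance: str):
        # Cloud scale: add capacity instead of replacing hardware.
        self.healthy.append(instance)

    def mark_unhealthy(self, instance: str):
        # A failed health probe removes the instance from rotation.
        self.healthy.remove(instance)

pool = WafPool(["waf-1", "waf-2"])
print([pool.route() for _ in range(3)])  # ['waf-1', 'waf-2', 'waf-1']
pool.scale_out("waf-3")      # demand spike: add capacity on the fly
pool.mark_unhealthy("waf-1") # surviving instances keep serving
```

An on-prem appliance has neither lever: you buy a bigger box, or you fall over.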



What your business needs to know about Cloud security

It’s time.

Time to evolve and move with the times. Regardless of whether your IT services are on-prem, in the cloud or hybrid, it’s time to take note before it’s too late.

Time to protect our businesses and organisations from the ever growing cyber attacks. We’ve seen them in the news affecting the NHS; we’ve seen them cause serious disruption at large firms including the advertising giant WPP, French construction materials company Saint-Gobain and Russian steel and oil firms Evraz and Rosneft. They’ve shown no mercy, affecting public and private sectors across the US and Europe.

The pace and frequency of the attacks have gathered momentum and now become a highlight in mainstream media. Clearly disruptive, they go to show that these types of attacks cannot be ignored.


It’s important to have your IT Services risk assessed now.

It’s important for your organisation to distinguish critical data and provide protection against such threats.

It’s important to know that in the event of an attack that your systems can identify and detect abnormal behaviors around your cloud applications and files etc.

This type of granular detection sits on top of your already deployed intrusion detection system and intrusion prevention system… intrusion what?…


Make the discovery before it’s too late and move with the times to save lives and your business.


It doesn’t have to cost a lot for you to save a lot.  Cost is not everything when you compare the sensitivity of your data.  For the NHS this data can save a life and by losing data you are risking life.

Without a doubt, whether the public sector likes it or not, it will be forced (bullied, even) into the evolution of its IT services by such attacks. They will only continue for as long as it remains vulnerable.

Be warned, make the discovery.



Cyber security in Healthcare

Healthcare organisations have increasingly been targeted, and the initial attacks commonly go unnoticed. It is no surprise that many hospitals lack the new technologies and best practices to defend against such threats, which is what makes them the perfect victim.

This can leave organisations vulnerable to losing highly sensitive information, costing time, money, patient satisfaction and valuable resources.


Cyber-attack’s have been seen to lock staff out of their computer systems, resulting in many hospitals having to cancel or delay treatment for patients.  I’m referring to the recent Ransomware attack that affected many British National Hospitals.  This is only one method of a Cyber attack and there will be a steady increase i imagine.


I’m keen to encourage organisations to move to the cloud it is important for me to educate organisations especially healthcare organisations to make them aware of the risks so I can help them identify and reduce threats to data security and privacy across their infrastructure.


Devising a framework is the first step to help protect devices, operating systems and sensitive data against ransomware, malware and cyberattacks.


With the steadily increasing attacks on the public sector it is vital that the patients and healthcare users can be confident that their information is protected from such cyberattacks.


3 B’s is a strategy i use to focus my efforts around that include:


Block – The first line of defence is to block attacks that reach your perimeter, using tools such as Exchange Online Advanced Threat Protection (ATP) and the Microsoft Active Protection Service (MAPS). By enforcing these technologies you raise the complexity for cyber attackers and can prevent breaches.


Barricade – In the event an attack gets past your perimeter, it’s critical to contain the attack where possible. To protect administrative access you can leverage Secure Privileged Access (SPA), as well as using Windows Defender for anti-malware capabilities with real-time analysis and response.


Backup – To ensure business continuity, it’s important to have correctly configured backups in place, which Microsoft can further protect in their own datacentres, rendering the data inaccessible to attackers.


Healthcare organisations often measure their IT strategy against their local regulatory compliance checklist; however, it’s about time they went beyond the compliance checklist and expanded into the following areas to help mitigate vulnerabilities and risk.


This list has been extracted from Microsoft and serves as a best-practice framework for measuring the cybersecurity plan within a public sector or healthcare organisation:


  • Develop a “where used” matrix (“Do you know where your data is?”)
  • Employ a data backup and recovery plan for all critical information
  • Perform and test regular backups and isolate critical backups from the network
  • Include recovering from a cyberattack in disaster recovery plans
  • Use a different communication mode if breached (hackers may be listening on the current system)
  • Employ an end-to-end data encryption strategy; control your encryption keys
  • Ensure business associates are working with your security and compliance needs
  • Employ analytics in your security (behavioural, machine learning, partner information, advanced threat analytics)
  • Work to minimise “Shadow IT,” still a major challenge
  • Whitelist apps to help prevent malicious software and unapproved programs
  • Keep software up-to-date with the latest patches and support
  • Keep anti-virus software current
  • Apply the “least privilege” principle to all systems and services
  • Educate users, patients, affiliates, and others
  • Restrict permissions to install and run unwanted apps

DAMAGE LIMITATION: British Airways – Another industry victim of poor IT Strategy


A £500 million incident that reinforces why moving to the cloud is an IT strategy that needs to be considered a ‘when’ rather than an ‘if’.


So it’s been just over a week since the NHS healthcare system was victim of the Ransomware attack and now we have British Airways who have fallen victim to having a somewhat poor IT Strategy.

Airline’s check-in and operational systems crashed on Saturday that saw thousands of passengers trying to travel on Bank Holiday weekend left stranded.

Chief executive Alex Cruz blamed the IT failure on an “exceptional” power surge, which he said had been so strong it also disabled the company’s back-up system. There are also claims that inexperienced staff in India didn’t know how to launch the backup system.

Unions claimed the company’s move to cut IT jobs in the UK and outsource some of them to India’s Tata Consultancy Services had left the company more vulnerable to this type of incident.


The IT system that was hit is responsible for British Airways’ flight, baggage and customer communication systems across 170 airports in 70 different countries.

George Salmon, a Hargreaves Lansdown analyst, said: “The whole sorry episode has undeniably put a dent in BA’s reputation for delivering a premium service, and the worry for shareholders is that this unquantifiable impact could have longer-term consequences.”

Davy analyst Stephen Furlong said the cost to the carrier of cancelling one day of operations was about £30m.

IAG (parent company to BA) has not yet said how much it expects the power surge to cost it financially but City analysts have predicted the costs of paying customer compensation and repatriating bags to travellers could amount to a bill of €100m (£86.6m).

“Half a billion pounds has been wiped off the market value of the British Airways owner, IAG, after computer system outages grounded hundreds of flights over the weekend.” –


The reports are that BA resumed a full flight schedule today (30/05/2017), although thousands of bags are still sat at the airport while their owners are already at their destinations.

Well we don’t know and probably won’t find out about what really resolved the issue but what we can say is whatever is currently in practice can’t be working if potentially it will cost the company £86.6m to resolve situations and already £500 million wiped off the shares! Pretty big f*…oversight.

Quite often IT Directors come into an organisation and stamp their mark by simply outsourcing IT to cheaper resources, which in the short term will save the company some money. But if your company’s focus is on ‘quality’, whether in product or service, then making such cuts is surely going against the ethos of the business, albeit saving in the short term. Maybe this was a risk British Airways was willing to take. The only issue was they got caught out early doors, before they could eventually bring everything back in-house.

As with the ransomware attack, if the company had invested in a full cloud IT strategy then, in my opinion, the damage would have been limited if not eliminated. I explain the proposed plans for mitigating the ransomware attacks in a previous article found here.

Would a cloud strategy help?

British Airways could benefit from using Microsoft Azure features such as Traffic Manager to redirect user traffic to alternative datacenters, continuing services whilst the primary is ‘broken’. Customers had serious issues accessing BA’s mobile apps to gather flight information. By using Azure Mobile Apps, which is a global service, together with Traffic Manager, users would effectively be none the wiser that there were backend issues at the British Airways primary datacenter.

As mentioned above, there was an apparent ‘huge’ power surge, which would have affected server racks within the British Airways datacentre. By configuring your servers for high availability and spreading them across Availability Sets, so long as only a single rack was affected by the power surge, the British Airways servers would be affected minimally, because services would have failed over to the working nodes on separate racks (each with a separate power supply unit).

However, given that British Airways hosts critical services that need to be available 24/7, the likelihood is that if these services were hosted in Microsoft Azure they would be made geo-redundant, with data additionally replicated three times to a paired Azure region hundreds of miles away: again, minimal damage to business-critical services. There are far more measures that can be taken, such as Azure Site Recovery, but the point is that Azure provides that assurance on an enterprise scale, particularly for critical services, whereby incidents like this are far less likely to take place if the IT strategy is designed correctly to take advantage of the cloud strategies available today.
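The Traffic Manager failover behaviour described above can be sketched as a simple priority-mode selection; the endpoint names and dictionary shape below are hypothetical, not the actual Azure API.

```python
def pick_endpoint(endpoints):
    """Priority-mode failover in the style of Traffic Manager: return
    the healthy endpoint with the lowest priority number."""
    live = [e for e in endpoints if e["healthy"]]
    return min(live, key=lambda e: e["priority"])["name"] if live else None

endpoints = [
    {"name": "uk-south-primary", "priority": 1, "healthy": False},  # power surge
    {"name": "uk-west-secondary", "priority": 2, "healthy": True},
]
print(pick_endpoint(endpoints))  # uk-west-secondary
```

Because the decision is made at the DNS/traffic-routing layer, clients are steered to the secondary region without any change on their side.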

There’s a place for the Microsoft Cloud whether you are a global organisation or SME the question is how best can it suit you.

Protect the NHS from Ransomware attacks with Microsoft Azure

It’s time for the NHS to move forward.

As we know, the NHS, according to the media, is at breaking point with consistently high patient demands and workloads. With limited funding, it is no surprise that its IT systems are typically very much out of date and lacking investment. You only have to look at the buildings to get a sense of what type of technology they are running. I couldn’t say that is the case across the board, but I’m pretty sure it’s not far from the truth.

It comes as no surprise, then, that they would be an ideal target for malicious attacks such as the most recent ransomware attack.

What is Ransomware?

“Simply put, it’s a type of malware that gets into a computer or server and encrypts files, making them inaccessible. The goal is to shut down your ability to do normal business. The attacker then demands a ransom for the key to “unlock” your data.” – Microsoft 2017

The severity of such an attack on such a critical part of society is devastating, particularly with many patients needing operations and sensitive data being inaccessible throughout the duration of the attack. To have no control over operations is a costly lesson to learn.

So now what? Ransomware is only one permutation of a malware attack, yet it caused devastating effects on NHS services. So what’s the lesson here?


There is method to our (IT Professionals) madness when we say it is important to implement a business continuity and disaster recovery (BCDR) strategy. The more sensitive your data is the more attention you need to give to such strategies.

Although it may be safe to say you obviously cannot protect yourself against everything, you sure can mitigate the impact of such attacks.

This attack on NHS services is a prime example of why the NHS should move to the Microsoft Cloud. By moving to the Microsoft Cloud, NHS services can very quickly implement a BCDR strategy as a matter of priority. That way they would very quickly achieve certainty that their data is protected securely and off site! Azure products that will facilitate this include:

  • Azure Backup Service (99 year data retention)
  • Microsoft Azure Recovery Services Agent (99 year data retention)
  • Microsoft Azure Backup Server (Azure offsite 99 year data retention)
  • System Center Data Protection Manager (Azure offsite 99 year data retention)
  • StorSimple

Each of the Microsoft products above offers its own benefits in addition to being both cost effective and secure. Multiple backup methods can be used, or a single method; it depends on your workload, although each one ultimately provides a sufficient backup strategy for your sensitive data. Choosing the right one involves evaluating the importance of the data, which is determined by your business needs, with backup schedules defined by the business’s RPO and RTO per application.
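The RPO-driven scheduling mentioned above boils down to a simple relationship: worst-case data loss equals the gap between successful backups. A minimal sketch (the safety factor is my own illustrative assumption, to leave headroom for a failed or delayed job):

```python
def max_backup_interval_hours(rpo_hours: float, safety_factor: float = 2.0) -> float:
    """Worst-case data loss equals the gap between backups, so the
    schedule must run at least as often as the RPO allows; the safety
    factor leaves room for a failed or delayed job."""
    return rpo_hours / safety_factor

# An application with a 4-hour RPO needs a backup at least every 2 hours.
print(max_backup_interval_hours(4))  # 2.0
```

RTO is the other half of the equation: it constrains how fast the chosen product must be able to restore, not how often it must run.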

Disaster Recovery (DR) often features as a tick box in most IT strategies, although it is rarely tested; the businesses who do use it tend to focus on DR and less on backup as time goes on. The issue here is that if your primary site is attacked, DR would replicate your current (compromised) site, rendering your DR site compromised too.

I mentioned DR, but so far I haven’t spoken about Azure Site Recovery (ASR). This is Microsoft’s solution to DR. ASR protects your environment by automating the replication of the virtual machines, based on policies that you set and control. What makes it different to traditional methods is that ASR allows you to preserve history in the DR site, which can help reduce the problem posed by compromised disaster recovery.

I don’t want to say i told you so but had the NHS implemented one or more of the backup methods mentioned earlier with ASR they would have been better positioned to recover from disaster quickly and efficiently with minimal effort compared to traditional onprem strategies.

Keeping your systems up to date can be a tedious task for any IT professional, but as we have seen, it is necessary to protect yourself from attacks. Another feature of Microsoft Azure that is both quick to deploy and offers great insight into the patch level of machines, for both on-prem and Azure virtual machines, is Operations Management Suite (OMS). I won’t expand on this, but its capabilities are improving almost weekly. You can identify vulnerable machines and initiate security update installation remotely. More info on OMS here.
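Conceptually, that assessment is just filtering an inventory for machines with outstanding updates. The inventory shape below is hypothetical, not the actual OMS data model:

```python
# Hypothetical inventory: what an OMS-style update assessment might
# surface per machine. Machine names are invented for illustration.
inventory = [
    {"name": "web-01", "missing_critical_updates": 0},
    {"name": "app-02", "missing_critical_updates": 3},
    {"name": "db-01", "missing_critical_updates": 1},
]

def vulnerable_machines(machines):
    """Machines with outstanding critical updates: the ones to target
    for remote security-update installation first."""
    return [m["name"] for m in machines if m["missing_critical_updates"] > 0]

print(vulnerable_machines(inventory))  # ['app-02', 'db-01']
```

The value of the tooling is that this list is produced continuously, across on-prem and cloud machines alike, rather than compiled by hand.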

The bottom line is: NHS, update your systems as a matter of urgency. Leverage the Microsoft Cloud to do so, give yourself the peace of mind that your data is secure, reduce capital expense through Azure’s operational model of monthly billing based on usage, and lastly reap the benefits of operating your technology in the most agile way possible.

Once your services are running in the Microsoft cloud you can leverage even more functionalities available with Microsoft Azure such as Machine Learning, Cognitive Services, Data Lake Store and so much more!  #keepthenhsprotected

Azure Application Gateway

This is what Microsoft say: “Microsoft Azure Application Gateway provides an Application Delivery Controller (ADC) as a service, providing many layer 7 load balancing capabilities.”


Essentially it sits in front of your backend web servers and manages the distribution of traffic, pretty similar to an Azure Load Balancer with a few differences, which I’ll explain later.

Features include:


  • Web Application Firewall (WAF) which protects your backend servers from attacks like SQL injection, cross-site scripting attacks, and session hijacks.


  • HTTP load balancing – Your typical management of traffic across multiple backend servers over HTTP (web traffic), ensuring data is sent to an ‘online’ server in the case of a server outage, for example.


  • Cookie based affinity (sticky sessions) – This feature keeps the user’s session on the same backend server that processed the original request (useful for applications that keep session state locally on the server).


  • SSL offload – This feature moves the encryption and decryption of traffic from the backend servers onto the Application Gateway, freeing up performance on the servers.


  • URL content routing – “This feature provides the capability to use different back-end servers for different traffic. Traffic for a folder on the web server or for a CDN could be routed to a different back-end, reducing unneeded load on backends that don’t serve specific content.”


  • Multi-site routing – consolidate up to 20 sites behind a single Application Gateway.
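The URL content routing feature above can be sketched as a first-match lookup on the request path; the route table and pool names below are invented for illustration.

```python
# Hypothetical path-prefix routing table: which backend pool serves
# which part of the site.
ROUTES = [
    ("/images/", "cdn-pool"),  # static content to a dedicated backend
    ("/api/",    "api-pool"),
]

def route_request(path: str, default_pool: str = "web-pool") -> str:
    """First match on the URL path prefix decides the backend pool;
    anything unmatched goes to the default pool."""
    for prefix, pool in ROUTES:
        if path.startswith(prefix):
            return pool
    return default_pool

print(route_request("/images/logo.png"))  # cdn-pool
print(route_request("/checkout"))         # web-pool
```

This is what lets you keep static-content load off the backends that only serve dynamic pages.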


What’s the difference between the Azure Load Balancer & the Application Gateway?


An Azure Load Balancer supports any protocol, whereas the Application Gateway supports just HTTP and HTTPS.


The Application Gateway supports any Azure internal IP address or public internet IP address as an endpoint, whereas the Azure Load Balancer is associated with Azure VMs and cloud services only.


IP reservation is not supported with Application Gateways, whereas SSL offloading is specific to Application Gateways and NOT Azure Load Balancers.

An Application Gateway must exist within its own subnet, with no resources in it other than Application Gateways.


Azure Security Best Practices

Remember, Azure is split into three main categories: Network, Storage and Compute. This post will focus on Microsoft best practices and my personal preferred practices when deploying an Azure datacenter solution.



Divide your vNET into service layers 

As with on-prem networks, one of the first things you do is create your IP address space and divide that space into subnets. I would recommend a /16 address space, as it will give you a good number of addresses that can be split into /24 ranges, giving you a decent number of devices per subnet. Create it too small and you restrict future growth. We create these subnets predominantly to differentiate between the different services within your environment (Azure vNETs are no different). The naming of the subnets can depend on the customer’s services, but more commonly it looks something like DMZ, Web-Frontend, App-Layer, Database layer and Infrastructure layer. There are many variations of this.
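Python’s standard `ipaddress` module makes it easy to sanity-check a subnet plan like the one above before you commit to it in a vNET; the layer names are the illustrative ones from the paragraph.

```python
from ipaddress import ip_network

# Carve a /16 address space into /24 service-layer subnets.
vnet = ip_network("10.0.0.0/16")
subnets = list(vnet.subnets(new_prefix=24))

layers = ["DMZ", "Web-Frontend", "App-Layer", "Database", "Infrastructure"]
plan = dict(zip(layers, subnets))  # each layer gets the next /24

print(len(subnets))       # 256 possible /24 subnets in a /16
print(plan["DMZ"])        # 10.0.0.0/24
print(plan["App-Layer"])  # 10.0.2.0/24
```

With 256 /24 ranges available and only a handful of layers in use, the /16 leaves plenty of headroom for future growth.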

Default Routes – Routing between the subnets is permitted as part of the Azure ‘default routes’.




Network Security Groups (NSG) – We can, however, manage the traffic flow by assigning network security rules, which exist inside a parent Network Security Group. I have another post explaining Network Security Groups.
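NSG rules are evaluated in priority order (lowest number first), with the first match winning and default catch-all rules sitting at very high priority numbers. A minimal sketch of that evaluation model; the rule shape is simplified, as real NSG rules also match source/destination prefixes, protocol and direction.

```python
# Simplified NSG model: rules evaluated lowest priority number first,
# first match wins, with an implicit catch-all at the end.
rules = [
    {"priority": 100, "port": 443, "access": "Allow"},
    {"priority": 200, "port": 3389, "access": "Deny"},
    {"priority": 65500, "port": "*", "access": "Deny"},  # catch-all
]

def evaluate(dst_port: int) -> str:
    for r in sorted(rules, key=lambda r: r["priority"]):
        if r["port"] == "*" or r["port"] == dst_port:
            return r["access"]
    return "Deny"

print(evaluate(443))   # Allow
print(evaluate(8080))  # Deny (falls through to the catch-all)
```

The practical consequence is that you leave gaps between priority numbers (100, 200, …) so new rules can be slotted in later without renumbering.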


Forced Tunneling – As part of the default rules, the VMs that sit inside the vNET are permitted to connect to the internet. You may want to prohibit this, but if it’s not a key focus point of your security I would suggest you don’t, as it can have a negative impact on internet-based services such as WAP servers or any other services that require access to the internet.



In other words, how can I protect data in Azure? There are two states for your data: ‘at rest’ and ‘in transit’.

At Rest: Data stored statically on storage disks or containers etc.

In-transit: ‘Moving data’ over the network e.g. Express Route, VPN, across Service Bus etc.

Best practices include:

  • Enable Multi-Factor Authentication – Control access to your data, which includes verifying your users with a second layer of authentication. This will mitigate against compromised credentials.

  • Role Based Access Control (RBAC) – Grant access on a ‘need to know’ and ‘least privileged’ basis by assigning user groups to specific built-in Azure roles. This mitigates against assigning more privileges to users than required.

  • Encrypt Azure VMs – Protect data at rest using industry-standard BitLocker. This helps mitigate the risk of unauthorized data access. Doing this before writing content to the disks makes for a quicker encryption process. You can use Azure Key Vault to safeguard the encryption keys.

  • Secure workstations (Privileged Access Workstations (PAW) / jump box) – Use a secure workstation with controlled access when accessing your sensitive data. MFA can also be introduced for an extra layer of protection against unauthorized users accessing the PAW.

  • SQL data protection via Transparent Data Encryption (TDE) – This will encrypt the entire SQL database using a symmetric key called the database encryption key. It mitigates the threat of malicious activity by performing real-time encryption and decryption of the database, backups and transaction logs.

  • Protect data ‘in transit’ – Where possible, always use SSL/TLS protocols to exchange data across different locations, or isolate your channel by using an Azure VPN for cross-premises data encryption, or, safer still, use ExpressRoute as a dedicated link that bypasses the internet; this mitigates the risk of ‘man-in-the-middle’ attacks, eavesdropping and session hijacking.

  • File-level encryption – You can use this method independently of file location by encrypting your files with Azure Rights Management (RMS). This level of protection works across multiple devices (phones, PCs, tablets) and provides assurance that the protection remains with the file regardless of location. By using RMS you are using industry-standard cryptography with full support for FIPS 140-2.
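The RBAC bullet above can be sketched as a role-to-actions lookup. Reader, Contributor and Owner are real built-in Azure roles, but the action sets here are heavily simplified for illustration.

```python
# Simplified built-in-style roles: each maps to a fixed set of actions.
ROLES = {
    "Reader":      {"read"},
    "Contributor": {"read", "write"},
    "Owner":       {"read", "write", "assign_roles"},
}

def is_authorized(assigned_roles, action: str) -> bool:
    """Least privilege: the action is allowed only if at least one
    assigned role explicitly includes it."""
    return any(action in ROLES[r] for r in assigned_roles)

print(is_authorized(["Reader"], "write"))       # False
print(is_authorized(["Contributor"], "write"))  # True
```

The least-privilege discipline is in the assignment, not the code: give each group the narrowest role that still lets them do their job.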