Monday, July 29, 2013

Creating an Information Security Compliance Capability

In my last post I wrote about compliance and third party providers; this time I want to go deeper into what it takes to build a good compliance capability. Please note that this post doesn't go into every detail, but it should provide sufficient guidance.

So first of all, why do we need a compliance capability or function? Organisations today have regulatory requirements that they must comply with. As an example, if the organisation's mission is to provide healthcare services in the US, then HIPAA and/or FDA regulations will apply. And if part of the organisation's business strategy is to go public (IPO) in the US, it will also need to comply with SOX requirements.

There are also industry best practices or frameworks that an organisation can adopt as part of its business or IT strategy, such as ISO, NIST or COBIT, which then become part of its compliance requirements. Last but not least, an organisation's internal policies should also be considered part of its compliance requirements. The following figure illustrates what I've just mentioned:




Why is it important to be compliant?
Well, to put it simply, being non-compliant with regulatory requirements can mean quantifiable losses for your organisation (financial sanctions, being barred from offering shares on the NYSE, etc.) along with non-quantifiable losses, such as damage to image and reputation. In other words, non-compliance creates risk for an organisation. Do some googling for HIPAA or privacy breaches and you'll see what I mean.

What happens when you are not compliant with your own policies? Well, internal or external auditors will raise a finding (a risk), which will require a remediation plan and funding to rectify the non-compliance… Plus, someone above you in the corporate chain will not be happy.

Why is implementing a compliance capability a good idea?
A compliance capability provides the organisation with the resources required to understand compliance requirements, communicate them, assist in achieving a compliant state, maintain that state, manage the risks associated with non-compliance situations and track their rectification.

In order to implement this capability, it is necessary to define a framework, like the one shown in the following figure:






Process Management

The first step in implementing a compliance capability is to plan the capability itself in terms of policies, procedures and human resources. Depending on the organisation and the scope of compliance, the number of people required will vary.

The capability's policies and procedures also need to be developed. These will specify how the capability's governance is to be achieved and how it will relate to other capabilities at an enterprise level and to other governance functions (e.g. IT and Information Security).

They should also cover things like guiding principles, roles and responsibilities, KPIs, how compliance is assessed (determining compliance requirements, building a Statement of Applicability) and how non-compliance issues are tracked and rectified (via a register), amongst others.
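As an illustration, here is a minimal sketch in Python of what such a non-compliance register could look like; the field names are my assumptions, not a standard:

    # A minimal non-compliance register sketch; fields are illustrative.
    from dataclasses import dataclass
    from datetime import date
    from enum import Enum

    class Status(Enum):
        OPEN = "open"
        IN_REMEDIATION = "remediation in progress"
        CLOSED = "closed"

    @dataclass
    class NonComplianceIssue:
        issue_id: str
        requirement: str       # e.g. a HIPAA clause or an internal policy section
        description: str
        risk_rating: str       # assigned via the risk management interface
        owner: str             # accountable role or business unit
        raised_on: date
        target_fix_date: date
        status: Status = Status.OPEN

    # The register itself can start as a simple list that reporting,
    # dashboards and risk tracking are built on top of.
    register: list[NonComplianceIssue] = []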

Changes in processes across the enterprise might be required in order to create interfaces for compliance reporting.


Risk Management
Any non-compliance issue creates a risk to the enterprise. Because of this, it is very important that the compliance framework is in line with the enterprise risk management or IT risk management framework. Each non-compliance situation must have its risk assessed and tracked as part of the organisation's risk profile until rectified.

Based on the organisation's risk appetite and tolerance, it is possible to assign a risk rating to each non-compliance issue, as sketched below.
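As a hedged sketch, this is roughly how such a rating could be derived; the 5x5 scale, the rating bands and the appetite threshold are assumptions for illustration, not a standard:

    # Map likelihood x impact (each 1-5) to a rating and compare it
    # against the organisation's risk appetite; bands are illustrative.
    RATING_BANDS = {range(1, 5): "low", range(5, 10): "medium",
                    range(10, 16): "high", range(16, 26): "extreme"}
    RATING_ORDER = ["low", "medium", "high", "extreme"]
    RISK_APPETITE = "medium"  # highest rating the organisation will accept

    def risk_rating(likelihood: int, impact: int) -> str:
        score = likelihood * impact
        return next(rating for band, rating in RATING_BANDS.items()
                    if score in band)

    def within_appetite(rating: str) -> bool:
        return RATING_ORDER.index(rating) <= RATING_ORDER.index(RISK_APPETITE)

    print(risk_rating(4, 4))          # "extreme" -> escalate immediately
    print(within_appetite("medium"))  # True -> can be accepted and tracked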

The compliance policy and non-compliance issue management procedures should establish how this interface will work.



Monitor and Evaluate
In order to evaluate the organisation’s current compliance posture, information can be fed from the following sources:
  • Internal / external audit reports
  • Self-assessments
  • Internal compliance assessments
  • Operational KPI reviews.
Enterprise interfaces are key to monitoring activities that could generate non-compliance situations (and, as such, risks). These interfaces are process modifications across the enterprise that feed the compliance capability with information about activities that may impact the compliance posture. Activities such as a business unit outsourcing a service should be detected and analysed for compliance, as in the sketch below.
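To make the idea concrete, here is a minimal sketch of how such an interface could screen enterprise activities; the event fields and triggers are illustrative assumptions:

    # Screen enterprise activity events against triggers that require
    # a compliance review; triggers and events are illustrative.
    COMPLIANCE_TRIGGERS = {"outsourcing", "new system", "offshore data transfer"}

    events = [
        {"unit": "Marketing", "activity": "outsourcing", "detail": "CRM hosting"},
        {"unit": "Finance", "activity": "office move", "detail": "level 3 fit-out"},
    ]

    for event in events:
        if event["activity"] in COMPLIANCE_TRIGGERS:
            print(f"Compliance review required: {event['unit']} - {event['detail']}")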




Communication and Training
The organisation needs to be aware of the compliance requirements and of the capability itself. To achieve good compliance, people need to know where to go with their questions.


Key things to consider are:

  • Create a compliance focal role: This role will be responsible for answering all questions on compliance requirements and can also act as the interface between business units and the organisation's compliance capability. It can be one resource or many resources distributed across the organisation.
  • Awareness and training: The compliance capability must ensure that awareness and training happen at an organisational level. The why, what, when and how should all be part of the awareness and training.
The compliance capability, in turn, needs to report the current compliance situation to governance boards (IT / Information Security) or key stakeholders. Dashboards and reports are good ways to show how the organisation is doing on compliance.


Hope it helps!

Sunday, July 1, 2012

Compliance and third party providers



This is my first blog post from Melbourne, Australia. Sorry it took me so long, but moving to a new country is not an easy thing. Anyway, having finished one of my first projects here, I thought it was a good moment to blog… and what better way to start than with third party provider compliance.

Organisations today are switching from Capital Expenditure (CAPEX) to Operational Expenditure (OPEX). Just as a reminder, CAPEX generally covers investments such as buying buildings, servers and software. OPEX, on the other hand, is the budget for things you rent or purchase in increments, like payroll, utilities or maintenance. Hence, OPEX is more controllable and flexible, and has accounting benefits.

This is one of the main reasons why organisations are switching from investing in infrastructure to “renting” or outsourcing it. Infrastructure (hardware and server management), cloud services, security services and software development are typical examples of outsourcing.
But outsourcing is not simply switching from CAPEX to OPEX: you are actually giving a third party the ability to access, create, modify, transfer or delete data and information. This requires an analysis of the inherent risks and proper compliance monitoring by organisations which, in my opinion, is not normally performed.

Let us use the following software development outsourcing example (which, in my opinion, is quite realistic):

Organisation XYZ decides to outsource a web development to organisation ABC. The web development requires access to customers' information. Once the deliverables are finished, organisation XYZ signs them off for production. Months later, organisation XYZ's new web application gets hacked and millions of records containing customers' information are exposed. After analysing what happened, it turns out that organisation ABC didn't include information security best practices in its SDLC (Software Development Life Cycle) and never tested its deliverables for vulnerabilities. Now organisation XYZ is facing lawsuits for privacy breaches.

Who is responsible for this breach? Did organisation XYZ apply due diligence and due care? What could organisation XYZ have done to reduce the risk of this happening?

Note: In my following post I will write about a proper compliance framework, so for this post I assume legal and regulatory compliance requirements are already known.

Step 1: Create a third party provider outsourcing policy

Depending on what we are going to outsource, we need a policy specifying the organisation's outsourcing requirements. For example, if we outsource software development, the policy might specify:
  • Risks of outsourcing must be analysed and mitigation controls implemented during the SDLC.
  • An SDLC framework aligned with best practices must be applied by the provider.
  • A proper change management process, with approvals from both organisations, must be in place.
  • A Threat and Risk Assessment (TRA) should be included as part of the change management process.
  • Information protection mechanisms should be in place to protect the information's CIA (Confidentiality, Integrity and Availability).
  • Vulnerability scans and penetration tests should be performed before a development achieves security certification.
  • Developments must be accredited before entering the production stage.
  • Requirements for compliance with this policy must be included in the outsourcing contract.
  • The provider must have a proper information security program.
  • The provider must execute annual risk assessments on its infrastructure.
  • The provider must have a proper information security framework in place.

Alternatively, we can create a generic policy; whichever approach fits the organisation best.

Step 2: Perform a risk assessment

Going back to our example, the development required access to customers' information, and that in itself calls for an analysis of the possible risks and impacts. Here are some questions that might have been asked:
  • What would happen if the code contains bugs? Would that allow a hacker to get into our systems?
  • What would happen if the platform has vulnerabilities and we do not apply patches?
  • What if the connections to the database are insecure?
  • What if the customer information gets stolen? 
  • Is the provider properly screening its employees? 
  • Does the provider outsource to another provider? 
  • What regulations could be breached if something goes wrong?

Step 3: Include compliance requirements in contract

This is a key step, because we will be checking compliance against the contract. The contract should have a specific compliance clause listing all the requirements the third party provider must comply with. You can use your (internal) outsourcing policy as a source, plus any new requirements that came out of the risk assessment process.

Other key requirements that might be specified (as an example) are:
  • Rights to audit the third party provider
  • Liability clauses (in case deliverables are vulnerable because proper due care was not taken)
  • Personnel screening by the third party provider
  • A no-outsourcing requirement (do not allow the third party provider to outsource to a fourth party)
  • Reporting and monitoring requirements (KPIs and reports to be created); these are important for performing monitoring.

Step 4: Monitor compliance


Now that we have a contract that includes compliance requirements based on the organisation's policies and the risks detected, we can monitor the third party provider for compliance against that contract.

The compliance monitoring process can be fed from the following sources:
  • Assurance activities (external / internal audits)
  • KPI measurements that might indicate non-compliance (e.g. the number of changes requested and approved does not match the changes placed in production; see the sketch after this list)
  • Results from penetration tests and vulnerability scans
  • Self-assessment reviews
  • Regulatory reviews
  • Consulting reviews
  • Changes in regulatory requirements that might affect the development
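As a minimal sketch of the change management KPI mentioned above (the ticket IDs and data sources are assumptions for illustration), comparing approved changes against what actually reached production is only a few lines of Python:

    # Approved change IDs (from the change board) vs. changes actually
    # deployed (from deployment records); IDs are illustrative.
    approved = {"CHG-101", "CHG-102", "CHG-103"}
    deployed = {"CHG-101", "CHG-103", "CHG-204"}

    unapproved_in_production = deployed - approved   # possible non-compliance
    approved_but_not_deployed = approved - deployed  # worth investigating too

    if unapproved_in_production:
        print(f"Possible non-compliance: {unapproved_in_production}")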


A key element is having a good compliance framework and a management system to track the resolution of non-compliance issues and their associated risks. I'll address that in my next post.

Hope it helps :)

Sunday, October 23, 2011

Developing an Information Security Strategy & Program


When it comes to Information Security management, one of the most interesting and difficult tasks is developing an Information Security strategy and program. Why is a strategy required? Consider the following statement:
Developing and maintaining an information security strategy is essential to the success of your program. This strategy serves as the road map for establishing your program and adapting it to future challenges. By following a consistent methodology for developing your strategy, you are more likely to achieve high-quality results during the process and complete the project in a timely manner. [1]
So, what is the difference between a strategy and a program? Well, they are related in the following way:
  • An Information Security strategy sets long-term objectives (security objectives), normally by determining the Organization's current state and its desired state in information security matters. The planning horizon is normally 5 years.
  • An Information Security program is what takes the Organization from that current state to the desired state, by executing short, mid and long-term projects.


The program should be based on a strategy. We know that the core of any security program will be risk management; policies, procedures & standards; information security organization structures; information classification; and awareness & education. But depending on the Organization and where it wants to set its security objectives, these “core” foundations will be shaped by its strategy.

For starters
The following steps are the basic foundations for a successful Information Security Strategy (ISS):

Step 1: Strategic Alignment
Creating an ISS is not an easy thing, and it's not 100% Information Security related either. A good ISS has to be aligned with the business processes and objectives of the Organization we are creating it for. In other words, we have to know and understand the Organization's mission and align our ISS with it. This is not easy, but it is critical.
Step 2: Executive Management Support
Another key to success is having Executive Management support. Again, this is not easy, but it is one of the most important things. Management must understand that Information Security is not a bunch of firewalls, antivirus software and strong passwords. Information Security is a living thing that requires management; hence, a definition of Information Security Management would be: all the activities that properly identify and value an Organization's information assets and that, together, provide confidentiality, integrity and availability for them.

A Steering Committee can be established with Executive Management to formally include them in the process (basis for an Information Security Governance initiative).
Step 3: Regulatory Requirements
And finally, what are our regulatory requirements? Based on the Organization's business processes and objectives, we can define which regulatory requirements the Organization must comply with. For example, if an Organization sells health-related products (drugs or equipment), it must comply with FDA requirements. If it also handles Protected Health Information (PHI), it must comply with HIPAA as well.

How to Develop a Strategy
So, once we have Executive support and we know the Organization's business objectives and regulatory requirements, we need to determine two things: where we are standing and where we want to go.
Step 4: Information Security Strategy Framework
To know where the Organization stands in Information Security matters, we need to compare its current state against industry best practices (standards and frameworks). These must be the same ones we will later use to define the desired state. Why? Because by using the same standards and frameworks we can perform a more accurate gap analysis (it's like comparing apples with apples).

So which best practices should we use? Again, it depends on the Organization's objectives. Normally I would recommend standards / frameworks like NIST, ISO 27002, CMMI, COBIT, ITIL or COSO, and then add the controls required by regulation (PCI, HIPAA, FDA, FFIEC). Normally, a mapping analysis between the different standards and regulatory requirements will be necessary in order to avoid control repetition.


It is also possible to combine two or more standards or frameworks. For example, we can use the eleven ISO 27002 security control clauses in combination with the Capability Maturity Model (CMM), estimating the current maturity level for each clause:
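For illustration, here is a minimal sketch of that combination in Python; the clause names follow ISO 27002:2005, while the maturity levels are made-up estimates, not an assessment of any real Organization:

    # Estimated CMM level (0-5) per ISO 27002 control clause, plus the
    # gap against a uniform target level; all numbers are illustrative.
    current_state = {
        "Security Policy": 3,
        "Organization of Information Security": 2,
        "Asset Management": 1,
        "Human Resources Security": 2,
        "Physical and Environmental Security": 3,
        "Communications and Operations Management": 2,
        "Access Control": 3,
        "Systems Acquisition, Development and Maintenance": 1,
        "Incident Management": 2,
        "Business Continuity Management": 1,
        "Compliance": 2,
    }
    desired_state = {clause: 4 for clause in current_state}  # target level 4

    for clause, level in current_state.items():
        gap = desired_state[clause] - level
        print(f"{clause}: current {level}, desired {desired_state[clause]}, gap {gap}")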

The combination of all these standards and frameworks creates what I like to call the “ISS Framework”, which we will use to define the security objectives. This is also known as a Corporate Security Framework.

Step 5: Current State
Now that we have defined an ISS Framework, we need to determine our current state. To do that, two things must be done: a gap analysis against the ISS Framework and a Risk Analysis.

It is recommended that a Risk Assessment framework is used (e.g. NIST SP 800-30, COBIT, Risk IT, etc.). The ISS Framework controls can also be included in the Risk Analysis when performing Vulnerability Identification (from NIST SP 800-30), since missing or partially implemented controls introduce vulnerabilities and the resulting risks.

The scope of the Risk Assessment must include all applications and devices that transmit, process or store critical information (i.e. critical applications and general support systems). For this, the following conditions have to be met:

  • Information assets and resources are properly identified
  • Information assets and resources are properly valued
  • Information assets are classified according to their confidentiality, integrity and availability requirements.
Without those conditions, a sound ISS cannot be achieved.

The output of the Risk Analysis (plus the gap analysis) will be the current threat profile and the risks identified for the Organization. If we are using another standard or framework, like CMMI, we would then assign a level to each control clause based on the results obtained.

An important thing to define in this step is the Organization's Risk Appetite: the amount of risk an enterprise is prepared to accept. Risk appetite can and will differ between enterprises; hence, there is no absolute norm or standard for what constitutes acceptable and unacceptable risk.

Step 6: Desired State
With the Organization's actual status known, we can define the desired state. Again, we have to use the same ISS Framework we defined in Step 4. Now it's time to define the long-term Information Security objectives for the Organization. For example:
  • Become PCI compliant
  • Become HIPAA compliant
  • Implement the Asset Management control clause (at least level 4)
  • Implement Business Continuity Management (at least level 5)
Two critical things should be considered when defining the objectives: the Risk Appetite of the Organization and the strategic alignment.

Risk appetite should be considered since it modifies the desired state. An Organization with a greater risk appetite will not fully implement all controls, while one with almost zero risk tolerance will implement almost every control.

Finally, objectives that do not support the Organization’s business strategy should not be considered.

Build the Roadmap
Now that we have the desired state, i.e. the information security strategy, what takes us from the current state to that desired state is the Information Security Program (ISP). The ISP consists of all the activities that together provide Information Security. Normally it will consist of short and medium-term projects, with some of them recurring, such as Risk Assessments and Awareness & Education.
Step 7: ISP framework
It is important to define which framework we are going to use to develop the ISP. There is no single answer here: we can have a custom-made framework like 1) Plan / Organize, 2) Implement, 3) Maintain / Operate, 4) Monitor / Evaluate, or we can apply the PDCA cycle (Plan – Do – Check – Act) from the ISO 27001 standard.

Whatever framework we apply, as long as it has a planning phase, an execution phase, a control phase and a feedback / adjustment phase, it should work.
Step 8: Create the ISP
So, based on the long-term Information Security objectives (the desired state, or strategy), we should break them down into smaller projects that will help us achieve them. As mentioned before, these projects should be sized based on many factors, like funding, criticality, business objectives, available resources, technologies, etc. Normally the ISP will span 5 years of projects (aligned with the strategy's 5-year planning horizon) and, very importantly, it is never final. Why? Because of the nature of its contents: Information Security always changes!

So it is important to consider constraints that may appear when developing the ISP:
  • Law 
  • Physical capacity 
  • Ethics 
  • Culture 
  • Costs 
  • Funds 
  • Personnel 
  • Resources 
  • Capabilities 
  • Time 
  • Risk appetite 
The resources that will be used to achieve the various parts of the strategy and to execute the ISP include, among others:

  • Policies 
  • Standards 
  • Processes 
  • Methods 
  • Controls 
  • Technologies 
  • People 
  • Skills 
  • Training 
  • Education 
  • Other organizational support and assurance providers 
So, continuing with our earlier example:



Step 9: KPIs
We need a way to analyze the progress of the ISP's execution. To achieve this, Key Performance Indicators must be planned, established and measured. Which KPIs should we use? Well, all projects must be tracked for schedule and cost, as sketched below. Progress can also be tracked by using CMM to monitor how a specific area evolves.
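As a hedged sketch of that schedule and cost tracking (the projects and figures below are invented for illustration), simple earned-value style variances work well:

    # Earned-value style variances per ISP project; numbers illustrative.
    # Columns: name, budget, actual cost to date, % planned, % complete.
    projects = [
        ("PCI gap remediation",   100_000, 70_000, 0.50, 0.40),
        ("Awareness & education",  30_000, 15_000, 0.50, 0.55),
    ]

    for name, budget, actual, planned_pct, done_pct in projects:
        earned = budget * done_pct
        cost_variance = earned - actual                    # negative = over budget
        schedule_variance = earned - budget * planned_pct  # negative = behind plan
        print(f"{name}: CV={cost_variance:+,.0f} SV={schedule_variance:+,.0f}")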
Step 10: Do – Check – Act
Once we have the ISP ready, according to the Plan – Do – Check – Act framework we should:
  • Do: Execute the program accordingly.
  • Check: Monitor the progress of the program frequently.
  • Act: On deviations found, take the necessary corrective actions to get back on track.
By Agustin Chernitsky

[1] Mather, Tim & Mark Egan, Developing Your Information Security Program, Prentice Hall PTR, USA, 10 December 2004

Wednesday, September 14, 2011

The attempted attack on Google demonstrates one of the weaknesses of Public Key Infrastructure: Certificate Authorities

On August 29th, Google announced on its official blog that it had detected an attempted “man in the middle” (MITM) attack that could have been used to intercept information between all of its web services (Gmail, Google Docs, search, etc.) and users of some Iranian ISPs.
The blog post also states that the attackers used a Digital Certificate (issued to *.google.com) from a Dutch Certificate Authority (CA) called DigiNotar which, according to Google, is not a CA they use. This is a proof of concept of one of the weaknesses of Public Key Infrastructure (PKI).

This incident forced Google and Microsoft to remove DigiNotar as a Trusted CA by updating their operating systems and Internet browsers.

What are Digital Certificates, Certificate Authorities and Public Key Infrastructure?

A Digital Certificate is a document issued by a third party (called a Certificate Authority) that authenticates a person or entity, and it is used to create secure (encrypted) connections over the Internet using the SSL (Secure Socket Layer) protocol. A Digital Certificate contains information such as: name, permitted use, issuer, subject (“issued to”), encryption algorithms, serial number, expiration date, issue date, etc.

We can compare a Digital Certificate with a passport, which is a document with international validity used to identify its holder.

Certificate Authorities are organizations whose main function is to issue and revoke Digital Certificates. Before issuing a certificate, they must perform due diligence to validate the person or entity requesting it (known as the requestor), and then issue it by applying their digital signature.

Going back to our passport example: passports must be issued by a government agency. To obtain or renew one, each requestor must prove their identity by showing a national identity card or similar document. Once the government validates the identity, it issues the passport with its signature and the corresponding formats.

Public Key Infrastructure is an authentication framework involving a set of programs, data formats, procedures, communication protocols, security policies and public-key encryption mechanisms that, working together, allow dispersed persons or entities to communicate in a predictable and secure manner.

In order for persons or entities to participate in a PKI, they must possess a Digital Certificate issued by a “trusted CA”: one that is considered trusted by an operating system, Internet browser or user. Normally, operating systems come configured by default with a list of trusted CAs. This is what allows persons or entities that hold Digital Certificates issued by different CAs, and that have never met, to authenticate and communicate in a secure manner.

Going back to our passport example, a passport is considered internationally valid as long as it was issued by an official, authorized entity. This means that the various immigration services will check, for a specific passport, which entity is authorized to issue it, and will validate our document and identity by trusting that entity.

How was the attempted attack on Google performed?

Google did not give out details of the attack itself, but given the Digital Certificate issued by a trusted CA for *.google.com and the information available on the case, we can establish the following attack vector:

  1. The attacker, by executing a DNS poisoning attack, modified the DNS cache entries of some Iranian ISPs. This allowed them to redirect all traffic for Google services to another server. Remember that a DNS server resolves domain names into IP addresses and, each time it does so, it stores the resolution in a cache for faster access. A DNS poisoning attack modifies the resolutions in the cache by injecting data that did not originate from authoritative (original) DNS sources.
  2. The attacker then configured a proxy server with which they could receive, modify (or sniff) and redirect traffic to other servers. Here is where our trusted Digital Certificate comes into play: when users connect to the proxy server (redirected thanks to the DNS poisoning), it authenticates itself to them using the obtained Digital Certificate. Since the certificate was issued to *.google.com and was trusted, browsers and operating systems treated the proxy server as a legitimate Google site (the browser shows gmail.google.com, which matches the issued certificate).
  3. Finally, the attacker redirects traffic to the legitimate Google site. All traffic returning from Google to the user passes through the same proxy server, allowing the attacker to sniff it and thus obtain passwords and read mails, documents and search strings.
There is information indicating that steps 1 and 2 were executed, but this was never confirmed by Google. Moreover, the attack was detected and stopped by Google's browser (Chrome), which detected that the Digital Certificate was fraudulent; hence, step 2 of the attack must have taken place.
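Incidentally, inspecting the certificate a server presents, which is essentially what the browser does before trusting it, takes only a few lines of Python standard library code. A minimal sketch; the host name is just an example:

    import socket
    import ssl

    def peer_certificate(host: str, port: int = 443) -> dict:
        # create_default_context() verifies the chain against the OS
        # trusted CA list and raises an error if verification fails.
        context = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                return tls.getpeercert()

    cert = peer_certificate("www.google.com")
    print("Issued to:", cert["subject"])
    print("Issued by:", cert["issuer"])  # an unexpected CA here is a red flag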

The immediate action taken by Google and Microsoft was to remove DigiNotar as a trusted CA from their operating systems and browsers. This means that all web sites using certificates issued by this CA to authenticate or encrypt data will trigger an error in Internet browsers stating that the certificate in use was issued by a non-trusted CA.

Conclusion 

PKI is still a good authentication framework for exchanging information between unknown entities but, like all things, it has its weaknesses:

  • As this incident demonstrated, the Dutch CA issued a certificate without performing due diligence on the identity of the requestor, thus endangering the confidentiality of the information of a group of Google users.
  • More than one CA can issue a Digital Certificate for the same person or entity. In this case, Google works with Verisign, not DigiNotar.
  • If the private key with which the CA digitally signs certificates is compromised, all certificates issued must be revoked.
What can be done to avoid these incidents?
  • CAs must perform due diligence on the identity of each requestor.
  • There should be a register, like the one used with domain names (registrars), that would allow a CA to detect whether a requestor already uses or possesses a Digital Certificate. With this, fraudulent certificate issuance (like what happened to Google) could be avoided.
  • PKI should incorporate a new role into its framework to maintain this register. It should be an independent entity that validates identities with authorized CAs.
  • Finally, browsers and operating systems should incorporate an additional control that uses the suggested register. A good example of this is Chrome, which applies this sort of control for the google.com domain; a sketch of the idea follows this list.
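A minimal sketch of that pinning idea in Python: it is similar in spirit to (not an implementation of) Chrome's control, and EXPECTED_PIN is a placeholder, not a real fingerprint:

    import hashlib
    import socket
    import ssl

    EXPECTED_PIN = "replace-with-a-known-good-sha256-digest"

    def certificate_fingerprint(host: str, port: int = 443) -> str:
        # Fetch the server certificate in DER form and hash it.
        context = ssl.create_default_context()
        with socket.create_connection((host, port)) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der_cert).hexdigest()

    if certificate_fingerprint("www.google.com") != EXPECTED_PIN:
        raise RuntimeError("Certificate does not match the pinned fingerprint")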

Monday, July 18, 2011

Cloud Computing and Information Security: New Challenges

Cloud Computing is the new thing and we all love to talk about it. The US Government has made a strong commitment to this new technology, and even the most recognized IT companies, like IBM, Microsoft, Google, HP, Apple and Amazon among others, are already offering these services as cloud providers.

So, what is Cloud Computing?
It is an on-demand service model that allows access to computing resources like networks, servers, storage, applications and services, with the primary advantage that they can be rapidly and automatically provisioned, even without service provider interaction. A key characteristic is that these resources are shared among the different users of the cloud service.

Cloud Computing caught everyone's attention because it created a paradigm shift in the IT infrastructure concept: instead of companies owning data centers, servers, routers, switches and the personnel to manage them, these will all be in the “cloud”, provided to the company by a third party provider.

This brings many advantages, like:
  • Cost reduction in the acquisition of new servers and network infrastructure equipment (routers, switches, firewalls, etc.).
  • Less in-company IT personnel, since the cloud provider will have a specialized team to manage the service.
  • Less in-company personnel training, since the cloud provider provides that.
  • Operational cost reduction: electricity, redundant equipment, network links, etc. will all be provided by the cloud.
  • Inexpensive research and development.
  • Capital expenditures become operational expenditures.

What are the main characteristics of Cloud Computing services and how is it offered?

Cloud Computing has five essential characteristics:
  1. On-demand self-service: A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically, without requiring human interaction with the service provider.
  2. Broad network access: Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, and PDAs).
  3. Resource pooling: The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand.
  4. Rapid elasticity: Capabilities can be rapidly and elastically provisioned.
  5. Measured service: Resource usage can be monitored, controlled, and reported (e.g., storage, processing, bandwidth, or active user accounts).
Cloud Computing is offered in 3 service models:
  • SaaS (Software as a Service): The consumer uses the provider's applications running on a cloud infrastructure, accessible from various client devices through a thin client interface such as a web browser (e.g., web-based email).
  • PaaS (Platform as a Service): The consumer deploys applications onto the cloud infrastructure using programming languages and tools supported by the provider (e.g., Java, Python, .NET).
  • IaaS (Infrastructure as a Service): The provider provisions processing, storage, networks, and other fundamental computing resources on which the consumer is able to deploy and run arbitrary software, including operating systems and applications.
Additionally, the service is available in 4 delivery models called Private, Public, Hybrid and Community. Next, I will describe the two most important ones:
  • Private: The cloud infrastructure is operated solely for an organization.
  • Public: The cloud infrastructure is made available to the general public or a large industry group and is owned by an organization selling cloud services.
Privacy and Security in the Cloud: Main Concerns

These new technologies that create a paradigm shift also build up a storm of new challenges and concerns to Corporate CIOs. The NIST (National Institute of Standards and Technologies) and the CSA (Cloud Security Alliance) mention some of these concerns:

  • Compliance: Data resides in the cloud and not in a fixed physical place. Its exact location may not be available or disclosed by the cloud provider, making it difficult to guarantee the correct implementation of the security controls required to achieve regulatory compliance. As an example, the Health Insurance Portability and Accountability Act (HIPAA) and the Payment Card Industry Data Security Standard (PCI DSS) require both technical and physical safeguards for controlling access to data, which may create compliance issues for some cloud providers.
  • Cloud Provider Trust:
    • Data ownership: Data is processed and stored in the cloud provided by the cloud provider. Without a clear service agreement, there might be uncertainty about who owns the information.
    • Composite services: Cloud services that use third party cloud providers to outsource or subcontract some of their services raise concerns about the scope of control over the third party and the responsibilities involved.
    • Malicious insiders: Cloud providers grant their employees access (physical, virtual, etc.) to perform their duties. A malicious employee might have access to many customers' confidential data or even commit sabotage (causing availability issues).
  • Risk Management: Organizations should ensure that security controls are implemented correctly, operate as intended, and meet their security requirements. Since all the IT infrastructure is in the cloud, this can be very difficult to achieve, generating higher risks.
  • Architectures and attacks:
    • Shared technology: The implementation of virtualization technologies creates new attack surfaces, since these technologies are susceptible to vulnerabilities.
    • Virtual networks: Traffic over virtual networks may not be visible to security protection devices on the physical network, such as network-based intrusion detection and prevention systems.
    • Insecure APIs: APIs are available to Customers for service management and might represent risks if they are not secured.
    • Data loss or leakage: All service models are subject to data loss or leakage due to different factors like operational failures, missing encryption keys, data destruction challenges, disaster recovery issues, etc.
  • Availability: The possibility exists for a cloud provider to experience problems, like bankruptcy or facility loss, which affect service for extended periods or cause a complete shutdown.
  • Attacks to/from the Cloud:
    • Account or Service Hijacking: All service models are subject to Account / Service hijacking due to different factors like Phishing, Fraud, vulnerability exploits, etc. Once an attacker has access to credentials, they can easily manipulate data, access information or even contact the Customer’s clients.
    • Abuse of Cloud Computing: The illusion of unlimited compute power, network and storage capacity gives spammers, malicious code authors and other criminals the possibility of conducting their activities with more power and even anonymity.
    • Denial of Service: The dynamic provisioning of a cloud in some ways simplifies the work of an attacker to cause harm. Denial of service attacks can occur against internally accessible services, such as those used in cloud management.
What recommendations are there to move to Cloud Computing Services?

There are many discussions about the best recommendations to take into account when moving to Cloud Computing services. However, here are some that can be followed:

  • Understand the various types of laws and regulations that impose security and privacy obligations on the organization and potentially impact cloud computing initiatives, particularly those involving data location, privacy and security controls, and electronic discovery requirements.
  • Incorporate mechanisms into the contract that allow visibility into the security and privacy controls and processes employed by the cloud provider and their performance over time.
  • Specify into the contract the need for a risk management program that is flexible enough to adapt to the continuously evolving and shifting risk landscape.
  • Understand the underlying technologies the cloud provider uses to provision services, including the implications of the technical controls involved on the security and privacy of the system, with respect to the full lifecycle of the system and for all system components.
  • Understand virtualization and other software isolation techniques that the cloud provider employs, and assess the risks involved.
  • Evaluate the suitability of the cloud provider’s data management solutions for the organizational data concerned.
  • Ensure that during an intermediate or prolonged disruption or a serious disaster, critical operations can be immediately resumed and that all operations can be eventually reinstituted in a timely and organized manner.


By Agustin Chernitsky
Information Security specialist.
