Disclaimer: This dissertation has been written by a student and is not an example of our professional work.

Any opinions, findings, conclusions, or recommendations expressed in this dissertation are those of the authors and do not necessarily reflect the views of UKDiss.com.

Incident Handling in Cloud Computing

Info: 18115 words (72 pages) Dissertation
Published: 11th Jan 2022




Cloud Computing

Cloud computing gives people a way to share distributed resources and services that belong to different organizations or sites. Because cloud computing allocates these shared resources over open networks, it raises security issues that must be addressed as cloud applications expand.

NIST describes cloud computing as a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or provider interaction. Cloud computing is still regarded as a relatively new computing paradigm. It permits the use of computing infrastructure at one or more levels of abstraction, offered as an on-demand service over the Internet at low cost. The implications of its high elasticity and availability help explain why cloud computing has been attracting so much attention recently.

Cloud computing services benefit from the economies of scale achieved through flexible use of shared resources, specialization of labour, and other efficiencies.

However, cloud computing is an emerging form of distributed computing that is still in its infancy.

The term is used with many levels of explanation and analysis, and much has been written about cloud computing and its definition. The aim here is to identify its major usage paradigms and to provide a common classification of the concepts and the significant details of the services.

A public cloud is one in which the infrastructure and other computational resources are made available to the general public over the Internet. It is owned and operated by a cloud provider selling cloud services and is, by definition, external to the subscribing organizations. At the other end of the range is the private cloud, in which the computing environment is operated exclusively for a single organization. It may be managed by the organization or by a third party, and may be hosted within the organization's data centre or outside of it. A private cloud gives the organization greater control over the infrastructure and computational resources than a public cloud does.

Two other deployment models lie between private and public clouds: the community cloud and the hybrid cloud. A community cloud is similar to a private cloud, except that its infrastructure and computational resources are shared by several organizations with common privacy, security, and regulatory considerations, rather than serving a single organization exclusively.

A hybrid cloud is a composition of two or more clouds (private, community, or public) that remain unique entities but are bound together by standardized or proprietary technology that enables interoperability. As with the other deployment models, the choice affects the organization's scope of control over its computing environment and should be weighed accordingly.

Three well-known and frequently-used service models are the following:

Software-as-a-Service. Software-as-a-Service (SaaS) is an on-demand software service in which the user accesses the required software through an intermediate client, such as a browser, over the Internet. The software platform and relevant files are stored centrally. It drastically reduces the total cost of software for the user, who incurs no infrastructure costs such as hardware installation, maintenance, or operating costs. Subscribers are given only limited control over the software, such as preference selection and administrative settings; they have no control over the underlying cloud infrastructure.

Platform-as-a-Service. Platform-as-a-Service (PaaS) is an on-demand platform delivery model in which the subscriber is provided with a complete software platform for developing and deploying software. It also results in considerable savings, since the subscriber does not have to buy and manage the complicated hardware and software components needed to support a software development platform. The cloud provider tailors the special-purpose development environment to the subscriber's specific needs and grants the subscriber enough control to support smooth software development.

Infrastructure-as-a-Service. Infrastructure-as-a-Service (IaaS) is an on-demand infrastructure delivery service that provides a host of computing servers, software, and network equipment. This infrastructure is used to establish a platform on which to develop and execute software. The subscriber can cut costs to a bare minimum by avoiding purchases of hardware and software components, is given considerable flexibility in choosing infrastructure components, and controls the greatest share of the security features among the three models.
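The division of control across the three service models can be sketched as a small lookup table. This is an illustrative model only, not any provider's actual API; the layer names follow the five-layer view discussed in the text, and the exact boundary between provider and subscriber control varies between real offerings.

```python
# Illustrative sketch: which conceptual layers the subscriber can configure
# under each service model. Layer names follow the five-layer cloud view in
# the text; the provider/subscriber boundary shown here is an assumption,
# not any specific vendor's contract.

LAYERS = ["facility", "hardware", "virtualized infrastructure",
          "platform architecture", "application"]

# The provider controls everything up to (and including) the named layer;
# the subscriber controls whatever lies above it.
PROVIDER_BOUNDARY = {
    "SaaS": "application",
    "PaaS": "platform architecture",
    "IaaS": "virtualized infrastructure",
}

def subscriber_layers(model: str) -> list:
    """Return the layers left under the subscriber's control for a model."""
    boundary = LAYERS.index(PROVIDER_BOUNDARY[model])
    return LAYERS[boundary + 1:]

if __name__ == "__main__":
    for model in ("SaaS", "PaaS", "IaaS"):
        print(model, "-> subscriber controls:", subscriber_layers(model))
```

The table reproduces the ordering claimed above: a SaaS subscriber configures nothing below the application it consumes, while an IaaS subscriber retains the widest span of control.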

The figure illustrates the differences in scope and control between the cloud subscriber and the cloud provider.

The central diagram shows the five conceptual layers of a cloud environment, which apply to public clouds and the other deployment models.

The arrows at the left and right of the diagram denote the approximate range of the cloud provider’s and user’s scope and control over the cloud environment for each service model.

The cloud subscriber's extent of control over the system is determined by the level of support provided by the cloud provider: the higher the level of support, the lower the subscriber's scope and control. The two lower layers of the diagram show the physical elements of the cloud environment, which are under the full control of the cloud provider regardless of the service model.

The facility layer, the lowest layer, comprises heating, ventilation and air conditioning (HVAC), power, communications, and other aspects of the physical plant, while the hardware layer comprises the network, storage, and other physical computing infrastructure elements.

The remaining layers denote the logical elements of a cloud environment.

The virtualized infrastructure layer comprises software components such as hypervisors, virtual machines, virtual data storage, and the supporting middleware needed to set up a capable infrastructure on which an efficient computing platform can be established.

While virtual machine technology is commonly used at this layer, other means of providing the necessary software abstractions are not precluded. Similarly, the platform architecture layer entails compilers, libraries, utilities, and other software tools and development environments needed to implement applications. The application layer represents deployed software applications targeted towards end-user software clients or other programs, and made available via the cloud.

IaaS and PaaS are closely related services, and the difference between them is somewhat vague. They are essentially distinguished by the kind of support environment offered, the level of support, and the allocation of control between cloud subscriber and cloud provider.

The main thrust of cloud computing is not limited to serving a single organization; it also extends to providing a vehicle for outsourcing parts of that environment to an outside party as a public cloud.

As with any outsourcing of information technology services, concerns exist about the implications for system security and privacy.

The main issue centres on the risks associated with moving important applications or data from within the confines of the organization's computing centre to that of another company (i.e., a public cloud) that is readily available to the general public.

Reducing cost and increasing efficiency are the chief motivations for moving towards a public cloud, but reduced responsibility for security should not be among them. Ultimately, the organization remains accountable for the security of its outsourced services: monitoring and addressing the security problems that arise, along with related issues such as performance and availability, stays with the organization. Because cloud computing brings with it new security challenges, it is essential for an organization to oversee and administer how the cloud provider secures the computing environment and to obtain assurances of safety.


An event is any observable occurrence in a system or network. Events include a user connecting to a file, a server receiving a request for a Web page, a user sending electronic mail, and a firewall blocking a connection attempt. Adverse events are those with negative consequences, such as system crashes, network packet floods, unauthorized use of system privileges, unauthorized access to sensitive data, and execution of malicious code that destroys data. A computer security incident is a violation, or an imminent threat of violation, of computer security policy, acceptable use policies, or standard security practices. The terminology for these incidents helps the small business owner understand service and product offerings.

Denial of Service- An attacker directs hundreds of external compromised workstations to send as many ping requests as possible to a business network, swamping the system.

Malicious Code- A worm is able to quickly infect several hundred workstations within an organization by taking advantage of a vulnerability that is present in many of the company’s unpatched computers.

Unauthorized Access- An attacker runs a piece of “evil” software to gain access to a server’s password file. The attacker then obtains unauthorized administrator-level access to a system and the sensitive data it contains, either stealing the data for future use or blackmailing the firm for its return.

Inappropriate Usage- An employee provides illegal copies of software to others through peer-to-peer file sharing services, accesses pornographic or hate-based websites or threatens another person through email.
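The denial-of-service example above can be illustrated with a small detection sketch: count echo requests per source over a monitoring window and flag any source that exceeds a threshold. The threshold value, record layout, and addresses below are invented for illustration; real detection would parse live packet captures rather than a pre-built list.

```python
from collections import Counter

# Hypothetical flood-detection sketch. Input is a pre-parsed list of
# (source_ip, packet_type) tuples for one monitoring window; real tooling
# would read these from a capture interface.

THRESHOLD = 100  # echo requests per window treated as a flood (illustrative)

def flag_flood_sources(packets, threshold=THRESHOLD):
    """Return the set of source IPs whose echo-request count exceeds threshold."""
    counts = Counter(src for src, ptype in packets if ptype == "echo-request")
    return {src for src, n in counts.items() if n > threshold}

# Example window: one host sends 150 requests, another sends 3.
window = ([("203.0.113.5", "echo-request")] * 150
          + [("198.51.100.7", "echo-request")] * 3)
print(flag_flood_sources(window))
```

In the distributed attack described above, the same counting would be applied per compromised workstation, which is why rate-based thresholds alone are a coarse defence.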

Incident Handling:

Incident handling can be divided into six phases: preparation, identification, containment, eradication, recovery, and follow-up.
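The six phases above form an ordered sequence, which can be modeled as a minimal state tracker. The phase names come from the list in the text; everything else in this sketch is illustrative.

```python
from enum import Enum

# The six incident-handling phases from the text, modeled as an ordered enum
# so a handler record can only advance through them in sequence.

class Phase(Enum):
    PREPARATION = 1
    IDENTIFICATION = 2
    CONTAINMENT = 3
    ERADICATION = 4
    RECOVERY = 5
    FOLLOW_UP = 6

def next_phase(current: Phase) -> Phase:
    """Advance to the next phase; follow-up is terminal."""
    if current is Phase.FOLLOW_UP:
        return current
    return Phase(current.value + 1)

if __name__ == "__main__":
    p = Phase.PREPARATION
    while True:
        print(p.name)
        if p is Phase.FOLLOW_UP:
            break
        p = next_phase(p)
```

A real incident-tracking system would attach timestamps and evidence to each transition, but the ordering constraint itself is the point of the model.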

Step 1: Preparation: In the heat of the moment, when an incident has been discovered, decision-making may be haphazard, so plans and procedures should be prepared in advance.



Step 4: Eradication: Remove the cause of the incident and locate the most recent clean backup (to prepare for system recovery).

Step 5: Recovery: This phase ensures that the system is returned to a fully operational status. The following steps should be taken in the recovery phase: Restore the system.

Authenticate the machine

Once the machine has been re-established, verify that the operation was successful and that the system is back to its normal behaviour. The organisation can decide whether to keep the system offline while monitoring is set up and patches are installed.

Monitor the system.

Once the system is back online, continue to monitor it for backdoors that escaped detection.

Step 6: Follow-Up: This stage is significant for documenting what happened, communicating the lessons learned, and reducing the likelihood of future incidents.

Prepare a detailed incident report and distribute copies to management, the operating unit's IT Security Officer, and the Department of Commerce's IT Security Program Manager. Submit any recommended changes to management.

Implement the approved actions.


If the organization has a post-incident lessons learned process, they may want the cloud vendor to be involved in this process. What agreements will the organization need with the cloud provider for the lessons learned process? If the cloud provider has a lessons learned process, does management have concerns regarding information reported or shared relating to the organization? The cloud vendor will not be able to see much of the company’s processes, capabilities or maturity. The company may have concerns regarding how much of its internal foibles to share. If there are concerns, get agreement internally first, then negotiate them, if possible, and have them written into the contract. If the vendor will not or cannot meet the customer’s process requirements, what steps will the organization need to take?

An IH team collects and analyzes incident process metrics for trend and process improvement purposes. Like any other organization, the cloud provider will be collecting objective and subjective information regarding IH processes. As NIST points out, this data serves a variety of purposes, including justifying additional funding of the incident response team. Will the organization need this IH process metric data from the provider to enable a complete understanding of the integration area in case the organization ever needs to bring the cloud function back in-house? Will the organization need this data for reporting and process improvement in general? The data is also used to understand trends in attacks targeting the organization. Would the lack of this attack trend data leave the organization unacceptably exposed to risk? Determine what IH process metric data is required by the team and write it into the contract.
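The metric analysis described here can be sketched with a simple calculation such as mean time to containment over a set of incident records. The record layout and timestamps below are hypothetical; a real team would pull these fields from its ticketing system or request them from the cloud provider.

```python
from datetime import datetime, timedelta

# Hypothetical incident records; field names are illustrative, not from any
# specific ticketing system or cloud provider's reporting format.
incidents = [
    {"detected": datetime(2021, 3, 1, 9, 0),
     "contained": datetime(2021, 3, 1, 11, 30)},
    {"detected": datetime(2021, 3, 8, 14, 0),
     "contained": datetime(2021, 3, 8, 14, 45)},
]

def mean_time_to_containment(records) -> timedelta:
    """Average the detected-to-contained interval across incident records."""
    total = sum((r["contained"] - r["detected"] for r in records), timedelta())
    return total / len(records)

print(mean_time_to_containment(incidents))
```

Tracking such a metric over time, both in-house and from the provider, is what makes the trend analysis discussed above possible.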

The organization will need to decide if they require provisions with the cloud provider regarding their evidence retention policies. Will the vendor keep the evidence long enough to meet the organization’s requirements? If not, will the organization need to bring the cloud vendor’s evidence in-house? Will the vendor allow the customer to take custody of the evidence? If the vendor retains the evidence longer than the customer policies dictate does this work create risk for the customer? If so, what recourse does the customer have? Legal counsel will need to provide direction in this area in order to ensure compliance with laws for all jurisdictions.


Cloud computing has built on industry developments dating from the 1980s by leveraging outsourced infrastructure services, hosted applications and software as a service (Owens, 2010). Taken individually, the techniques used are not new.

Yet, in aggregate, it is something very different. The differences provide both benefits and problems for the organization integrating with the cloud. The addition of elasticity and pay-as-you-go to this collection of technologies makes cloud computing compelling to CIOs in companies of all sizes.

Cloud integration presents unique challenges to incident handlers as well as to those responsible for preparing and negotiating the contract for cloud services. The challenges are further complicated when there is a prevailing perception that the cloud integration is "inside the security perimeter", or that because the organization has stated in writing that the agreement requires the supplier to be secure, this must be sufficient.

This sort of thinking may be naïve but, unfortunately, it is not rare. The cloud provider may have a great deal of built in security or they may not. Whether they do or not, incident handling (IH) teams will eventually face incidents related to the integration, necessitating planning for handling incidents in this new environment.

The impacts of cloud integration warrant a careful analysis by an organization before implementation. An introduction of a disruptive technology such as cloud computing can make both definition and documentation of services, policies, and procedures unclear in a given environment. The IH team may find that it is helpful to go through the same process that the team initially followed when establishing their IH capability.

Security Incident

The term 'security incident' used in this guideline refers to any incident related to information security. It refers to information leakage that would be undesirable to the interests of the Government, or an adverse event in an information system and/or network that poses a threat to computer or network security in respect of availability, integrity and confidentiality. Incidents caused by natural disaster, power failure or data line failure, however, are not within the scope of this guideline and should be addressed by the system maintenance and disaster recovery plan.

Examples of security incidents include: unauthorized access, unauthorized utilization of services, denial of resources, disruption of services, compromise of protected data / program / network system privileges, leaks of classified data in electronic form, malicious destruction or modification of data / information, penetration and intrusion, misuse of system resources, computer viruses and hoaxes, and malicious codes or scripts affecting networked systems.

Security Incident Handling

Security incident handling is a set of continuous processes governing the activities before, during and after a security incident occurs. Security incident handling begins with planning and preparing the resources, and developing proper procedures to be followed, such as the escalation and security incident response procedures.

When a security incident is detected, the responsible parties make a security incident response following the predefined procedures. The security incident response comprises the actions carried out to handle the incident, chiefly to restore normal operations.

Specific incident response teams are usually established to perform the tasks of making security incident response.

When the incident is over, follow up actions will be taken to evaluate the incident and to strengthen security protection to prevent recurrence. The planning and preparation tasks will be reviewed and revised accordingly to ensure that there are sufficient resources (including manpower, equipment and technical knowledge) and properly defined procedures to deal with similar incidents in future.

Cloud Service

The outlook on cloud computing services can vary significantly among organizations because of inherent differences in their missions, the assets they hold, their exposure to risk, the risks they face and their risk tolerance.

For example, a government organization that mainly handles data about individual citizens of the country has different security objectives than a government organization that does not. Similarly, the security objectives of a government organization that prepares and disseminates information for public consumption are different from one that deals mainly with classified information for its own internal use. From a risk perspective, determining the suitability of cloud services for an organization is not possible without understanding the context in which the organization operates and the consequences from the plausible threats it faces.

The set of security objectives of an organization, therefore, is a key factor for decisions about outsourcing information technology services and, in particular, for making sound decisions about moving organizational resources to a public cloud, about the particular cloud provider selected, and about the service arrangements for the organization.

What works for one organization may not work for another.

Beyond that, there are pragmatic considerations: many organizations cannot afford economically to protect all computing resources and assets to the highest degree possible, and must prioritize available options based on cost as well as criticality and sensitivity.

While keeping the strong advantages of public cloud computing in mind, it is indispensable to focus on security. The organization's security objectives are of major concern, so that future decisions can be made accordingly. Ultimately, the decision on cloud computing rests on a risk analysis of the trade-offs involved.

Service Agreements

Specifications for public cloud services and service arrangements are generally called Service Level Agreements (SLAs). An SLA represents the understanding between the cloud subscriber and the cloud provider about the expected range of services, delivered at defined levels, and the remedies available should the provider fail to deliver at those levels. Typically, an SLA forms one part of a broader set of documents, in particular the overall services contract or service agreement.

The terms of service cover other important details such as licensing of services, criteria for acceptable use, provisional suspension, limitations of liability, security policies, and modifications to the terms of service over time.

For the purposes of this report, the term SLA is used to refer to the service agreement in its entirety. Two types of SLA exist: the predefined, non-negotiable contract and the negotiated agreement.

Non-negotiable contracts are in many ways the basis for the economies of scale enjoyed by public cloud computing. Their terms are prescribed fully by the cloud provider and, for some offerings, the provider also retains the ability to change them. Negotiated SLAs are more like traditional information technology outsourcing contracts.

These SLAs can be employed to address a corporation's concerns about technical controls, procedures, security procedures and privacy policy, such as the vetting of employees, data ownership and exit rights, isolation of tenant applications, data encryption and segregation, tracking and reporting service effectiveness, compliance with laws and regulations (e.g., the Federal Information Security Management Act), and the deployment of appropriate products following international or national standards (e.g., Federal Information Processing Standard 140-2 for cryptographic modules).

A negotiated SLA may be what an agency requires for critical data and applications. However, it is less cost effective because of the inherent cost of negotiation, which can significantly disturb and diminish the economies of scale that a non-negotiable SLA brings to public cloud computing. The outcome of a negotiation also depends on the size of the corporation and the magnitude of influence it can exert.

Irrespective of the type of SLA, it is necessary to obtain pertinent legal and technical advice to make sure the terms of service meet the needs of the organization.
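One concrete piece of SLA arithmetic worth checking during that review is how much downtime a stated availability percentage actually permits. The percentages below are illustrative examples, not figures drawn from any particular provider's terms.

```python
# Convert a promised availability percentage into the downtime it permits
# per month. The 30-day month and the sample percentages are illustrative
# assumptions, not taken from any specific SLA.

def allowed_downtime_minutes(availability_pct: float,
                             days_in_month: int = 30) -> float:
    """Minutes of downtime per month consistent with the stated availability."""
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

if __name__ == "__main__":
    for pct in (99.0, 99.9, 99.99):
        print(f"{pct}% availability -> "
              f"{allowed_downtime_minutes(pct):.1f} min/month")
```

The gap between 99% (over seven hours a month) and 99.99% (a few minutes) illustrates why the availability clause deserves the same scrutiny as the security terms.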

The Security Upside

While the biggest obstacle facing public cloud computing is security, the cloud computing paradigm also provides opportunities for out-of-the-box thinking to improve the overall security of the corporation. Small corporations stand to gain the most from cloud computing services, as they have limited staff and infrastructure with which to compete with bigger organizations on technology and economies of scale.

Potential areas of improvement where organizations may derive security benefits from transitioning to a public cloud computing environment include the following:

Staff Specialization

Just like corporations with large-scale computing facilities, cloud providers provide an opportunity for staff to specialize in security, privacy, and other areas of high interest and concern to the organization. Increases in the scale of computing induce specialization, which in turn allows security staff to shed other duties and concentrate exclusively on security issues. Through increased specialization, there is an opportunity for staff members to gain in-depth experience, take remedial actions, and make security improvements more readily than would otherwise be possible with a diverse set of duties.

Platform Strength. The structure of cloud computing platforms is typically more uniform than that of most traditional computing centers. Greater uniformity and homogeneity facilitate platform hardening and enable better automation of security management activities like configuration control, vulnerability testing, security audits, and security patching of platform components. Information assurance and security response activities also profit from a uniform, homogeneous cloud infrastructure, as do system management activities, such as fault management, load balancing, and system maintenance. Many cloud providers meet standards for operational compliance and certification in areas like healthcare (e.g., the Health Insurance Portability and Accountability Act (HIPAA)), finance (e.g., the Payment Card Industry Data Security Standard (PCI DSS)) and audit (e.g., Statement on Auditing Standards No. 70).
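The configuration-control automation that a homogeneous platform enables can be sketched as a baseline diff: compare each host's reported settings against one hardened baseline and report deviations. The setting names and fleet data below are hypothetical and exist only to illustrate the idea.

```python
# Illustrative configuration-control sketch: diff host settings against one
# hardened baseline. Setting names and host data are hypothetical.

BASELINE = {
    "ssh_root_login": "no",
    "tls_min_version": "1.2",
    "auto_patch": "on",
}

def config_drift(host_settings: dict) -> dict:
    """Return settings deviating from the baseline (missing counts as drift)."""
    return {key: host_settings.get(key)
            for key, expected in BASELINE.items()
            if host_settings.get(key) != expected}

fleet = {
    "web-01": {"ssh_root_login": "no", "tls_min_version": "1.2",
               "auto_patch": "on"},
    "web-02": {"ssh_root_login": "yes", "tls_min_version": "1.2"},
}

if __name__ == "__main__":
    for host, settings in fleet.items():
        print(host, config_drift(settings))
```

On a uniform platform the same baseline applies fleet-wide, which is precisely why homogeneity makes this kind of automation practical.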

Resource Availability. The scalability of cloud computing facilities permits the greatest consideration of resource availability. Redundancy and disaster recovery capabilities are built into the cloud computing environment, and on-demand resource capacity can be used for better resilience when facing increased service demands or distributed denial-of-service attacks, and for quicker recovery from serious incidents.

When an incident occurs, an opportunity also exists to capture data more readily, in greater detail and with less impact on production. On the other hand, such resilience can have mixed results: for instance, even an unsuccessful distributed denial-of-service attack can quickly consume resources.

Backup and Recovery.

The backup and recovery policies and procedures of a cloud service may be superior to those of the organization and, if copies are maintained in diverse geographic locations, may be more robust. Data stored within the cloud can be readily available, easy to store, and highly reliable compared with data maintained in a traditional data centre; in such circumstances, cloud services could serve as a means for offsite backup data storage. The main constraints are network performance over the Internet and the amount of data involved, which can impede restoration. The architecture of a cloud solution extends to the client at the service endpoints used to access hosted applications. Cloud clients can be browser-based or application-based; since the main computational resources are held by the cloud provider, clients are generally lightweight computationally and easily supported on laptops, notebooks and netbooks, as well as embedded devices such as smart phones, tablets and personal digital assistants.

Information Awareness

Information maintained and processed in the cloud can present a lower risk to the organization than the same information scattered across an assortment of systems: portable systems and transportable media are out in the field, where loss and theft of devices occur frequently. Many organizations have already made the transition to supporting access to organizational information from such devices.

In addition, cloud platforms can serve as an alternative to in-house hosting of applications, with some public cloud services focused specifically on providing security and safety beyond that of other computing environments.

Data Centre Oriented Services

Cloud services can be used to improve the security of data centres. For example, e-mail can be redirected to a cloud provider via mail exchange (MX) records, where it is examined and analysed collectively with similar transactions from other data centres to discover widespread spam, phishing and malware campaigns, and to carry out corrective actions more comprehensively than any single organization could. Researchers have also demonstrated a prototype system for provisioning cloud-based antivirus services, which can outperform host-based antivirus solutions. Cloud reverse-proxy solutions exist that enable confidential access to a SaaS environment while keeping the data stored in that environment in encrypted form. Cloud-based identity management services also exist, which can be used to augment or replace an organization's directory service for identification and authentication of cloud users.

The Security Downside

Besides its many potential benefits for security and privacy, public cloud computing also brings with it areas of concern, compared with computing environments maintained in traditional data centres. Some of the more fundamental concerns include the following.

System Complexity. A public cloud computing environment is extremely complex compared with a traditional data centre. The many components that make up a public cloud result in a large attack surface.

Besides the components for general computing, such as deployed applications, virtual machine monitors, guest virtual machines, data storage and supporting middleware, there are also components that comprise the management backplane, such as those for self-service, resource metering, data replication and recovery, workload management and cloud bursting.

Cloud services themselves may also be realized through nesting and layering with services from other cloud providers.

Components also change over time as upgrades and feature improvements occur, complicating matters further. Security depends not only on the correctness and effectiveness of many components, but also on the interactions among them. The number of possible interactions between components increases as the square of the number of components. Complexity typically relates inversely to security: greater complexity gives rise to greater vulnerability.
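The quadratic growth of component interactions can be made concrete with a toy calculation (illustrative only, not from the source text): counting the distinct pairs of components that can interact.

```python
# Toy illustration: pairwise interactions among n components grow on the
# order of n squared, so the interaction surface outpaces component count.
def pairwise_interactions(n: int) -> int:
    """Distinct pairs of components that can interact: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(n, pairwise_interactions(n))
```

Doubling the number of components roughly quadruples the number of interactions that must be secured.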

Shared Multi-tenant Environment. Public cloud services have a serious underlying complication: subscriber organizations typically share components and resources with other subscribers that are unknown to them. Threats to network and computing infrastructures continue to increase each year and have become more sophisticated.

Having to share an infrastructure with unknown outside parties can be a major drawback for some applications, and requires a high level of assurance in the strength of the security mechanisms used for logical separation. While not unique to cloud computing, logical separation is a non-trivial problem that is exacerbated by the scale of cloud computing. Access to organizational data and resources could inadvertently be exposed to other subscribers through a configuration or software error. An attacker could also pose as a subscriber to exploit vulnerabilities from within the cloud environment and gain unauthorized access.

Internet-facing Services. Public cloud services are delivered over the Internet, exposing both the administrative interfaces used for self-service and the interfaces through which users and applications consume other available services. Applications and data previously confined to the organization's intranet, once moved to the cloud, face increased risk from network threats that were previously defended against at the perimeter of the organization's intranet, and from new threats that target the exposed interfaces.

The result is somewhat analogous to the inclusion of wireless access points within an organization's intranet in the early days of that technology, when remote administrative access was the main means of configuring the devices and had to be carefully protected. A move to a public cloud similarly requires ceding a measure of control over the data to the cloud provider.

A couple of noteworthy instances have already occurred that give a sense of what might be expected in the future.

Botnets. In several ways, botnets assembled and controlled by hackers are an early form of cloud computing: low cost, dynamic provisioning, redundancy and many other characteristics of cloud computing apply to them. Botnets have mainly been used to send spam and launch denial of service attacks against websites, and they could likewise be used to launch a denial of service attack against the infrastructure of a cloud provider. There is also the possibility that a cloud service itself could be subverted: in 2009, a botnet command and control node was found operating from within an IaaS cloud.

Mechanism Cracking. WiFi Protected Access (WPA) Cracker, a cloud service ostensibly for penetration testing, is an example of harnessing cloud resources on demand to recover the passphrase protecting a wireless network. A task that would take more than five days to run on a single system reportedly takes only 20-30 minutes on a cluster of 500-600 virtual machines. Because cryptographic keys used for authentication can be attacked in the same fashion, cloud services make many such brute-force attacks feasible and cost-effective. CAPTCHA cracking is another area where cloud services could be applied, to bypass verification meant to thwart abusive use of Internet services by automated software.

Data Protection

Data stored in the cloud typically resides in a shared environment collocated with data from other customers. An organization moving sensitive and regulated data into the cloud must therefore account for the means by which access to the data is controlled and the data is kept secure.

Data Isolation

Data can take many forms. For example, for cloud-based application development, it includes the application programs, scripts, and configuration settings, along with the development tools. For deployed applications, it includes records and other content created or used by the applications, as well as account information about the users of the applications. Access controls are one means to keep data away from unauthorized users; encryption is another. Access controls are typically identity-based, which makes authentication of the user's identity an important issue in cloud computing.
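The identity-based model described above can be sketched as a minimal access-control check (all names and resources here are hypothetical): it is the authenticated identity, not the origin of the request, that decides whether access is granted.

```python
# Hypothetical sketch of an identity-based access-control check.
ACL = {  # resource -> set of identities allowed to read it
    "/tenant-a/report.csv": {"alice@tenant-a", "bob@tenant-a"},
    "/tenant-b/report.csv": {"carol@tenant-b"},
}

def can_read(identity: str, resource: str) -> bool:
    """Grant access only if the authenticated identity is on the ACL."""
    return identity in ACL.get(resource, set())

print(can_read("alice@tenant-a", "/tenant-a/report.csv"))  # → True
print(can_read("alice@tenant-a", "/tenant-b/report.csv"))  # → False
```

The sketch also shows why authentication matters so much: if the identity can be spoofed, the entire control collapses.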

Database environments used in cloud computing can vary significantly. For example, some environments support a multi-instance model, while others support a multi-tenant model. The former provide a unique database management system running on a virtual machine instance for each cloud subscriber, giving the subscriber complete control over role definition, user authorization, and other administrative tasks related to security. The latter provide a predefined environment for the cloud subscriber that is shared with other tenants, typically through tagging data with a subscriber identifier. Tagging gives the appearance of exclusive use of the instance, but relies on the cloud provider to establish and maintain a sound secure database environment.
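The multi-tenant "tagging" model above can be sketched with an in-memory SQLite table (table and column names are illustrative): every row carries a subscriber identifier, and every query is scoped by it.

```python
import sqlite3

# Sketch of the multi-tenant tagging model: all tenants share one table,
# and each query is scoped by a subscriber (tenant) identifier.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (tenant_id TEXT, payload TEXT)")
conn.executemany("INSERT INTO records VALUES (?, ?)",
                 [("t1", "alpha"), ("t2", "beta"), ("t1", "gamma")])

def records_for(tenant_id):
    # The WHERE clause is the entire isolation boundary; omitting it
    # (a software mistake) would expose other subscribers' rows.
    cur = conn.execute("SELECT payload FROM records WHERE tenant_id = ?",
                       (tenant_id,))
    return [row[0] for row in cur]

print(records_for("t1"))  # tenant t1 sees only its own rows
```

This makes concrete why the tenant must rely on the provider: isolation rests on disciplined query construction rather than on physically separate databases.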

Various types of multi-tenant arrangements exist for databases. Each arrangement pools resources differently, offering different degrees of isolation and resource efficiency. Other considerations also apply.

For example, certain features like data encryption are only viable with arrangements that use separate rather than shared databases. These sorts of tradeoffs require careful evaluation of the suitability of the data management solution for the data involved. Requirements in certain fields, such as healthcare, would likely influence the choice of database and data organization used in an application. Privacy sensitive information, in general, is a serious concern.

Data must be secured while at rest, in transit, and in use, and access to the data must be controlled. Standards for communications protocols and public key certificates allow data transfers to be protected using cryptography. Procedures for protecting data at rest are not as well standardized, however, making interoperability an issue due to the predominance of proprietary systems. The lack of interoperability affects the availability of data and complicates the portability of applications and data between cloud providers.

Currently, the responsibility for cryptographic key management falls mainly on the cloud service subscriber. Key generation and storage is usually performed outside the cloud using hardware security modules, which do not scale well to the cloud paradigm. NIST's Cryptographic Key Management Project is identifying scalable and usable cryptographic key management and exchange strategies for use by government, which could help to alleviate the problem eventually. Protecting data in use is an emerging area of cryptography with little practical results to offer, leaving trust mechanisms as the main safeguard.
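Subscriber-side key handling, as described above, can be sketched with the standard library alone (a simplified illustration, not a substitute for a hardware security module): the key is generated and kept outside the cloud, and only data plus an integrity tag are stored remotely.

```python
import secrets, hmac, hashlib

# Sketch of subscriber-side key management: the key is generated
# client-side and never uploaded; the cloud holds only data and tags.
key = secrets.token_bytes(32)  # generated and stored outside the cloud

def tag(data: bytes) -> str:
    """Integrity tag computed by the subscriber before storing data."""
    return hmac.new(key, data, hashlib.sha256).hexdigest()

stored = b"quarterly figures"
stored_tag = tag(stored)

# On retrieval, the subscriber verifies the data was not altered.
print(hmac.compare_digest(stored_tag, tag(stored)))  # → True
```

Real deployments would add encryption of the data itself; the point here is only that key custody stays with the subscriber.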

Data Sanitization. The data sanitization practices that a cloud provider implements have obvious implications for security. Sanitization is the removal of sensitive data from a storage device in various situations, such as when a storage device is removed from service or moved elsewhere to be stored. Data sanitization also applies to backup copies made for recovery and restoration of service, and also residual data remaining upon termination of service. In a cloud computing environment, data from one subscriber is physically commingled with the data of other subscribers, which can complicate matters. For instance, many examples exist of researchers obtaining used drives from online auctions and other sources and recovering large amounts of sensitive information from them. With the proper skills and equipment, it is also possible to recover data from failed drives that are not disposed of properly by cloud providers.
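A naive single-pass overwrite, the classic sanitization gesture, can be sketched as follows. Note the important caveat: on SSDs and on virtualized cloud storage, overwriting a file does not guarantee the underlying physical blocks are erased, which is precisely why sanitization is harder in a commingled cloud environment.

```python
import os, secrets, tempfile

# Illustrative single-pass overwrite before deletion (NOT sufficient for
# SSDs or virtualized storage, where blocks may be remapped).
def overwrite_and_delete(path):
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(secrets.token_bytes(size))  # replace contents with noise
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"sensitive data")
    tmp = f.name
overwrite_and_delete(tmp)
print(os.path.exists(tmp))  # → False
```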

Incident Response

As the name implies, incident response involves an organized method for dealing with the consequences of an attack against the security of a computer system. The cloud service provider plays a vital role in incident response activities, which include data collection, verification, analysis, containment, and restoration of the system after a fault is detected. Before transitioning conventional applications and data to a cloud computing environment, it is very important for an organization to revise its company-wide incident response plan to accommodate the gaps in business processes created by the cloud environment.

Collaboration between the service subscriber and provider in recognizing and responding to an incident is essential to security and privacy in cloud computing. The complexity of the service can obscure recognition and analysis of incidents. For example, it reportedly took one IaaS provider approximately eight hours to recognize and begin taking action on an apparent denial of service attack against its cloud infrastructure, after the issue was reported by a subscriber of the service. Understanding and negotiating the provisions and procedures for incident response should be done before entering a service contract, rather than as an afterthought. The geographic location of data is a related issue that can impede an investigation, and is a relevant subject for contract discussions.

Response to an incident should be handled in a way that limits damage and reduces recovery time and costs. Being able to convene a mixed team of representatives from the cloud provider and service subscriber quickly is an important facet to meeting this goal. Remedies may involve only a single party or require the participation of both parties. Resolution of a problem may also affect other subscribers of the cloud service. It is important that cloud providers have a transparent response process and mechanisms to share information with their subscribers during and after the incident.

Some of the threats in cloud computing

1. Abuse and Nefarious Use of Cloud Computing

IaaS providers offer their customers the illusion of unlimited compute, network, and storage capacity, often coupled with a frictionless registration process that requires only a simple form and a credit card before a user can begin using cloud services. To encourage adoption, cloud providers also offer free trial periods. These weak registration checks sometimes embolden spammers and hackers to misuse the system and perform illegal activities without fear of identification.

PaaS providers have traditionally suffered most from attacks lodged by hackers and spammers; however, it has lately been observed that IaaS vendors are faring no better. New classes of attack continue to emerge, including building rainbow tables, key cracking, botnet command and control, and hosting of malicious data.


Criminals continue to leverage new technologies to improve their reach, avoid detection, and improve the effectiveness of their activities. The main reasons cloud providers are prominent targets for spammers are their relatively weak registration processes and limited fraud detection capabilities.


IaaS offerings have hosted the Zeus botnet, InfoStealer trojan horses, and downloads for Microsoft Office and Adobe PDF exploits. IaaS servers have also been favoured destinations for implementing command and control functions. To deal with spam, which has been the biggest problem faced by IaaS providers, blacklisting of blocks of IP addresses belonging to IaaS networks has been used as a defensive measure.


Remediation includes: stricter initial registration and validation processes; better credit card fraud monitoring and coordination; comprehensive introspection of customer network traffic; and monitoring of public blacklists for one's own network blocks.
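The blacklist check mentioned above can be sketched with the standard `ipaddress` module (the CIDR blocks below are documentation-range examples, not real blocklists): a mail gateway testing whether a source address falls inside a blocklisted network range.

```python
import ipaddress

# Sketch of checking an address against blocklisted network ranges.
# These CIDR blocks are made-up examples from documentation ranges.
BLOCKED_NETWORKS = [ipaddress.ip_network(n)
                    for n in ("203.0.113.0/24", "198.51.100.0/25")]

def is_blacklisted(addr):
    """True if the address falls inside any blocklisted range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in BLOCKED_NETWORKS)

print(is_blacklisted("203.0.113.7"))  # → True
print(is_blacklisted("192.0.2.1"))    # → False
```

In practice such checks are usually done against public DNS blocklists; the containment test itself is the same.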

2. Insecure Interfaces and APIs

This threat concerns the exposure of the critical software interfaces and APIs that customers use to manage and interact with cloud services.

Provisioning, management, orchestration and monitoring are all performed using these interfaces. The security and availability of general cloud services therefore depend on the security of these basic APIs.

From authentication and access control to encryption and activity monitoring, these interfaces must be designed to protect against both accidental and malicious attempts to circumvent policy.

These interfaces are often built upon by organizations and third parties to provide value-added services to customers. This introduces another layer of complex APIs, along with the possibility that an organization may be required to relinquish its credentials to the third party.


It is very important for consumers to understand the security consequences of the management and usage of cloud services, even if providers do everything they can to integrate security into their cloud service models.

Various issues related to availability, accountability and integrity arise if a weak set of interfaces and APIs is relied upon.


Examples include anonymous access and/or reusable tokens or passwords; clear-text authentication or transmission of content; inflexible access controls or improper authorizations; limited monitoring and logging capabilities; and unknown service or API dependencies.


Remediation: understand the security model of the cloud provider's interfaces, and ensure that strong authentication and access controls are in place along with encrypted transmission.

Understand the dependency chain associated with the API.
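One way to avoid the anonymous access and long-lived reusable tokens criticized above is an expiring signed token. The sketch below (names and format are illustrative, not any specific provider's API) embeds an expiry time in the token and signs it with HMAC.

```python
import hmac, hashlib, time, secrets

# Sketch of an expiring, signed API token instead of a reusable password.
SECRET = secrets.token_bytes(32)  # server-side signing key (illustrative)

def issue_token(user, ttl=300):
    """Issue 'user:expiry:signature', valid for ttl seconds."""
    expires = str(int(time.time()) + ttl)
    msg = f"{user}:{expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{user}:{expires}:{sig}"

def verify_token(token):
    """Reject tampered signatures and expired tokens."""
    user, expires, sig = token.rsplit(":", 2)
    msg = f"{user}:{expires}".encode()
    good = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, good) and time.time() < int(expires)

t = issue_token("alice")
print(verify_token(t))             # → True (valid, unexpired)
print(verify_token(t[:-1] + "x"))  # → False (tampered signature)
```

Even this sketch shows the dependency chain concern: every service that validates the token must share, and protect, the signing secret.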

3. Malicious Insiders

The threat of a malicious insider is well-known to most organizations.

The convergence of IT services and customers under a single managerial domain, combined with a general lack of transparency in provider processes and policies, has heightened the threat of the malicious insider.

For example, a provider may not disclose the level of access employees have to physical and virtual assets, how employees are screened, or how reports and policies are produced and reviewed.

Sometimes even the hiring process for cloud employees is not disclosed completely. All this lack of transparency and clarity in operations creates an attractive opportunity for an adversary to steal confidential corporate or national documents, with minimal risk of detection.


The impact that malicious insiders can have on an organization is considerable, given their level of access and ability to infiltrate organizations and assets. Various other ways by which a malicious insider can impact an operation include financial impact, productivity losses and brand damage. With the expanding usage of cloud services by organization, threat of human element needs a deep thought. It is very important for consumer of cloud services to understand and establish the steps taken by the providers to deal with the threat of malicious insider.


Remediation: enforce strict supply chain management and carry out a comprehensive supplier assessment.

Specify human resource requirements as part of legal contracts. Require transparency into overall information security and management practices, as well as compliance reporting. Determine security breach notification processes.

4. Shared Technology Issues

IaaS vendors deliver their services in a scalable way by sharing infrastructure.

However, the underlying components were generally not designed to offer strong isolation properties in a multi-tenant architecture. This gap is addressed by a virtualization hypervisor that mediates between guest operating systems and the physical compute resources.

Still, flaws exhibited by hypervisors have allowed guest operating systems to gain inappropriate levels of control over the underlying platform. A defence-in-depth strategy is required to enforce and monitor proper security measures. Customers should be shielded from one another's operations through robust compartmentalization, and no customer should have access to another client's private data.


Attacks have surfaced in recent years that target the shared technology inside cloud computing environments. A central issue is that disk partitions, CPU caches and other shared components were not originally designed for strong compartmentalization. As a consequence, attackers focus on gaining unauthorized access to the data of other customers.


Examples include Joanna Rutkowska's Red and Blue Pill exploits and Kostya Kortchinsky's CloudBurst presentations.


Remediation: implement security best practices for installation and configuration.

Monitor the environment for unauthorized changes and activity. Promote strong authentication and access control for administrative access and operations.

Enforce service level agreements for patching and vulnerability remediation. Conduct vulnerability scanning and configuration audits.
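Monitoring for unauthorized changes, as recommended above, often takes the form of file integrity checking: record a baseline hash per file and flag any file whose contents later differ. A minimal sketch (file names are illustrative):

```python
import hashlib, os, tempfile

# Sketch of file integrity monitoring: baseline hashes, then diff.
def snapshot(paths):
    """Map each path to a SHA-256 digest of its contents."""
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed(baseline, paths):
    """Paths whose current digest differs from the baseline."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline[p]]

with tempfile.TemporaryDirectory() as d:
    cfg = os.path.join(d, "guest.cfg")
    with open(cfg, "w") as f:
        f.write("isolation=strict\n")
    baseline = snapshot([cfg])
    with open(cfg, "w") as f:  # simulate an unauthorized edit
        f.write("isolation=off\n")
    altered = changed(baseline, [cfg])
    print(altered)  # the modified file is flagged
```

Production tools add tamper-resistant storage for the baseline itself, since an attacker who can rewrite the baseline defeats the check.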

5. Data Loss or Leakage

Data can be compromised in many ways.

Deletion or alteration of records without a backup of the original content is an obvious example. Unlinking a record from its larger context may render it unrecoverable, as can storage on unreliable media. Loss of an encryption key may result in the effective destruction of critical data.

Finally, sensitive data must be kept out of the reach of unauthorized parties.

The risk of data compromise increases many-fold in cloud computing, owing to the inherent characteristics of the infrastructure deployed as part of a cloud environment.

Data loss or unwanted leakage can have a devastating impact on a business.

A loss can have both tangible and intangible impacts.


While tangible impacts consist of financial damage and staff turnover, intangible impacts range from diminished brand reputation to loss of morale and trust among employees, partners, and customers. Intangible impacts can have severe financial repercussions. The severity of the impact depends directly on the type of data that is stolen.

The severity of legal ramifications and compliance violations will likewise depend on the type of data.


Contributing factors include: insufficient authentication, authorization, and auditing (AAA) controls; inconsistent use of encryption and software keys; operational failures; persistence and remanence challenges; disposal challenges; risk of association; jurisdiction and political issues; data centre reliability; and disaster recovery.


Remediation: implement strong API access control.

Data in transit should be encrypted and protected for integrity. Data protection should be analysed at both design time and run time. Implement strong key generation, storage, management, and destruction practices.

Service providers should be made contractually liable to wipe persistent media before releasing it back into the pool.

Contractually specify provider backup and retention strategies.

6. Account or Service Hijacking

Account or service hijacking is not new.

Age-old attack methods such as phishing are still quite successful in achieving the desired results for hackers and spammers, and their impact is amplified by the reuse of credentials and passwords. Cloud services add new risks for the client: any leakage of credentials can give attackers the power to manipulate or steal important data and to control the client's access to its online services. Hackers and spammers may also use the power of the company's brand to fool customers and gain an illegitimate advantage.
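One standard defence against the credential theft and reuse described above is to store only salted, slow password hashes, so a leaked record cannot be replayed as a password. A stdlib-only sketch using PBKDF2 (parameters are illustrative):

```python
import hashlib, hmac, os

# Sketch of salted, slow password hashing to blunt credential theft.
def hash_password(password, salt=None):
    """Return (salt, PBKDF2-SHA256 digest); salt is random per user."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse")
print(check_password("correct horse", salt, digest))  # → True
print(check_password("wrong guess", salt, digest))    # → False
```

Hashing does not stop phishing itself, but it limits what an attacker gains from a stolen credential database.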


Account and service hijacking, usually with stolen credentials, remains a top threat. The integrity, confidentiality and availability of cloud services are compromised when an attacker gains illegitimate access to a deployed cloud computing infrastructure.

A company should be well aware of the common techniques used by hackers and spammers, and should be prepared with a defence-in-depth strategy to contain the loss and damage arising from any such attack.


Remediation: prohibit the sharing of account credentials between users and services.

Enforce strong two-factor authentication techniques wherever feasible. Employ proactive monitoring to detect unauthorized activity. Understand the cloud provider's security policies and SLAs.
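The two-factor step recommended above is commonly implemented as time-based one-time passwords. Below is a minimal TOTP sketch in the style of RFC 6238 (the shared seed is made up, and real deployments should use a vetted library):

```python
import hmac, hashlib, struct, time, base64

# Minimal TOTP sketch (RFC 6238 style): a 6-digit code derived from a
# shared secret and the current 30-second time window.
def totp(secret_b32, at, step=30, digits=6):
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(at) // step)   # time window index
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = base64.b32encode(b"shared-2fa-seed!").decode()
print(totp(secret, time.time()))          # code the user's device shows
print(totp(secret, 0) == totp(secret, 29))  # → True (same 30 s window)
```

Because the code changes every window, a phished code is useless moments later, which is what blunts the credential-reuse attacks described in this section.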

7. Unknown Risk Profile

Cloud services reduce the burden of hardware and software ownership and maintenance, allowing companies to concentrate on their core business strengths. These advantages must be carefully analysed and weighed against the conflicting security concerns which, if left unaddressed, can have serious ramifications for the company, its customers and the business as a whole. Overall security posture should be assessed with various factors in mind, such as code updates, vulnerability profiles and security practices.

Questions such as the following often go unanswered: What information is available about the provider's internal security procedures, configuration hardening, patching, auditing and logging? How and where are logs and related data stored, and who has access to them? What information will the provider disclose in the event of a security incident? When such questions are overlooked, customers are left with an unknown risk profile that may conceal serious threats.


IRS asked Amazon EC2 to perform a C&A; Amazon refused.

Heartland Data Breach: the payment processing system used by Heartland was not only vulnerable but also infected with malicious software. Even so, Heartland was unwilling to make any extra effort to notify consumers about the breach, complying only with bare-minimum state laws that were insufficient to protect confidential user data.


Remediation: disclosure of applicable logs and data; partial or full disclosure of infrastructure details (e.g., patch levels, firewalls, etc.).

Monitoring and alerting on necessary information

Literature Review

Cloud computing is a new computing model. According to International Data Corporation (IDC) report, security is ranked first among challenges of the cloud model. In a perfect security solution, monitoring mechanisms play an important role. In the new model, security monitoring has not been discussed yet. Here we identified a few steps for studying security monitoring mechanisms in the cloud computing model. First, existing security monitoring mechanisms should be reviewed. These mechanisms are either part of commercial solutions or proposed by open communities. Second, top threats to cloud computing should be analyzed. In this step, we will go through new challenges in the new computing model. Third, current security monitoring mechanisms would be evaluated against new challenges which are caused by the new model.

Security Monitoring Mechanisms

Due to the increase in organized crime and insider threats, proactive security monitoring is crucial nowadays. Moreover, in order to design an effective security monitoring system, a variety of challenges must be taken into account. As examples we can mention: shortcomings in understanding the threat ecosystem, handling large numbers of incidents, cooperation among interested parties and their privacy concerns, product limitations, etc.

This section starts by reviewing our method for discussing monitoring mechanisms. Then, we study security monitoring approaches from two different categories: commercial solutions and open communities' solutions. It should be noted that no single solution or mechanism exists for monitoring all kinds of threats. Different environments and threats impose a variety of requirements, and each requirement is addressed by a group of monitoring techniques.

Conventionally cloud providers are not willing to disclose their security mechanisms. They justify these behaviors in different ways. First of all, by disclosing security functions, their competitors may utilize same mechanisms and reduce benefits of the origin company. Moreover, many companies still believe in security through obscurity. With regard to these types of problems, we reviewed security monitoring mechanisms from not only commercial solutions, but also open communities which are doing research in this field. In this analysis, we focus more on that part of monitoring mechanisms which help us to cover new security challenges in the cloud model.

Commercial Solutions

We studied security solutions in the cloud model proposed by Amazon, Google, RackSpace and Microsoft. In this study, we started by reviewing white-papers and documents for each of those commercial solutions. Then we tried to communicate with the security teams of each of them, to understand more about their monitoring mechanisms. This communication was the least successful part, because they were not willing to give out more information than what is publicly available. In some cases, like RackSpace, open-source projects or open communities exist which may help more in the analysis of their solutions. We will continue by going through some of those providers.


In the following, we highlight products and functions in the Amazon cloud environment which may help us in designing a proper security monitoring solution.


Amazon CloudWatch is a web service that provides monitoring for cloud components. These components are resource utilization, operational issues (request count and request latency on Elastic Load Balancing (ELB)), and overall demand patterns. It is designed to provide comprehensive monitoring for Amazon Elastic Compute Cloud (EC2), Amazon ELB and Amazon Relational Database Service (RDS). CloudWatch can be used to retrieve statistical data. Later, these data can be utilized to demonstrate availability parameters, such as mean up-time and mean time between failures.
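The availability figures mentioned above can be derived from monitoring data. The following is an illustrative calculation only (not an actual CloudWatch API call, and the incident numbers are made up): computing downtime, mean time between failures, and availability from a list of failure intervals.

```python
# Illustrative availability maths from hypothetical incident data.
# Each tuple is (time_failed, time_restored) in hours since monitoring began.
incidents = [(100.0, 100.5), (400.0, 401.0), (700.0, 700.2)]
observed = 1000.0  # total hours monitored

downtime = sum(end - start for start, end in incidents)
uptime = observed - downtime
mtbf = uptime / len(incidents)      # mean time between failures
availability = uptime / observed    # fraction of observed time up

print(round(downtime, 1), round(mtbf, 1), round(availability, 4))
```

A real deployment would pull these intervals from CloudWatch metrics or alarms rather than a hard-coded list.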

Vulnerability Reporting Process

This process is used when someone finds a vulnerability in any Amazon Web Services (AWS) product.

Penetration Testing Procedure

As penetration testing is hardly distinguishable from a real attack, Amazon has established a policy whereby customers request permission to conduct penetration testing. This policy helps the AWS security monitoring service face fewer false-positive alarms. Moreover, penetration testing conducted by a variety of cloud customers reveals useful information for understanding the ecosystem of security threats in the new model. Cloud providers should coordinate these tests to learn more about the threat ecosystem, as well as about possible security breaches in their own infrastructure.

Security Bulletins

"AWS tries to notify customers of security and privacy events using Security Bulletins." Cloud customers monitor new vulnerabilities and change of policies using this service. As an example, we can refer to AmazonPayments Signature Validation a case on 22nd of September 2010. In this incident, vulnerability has been identified in the sample code for application-side signature validation.

CatbirdTM Vulnerability Monitoring

Vulnerability monitoring is part of the Catbird vSecurity product, which provides security solutions for cloud environments. Catbird vulnerability management offers the following functionality: auditing, continuous compliance, incident response, hybrid vulnerability monitoring and IDS/IPS, with a performance-enhancing implementation.


Security monitoring at Google has three main targets: the internal network, employee actions on Google systems, and outside knowledge of vulnerabilities. At many points across their global network, internal traffic is inspected for suspicious behaviour, using a combination of open-source and commercial tools. They also analyse system logs to identify unusual activity by their employees. In addition, the security team checks security bulletins for incidents which may affect Google's services. On top, a correlation system coordinates the monitoring process among the various technologies. Google did not disclose any technical information about its monitoring mechanisms or security functions, but the internal security breach of July 2010 suggests those mechanisms were not working well enough to detect such an incident: one of Google's Site Reliability Engineers (SREs) was dismissed for breaking internal privacy policies by accessing users' accounts.


RackSpace started an open-source project called OpenStack, contributing the code for its Cloud Files and Cloud Servers technology. NASA also joined the project with its Nebula platform, which will be merged into the Cloud Servers technology and become the compute component of OpenStack.

Microsoft Azure

Microsoft has a security frame for sharing security knowledge. Ten categories are introduced in that frame: Auditing and Logging, Authentication, Authorization, Communication, Configuration Management, Cryptography, Exception Management, Sensitive Data, Session Management, and Validation.

Based on these categories and their descriptions, "Auditing and Logging" is the one most relevant to security monitoring.

Auditing and Logging explains how security-related events are recorded, monitored, audited, exposed, compiled and partitioned across multiple cloud instances.
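The "compiled and partitioned across multiple cloud instances" idea can be sketched as structured audit logging: each security event is emitted as a JSON record tagged with a cloud instance id, so records from many instances can later be aggregated and split by instance. Field names below are illustrative, not Microsoft's schema.

```python
import io, json, logging

# Sketch of structured audit logging, tagged by cloud instance id.
stream = io.StringIO()  # stands in for a central log sink
logger = logging.getLogger("audit-demo")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(stream)
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
logger.propagate = False

def audit(instance_id, event, user):
    """Record one security-related event as a JSON line."""
    logger.info(json.dumps({"instance": instance_id,
                            "event": event, "user": user}))

audit("i-0001", "login_failure", "alice")
audit("i-0002", "policy_change", "admin")

records = [json.loads(line) for line in stream.getvalue().splitlines()]
print(len(records), records[0]["event"])
```

Because each record is self-describing JSON, downstream tooling can partition the combined stream back out by the `instance` field.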

Open Communities

Importance of open source solutions

Open-source solutions and open communities are crucial in the cloud computing model; they address many security challenges in this model. Open-source platforms that are compatible with the interfaces of commercial solutions (e.g. the Amazon EC2 APIs) help consumers avoid data lock-in.

Moreover, building a hybrid cloud becomes easier by means of open source platforms. These open source platforms have public interfaces which are compatible with interfaces in other cloud environments. As an example for compatible interfaces we can refer to Eucalyptus APIs which are compatible with Amazon EC2 APIs. This compatibility provides the flexibility for cloud customers; so they can export data or processes to another cloud, when it is needed.

Additionally, open-source platforms and open communities can lead to a bigger ecosystem, which is useful for studying threats. A threat study has at least two phases: first, analyzing the ecosystem for possible security breaches; second, verifying proposed security solutions to make sure that they satisfy the constraints.

However, open-source implementations of cloud software are not the only contribution of open communities. Many open projects do not focus on software development for the cloud model but work on other aspects of the new model, including: common interfaces and namespaces used for standardization of communications in the cloud model (e.g. the CloudAudit open project on automating Audit, Assertion, Assessment, and Assurance); and promoting a common level of understanding and knowledge about different properties of cloud computing (e.g. the Cloud Security Alliance research about the top threats to a cloud environment).

Finally, to emphasize the urge toward openness, we repeat a quote by Christofer Hoff: "The security industry is not in the business of solving security problems that don't have a profit/margin attached to it". The fact is that the cloud model is not mature yet, and companies will not focus on a specific area until enough benefits exist for them. Open communities, on the other hand, develop different perspectives of the cloud model without looking for large financial benefit. This helps to explore the new model in depth and to introduce new ideas that may not interest industry unless specific challenges arise.

Standards and open source solutions


CloudAudit is a set of interfaces and namespaces that allows cloud providers to automate Audit, Assertion, Assessment, and Assurance of their different service models for authorized users.

Cloud Security Alliance (CSA)

CSA is a non-profit organization that develops effective ways of bringing security into the cloud computing model, and of using cloud computing services to secure other types of computing models. It has eight working groups that work on different aspects of cloud security. In the following we mention some of the groups that are relevant to designing proper monitoring mechanisms.

  1. Group 1: Architecture and Framework
  2. Group 2: GRC, Audit, Physical, BCM, DR
  3. Group 5: Identity and Access Mgt, Encryption & Key Mgt
  4. Group 6: Data Center Operations and Incident Response
  5. Group 8: Virtualization and Technology Compartmentalization

Distributed Management Task Force (DMTF)

DMTF's Open Cloud Standards Incubator aims to design interoperable cloud management among service providers, customers and developers, which helps avoid the lock-in challenge. It has produced two standards: Interoperable Cloud and Architecture for Managing Clouds.

Open Cloud Computing Interface Working Group (OCCI-WG)

The OCCI-WG works on provisioning, monitoring and definition of cloud infrastructure services. Its solution mainly fulfills three requirements: interoperability, portability and integration in the Infrastructure as a Service (IaaS) model. The solution also addresses the lock-in problem in the cloud.

OASIS Identity in the Cloud (IDCloud) TC

They develop standards for identity deployment, provisioning and management. They also provide use cases which are useful for risk and threat analysis.

Eucalyptus, OpenNebula and OpenStack are the three main open-source platforms in cloud computing. Each of them provides a variety of features and functionality, but their main focus is how to convert an existing pool of hardware resources into an IaaS provider. They share a common characteristic: all are compatible with the Amazon EC2 interfaces. Platforms are not the only type of software developed in open-source projects; for example, Zenoss is an open-source monitoring tool which is compatible with the new concepts in the cloud computing model.
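As a concrete illustration of this interface compatibility, the sketch below builds the same EC2 Query-style request URL against two different endpoints, so only the base URL distinguishes providers. The Eucalyptus endpoint URL is a made-up placeholder, and the authentication/signature parameters that real EC2 Query requests require are omitted; this is an illustration of the idea, not a working client.

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;

// Sketch: the same Query-style request works against any EC2-compatible
// endpoint (Amazon EC2, Eucalyptus, ...); only the base URL changes.
public class Ec2CompatibleRequest {

    // Build a Query API URL for the given endpoint and action.
    public static String buildUrl(String endpoint, String action, Map<String, String> params) {
        StringBuilder sb = new StringBuilder(endpoint).append("?Action=").append(action);
        // TreeMap gives a deterministic parameter order
        for (Map.Entry<String, String> e : new TreeMap<>(params).entrySet()) {
            sb.append('&').append(e.getKey()).append('=')
              .append(URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> p = new TreeMap<>();
        p.put("InstanceId.1", "i-12345678");
        // The same call targets two providers; only the endpoint differs.
        System.out.println(buildUrl("https://ec2.amazonaws.com", "DescribeInstances", p));
        System.out.println(buildUrl("https://my-eucalyptus.example.org:8773/services/Eucalyptus",
                                    "DescribeInstances", p));
    }
}
```

Because the request shape is identical, a customer can point existing tooling at another compatible endpoint instead of rewriting it, which is exactly how these platforms ease migration.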

Security Challenges

Threat Specifications

Our two main interests in finding threats to the cloud are:

"Providing a needed context to assist organizations in making educated risk management decisions regarding their cloud adoption strategies."

Utilizing effective monitoring mechanisms and introducing new ones to fulfill requirements in the cloud environment.

The threat model in the cloud has a number of novelties. First, in addition to data and software, activity patterns and business reputation should be protected. Moreover, a longer trust chain must be accepted, due to the multiple service models (Software as a Service, Platform as a Service and Infrastructure as a Service) and their possible combinations. Parties in this trust chain will need mutual auditability: stakeholders demand it in order to have some degree of assurance about the other parties. Another novelty concerns availability: the same failure in cloud computing will have a more catastrophic effect than a failure in the traditional computing model.

It is worth keeping these novelties in mind while analyzing threats in the new model. According to the Cloud Security Alliance research on top threats, they can be identified as follows:

  1. Abuse and Nefarious Use of Cloud Computing
  2. Insecure Application Programming Interfaces
  3. Malicious Insiders
  4. Shared Technology Vulnerabilities
  5. Data Loss/Leakage
  6. Account, Service & Traffic Hijacking
  7. Unknown Risk Profile

Abuse and Nefarious Use of Cloud Computing, as a top threat to cloud computing, is the one we study here. First, abusive activities must be clearly specified; for instance, it should be defined from whose perspective a behavior is called abusive or nefarious. To achieve that, we may identify three stakeholders in the cloud computing model: cloud provider, cloud customer and end user. Relations between these stakeholders are complicated, and this is one of the novelties of the cloud computing threat model. In fact, these relations have a crucial effect on mitigating this threat.

As an illustration, cloud customers may abuse services which they are paying for; hosting a phishing website is one example. In this case, both the cloud provider and end users face threats caused by this behavior. In addition, end users or clients of cloud customers can also misuse services provided for them, which causes trouble for both the cloud provider and cloud customers: for instance, hosting illegal data on a storage service that utilizes IaaS as its infrastructure. In both cases, communication between the different stakeholders plays a vital role in mitigating the threat. Moreover, the interests of stakeholders are not necessarily aligned, so clashes can occur.

Different abuse cases can be itemized as follows:

Anonymous Communication using cloud services for nefarious purposes.

Running a The Onion Router (Tor) exit node.

Botnet activity

- Command and control hosting

- Bot hosting

Sending email spam or posting spam into forums

Hosting harmful or illegal content:

- Site advertised in spam

- Host for unlicensed copyright-protected material

- Phishing website

- Malware host

Attack source:

- Intrusion attempts

- Exploit attacks (SQL injections, remote file inclusions, etc.)

- Credit card fraud

- Port scanning

Excessive web crawling

Open proxy

New Security Challenges

For an exhaustive list of vulnerabilities and risks to cloud computing, see the European Network and Information Security Agency (ENISA) report on cloud computing risk assessment.

  1. Cloud customers, who provide a service for end users, should assure their clients that their data is safe. Consequently, cloud customers must know which of the cloud provider's staff have enough privileges to access cloud customers' data. Security monitoring mechanisms in the new model should provide functionality that helps cloud customers to trust the cloud provider's staff without revealing too much information about personnel.
  2. Data location and incompatible laws. This is a new challenge, because in previous computing models the location of service providers' storage was clear. In the cloud model, by contrast, storage and computing facilities are distributed over a number of regions. Now imagine a country with restrictive laws that do not allow companies to store their data outside the country's borders. In this case, monitoring mechanisms should keep track of data location. Such mechanisms highly depend on cloud providers' cooperation and on common interfaces among providers and customers. Moreover, cloud customers may need to ensure data privacy for their clients, while cloud providers must obey their government's regulations on disclosing data for lawful interception. This is one of the points of conflict between cloud customers and cloud providers from different regions; as an illustration, consider the conceptual conflicts between the USA PATRIOT Act and PIPEDA (Personal Information Protection and Electronic Documents Act) in Canada or the Data Privacy Protection Directive in the EU. For a specific system, the corresponding security monitoring approach must identify these conflicts and let the customer decide whether to use a particular cloud service. Additionally, end users of cloud customers' services must be informed about these details by the security mechanisms in each layer of the cloud model.
  3. Reputation isolation (fate-sharing). Cloud stakeholders' activities and behaviors affect each other's reputation. For instance, in the Amazon EC2 IP-address blacklisting incident, if a monitoring agent had been attached to each VM instance and a correlation system had existed on the underlying layer, the cloud provider could have differentiated the instances whose activities were suspicious of spamming from the others.
  4. Incident handling. Incidents happen in different layers of the cloud model, and each layer may be operated by a different authority. Handling an incident needs not only cooperation among all authorities, but also policies and procedures for mitigating the incident. These policies and procedures should be introduced in the security monitoring solution; stakeholders and authorities will apply these guidelines to handle the incident in the best fashion and reduce the degradation of services. Defining the policies and procedures is the challenging part. As an example, a cloud customer should have access to log files which contain any traces of the incident, yet the privacy of other customers must be protected; additionally, the investigation of one cloud customer should not affect the performance of other customers. One real case is the FBI raid on two data centers in Texas, in which the investigators powered off the whole data center.
  5. Data lock-in. In case of a major security breach in the cloud infrastructure, customers should be able to migrate to another cloud infrastructure smoothly. A complete monitoring solution should check the compatibility of cloud service interfaces with standard interfaces to make sure that the migration will happen as it is supposed to.
  6. Data deletion. File deletion has been a concern in all distributed systems, but it becomes more challenging in cloud computing. The monitoring mechanisms used to track data location are also useful for the file-deletion challenge: the same marking and tracking mechanisms can be used for hierarchical multi-label data marking, so cloud providers can keep track of data among all backup files and distributed storage.
  7. Mutual auditability. Stakeholders need to be sure of each other's trustworthiness. A collaborative monitoring mechanism in each cloud layer is crucial for this purpose, and these collaborative mechanisms should communicate through a common interface among layers.
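The reputation-isolation scenario in item 3 above can be sketched in code: a hypothetical correlation system aggregates events from per-VM monitoring agents and flags only the instances whose outbound SMTP volume exceeds a threshold, instead of letting one spamming instance blacklist the provider's whole IP range. The class and method names, and the threshold, are illustrative assumptions, not any provider's actual mechanism.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy correlation system: count outbound SMTP connections per VM instance
// and single out only the suspicious instances.
public class SpamCorrelator {

    private final Map<String, Integer> smtpCount = new HashMap<>();

    // Called by a (hypothetical) per-VM monitoring agent for each outbound SMTP connection.
    public void reportSmtpConnection(String vmId) {
        smtpCount.merge(vmId, 1, Integer::sum);
    }

    // VMs above the threshold are suspicious; the rest keep their reputation.
    public List<String> suspiciousVms(int threshold) {
        List<String> out = new ArrayList<>();
        for (Map.Entry<String, Integer> e : smtpCount.entrySet())
            if (e.getValue() > threshold) out.add(e.getKey());
        return out;
    }

    public static void main(String[] args) {
        SpamCorrelator c = new SpamCorrelator();
        for (int i = 0; i < 500; i++) c.reportSmtpConnection("vm-spammer");
        c.reportSmtpConnection("vm-normal");
        // Only the high-volume instance is reported, isolating the damage.
        System.out.println("suspicious: " + c.suspiciousVms(100));
    }
}
```

The point of the design is the granularity: because evidence is kept per instance, the provider can act on one tenant without degrading the reputation of its neighbors.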

Evaluation of Mechanisms against Threats

Considering the extracted threat specifications and new security challenges, we try to find weaknesses in existing mechanisms. By identifying weaknesses and their features, it becomes possible to find proper monitoring techniques that fulfill the security monitoring requirements of the cloud computing model. Commercial clouds are one type of closed environment, and monitoring mechanisms must be changed to fulfill the requirements of the new model. The lack of an ecosystem for monitoring-solution providers is a major obstacle to developing new solutions for new challenges.

The new concepts behind cloud computing impose constraints on monitoring mechanisms, and some of these constraints cannot be met by existing monitoring mechanisms. On-demand access and data perimeters are among these new concepts.

Elasticity and on-demand access in the cloud model are the root of some incompatibilities. As an example, scaling up and down is not completely supported by current monitoring techniques. Moreover, the definition, or even the existence, of perimeters is not the same as before, so security solutions cannot simply put guards at communication channels to control everything. This requires exhaustive research and development to add elasticity to solutions and to control data at the remaining perimeters.

Another concern is the compliance of monitoring activities with legal constraints. Monitoring mechanisms should be flexible, so that customers can choose from a set of compatible mechanisms according to their concerns and environmental constraints. Security mechanisms are not yet mature enough to support reputation isolation; to cover this shortcoming, human interaction is required in some monitoring decisions. Human interaction in decision making is not scalable and can become a bottleneck; a real-life example is Amazon EC2's whitelisting procedure for email-sending instances.

Current Security model of the cloud computing

In order to achieve security in a cloud computing system, several technologies have been used to build its security mechanisms. Cloud computing security can be provided as security services: security messages and secured messages can be transported, understood, and manipulated by standard Web services tools and software. This mechanism is a good choice because Web service technology is well established in the network-computing environment.

Although the current mechanisms for cloud computing security have many merits, there are still some disadvantages. For example, there is a shortage of hardware mechanisms to support trusted computing in cloud computing systems; the trusted root in the cloud computing environment has not been defined clearly; the creation and protection of certificates are not secure enough for cloud computing environments; and performance drops noticeably when cryptographic computations are performed. There is also a lack of mechanisms to register and classify the participants carefully, such as tracing and monitoring them. In the following section, we analyze the challenges for cloud computing security in depth.

The challenge for the security in cloud computing

In a cloud computing environment, many users participate in the CLOUD, and they join or leave it dynamically; the same holds for other resources in the environment. Users, resources, and the CLOUD should establish trustful relationships among themselves and be able to deal with these dynamic changes.

The CLOUD includes distributed users and resources from distributed local systems or organizations, which have different security policies. For this reason, how to build a suitable trust relationship among them is a challenge. In fact, the security requirements in a cloud computing environment have several aspects, including confidentiality.

Research Methodology

Trusted Computing Technology

In recent years, increased reliance on computer security, and the unfortunate lack of it, particularly in open-architecture computing platforms, have motivated many efforts by the computing industry. In 1999, HP, IBM, Compaq, Intel, and Microsoft announced the formation of the Trusted Computing Platform Alliance (TCPA), which focused on building confidence and trust in computing platforms for e-business transactions. In 2003, the Trusted Computing Group (TCG) was formed and adopted the specifications developed by the TCPA. The TCG technology strengthens the user's trust in the computer platform.

Because one of the biggest issues facing computer technology today is data security, and the problem has gotten worse as users work with sensitive information more often while the number of threats grows and hackers develop new types of attacks, many technology researchers advocate the development of trusted computing (TC) systems that integrate data security mechanisms into their core operations, rather than implementing them with add-on applications. In this concept, TC systems would cryptographically seal off the parts of the computer that deal with data and applications and give decryption keys only to programs and information that the technology judges to be trusted. The TCG made this mechanism the core criterion of its technology specification: a trusted platform should behave as expected for its operating condition and be highly resistant to subversion by application software, viruses, and a given level of physical interference. The Trusted Computing Platform (TCP) operates through a combination of software and hardware: manufacturers add some new hardware to each machine.

TCP provides two basic services, authenticated boot and encryption, which are designed to work together. An authenticated boot service monitors what operating system software is booted on the computer and gives applications a sure way to tell which operating system is running.

This is done with the help of the hardware which maintains the audit log of the boot process.

On the computer platform with TCP, the TPM is used to ensure that each computer will report its configuration parameters in a trustworthy manner. Trusted platform software stack (TSS) provides the interfaces between TPM and other system modules. The platform boot processes are augmented to allow the TPM to measure each of the components in the system (both hardware and software) and securely store the results of the measurements in Platform Configuration Registers (PCR) within the TPM.
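The measure-and-extend step described above can be illustrated in code. In the sketch below, each measured component is folded into a Platform Configuration Register as PCR_new = SHA-1(PCR_old || SHA-1(component)), so the final PCR value depends on every component and on their order; a real TPM performs this in hardware, and this class is only an illustration of the hash chain, not a TPM interface.

```java
import java.security.MessageDigest;

// Toy model of TPM PCR extension: a register is repeatedly extended with
// the hash of each measured boot component.
public class PcrExtendDemo {

    public static byte[] sha1(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-1").digest(data);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Extend the register with the measurement of one component.
    public static byte[] extend(byte[] pcr, byte[] component) {
        byte[] measurement = sha1(component);
        byte[] concat = new byte[pcr.length + measurement.length];
        System.arraycopy(pcr, 0, concat, 0, pcr.length);
        System.arraycopy(measurement, 0, concat, pcr.length, measurement.length);
        return sha1(concat);               // PCR_new = SHA-1(PCR_old || measurement)
    }

    // Simulate a boot: measure each component into the register in order.
    public static byte[] measureBoot(String[] components) {
        byte[] pcr = new byte[20];         // PCRs start zeroed at platform reset
        for (String c : components) pcr = extend(pcr, c.getBytes());
        return pcr;
    }

    public static void main(String[] args) {
        byte[] a = measureBoot(new String[] {"BIOS", "bootloader", "kernel"});
        byte[] b = measureBoot(new String[] {"BIOS", "tampered-bootloader", "kernel"});
        // Any change in any component changes the final register value.
        System.out.println("configurations differ: " + !java.util.Arrays.equals(a, b));
    }
}
```

Because the chain is order-sensitive and one-way, a verifier who knows the expected component hashes can check the reported PCR value but cannot forge a "clean" value after a tampered component has been measured.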

Trusted cloud computing System using TCP

As discussed above, the trusted computing mechanism can help to establish a secure environment. The model of trusted computing was originally designed to provide privacy and trust on the personal platform, and the trusted computing platform is the base of trusted computing. Since internet or network computing became the dominant form of computing at the end of the last century, the trusted computing model has been extended to network computing, especially to distributed systems environments. Cloud computing is a promising distributed system model and will play an important role in e-business and research environments. As web service technology has developed quickly and is used broadly, the cloud computing system could evolve into a cloud computing service that integrates cloud computing with web service technology. So we can extend the trusted computing mechanism to cloud computing service systems by integrating the TCP into the cloud computing system.

In the network computing environment, trust will go on to envision a connected, digital world in which trusted entities interact with one another in much the same way individuals and businesses interact in traditional commercial relationships. The digital space requires that parties to a transaction have faith in each other: there must be some mutually accepted intent that shall be satisfied, with their privileges confined accordingly. For true commerce automation to exist, trading partners must know what to expect from each other's systems. Trusted computing, therefore, must provide the basis for trusted transactions to occur, and trusted computing technologies must allow stakeholders to express policies and have those policies negotiated and enforced in any execution environment.

1. Authentication cloud computing environment with TCP

In a cloud computing environment, different entities can apply to join the CLOUD. The first step is then to prove their identities to the cloud computing system administration. Because cloud computing involves a large number of entities, such as users and resources from different sources, authentication is important and complicated. Considering this, we use the TCP to aid the authentication process in cloud computing.

The TCP is based on the TPM, which can resist software attacks and even hardware attacks. The TPM contains a private master key which provides protection for other information stored in the cloud computing system. Because the hardware certificate is stored in the TPM, it is hard to attack. So the TPM can provide the trust root for users.

Since users carry full information about their identity, the cloud computing system can use mechanisms to trace users and determine their origin. Because in the TCP the user's identity is proved by the user's personal key, and this mechanism is integrated in the hardware (such as the BIOS and TPM), it is very hard for a user to deceive about their identity information. Each site in the cloud computing system records its visitors' information, so by using the TCP mechanism in cloud computing, participants can be traced by the cloud computing trace mechanism.

2. Role Based Access Control Model in cloud computing environment

In a cloud computing system there are a great number of users who hope to access the cloud computing service, each with their own goals and behavior. If the cloud computing system had to deal with them one by one, it would be a great deal of work. In order to reduce the complexity of the access control model, we can classify users into several classes or groups and define access control criteria for these classes. Users should therefore first register themselves into one or more of the classes and obtain credentials to express their identities. When they access a cloud computing resource or request a cloud computing service, they present their full ID, which includes their personal identity and their classes/groups. The target environment then has a relatively simple way to control their access.

In order to reach the goal of trusted computing, users should come from a trusted computing platform and take advantage of the security mechanisms on that platform to achieve privacy and security for themselves. The user has a personal ID and secret key, such as a USB key, to obtain the right to use the TCP, and can use the decryption function to protect their data and other information.

During booting, the TC hardware calculates the cryptographic hash of the code present in the boot ROM and writes it into a tamper-resistant log. For each new block of code, a new hash is calculated and appended to the end of the tamper-resistant log. This process continues until the OS is booted completely. After booting, the tamper-resistant log is used to establish the version of the running OS. The TC also has a certifying component.

It is helpful for the TC hardware to know, via its log, what software configuration is running on a machine. TC has the capability to prove which OS version is running, and the OS can then confirm that the system has a particular configuration. If you trust TC and the OS, you can be confident that you know the application's configuration. A configuration certificate can be presented to any recipient, whether the user or a program running on another computer in the cloud computing environment, and the recipient can verify that the certificate is valid and up to date, so it knows what the machine's configuration is. This mechanism helps participants in cloud computing systems to build relationships with the parties they interact with.

The trusted computing platform's boot sequence is illustrated. The boot begins with the BIOS boot block. The root of trust for integrity reporting is anchored in the TPM, and the report can be delivered to a remote machine via the network.

By using the remote attestation function, a user on the TCP can report their identity and relevant information to the remote machine they want to access. Each target environment has a mechanism to verify the accessing entity's identity, role, and other security information. The user should bind together their personal ID used for the TCP, a standard certificate (such as X.509) obtained from the CA, and their role information, and the cloud computing system has a corresponding mechanism to verify this information for each user. Moreover, a role hierarchy is introduced to reflect inheritance of authority and responsibility among the roles: if a user has a user-role certificate showing membership in role R, and a cloud computing service requires role r, where R equals r or inherits its authority, the user should get permission. Resource owners, on the other hand, should also use this mechanism to express their identities and obtain the rights to provide their resources to other users.
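The role-hierarchy check described above can be sketched as follows. Access is granted when the held role R equals the required role r or dominates it through inheritance. The role names and the hierarchy in the example are illustrative assumptions, not part of any particular cloud system.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Minimal role hierarchy: a senior role inherits the authority of its juniors.
public class RoleHierarchy {

    // senior role -> roles whose authority it inherits
    private final Map<String, Set<String>> inherits = new HashMap<>();

    public void addInheritance(String senior, String junior) {
        inherits.computeIfAbsent(senior, k -> new HashSet<>()).add(junior);
    }

    // True if 'held' is 'required' or dominates it in the hierarchy.
    public boolean permits(String held, String required) {
        if (held.equals(required)) return true;
        for (String junior : inherits.getOrDefault(held, Set.of()))
            if (permits(junior, required)) return true;   // follow inheritance transitively
        return false;
    }

    public static void main(String[] args) {
        RoleHierarchy h = new RoleHierarchy();
        h.addInheritance("admin", "operator");   // admin inherits operator's authority
        h.addInheritance("operator", "user");
        System.out.println(h.permits("admin", "user"));    // granted transitively
        System.out.println(h.permits("user", "operator")); // juniors gain nothing
    }
}
```

In the dissertation's setting, `held` would come from the verified user-role certificate and `required` from the role the cloud service announces, so the check itself stays simple even with many users.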

When a cloud computing service announces itself to the cloud computing environment, it should state which roles it grants permission to. A user can then know whether he can access that cloud computing service before acting.

Encryption is another major mechanism in our design. This function lets data be encrypted in such a way that it can be decrypted only by a certain machine, and only if that machine is in a certain configuration. The service is built by a combination of hardware and a software application: the hardware maintains a "master secret key" for each machine and uses it to generate a unique sub-key for every possible configuration of that machine. Thus, information encrypted under one configuration cannot be decrypted under a different configuration.
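A minimal sketch of this sealing idea: a per-machine master secret and the current configuration are combined into a unique sub-key, so data sealed under one configuration cannot be recovered once the configuration changes. HMAC-SHA256 stands in for the TPM's internal key derivation and XOR stands in for real encryption; both are assumptions made for illustration only, not how an actual TPM seals data.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Toy sealed storage: derive a configuration-specific sub-key from the
// master secret, then use it as a (toy) cipher key.
public class SealingDemo {

    // Derive the configuration-specific sub-key from the master secret.
    public static byte[] subKey(byte[] masterSecret, String configuration) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(masterSecret, "HmacSHA256"));
            return mac.doFinal(configuration.getBytes());
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Toy cipher: XOR with the sub-key (the same operation seals and unseals).
    public static byte[] xor(byte[] data, byte[] key) {
        byte[] out = new byte[data.length];
        for (int i = 0; i < data.length; i++) out[i] = (byte) (data[i] ^ key[i % key.length]);
        return out;
    }

    public static void main(String[] args) {
        byte[] master = "per-machine-master-secret".getBytes();
        byte[] sealed = xor("top secret".getBytes(), subKey(master, "config-A"));

        // Only the original configuration reproduces the sub-key.
        System.out.println("same configuration:    " + new String(xor(sealed, subKey(master, "config-A"))));
        System.out.println("changed configuration: differs from plaintext = "
                + !new String(xor(sealed, subKey(master, "config-B"))).equals("top secret"));
    }
}
```

The design point is that the sub-key is never stored: it is re-derived from the master secret and the live configuration, so a tampered machine simply derives the wrong key.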

When a machine wants to join the cloud computing environment, it shows its certificate and generates session keys with its cooperators by using the unique sub-key. If the configuration of the local machine changes, the session key becomes useless. So in the distributed environment, we can use this function to transmit data to a remote machine such that the data can be decrypted only when the remote machine has a certain configuration.

The user logs into the CLOUD from the TCP, which is based on the Trusted Platform Module (TPM), and gets a certificate from a CA trusted by the cloud. When a participant wants to communicate with a remote entity, it carries all the information, including the personal ID, certificate and role information, and the traffic between them is protected by their session key.

3. Data Security in the cloud based on TCP

With the TCP, different entities can communicate in a secure way. The TCP generates random numbers and then creates session keys; random keys created by physical hardware have better security characteristics than those generated purely by software. The secure communication protocols let the system in the cloud call the TSS to use the TPM, and the TPM then provides the encryption key and session key to the communicators in cloud computing. With its computing capacity, the TPM can also take burdensome computation work off the CPU and improve performance.

Important data stored in the computer can be encrypted with keys generated by the TPM. When accessing these data, users or applications must first pass authentication with the TPM; since the encryption keys are stored in the TPM, it is hard to attack them. To prevent attacks on data integrity, the hash function in the TPM is used: the TPM checks the critical data at certain intervals to protect its integrity. The processes of encryption and integrity checking use the TSS to call the TPM's functions.
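The periodic integrity check described above amounts to storing a reference digest when data is written and re-computing it later. The sketch below uses SHA-256 in software purely as a stand-in for the TPM-backed hash, and the class name is illustrative; in the real design the reference digest would be protected by the TPM rather than held in a plain field.

```java
import java.security.MessageDigest;
import java.util.Arrays;

// Toy integrity checker: keep a reference digest and compare it against a
// freshly computed digest of the critical data.
public class IntegrityCheck {

    private byte[] reference;

    public static byte[] hash(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Store the reference digest when the data is first written.
    public void protect(byte[] data) {
        reference = hash(data);
    }

    // Re-run at intervals: a mismatch reveals tampering.
    public boolean verify(byte[] data) {
        return Arrays.equals(reference, hash(data));
    }

    public static void main(String[] args) {
        IntegrityCheck ic = new IntegrityCheck();
        ic.protect("critical cloud record".getBytes());
        System.out.println("unchanged: " + ic.verify("critical cloud record".getBytes()));
        System.out.println("tampered:  " + ic.verify("Critical cloud record".getBytes()));
    }
}
```

Note that the scheme only detects tampering; pairing it with the TPM matters because an attacker who can modify the data but not the TPM-held reference digest cannot hide the modification.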

4. The Trace of the User's Behavior

Since users carry full information about their identity, the cloud computing system can use mechanisms to trace users and determine their origin; as noted above, the TCP proves the user's identity with the user's personal key integrated in hardware, which makes identity deception very hard. Before distributed machines cooperate, they should attest their local information to the remote site. When a user logs into the cloud computing system, his identity information is recorded and verified first, and each site in the cloud computing system records its visitors' information. So if the TCP mechanism is integrated into cloud computing, the traces of the participants, including users and other resources, can be known by the cloud computing trace mechanism, and participants who behave maliciously can be tracked and punished. In order to achieve trusted computing in the cloud computing system, we need mechanisms that know not only what the participants can do, but also what they have done. So a monitoring function should be integrated into the cloud computing system to supervise the participants' behavior. In fact, reference monitors have been used in operating systems for several decades, and they will be useful in cloud computing too.
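The per-site trace record described above can be sketched as a small audit log that ties each action to a verified identity, so one participant's behavior can later be reconstructed across sites. The field names and the site/action strings are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Toy behavior trace: each site records which verified identity did what.
public class BehaviorTrace {

    public static final class Entry {
        final String userId, site, action;
        final long time;
        Entry(String u, String s, String a, long t) { userId = u; site = s; action = a; time = t; }
    }

    private final List<Entry> log = new ArrayList<>();

    // Called after the identity has been verified (e.g. via TPM-backed attestation).
    public void record(String userId, String site, String action) {
        log.add(new Entry(userId, site, action, System.currentTimeMillis()));
    }

    // All recorded actions of one participant, across every site.
    public List<String> trace(String userId) {
        List<String> out = new ArrayList<>();
        for (Entry e : log)
            if (e.userId.equals(userId)) out.add(e.site + ":" + e.action);
        return out;
    }

    public static void main(String[] args) {
        BehaviorTrace t = new BehaviorTrace();
        t.record("user-7", "siteA", "login");
        t.record("user-9", "siteA", "login");
        t.record("user-7", "siteB", "download");
        System.out.println(t.trace("user-7"));
    }
}
```

Because every entry is keyed to a hardware-verified identity rather than a self-reported name, a malicious participant cannot simply disown the actions attributed to him.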


public class NewUser extends Thread {

    NewUserFrame nuf;
    String mes;
    String cip;

    // keep a reference to the frame, the message to send and the manager's IP
    NewUser(NewUserFrame uf, String s1, String s2) {
        nuf = uf;
        mes = s1;
        cip = s2;
    }

    public void run() {
        try {
            // send the registration message to the manager (port 9000)
            DatagramSocket ds = new DatagramSocket();
            byte[] data = mes.getBytes();
            DatagramPacket dp = new DatagramPacket(data, 0, data.length, InetAddress.getByName(cip), 9000);
            ds.send(dp);

            byte[] dd = new byte[1000];
            while (true) {
                // wait for the manager's reply
                DatagramPacket dp1 = new DatagramPacket(dd, 0, dd.length);
                ds.receive(dp1);
                String d = new String(dp1.getData()).trim();
                System.out.println(" register " + d);
                if (d.startsWith("Valid")) {   // success test reconstructed; elided in the source
                    // ... handle a successful registration (elided in the source)
                } else {
                    JOptionPane.showMessageDialog(new JFrame(), "Invalid Registration");
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

import java.io.*;
import java.net.*;
import java.util.*;
import javax.swing.*;
import javax.swing.table.*;

// Client session thread: signs on to the coordinator, then loops handling
// replies. Control flow, message tags other than "Sign", and the elided loop
// bodies are reconstructed from the comments in the original listing.
public class Register extends Thread {

    RegisterFrame rf;
    DatagramSocket ds;
    DatagramPacket dp;

    Register(RegisterFrame f) {
        rf = f;
    }

    public void run() {
        try {
            // Each client listens on 7000 + its numeric id.
            int c = Integer.parseInt(rf.sid);
            int mypt = 7000 + c;
            ds = new DatagramSocket(mypt);
            String ip = InetAddress.getLocalHost().getHostAddress();
            String os = System.getProperty("os.name");
            String ver = System.getProperty("os.version");
            MainFrame mf = new MainFrame(ip, mypt, rf.pid, rf.mip);

            // Sign-on message: "Sign#<ip>#<port>#<client type>#<id>#<os>#<version>".
            String re = "Sign#" + ip + "#" + mypt + "#" + rf.ctype + "#" + rf.pid + "#" + os + "#" + ver;
            byte[] data = re.getBytes();
            dp = new DatagramPacket(data, 0, data.length, InetAddress.getByName(rf.mip), 9000);
            ds.send(dp);

            byte[] dd = new byte[1000];
            while (true) {
                DatagramPacket dp1 = new DatagramPacket(dd, 0, dd.length);
                ds.receive(dp1);
                String str = new String(dp1.getData()).trim();
                String[] req = str.split("#");

                if (req[0].equals("LoginDt")) {
                    // Enable only the tabs matching the client's role; the role
                    // test itself was elided in the original listing.
                    if (rf.ctype.equals("DataOwner")) {
                        mf.jTabbedPane1.setEnabledAt(2, false);
                        mf.jTabbedPane1.setEnabledAt(4, false);
                    } else if (rf.ctype.equals("User")) {
                        mf.jTabbedPane1.setEnabledAt(1, false);
                        mf.jTabbedPane1.setEnabledAt(3, false);
                    } else {
                        JOptionPane.showMessageDialog(new JFrame(), "Invalid User");
                    }
                } // end of LoginDt
                else if (req[0].equals("ClientDt")) {
                    // Clear and refill the client table from the "-"-separated records.
                    DefaultTableModel dm1 = (DefaultTableModel) mf.jTable1.getModel();
                    int row = dm1.getRowCount();
                    for (int i = 0; i < row; i++) {
                        dm1.removeRow(0);
                    }
                    for (int i = 1; i < req.length; i++) {
                        String[] sa = req[i].split("-");
                        Vector v = new Vector();
                        // column values copied from sa (elided in the original listing)
                        dm1.addRow(v);
                    }
                } // end of client dt
                else if (req[0].equals("UploadResult")) {
                    if (req[1].equals("Success")) {   // result field assumed
                        JOptionPane.showMessageDialog(new JFrame(), "File is Uploaded successfully");
                    } else {
                        JOptionPane.showMessageDialog(new JFrame(), "File is not Uploaded");
                    }
                } // upload result
                else if (req[0].equals("UploadDetails")) {
                    DefaultTableModel dm2 = (DefaultTableModel) mf.jTable2.getModel();
                    for (int i = 1; i < req.length; i++) {
                        String[] sa = req[i].split("-");
                        Vector v = new Vector();
                        // column values copied from sa (elided in the original listing)
                        dm2.addRow(v);
                    }
                } // upload details
                else if (req[0].equals("AccessDetails")) {
                    DefaultTableModel dm3 = (DefaultTableModel) mf.jTable3.getModel();
                    for (int i = 1; i < req.length; i++) {
                        String[] sa = req[i].split("-");
                        Vector v = new Vector();
                        // column values copied from sa (elided in the original listing)
                        dm3.addRow(v);
                    }
                } // access details
                else if (req[0].equals("ResourceDetails")) {
                    for (int i = 1; i < req.length; i++) {
                        // body elided in the original listing
                    }
                } // resource details
                else if (req[0].equals("FileData")) {
                    // Record the access in the table and save the file locally.
                    DefaultTableModel dm2 = (DefaultTableModel) mf.jTable3.getModel();
                    Vector v = new Vector();
                    // column values elided in the original listing
                    dm2.addRow(v);
                    File fe = new File("d:\\access\\" + req[2]);
                    FileOutputStream fos = new FileOutputStream(fe);
                    fos.write(req[3].getBytes());   // content field index assumed
                    fos.close();
                    JOptionPane.showMessageDialog(new JFrame(), "File is Download successfully");
                } else {
                    JOptionPane.showMessageDialog(new JFrame(), "File is not Downloaded");
                }
            } // while
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
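Every request in the listings above travels as a single "#"-delimited UDP string whose first field names the message type, as in the "Sign" message built by Register. That framing step can be shown in isolation; the helper class name below is an assumption, not part of the prototype.

```java
// Illustrative sketch of the "#"-delimited request framing used by the
// prototype: field 0 is the message tag, the rest are the payload fields.
public class RequestParser {

    // Split a raw datagram payload into its fields.
    public static String[] parse(String raw) {
        return raw.trim().split("#");
    }

    // The message tag that the receive loop dispatches on.
    public static String tag(String raw) {
        return parse(raw)[0];
    }

    public static void main(String[] args) {
        String sign = "Sign#192.168.1.5#7003#User#3#Windows 10#10.0";
        String[] req = parse(sign);
        System.out.println(tag(sign));   // prints "Sign"
        System.out.println(req[3]);      // client type field: prints "User"
    }
}
```

One consequence of this framing is that no field may itself contain "#"; the prototype relies on that convention rather than on any escaping.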

import java.io.*;
import java.net.*;
import javax.swing.*;

// Main client window: upload, browse and access actions all send "#"-delimited
// datagrams to the coordinator on port 9000. Braces, field assignments, and
// send calls missing from the original listing are reconstructed here.
public class MainFrame extends javax.swing.JFrame {

    public static String mip = "";
    public static String ctype = "";
    public static String pid = "";
    String myip;
    int myport;
    String manIP;
    String fname;
    String fpath;
    String myid;

    public MainFrame(String s1, int s2, String s3, String s4) {
        initComponents();   // generated GUI setup (assumed)
        myip = s1;
        myport = s2;
        myid = s3;
        manIP = s4;
    }

    // Upload: read the chosen file, attach the user-supplied key, and send it.
    private void jButton4ActionPerformed(java.awt.event.ActionEvent evt) {
        try {
            String key = JOptionPane.showInputDialog(new JFrame(), "Enter the Key");
            File file = new File(fpath);
            FileInputStream fis = new FileInputStream(file);
            int ch;
            String msg = "";
            while ((ch = fis.read()) != -1) {   // read loop reconstructed
                msg = msg + (char) ch;
            }
            fis.close();
            String str = "FileUpload" + "#" + myip + "#" + myport + "#" + myid + "#" + fname + "#" + key + "#" + msg;
            byte[] bt = str.getBytes();
            DatagramSocket ds = new DatagramSocket();
            DatagramPacket dp = new DatagramPacket(bt, 0, bt.length, InetAddress.getByName(manIP), 9000);
            ds.send(dp);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Browse: let the user pick the file to upload.
    private void jButton3ActionPerformed(java.awt.event.ActionEvent evt) {
        try {
            JFileChooser fc = new JFileChooser();
            if (fc.showOpenDialog(this) == JFileChooser.APPROVE_OPTION) {   // reconstructed
                fname = fc.getSelectedFile().getName();
                fpath = fc.getSelectedFile().getAbsolutePath();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    // Access: request a shared file by name, presenting the access key.
    private void jButton5ActionPerformed(java.awt.event.ActionEvent evt) {
        try {
            String fn = jComboBox2.getSelectedItem().toString().trim();
            String key = JOptionPane.showInputDialog(new JFrame(), "Enter key");
            String ms = "FileAccess" + "#" + myip + "#" + myport + "#" + myid + "#" + fn + "#" + key;
            byte[] by = ms.getBytes();
            DatagramSocket ds = new DatagramSocket();
            DatagramPacket dp1 = new DatagramPacket(by, 0, by.length, InetAddress.getByName(manIP), 9000);
            ds.send(dp1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        java.awt.EventQueue.invokeLater(new Runnable() {
            public void run() {
                // frame construction elided in the original listing
            }
        });
    }
}

// Entry point: shows the registration window.
public class Main {
    public static void main(String[] args) {
        RegisterFrame rf = new RegisterFrame();
        rf.setVisible(true);   // reconstructed
    }
}


Cloud computing, in its various forms, offers considerable benefits to industry. It does so by providing very complex, scalable computing infrastructures on which the organization can build its enterprise architecture. The organization needs to understand and account for the characteristics of these offerings in its IH policies, processes, personnel and cloud services contracts. Scalability and cloud depth present the IH and legal teams with serious challenges. As core cloud capabilities, scalability and cloud depth deserve early analysis, which will enable the organization to make critical and timely decisions when the enterprise commits itself to cloud integration.

It cannot be overemphasized that, with cloud integration, there is no one size fits all. If an organization makes the mistake of adopting this mindset, the result can be devastating. One SaaS integration will not be the same as another SaaS integration, and the IH concerns for a PaaS integration will not be the same as those for an IaaS integration. An organization must thoroughly examine each cloud integration in its own context.

The organization will want to take a systematic approach to analyzing its IH capabilities and concerns in light of each new cloud integration. The IH team should consult other stakeholders throughout this process to ensure a sufficiently broad perspective on the issues, and to identify opportunities for collaboration and consolidation of tasks. By either utilizing the framework with which the organization started its IH capability or by adopting another well-established framework as a guide, the organization can ensure that it is addressing all critical areas.
