Technology for Network Security


2.0 CHAPTER TWO

2.1 INTRODUCTION

The ever-increasing reliance on information technology brought about by globalisation has created the need for better network security systems.

Computer networks are expanding rapidly to accommodate higher bandwidth, greater storage demands and growing numbers of users. As this demand grows daily, so do the threats associated with it, such as virus attacks, worm attacks, and denial-of-service or distributed denial-of-service attacks. This calls for prompt security measures to address these threats in order to protect data reliability, integrity, availability and other needed network resources across the network.

Generally, network security can be described as a way of protecting the integrity of a network by ensuring that unauthorised access and threats of any form are prevented from reaching valuable information. As network architectures expand, the issue of security becomes more and more complex to handle, keeping network administrators on their toes to guard against possible attacks that occur on a daily basis. Some of these malicious attacks are virus and worm attacks, denial-of-service attacks, IP spoofing, password cracking, Domain Name Server (DNS) poisoning, etc. To combat these threats, many security elements have been designed to tackle attacks on the network, including the firewall, Virtual Private Network (VPN), encryption and decryption, cryptography, Internet Protocol Security (IPSec), Triple Data Encryption Standard (3DES), the Demilitarised Zone (DMZ), Secure Sockets Layer (SSL), etc. This chapter starts by briefly discussing the Internet Protocol (IP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP) and Internet Control Message Protocol (ICMP), then discusses the Open Systems Interconnection (OSI) model and the protocols that operate at each layer of the model, network security elements, followed by the background of firewalls, types and features of firewalls and, lastly, network security tools.

2.2 A BRIEF DESCRIPTION OF TCP, IP, UDP AND ICMP

2.2.1 DEFINITION

Following the tremendous success of the World Wide Web (the Internet), a global communication standard known as the TCP/IP protocol suite was designed with the aim of interconnecting networks across heterogeneous systems (Dunkels 2003; Global Knowledge 2007; Parziale et al 2006). The TCP/IP protocol suite is the core set of rules used for application transfers such as file transfers, e-mail traffic and web-page transfers between hosts across heterogeneous networks (Dunkels 2003; Parziale et al 2006). It is therefore necessary for a network administrator to have a good understanding of TCP/IP when configuring firewalls, as most policies are set to protect the internal network from possible attacks that use the TCP/IP protocols for communication (Noonan and Dobrawsky 2006). Many network attacks are the result of improper configuration and poor implementation of TCP/IP protocols, services and applications. TCP/IP makes use of protocols such as TCP, UDP, IP and ICMP to define the rules of how communication over the network takes place (Noonan and Dobrawsky 2006).

Before these protocols are discussed, this thesis briefly looks into the theoretical Open Systems Interconnection (OSI) model (Simoneau 2006).

2.2.2 THE OSI MODEL

The OSI model is a standardised layered model defined by the International Organization for Standardization (ISO) for network communication. It divides network communication into seven separate layers, each with its own unique functions, providing services to the layer immediately above it while using the services of the layer immediately below it (Parziale et al 2006; Simoneau 2006). The seven layers are the Application, Presentation, Session, Transport, Network, Data Link and Physical layers. The three lower layers (Network, Data Link and Physical) are basically hardware implementations, while the four upper layers (Application, Presentation, Session and Transport) are software implementations.

  • Application Layer

This is the end-user operating interface that supports file transfer, web browsing, electronic mail, etc. This layer allows user interaction with the system.

  • Presentation Layer

This layer is responsible for formatting the data to be sent across the network so that the receiving application can understand the message being sent; in addition, it is responsible for message encryption and decryption for security purposes.

  • Session Layer

This layer is responsible for dialog and session control functions between systems.

  • Transport layer

This layer provides end-to-end communication, which can be reliable or unreliable, between end devices across the network. The two most commonly used protocols in this layer are TCP and UDP.

  • Network Layer

This layer, also known as the logical layer, is responsible for logical addressing and packet delivery services. The protocol used in this layer is IP.

  • Data Link Layer

This layer is responsible for framing of units of information, error checking and physical addressing.

  • Physical Layer

This layer defines transmission medium requirements and connectors and is responsible for the transmission of bits over the physical hardware (Parziale et al 2006; Simoneau 2006).

2.2.3 INTERNET PROTOCOL (IP)

IP is a connectionless protocol designed to deliver data between hosts across the network. IP data delivery is unreliable and therefore depends on upper-layer protocols such as TCP, or on lower-layer protocols such as IEEE 802.2 and IEEE 802.3, for reliable data delivery between hosts on the network (Noonan and Dobrawsky 2006).

2.2.4 TRANSMISSION CONTROL PROTOCOL (TCP)

TCP is a standard, connection-oriented transport protocol that operates at the transport layer of the OSI model and is described by Request for Comments (RFC) 793. TCP solves the unreliability problem of the network-layer protocol (IP) by making sure packets are transmitted reliably and accurately, errors are recovered, and flow control between hosts across the network is efficiently monitored (Abie 2000; Noonan and Dobrawsky 2006; Simoneau 2006). The primary objective of TCP is to create a session between hosts on the network, and this process is carried out by what is called the TCP three-way handshake. When using TCP for data transmission between hosts, the sending host first sends a synchronise (SYN) segment to the receiving host, which is the first step in the handshake. On receiving the SYN segment, the receiving host replies with an acknowledgement (ACK) together with its own SYN segment, and this forms the second part of the handshake. The final step of the handshake is completed by the sending host responding with its own ACK segment to acknowledge acceptance of the SYN/ACK. Once this process is completed, the hosts establish a virtual circuit between themselves through which the data is transferred (Noonan and Dobrawsky 2006).
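
As an illustration of this session setup, the short Python sketch below opens a TCP connection to a remote host; the operating system carries out the SYN, SYN/ACK, ACK exchange described above when the connection call is made. The host name and port used here are placeholders chosen purely for illustration.

    import socket

    # A minimal sketch: establishing a TCP session. The three-way handshake
    # (SYN -> SYN/ACK -> ACK) is carried out by the operating system while
    # create_connection() runs. "example.com" and port 80 are placeholders.
    with socket.create_connection(("example.com", 80), timeout=5) as conn:
        # At this point the handshake has completed and a virtual circuit
        # exists between the two hosts, so application data can flow.
        conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = conn.recv(1024)
        print(reply.decode(errors="replace"))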

As useful as the TCP three-way handshake is, it also has its shortcomings, the most common being the SYN flood attack. This form of attack occurs when the destination host, such as a server, is flooded with SYN session requests by a malicious source host that never completes the handshake with the final ACK. As a result, the destination host's connection buffer fills with half-open sessions until it can no longer accept requests from legitimate hosts and has no choice but to drop them, causing a denial-of-service (DoS) condition (Noonan and Dobrawsky 2006).

2.2.5 USER DATAGRAM PROTOCOL (UDP)

UDP, unlike TCP, is a standard connectionless transport protocol that operates at the transport layer of the OSI model and is described by Request for Comments (RFC) 768 (Noonan and Dobrawsky 2006; Simoneau 2006). When UDP is used to transfer packets between hosts, session initiation, retransmission of lost or damaged packets, and acknowledgements are omitted; therefore, 100 percent packet delivery is not guaranteed (Sundararajan et al 2006; Postel 1980). UDP is designed with low overhead, as it does not involve initiating a session between hosts before data transmission starts, and it is best suited to small data transmissions (Noonan and Dobrawsky 2006).
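
The contrast with TCP can be seen in the short Python sketch below, which sends a single datagram with no session setup, acknowledgement or retransmission; the destination address and port are placeholder values used only for illustration.

    import socket

    # A minimal sketch of connectionless UDP transfer: no handshake, no
    # acknowledgement, and no retransmission if the datagram is lost.
    # The address 192.0.2.10 and port 5005 are illustrative placeholders.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"status-update", ("192.0.2.10", 5005))  # fire and forget
    sock.close()

    # A receiver would simply bind to the port and read whatever arrives:
    #   sock.bind(("0.0.0.0", 5005))
    #   data, addr = sock.recvfrom(1024)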

2.2.6 INTERNET CONTROL MESSAGE PROTOCOL (ICMP).

ICMP is primarily designed to identify and report routing errors, delivery failures and delays on the network. It can only be used to report errors and cannot correct the errors it identifies; it depends on routing protocols or reliable protocols such as TCP to handle the detected errors (Noonan and Dobrawsky 2006; Dunkels 2003). ICMP provides the echo mechanism used by the ping command, which is used to check whether a host is responding to network traffic (Noonan and Dobrawsky 2006; Dunkels 2003).
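
A common way to exercise this echo mechanism from a script is simply to invoke the system ping command, as in the hedged Python sketch below; sending raw ICMP packets directly would normally require administrative privileges, and the flags shown are the Linux-style ones.

    import subprocess

    def host_is_reachable(host: str) -> bool:
        """Send one ICMP echo request via the system ping command and
        report whether an echo reply was received (Linux-style flags)."""
        # "-c 1" sends a single echo request; "-W 2" waits two seconds for a reply.
        result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                                capture_output=True)
        return result.returncode == 0

    # Example: check whether a placeholder gateway address answers echo requests.
    print(host_is_reachable("192.0.2.1"))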

2.3 OTHER NETWORK SECURITY ELEMENTS.

2.3.1 VIRTUAL PRIVATE NETWORK (VPN)

VPN is a network security element that makes use of the public network infrastructure to securely maintain the confidentiality of information transferred between hosts over the public network (Bou 2007). VPN provides these security features by using encryption and tunnelling techniques to protect such information, and it can be configured to support at least three models:

  • Remote-access connection.
  • Site-to-site (branch offices to the headquarters)
  • Local area network internetworking (Extranet connection of companies with their business partners) (Bou 2007).

2.3.2 VPN TECHNOLOGY

VPN makes use of many standard protocols to implement data authentication (identification of trusted parties) and encryption (scrambling of data) when using the public network to transfer data. These protocols include:

  • Point-to-Point Tunneling Protocol (PPTP) [RFC 2637]
  • Secure Sockets Layer Protocol (SSL) [RFC 2246]
  • Internet Protocol Security (IPSec) [RFC 2401]
  • Layer 2 Tunneling Protocol (L2TP) [RFC 2661]

2.3.2.1 POINT-TO-POINT TUNNELING PROTOCOL [PPTP]

The design of PPTP provides a secure means of transferring data over the public infrastructure, with authentication and encryption support between hosts on the network. The protocol operates at the data link layer of the OSI model and relies on user identification (ID) and password authentication for its security. PPTP does not eliminate the Point-to-Point Protocol (PPP); rather, it describes a better way of tunnelling PPP traffic by using Generic Routing Encapsulation (GRE) (Bou 2007; Microsoft 1999; Schneier and Mudge 1998).

2.3.2.2 LAYER 2 TUNNELING PROTOCOL [L2TP]

L2TP is a connection-oriented protocol standard defined by RFC 2661, which merged the best features of PPTP and the Layer 2 Forwarding (L2F) protocol to create the new standard (Bou 2007; Townsley et al 1999). Just like PPTP, L2TP operates at layer 2 of the OSI model. Tunnelling in L2TP is achieved through a series of encapsulations by protocols at different layers, for example UDP, IPSec, IP and data-link layer protocols, while data encryption for the tunnel is provided by IPSec (Bou 2007; Townsley et al 1999).

2.3.2.3 INTERNET PROTOCOL SECURITY (IPSEC) [RFC 2401]

IPSec is a standard protocol defined by RFC 2401 which is designed to protect the payload of an IP packet and the paths between hosts, between security gateways (routers and firewalls), or between a security gateway and a host over an unprotected network (Bou 2007; Kent and Atkinson 1998). IPSec operates at the network layer of the OSI model. Some of the security services it provides are authentication, connectionless integrity, encryption, access control, data-origin authentication and rejection of replayed packets (Kent and Atkinson 1998).

2.3.2.4 SECURE SOCKETS LAYER (SSL) [RFC 2246]

SSL is a standard protocol defined by RFC 2246 which is designed to provide a secure communication tunnel between hosts by encrypting their communication over the network, ensuring packet confidentiality and integrity and proper host authentication, in order to eliminate eavesdropping attacks on the network (Homin et al 2007; Oppliger et al 2008). SSL makes use of security elements such as digital certificates and cryptography to enforce security measures over the network. SSL is a transport-layer security protocol that runs on top of TCP/IP, which manages the transport and routing of packets across the network, and it is also deployed at the application layer of the OSI model to ensure host authentication (Homin et al 2007; Oppliger et al 2008; Dierks and Allen 1999).
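
In practice, the encryption and host authentication described above can be exercised with Python's standard ssl module, as in the sketch below, which wraps a TCP connection in TLS and verifies the server's certificate against the system's trusted certificate authorities; the host name is a placeholder chosen for illustration.

    import socket
    import ssl

    # A minimal sketch: wrapping a TCP connection in TLS so that application
    # data is encrypted and the server is authenticated by its certificate.
    # "example.com" is a placeholder host name.
    context = ssl.create_default_context()  # loads the trusted CA certificates

    with socket.create_connection(("example.com", 443), timeout=5) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
            # The TLS handshake has completed: the certificate was validated
            # and the host name checked, so traffic on tls_sock is encrypted.
            print(tls_sock.version())  # e.g. "TLSv1.3"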

2.4 FIREWALL BACKGROUND

The concept of a network firewall is to prevent unauthorised packets from gaining entry into a network by filtering all packets coming into that network. The word firewall was not originally a computer security term; it was initially used to describe a wall, of brick or mortar, built to prevent fire from spreading from one part of a building to another, or to slow the spread of fire within a building long enough for remedial action to be taken (Komar et al 2003).

2.4.1 BRIEF HISTORY OF FIREWALL

The use of firewalls in computing dates back to the late 1980s, although the first firewalls, packet filter firewalls, were produced by Cisco's Internetwork Operating System (IOS) division around 1985 (Cisco System 2004). In 1988, Jeff Mogul of DEC (Digital Equipment Corporation) published the first paper on firewalls. Between 1989 and 1990, two researchers at AT&T Bell Laboratories, Howard Trickey and Dave Presotto, initiated the second generation of firewall technology with their work on circuit relays, known as circuit-level firewalls. The same two scientists also implemented the first working model of the third-generation firewall design, the application-layer firewall, although no documents explaining their work were published and no product was released to support it. Around 1990-1991, several papers on third-generation firewalls were published by researchers; among them, Marcus Ranum's work received the most attention in 1991, taking the form of bastion hosts running proxy services. Ranum's work quickly evolved into the first commercial product, Digital Equipment Corporation's SEAL (Cisco System 2004).

Around the same time, work started on the fourth-generation firewall, known as dynamic packet filtering, but it was not operational until 1994, when Check Point Software released a complete working model of the fourth-generation firewall architecture.

In 1996, plans began for the fifth-generation firewall design, the kernel proxy architecture, which became a reality in 1997 when Cisco released the Cisco Centri Firewall, the first firewall built on this architecture to be produced for commercial use (Cisco System 2004).

Since then, many vendors have designed and implemented various forms of firewall, in both hardware and software, and to date research is ongoing to improve firewall architectures to keep up with the ever-increasing challenges of network security.

2.5 DEFINITION

According to the British Computer Society (2008), firewalls are defence mechanisms that can be implemented in either hardware or software and serve to prevent unauthorised access to computers and networks. Similarly, Subrata et al (2006) defined a firewall as a combination of hardware and software used to implement a security policy governing the flow of network traffic between two or more networks.

The concept of a firewall in computer systems security is similar to the firewall built within a building, but they differ in their functions. While the latter is designed for the single task of preventing fire from spreading within a building, a computer system firewall is designed to prevent more than one kind of threat (Komar et al 2003). These include the following:

  • Denial Of Service Attacks (DoS)
  • Virus attacks
  • Worm attack.
  • Hacking attacks etc

2.5.1 DENIAL OF SERVICE ATTACKS (DOS)

“Countering DoS attacks on web servers has become a very challenging problem” (Srivatsa et al 2006). This is an attack aimed at preventing legitimate packets from accessing network resources. The attacker achieves this by running a program that floods the network, making network resources such as main memory, network bandwidth and hard disk space unavailable to legitimate packets. The SYN attack is a good example of a DoS attack, but it can be prevented by implementing good firewall policies for the secured network. A detailed firewall policy (iptables) is presented in chapter three of this thesis.

2.5.2 VIRUS AND WORM ATTACKS

Virus and worm attacks are a major security problem which can become pandemic in the twinkling of an eye, resulting in potentially huge losses of information or system damage (Ford et al 2005; Cisco System 2004). These two forms of attack can be programs designed to open up systems to allow information theft, programs that replicate themselves once they get into a system until they crash it, or programs designed to generate traffic that floods the network, leading to DoS attacks. Therefore, security tools that can proactively detect possible attacks are required to secure the network. One such tool is a firewall with a good security policy configuration (Cisco System 2004).

Generally speaking, any kind of firewall implementation will basically perform the following tasks:

  • Manage and control network traffic.
  • Authenticate access
  • Act as an intermediary
  • Make internal resources available
  • Record and report events

2.5.3 MANAGE AND CONTROL NETWORK TRAFFIC.

The first task undertaken by a firewall to secure a computer network is to check all traffic coming into and leaving the network. This is achieved by intercepting and analysing each packet's source IP address, source port, destination IP address, destination port, IP protocol and other packet header information, in order to decide whether to accept or reject the packet. This action is called packet filtering, and it depends on the firewall configuration. The firewall can also make use of the connections established between TCP/IP hosts, which identify the hosts and define the way they will communicate with each other, to decide which connections should be permitted or discarded. This is achieved by maintaining a state table used to check the state of all packets passing through the firewall, and it is called stateful inspection (Noonan and Dobrawsky 2006).
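
A highly simplified illustration of this filtering decision is sketched below in Python. Real firewalls such as iptables evaluate comparable header fields against an ordered rule set, but the rule format and field names used here are invented purely for illustration.

    # A simplified sketch of packet filtering: each rule names the header
    # fields to match and the action to take; the first matching rule wins.
    # The rule format below is illustrative, not a real firewall syntax.
    RULES = [
        {"proto": "tcp", "dst_port": 80,   "action": "accept"},  # allow web traffic
        {"proto": "tcp", "dst_port": 22,   "action": "accept"},  # allow SSH
        {"proto": "any", "dst_port": None, "action": "drop"},    # default deny
    ]

    def filter_packet(packet: dict) -> str:
        """Return 'accept' or 'drop' for a packet described by its header fields."""
        for rule in RULES:
            proto_ok = rule["proto"] in ("any", packet["proto"])
            port_ok = rule["dst_port"] in (None, packet["dst_port"])
            if proto_ok and port_ok:
                return rule["action"]
        return "drop"  # fail closed if no rule matches

    print(filter_packet({"proto": "tcp", "dst_port": 80}))  # accept
    print(filter_packet({"proto": "udp", "dst_port": 53}))  # drop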

2.5.4 AUTHENTICATE ACCESS

Even when a firewall inspects and analyses a packet's source IP address, source port, destination IP address, destination port, IP protocol and header information, and filters it according to the defined security procedure, this does not guarantee that the communication between the source host and the destination host is authorised, because attackers can spoof IP addresses and ports, defeating inspection and analysis based on IP and port screening. To tackle this pitfall, firewalls implement authentication rules using a number of methods, such as usernames and passwords (xauth), certificates and public keys, and pre-shared keys (PSKs). With the xauth authentication method, the firewall requests the username and password of the source host that is trying to initiate a connection with a host on the protected network before it allows a connection between the protected network and the source host to be established. Once the connection has been confirmed and authorised by the defined security procedure, the source host does not need to authenticate itself again to make subsequent connections (Noonan and Dobrawsky 2006).

The second method uses certificates and public keys. Its advantage over xauth is that verification can take place without the source host having to supply a username and password. Implementing certificates and public keys requires that the hosts (on the protected network and the source host) are properly configured with certificates, and that both the protected network and the source host use a properly configured public key infrastructure. This security method is best suited to large network designs (Noonan and Dobrawsky 2006).

Another good way of dealing with authentication in firewalls is to use pre-shared keys (PSKs). PSKs are easy to implement compared with certificates and public keys; authentication still occurs without the source host's intervention, but with the additional feature that the host is provided with a predetermined key that is used for the verification procedure (Noonan and Dobrawsky 2006).
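
One common way a pre-shared key is used for such verification is a challenge-response exchange, sketched below in Python. This is a generic illustration of the idea rather than the exact mechanism of any particular firewall product, and the key value shown is a placeholder.

    import hashlib
    import hmac
    import os

    # A generic sketch of pre-shared-key verification: both sides hold the same
    # secret, the firewall issues a random challenge, and the source host proves
    # knowledge of the key by returning an HMAC over that challenge.
    PRE_SHARED_KEY = b"replace-with-a-strong-secret"  # placeholder value

    def issue_challenge() -> bytes:
        return os.urandom(16)  # random nonce generated by the firewall

    def host_response(challenge: bytes) -> bytes:
        return hmac.new(PRE_SHARED_KEY, challenge, hashlib.sha256).digest()

    def firewall_verifies(challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(PRE_SHARED_KEY, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)  # constant-time comparison

    challenge = issue_challenge()
    print(firewall_verifies(challenge, host_response(challenge)))  # True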

2.5.5 ACT AS AN INTERMEDIARY

When firewalls are configured to serve as an intermediary between a protected host and an external host, they function as an application proxy. The firewall in this setup is configured to impersonate the protected host, so that all packets destined for the protected host from the external host are delivered to the firewall, which appears to the external host to be the protected host itself. Once the firewall receives the packets, it inspects them to determine whether they are valid (e.g. genuine HTTP packets) before forwarding them to the protected host. This design completely blocks direct communication between the hosts.

2.5.6 RECORD AND REPORT EVENTS

While it is good practice to put strong security policies in place to secure a network, it is equally important to record firewall events. Using firewalls to record and report events helps in investigating what kind of attack took place in situations where the firewall was unable to stop malicious packets that violated the access control policy of the protected network. Recording these events gives the network administrator a clear understanding of the attack and, at the same time, allows the recorded events to be used to troubleshoot the problem that has taken place. To record these events, network administrators use different methods, but syslog or a proprietary logging format is most commonly used for firewalls (a short logging sketch is given after the list below). Some malicious events, however, need to be reported quickly so that immediate action can be taken before serious damage is done to the protected network. Firewalls therefore also need an alarm mechanism, in addition to syslog or proprietary logging, for whenever the access control policy of the protected network is violated. Some types of alarm supported by firewalls include console notification, Simple Network Management Protocol (SNMP) notification, paging notification and e-mail notification (Noonan and Dobrawsky 2006).

  • Console notification is a warning message presented on the firewall console. The problem with this method of alarm is that the console needs to be monitored by the network administrator at all times so that the necessary action can be taken when an alarm is generated.
  • Simple Network Management Protocol (SNMP) notification is implemented to create traps which are transferred to the network management system (NMS) monitoring the firewall.
  • Paging notification is set up on the firewall to deliver a page to the network administrator whenever the firewall encounters an event. The message can be alphanumeric or numeric, depending on how the firewall is set up.
  • E-mail notification is similar to paging notification, but in this case the firewall sends an e-mail to the appropriate address instead.
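
For recording such events from host-based tooling or scripts, the standard syslog facility can be reached directly. The Python sketch below writes firewall-style event records to the local syslog daemon; the socket path is the usual Linux location and the message contents are invented for illustration.

    import logging
    import logging.handlers

    # A minimal sketch: sending firewall-style event records to the local
    # syslog daemon. "/dev/log" is the usual Linux syslog socket; the
    # messages below are invented examples.
    logger = logging.getLogger("firewall-events")
    logger.setLevel(logging.INFO)
    logger.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))

    logger.info("ACCEPT tcp 198.51.100.7:53120 -> 192.0.2.10:80")
    logger.warning("DROP tcp 198.51.100.7:41022 -> 192.0.2.10:22 (policy violation)")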

2.6 TYPES OF FIREWALLS

Going by the firewall definition, firewalls are expected to perform some key functions, such as application proxying, network address translation and packet filtering.

2.6.1 APPLICATION PROXY

This is also known as an application gateway, and it acts as a connection agent between the protected network and the external network. Basically, the application proxy is a host on the protected network that is set up as a proxy server. As the name implies, the application proxy functions at the application layer of the Open Systems Interconnection (OSI) model and ensures that all application requests from the secured network are communicated to the external network through the proxy server, and that no packets pass from the external network to the secured network until the proxy has checked and confirmed the inbound packets. This type of firewall supports different protocols such as Hypertext Transfer Protocol (HTTP), File Transfer Protocol (FTP) and Simple Mail Transfer Protocol (SMTP) (Noonan and Dobrawsky 2006; NetContinuum 2006).

2.6.2 NETWORK ADDRESS TRANSLATION (NAT)

NAT alters the IP addresses in host packets by hiding the genuine IP addresses of hosts on the secured network and dynamically replacing them with different IP addresses (Cisco System 2008; Walberg 2007). When request packets are sent from the secured host through the gateway to an external host, the source address is modified to a different IP address by NAT. When the reply packets arrive at the gateway, NAT replaces the modified address with the genuine host address before forwarding the packets to the host (Walberg 2007), as pictured in the sketch after the list below. The role played by NAT in a secured network makes it difficult for an unauthorised party to know:

  • The number of hosts available in the protected network
  • The topology of the network
  • The operating systems the host is running
  • The type of host machine (Cisco System 2008).
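
The address rewriting described above can be pictured with the small Python sketch below, which keeps a translation table mapping internal hosts to the gateway's public address and a fresh port. The addresses come from documentation ranges and the logic is a deliberate simplification of real NAT implementations.

    import itertools

    # A simplified sketch of source NAT: outbound packets have their private
    # source address replaced by the gateway's public address and a new port,
    # and the mapping is remembered so that replies can be translated back.
    PUBLIC_IP = "203.0.113.1"            # placeholder public address
    _next_port = itertools.count(40000)  # ports handed out for translations
    _table = {}                          # public_port -> (private_ip, private_port)

    def translate_outbound(private_ip: str, private_port: int):
        public_port = next(_next_port)
        _table[public_port] = (private_ip, private_port)
        return PUBLIC_IP, public_port    # what the external host sees

    def translate_inbound(public_port: int):
        return _table[public_port]       # restore the genuine host address

    print(translate_outbound("10.0.0.5", 51000))  # ('203.0.113.1', 40000)
    print(translate_inbound(40000))               # ('10.0.0.5', 51000)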

2.6.3 PACKET FILTERING.

“Firewalls and IPSec gateways have become major components in the current high speed Internet infrastructure to filter out undesired traffic and protect the integrity and confidentiality of critical traffic” (Hamed and Al-Shaer 2006). Packet filtering is based on the security rules laid down for a particular network or system. Filtering traffic over the network is a big task that requires a comprehensive understanding of the network on which it is set up, and the defined policy must be kept up to date in order to handle possible network attacks (Hamed and Al-Shaer 2006).

2.6.4 INTRUSION DETECTION SYSTEMS.

Network penetration attacks are on the increase, with valuable information being stolen or damaged by attackers. Many security products have been developed to combat these attacks; two of them are Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS).

IDS are software tools designed to monitor and analyse all activities (network traffic) on the network for any suspicious threats that may violate the defined network security policies (Scarfone and Mell 2007; Vignam et al 2003). There are a variety of methods IDS use to detect threats on the network; two of them are anomaly-based IDS and signature-based IDS.

2.6.4.1 ANOMALY BASED IDS

An anomaly-based IDS is set up to monitor and compare network events against a profile of what is defined as normal network activity, in order to detect any deviation from the defined normal events. Examples of monitored events include the amount of bandwidth used and the types of protocols seen; once the IDS identifies a deviation in any of these events, it notifies the network administrator, who then takes the necessary action to stop the intended attack (Scarfone and Mell 2007).

2.6.4.2 SIGNATURE BASED IDS

A signature-based IDS is designed to monitor and compare packets on the network against a signature database of known malicious attacks or threats. This type of IDS is efficient at identifying already known threats but ineffective at identifying new threats not yet defined in the signature database, thereby leaving the network open to such attacks (Scarfone and Mell 2007).
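
The matching step itself can be as simple as scanning payloads for known byte patterns, as in the Python sketch below; the signatures shown are invented examples, not entries from any real signature database.

    # A simplified sketch of signature-based detection: each signature is a byte
    # pattern known to appear in a specific attack, and every packet payload is
    # scanned against the list. The patterns below are invented examples.
    SIGNATURES = {
        b"/etc/passwd": "path traversal attempt",
        b"' OR '1'='1": "SQL injection attempt",
    }

    def inspect_payload(payload: bytes) -> list:
        """Return the names of all signatures found in the payload."""
        return [name for pattern, name in SIGNATURES.items() if pattern in payload]

    print(inspect_payload(b"GET /../../etc/passwd HTTP/1.1"))  # ['path traversal attempt']
    print(inspect_payload(b"GET /index.html HTTP/1.1"))        # []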

2.6.5 INTRUSION PREVENTION SYSTEMS (IPS).

IPS are proactive security products, implemented in software or hardware, used to identify malicious packets and prevent them from gaining entry into the network (Ierace et al 2005; Botwicz et al 2006). An IPS is another form of firewall, designed to detect irregularities in normal network traffic and to stop possible network attacks such as denial-of-service attacks. IPS are capable of dropping malicious packets and disconnecting any connection suspected of being illegal before such traffic gets to the protected host. Just like a typical firewall, an IPS makes use of rules defined in the system setup to determine the action to take on any traffic, which may be to allow or block it. IPS make use of stateful packet analysis to protect the network, and they are similarly capable of performing signature matching, application protocol validation and other techniques for detecting attacks on the network (Ierace et al 2005). As good as IPS are, they also have their downsides. One is the problem of false positives and false negatives. A false positive is a situation where legitimate traffic is identified as malicious, resulting in the IPS blocking that traffic on the network. A false negative, on the other hand, is when malicious traffic is identified by the IPS as legitimate, thereby allowing it to pass through the IPS to the protected network (Ierace et al 2005).

2.7 SOFTWARE AND HARDWARE FIREWALLS

2.7.1 SOFTWARE FIREWALLS

Software-based firewalls are computers with installed software for filtering packets (Permpootanalarp and Rujimethabhas 2001). These are programs set up on the operating systems of either personal computers or network servers (web servers and e-mail servers). Once the software is installed and proper security policies are defined, the system (personal computer or server) assumes the role of a firewall. Software firewalls act as a second line of defence after hardware firewalls in situations where both are used for network security. Software firewalls can be installed on different operating systems such as Windows, Mac OS, Novell NetWare, the Linux kernel and the UNIX kernel. The function of these firewalls is to filter distorted network traffic. There are several software firewalls, including Online Armor Firewall, McAfee Personal Firewall, ZoneAlarm, Norton Personal Firewall, BlackICE Defender, Sygate Personal Firewall, Panda Firewall and the DoorStop X Firewall (Lugo and Parker 2005).

When designing a software firewall, two key things are considered: per-packet filtering and per-process filtering. The per-packet filter is designed to look for distorted packets, detect port scans and check whether packets are accepted into the protocol stack. In the same vein, the per-process filter is designed to check whether a process is allowed to begin a connection to the secured network or not (Lugo and Parker 2005). It should be noted that there are different implementations of firewalls: some are built into the operating system, while others are add-ons. Examples of built-in firewalls are the Windows-based and Linux-based firewalls.

2.7.2 WINDOWS OPERATING SYSTEM BASED FIREWALL.

In operating system design, security is one important aspect that is greatly considered. This is a challenge the software giant Microsoft has always made sure to address in its products. In the software industry, Mi
